Hard Light Productions Forums

Off-Topic Discussion => General Discussion => Topic started by: GhylTarvoke on August 08, 2015, 03:54:26 pm

Title: The "hard problem of consciousness"
Post by: GhylTarvoke on August 08, 2015, 03:54:26 pm
This will be a lengthy post (and possibly too weighty for GenDisc). Please feel free to engage with any part of it; I want it to be torn apart. Furthermore, I think that discussions on philosophy forums usually descend into gibberish and tiresome terminology, whereas discussions on this forum rarely do.  ;)  You've probably seen many of the ideas here, but I'd love to hear your thoughts on the subject. I've been obsessed with it during the past few days.

I won't pretend to be unbiased. I'm a fan of David Chalmers (who thinks consciousness is something fundamental, like mass and gravity), I'm a dualist (I believe in the magic essence of me-ness), and I believe that a teleporter would kill "me" (even though day-to-day cell replacement seemingly doesn't).


I. Some Online Discussions
------------------------------


First, previous HLP discussions:
Let me............... Tel-e-port you! (http://www.hard-light.net/forums/index.php?topic=66326)  Starts in the middle of page 6. This one is particularly good.
Peter Watts, 'Blindsight': finally, aliens without the bull**** (http://www.hard-light.net/forums/index.php?topic=75415)  Entire thread.
On religion, atheism and changing thread titles.... (http://www.hard-light.net/forums/index.php?topic=78955)  Also starts in the middle of page 6.
Supernatural Elements? (spoilers unmarked) (http://www.hard-light.net/forums/index.php?topic=82778)  Starts at the top of page 3.
Well it looks like vegans will just have to starve (http://www.hard-light.net/forums/index.php?topic=90228)  Entire thread.

Second, two discussions on philosophy forums that I thought were productive, though quite long:
Is there a Hard Problem of Consciousness? (http://forums.philosophyforums.com/threads/is-there-a-hard-problem-of-consciousness-67768.html)
The 'Explanatory Gap' (http://philpapers.org/bbs/thread.pl?tId=137)

Third, some writings on consciousness:
Consciousness Defined (http://www.nytimes.com/books/first/m/mcginn-flame.html)  An extract from Colin McGinn's "The Mysterious Flame".
What is it like to be a bat? (http://organizations.utep.edu/portals/1475/nagel_bat.pdf)  Thomas Nagel's "what it is like" description of qualia.
Epiphenomenal Qualia (http://philosophyfaculty.ucsd.edu/faculty/rarneson/Courses/FrankJacksonphil1.pdf)  Frank Jackson's knowledge argument.
Facing Up to the Problem of Consciousness (http://consc.net/papers/facing.html)  David Chalmers coins the phrase "hard problem of consciousness".
Moving Forward on the Problem of Consciousness (http://consc.net/papers/moving.html)  David Chalmers responds to criticism of the previous paper.
Blindsight (http://www.rifters.com/real/Blindsight.htm)  The novel under discussion in one of the above HLP threads.

Fourth, an enjoyable comic regarding teleportation:
The Machine (http://existentialcomics.com/comic/1)


II. Consciousness Defined?
-----------------------------


"Consciousness" is an overloaded word. Its philosophical meaning is not wakefulness, nor is it awareness in the pedestrian sense; some (sort of) synonyms are "the hard problem of consciousness", "p-consciousness", "qualia" (taken as a whole), and "subjective experience". It's also different from self-awareness, which may take place in the absence of consciousness, or vice versa. With that out of the way, what is it?

I think most people intuitively know what is meant by "consciousness". The idea is arguably present in popular culture (see "The Matrix", "Inception", some episodes of "The Twilight Zone", and the upcoming game "SOMA"). Nevertheless, it's notoriously difficult to define, due to its very nature. Consciousness is so elusive, in fact, that one can deny its existence - for which there may be no objective evidence - or label that particular use of the word as "not even wrong", and end the conversation immediately.

Long story short: we can only approach the concept indirectly. Consciousness Defined (http://www.nytimes.com/books/first/m/mcginn-flame.html) (a misnomer) is a good place to start, but here are some "intuition pumps". Needless to say, none of them are mine.

----------------------------------------------------------

The closest thing to a concise definition is probably by Thomas Nagel: "what it is like to be something". "Subjective experience" also seems to have the right flavor. (The word "soul" is sometimes used, but that's tangled with religion.) Because what we see influences us so powerfully, consciousness is sometimes intuitively described as a movie playing in your head, or a "homunculus" looking out through your eyes. But vision - not the mechanical process of light entering your eyes and so forth, but the experience of vision - is only one part of consciousness. Other parts are hearing, bodily awareness, and everything else that goes on "in your head".

Consciousness is claimed to be a completely private, subjective phenomenon, and inaccessible from the outside. According to Nagel, we could know everything there is to know about a bat from the outside, and still not know what it's like to be a bat. (Nagel chose a bat because it has a sense we do not - namely, echolocation.) Jackson's "knowledge argument" is that we could know every physical aspect of the color red (which objects are red, the behavior of light, how it's processed by the eye, etc.), and still not know what red is "like" until we've seen it ourselves. The experience of red is something that can't be communicated.

Many intuition pumps involve hypothetical beings called "p-zombies", or just "zombies". A zombie is something that looks like a human and acts like a human - in fact, is an exact replica of a human - but "lacks an inner life", "is all dark inside", and "has no-one looking out". It does all the same things a human does and processes information in all the same ways, but has no corresponding subjective experience, any more than (say) a rock. If such a being is possible, the argument goes, there must be "something more" to people than their physical makeup.

Another property of consciousness is its incorrigibility. In some sense, you cannot be mistaken in the belief that you are conscious. To deny that you are conscious is absurd, because it's the only thing you can be certain of (setting aside the possibly related, but distinct fact of your own existence). It is the brute fact, the only possible starting point; everything else, including the existence of an external world, is speculation. To assert that consciousness arises in the external world is to have things backwards.

(Tiresome note: at this level of abstraction, one could object to my use of the word "you", because "you" is a concept that may only exist as part of "your" consciousness. But objections like this are pedantic and stifle discussion. Similarly, metaphysical solipsism, the very strong assertion that "only I exist", is a dead end in every sense of the phrase.)

Lastly, a pointed but thought-provoking post from "I am Charlie" in Is there a Hard Problem of Consciousness? (http://forums.philosophyforums.com/threads/is-there-a-hard-problem-of-consciousness-67768.html):

Quote
There is a serious linguistic problem with the discussion you wish to have. There are those that understand what Nagel is alluding to with his description of conscious experience as the "what-it-is-like-to-be-me", and those that are blind to it, and members of these two groups have no common ground that would permit such a debate. Members of the former group have had the experience of being perplexed after a kind of reversal of attention back upon itself, leading them to 'notice' something that has not been noticed by members of the latter group. This perplexity has an analogy with the principal question of metaphysics -- "why is there anything rather than nothing at all?" -- and  in this sense: That question normally arises in respect of what we might want to refer to as the "objective world", whereas for those members of the latter group the question arises in respect of what we might want to refer to as the "subjective world". Now, language has evolved in the objective world and has utility therein, but this idea of p-consciousness (of the synchronic entirety of its constituents) has no utility in the objective world, and so language can gain no real traction upon it. Consequently members of the latter group, who are blind to the issue, are also blind to their error in any claims they make about it. There can be no common ground between these two groups, one claiming that the other is seeing something that isn't there, the other claiming that the first is not seeing something that is there.


III. The Hard Problem
------------------------


As I mentioned, if you deny the phenomenon entirely (Chalmers calls this the type-A materialist position), there can be no further discussion. If, on the other hand, you think there's something extra that needs explaining, you run into the "hard problem": why do we have subjective experience?

Chalmers distinguishes this from the "easy problems" of explaining (for example) the ability to react to stimuli, the reportability of mental states, and the difference between wakefulness and sleep. These problems are easy in the sense that we know what kind of solution is required: to explain the performance of a function, we need only specify a lower-level mechanism for the function. This is how we usually answer questions - with reductive arguments that explain complex processes in terms of simpler ones. The hard problem is different because even after we explain the functions, the question still remains: why are they accompanied by subjective experience?

(The above paragraph is compressed from Facing Up to the Problem of Consciousness (http://consc.net/papers/facing.html).)

Neuroscience, he goes on, is well-suited to explaining the performance of functions, but seems to have no handle on the hard problem, which requires a different kind of explanation. Something extra is needed.

Quote
It is tempting to note that all sorts of puzzling phenomena have eventually turned out to be explainable in physical terms. But each of these were problems about the observable behavior of physical objects, coming down to problems in the explanation of structures and functions. Because of this, these phenomena have always been the kind of thing that a physical account might explain, even if at some points there have been good reasons to suspect that no such explanation would be forthcoming. The tempting induction from these cases fails in the case of consciousness, which is not a problem about physical structures and functions. The problem of consciousness is puzzling in an entirely different way. An analysis of the problem shows us that conscious experience is just not the kind of thing that a wholly reductive account could succeed in explaining.

As "I am Charlie" notes, the hard problem is similar to the question, "why is there something rather than nothing?" It's impossible even to imagine what kind of answer would be satisfactory. In addition, subjective experiences appear to be inaccessible from the outside, so that there is no objective way to study them or even produce evidence that they exist. Chalmers concludes that consciousness must be taken as fundamental.


IV. A Possible Solution
-------------------------


This is the best part. From Moving Forward on the Problem of Consciousness (http://consc.net/papers/moving.html):

Quote
Here we can exploit an idea that was set out by Bertrand Russell (1926), and which has been developed in recent years by Grover Maxwell (1978) and Michael Lockwood (1989). This is the idea that physics characterizes its basic entities only extrinsically, in terms of their causes and effects, and leaves their intrinsic nature unspecified. For everything that physics tells us about a particle, for example, it might as well just be a bundle of causal dispositions; we know nothing of the entity that carries those dispositions. The same goes for fundamental properties, such as mass and charge: ultimately, these are complex dispositional properties (to have mass is to resist acceleration in a certain way, and so on). But whenever one has a causal disposition, one can ask about the categorical basis of that disposition: that is, what is the entity that is doing the causing?

One might try to resist this question by saying that the world contains only dispositions. But this leads to a very odd view of the world indeed, with a vast amount of causation and no entities for all this causation to relate! It seems to make the fundamental properties and particles into empty placeholders, in the same way as the psychon above, and thus seems to free the world of any substance at all. It is easy to overlook this problem in the way we think about physics from day to day, given all the rich details of the mathematical structure that physical theory provides; but as Stephen Hawking (1988) has noted, physical theory says nothing about what puts the "fire" into the equations and grounds the reality that these structures describe. The idea of a world of "pure structure" or of "pure causation" has a certain attraction, but it is not at all clear that it is coherent.

So we have two questions: (1) what are the intrinsic properties underlying physical reality?; and (2) where do the intrinsic properties of experience fit into the natural order? Russell's insight, developed by Maxwell and Lockwood, is that these two questions fit with each other remarkably well. Perhaps the intrinsic properties underlying physical dispositions are themselves experiential properties, or perhaps they are some sort of proto-experiential properties that together constitute conscious experience. This way, we locate experience inside the causal network that physics describes, rather than outside it as a dangler; and we locate it in a role that one might argue urgently needed to be filled. And importantly, we do this without violating the causal closure of the physical. The causal network itself has the same shape as ever; we have just colored in its nodes.

All I can say is that I find this very compelling.

----------------------------------------------------------

As I said, these ideas are not mine, and doubtless many of you are familiar with them. I just wanted to package them and see what you think.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on August 08, 2015, 04:15:15 pm
Everything is explained by physics! Deflationary monist compatibilism is the answer! Inventing magical thoughts to explain unnecessary intuitions is just an onanistic way to hide from the meat machine truth!
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on August 08, 2015, 04:31:23 pm
Well, Chalmers' point is that consciousness eludes the scientific method, but need not interfere with physics (e.g. by violating causal closure). Physics and consciousness could be complementary - one representing interactions, and the other representing the things that are interacting.

By the way, I was hoping I would snag you, Battuta!  :)
Title: Re: The "hard problem of consciousness"
Post by: zookeeper on August 08, 2015, 05:00:05 pm
Most philosophical questions I find pointless and uninteresting. A few questions I find interesting, and I feel as if I can actually make progress thinking about them. And then there's one which I find interesting but impossible to even begin to touch in any sort of coherent manner. Guess which one that is. :(
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on August 08, 2015, 05:13:23 pm
I'm also inclined towards monism and compatibilism. However, my mind is open in relation to two aspects. First, how science itself was designed with an inherent subject-object divide, focusing on the latter as an "object" of study (while the subject does the studying). Given this design, it does follow that some fundamental problems might arise when we try to objectify the subject without any loss of information. We might end up studying zimbos (zombies, etc.) instead, and failing to "get" why they are not more than zimbos.

The solution of concluding, as Dennett (almost) does, "Well then we are all zimbos, what's the big deal", still feels unsatisfactory at some core level.

Here's the second aspect I'm open to. I'm open to an idea similar to a Dunning-Kruger effect, applied to consciousness. We might just be too numbed by some unknown process that prevents us from understanding how we ourselves *really* work.

Related homework: The Semantic Apocalypse (https://speculativeheresy.wordpress.com/2008/11/26/the-semantic-apocalypse/)
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on August 08, 2015, 05:27:39 pm
Everything is explained by physics! Deflationary monist compatibilism is the answer! Inventing magical thoughts to explain unnecessary intuitions is just an onanistic way to hide from the meat machine truth!

I forgot to mention: Chalmers also denies that there is anything mystical about positing consciousness as fundamental. He compares it to positing gravity as fundamental. Nobody complains that Newton didn't explain what gravity is; all we expect is an explanation of how gravity behaves. Physics says nothing about what things "are", and doesn't try to. That's a question for metaphysics - just as consciousness might be.

Most philosophical questions I find pointless and uninteresting. A few questions I find interesting, and I feel as if I can actually make progress thinking about them. And then there's one which I find interesting but impossible to even begin to touch in any sort of coherent manner. Guess which one that is. :(

I'm in the same boat. One of my favorite quotes, from Zee's popular science book "Fearful Symmetry", is this (regarding the possible role of consciousness in quantum mechanics):

Quote
The distinguished physicist Murph Goldberger was once asked by a television interviewer why he had never worked in this area. He answered that every time he decided to think about these questions, he would sit down, get out a clean piece of paper, sharpen his pencil - and then he just couldn't think of anything to put down.

Related homework: The Semantic Apocalypse (https://speculativeheresy.wordpress.com/2008/11/26/the-semantic-apocalypse/)

Looks juicy! I'll work on it.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on August 08, 2015, 06:24:30 pm
I've never seen anything that calls for a special, fundamental consciousness — except our desire to believe consciousness is important. As far as we can tell, qualia can be manipulated by manipulating the brain. There doesn't seem to be any parsimonious reason to look for anything else going on.
Title: Re: The "hard problem of consciousness"
Post by: Bobboau on August 08, 2015, 11:38:34 pm
But I do think it would be handy to have some way to determine whether something has the quality of "consciousness". I think there are better words to use for this, though. Person-hood is a good one. Agency is also a nice $64 word for this thing.

My general rule of thumb here is that I should treat it as a person if it is capable of expressing its desire that I do so. Not that it does or desires to, but that it can. This is because, in order to do that, it would need to have desires to communicate and sufficient intelligence to be able to communicate them.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on August 09, 2015, 04:15:31 am
I've never seen anything that calls for a special, fundamental consciousness — except our desire to believe consciousness is important. As far as we can tell, qualia can be manipulated by manipulating the brain. There doesn't seem to be any parsimonious reason to look for anything else going on.

I would say that type-F dualism (the view described in the last quote of my OP) is parsimonious, because it kills two birds with one stone. Even if you identify consciousness with something like reportability, or deny its existence (which seems extremely counterintuitive, because it's the only thing you can be certain of; everything else follows), there's still a problem.

Physics still only addresses relationships between, say, fundamental particles. It has no handle on the intrinsic nature of particles - just as it has no handle on the question, "why is there something rather than nothing?" Particles may as well be black boxes. Sure, from a practical (and testable) standpoint, that's all you need. But from a philosophical standpoint, it seems an incomplete and hollow picture of the universe, with lots of rules but no substance.

Thus, by identifying "consciousness" with "intrinsic nature", type-F dualism allows consciousness to fill a gap in our understanding that was already present. Rather than viewing consciousness as a dangling, extraneous assumption, it binds consciousness and physics together in a completely natural way.

Here's the second aspect I'm open to. I'm open to an idea similar to a Dunning-Kruger effect, applied to consciousness. We might just be too numbed by some unknown process that prevents us from understanding how we ourselves *really* work.

Related homework: The Semantic Apocalypse (https://speculativeheresy.wordpress.com/2008/11/26/the-semantic-apocalypse/)

Bakker's lecture was interesting, but I agree with the rebuttal that even if we're currently ill-equipped to analyze ourselves, we may have the tools in the future (e.g. via brain modification).

Incidentally, is "Neuropath" worth reading? It sounded promising.

But I do think it would be handy to have some way to determine whether something has the quality of "consciousness". I think there are better words to use for this, though. Person-hood is a good one. Agency is also a nice $64 word for this thing.

It would certainly be handy - it would clarify issues like abortion and animal rights - but it may be impossible, even in theory.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on August 09, 2015, 08:12:10 am
Nothing has any 'intrinsic nature.' Everything is physical, and all traits are physical traits.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on August 09, 2015, 09:06:58 am
Nothing has any 'intrinsic nature.' Everything is physical, and all traits are physical traits.

The annoying thing about these sentences is how unselfconsciously they form an incoherent thought. To say that everything is just "relations" all the way down, and that therefore there isn't anything "intrinsic" to things, etc. (i.e., let's bury Aristotle really deep into the ground and blast him with a nuke to make sure), is incoherent, IMHO, or at least I can't see how it isn't. If "everything is relations", then that's the "intrinsic nature of reality".

This is why I can't take physicalism very seriously: it naively reinstitutes what it sets out to demolish. Give me hardline positivism over this stuff any day.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on August 09, 2015, 09:52:06 am
I don't understand your point. Imagine an 'intrinsic trait' that has no physical properties and affects nothing in the world. Who cares? The only relevant properties of anything are those with causal effect.

We know that whatever consciousness is, it is physical: we can infer this by altering the brain and altering consciousness. We know that consciousness is the result of physical operations in the brain.

Anything beyond that is a rearguard action — trying to cling to dualism by inventing reasons it might be necessary.
Title: Re: The "hard problem of consciousness"
Post by: Grizzly on August 09, 2015, 10:02:43 am
Quote
We know that whatever consciousness is, it is physical: we can infer this by altering the brain and altering consciousness.

This, pretty much. If consciousness were not physical, we would not be able to influence it by taking drugs, for example.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on August 09, 2015, 10:24:53 am
Nothing has any 'intrinsic nature.' Everything is physical, and all traits are physical traits.

Well, this view is at least consistent. It's precisely analogous to the view that there is no hard problem of consciousness, and all problems are easy problems (i.e. having to do with functions).

One response would be to turn the argument on its head. The reality is that nothing has any "physical nature". Everything is experiential, and all traits are experiential traits. There's no parsimonious reason to posit an external world; it's an extraneous assumption. The universe is more like a great thought than like a great machine.

Now, I don't believe in idealism for a moment. It smacks of solipsism (though it's closer to "only minds exist" than "only I exist"). But at least it recognizes that consciousness is where you start. To adhere strictly to physicalism is to forget that your own consciousness is the starting point for all inquiry, and to ignore the one brute fact. So although physicalism and idealism are both incomplete pictures of the universe, idealism is by far the more natural one.

I don't understand your point. Imagine an 'intrinsic trait' that has no physical properties and affects nothing in the world. Who cares? The only relevant properties of anything are those with causal effect.

That's why I said that physics is all you need from a practical standpoint. (I know you were responding to Luis.)

We know that whatever consciousness is, it is physical: we can infer this by altering the brain and altering consciousness. We know that consciousness is the result of physical operations in the brain.

That doesn't follow. Experiential properties may supervene on physical properties.

(i.e., let's bury Aristotle really deep into the ground and blast him with a nuke to make sure)

 :wakka:

-------------------------

A random thought: another sign of consciousness in popular culture is the observation (arguably the basis for the Golden Rule and human rights in general), "I could have been born as someone else". Clearly, what is meant by "I" can't be my memory or thought patterns, because I'm imagining having someone else's brain. The only possible interpretation is that I'm imagining having my own consciousness implanted in a different body.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on August 09, 2015, 10:31:26 am
All properties are physical properties, and physicalism is a complete depiction of everything. No one could have been born as anyone else — their consciousness is their particular brain at a particular moment, and the illusion of continuity and 'selfness' is provided by memory. Alter the brain and you alter the person.

Deal insult to the brain and you can change anything: you can alter a man's personality, erase his loves, trick him into confabulating a new identity and history, fool him into believing he controls something he doesn't, make him experience divine presence. There is no self except the moment-by-moment meat of the brain.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on August 09, 2015, 11:03:26 am
All properties are physical properties, and physicalism is a complete depiction of everything.
The reality is that nothing has any "physical nature". Everything is experiential, and all traits are experiential traits. There's no parsimonious reason to posit an external world; it's an extraneous assumption. The universe is more like a great thought than like a great machine.

My point is that experience comes first. It's the only thing you can be sure of; it's "epistemologically primary".

No one could have been born as anyone else — their consciousness is their particular brain at a particular moment, and the illusion of continuity and 'selfness' is provided by memory.

I'm not espousing the idea, only labeling it as an idea about consciousness. I agree that it doesn't make sense. "What could have happened" is an odd human concept.

Alter the brain and you alter the person.

Deal insult to the brain and you can change anything: you can alter a man's personality, erase his loves, trick him into confabulating a new identity and history, fool him into believing he controls something he doesn't, make him experience divine presence. There is no self except the moment-by-moment meat of the brain.
Experiential properties may supervene on physical properties.
Title: Re: The "hard problem of consciousness"
Post by: zookeeper on August 09, 2015, 11:43:29 am
And then there's one which I find interesting but impossible to even begin to touch in any sort of coherent manner. Guess which one that is. :(

At this point I find it relevant to clarify what I meant above: I wasn't referring to consciousness as such (how can we tell if someone is conscious, why is anyone conscious at all, etc.), but exclusively to my own subjective experience (or qualia). I don't see any reason to assume there's anything non-physical or magical about consciousness, but that doesn't tell me why the view of the universe is from a first-person perspective in the skull of one specific being.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on August 09, 2015, 12:07:29 pm
I think experience comes last, not first. The physical universe is the most parsimonious explanation - discarding it leaves us with a bunch of useless and uninteresting ideas and no possible grounds for reasoning or evaluation.

I think the problem of qualia kind of answers itself - we see the world in the first person because we do. It's an anthropic issue. I think qualia probably emerge from mechanisms evolved to model social behavior in others - our self is a model used to integrate information and generate adaptive responses for the physical and social environment. But it may turn out to be something else, like a learning workspace or a way to resolve conflicting motor impulses. It's a question I'm interested in - but I don't think the answer will break the so-far universal monism of everything.

Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on August 09, 2015, 01:29:42 pm
I think experience comes last, not first. The physical universe is the most parsimonious explanation - discarding it leaves us with a bunch of useless and uninteresting ideas and no possible grounds for reasoning or evaluation.

I'll put it another way. Any worldview must have consciousness as its basis, because everything other than consciousness may only exist as a constituent of consciousness.

Discarding the physical universe is extreme (again, I think that physics and consciousness are complementary), but not as extreme as metaphysical solipsism (which is truly uninteresting). For example, the universe may only exist as a multiplicity of minds, so that all information is first-person, and what we think of as physical laws are actually experiential laws.

This doesn't make science any less valid, but it shifts our perspective. It also shows that we can have a coherent, structured, and interesting worldview without positing an "objective reality".

I think the problem of qualia kind of answers itself - we see the world in the first person because we do. It's an anthropic issue. I think qualia probably emerge from mechanisms evolved to model social behavior in others - our self is a model used to integrate information and generate adaptive responses for the physical and social environment. But it may turn out to be something else, like a learning workspace or a way to resolve conflicting motor impulses. It's a question I'm interested in - but I don't think the answer will break the so-far universal monism of everything.

Viewing it as a tautology or an anthropic issue is interesting (anthropic reasoning is another topic that makes my head spin). When you describe possible mechanisms for consciousness, though, you use the third-person/functional language of "easy" problems.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on August 09, 2015, 02:09:31 pm
There are plenty of worldviews out there which exist without consciousness. We make use of them all the time. A worm has a worldview — although I suppose here we're descending into argument by definition.

I think consciousness is an easy problem, yeah. I am not much troubled by philosophical arguments that consciousness comes first, that all our understanding of the universe is predicated on assumption or that (say) physical law emerges from consciousness. So what?

Scientific investigation of physics is unique because it produces useful information. We create hypotheses, collect data, and use the outcome to generate theories. Then we use the theories to predict new truth — and often, we get results. Thus we have electronics, particle accelerators, cosmology, medicine...you know all this. The reason it is philosophically decisive to me is that at no point have we ever needed to introduce a special case related to mind or consciousness.

Of course we cannot claim that physics is inviolable, absolute truth. It's a useful explanatory framework for our observations. If our observations are systematically distorted, we're in trouble. But we have a powerful, comprehensive explanation of much of reality, and it requires nothing except simple causal rules. And as that explanation grows closer to completeness —

(as we discover that we can alter qualia, subjectivity, and mind in increasingly precise ways, using only these causal rules and materialism)

— it seems most likely that, of all the proposed models, the causally closed, monist, physicalist explanation is correct. Why? Because no other model produces even one interesting, useful prediction! No other model seems necessary to explain what we find. There are no proposals for a coherent, structured, interesting non-objective world that have ever produced explanatory power.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on August 09, 2015, 03:01:57 pm
I'd say that the idealist model I described produces exactly the same predictions as the physicalist model. Practically speaking, there's no difference between the two. But then you could argue that the only difference is terminology, and we're really in agreement.

Anyway, I really like your post, and I think it's as good a place as any to leave the debate.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on August 10, 2015, 04:19:56 pm
Before you go away, I'd like to reference Terrence Deacon's work regarding this attempt to bridge what appears to be a difficult gap (which Battuta dismisses as an easy problem; well, good for him).

His idea is that the Self is an emergent property that stems from symbiotic relationships between certain pattern structures in the physical world.

I'm on my phone, so I have trouble easily linking you stuff. But do Google it. There's a good interview on YouTube with him talking about his thesis for over half an hour.
Title: Re: The "hard problem of consciousness"
Post by: Turambar on August 10, 2015, 05:04:25 pm
ITT: arrangements of molecules with the ability to feel special
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on August 11, 2015, 07:54:50 am
Before you go away, I'd like to reference Terrence Deacon's work regarding this attempt to bridge what appears to be a difficult gap (which Battuta dismisses as an easy problem; well, good for him).

His idea is that the Self is an emergent property that stems from symbiotic relationships between certain pattern structures in the physical world.

I'm on my phone, so I have trouble easily linking you stuff. But do Google it. There's a good interview on YouTube with him talking about his thesis for over half an hour.

Thanks! I saw this (https://www.youtube.com/watch?v=BvFE1Au3S8Ul) video and wasn't sure where he was going at first, but it came together nicely at the end.
Title: Re: The "hard problem of consciousness"
Post by: Grizzly on August 11, 2015, 01:52:33 pm
ITT: arrangements of molecules with the ability to feel special

Doesn't this sum up the entire human race?
Title: Re: The "hard problem of consciousness"
Post by: Mikes on August 13, 2015, 07:31:51 am
ITT: arrangements of molecules with the ability to feel special

Doesn't this sum up the entire human race?

And cats?

:p
Title: Re: The "hard problem of consciousness"
Post by: Grizzly on August 14, 2015, 02:34:22 pm
ITT: arrangements of molecules with the ability to feel special

Doesn't this sum up the entire human race?

And cats?

:p

YES :D
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 14, 2015, 05:46:25 pm
Sorry about the necro - I have a question, and I'm very curious about how you'll respond. It's a sharpened version of the transporter question.

Transporter #1 works by splitting you into your component atoms, then reconstructing you at another location with the same atoms.
Transporter #2 works by scanning you, then reconstructing you at another location with different atoms. The scanning process annihilates the "original you".
Transporter #3 works by scanning you, then reconstructing you at another location with different atoms. The scanning process causes the "original you" to drop dead.
Transporter #4 works by scanning you, then reconstructing you at another location with different atoms. The data transmission causes the "original you" to die in agony.

Assume that all four transporters work exactly as advertised. How willingly would you use each transporter?
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 14, 2015, 06:53:34 pm
1-3 produce a valid fork and seem safe since no fork experiences suffering or aversive events. #4 causes a causal descendant fork to experience agony and is unsafe from the perspective of the pre-fork mind.
Title: Re: The "hard problem of consciousness"
Post by: zookeeper on October 15, 2015, 03:20:52 am
Obviously I wouldn't use #4.

I might end up using #3 or #2 if I didn't mind dying (it would happen without the usual downsides of dying, after all).

I'd probably use #1, although only after observing others do it.

The differences between how I'd approach 1-3 are of course largely just psychological and not particularly rational.
Title: Re: The "hard problem of consciousness"
Post by: watsisname on October 15, 2015, 03:48:30 am
Why does #4 cause the descendant fork to experience agony?  (Honest question; I don't understand it).
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 15, 2015, 03:56:16 am
I wouldn't use any one of those. And I think anyone who does has either not thought this through sufficiently, or is insane.
Title: Re: The "hard problem of consciousness"
Post by: The E on October 15, 2015, 04:08:41 am
Why does #4 cause the descendant fork to experience agony?  (Honest question; I don't understand it).

For reasons. I think the specific mechanism by which one fork experiences an agonizing death doesn't really matter; the philosophical question of whether or not you're willing to use a piece of tech if it causes an exact copy of you pain and death is the interesting part here.
Title: Re: The "hard problem of consciousness"
Post by: watsisname on October 15, 2015, 04:39:15 am
@E, I do understand the purpose of the question (and my initial thoughts closely mirror Battuta's), but I don't understand his claim about the effects of #4. I'm interested in discussing that because I feel it might improve my understanding of the principles and affect the conclusions I draw from the exercise.
Title: Re: The "hard problem of consciousness"
Post by: The E on October 15, 2015, 04:50:27 am
Can you elaborate on what you do not understand?
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 15, 2015, 07:28:22 am
I think the confusion might be in my answer? Specifically that I'm saying it causes a fork to die in agony.

My perspective on the whole teleporter problem is that we have to give up on the illusion of 'an original'. Think of yourself as a brain-state constantly copying itself forward, one instant at a time: yourself tomorrow will be a teleporter duplicate, produced by the 'teleportation' of simple causality. Teleporters just produce two valid descendants instead of one, two 'forks'.

So in my analysis of #4, I'm calling the "original you" just another valid causal fork. And to the pre-fork consciousness, it's important to understand that both the forks who result from the teleporter are going to be The Real You. There's no distinction between 'original' and 'duplicate'.

1-3 are okay because they involve the same amount of risk as day to day life: your current brain state will 'die' but it will propagate forward in time.

4 is not okay because it requires one of your future selves to go through agony.
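
If it helps, here's a toy sketch of the picture I'm describing, in Python (treating a 'mind' as nothing but a snapshot of information; a cartoon, obviously, not a model of an actual brain):

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class BrainState:
        # A cartoon "mind": nothing but a snapshot of information.
        memories: Tuple[str, ...]

        def tick(self, experience: str) -> "BrainState":
            # Ordinary life: each instant, the current state is copied
            # forward into a successor; the old instantaneous state is gone.
            return BrainState(self.memories + (experience,))

        def teleport(self) -> Tuple["BrainState", "BrainState"]:
            # A philosophical teleporter: the same copying-forward step,
            # except it yields two equally valid causal descendants.
            return (self.tick("entered teleporter"),
                    self.tick("entered teleporter"))

    me = BrainState(("born",))
    me = me.tick("ate breakfast")        # moment-to-moment "teleportation"
    origin, destination = me.teleport()  # two forks of one pre-fork mind
    assert origin == destination         # nothing singles out an "original"

The assert is the whole point: any credential either fork could offer for being 'the real you' is duplicated in the other.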
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 15, 2015, 08:47:13 am
I find the hypothesis that the "original" is *just* equal to a past "me" and therefore irrelevant - that my current "brain state" is for all purposes exactly as if it had just been teleported from my previous "brain state" a Planck second ago (or something) - a very intellectually interesting one.

To call the skepticism of this idea an "illusion" is mind-boggling to me. As far as I am concerned, even if such technology were possible, there would be no way to reliably test that idea. It's an unfalsifiable idea. If it is true, the teleported person will act as if it were true; if it is false, the teleported person will still act as if it were true, with the difference being that it's not really you anymore (you died, sorry about that).

It's the untestability of this idea that gives rise to the horror of it all. Star Trek might well be filled with creatures that constantly kill themselves and are eternally replaced by exact clones without even noticing it, without anyone ever being aware of it or even knowing the possibility of it (except the brightest paranoid in there, who was on to something). You see Captain Janeway destroyed by a beam, only for an incredibly exact copy to emerge just beside you, and you end up believing you'll be OK. So you'll beam yourself too. But those will just be the last few nanoseconds of your life.

This idea that you are just your current brain state and nothing else is interesting. Many difficulties start appearing once you start questioning it (how far back in time must I go to say it's not "me" anymore?), and its weakness comes from taking a purely mechanical approach to consciousness - "Consciousness is just the result of a program running in the brain". An analogy as faulty as the one made a hundred years ago (that the human body was akin to a steam engine, and thus should let off some "steam" once in a while). Again, it is an interesting idea and one that might well be true (I really don't think so). I have no idea if it is true.

But to blindly accept it as such, and to declare the denial of it delusional, is just an intellectual overreach for the sake of philosophical edgelording IMO.

And even if you believe in it, just the possibility that it is indeed false (and the impossibility of knowing otherwise) should render your decision as clear as day: Never teleport. EVER.
Title: Re: The "hard problem of consciousness"
Post by: zookeeper on October 15, 2015, 09:11:27 am
It's the untestability of this idea that gives rise to the horror of it all.

Or, alternatively, dispels the horror. Something that no one will ever know about or experience the consequences of can't be horrible, and blinking out of existence in a teleporter isn't any different.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 15, 2015, 09:12:29 am
An identical copy is you. All plausible credentials for you-ness are material: they are duplicated as well.

Our faith that we'll still be ourselves tomorrow is MORE alarming than our faith in a perfect teleporter. A day gives a lot of time for drift!

And testability isn't a problem - the real problem is the testability of the alternative! If we're afraid of winking out and being replaced by a clone with new qualia, we need some logic or premise to support the fear. Yet the fork contains all plausible substrates for personality and consciousness. Fear of teleporters leads to the fear of dying every instant of every day.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 15, 2015, 09:27:49 am
You're asserting things out of faith: it's an untestable assertion, and yet you claim it is true. It's an irrational position. That one may feel inclined to believe this might well be the state of things is one thing; I accept that, and that you live by those beliefs is quite uncontroversial to me. That you are willing to kill yourself over that question (over and over) is not.

Your inference that we should fear "moment to moment" qualia states as much as (or more! - I mean, can you cut your edgelordiness here for a sec?) teleportation seems sensible only if you don't stop to realise that you have no choice in the "moment to moment" qualia matter. It is irrational to fear that problem because you cannot do anything about it. You cannot choose to remain in your previous state. However, you can choose whether or not to teleport yourself. The fact that you state there's no difference isn't proof that there isn't. It's mere belief.

Beliefs are fine. They drive things forward. I just have this knack of not betting my life on someone's whims about how exactly consciousness works and whether "it doesn't really matter if your original brain dies". And I think badly of anyone who does. Not too badly, though. Usually only intelligent, literate, and well-informed people reach these outrageous positions.

It's the untestability of this idea that gives rise to the horror of it all.

Or, alternatively, dispels the horror. Something that no one will ever know about or experience the consequences of can't be horrible, and blinking out of existence in a teleporter isn't any different.

So if you are going to die but you don't know anyway, it's no biggie. I mean, if this isn't the endgame of misanthropy and nihilism, I don't know what is.
Title: Re: The "hard problem of consciousness"
Post by: zookeeper on October 15, 2015, 09:57:36 am
So if you are going to die but you don't know anyway, it's no biggie. I mean, if this isn't the endgame of misanthropy and nihilism, I don't know what is.

Of course it's no biggie. And what would be closer to the endgame of misanthropy and nihilism would be to consider #4 to be no biggie, and no one's doing that.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 15, 2015, 10:12:02 am
"Of course it's no biggie", well if you wanted to convince me you actually don't care for your life, you surely did a tremendous job.

e: "Hey people it's amazing, we got this new teleport machine working! We are 100% sure it will copy you into another place altogether, and we are mostly positive that you yourself won't really, really die because we have this crazy untestable metaphysics, and anyways, even if you die, you won't notice it! Isn't this the best product of the world?


Sign me the **** out. Immediately.
Title: Re: The "hard problem of consciousness"
Post by: zookeeper on October 15, 2015, 11:08:32 am
"Of course it's no biggie", well if you wanted to convince me you actually don't care for your life, you surely did a tremendous job.

It's not what I specifically wanted to do, but sure, of course I don't care for my life. That'd be somewhat silly and paradoxical, after all.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 15, 2015, 11:58:00 am
Not sure if serious...
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 15, 2015, 12:14:07 pm
The only crazy untestable metaphysics here is the kind suggesting that teleporters are dangerous at all! What we know of physics and neuroscience suggests NOTHING except a monist, physical mind. There's not even a loose thread to tug on to head towards 'teleporters might kill you.' It's purely the product of intuitive misunderstandings of the mind and qualia.

If your argument is that 'well, they are less dangerous to consciousness than the passage of a day, but we can't choose to stop time,' would you freeze yourself in stasis given the option? To save yourself from a process that will inevitably destroy more of your current brain state than a philosophical teleporter?

You'd be like a reverse Barclay, terrified of leaving the pattern buffer.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 15, 2015, 12:18:56 pm
I guess your argument is 'even if there is a vanishing chance a philosophical teleporter would kill you, it's not worth the risk.' My reply is 'any framework in which the teleporter kills also makes day to day life tremendously more risky and fatal, which is a nonsensical outcome.'
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 15, 2015, 12:37:51 pm
For those of you who would use #3, but not #4: would you be okay with #4 if you took a gun into the transporter, then blew your brains out after the scan?
Title: Re: The "hard problem of consciousness"
Post by: Grizzly on October 15, 2015, 12:46:05 pm
One thing I do want to point out (I'm woefully outclassed in these kinds of discussions) is that Luis is using terms like "edgelordiness" and "misanthropy", both of which describe a lack of empathy towards other human beings in the person they're applied to. However, I would never suspect either zookeeper or Battuta of lacking such (and I'd certainly argue the opposite in favour of Empattuta). Not believing in the concept of a soul does not mean you have no moral code or that you don't care about other human beings, in the same way that saying Freespace 2 consists of a lot of lines ending in semicolons does not mean you don't love it.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 15, 2015, 01:06:31 pm
I guess your argument is 'even if there is a vanishing chance a philosophical teleporter would kill you, it's not worth the risk.' My reply is 'any framework in which the teleporter kills also makes day to day life tremendously more risky and fatal, which is a nonsensical outcome.'

That's what you keep claiming, and yet you have no way to test this... I was going to say "hypothesis", but that would imply testability of any kind. I'm sorry, you can't even calculate the risk itself, for there are no Bayesian parameters you can even hold on to (and frequentist analysis is just laughing at you here).

Of course, what you are saying might be totally true and raze my issues absolutely to the ground, *if*, for instance, what you are stating about brain states is the entire story of Consciousness. First, you cannot claim this without testing. Saying "We don't know of any other possibility" is no remedy, since Consciousness hasn't been successfully sussed out yet. It's like claiming that thunder must come from God because "What else could it be?". It's a failure of the proponent to grasp his own limitations regarding what we don't know about the universe and Consciousness. In a word, it's called "Hubris".

Second, even if we "accept" this story, you could then say "Well see, the problem here is solved", but it could well be solved in a very unsatisfying way, because if you bring Heisenbergian notions into play here, you'd realise that an actual complete and precise copy of your brain state is quantum mechanically impossible. And so, the only way to make your crazy scenario possible would be to violate physics itself. IOW, this could be the universe's way to "prevent" copies of consciousness (don't take me literally here, I'm making an analogy with how "naked singularities" are always hidden behind event horizons).
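
(The relevant result here is the quantum no-cloning theorem of Wootters & Zurek (1982); a compressed version of the standard linearity argument, for anyone who wants it:)

    % Suppose a unitary U could copy an arbitrary unknown state:
    %   U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle  for all |\psi\rangle.
    % Unitarity preserves inner products, so for any two states \psi, \phi:
    \langle\psi|\phi\rangle
      = (\langle\psi| \otimes \langle 0|)\, U^\dagger U\, (|\phi\rangle \otimes |0\rangle)
      = \langle\psi|\phi\rangle^2
    % which forces \langle\psi|\phi\rangle to be 0 or 1: only identical or
    % mutually orthogonal states can be copied, never an arbitrary one.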

In that case, a further horror could emerge. Someone claiming they had achieved a perfect copy, but only down to that barrier. But hey, good merchants that they would be, they would totally convince a lot of people that this was "Good Enough". But would it really be? Well, perhaps it would. Perhaps it would make sufficiently similar copies to fool us. Would these not-equal but merely "alike" copies harbor our consciousness? Well, we can see how this reasoning starts to become irrational, for there is no apparent barrier to how similar a brain state must be for me to "care" about it as much as about myself.

Think, does my wife harbor my consciousness? Does your SO harbor YOUR consciousness? Would you be happy to die since she's alive so... who cares? Why not?

From the scientific point of view, every consciousness is interchangeable. It's just behavior on top of behavior, atoms doing their thing, nothing remotely special about any of them, and any system that is equal to another is just another "instance" of it. And I do believe many people "believe" in this, at least intellectually. Of course, if they actually internalized this view subjectively, they would completely fall apart and despair into full depression and suicide. It's simply *not true*. And the only thing you need to "falsify" this view is the subjective experience of yourself. That's not to say the theory is wrong... no, the problem is that it is incomplete.

And by "incomplete", I don't mean to say "yeah we need spirits". I mean to say, this is not accounted for. There's a whole LOT that is unnacounted for. And for this reason, to trust these insane engineers that would have invented this machine would be akin to believe in doctor Frankenstein and think "This is all going to be alright now".

No, it isn't.

...Luis is using terms like "edgelordiness" and "misanthropy", both of which describe a lack of empathy towards other human beings in the person they're applied to.

Joshua, I won't be tone-moderated by you, so kindly drop it. The fact that you misinterpreted those two words the way you did is proof enough that you're incapable of performing that duty anyway. I will hereby explain what I actually did. By "edgelording" I mean that I do believe Battuta is trying too hard to be "edgy" in his philosophical conclusions. That is, I sensed (decreasingly so, I must add) that he was just putting his most controversial statements out there and keeping his more moderate caveats to himself. The misanthropic note is a purely philosophical critique. The idea that you shouldn't care whether you are about to die is necessarily predicated on a disregard for human subjective experience, which is all we have. Do I believe zookeeper is a misanthrope? No, I don't know enough about him to say that, but his idea clearly was.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 15, 2015, 01:24:55 pm
Our understanding of consciousness is not complete, but it is bounded. We know the explanation falls within a certain territory. The territory has parameters: it is monist, it is physical, it is causally closed. These parameters speak to the risk of teleportation. It's not like claiming that thunder must come from God, because what else could it be; it's like claiming that thunder must be a physical event, because we have no evidence for nonphysical events.

The objection to the engineering parameters of the teleporter is valid, which is why I've been careful to specify a philosophical teleporter: arbitrarily precise reconstruction of the physical body.

Internalizing the world-view that consciousnesses are 'interchangeable and material' is hardly a formula for suicide. It's not a disturbing attitude! It loses you nothing. You are a subprocess of the universe, computing yourself forward. Your own behavior is contingent on your past experiences, your knowledge, and beliefs. If anything it's humanizing.

The fact that my statements seem controversial or misanthropic is, I think, evidence of how deeply our society is still predicated on illusions about what we are. The notion that we are only a pattern of information, stored in meat and endlessly mutable, is somehow radical and depressing. Yet it requires only that we give up things we never actually had at all!

For those of you who would use #3, but not #4: would you be okay with #4 if you took a gun into the transporter, then blew your brains out after the scan?

Blowing your brains out after the scan implies serious loss of information! One fork would diverge and then die. The reason dropping dead immediately after the scan is acceptable to me is that it doesn't require any fork to subjectively experience suffering or trauma. Yet from the perspective of the fork who remains at the teleporter origin, they are now simply committing suicide after the teleporter looks at them. The other fork's diverging existence is cold comfort.

A pre-teleport instance might be okay with the gun scenario, since they know one of their causal descendants will survive. But they might prefer that all their causal descendants avoid death, in which case they'd turn down this variant on #4 (yet be perfectly happy with instantaneous vaporization, since it doesn't leave any fork to subjectively experience suffering and death).

The post-teleport instance would certainly not want to shoot himself in the head.

You have to remember that teleportation/mind forking is a great way to ensure life, but not a great way to avoid death. You also have to remember to abandon chauvinistic notions of a 'real self' — you are ONLY a pattern of information, bleeding into the world around you, held together as a loosely continuous object by the ability to copy information forward in time. The philosophical teleporter is no different than what happens to us moment to moment.

We are okay with annihilating ourselves as we were yesterday by becoming the person we are today, as long as that person isn't around today to suffer. We would not be okay with peeling our future self off the past self and leaving the past self to drown.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 15, 2015, 01:42:18 pm
I meant "blowing your brains out" to be an instantaneous death, so there would be no physical pain. If you mean the psychological pain of committing suicide after the scan, what if (before entering the transporter) you rig a gun to blow your brains out after the scan?
Title: Re: The "hard problem of consciousness"
Post by: zookeeper on October 15, 2015, 02:01:43 pm
The misanthropic note is a purely philosophical critique. The idea that you shouldn't care whether you are about to die or not is necessarily predicated on a disregard for human subjective experience, which is all we have. Do I believe zookeeper is a misanthrope? No, I don't know enough about him to say that, but his idea clearly was.

Nonono. The idea that you shouldn't care if you live or not has nothing to do with how one values the contents of subjective experience; it's precisely because only subjective experience matters that the concern becomes irrelevant.

The question of whether you live or not is a question of whether your subjective experience exists at all. I hold that it would be paradoxical to care about it ending, because the only way you can make a value/preference judgement is from within that subjective viewpoint. To prefer existence over non-existence would require being able to compare them from some kind of objective viewpoint, which you cannot do, making "I prefer to exist" something of an oxymoron.

So, there is a disregard for the existence of subjective experience, but not for what subjective experience is like. You can call that misanthropic (although the word is obviously unnecessarily anthropocentric) if you want, but it seems like a simplistic and misleading characterization.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 15, 2015, 02:06:52 pm
I meant "blowing your brains out" to be an instantaneous death, so there would be no physical pain. If you mean the psychological pain of committing suicide after the scan, what if (before entering the transporter) you rig a gun to blow your brains out after the scan?

A gun is a pretty crude way to clean up a fork. You'll have massive chunks of the brain continuing to fire for milliseconds or whole seconds, you'll have the chance of survival, you'll have the rest of your body continuing to do its thing for a little while... it doesn't offer the immediate and total annihilation of scan-and-vaporize/scan-by-vaporize.

Again, when thinking about forking you do want to think about the welfare of both forks. 'One fork will be instantly vaporized, experiencing nothing' is pretty safe — a physical instance of your mind is closed down, as with any other form of death, but its brainstate is propagated forward safely.

Even a tiny delay before a traumatic injury by gunshot means you're not safely propagating forward the brainstate that occurs between the instant of teleportation and the impact of the bullet. You're putting a fork in a pretty ****ty position.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 15, 2015, 02:12:37 pm
Our understanding of consciousness is not complete, but it is bounded. We know the explanation falls within a certain territory. The territory has parameters: it is monist, it is physical, it is causally closed. These parameters speak to the risk of teleportation. It's not like claiming that thunder must come from God, because what else could it be; it's like claiming that thunder must be a physical event, because we have no evidence for nonphysical events.

I like debating this stuff, but still if you don't mind, I'll retort yet again.

My point about the "thunder" is precisely that we cannot say exactly what "bounds" consciousness until we actually know how it behaves. Someone who couldn't even imagine what those "physical" things might be would obviously run to his most parsimonious explanations. The correct explanation was simply beyond his scope of understanding. Knowledge is inherently unbounded at all times, which is not to say that it is impossible. We believe we understand thunder because we have modeled it and correctly identified all the parameters that enable it, etc. But consciousness? I'd say we are at a new level here. We have no idea if even other people have consciousness (we just assume they do, it's parsimonious, etc.). But if we really wanted to know, there would be no means (and by means, I even say philosophical means) to correctly detect whether others have it or are plainly zimbos.

The monist attitude is fine (and I do have opinions on how to tackle this problem, but that's for another time), but the problem is at the very core of experience and how it is fundamentally undetectable from the outside (indistinguishable from "zimbos").

Quote
The objection to the engineering parameters of the teleporter is valid, which is why I've been careful to specify a philosophical teleporter: arbitrarily precise reconstruction of the physical body.

Except that the engineering objection destroys your philosophical one. You yourself claimed that what makes "me" "me" is the correct localization of all my brain's atoms (and more), so that if we correctly copied all of this, then that would be "me". Then you said something apparently contradictory: that there is no such thing as a constant mental state; we are always different. My brain state a nanosecond ago differs from my brain state now. That should mean that the "me" me is inherently different from the "me" me of a nanosecond ago. We are two different people. This is true even colloquially, but it nevertheless demolishes the first point altogether.

Furthermore, if it is indeed true that these small differences do not matter because "what I am now is different than what I was before", then those engineering differences shouldn't matter either, at least philosophically speaking. The end result is just someone with a different mental state than mine, but that was going to be a given anyway, since I change every second, so we're still good.

But, philosophically, this escalates quickly into a reductio ad absurdum, because you'd be forced to recognize that all mental states are therefore "equivalent" to you. What if the teleporter killed you and substituted someone completely different? Clearly that is as philosophically valid as your own proposal. But it is clearly absurd and goes against your premise.

So something went awfully wrong. And what I am claiming is that this paradox points to an incompleteness in how you are framing consciousness. Your claim that you can draw a boundary around Consciousness is overly optimistic.

Quote
Internalizing the world-view that consciousnesses are 'interchangeable and material' is hardly a formula for suicide. It's not a disturbing attitude! It loses you nothing. You are a subprocess of the universe, computing yourself forward. Your own behavior is contingent on your past experiences, your knowledge, and beliefs. If anything it's humanizing.

Yeah, that might be true; people might be at peace with that. I'd submit that this comes with a different understanding of what those ideas imply, but at that point I'd also have to admit that I am equally in unknown territory, so I might be totally wrong.

Quote
The fact that my statements seem controversial or misanthropic is, I think, evidence of how deeply our society is still predicated on illusions about what we are. The notion that we are only a pattern of information, stored in meat and endlessly mutable, is somehow radical and depressing. Yet it requires only that we give up things we never actually had at all!

It's not that it's "radical and depressing", it's incomplete. People in the 19th century equated people to steam engines. "They are just like steam engines!", and... sigh, it's like, "Oh look, people are just atoms". Well, that's true in some sense, but it's just an incredibly ignorant sense. It ignores everything else that might be at play here and that is perhaps orders of magnitude more important than "atoms". Likewise, I feel you are merely happy to equate consciousness to some sorts of metaphors: "patterns of information", "thing that can be stored in meat", "endlessly mutable", etc., while the grander truth is that you actually don't know what consciousness really is! Engineeringly speaking.

It's akin to hearing someone from the 17th century say that emotions are "bounded" in the region of the heart. "We know this to be a fact", he would say. Well, of course he would.

Which is not to say that I know. I just feel that I at least have more respect(?) for the big gap between what is actually true and our knowledge.


The misanthropic note is a purely philosophical critique. The idea that you shouldn't care whether you are about to die or not is necessarily predicated on a disregard for human subjective experience, which is all we have. Do I believe zookeeper is a misanthrope? No, I don't know enough about him to say that, but his idea clearly was.

Nonono. The idea that you shouldn't care if you live or not has nothing to do with how one values the contents of subjective experience; it's precisely because only subjective experience matters that the concern becomes irrelevant.

The question of whether you live or not is a question of whether your subjective experience exists at all. I hold that it would be paradoxical to care about it ending, because the only way you can make a value/preference judgement is from within that subjective viewpoint. To prefer existence over non-existence would require being able to compare them from some kind of objective viewpoint, which you cannot do, making "I prefer to exist" something of an oxymoron.

So, there is a disregard for the existence of subjective experience, but not for what subjective experience is like. You can call that misanthropic (although the word is obviously unnecessarily anthropocentric) if you want, but it seems like a simplistic and misleading characterization.

I do see the problem here better now; your answer clarifies a lot. You see, you're analysing this from an "objective" standpoint, while the original question was posed to your own subjectivity. This is your mistake. The question was, and I quote, "How willingly would you use each transporter?" Now, we can all be scientists all day and discuss this very objectively, but at the end of the day the decision is made not from the "universe's" point of view, but from yours. You are about to enter the teleportation device. Will you say "energise" or not? You even complain that these words are too "anthropocentric", but I fail to see how this was a problem for the bees or horses. This is a dilemma for a human, and a question directly posed to one, not as a scientist, but as a human being with a will.

So yes, you do deny that your analysis is misanthropic, but it ends up being apathetic. Again, that's fine. You can do Hamlet all day and decide, like some people do, that there is no real difference between being alive or dead (is there any objective difference anyway?), but that is not responding to the question at hand.

I am a human being and I stand for my initial answer: I would NOT enter those teleporters, not in a million years.

Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 15, 2015, 02:31:53 pm
I am a human being and I stand for my initial answer: I would NOT enter those teleporters, not in a million years.

I have yet to read anything you've posted that seems to get at this objection. You seem willing to concede that a teleporter cannot be more hazardous than day-to-day existence: after all, your consciousness is already bound by the engineering constraints of the brain, which must propagate you forward in wetware. Nor is there a risk of reductio ad absurdum, because it's trivial to note that NOT all mental states are equivalent — what's vital is the preservation of information, a logical causal pathway. The teleporter cannot kill you and substitute someone completely different. That would be a critical loss of information. The teleporter must be safer, in terms of information loss, than day-to-day existence!

You didn't answer my super clever stasis question either.

I don't understand how you are constructing a teleporter such that it's more risky than going to sleep. Nor am I convinced by the idea that the boundaries of physicalism and monism will somehow be punctured by the study of consciousness, when those boundaries have remained inviolate since the beginning of human investigation of everything.

Title: Re: The "hard problem of consciousness"
Post by: Grizzly on October 15, 2015, 02:43:09 pm
Joshua, I won't be tone-moderated by you, so kindly drop it. The fact that you misinterpreted those two words as the things you said is proof enough that you are incapable of performing that duty anyway. I will hereby explain what I actually meant. By "edgelording" I mean that I do believe Battuta is trying too hard to be "edgy" in his philosophical conclusions. That is, I sensed (decreasingly so, I must add) that he was just placing his most controversial statements out there and keeping his more moderate caveats to himself. The misanthropic note is a purely philosophical critique. The idea that you shouldn't care whether you are about to die or not is necessarily predicated on a disregard for human subjective experience, which is all we have. Do I believe zookeeper is a misanthrope? No, I don't know enough about him to say that, but his idea clearly was.

For the purposes of clarification: I am not tone-moderating you; I am pointing out that the words you use make no sense in the context of this discussion (or that you do not understand the words you use). Misanthropy indicates an active dislike for human society, whilst edgelordyness is more related to 4chan culture, and to 4chan being the primary reason for the existence of misanthropy today.

The notion that humans are nothing more than chemical machines is not misanthropic (nor edgy (https://www.youtube.com/watch?v=vZjX65NYVGM)) in itself, nor is it controversial (a game like The Witcher can mention it without anyone batting an eyelid). I would argue that it's egalitarian: any notion of a higher purpose or whatnot is eradicated. The only thing that remains is your unique arrangement of molecules. It only really becomes misanthropy when you start referring to human beings as meatbags and constantly calculate the best way to assassinate any number of them at any given moment.

Also: WOLFENSTEIN! (https://youtu.be/EURnRZ0tb44?t=1m20s) (spoilers)
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 15, 2015, 02:48:32 pm
Regarding the gun thing: fair enough.

Nor is there a risk of reductio ad absurdum, because it's trivial to note that NOT all mental states are equivalent — what's vital is the preservation of information, a logical causal pathway. The teleporter cannot kill you and substitute someone completely different. That would be a critical loss of information.

I think Luis is saying that, since we're always changing, information is never preserved. So it's no big deal if the transporter reassembles us differently.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 15, 2015, 03:08:35 pm
I think to say that it is "trivial to note that NOT all mental states are equivalent" without noting the philosophical elephant in the room is wanting. I see I did not express my reductio properly.

I will answer your super clever stasis question down at the bottom. I will also say, before continuing, that I didn't say physicalism and monism have an exception in consciousness. I even hinted that I might have some ideas on how to tackle that problem engineeringly. What I did talk about is how certain things appear to be, right now, untestable. And since they are untestable, there is no way of distinguishing whether the teleported person is "equivalent" to a person who just had some sleep, who is in turn "equivalent" to a person who has been continuously awake for a few minutes.

These are untestable things, not only in practice but also because these terminologies are absolutely incomplete. Take your sentence: "Not all mental states are equivalent". That is a really vague statement. Equivalent in what respect? You speak of information, but what are we measuring here? We have already established that our brains are continuously changing the information in them, so some changes must be allowed. Objectively speaking, the states are surely not equivalent. What is the acceptable delta here, objectively speaking? How can you even define a criterion to decide?

Look, you basically answer this with "[my criterion is] day-to-day loss", without any rational reason why this should be the case. Sure, everyone would be (in a common-sense way) subjectively more satisfied with that answer, because the dude coming out of the teleporter would be very similar to the dude going in. But there's no philosophical reason why this should be preferable. If any delta in mental state is "enough" to carry your own subjective experience with you (that is, you will keep living just like you are now living from this moment to... this moment to... this moment... etc.), then there is no obvious philosophical barrier to change. You could be turned into a horse. It would "still be you", according to your own metaphysics.

And the reason this is absurd is that the underlying metaphysics is absurd, not that the "horse" example is absurd. It follows from the premises.

IOW, it's an error born of our lack of an engineering-precise grammar and jargon for what consciousness really is, which makes us write arguments that nevertheless accept the correctness of the terminologies we are using, terminologies that are most assuredly wrong. IOW, for someone in "the far future", it's like watching Deepak Chopra talk about Quantum Mechanics. The words might flow in a soothing manner, and it might even make some... ahhh... sense, if you don't actually know anything technical about Quantum Mechanics; but once you do, you open your eyes in terror at the gruesome logic, reasoning, arguments and, gasp, conclusions some people reach.

Like perhaps saying that transmitting equal mental states to another "container" is what we need to transfer "yourself".

Regarding your stasis thing: look, you're basically telling me to either let half of me die or take a chance that your metaphysics is correct. It's a terrible choice regardless. Would I take the bet or not? I have no idea; I'd rather stay here.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 15, 2015, 03:21:38 pm
The 'acceptable delta' (in answer to both above posts) is the preservation of information, namely retaining the ability of a brainstate to propagate itself forward by causal rules. Vaporizing your brain introduces entropy into the system, destroying it: the brainstate cannot copy itself forward without becoming very lossy. Teleporting your brain may vaporize it, but no information is lost: the brainstate propagates through the teleporter.

Day-to-day life brings in stimuli, which couple with the brain and alter the way the brainstate propagates forward. This alters our brainstates over time.
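
If a toy sketch helps (purely illustrative Python; the list-of-memories "brainstate" is my cartoon stand-in, not a claim about real neuroscience):

Code:
import copy

class Brainstate:
    # Toy model: a "mind" is data plus a rule for feeding itself forward.
    def __init__(self, memories):
        self.memories = list(memories)

    def step(self, stimulus):
        # Day-to-day life: the state changes, but by its own causal rules.
        self.memories.append(stimulus)

    def fork(self):
        # Philosophical teleporter: an arbitrarily precise copy, nothing lost.
        return copy.deepcopy(self)

me = Brainstate(["childhood", "this thread"])
far_end = me.fork()        # scanned and rebuilt at the destination
me = None                  # origin instance vaporized: that fork ends...
far_end.step("arrival")    # ...but the brainstate keeps propagating forward
print(far_end.memories)    # ['childhood', 'this thread', 'arrival']

The sketch only shows the shape of the claim: stimuli mutate the state by its own rules, the fork copies it losslessly, and vaporizing the origin afterwards destroys nothing that hasn't already propagated.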
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 15, 2015, 03:49:12 pm
I apologize for harping on this, but I'm still confused. How perfect does the copy need to be to allow your brainstate to propagate, and how do teleportation imperfections differ from day-to-day stimuli?
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 15, 2015, 05:17:12 pm
The 'acceptable delta' (in answer to both above posts) is the preservation of information, namely retaining the ability of a brainstate to propagate itself forward by causal rules.

Sure, it has to survive the process, but that is a small, consensual detail. What is more important is that this criterion fails to explain why an output like a "horse" would be unacceptable. If the only criterion is survival of the end product, we could even substitute a copy of anyone else for "you".

Why would that be a problem at all?

The problem here is that we instinctively understand this is wrong because it's "not us, it's someone else"; but according to you this should not be a problem, since my brain states are always changing anyway, and since you define a self as being that "brain state", I'm constantly ceasing to be myself anyway, so it's not really different at all.

There are crude paradoxes here that are easy to recognize when you realise they all stem from how science regards everything as an object and how that starts to crack when it deals with subjectivity itself.

Quote
Vaporizing your brain introduces entropy into the system, destroying it: the brainstate cannot copy itself forward without becoming very lossy. Teleporting your brain may vaporize it, but no information is lost: the brainstate propagates through the teleporter.

Ok no "information" is lost but were you killed or not? Will the you you, your own experience of yourself live in that body or not?

This question seems of the utmost importance but it also seems unanswerable.
Title: Re: The "hard problem of consciousness"
Post by: Droid803 on October 15, 2015, 05:31:17 pm
If no "information" is lost does it matter? All evidence indicates that we consist of nothing more than this "information", after all. Isn't that all there is to this?
You may have been killed, but you did not die.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 15, 2015, 06:36:42 pm
The 'acceptable delta' (in answer to both above posts) is the preservation of information, namely retaining the ability of a brainstate to propagate itself forward by causal rules.

Sure, it has to survive the process, but that is a small, consensual detail. What is more important is that this criterion fails to explain why an output like a "horse" would be unacceptable. If the only criterion is survival of the end product, we could even substitute a copy of anyone else for "you".

By what sensible causal rule could a person's brain spontaneously become a horse's? You're trying to argue that a catastrophic failure is somehow equivalent.

The teleporter is safe because it allows your brain state to propagate forward by its ordinary causal rules. It does not change or distort the information present. It is wholly unlike the examples you're floating.

You become some other you by the introduction of stimuli which are processed according to the brain's logic.

Death is trivially defined: irrecoverable loss of information. If the brain state isn't lost and can keep firing itself forward on its own power, you're not dead.
Title: Re: The "hard problem of consciousness"
Post by: Scotty on October 15, 2015, 11:54:39 pm
Luis, it's important to realize that the end product Battuta refers to isn't "the consciousness of a human being".  It's "You".  You yourself.  An arbitrary human being that undergoes the process and remains the same arbitrary human being, not another human being.  If the end result was not identical in all ways including thought and memory, it is not a real teleportation.  And if it is identical in all ways including thought and memory, then "You" are still alive, and arguably safer than any single other instance of your entire life.
Title: Re: The "hard problem of consciousness"
Post by: watsisname on October 16, 2015, 01:44:39 am
In other words, from the Comic in the OP:
(http://i.imgur.com/11uW3jB.jpg)

Can you elaborate on what you do not understand?

Oh yeah, that's clarified now. I had misinterpreted "descendant fork" in Batt's post to refer to the fork constructed from the transmitted data.  Which of course then led to confusion, since why would that fork experience agony from the other fork being killed by the transmission process, after the data was scanned?  The correct interpretation is probably pretty obvious, but I was very tired when I was reading.

So yeah, I'd be fine with #1-3, but find #4 to be unacceptable.

I apologize for harping on this, but I'm still confused. How perfect does the copy need to be to allow your brainstate to propagate, and how do teleportation imperfections differ from day-to-day stimuli?

The human brain can suffer some pretty severe traumas without breaking the continuity of the "self", (though the trauma may dramatically change the character of the person).  Or, we might say that the continuity of the self is an illusion, produced by the pattern of information propagating forward in time.  This pattern changes with every stimulus.  As long as the pattern maintains some essential history of your world line, your "memory" and "thoughts", you are still "you".

The hypothetical teleporter here is assumed to scan and reconstruct the particles of your body (whether by using the very same particles or not) flawlessly.  This is why some are arguing that the transporter is safer than ordinary day-to-day life.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 16, 2015, 04:03:50 am
If the end result was not identical in all ways including thought and memory, it is not a real teleportation.  And if it is identical in all ways including thought and memory, then "You" are still alive, and arguably safer than any single other instance of your entire life.
I apologize for harping on this, but I'm still confused. How perfect does the copy need to be to allow your brainstate to propagate, and how do teleportation imperfections differ from day-to-day stimuli?

The human brain can suffer some pretty severe traumas without breaking the continuity of the "self", (though the trauma may dramatically change the character of the person).  Or, we might say that the continuity of the self is an illusion, produced by the pattern of information propagating forward in time.  This pattern changes with every stimulus.  As long as the pattern maintains some essential history of your world line, your "memory" and "thoughts", you are still "you".

The hypothetical teleporter here is assumed to scan and reconstruct the particles of your body (whether by using the very same particles or not) flawlessly.  This is why some are arguing that the transporter is safer than ordinary day-to-day life.

Okay, but what if the reconstruction isn't flawless? What if the copy is different from the original - but only by a single atom?

In this case (assuming that the flawless teleporter doesn't kill you), saying that the flawed teleporter kills you is even more absurd than saying that you die from moment to moment; the teleporter's imperfection is even more harmless than the effects of day-to-day stimuli. But now we reach a contradiction, because proceeding by induction, the teleporter can change every atom in your body without killing you, which is also absurd. The conclusion that our original assumption (the flawless teleporter doesn't kill you) was wrong seems inescapable.
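
The induction, spelled out as a loop, in case it helps (a deliberately silly sketch; "a body as a list of atoms" is just my cartoon):

Code:
original = ["C"] * 10      # a "body" of ten atoms
body = list(original)
for i in range(len(body)):
    body[i] = "Fe"         # a single-atom change: harmless, "still you", by assumption
print(all(a != b for a, b in zip(body, original)))  # True: no original atom remains

Each pass is supposed to be as harmless as day-to-day drift, yet the chain of passes replaces everything. That's the contradiction.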
Title: Re: The "hard problem of consciousness"
Post by: Meneldil on October 16, 2015, 06:36:58 am
"Is one grain a heap of sand?" is a question we should leave with the Ancient Greeks, i.e. dead.
Induction works by taking a binary predicate and a couple of ~true~ assumptions. This simplistic logic is how we do math, but it's useless for describing complex real-life phenomena. For example, I don't think that "this is me" in the sense of this discussion is a binary predicate, nor that "changing one atom doesn't change me" is a binary truth, and that's why I find it wrong to go from "I breathe in, I breathe out" to "I'm a horse", and especially to claim that it's an inevitable conclusion.

I agree it's unsettling that we know so little about consciousness (especially since we value it so much!), but is this not also to some extent true of any other part of the human body? What's the difference between consciousness and a liver?
Title: Re: The "hard problem of consciousness"
Post by: Meneldil on October 16, 2015, 06:38:31 am
double post, sorry
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 16, 2015, 07:31:04 am
Good point. My impulse is to respond, "me-ness is clearly defined, but heap-ness is not", but that would be begging the question.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 16, 2015, 09:30:46 am
By what sensible causal rule could a person's brain spontaneously become a horse's? You're trying to argue that a catastrophic failure is somehow equivalent.

That's an irrelevant point. From the get-go, we have been taking engineering liberties. Indulge me, if you will, on this "liberty" as well, for philosophically these difficulties are equivalent. What matters is that there is a delta, and you have accepted a delta. If deltas are acceptable, and no reason other than "it seems it must be so" is given for some arbitrary "red line" of similitude to the original, then it follows that there is no "red line" and we can posit the weirdest things imaginable.

Quote
The teleporter is safe because it allows your brain state to propagate forward by its ordinary causal rules. It does not change or distort the information present. It is wholly unlike the examples you're floating.

This is bad reasoning, and I see many other commenters making, sorry to say, the same mistake. The problem here is that you are presuming that Consciousness = Brain State, or C = "Information Present". But my point is precisely that you have failed to prove this. It is just a hypothesis, a guess, your metaphor of what might be going on. A guess made by someone who knows his neuroscience ****, no doubt, but this is still 2015 and I don't think anyone has yet sussed out what C is, and since you're just a mammal like me, I will infer that God hasn't spoken to you either about the "True Nature of The Universe".

Quote
You become some other you by the introduction of stimuli which are processed according to the brain's logic.

My brain paralyses at these magical uses of the word "You", wherein the "Me" Me "becomes" some "Other [Me]" by the introduction of etc. Clearly we are dealing with a confused semantic problem: a mess of Aristotelian uses of words like "identity" and "essence", Platonic ideas ("it's just the information that matters"), (anti-)Cartesian notions, and so on. IOW, the sentences are a complete mess; they might sound good, but ultimately they are meaningless.

Quote
Death is trivially defined: irrecoverable loss of information. If the brain state isn't lost and can keep firing itself forward on its own power, you're not dead.

This is what I mean by "edgelording": redefining words to mean something entirely different. When I lose a notebook, I never say it "died" or that the information within "died". That's absurd. Death is not something confinable to such definitions; it is related to "Life", which is more than information. I do see that this "Brains = Computers" metaphor is so ingrained in your mind that it is gruesomely hard to get you to see how limited in scope and meaning it is. It's like saying Beethoven's Ninth is "contained" or "bound" by the knowledge (or idea) that it's "all just soundwaves". Yeah, you won't get very far with that approach.


Luis, it's important to realize that the end product Battuta refers to isn't "the consciousness of a human being".  It's "You".  You yourself.  An arbitrary human being that undergoes the process and remains the same arbitrary human being, not another human being.  If the end result was not identical in all ways including thought and memory, it is not a real teleportation.  And if it is identical in all ways including thought and memory, then "You" are still alive, and arguably safer than any single other instance of your entire life.

The redefinition of the word "YOU" into an object that is interchangeable in algebraic terms, just like any other scientific object, might well be what is meant, but it's an incorrect take on the word, given the question posed. The question is directed at the SUBJECTIVE YOU, not this "OBJECTIVE YOU-ness" that we can scientifically determine to be just about the same as, or equal to, any other instance of itself.

If I were to use the latter, then yes, of course, the "Me" that would be alive after teleportation would be alive, and in "some way" I'd be alive through "Him". Everyone else would regard "him" as "me", and from their point of view we are one and the same. And yet, from my point of view, it might just as well happen that my life ends right there and some other Consciousness is suddenly born with my memories and continues "my life". From the point of view of the universe, nothing really changed. A guy named Luis was at spot 1, then he teleported his information to spot 2, and a guy named Luis appeared at that spot. Everyone else acts normally, as if it's the same Luis. For all purposes and forever, it is the same.

What we cannot ever possibly determine is whether this new person is the continuation of your own subjective experience, or whether your life ended at that point, period.

Now you can be just like zookeeper and say "That's irrelevant, everything else keeps working, who cares if I die if I'm substituted by a perfect replica?", to which I'll just open my eyes in terror. It's like watching people go willingly to gas chambers because they want their replicas to go to Tokyo faster.

Lastly, I just want to reiterate that I know some people here believe that Consciousness is "just a pattern". I could be facile here and merely ask "Oh yeah, your proof?" and wait the next hundred years for it. Instead, I'll just point out that irrespective of your beliefs, you should acknowledge that those are merely beliefs, that the metaphors you are using are most probably unable to capture the actual things that are going on in C, and that perhaps you shouldn't risk your life and your consciousness on a pre-22nd-century analysis of what it's all really about.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 16, 2015, 09:51:26 am
I can't follow you at all. I feel like my (phone, alas) response would just be a series of emptyquotes of past statements in the thread.

Your stance reminds me of arguments for theism or quantum karma - we don't know exactly how it works, ergo magic! The delta argument in particular I find confounding, since you seem to be attacking yourself. If the teleporter introduces less drift than day to day life, how is saying 'what if it introduced MORE drift? All drift is the same!' in any way logical or useful to your stance?

Calling death what it is is not 'edgelording'. Death is only death when it leads to irretrievable loss of the brainstate's ability to copy forward under its own power. That's the only sensible idea of death we have.

These are not just 'beliefs', they are the only coherent hypotheses given the evidence we have. The Subjective You is physical because there is nothing but the physical. Wherever the Object You arises, so does the Subject. No other hypothesis has any support at all.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 16, 2015, 09:55:24 am
Is anyone following the 'all delta is the same' argument and able to crumple it down for my greasy brain
Title: Re: The "hard problem of consciousness"
Post by: zookeeper on October 16, 2015, 10:17:27 am
I think Battuta might be a P-zombie. :eek:
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 16, 2015, 10:22:39 am
Is anyone following the 'all delta is the same' argument and able to crumple it down for my greasy brain

I understood Luis to be making one of the following arguments:

I think Luis is saying that, since we're always changing, information is never preserved. So it's no big deal if the transporter reassembles us differently.
Okay, but what if the reconstruction isn't flawless? What if the copy is different from the original - but only by a single atom?

In this case (assuming that the flawless teleporter doesn't kill you), saying that the flawed teleporter kills you is even more absurd than saying that you die from moment to moment; the teleporter's imperfection is even more harmless than the effects of day-to-day stimuli. But now we reach a contradiction, because proceeding by induction, the teleporter can change every atom in your body without killing you, which is also absurd. The conclusion that our original assumption (the flawless teleporter doesn't kill you) was wrong seems inescapable.

Side note. In these discussions, I often get the impression that certain participants are zimbos. It's nonsense (I hope), but there seems to be an impenetrable language barrier.

EDIT: Ninja'd.  ;)
Title: Re: The "hard problem of consciousness"
Post by: AtomicClucker on October 16, 2015, 11:43:48 am
Don't have much to say, but the discussion of what "consciousness" is devolves into a semantic mess of vagueness and of interpretations between mechanistic and deterministic responses. But I will say the vague nature of speech and its illogical function only makes the matter worse. To engage with consciousness is to delve into the vague, imprecise and illogical. These "vague" concepts upend logic, create paradoxes, and quickly crush logical attempts at rationalizing the vague. Ergo, it's a giant black hole.

To put it simply, mechanistic systems collapse when confronted with a vague, undefinable set of circumstances. YMMV on approaching consciousness, but it's important to keep in mind that dealing with discussions of the self means confronting that ugly elephant in the room we call the problem of meaning.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 16, 2015, 11:54:28 am
Is anyone following the 'all delta is the same' argument and able to crumple it down for my greasy brain

I understood Luis to be making one of the following arguments:

But these arguments are nonsense, and they have been answered pretty decisively. Not all changes are remotely equivalent! Your brain receiving stimuli from the optic nerve, encoding those stimuli using evolved pathways, parsing out factual information from the stimulus, activating semantic relationships, priming motor responses, moving concepts into working memory, and finally recording the image in long-term memory is a change that obeys the internal logic of the brain. The brain-state is feeding itself forward to the next Planck instant or whatever according to its own logic.

The brain suddenly being transmuted into a horse's brain obeys no such logic! The brainstate is not determining itself forward. Causal pathways are severed. Information is lost.

Remember, I'm the one arguing that the teleporter is strictly safer than day to day existence, and any argument which renders the teleporter unsafe also renders day to day existence unsafe! I still haven't seen (or maybe I have seen many times but am not recalling) any substantive refutation of this point. Just 'well subjectivity might work differently than the entire rest of the cosmos, for reasons we cannot begin to guess and which fall outside physicalism in some unanticipated way.'

The 'language barrier' from my perspective seems to be the reluctance to accept that subjectivity is simply a product of objective structure. If the object's there, so is the subject.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 16, 2015, 12:16:18 pm
I can't follow you at all. I feel like my (phone, alas) response would just be a series of emptyquotes of past statements in the thread.

Your stance reminds me of arguments for theism or quantum karma - we don't know exactly how it works, ergo magic! The delta argument in particular I find confounding, since you seem to be attacking yourself. If the teleporter introduces less drift than day to day life, how is saying 'what if it introduced MORE drift? All drift is the same!' in any way logical or useful to your stance?

Wrong on all counts. It's a language barrier between us. Just because you think you can explain something with a scientific-sounding metaphor doesn't mean you are right, nor does it mean that any skepticism towards the metaphors you are using is based upon magical thinking. IOW, you don't need to believe in magic in order to be skeptical of your ideas. IOW, there are more possibilities than either "Spirits!" or "Consciousness is just a pattern of information". I can't make it simpler than this, so you have to work this through on your end, I'm sorry.

Quote
Calling death what it is is not 'edgelording'. Death is only death when it leads to irretrievable loss of the brainstate's ability to copy forward under its own power. That's the only sensible idea of death we have.

Those sentences already embed a lot of models and beliefs that I find a tad overreaching, but nevertheless you are saying something different now. You equate a brain with a computer, and thus you analyse life and death through those lenses and words. The problem I have with this is that it gives the statements a veneer of engineering-like quality, but without the actual, reliable engineering effort underneath that might justify them. IOW, you're borrowing the respectability of other types of analysis and insight for a field where these things have yet to produce any reliable work regarding Consciousness.

TL;DR: to say that the brain is dead because it can't "copy" forward is as silly as saying that muscles are overheated because they need to vent some "steam". It's the kind of metaphor that is chronocentric to the fads of our age and does not reflect the true realities of our brains and consciousnesses.

Quote
These are not just 'beliefs', they are the only coherent hypotheses given the evidence we have. The Subjective You is physical because there is nothing but the physical. Wherever the Object You arises, so does the Subject. No other hypothesis has any support at all.

When someone does not know about a subject, the best course of action is to say "I don't know", rather than trying to come up with explanations and proclaim they "must be right" by fiat because no one else can really explain them. Of course the universe is physical, etc., but many things that seemingly follow from those simple premises turn into bizarre absurdities if you think philosophically about them.

For instance, imagine that you can copy yourself into a thousand Battutas. Do you believe that your own consciousness will be "transferred" to any one of those? Or will it remain inside of yourself? And if you are still "inside of yourself", the original, do you see yourself as "interchangeable" between all those Battutas, or will you still prefer your life over all of those other Battutas?

For instance, take this scenario: Imagine 9 Battutas are immediately copied, resulting in 10 Battutas. However, because of a strange sequence of events, only one Battuta will survive and you are to choose who will. This choice is not to be discussed between you and other Battutas. You know they are exactly like you, but you alone are to choose (and are free to do so) if *you* are to survive or any other Battuta is to survive. Once you decide, 9 Battutas are immediately vaporized through a process that takes exactly 2 Planck-o-seconds.

Now, according to your thesis, it does not matter *which* Battuta survives. So what will you do here? Notice how you feel about your decision. Beware of sensations of "generosity" or "fairness" or "altruisticness", for they are already subtly implying a "sacrifice". But there is no sacrifice involved. This decision shouldn't be difficult at all and should be totally random: it's like breathing after all and aren't you doing *that* every single second?


Your last comment shows you failed to understand my philosophical argument. You're very smart, so I take it it's my lack of expressive ability. I'll try to do better next time.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 16, 2015, 12:47:46 pm
Just 'well subjectivity might work differently than the entire rest of the cosmos, for reasons we cannot begin to guess and which fall outside physicalism in some unanticipated way.'

Consciousness is epistemologically primary. It's the only thing we can be certain of; "everything else" may only exist as a constituent of consciousness. This decisively sets consciousness apart from "everything else", at the deepest possible level.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 16, 2015, 01:01:35 pm
No invocation of computers or engineering required. All we need to do is to point to the constraints that circumscribe all knowledge of the universe: the brain is physical, so consciousness must be physical. We only have one viable, coherent model with predictive power, placing consciousness not at an epistemologically primary stage but far down at the end of the train. Our own subjectivity is explained by this model.

Luis' argument about namechecking 'hard' disciplines is flawed, because we don't need to namecheck them at all! We just have to treat the brain as a physical system.

Quote
IOW, there are more possibilities than either "Spirits!" or "Consciousness is just a pattern of information". I can't make it more simple than this, so you have to work this through in your end, I'm sorry.

No work required. What are the other possibilities?

Quote
TL;DR: to say that the brain is dead because it can't "copy" forward is as silly as saying that muscles are overheated because they need to vent some "steam". It's the kind of metaphor that is chronocentric to the fads of our age and does not reflect the true realities of our brains and consciousnesses.

What does this mean? In terms of what you're actually saying it seems to be 'you're wrong because you aren't right.'

Quote
For instance, imagine that you can copy yourself into a thousand Battutas. Do you believe that your own consciousness will be "transferred" to any one of those? Or will it remain inside of yourself? And if you are still "inside of yourself", the original, do you see yourself as "interchangeable" between all those Battutas, or will you still prefer your life over all of those other Battutas?

We dissected this at great length. Any brain scan/copy process produces valid causal forks. All forks will feel that they have remained 'inside themselves.' Subjectivity is copied alongside everything else. Pre-fork Battuta doesn't care about which fork lives, since they are all valid causal descendants. Once the fork has occurred, the forks are causally unrelated and only care about themselves.

Quote
For instance, take this scenario: Imagine 9 Battutas are immediately copied, resulting in 10 Battutas. However, because of a strange sequence of events, only one Battuta will survive and you are to choose who will. This choice is not to be discussed between you and other Battutas. You know they are exactly like you, but you alone are to choose (and are free to do so) if *you* are to survive or any other Battuta is to survive. Once you decide, 9 Battutas are immediately vaporized through a process that takes exactly 2 Planck-o-seconds.

Now, according to your thesis, it does not matter *which* Battuta survives. So what will you do here? Notice how you feel about your decision. Beware of sensations of "generosity" or "fairness" or "altruisticness", for they are already subtly implying a "sacrifice". But there is no sacrifice involved. This decision shouldn't be difficult at all and should be totally random: it's like breathing after all and aren't you doing *that* every single second?

This is trivial if you're rigorous about defining 'you.' It's one of the reasons I'm so confident in the simple materialist model, because it resolves situations like this.

Battuta before the copying process doesn't care which of the 10 causal descendants survive. All will be valid causal forks. Remember, there is no 'original' and no 'copy'. Pick at random, it doesn't matter: this will feel no different than going about day to day life.

After the copying process, all 10 Battutas will be diverging, and none of the others will ever be valid causal descendants of them. All would choose themselves to survive. But the 9 who are vaporized will have no more than 2 Planck-o-seconds to diverge, and I'm perfectly happy (lol, this reminds me a bit of arguments about where life begins in pregnancy) to say that no information will be lost.

Pre-fork Battuta lives no matter what. 9 post-fork Battutas die. 1 post-fork Battuta lives. Forking is a great way to continue living, but not a good way to avoid death.
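
The bookkeeping, as a toy sketch (same cartoon as before, a brainstate as a plain list; illustrative only, not a model of anything real):

Code:
import copy, random

pre_fork = ["everything pre-fork Battuta remembers"]
forks = [copy.deepcopy(pre_fork) for _ in range(10)]  # ten valid causal descendants

survivor = random.choice(forks)   # pre-fork Battuta is indifferent to the pick
# the other nine are vaporized with ~no time to diverge, so the survivor
# still carries the entire pre-fork state:
print(survivor == pre_fork)       # True: nothing pre-fork Battuta was is lost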
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 16, 2015, 01:02:35 pm
Whatever credentials of subjective me-ness exist, they must be physical. They will be copied by any physically faithful copying process.

Subjectivity can be forked.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 16, 2015, 01:35:41 pm
I will try to explain my Absurdio ad Equus case later. I think I'll have to be more formal in that one.

Now. You see, when you proclaim that every "Fork" of Consciousness is as valid as your own, you are necessarily objectifying yourself. You're saying that your own Subjective experience is meaningless, or at least irrelevant; what matters is that an equivalent one exists "anywhere" in the Universe. From an algebraic point of view, that is true. From a moral "consequentialist" point of view, that might even be a good thing. But the question was not posed in this "uncaring", "objective" manner. We might be "objects" from a scientific point of view, but that's not necessarily true in a complete sense; it's only true on an assumption basis: science only deals with objects, science can only deal with objects. To then proclaim we are "just objects like anything else" is not really a "scientific conclusion", but something that springs from science's very own design. And if you only have a hammer, then you will regard everything... etc.

My problem is compounded here by the fact that you're not just "treating the brain as a physical system", you are treating it as a computer, and consciousness as a software program. There are two ways to tackle this opinion.

First, the philosophical/metaphysical one. Basically, your viewpoint necessarily requires that we adopt a worldview wherein our consciousness is actually an illusion, that this "permanence of being" is a lie our brain tells us, and that the only thing that exists is a very sudden "Ourselves" in the "Very Present" at all times; and that if you die and something else sufficiently similar to you appears elsewhere, then that's fine, because that person will exist by "HimSelf" in the "Very Present" at all times. We are continuously dying and being born every second. Metaphorically speaking.

You need something like this in order to believe that it's ok to kill yourself and be "cloned" at the far end of the planet. Let's bear in mind that this looks like redefining the word "Death" into something so vulgar and present in everyday life that we come to allow ourselves to Die in the teleportation process (newspeak? just putting that out there). Now, regardless, we could accept this. You can believe that, but you cannot confuse this metaphysics with science. This is a definition of what human beings are and what consciousness is in absolutist terms, down to details that you cannot possibly know, as far as we can tell.

And that brings me to my second tackle. My problem here is that you are reducing the problem of consciousness to arbitrary conceptual building blocks that might have zero to do with how a comprehensive brain-science model, still to be made, will describe them. Not just "might", but almost "assuredly so". I say this, and you retort that it seems to be "you're wrong because you aren't right"; it's a lot deeper and simpler than that:

Science is conservative. This means that reality is inherently difficult to parse and all ideas we have about it are most assuredly wrong. Some wronger than others, but it's fairly certain that many paradigms are yet to be found, and many new "metaphors", much more productive, efficient and explanatory, are yet to be written into textbooks. But until that time comes, the immense number of possible metaphors and the ways we use them are wrong. Why do I say this? Because there is no engineering level of rigor in testing any of these criteria: how each lever works, what kind of influence patterns have, what it means to transfer a bit or a byte of consciousness, etc. This utter void means that all these metaphors are almost certainly wrong.

Now, we could say, "well, ok Luis, you're conservative, but this is the best I have, it's what I'll go with, OK?" Fine. But here's the problem: you're going well ahead with your ideas and drawing far-reaching conclusions without any known method of testing them. And not only that, you're so certain of them that you're even willing to die for them.

I find that over the top. You are indeed an atheist, but one filled with a lot of faith.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 16, 2015, 01:49:57 pm
We are all filled with faith that we won't erupt in fractal jets of flesh that bind us together into a screaming world-meat, because we have a model of the world that tells us this is cosmically unlikely.

So too with subjectivity. Wherever you have the object, you have the subject: all other explanations so far fall into the category of 'not even in contention.'

This is metaphysics only as far as it accepts physics as a complete explanation for the universe.

Your talk of existing only in the moment is correct, but you're 180 off on death: we do not die every moment, we live. Our brain state propagates forward in time. Death is the failure of this feed-forward and the irretrievable loss of information, which is not a computer term but a basic tenet of physics since Shannon.

As our ability to retrieve information improves, fewer and fewer states qualify as 'dead'.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 16, 2015, 03:35:35 pm
Very well, I think we are not progressing towards a consensus, but I think I'm at least learning what the differences between our viewpoints are.

Ok, let me make another scenario; let's see if this expresses my point better than all the others I made.

Imagine that you are in a room. There's a scanning machine and a person in there. The person tells you that he's going to scan your entire body. You let him. After doing so, he tells you that a new "You" has been born in a room identical to this one but on the other side of a huge wall, atomically identical to you. You find this acceptable, but still curious.

Then he informs you that you are about to be killed in a non-painful way. You seem confused, so he explains that this is just a manner of getting you through this thick wall, which was what you wanted in the first place.

"Wait a minute", you say, "but I don't see how this is making me go through this Wall. The only thing that is happening here is that a copy of me was made beyond the wall, but I'm still here, and apparently, I'm not going anywhere, I'm merely about to die!"

"No, you got it all wrong", he replies, "because what makes you "You" is just patterns. And we have transferred these patterns to the other side of the Wall. And since "you" = "patterns", then we have sucessfully transferred you to the other side of the wall. To complete the contract however, we must kill this instance of "You", for then, you see, mathematically speaking we would have had only kept half of our compromise".

"Wait, what? No, you don't have to k..." ZAP.


This, basically, is what you find acceptable in teleportation.
Title: Re: The "hard problem of consciousness"
Post by: watsisname on October 16, 2015, 03:51:19 pm
This is not an acceptable fork, because there is divergence between the forks after the scan but before one is terminated.  One fork is given information that the other is not.  This is not equivalent to scenarios 1-3 given earlier.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 16, 2015, 04:26:32 pm
Yep, watsisname is correct. You need to examine this scenario rigorously, from all the 'yous' involved:

From the pre-teleport person's perspective, they only know that their body's going to be scanned. If they knew the actual terms of the deal, they would say 'wait, wait, if we do this, one of my causal descendants is going to die! They'll diverge and then terminate irretrievably! Sure, the other fork will survive, but I don't want my child subjectivity to experience that!'

From Fork A's perspective, on the far side of the wall, they've suddenly jumped into an identical room but without the presence of the scan operator. Weird!

From Fork B's perspective, they have been given a body scan, and now suddenly they're going to be murdered! They are causally divergent from Fork A, and their brainstate is about to be eradicated. It will not feed forward through ordinary causality or through a teleporter. It's just done, gone, leaving.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 16, 2015, 05:18:40 pm
Just 'well subjectivity might work differently than the entire rest of the cosmos, for reasons we cannot begin to guess and which fall outside physicalism in some unanticipated way.'

Consciousness is epistemologically primary. It's the only thing we can be certain of; "everything else" may only exist as a constituent of consciousness. This decisively sets consciousness apart from "everything else", at the deepest possible level.
No invocation of computers or engineering required. All we need to do is to point to the constraints that circumscribe all knowledge of the universe: the brain is physical, so consciousness must be physical. We only have one viable, coherent model with predictive power, placing consciousness not at an epistemologically primary stage but far down at the end of the train. Our own subjectivity is explained by this model.

None of this refutes the fact that consciousness is unique. If your objection is that dualist models of consciousness have no additional predictive power, then I agree - but this objection also applies to all of metaphysics and philosophy. There's nothing wrong with restricting oneself to topics of practical value, but it's the scientist's mindset, not the philosopher's mindset.
Title: Re: The "hard problem of consciousness"
Post by: Meneldil on October 17, 2015, 05:05:13 am
Some materialists believe that with science, there is no more need for philosophy. I believe that they are very naive, and I don't see anyone expressing that opinion here.
What Battuta is saying for example is certainly half scientific, half philosophical: scientific in the description of consciousness, and philosophical in its valuation.


Also, @Luis, if you don't mind, I'll restate my previous question: you seem to be fine with the entire human body being perfectly copyable - brain included - but consciousness is the one thing you're skeptical about. I think this exceptionalism is what people aren't understanding here.
Title: Re: The "hard problem of consciousness"
Post by: Grizzly on October 17, 2015, 07:35:34 am
Quote
Some materialists believe that with science, there is no more need for philosophy. I believe that they are very naive, and I don't see anyone expressing that opinion here.

Considering that science itself derives from philosophy (it was called natural philosophy in the past for good reason), this would be a hard thing to accomplish anyhow.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 17, 2015, 09:55:41 am
I am indeed examining all of this very rigorously, "from all the yous involved", but I'm about to go out, so I will develop the idea later. I'll just leave this hint if you want to guess where I'm going with it: a charge of murder does not require you to prove that the victim was aware they would die, or that they suffered in any way, or that they had "new memories". Killing people while they sleep is still murder, for example. But I'll be more specific and, as you correctly put it, "very rigorous" later on.

@Meneldil, nowhere in anything I wrote did I say that consciousness is not copiable. That idea is completely orthogonal to my concerns.

There is a joke about how consciousness is indeed something different from other "body parts", although it doesn't clarify the discussion I was having one bit (it's still funny though): we are all perfectly fine with having several of our organs transplanted and substituted with new ones. I will guess that you would mind having a brain transplant.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 17, 2015, 01:45:58 pm
Ah, I think I see your point. Suppose Bob1 walks into the transporter, and as he's being scanned, Bob2 is constructed elsewhere.

Now consider the moment immediately after the scan. Bob1 and Bob2 are distinct individuals who happen to be physically identical. If Bob1 had the time to think about his situation, then (as Battuta says) he would want to live; the existence of Bob2 would be cold comfort. Thus, giving Bob1 the opportunity to fear death would be unethical.

But annihilating Bob1 would also be unethical, since it would amount to killing Bob1 without his consent, regardless of whether or not he felt fear. So the "annihilation transporter" is unethical.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 17, 2015, 01:47:52 pm
The annihilation transporter is indistinguishable from everyday life.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 17, 2015, 01:59:53 pm
That may be true when you compare the initial stage and the final stage, but the transporter murders someone at an intermediate stage.

Would it be ethical to blink someone into existence, then blink them out of existence? In the end, nothing changes, but I would say that creating life and then immediately terminating it is still unethical.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 17, 2015, 02:13:13 pm
The transporter is indistinguishable from everyday life at all stages. We are constantly spawning 'intermediate stages' and exterminating them an instant later.

Death is loss of information. If no information is lost, no death.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 17, 2015, 02:25:06 pm
"Loss of information" seems a very weak and vague definition of death. Information is lost all the time. Yesterday, I took apart my little brother's Lego construction.

... I feel like I'm grappling with an alien mindset. (And I mean that respectfully.)
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 17, 2015, 02:36:18 pm
The Lego construct died: it cannot feed its state forward. What other sensible definition of death exists? You're trying to apply a human word to a world that doesn't recognize objects, only particles and forces.

Here, maybe this will help clarify. Would you volunteer for a machine that could roll your whole brain back one second? Would you be afraid of it? Would it kill you?
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 17, 2015, 02:45:11 pm
OK, saying that the Lego construct died is reasonable. But in ethics, you have to apply human concepts at some point. Do you just take "human lives are worth more than Lego lives" as an axiom?

I wouldn't volunteer for any machine that changes my brain. From a philosophical standpoint, however, I doubt it would kill me.
Title: Re: The "hard problem of consciousness"
Post by: Grizzly on October 17, 2015, 03:05:12 pm
Lemme see if I've got a grip on this.

You exist in 4 dimensions: Time, X, Y, Z.

Simply by breathing or by moving around you move along in those coördinates. You can never have two instances of yourself in the same spot, but thanks to time moving forward such collisions don't happen - the instance of you that was sitting in the chair in dimension T-1 does not exist in dimension T - otherwise you'd be very uncomfortable right now. You are constantly being moved forward without perceiving it as such, previous instances of you disappearing into the void to ensure that there is room for you. An instance of you that moves to the toilet and back again will not encounter itself sitting in the chair, because that instance of you is from T-100 and thus no longer exists in your time dimension. Since humans cannot travel along time except at a rate between 1 and 0 (depending on how closely you approach the speed of light), the previous and future instances of you effectively cease to exist (although it's fairer to say that they never existed in the first place: you simply copied yourself into a new area).

When you look at yourself from a (T, X, Y, Z) perspective, there is no difference between you walking around (T+1, X+1, Y+1, Z) and you being teleported (T+1, X+1000, Y+1000, Z+10000). Therefore, Occam's razor says that everything else is also the same, unless there is evidence to the contrary.
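
If it helps, here's a toy sketch of that idea (Python, purely illustrative; the names and the "pattern" field are made up for this post, not anyone's actual model of a person):

    from dataclasses import dataclass, replace

    # A "person" at an instant is just a pattern plus coordinates.
    @dataclass(frozen=True)
    class Instance:
        pattern: str  # stands in for the full physical configuration
        t: int
        x: int
        y: int
        z: int

    def step(person, dx=0, dy=0, dz=0):
        # Every tick, the old instance is gone; a successor exists at
        # new coordinates, carrying the same pattern forward.
        return replace(person, t=person.t + 1,
                       x=person.x + dx, y=person.y + dy, z=person.z + dz)

    me = Instance("my-brain-state", t=0, x=0, y=0, z=0)
    walked = step(me, dx=1, dy=1)                  # (T+1, X+1, Y+1, Z)
    beamed = step(me, dx=1000, dy=1000, dz=10000)  # the teleporter

    # In both cases the pattern carried forward is identical;
    # only the size of the coordinate jump differs.
    assert walked.pattern == beamed.pattern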
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 17, 2015, 03:13:27 pm
OK, saying that the Lego construct died is reasonable. But in ethics, you have to apply human concepts at some point. Do you just take "human lives are worth more than Lego lives" as an axiom?

I wouldn't volunteer for any machine that changes my brain. From a philosophical standpoint, however, I doubt it would kill me.

Yet you allow your brain to change every day. You go to sleep or get anesthetized and trust that the matter of your brain will recreate your subjectivity. What's the difference?

If you got amnesia and lost a day, would you die and be replaced by a clone?

If your brain is injured and the lost areas are rebuilt exactly as they'd been, do you die?

Joshua: I think you are basically getting at the fact that there is no nuclear, single, immutable 'I'. We just slap a label on a loosely coherent and causally bound system and say it's us until it loses the ability to copy itself forward on its own power.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 17, 2015, 03:50:52 pm
Yet you allow your brain to change every day. You go to sleep or get anesthetized and trust that the matter of your brain will recreate your subjectivity. What's the difference?

As Luis said, one difference is that I have no choice about day-to-day living. I do have a choice about using transporters or brain-altering machines. Not knowing exactly how consciousness is preserved or destroyed, I wouldn't touch them with a ten-foot pole.

For example, consciousness may be a product of continuity through time and space. This matches my intuition that "I" am not merely the configuration of my atoms at this particular moment, but the accumulation of all my previous configurations. Teleportation or brain reconstruction would disrupt the continuity and instantiate a new consciousness.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 17, 2015, 04:25:59 pm
That argument was debunked in the last teleportation thread. Continuity is basically a red herring - once you look at it you realize you're actually saying 'what matters is that my past mindstates influence my current mindstate,' which is exactly what the teleporter preserves.

Remember, in physicalism, saying 'I am the sum of my past configurations' is exactly the same as saying 'there is a causal connection between my past brainstates and my current one.' All that matters is Shannon information.

The 'choice' argument is flimsy. You think day to day life is as dangerous as teleporters, you just choose not to use the teleporter because you're helplessly resigned to constantly undergoing the same process?

Let me restate how important it is to get past the continuity fallacy. Unless you are a dualist, the past has no influence on the present except for the information transmitted forward by causal rules. That info is all stored in the brain at any given moment, in the physical meat.
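
To make that concrete, here's a minimal sketch (Python, purely illustrative; it assumes nothing beyond deterministic state evolution). The future depends only on the current state, not on the path that produced it:

    import hashlib

    def next_state(state):
        # Stand-in for physics: the successor is a deterministic
        # function of the current state alone (a Markov chain).
        return hashlib.sha256(state).digest()

    # Two very different histories that arrive at the same brainstate:
    history_a = [b"childhood", b"walked around", b"current-brain-state"]
    history_b = [b"assembled by teleporter", b"current-brain-state"]

    # The futures are identical, because only the last state matters.
    assert next_state(history_a[-1]) == next_state(history_b[-1])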
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 17, 2015, 04:27:32 pm
If I built an exact replica of your brain, I would duplicate your subjectivity too. Wherever the object, the subject.
Title: Re: The "hard problem of consciousness"
Post by: watsisname on October 17, 2015, 05:34:26 pm
I am reminded of an interesting thing.

As some of you know, I practice lucid dreaming, or being conscious and actively making decisions while within a dream.
Well, I was lucidly dreaming one night, some years ago, having a super fun time levitating and flying around.  Eventually the dream faded, I opened my eyes, and climbed out of bed while reflecting on the awesomeness of it all and ZAP!~





I found myself standing there, dumbstruck, unable to think of what I was doing or what had just happened.  It was as if this moment was the first moment of my existence, the whole thread of thought I was having had just ended, literally with a small electric-like shock felt in the head.

But I have memory.  I could recall those previous events, but they came as if from a distant past, not just mere seconds ago.  They were... fuzzy... and many of the fine details were lost, irretrievable.

This is the natural process of sleep.  It just happened that this time it (some part of it) happened while I was awake.  Data was massaged, some of it stored, some of it discarded, a brain state terminated and replaced with a fresh one ready to tackle the new day.


The me who is typing this is only a clone.  I died years ago.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 17, 2015, 07:07:12 pm
The 'choice' argument is flimsy. You think day to day life is as dangerous as teleporters, you just choose not to use the teleporter because you're helplessly resigned to constantly undergoing the same process?

Okay, good point. I need to argue that day-to-day life may not be as dangerous as teleportation.

Remember, in physicalism, saying 'I am the sum of my past configurations' is exactly the same as saying 'there is a causal connection between my past brainstates and my current one.'
Unless you are a dualist, the past has no influence on the present except for the information transmitted forward by causal rules.

(Emphasis mine.) This is the crux of the problem. If we assume monism/physicalism, I think your claims follow straightforwardly.

You make the scientific observation that physicalism is all we need to make predictions. I agree. Nevertheless, physicalism has no handle on consciousness, for fundamental reasons.
----------
The following statement is the starting point for all inquiry: "I exist, and I am conscious." This is the Ultimate Axiom. Call it the first level of knowledge.
The second level is the statement: "I exist in an external world that follows certain rules." Unlike the first statement, this one may be false, but the only sane option is to assume it.
The third level, built on the second, contains models of the external world. This is the objective level, and the domain of science.

The third level is the source of all predictions, while much of philosophy concerns the first and second. ("Why is there something rather than nothing?") The crucial point is that science is confined to the third level. It assumes the second level, which is based on the first. Using the third level to conclude anything about the other two is a logical error, akin to circular reasoning.

Now, consciousness has third-level correlates ("self-awareness", "metacognition", "brain state", etc.) that are theoretically explicable with third-level constructions and reasoning. Chalmers refers to these as "easy" problems. The "hard" problem (why there should be a strong connection between the first and third levels) is in a completely different category.
----------
I apologize for the pedantry, but I hope my argument is clear now.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 17, 2015, 11:55:56 pm
Only one set of second and third-level models produces a useful explanation for anything - an internally consistent model which makes predictions we can test. This is physicalism. It also neatly predicts the mechanisms underpinning consciousness.

No other set of assumptions produces any useful predictions at all.

We have no reason to consider any model except monist physicalism. No other model has ever produced even the slightest utility.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 18, 2015, 09:30:57 am
If your objection is that dualist models of consciousness have no additional predictive power, then I agree - but this objection also applies to all of metaphysics and philosophy. There's nothing wrong with restricting oneself to topics of practical value, but it's the scientist's mindset, not the philosopher's mindset.
You make the scientific observation that physicalism is all we need to make predictions. I agree. Nevertheless, physicalism has no handle on consciousness, for fundamental reasons.
The third level is the source of all predictions, while much of philosophy concerns the first and second.

Again, you're arguing from a practical/utilitarian/predictive standpoint, which ignores virtually every branch of philosophy and hence has no bearing on philosophical issues.

This is physicalism. It also neatly predicts the mechanisms underpinning consciousness.

Physicalism can only explain third-level correlates of consciousness. Using the third level to conclude anything about the other two is a logical error.
Title: Re: The "hard problem of consciousness"
Post by: Scotty on October 18, 2015, 10:07:03 am
Using acceleration to conclude anything about velocity or position is a logical error (and obviously with a few assumed or known things, much like this thread!)?  I'm on a phone, so I trust the parallel is apparent.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 18, 2015, 10:18:01 am
Using acceleration to conclude anything about velocity or position is a logical error (and obviously with a few assumed or known things, much like this thread!)?  I'm on a phone, so I trust the parallel is apparent.

You cannot find an analogous situation in physics, because the third level of knowledge is self-contained. Velocity and acceleration don't exist in a vacuum; they stem from the same set of assumptions.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 18, 2015, 10:43:44 am
The physicalist model predicts subjective first-person experience: qualia arise in the brain as a consequence of the neural correlates of consciousness, whatever they might be. We don't know yet, but we do know with good confidence the boundaries inside which the solution will be found, because all other solutions to all other solved problems obey the same confinement. We can alter consciousness by altering the neural correlates of consciousness. They are, as far as we can tell, identical.

Philosophy once explained the behavior of moving objects and the causes of human behavior. As science has expanded, philosophy has yielded this territory. The same process is now underway with the mind.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 18, 2015, 10:51:21 am
Consciousness appears to supervene on physical properties. This is precisely the hard problem: why there should be a strong connection between the first and third levels.

Every phenomenon that science can explain is a third-level phenomenon, including human behavior and motion. Consciousness is in a completely different category, because it is the one brute fact.
Title: Re: The "hard problem of consciousness"
Post by: Scotty on October 18, 2015, 10:54:55 am
So the objection becomes "It is because it is and it isn't because it isn't"?

There is no point to discussing philosophy that holds as a core tenet that it may not be defined.  It is useless, both to us and to the world.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 18, 2015, 11:01:39 am
So the objection becomes "It is because it is and it isn't because it isn't"?

There is no point to discussing philosophy that holds as a core tenet that it may not be defined.  It is useless, both to us and to the world.

I don't understand your first sentence, but impracticality comes with the territory. If we only valued practical things, we wouldn't have philosophers.

From a philosophical standpoint, consciousness is extremely important.
Title: Re: The "hard problem of consciousness"
Post by: Mars on October 18, 2015, 11:02:45 am
GhylTarvoke, can you prove you're conscious? How are you even defining the word? Saying that nothing in life is provable beyond "I exist" doesn't explain anything. If we're ever able to fully model consciousness, it will still be totally unexplained by your argument, because it's "first level."

Why should our own minds be any more certain than anything else? "We" as people may not even exist in the way we are inclined to think that we do.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 18, 2015, 11:46:44 am
We can build models of consciousness, predict how consciousness would change if we alter the brain, and test those predictions. Consciousness is nothing special.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 18, 2015, 12:15:46 pm
Good questions. I haven't been clear on these points.

GhylTarvoke, can you prove you're conscious?

I can't prove it to you. I don't need to prove it to myself, because it's a brute fact, i.e. true by default - just as my own existence is true by default.

Saying that nothing in life is provable beyond "I exist" doesn't explain anything.

The "levels of knowledge" aren't an argument for solipsism (that way lies madness), but a demonstration that there's an unbridgeable gulf between physicalism and consciousness.

Why should our own minds be any more certain than anything else? "We" as people may not even exist in the way we are inclined to think that we do.

My memories may be planted. I may be a digital simulation or a brain in a vat. But I cannot be mistaken in the belief that I am conscious.

If we're ever able to fully model consciousness, it will still be totally unexplained by your argument, because it's "first level."

Exactly. Science may eventually explain phenomena like self-awareness and metacognition, but scientific investigation of consciousness is doomed. Physicalists are going to wait in vain for a solution. The only possible solutions are philosophical, e.g. Chalmers' "node model" in the OP.

How are you even defining the word?

This is like the question, "What is existence?" The concept is intuitive, defying definition. But it's still important, and we still talk about it.

The OP has some quasi-definitions; Nagel's "what-it-is-like-to-be" is probably the most popular. Here's another one: your consciousness is one of the two things you can be sure of, with your existence being the other. You may claim that your existence is the only brute fact, or you may even claim that there are no brute facts. If so, we have reached an impasse.

We can build models of consciousness, predict how consciousness would change if we alter the brain, and test those predictions. Consciousness is nothing special.

If by "consciousness" you mean a third-level phenomenon like "self-report" or "self-awareness", then I agree.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 18, 2015, 12:27:54 pm
If being a brain in a vat or a digital simulation has no consequences, in what way is it real? Something that does nothing and cannot be known doesn't exist. It's causally decoupled from us.

Consciousness is already 'solved,' in that we know where it is and what makes it happen. We can alter our own consciousness in the first person using 'third order' effects, and we do it every day. Philosophical examination of consciousness is doomed: it depends on the invention of a problem where in fact there is only a clear and inevitable identity.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 18, 2015, 12:39:47 pm
Ironically, this also seems like the final defeat of the 'teleporters are dangerous' argument. Even if you accept that consciousness is somehow an exceptional fact, the only absolutely certain and incontrovertible fact, you must now concede that the teleporter is safe: barring dualism, you know that whoever comes out the other side has exactly the same first-person capacity to say 'I am conscious' and 'I exist'. The alternative is postulating that these capabilities somehow arise from nonphysical fantasy.
Title: Re: The "hard problem of consciousness"
Post by: Mars on October 18, 2015, 12:46:16 pm
It's really easy to say something is unsolvable to physical science when it has no form, definition or requirements. It just "is true."

"I can't prove it to you. I don't need to prove it to myself, because it's a brute fact, i.e. true by default - just as my own existence is true by default."

There's no reason for those things to be true by default.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 18, 2015, 01:34:12 pm
If being a brain in a vat or a digital simulation has no consequences, in what way is it real? Something that does nothing and cannot be known doesn't exist. It's causally decoupled from us.

Consciousness appears to have no causal effect, but saying "consciousness does not exist" is as absurd as saying "I do not exist". Consciousness is known.

Consciousness is already 'solved,' in that we know where it is and what makes it happen. We can alter our own consciousness in the first person using 'third order' effects, and we do it every day. Philosophical examination of consciousness is doomed: it depends on the invention of a problem where in fact there is only a clear and inevitable identity.

That there's a strong connection between the first and third levels is not in dispute (though verifying this requires either faith or firsthand experience). The hard problem is why there's a strong connection, and this is the question that physicalism has no handle on, whereas philosophy does.

It's really easy to say something is unsolvable to physical science when it has no form, definition or requirements. It just "is true."

But it does have form and requirements. It appears to be correlated with the brain.

"I can't prove it to you. I don't need to prove it to myself, because it's a brute fact, i.e. true by default - just as my own existence is true by default."

There's no reason for those things to be true by default.

If you claim that there are no brute facts, we have reached an impasse. That said, I find it hard to understand how you don't take your own existence as given.

barring dualism
This is the crux of the problem. If we assume monism/physicalism, I think your claims follow straightforwardly.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 18, 2015, 02:05:43 pm
Dualism is fantasy. There are no grounds for conversation if you're a dualist.
Title: Re: The "hard problem of consciousness"
Post by: zookeeper on October 18, 2015, 02:08:17 pm
It's really easy to say something is unsolvable to physical science when it has no form, definition or requirements. It just "is true."

"I can't prove it to you. I don't need to prove it to myself, because it's a brute fact, i.e. true by default - just as my own existence is true by default."

There's no reason for those things to be true by default.

Well, it's also easy to pretend to be a P-zombie and claim to not understand what the other person is referring to, simply because it cannot be defined or demonstrated. Which sure seems like what's been going on here (among other things) for a few pages. It's one thing to argue about qualia, but what some people here seem to do is deny its existence (in the "why is the world perceived through this brain and not some other" sense) just because they can.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 18, 2015, 02:17:09 pm
I don't remotely deny the experience of qualia, I just think they're trivially explicable and clearly identified with physical processes - we turn them off every night!
Title: Re: The "hard problem of consciousness"
Post by: Mars on October 18, 2015, 03:50:25 pm
Well, it's also easy to pretend to be a P-zombie and claim to not understand what the other person is referring to, simply because it cannot be defined or demonstrated. Which sure seems like what's been going on here (among other things) for a few pages. It's one thing to argue about qualia, but what some people here seem to do is deny its existence (in the "why is the world perceived through this brain and not some other" sense) just because they can.

I am not being purposefully obtuse. I just don't know why one would take their own existence as ultimately more provable than anything else; it doesn't make sense to hold the feeling of consciousness above any number of other observable things and put it in a realm of mystical whimsy. Yes, the sense of being "me" exists, and I believe it to be real; I just don't think it's any more real than, say, a car.

I could develop schizophrenia, and I would still have a sense of being "me", and I would imagine other people - but the sense of being "me" would be incorrect because, at least in the late stages of the disease, I would act like a completely different person.
Title: Re: The "hard problem of consciousness"
Post by: zookeeper on October 18, 2015, 04:14:40 pm
I don't remotely deny the experience of qualia, I just think they're trivially explicable and clearly identified with physical processes - we turn them off every night!

Yes, but you are ignoring the clarification I gave. Objections to persistence of consciousness (or whatever you want to call it) in teleportation seem to ultimately be about that, which is something that no amount of physicalism can dispel. Trying to dispel it with physicalism can only mean that you're either missing what's being referred to, or that you're doing so to make a point. The first is kind of hard to believe, and the second would clearly be counterproductive.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 18, 2015, 04:58:13 pm
Qualia are physical. That seems to me to be pretty straight to the point.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 18, 2015, 05:25:32 pm
Dualism is fantasy. There are no grounds for conversation if you're a dualist.

We've been over this already.

Chalmers also denies that there is anything mystical about positing consciousness as fundamental. He compares it to positing gravity as fundamental.

My entire argument has been an attack on physicalism. Strict adherence to physicalism seems to ignore the obvious.

I confessed to being a dualist in the OP, and others in the thread are either dualists or open to the possibility of dualism. We've still been able to have a discussion.

I could develop schizophrenia, and I would still have a sense of being "me" and I would imagine other people - but the sense of being "me" would be incorrect, because, at least in late stages of the disease, I would act like a completely different person.

You may not be who you think you are, but if you believe that you exist in some form, then you are correct.

@zookeeper: I like your presentation. I feel like I've been punching a wall, and Battuta probably does, as well.
Title: Re: The "hard problem of consciousness"
Post by: Scotty on October 18, 2015, 05:31:47 pm
Dualism is fantasy. There are no grounds for conversation if you're a dualist.

We've been over this already.

The part where we "went over this" consisted of neither the General nor myself being particularly impressed with your interpretation, which seems to rely on its own ignorance to function.

Please note, that's not a reflection on you, nor intended to be a negative reflection on your personal interpretation of the situation.  On one hand we have physicalism, which we can discuss and make predictions about.  On the other, we have dualism, which has no room for predictive or experimental processes.  One of these things is useful to this discussion.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 18, 2015, 05:40:29 pm
Right. Dualism seems as unnecessary as positing that consciousness is piped out to another dimension, illustrated on paper, and piped back. What does that add? What reason do we have to begin to believe it? Nothing whatsoever points to it. It's an idea for the sake of having an idea. It is based on nothing and explains nothing.

It's cruft. In a complete account of physics, gravity is a mathematically inevitable result of supersymmetry - necessary and sufficient. No dualist account of consciousness is necessary for anything.

Dualism is a rear-guard action trying to rationalize phlogistons that could protect the mind from being included in the laws that run everything else.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 18, 2015, 05:54:51 pm
The part where we "went over this" consisted of neither the General nor myself being particularly impressed with your interpretation, which seems to rely on its own ignorance to function.

My objection is to the labeling of dualism as fantasy. Chalmers doesn't believe in spirits or wizards, and neither do I. If by "fantasy" Battuta means "not physicalism", then he is stating a tautology.

Please note, that's not a reflection on you, nor intended to be a negative reflection on your personal interpretation of the situation.  On one hand we have physicalism, which we can discuss and make predictions about.  On the other, we have dualism, which has no room for predictive or experimental processes.  One of these things is useful to this discussion.
Right. Dualism seems as unnecessary as positing that consciousness is piped out to another dimension, illustrated on paper, and piped back. What does that add?

We're back to the "it's not practical" argument.

Again, you're arguing from a practical/utilitarian/predictive standpoint, which ignores virtually every branch of philosophy and hence has no bearing on philosophical issues.

That said, one benefit of accepting dualism is that we won't be waiting endlessly for neuroscience to give us an answer.

What reason do we have to begin to believe it? Nothing whatsoever points to it. It's an idea for the sake of having an idea. It is based on nothing and explains nothing.

See the levels of knowledge. The statement "nothing whatsoever points to it" is amusing, because everything you've ever experienced points to it. You're sticking your head in the sand.
Title: Re: The "hard problem of consciousness"
Post by: zookeeper on October 18, 2015, 06:00:44 pm
Qualia are physical. That seems to me to be pretty straight to the point.

I don't see what your point is. That's not what I was talking about, which is subjective experience. There's an unbridgeable (?) gap between subjective experience and understanding of the mechanics of subjective experience, and you seem to try to bridge that gap by just throwing lots and lots of the latter at it.

My point is only that physicalism cannot explain why the world is perceived through one particular brain instead of some other. And everyone here knows that everyone else knows that the world is perceived through one particular brain. No matter how perfectly one understands the physical processes of how consciousness, qualia and whatnot work (which I have no reason to assume are not physical processes), it doesn't dispel that fundamental "problem" with subjective experience.

Maybe you're just insisting on being silent on what can't be spoken of, or maybe you're actually a P-zombie; I can't tell, because you seem to specifically avoid even acknowledging that you recognize what's being referred to.


EDIT: I'm not a dualist, although in this context I don't think I'm talking about anything for which such distinctions matter anyway.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 18, 2015, 06:10:10 pm
I don't even begin to see the problem you think is unspoken. Physicalism tells us why we have first-person experiences, why we all experience the world in first person as a particular brain: because each brain's physical structure computes qualia. It's the simplest thing in the world. We are all ourselves.

Ghyl, you protest that we ignore philosophy and yet philosophy has nothing to offer. You cannot explain why physicalism is incomplete or why dualism would even begin to be necessary. Philosophy seems as tangential as the history of flags to this conversation.

Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 18, 2015, 06:25:01 pm
Physicalism is complete in the sense that it is all we need to make predictions. Physicalism is incomplete in the sense that it cannot address the first-level issues of existence and consciousness.

If you think this discussion is completely unrelated to philosophy, I must conclude that by "consciousness" you have always meant a third-level concept like "metacognition" - which explains a lot.
Title: Re: The "hard problem of consciousness"
Post by: Scotty on October 18, 2015, 06:53:12 pm
That said, one benefit of accepting dualism is that we won't be waiting endlessly for neuroscience to give us an answer.

Well, you're technically correct, because dualism is never going to give you an answer anyway and you'll have simply given up waiting.

You're arguing for the existence of a speck of dust a million miles away.  You may be correct that it exists.  You may even be correct to its location and velocity (even though all signs point to no).  But that doesn't mean anything at all because it is a speck of dust a million miles away in a conversation that's about us, right here, and right now.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 18, 2015, 07:09:37 pm
Dualist models of consciousness have already given us partial answers, e.g. Russell's/Chalmers' node model.

The existence of consciousness is a brute fact, and requires no argument. Saying that it's a million miles away is very strange - it's the closest, most intimate phenomenon imaginable.
Title: Re: The "hard problem of consciousness"
Post by: Scotty on October 18, 2015, 07:26:40 pm
The existence of consciousness, yes.  But the mechanism for consciousness is most certainly not as abstract as you're trying to say it is.  Human brains generate consciousness in the patterns of atoms and molecules that make them up.  This consciousness is mutable; it can be changed by experience, it can be surgically altered (however barbaric some of those surgeries are), it is wholly rationally explainable as a result of a biological process.
Title: Re: The "hard problem of consciousness"
Post by: Mongoose on October 18, 2015, 07:41:43 pm
This is certainly a fascinating discussion on all sides, although I have to confess that my mind (however it's defined) tends to switch into an immensely-practical mode when confronted with such highfalutin concepts.  (As the joke goes, the best response to an existential crisis is usually someone unloading a squirt gun at you.)  In light of that, something about the original teleporter example is still bugging me.  Battuta has made the claim that this hypothetical device would be demonstrably safer than the basic act of falling asleep and waking up every day, and from a purely-mechanical standpoint he's probably right (though I've seen waaaay too many Star Trek transporter accident episodes to give it an automatic pass).  There's an issue that hasn't yet been raised though: practical experience.  I fall asleep every night, and I wake up the next morning with a high degree of confidence that I'm the same me who went to sleep the night before.  One can argue that I don't have objective evidence of such, and that's probably true, but I do remember what I did the day before, and the day before that, and so on, and for general purposes that's good enough for me to get out of bed and on with my day.  Sure, I know that something can biologically break and throw the whole process off, but so far (knock on wood) it hasn't, and I feel pretty good about that track record.

But being the first person to stand on that transporter pad and put our model of consciousness to its first practical test?  Knowing that if our physical understanding of qualia turns out to be flawed, it'll literally be the last thing that I (as myself) ever do?  There's not enough money on the planet to make me sign up.  I'm sticking to shuttlecraft.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 18, 2015, 07:44:17 pm
The existence of consciousness, yes.  But the mechanism for consciousness is most certainly not as abstract as you're trying to say it is.  Human brains generate consciousness in the patterns of atoms and molecules that make them up.  This consciousness is mutable; it can be changed by experience, it can be surgically altered (however barbaric some of those surgeries are), it is wholly rationally explainable as a result of a biological process.

Exactly! We have a handle on the mechanisms for consciousness. We're fairly certain that experiential properties supervene on physical properties. What we don't have is an explanation of why they supervene on physical properties.

Maybe this analogy will help. There's a button and a light. Every time you press the button, the light turns on. Now, you could say: "pressing this button causes this light to turn on", and you'd probably be right, but that's barely even an explanation. The connection between the button and the light may as well be magic. What we really want is an explanation of why the button controls the light.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 18, 2015, 08:18:09 pm
Again, you're inventing a question so that you can propose an answer. We have not only the button and the light, but all the tracery of circuits in between; we have the power grid and the transformers; we have all the physics of electrons and resistors; we cannot yet build ourselves a button-light circuit, but we see nothing intrinsically unachievable about it.

There is no distinction between 'consciousness', 'metacognition', 'self-awareness', or anything else. These are all one and the same. Qualia are the first-person experience of these processes, nothing more. The mechanism is the why: we experience awareness in the first person because we are brains with mechanisms for generating first-person experience.

There are no levels of concepts. We can propose any system of truth we like, beginning with our own self-awareness. All of them compete in the same arena: can they use our perceptions to explain who we are, where we come from, and what we do?

Only one system is internally consistent, powerful, parsimonious, and useful. All others inevitably undercut themselves or spiral around into needlessness. Models like 'I am a Boltzmann brain' or 'I exist in a simulation' either produce the same results without that complication, or fizzle out into solipsism. Physicalism explains qualia. Qualia are not the first level, they are the last.

We can think up all kinds of models to explain why we might be. Maybe we're being systematically deceived by a god. Maybe we're in an alien experiment. We can apply these models to predict the universe, and see if we end up with a universe that contains ourselves, the information reaching us, and a reasonable set of systems that can explain ourselves and that information.

We know we have a good model when the model contains everything necessary and sufficient to lead to us. The god model and the alien model and all the others trip over the fact that they do not seem to do anything. If they are true, it apparently doesn't matter.

Consciousness is not a fundamental feature of the universe. Chalmers' notions of 'panpsychophysics' are, mildly put, masturbatory. They make no predictions, offer no solutions, explain nothing: they are as relevant and useful to the problem of consciousness as the Tooth Fairy. The idea of the ontologically autonomous consciousness is fatally testable: if we can account for everything happening in the brain, and if the brain is causing mental states, the ontologically autonomous 'consciousness' has no effect, it is causally decoupled from the universe, it is nothing, it does nothing, it does not exist.

The brain is as mysterious as a billiard table with a little quantum fuzz.

An organism with a human brain cannot be a p-zombie. It must have qualia. Qualia are created by the brain.

The deflationary solution is the solution.

What we should all REALLY be afraid of, in terms of hard problems, is cosmology!
Title: Re: The "hard problem of consciousness"
Post by: watsisname on October 18, 2015, 08:30:58 pm
Quote
There's an issue that hasn't yet been raised though: practical experience.  I fall asleep every night, and I wake up the next morning with a high degree of confidence that I'm the same me who went to sleep the night before.  One can argue that I don't have objective evidence of such, and that's probably true, but I do remember what I did the day before, and the day before that, and so on, and for general purposes that's good enough for me to get out of bed and on with my day.  Sure, I know that something can biologically break and throw the whole process off, but so far (knock on wood) it hasn't, and I feel pretty good about that track record.

Read my short story on page 6.  It is a true story!

You can decide if my waking self died and was replaced by a new self.  You can decide if this implies all of us die every night and are replaced with people who only have the memories of our previous selves.  If you are not disquieted by sleep, then you have no reason to fear the teleporter.

Quote
What we should all REALLY be afraid of, in terms of hard problems, is cosmology!

Quoted for ****ing truth!
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 18, 2015, 09:03:36 pm
This thread made me think about Searle, who is so bad at understanding how brains work that it makes me angry to know his work was ever taken as any kind of interesting or useful philosophy.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 18, 2015, 09:24:16 pm
Again, you're inventing a question so that you can propose an answer. We have not only the button and the light, but all the tracery of circuits in between; we have the power grid and the transformers; we have all the physics of electrons and resistors; we cannot yet build ourselves a button-light circuit, but we see nothing intrinsically unachievable about it.

The button is the physical processes underpinning consciousness, and the light is consciousness. What could the circuitry possibly be? If it's the physical processes, then you're identifying the button with the circuitry, which effectively means that there is no circuitry. If you're identifying the light with the button (saying that consciousness "is" the physical processes underpinning it), then you're either making a category mistake or not talking about consciousness.

There is no distinction between 'consciousness', 'metacognition', 'self-awareness', or anything else. These are all one and the same. Qualia are the first-person experience of these processes, nothing more.

You clearly don't mean this literally, because those words have different definitions. I'm defining consciousness as "what-it-is-like-to-be", or more precisely, "one of the two things you can be sure of, where existence is the other". (The two concepts are distinguishable, because existence is binary.)

The mechanism is the why: we experience awareness in the first person because we are brains with mechanisms for generating first-person experience.

Analogously, "the button turns on the light because the button has mechanisms for turning on the light". Or perhaps, "the button is the light", which is a category mistake as explained above.

There are no levels of concepts. We can propose any system of truth we like, beginning with our own self-awareness. All of them compete in the same arena: can they use our perceptions to explain who we are, where we come from, and what we do?

You immediately contradict yourself by saying that we begin with our own self-awareness (I prefer "consciousness"). This distinguishes consciousness from everything else, and is precisely what I mean by the first level.

Only one system is internally consistent, powerful, parsimonious, and useful. All others inevitably undercut themselves or spiral around into needlessness. Models like 'I am a Boltzmann brain' or 'I exist in a simulation' either produce the same results without that complication, or fizzle out into solipsism.

It's important to note that of those four features (consistency, power, parsimony, and utility), the only one that dualism might not have is parsimony. Crudely, dualism is physicalism + 1; it's internally consistent and generates exactly the same predictions as physicalism. In fact, dualism is also parsimonious: physicalism has no handle on first-level concepts, and ignores the manifest.

Physicalism explains qualia. Qualia are not the first level, they are the last.

Based on this and your earlier claim that philosophy is irrelevant, I can only assume that we're not talking about the same thing.

Consciousness is not a fundamental feature of the universe. Chalmers' notions of 'panpsychophysics' are, mildly put, masturbatory. They make no predictions, offer no solutions, explain nothing: they are as relevant and useful to the problem of consciousness as the Tooth Fairy. The idea of the ontologically autonomous consciousness is fatally testable: if we can account for everything happening in the brain, and if the brain is causing mental states, the ontologically autonomous 'consciousness' has no effect, it is causally decoupled from the universe, it is nothing, it does nothing, it does not exist.

We're back to the practicality argument. Consciousness appears to have no causal effect, but the statement "consciousness does not exist" is as absurd as the statement "I do not exist".

The brain is as mysterious as a billiard table with a little quantum fuzz.

The brain, while incredibly complex, is not what's being discussed. It does appear to be correlated with consciousness.

An organism with a human brain cannot be a p-zombie. It must have qualia. Qualia are created by the brain.

We're in agreement. Experiential properties supervene on physical properties.

The deflationary solution is the solution.

What we should all REALLY be afraid of, in terms of hard problems, is cosmology!

The ultimate cosmological question ("Why is there something rather than nothing?") is closely related to the hard problem, but from a different perspective.
Title: Re: The "hard problem of consciousness"
Post by: Mars on October 18, 2015, 09:32:50 pm

The brain is as mysterious as a billiard table with a little quantum fuzz.

The brain, while incredibly complex, is not what's being discussed. It does appear to be correlated with consciousness.

This is absurd, and has already been addressed.

This consciousness is mutable; it can be changed by experience, it can be surgically altered (however barbaric some of those surgeries are), it is wholly rationally explainable as a result of a biological process.

How do you explain the changes to consciousness that result from a lobotomy?
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 18, 2015, 09:40:07 pm
The brain, while incredibly complex, is not what's being discussed. It does appear to be correlated with consciousness.
This is absurd, and has already been addressed.

Addressed where? What has already been addressed?

You can't possibly be objecting to my statement that the brain is correlated with consciousness. Are you claiming that this entire thread has been about the brain? The brain doesn't match any definition of consciousness that I've given; for example, I can't know for sure that I have a brain.
Title: Re: The "hard problem of consciousness"
Post by: Scotty on October 18, 2015, 10:05:30 pm
The existence of consciousness, yes.  But the mechanism for consciousness is most certainly not as abstract as you're trying to say it is.  Human brains generate consciousness in the patterns of atoms and molecules that make them up.  This consciousness is mutable; it can be changed by experience, it can be surgically altered (however barbaric some of those surgeries are), it is wholly rationally explainable as a result of a biological process.

Addressed here.

How do you explain the changes to consciousness that result from a lobotomy?
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 18, 2015, 10:12:46 pm
Take all that **** out of the absurd GenDisc quote stacks I thought we'd left behind years ago and make it something readable.

All I can get out of it is that you've retreated to an argument that we have some trait which does nothing, is nothing, has no effect, and is unrelated to what we're discussing (the brain, which is our selves, which is consciousness).

Qualia are physical processes in the brain. There are only physical properties. Dualism offers no predictive power: it postulates a ghostly presence with no causal consequences, no connection to anything observable, no effects, and...no reason to exist.

Respond to arguments with something beyond 'you can't mean this, I don't understand it, you must have meant something different'.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 18, 2015, 10:14:01 pm
Have we at least established that teleporters are safer than day to day life, i.e. the reason we started all this?
Title: Re: The "hard problem of consciousness"
Post by: Mongoose on October 18, 2015, 10:20:20 pm
Quote
There's an issue that hasn't yet been raised though: practical experience.  I fall asleep every night, and I wake up the next morning with a high degree of confidence that I'm the same me who went to sleep the night before.  One can argue that I don't have objective evidence of such, and that's probably true, but I do remember what I did the day before, and the day before that, and so on, and for general purposes that's good enough for me to get out of bed and on with my day.  Sure, I know that something can biologically break and throw the whole process off, but so far (knock on wood) it hasn't, and I feel pretty good about that track record.

Read my short story on page 6.  It is a true story!

You can decide if my waking self died and was replaced by a new self.  You can decide if this implies all of us die every night and are replaced with people who only have the memories of our previous selves.  If you are not disquieted by sleep, then you have no reason to fear the teleporter.
I did read your story, and honestly what I mostly took away from it is that one should be careful about eating spicy foods before falling asleep.  But in all seriousness, the brain is a complex and often self-contradicting thing, and **** happens.  I can't count how many times I've walked into a room to get something and then stood there dumbfounded, because I have no ****ing idea what I initially walked in there to get.  But that single memory is no more all of "me" than the strangely-aborted dream you had was all of "you."  Overall, my me-ness seems to do a decent enough job of perpetuating itself, and I have no reason to believe that a process that involves taking a snapshot of the brain's physicality plus quantum state plus what have you and then recreating it ex-situ would do any better.  In fact, re: the aforementioned Trek episodes, I have several reasons to believe that it could quite easily do a demonstrably worse job.  Better the devil you know, right?

I guess a more succinct way to put it is, despite how confident one may be of the teleporter's safety, when it comes down to it, are you willing to be the first one to take that leap?  Or the tenth, or the hundredth?
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 18, 2015, 10:23:17 pm
I guess I don't see a way to get around this fundamental disconnect right now:

As far as I am concerned, there's no sense talking about anything that doesn't contribute to a single, coherent, unified explanation of everything. 'The only thing we can be sure of is that we're conscious' is something I can agree with — but what do we do with that?

We look around, observe the universe, search out causal logic, and if we eventually arrive at a causal model that begins with nothing and ends up explaining us, including our consciousness, we say 'this model is useful and predictive, and unlike any other, it seems to provide an account that explains everything we see. We thus consider it to be a model of the universe, of which we are a subsystem.'

You seem to say, 'we can do that, but when we're done we shrug and say, well, we might also have ghostly dualist voodoo which has no detectable effect, is unnecessary to explain anything, and is not suggested by anything except our own cultural traditions and desire to believe we're special...but we can't disprove that consciousness is special somehow...'

Those of you who would argue that physicalism will never explain qualia must contend with the fact that it already has. Physicalism says that we each have our own subjectivity for the same reason cameras take pictures from their own perspectives: that's what the machine does. It monitors itself, models itself and others, manipulates symbols in a workspace, applies global states like 'emotion' to modify function in response to adaptive challenges, and generally does a lot of stuff which requires it to have a 'this is me' concept. The brain needs to be able to model itself from someone else's perspective, and to integrate conflicting motor responses, and to do all kinds of **** which, it turns out, we experience as subjectivity. How else would we experience those things? Like the man inside the Chinese Room, blindly manipulating symbols? We're not the man. We're the room.

A brain is a meat machine. You build the machine, you build everything in the brain. We are in our brains. We are meat.


The devil you know isn't better when you can compare how the two processes actually work and see that, wow, the devil I don't know really alters a lot less in my brain! Which is me, as I know because there is nothing in the universe except physics.

Conversely, you can get whacked on the head, black-out drunk, you can develop amnesia that rolls you back a month, and you still accept that you're you.

Would you use a machine that rolled your brain back one second, on a dare?

(Remember that me-ness is only ever retrospective. We have absolutely no ability to look forward and say, ah, yes, that is me tomorrow. We don't know who we will be. Causality hasn't arrived yet. Me-ness is a credential that you claim looking back.)
Title: Re: The "hard problem of consciousness"
Post by: Mongoose on October 18, 2015, 10:33:24 pm
I think the fundamental disconnect in this whole conversation is that you can make the statement "We are meat" with 100% certainty, and to someone confident in that certainty, any anxiety over a teleporter seems absolutely nonsensical.  And again, from that standpoint, you're absolutely right.

But if you're even a shade under 100% certain...well, that's where things get interesting.  And even as someone who studied physics, I'm not willing to put good money down there.  So no, I wouldn't roll myself back a second, or jump myself from one room to the next, and even with its inherent risks and flaws and foibles, I'll stick to my own meat machine to handle its meat-ness (giggity).
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 18, 2015, 10:59:15 pm
But rolling yourself back one second with perfect fidelity is far less hazardous than (say) drinking until blacked out. Are you concerned that those who get blackout drunk are exterminating their subjectivity and replacing themselves with a copy?
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 18, 2015, 11:08:32 pm
The existence of consciousness, yes.  But the mechanism for consciousness is most certainly not as abstract as you're trying to say it is.  Human brains generate consciousness in the patterns of atoms and molecules that make them up.  This consciousness is mutable; it can be changed by experience, it can be surgically altered (however barbaric some of those surgeries are), it is wholly rationally explainable as a result of a biological process.

Addressed here.

I can't see how Mars' quote is relevant. In any case, the physical mechanism isn't the hard problem. It's the gulf between the mechanism and consciousness, as illustrated by the analogy.

Take all that **** out of the absurd GenDisc quote stacks I thought we'd left behind years ago and make it something readable.

Each segment is readable. You could respond to any one of the segments. But very well, I'll summarize it for you:

1. In the button/light analogy, there is no physical candidate for the circuitry.
2. "Consciousness", "metacognition", and "self-awareness" have different definitions. I've already defined consciousness.
3. You claim that there are no levels of concepts, then contradict yourself by distinguishing consciousness from every other concept.
4. Dualism is just as consistent, powerful, and useful as monism. It's also parsimonious, because monism ignores the manifest.
5. I don't know how you're defining consciousness, especially since you say that philosophy is irrelevant.
6. Consciousness appears to have no causal effect, but it exists. To deny this is to stick your head in the sand.
7. Consciousness is much more mysterious than the brain.
8. Experiential properties supervene on physical properties.
9. Cosmology and the hard problem are closely related.

All I can get out of it is that you've retreated to an argument that we have some trait which does nothing, is nothing, has no effect, and is unrelated to what we're discussing (the brain, which is our selves, which is consciousness).

Qualia are physical processes in the brain. There are only physical properties. Dualism offers no predictive power: it postulates a ghostly presence with no causal consequences, no connection to anything observable, no effects, and...no reason to exist.

Respond to arguments with something beyond 'you can't mean this, I don't understand it, you must have meant something different'.

Consciousness has no causal effect, but its existence is a brute fact. The brain is not consciousness, because it fails the definition (I can't know for certain that I have a brain). Dualism offers precisely as much predictive power as monism. To be clear, do you agree with my definition of consciousness?

It seems that your arguments always boil down to "it's not practical".
Title: Re: The "hard problem of consciousness"
Post by: Scotty on October 18, 2015, 11:15:30 pm
I can't see how Mars' quote is relevant. In any case, the physical mechanism isn't the hard problem. It's the gulf between the mechanism and consciousness, as illustrated by the analogy.

What we're getting at is that this gulf is a gulf of your own manufacture: you have created it so that there is something to describe, when that something is nothing but what you believe to be something.  There is no gulf between the mechanism and consciousness except the one you stubbornly insist is there, despite the lack of any and all reason to believe that it is there.
Title: Re: The "hard problem of consciousness"
Post by: watsisname on October 19, 2015, 12:16:50 am
It seems that your arguments always boil down to "it's not practical".

Well, if it has no predictive power then it is completely uninteresting.  I might as well suppose that the stars are lights shone through a black cosmic tarp by clever demons who know precisely how to mimic what a distant sun would look like.  This idea is not falsifiable, it does not help improve our understanding, and it does not serve as a pathway to new discoveries.  Why should I entertain it?

Quote
Dualism offers precisely as much predictive power as monism.

Can you give an example?  What observation could we hope to make that would distinguish between the two?
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 19, 2015, 01:03:36 am
@Scotty: The button represents the physical processes underpinning consciousness; the light represents consciousness. (Experiential properties supervene on physical properties.) What could the circuitry possibly be? Putting it another way, physicalism can only explain third-level phenomena. Consciousness is not a third-level phenomenon.

@watsisname: Dualism makes exactly the same predictions as physicalism, because it subsumes physicalism. Any prediction that physicalism makes, dualism also makes.
Title: Re: The "hard problem of consciousness"
Post by: AdmiralRalwood on October 19, 2015, 01:23:32 am
Pluralitas non est ponenda sine necessitate - "plurality should not be posited without necessity" (https://en.wikipedia.org/wiki/Occam's_razor).
Title: Re: The "hard problem of consciousness"
Post by: The E on October 19, 2015, 01:42:56 am
@watsisname: Dualism makes exactly the same predictions as physicalism, because it subsumes physicalism. Any prediction that physicalism makes, dualism also makes.

But that was watsisname's objection: If Theory A makes the exact same predictions as Theory B, and Theory A is more complicated (in this case, dualism being the more complicated one by introducing things that are immeasurable), why keep Theory A around?
Title: Re: The "hard problem of consciousness"
Post by: watsisname on October 19, 2015, 01:47:49 am
Any prediction that the concordance model of cosmology makes, 'sufficiently clever tarp demons' also makes.
Title: Re: The "hard problem of consciousness"
Post by: Bobboau on October 19, 2015, 02:52:17 am
But that was watsisname's objection: If Theory A makes the exact same predictions as Theory B, and Theory A is more complicated (in this case, dualism being the more complicated one by introducing things that are immeasurable), why keep Theory A around?

well, that's not quite true, it makes all sorts of predictions.... that you cannot possibly test :|
Title: Re: The "hard problem of consciousness"
Post by: Meneldil on October 19, 2015, 04:10:12 am
Again, you're arguing from a practical/utilitarian/predictive standpoint, which ignores virtually every branch of philosophy and hence has no bearing on philosophical issues.
This is false to an extreme extent; just as an example, let's take what Wikipedia lists as branches of philosophy, peculiar as that list may be:

Aesthetics, Epistemology, Ethics, Legal philosophy, Logic, Metaphysics, Political philosophy, Social philosophy.

That physicalism, as a form of metaphysics, does irreparable damage to most of it is clear, but that's one branch (and good riddance tbh). There's no reason to lump the rest of philosophy in with the cruft.

I am indeed examining all of this very rigorously, "from all the yous involved", but I'm about to go out, so I will develop the idea later. I'll just leave this hint if you want to guess where I'm going with it: a charge of murder does not require you to prove that the victim was aware they would die, or that they suffered in any way, or that they had "new memories". Killing people in their sleep is still murder, for example. But I'll be more specific, and, as you correctly put it, "very rigorous", later on.

@Meneldil, nowhere in anything I wrote did I say that Consciousness is not copiable. That idea is completely orthogonal to my concerns.

There is a joke about how Consciousness is indeed something different from other "body parts", although it does not clarify the discussion I was having one bit (it's still funny though): we are all perfectly fine with having several of our organs transplanted and substituted with new ones. I will guess that you would mind having a brain transplant.
Hm, then there's definitely something I didn't get in this discussion, sorry.

As for the murder: the question is not "is it okay to kill an unconscious person", it's "can you get away with it if you say 'whoa, my bad' and create an identical copy in a timely manner".
Title: Re: The "hard problem of consciousness"
Post by: zookeeper on October 19, 2015, 04:11:44 am
Physicalism tells us why we have first person experiences, why we all experience the world in first person as a particular brain: because each brain's physical structure computes qualia. It's the simplest thing in the world. We are all ourselves.

Completely agreed! Physicalism is great that way. But clearly that wasn't what I was talking about.

Ok, last attempt: physicalism explains why we have first person experiences, yet physicalism doesn't explain why the world is perceived and experienced through your brain, and not mine. As far as physicalism goes, the world ought to be perceived and experienced through both, but funnily enough, it's only perceived and experienced through your brain.

There's no dualism or voodoo involved with that, only the fundamental limits and confines of subjectivity which cannot be escaped. Subjective experience is necessarily limited and kind of solipsist, and thus in some corner cases like this, incompatible (but not mutually exclusive) with objective explanations of the world and unable to fully internalize them.

Subjective experience is like a non-Turing-complete language. It has limits and just can't do some things, no matter how straightforward and provable they might be in and of themselves. And the persistence of subjective experience in teleportation might be one of those things that a human mind cannot really grasp in first person, even when that does not prevent it from understanding and agreeing with it on a rational level. If an objection to safety of teleportation is a result of the former, then throwing more of the latter at it isn't going to do anything.
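
To make the limit concrete, here is a toy sketch (Python, purely illustrative; none of this is meant as a model of minds): a finite-state recognizer, the textbook example of a non-Turing-complete formalism, cannot verify balanced parentheses however it is written, because the task needs unbounded counting and the formalism has none.

Code
# A finite automaton can only track nesting depth up to a fixed bound;
# past that bound it is forced to throw information away.
def fsm_accepts(s, max_depth=3):
    depth = 0
    for ch in s:
        if ch == '(':
            depth = min(depth + 1, max_depth)  # saturates: information lost
        elif ch == ')':
            depth = max(depth - 1, 0)
    return depth == 0

# What an unbounded (Turing-complete) checker can do: exact counting.
def truly_balanced(s):
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

deep = '(' * 5 + ')' * 4      # unbalanced, but nested deeper than the bound
print(fsm_accepts(deep))      # True  -- the limited machine is fooled
print(truly_balanced(deep))   # False -- the stronger machine is not

The point isn't the parentheses: "can't grasp X" can be a structural property of a system rather than a failure of effort on its part.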
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 19, 2015, 05:17:42 am
Pluralitas non est ponenda sine necessitate - "plurality should not be posited without necessity" (https://en.wikipedia.org/wiki/Occam's_razor).
But that was watsisname's objection: If Theory A makes the exact same predictions as Theory B, and Theory A is more complicated (in this case, dualism being the more complicated one by introducing things that are immeasurable), why keep Theory A around?

Here were the claims made about dualism:

Dualism offers no predictive power
it has no predictive power

They are false, because dualism has exactly as much predictive power as physicalism. Occam's Razor is only relevant when talking about parsimony, which I addressed here:

It's important to note that of those four features (consistency, power, parsimony, and utility), the only one that dualism might not have is parsimony. Crudely, dualism is physicalism + 1; it's internally consistent and generates exactly the same predictions as physicalism. In fact, dualism is also parsimonious: physicalism has no handle on first-level concepts, and ignores the manifest.
----------
Again, you're arguing from a practical/utilitarian/predictive standpoint, which ignores virtually every branch of philosophy and hence has no bearing on philosophical issues.
This is false to an extreme extent; just as an example, let's take what Wikipedia lists as branches of philosophy, peculiar as that list may be:

Aesthetics, Epistemology, Ethics, Legal philosophy, Logic, Metaphysics, Political philosophy, Social philosophy.

None of those branches of philosophy generate testable predictions.
Title: Re: The "hard problem of consciousness"
Post by: The E on October 19, 2015, 05:32:17 am
In order to accept Dualism as a superior theory to Physicalism, it must demonstrate greater predictive power. If, by your own admission, its predictive power is the same as that of Physicalism, how can we choose which theory is correct? Occam's Razor is the only criterion we have: In the absence of differences in predictive power, the simpler theory is to be preferred.
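
For what it's worth, the preference can be made quantitative. A rough sketch of the Bayesian form of Occam's Razor (Python; the prior and the numbers are invented for illustration only): when two models assign identical likelihood to every observation, the likelihood cancels out of the comparison and only a complexity-penalizing prior is left to decide.

Code
likelihood = 0.8  # P(data | model), identical for both models by assumption

def prior(n_postulates, penalty=0.5):
    # Toy prior: each extra postulated entity halves the prior probability.
    return penalty ** n_postulates

models = {'physicalism': 1, 'physicalism plus an extra substance': 2}

unnormalized = {m: prior(n) * likelihood for m, n in models.items()}
total = sum(unnormalized.values())
for m, p in unnormalized.items():
    print(m, round(p / total, 2))
# physicalism 0.67, physicalism plus an extra substance 0.33: with equal
# predictions the simpler model always keeps the higher posterior.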
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 05:34:11 am
Ok, last attempt: physicalism explains why we have first person experiences, yet physicalism doesn't explain why the world is perceived and experienced through your brain, and not mine. As far as physicalism goes, the world ought to be perceived and experienced through both, but funnily enough, it's only perceived and experienced through your brain.

It does! It's basically the anthropic principle, or the camera analogy I mentioned earlier. Why does the camera take pictures from its own perspective? Because that's what it does. We experience the world subjectively, as ourselves, because that's what a brain is: a machine for creating subjectivity.
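
A throwaway sketch of the camera point (Python, purely illustrative): two structurally identical machines, no extra ingredient anywhere, and each still registers the scene from its own vantage, simply because the computation is indexed to the machine running it.

Code
class Camera:
    def __init__(self, position):
        self.position = position  # the only thing that differs between units

    def picture(self, scene):
        # Each unit renders the same objective scene relative to itself.
        return [obj - self.position for obj in scene]

scene = [10, 20, 30]          # the objective world
a, b = Camera(0), Camera(20)  # two physically identical machines
print(a.picture(scene))       # [10, 20, 30]  -- the world from a's vantage
print(b.picture(scene))       # [-10, 0, 10]  -- same world, b's vantage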

Altering the brain alters the mind. They are identical. Subjectivity is the brain. Consciousness is the brain.

Although this

Quote
Subjective experience is like a non-Turing-complete language. It has limits and just can't do some things, no matter how straightforward and provable they might be in and of themselves. And the persistence of subjective experience in teleportation might be one of those things that a human mind cannot really grasp in first person, even when that does not prevent it from understanding and agreeing with it on a rational level. If an objection to safety of teleportation is a result of the former, then throwing more of the latter at it isn't going to do anything.

is a pretty interesting point, and I get what you're saying.

~

And again, dualism has no predictive power. There is no dualist model. No one can write an equation or a model of a neuron using dualism. There is no means of predicting behavior or thought using dualism—no framework to even begin to make a prediction, since dualist consciousness has no properties. It has no consistency (because it makes no predictions), it has no power because it cannot explain anything, it has no utility because it is not good for anything.

Dualism is formally identical to physics+fairies. We have no reason to believe in the fairy.

The 'brute fact' of consciousness, that it must be true because it is true, is not just a fact but a question: why am I conscious? Why do I experience qualia?

Physicalism answers this question with a (not yet complete!) account of how the universe emerged, gave rise to life, and led to us. This model is necessary and sufficient. Dualism is neither necessary nor sufficient: it does not subsume physicalism; it is an appendix attached to physicalism, an asterisk that says 'this makes us uncomfortable.'

Imagine a device like Maxwell's Demon which tracks every particle in the brain, watched by a highly intelligent system that knows how to translate particle movements into a thought. This is a complete account of the brain. What's more, it leaves no room for consciousness as a causal force: if consciousness acted, it would betray itself in the changed motion of particles. If consciousness does not act, it is not real.

If a future society wrote an incredibly elaborate set of quantum field equations, describing a human brain and its possible evolutions over time, and if those equations were computed (even by hand), the result would be a conscious brain. The meat here is the system doing the computing.

Teleporters are as safe as day-to-day life, the Chinese Room is dumb, and anyone who thinks consciousness is an intrinsic property of informational systems needs to think about what happens when he goes to sleep every night.

In order to accept Dualism as a superior theory to Physicalism, it must demonstrate greater predictive power. If, by your own admission, its predictive power is the same as that of Physicalism, how can we choose which theory is correct? Occam's Razor is the only criterion we have: In the absence of differences in predictive power, the simpler theory is to be preferred.

Right. Dualism is as plausible as a universe in which consciousness only exists during the lifetime of Newt Gingrich. Why not? Who's to say otherwise? If we had a cultural history stretching back thousands of years about the power of the phonemes 'Noot Ging Rich' that might actually be what we believe.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 19, 2015, 05:44:59 am
And again, dualism has no predictive power.

Again, dualism has exactly as much predictive power as physicalism.

In order to accept Dualism as a superior theory to Physicalism, it must demonstrate greater predictive power. If, by your own admission, its predictive power is the same as that of Physicalism, how can we choose which theory is correct? Occam's Razor is the only criterion we have: In the absence of differences in predictive power, the simpler theory is to be preferred.

Okay, now we're getting somewhere. Consciousness appears to have no causal effect. Does this mean that we should pretend it doesn't exist? No, because it does exist; it is a brute fact. This is the sense in which physicalism is incomplete. Physicalism is completely silent on consciousness, and can only address third-level correlates like "the brain".
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 05:52:35 am
And again, dualism has no predictive power.

Again, dualism has exactly as much predictive power as physicalism.

In order to accept Dualism as a superior theory to Physicalism, it must demonstrate greater predictive power. If, by your own admission, its predictive power is the same as that of Physicalism, how can we choose which theory is correct? Occam's Razor is the only criterion we have: In the absence of differences in predictive power, the simpler theory is to be preferred.

Okay, now we're getting somewhere. Consciousness appears to have no causal effect. Does this mean that we should pretend it doesn't exist? No, because it does exist; it is a brute fact. This is the sense in which physicalism is incomplete. Physicalism is completely silent on consciousness, and can only address third-level correlates like "the brain".

Bro just reread the post directly above yours. Dualism predicts nothing. It's the Newt Gingrich hypothesis. All it says is — here, let's talk about pancakes:

A pancake is delicious.

The physicalist says, check it out. We can look at the chemistry of the process that makes the pancake. By altering it, we can alter the taste of the pancake. We can look at the psychophysics of the taste buds and think about the evolutionary pressures that would produce them. The pancake is delicious because of these chemical properties, and if we altered them, it wouldn't be.

The dualist says, well, yeah, that all seems to be correct, certainly I can't disprove it. Additionally, the pancake produces a quale of deliciousness, which is...here because I say it's here.

I think sleep is also a good entry into some of the intuitions that are ****ing up the dualist position.

Sleep is a great litmus test for solipsism. Does the universe exist while we are unconscious? While our 'single brute fact' is, well, not true any more? We wake up and things have changed. Two explanations:

The universe does not exist while we're unconscious, but it changed when we decided to sleep, for...reasons. We try to apply this theory and slam into a brick wall. It explains nothing.

The universe is an objective system which operates on its internal logic, and we are only a subsystem. And voila — we can use this model to explain nearly everything.

Physicalism is the opposite of silent on consciousness. It shouts that there is no mere correlation: consciousness is the brain. Thought is computable. Alter the meat, alter the mind.

Dualism fights in defense of a pocket universe which does nothing, says nothing, means nothing, and has no primacy. The 'brute fact' of consciousness turns off every night and yet when it returns things have changed. A pathology of the mind can totally alter consciousness.
Title: Re: The "hard problem of consciousness"
Post by: The E on October 19, 2015, 05:55:40 am
Okay, now we're getting somewhere. Consciousness appears to have no causal effect. Does this mean that we should pretend it doesn't exist? No, because it does exist; it is a brute fact. This is the sense in which physicalism is incomplete. Physicalism is completely silent on consciousness, and can only address third-level correlates like "the brain".

AIUI, Physicalism treats consciousness as an emergent effect of the neural connectome; thus changes to the infrastructure of the brain (be they chemical or physical) change consciousness. We do not know yet what the thresholds are for human-recognizable consciousness to emerge, but that's not a big issue: With continuing research, we'll eventually be able to say.

Dualism, on the other hand, postulates an external, unmeasurable, unseeable agent that imbues a quality of consciousness onto an object. By definition, we cannot test this, we cannot measure this, there's no way to derive consistency for this. Therefore, dualism has to be rejected.
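
Emergence in that sense isn't mystical; it's the ordinary behavior of rule-following systems. A stock illustration (Python; nothing brain-specific is being claimed): in Conway's Game of Life the update rule only ever mentions single cells and their neighbors, yet a "glider" travels across the grid, a perfectly real object that lives at a different level of description than the rule producing it.

Code
import numpy as np

def step(grid):
    # Count each cell's eight neighbors via wraparound shifts.
    n = sum(np.roll(np.roll(grid, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

grid = np.zeros((8, 8), dtype=int)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:  # a glider
    grid[r, c] = 1

for _ in range(4):  # after 4 steps the same shape reappears, shifted
    grid = step(grid)
print(grid)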
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 19, 2015, 06:02:06 am
Dualism predicts nothing.

This would imply that physicalism predicts nothing, because dualism predicts everything that physicalism does. The E's objection was that dualism predicts no more than physicalism, and this is something that I agree with.

Physicalism is the opposite of silent on consciousness. It shouts that there is no mere correlation: consciousness is the brain.

Consciousness is not the brain. The brain fails the definition of consciousness.

Dualism, on the other hand, postulates an external, unmeasurable, unseeable agent that imbues a quality of consciousness onto an object. By definition, we cannot test this, we cannot measure this, there's no way to derive consistency for this. Therefore, dualism has to be rejected.

Consciousness is (in a sense) unmeasurable and unseeable, but not external. Nevertheless, it exists; this is a brute fact. Physicalism makes the mistake of either ignoring consciousness, or confusing it with something like "the brain".
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 06:02:31 am
Oh, yo, let me add: physicalism is testable! It can be disproven!

Quote
Imagine a device like Maxwell's Demon which tracks every particle in the brain, watched by a highly intelligent system that knows how to translate particle movements into a thought. This is a complete account of the brain. What's more, it leaves no room for consciousness as a causal force: if consciousness acted, it would betray itself in the changed motion of particles. If consciousness does not act, it is not real.

If our monitor here saw particles behaving in acausal ways, and couldn't find a new physical theory to explain it, boom, physicalism disproven. That's all it takes!

Falsifiability.
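
As a cartoon of what that test would look like in code (Python; the dynamics and the tolerance are stand-ins for real physics, nothing more): predict every particle's next state from the physical model, compare against what the monitor reports, and treat any residual the model cannot absorb as a candidate falsification.

Code
import numpy as np

rng = np.random.default_rng(0)

def predict(pos, vel, dt=1.0):
    # Stand-in physical model: free motion. A real monitor would plug in
    # the best available dynamical theory here.
    return pos + vel * dt

pos = rng.normal(size=(1000, 3))   # particle positions
vel = rng.normal(size=(1000, 3))   # particle velocities

observed = predict(pos, vel)       # here, the universe behaves causally
# observed += 0.1                  # uncomment to inject an 'acausal' nudge

residual = np.abs(observed - predict(pos, vel)).max()
print('physicalism in trouble' if residual > 1e-9 else 'model holds')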
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 06:04:03 am
Dualism predicts nothing.

This would imply that physicalism predicts nothing, because dualism predicts everything that physicalism does. The E's objection was that dualism predicts no more than physicalism, and this is something that I agree with.

Physicalism is the opposite of silent on consciousness. It shouts that there is no mere correlation: consciousness is the brain.

Consciousness is not the brain. The brain fails the definition of consciousness.

Dualism, on the other hand, postulates an external, unmeasurable, unseeable agent that imbues a quality of consciousness onto an object. By definition, we cannot test this, we cannot measure this, there's no way to derive consistency for this. Therefore, dualism has to be rejected.

Consciousness is (in a sense) unmeasurable and unseeable, but not external. Nevertheless, it exists; this is a brute fact. Physicalism makes the mistake of either ignoring consciousness, or confusing it with something like "the brain".

You've posted all these statements over and over. I hate to post about posting, but you need to engage with the arguments being made against you.

If you think that dualism predicts everything physicalism does, make a prediction in which dualism is necessary and sufficient.

If you think that consciousness is not the brain, explain why altering the brain alters consciousness.

If you think that identifying consciousness as the brain is a mistake, explain why in a falsifiable fashion.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 06:06:02 am
Like there ain't no sense effort posting if your responses are gonna be a flowchart. If 'the meat is the mind', say 'the brain fails the definition of consciousness.' If 'physicalism', say 'dualism does the same thing.' If 'consciousness is explained by physicalism', then 'only third level correlates are explained'. I don't mean to be a mega prick (it's 7:05 sorry) but I feel like we've been stuck there for a while.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 19, 2015, 06:14:23 am
Quote
What's more, it leaves no room for consciousness as a causal force: if consciousness acted, it would betray itself in the changed motion of particles. If consciousness does not act, it is not real.

Causal effectiveness is not a requirement for existence.

You've posted all these statements over and over. I hate to post about posting, but you need to engage with the arguments being made against you.

I also hate to post about posting, but I've explained these statements over and over. You appear not to be listening.

If you think that dualism predicts everything physicalism does, make a prediction in which dualism is necessary and sufficient.

If you think that consciousness is not the brain, explain why altering the brain alters consciousness.

If you think that identifying consciousness as the brain is a mistake, explain why in a falsifiable fashion.

1. Dualism is sufficient to predict that (say) nothing travels faster than light. Dualism is not necessary to make this prediction, but it nevertheless makes the prediction, just as physicalism does.
2. This is precisely the hard problem. We know that it happens, but we don't know why.
3. The existence of consciousness is a brute fact. The existence of the brain is not.

I don't mean to be a mega prick (it's 7:05 sorry) but I feel like we've been stuck there for a while.

I understand; I've also found this irritating. It's a result of the language barrier.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 06:22:14 am
Quote
What's more, it leaves no room for consciousness as a causal force: if consciousness acted, it would betray itself in the changed motion of particles. If consciousness does not act, it is not real.

Causal effectiveness is not a requirement for existence.

Why not? How can anything acausal exist? If it did, why would we care? This is the core of the problem: we can say that qualia and consciousness are acausal, floating out there, just being as brute facts...yet they seem to be subject to causality really hard, and even if consciousness is just an executive summary issued post facto, we're burning calories on it. Evolution tells us it's there for an adaptive purpose.

Quote
I also hate to post about posting, but I've explained these statements over and over. You appear not to be listening.

Broman, you can tell we're listening because we keep writing elaborate thought experiments to disprove them and try to push the conversation forward!

If dualism is unnecessary to make predictions about the universe, how can we distinguish it from the Newt Gingrich hypothesis? Don't they seem equally likely?

If there is a hard problem, if we don't know why altering the brain alters consciousness, why should we avoid the deflationary answer? Why don't we conclude that, hey, the brain is consciousness, it's in there, like a picture in a camera?

If the existence of consciousness is a brute fact, how do you answer the past several pages of people pointing out that the brute fact goes away and becomes untrue for big chunks of our lives? I don't know I'm conscious for ~30% of my existence and yet when I wake up my consciousness has changed. ****'s no prime mover.

I don't think there's a language barrier. I think you're using definitions as arguments. Restating a definition doesn't protect it.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 19, 2015, 06:48:44 am
Ironically this also seems like the final defeat of the 'teleporters are dangerous' argument. Even if you accept that consciousness is somehow an exceptional fact, the only absolutely certain and incontrovertible fact, then you now must concede the teleporter will be safe: barring dualism, you know that whoever comes out the other side has exactly the same first-person capacity to say 'I am conscious' and 'I exist'. The alternative is postulating that these capabilities somehow arise from nonphysical fantasy.

So I've been away for a day and a half and already I see Battuta clinging to his ideas and declaring his victory by fiat. All of these arguments really boil my blood because they are basically objectifying every aspect of humanity and consciousness, and treating it as any other object, capable of being transferred in algebraic terms. This certainty is ludicrous. The belief is not. You can believe in all these things, but what really annoys the hell out of me is the nagging certainty you have of them without presenting any shred of philosophical argument for their being so. You just assume that if a certain brain state is equal to another brain state, then it follows that it is the same consciousness. If we collapse one and create another at the exact same state, we are effectively moving Consciousness.

But this is just a definition of Consciousness that is being challenged. Not, again, that it is not true, but that you have no way to test if it is true.

I'll go back to my "two rooms, one wall" example, because your counterpoints are, quite simply, insufficient. I was very disappointed in them. So let's see, here was your counter:

Quote
From the pre-teleport person's perspective, they only know that their body's going to be scanned. If they knew the actual terms of the deal, they would say 'wait, wait, if we do this, one of my causal descendants is going to die! They'll diverge and then terminate irretrievably! Sure, the other fork will survive, but I don't want my child subjectivity to experience that!'

From Fork A's perspective, on the far side of the wall, they've suddenly jumped into an identical room but without the presence of the scan operator. Weird!

From Fork B's perspective, they have been given a body scan, and now suddenly they're going to be murdered! They are causally divergent from Fork A, and their brainstate is about to be eradicated. It will not feed forward through ordinary causality or through a teleporter. It's just done, gone, leaving.

This quote exposes such a simple flaw in the teleportation problem that I am aghast at how you, instead of acknowledging and recognizing it, simply diverted your attention to a mere technicality that could be easily swept away.

Here are simple questions regarding my scenario:

 A) Can the destruction of "Fork B" be called "murder"? It seems that it can. If this person is not exterminated, he can continue existing in the world; if he is exterminated, the Cosmos has one less Consciousness in it. There's blood on the ground, there's a killer, there's an energy discharged to destroy this fork. I don't see how it is not murder;

 B) Is the murder of "Fork B" dependent on Fork B's pre-knowledge that he is going to die? That is, does the determination that what happens to him is murder depend on the words the scan operator tells you? If the scan operator is silently killing you, does that stop being murder? Clearly, that is ridiculous. People don't get out of jail sentences for silently killing people.

 C) Is the murder of "Fork B" dependent on the speed of his death? If he is immediately killed after the scan, can we say that the operator is therefore innocent of his actions? This is absurd: if he does not kill the Fork, the Fork lives as the laws of physics allow him to. It is the action by the operator that causes the extermination of the Fork. The speed with which he does this is irrelevant: he could wait an hour, he could wait a minute, he could wait a second, he could wait a microsecond. The murder is murder nevertheless; a Consciousness *has* been eradicated nevertheless.

 D) Something has been hinted about the "suffering" of Fork B. Oh, the humanity, so concerned with the "suffering". It's a total strawman. You can easily depict a scenario where this person was given a drug before being scanned that prevented his psychological suffering. The administration of this drug does not absolve anyone from the crime of murdering him.


What I *do* believe is that some people here are suffering from the same illusion that people who go to see magic tricks suffer. They see a ball in one man's hand. And suddenly he closes his hand and opens his other hand, and voilà, the ball is there! But if he misses the sync, he will expose the fact that there were really two balls, and your brain will cry foul! "That wasn't really magic, those were always two balls!" They'll be quite right. And here it's the exact same thing. You pretend it's the same consciousness if the technical magicians are able to exactly transfer the information and rebuild the same brain state in another body, while killing the first before your brain cries foul and the illusion gets broken.

But it was an illusion all along. Teleportation, as described in this thread, is killing a consciousness just after copying it into another body. And killing the only thing we know we have is, by far, the worst crime you can ever commit against anyone.
Title: Re: The "hard problem of consciousness"
Post by: The E on October 19, 2015, 07:03:53 am
What competing theories are there? Dualism, with its "what he said, but faeries did it!" approach?

Right now, to the best of our knowledge, physicalism is the only game in town. It's the only theory that is completely testable; believing in it or treating it as the absolute truth seems like a fairly safe bet in the absence of a complete or even partial disproof.

I must admit, I don't quite get what you are on about, Luis. Is it wrong to prefer one theory over the other? Wrong to argue for it? Wrong to assume a theory is fact when there are no indications that it can't be?
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 19, 2015, 07:45:44 am
Causal effectiveness is not a requirement for existence.
Why not? How can anything acausal exist? If it did, why would we care? This is the core of the problem: we can say that qualia and consciousness are acausal, floating out there, just being as brute facts...yet they seem to be subject to causality really hard, and even if consciousness is just an executive summary issued post facto, we're burning calories on it. Evolution tells us it's there for an adaptive purpose.

Now we're getting into the definition of "existence", which is just as slippery. It's perfectly possible for something to exist and be acausal. What prohibits this possibility?

A more legitimate concern is, as you say, why should we care? The answer is that almost always, we shouldn't care about "acausal things". Consciousness is unique among "acausal things", because we know it exists; its existence is a brute fact. The phenomenon is inescapable.

Broman, you can tell we're listening because we keep writing elaborate thought experiments to disprove them and try to push the conversation forward!

I suggest that we drop the "posting about posting". We're both claiming that the other isn't listening, when in fact the language barrier is probably to blame.

If dualism is unnecessary to make predictions about the universe, how can we distinguish it from the Newt Gingrich hypothesis? Don't they seem equally likely?

If there is a hard problem, if we don't know why altering the brain alters consciousness, why should we avoid the deflationary answer? Why don't we conclude that, hey, the brain is consciousness, it's in there, like a picture in a camera?

If the existence of consciousness is a brute fact, how do you answer the past several pages of people pointing out that the brute fact goes away and becomes untrue for big chunks of our lives? I don't know I'm conscious for ~30% of my existence and yet when I wake up my consciousness has changed. ****'s no prime mover.

I don't think there's a language barrier. I think you're using definitions as arguments. Restating a definition doesn't protect it.

We can't distinguish dualism from the Newt Gingrich hypothesis in a testable manner. (Nor, for that matter, can we distinguish physicalism from the Newt Gingrich hypothesis in a testable manner.) The difference between dualism and the Newt Gingrich hypothesis is that dualism addresses a phenomenon crying out for explanation: namely, the brute fact of consciousness.

We cannot say that the brain and consciousness are the same. Two things can only be the same if they have the same properties. Consciousness has the property that it is a brute fact; the brain does not.

You're asking me how physical events can affect consciousness, for example by making it go away. This is the hard problem.

Definitions are necessary for discussion. How are you defining consciousness? I'm defining it as one of the two brute facts, where existence is the other.

What competing theories are there? Dualism, with its "what he said, but faeries did it!" approach?

Right now, to the best of our knowledge, physicalism is the only game in town. It's the only theory that is completely testable; believing in it or treating it as the absolute truth seems like a fairly safe bet in the absence of a complete or even partial disproof.

I know you were being facetious, but dualism is not "faeries did it". There are many types of dualism, some of which make no attempt to "explain" consciousness. What they all have in common is that they include consciousness - the most obvious, familiar, intimate phenomenon in our lives - and physicalism does not.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 08:00:16 am
Go back to my post about what you do with the knowledge that you exist. I think that's where we differ. I'd quote it but I am currently instanced on a cell phone.

What's vital is that dualism can be attached to physicalism, as some adjunct. But dualism can never lead to physicalism: it has no explanatory power, it's useless. It's a rear-guard action.

Note that the Newt Gingrich hypothesis does explain consciousness to the same standards as dualism - it happens because it happens. Both fall short of physicalism because they cannot explain where consciousness (the most familiar fact in our lives) comes from, what it's for, or why it exists. Physicalism provides a simple and powerful solution. Consciousness is the brain.

What interests me is the question of when, exactly, a system 'wakes up' and develops qualia. We don't know yet, but we have firm bounds around the answer - we know the answer must be physical. I think the solution is probably deflationary: qualia are simply the first-person experience of the mental states that organisms develop to produce adaptive behavior. The most primitive must be senses of reward and aversion. I think these are probably the germs of qualia, although I'm not sure they really coalesce into a sense of 'I am me' until organisms need the ability to model themselves from the perspective of others.
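
A minimal sketch of the functional skeleton that paragraph gestures at (Python; every name here is invented for illustration, and no claim is made that this toy has experience): an agent whose global reward/aversion state is computed from outcomes and then colors every later decision, which is a global state like 'emotion' in its crudest possible form.

Code
class Agent:
    """Toy agent: a self-tag plus one global affective state."""

    def __init__(self, name):
        self.name = name   # the rudimentary 'this is me' tag
        self.mood = 0.0    # global state: > 0 reward-ish, < 0 aversive

    def sense(self, outcome):
        # Outcomes fold into a slowly decaying global state instead of
        # being handled as isolated events.
        self.mood = 0.9 * self.mood + 0.1 * outcome

    def act(self):
        # The same choice point yields different behavior depending on
        # the global state -- function modified in response to outcomes.
        return 'approach' if self.mood >= 0 else 'avoid'

agent = Agent('me')
for outcome in (1.0, 1.0, -5.0, -5.0):
    agent.sense(outcome)
    print(agent.name, round(agent.mood, 2), agent.act())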
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 08:07:07 am
There is no hard problem. The problem is easy. What's hard is giving up the illusions that make us think the problem is difficult.

It's very similar to the anthropic principle. We know we exist in a universe that permits existence. But we don't say 'well, we know we exist in a universe, that's a brute fact - so our existence must be somehow special and bicameral.' We work to know what kind of processes create universes, and how ours settled on these values. We know that our first-person existence occurs because we live in a universe that permits first person existence, but we don't treat that as exceptional.

We're like a laser told to look for the darkness. Wherever we look, we do it with consciousness: so we treat consciousness as fundamental and primary. But it's an illusion.
Title: Re: The "hard problem of consciousness"
Post by: The E on October 19, 2015, 08:08:42 am
I know you were being facetious, but dualism is not "faeries did it". There are many types of dualism, some of which make no attempt to "explain" consciousness. What they all have in common is that they include consciousness - the most obvious, familiar, intimate phenomenon in our lives - and physicalism does not.

But what does treating consciousness as some sort of emanation of the luminiferous aether allow us to do? Does it offer any insight into the formation of consciousness? Does it offer any insight into the how and why of consciousness? Does it allow us to make better medicine, better therapies?

The monist approach, at the very least, allows us to say that any or all of these things can be within our grasp, provided we keep studying.


Dualism, to me, always sounds like a manifestation of the god of the gaps. "There must be something special about us because we are capable of metacognition, there must be something unexplainable, unmeasurable, unquantifiable that makes us human"; that's what I hear. Let's just gloss over the fact that we've just introduced an acausal mechanism into a universe that (to the best of our knowledge) can be completely described in terms of a limited set of interactions between physical entities, because consciousness must be special.
To me, that's not acceptable. Certainly not very useful.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 19, 2015, 08:49:42 am
What competing theories are there? Dualism, with its "what he said, but faeries did it!" approach?

Right now, to the best of our knowledge, physicalism is the only game in town. It's the only theory that is completely testable; believing in it or treating it as the absolute truth seems like a fairly safe bet in the absence of a complete or even partial disproof.

I must admit, I don't quite get what you are on about, Luis. Is it wrong to prefer one theory over the other? Wrong to argue for it? Wrong to assume a theory is fact when there are no indications that it can't be?


Let me get off this "monism - dualism" train that I've somehow been put on, I have no clue why. I don't care about these "monism-dualism" shenanigans, because to me they are both ineffable concepts. I do understand why physicalists love monism: it all boils down to Occam's razor. I sympathize with that sentiment. A lot.

But at the end of the day it's still irrelevant. I feel like a caveman who is forced to accept either the idea that it is the God of Thunder that throws lightning, or the idea that it's the lack of harmony between the Four Elements playing itself out. As far as I can tell, I like the latter idea quite a lot more than the former (which is to say, I like the monist intuition a lot more than the dualist), but I am not obliged to commit to this idea about the Four Elements "or else" I'm some kind of idiot who doesn't accept being teleported.

I said it before: by all means, teleport yourselves. By my own standards and life experience, it won't affect me one bit if everyone besides me teleports themselves. If you're correct and Consciousness is merely "transferred" like anything else, then my loved ones are merely transporting themselves. If you're not, well, I might be constantly losing the people I love, but they are being constantly substituted by indistinguishable copies. If I let myself forget the horror inscribed in these notions, it's all the same for me!

It's not me who is being stubborn here; I'm not the one saying that this is "irrefutable", that any other ideas were "debunked", and that it's thus ridiculous not to teleport yourself.


But I'll note that no one took up my challenges; people merely continued to declare that "We don't know yet, but we have firm bounds around the answer". Well, what can one say to such an incredible statement but gasp?
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 19, 2015, 09:05:45 am
What's vital is that dualism can be attached to physicalism, as some adjunct. But dualism can never lead to physicalism: it has no explanatory power, it's useless. It's a rear-guard action.

We must now address our definitions of "dualism". Based on your previous uses of the word, you apparently view dualism as a bald assertion - something like "consciousness is nonphysical". On its own, this statement clearly has no predictive power. No sane person would view this as a complete description of the universe. I view dualism not as an adjunct to physicalism, but as the completion of physicalism - which means that it has all the predictive power of physicalism.

Note that the Newt Gingrich hypothesis does explain consciousness to the same standards as dualism - it happens because it happens. Both fall short of physicalism because they cannot explain where consciousness (the most familiar fact in our lives) comes from, what it's for, or why it exists. Physicalism provides a simple and powerful solution. Consciousness is the brain.

There are different types of dualism. Not all of them attempt to "explain" consciousness (Russell's/Chalmers' does make this attempt), but they include consciousness, whereas physicalism does not.

I notice that you still haven't defined consciousness, so I'm forced to read your mind. You've said several times that existence and consciousness are both brute facts. Hence, unless you believe that there are more than two brute facts - which I don't think anyone in this thread has claimed - my definition ("consciousness is one of the two brute facts, where existence is the other") must coincide with yours. Hence the brain cannot be consciousness, because it is not a brute fact. This is simple logic.

It's very similar to the anthropic principle. We know we exist in a universe that permits existence. But we don't say 'well, we know we exist in a universe, that's a brute fact - so our existence must be somehow special and bicameral.' We work to know what kind of processes create universes, and how ours settled on these values. We know that our first-person existence occurs because we live in a universe that permits first person existence, but we don't treat that as exceptional.

Existence is exceptional, in the sense that physicalism cannot explain it. As I've said multiple times, the ultimate question of cosmology ("Why is there something rather than nothing?") is closely related to the hard problem. And just like the hard problem, it cannot be solved or even addressed by physicalism.

In fact, the situation is precisely analogous. When a scientist tries to answer the ultimate question, he says something like, "well, the Big Bang is a result of fluctuations in quantum fields", or some such. But this doesn't address the ultimate question at all, because he's viewing "nothing" as something like "a quantum field void of matter and energy", which is clearly not the philosophical meaning of "nothing".

But what does treating consciousness as some sort of emanation of the luminiferous aether allow us to do? Does it offer any insight into the formation of consciousness? Does it offer any insight into the how and why of consciousness? Does it allow us to make better medicine, better therapies?

The monist approach, at the very least, allows us to say that any or all of these things can be within our grasp, provided we keep studying.


Dualism, to me, always sounds like a manifestation of the god of the gaps. "There must be something special about us because we are capable of metacognition, there must be something unexplainable, unmeasurable, unquantifiable that makes us human"; that's what I hear. Let's just gloss over the fact that we've just introduced an acausal mechanism into a universe that (to the best of our knowledge) can be completely described in terms of a limited set of interactions between physical entities, because consciousness must be special.
To me, that's not acceptable. Certainly not very useful.

Treating consciousness as "an emanation of the luminiferous aether" (which isn't an accurate description of dualism, but never mind) is better than not addressing consciousness at all. A god of the gaps argument would claim that physicalism doesn't currently explain consciousness, and hence consciousness is special. The actual situation is much worse: physicalism doesn't even address consciousness.

We haven't "introduced an acausal mechanism into the universe". The phenomenon is inescapable.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 09:16:04 am
Luis, I'm curious whether you think a set of really verbose quantum field equations describing a human body would be conscious (if worked out, say, by hand in an arbitrarily large colony of scriveners).

Ghyl, I've defined consciousness a lot of times! Consciousness is the experience of qualia due to a physical process in the brain. It is the result of functional computations in the meat. Qualia are the first-person experience of subjectivity, of mental states.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 09:16:57 am
You think that physicalism doesn't address consciousness. I think that physicalism addresses everything, and that nothing is nonphysical.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 19, 2015, 09:24:35 am
Thank you for making that clear.

Now for the rest of my paragraph: if you agree that existence and consciousness are brute facts, and you agree that these are the only brute facts, our definitions must coincide (though mine is more precise). Hence the brain cannot be consciousness, because the brain violates our definition: it is not a brute fact. This is simple logic.
Title: Re: The "hard problem of consciousness"
Post by: The E on October 19, 2015, 09:27:22 am
Treating consciousness as "an emanation of the luminiferous aether" (which isn't an accurate description of dualism, but never mind) is better than not addressing consciousness at all. A god of the gaps argument would claim that physicalism doesn't currently explain consciousness, and hence consciousness is special. The actual situation is much worse: physicalism doesn't even address consciousness.

We haven't "introduced an acausal mechanism into the universe". The phenomenon is inescapable.

But you did! Dualism postulates that consciousness is something that cannot be completely described in terms of physical interactions. But since consciousness has undeniable physical side effects, those effects have to appear acausal from the point of view of a purely physical examiner.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 19, 2015, 09:30:37 am
Luis, I'm curious whether you think a set of really verbose quantum field equations describing a human body would be conscious (if worked out, say, by hand in an arbitrarily large colony of scriveners).

I'll take your silly Chinese Room bait after you answer my points regarding murdering Fork B, kthnks.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 09:32:59 am
I agree that we can begin at 'I exist' as a reasonable starting point. But I don't think that's simple or useful logic. Let me quote a post from upthread...

Quote
I guess I don't see a way to get around this fundamental disconnect right now:

As far as I am concerned, there's no sense talking about anything that doesn't contribute to a single, coherent, unified explanation of everything. 'The only thing we can be sure of is that we're conscious' is something I can agree with — but what do we do with that?

We look around, observe the universe, search out causal logic, and if we eventually arrive at a causal model that begins with nothing and ends up explaining us, including our consciousness, we say 'this model is useful and predictive, and unlike any other, it seems to provide an account that explains everything we see. We thus consider it to be a model of the universe, of which we are a subsystem.'

You seem to say, 'we can do that, but when we're done we shrug and say, well, we might also have ghostly dualist voodoo which has no detectable effect, is unnecessary to explain anything, and is not suggested by anything except our own cultural traditions and desire to believe we're special...but we can't disprove that consciousness is special somehow...'

Those of you who would argue that physicalism will never explain qualia must contend with the fact that it already has. Physicalism says that we each have our own subjectivity for the same reason cameras take pictures from their own perspectives: that's what the machine does. It monitors itself, models itself and others, manipulates symbols in a workspace, applies global states like 'emotion' to modify function in response to adaptive challenges, and generally does a lot of stuff which requires it to have a 'this is me' concept. The brain needs to be able to model itself from someone else's perspective, and to integrate conflicting motor responses, and to do all kinds of **** which, it turns out, we experience as subjectivity. How else would we experience those things? Like the man inside the Chinese Room, blindly manipulating symbols? We're not the man. We're the room.

A brain is a meat machine. You build the machine, you build everything in the brain. We are in our brains. We are meat.

We have a consciousness. We can do a lot of stuff with it. One thing we can do is poke the environment around us and see what happens.

As we do this, we begin to detect causal logic in how the environment behaves. This leads us to the choice between solipsism and objective reality. Solipsism is not useful: it undercuts itself.

Once we have chosen objective reality, we must begin trying to build models of how it works. And when we build a model that, in the end, explains ourselves, when it contains only the necessary and sufficient factors to explain our own consciousness, then we have come full circle. We know what we are, where we come from, and what our minds do. We have demoted consciousness from brute fact to a mundane subsystem of a universe built out of quantum fields, and we know that our own illusory certainty that consciousness comes first is only that: an illusion.

We are the laser told to search for the darkness. Wherever we look, we see qualia, so we assume that qualia have primacy. But consciousness comes last.

This is physicalism. It is the only account of consciousness with any value. It tells us why we have qualia.

Consciousness is a calculation conducted by the human brain. Consciousness is the brain. This is the only logic.
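
To make the "workspace" talk in that quote concrete, here's a toy sketch in Python (hypothetical names; a cartoon, not a cognitive model) of a machine that models itself with the same machinery it uses to model the world:

    class Agent:
        def __init__(self, name):
            self.name = name
            self.workspace = {}          # symbols currently "in play"
            self.emotion = "neutral"     # a global state that modifies function

        def perceive(self, stimulus):
            self.workspace["world"] = stimulus
            if stimulus == "threat":
                self.emotion = "fear"    # adaptive challenge shifts the global state

        def model_self(self):
            # The agent models itself exactly as it would model someone else;
            # this self-symbol is the cartoon version of the "this is me" concept.
            self.workspace["me"] = {"name": self.name, "emotion": self.emotion}

    agent = Agent("anyone")
    agent.perceive("threat")
    agent.model_self()
    print(agent.workspace["me"])   # {'name': 'anyone', 'emotion': 'fear'}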

Luis, I'm curious whether you think a set of really verbose quantum field equations describing a human body would be conscious (if worked out, say, by hand in an arbitrarily large colony of scriveners).

I'll take your silly Chinese room bait after you answer my points regarding murdering Fork B, kthnks.

It's not bait, I'm just curious. I know my answer for sure. Didn't we answer all the fork questions pages back?
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 19, 2015, 09:48:39 am
I don't think I've ever got a sufficiently interesting response to this comment: http://www.hard-light.net/forums/index.php?topic=90258.msg1799328#msg1799328

e: Regarding the Chinese room thing, I have to reiterate my attitude regarding these things, which is to say that I have no idea how it works. Now, don't confuse stuff. To say I don't know exactly how the "Chinese room" can produce consciousness is not the same as saying that I don't know if you can simply transfer this Chinese room to another office and still call it the "same" consciousness, without dying. IOW, even if it all seems to fit engineering-wise, from the self's point of view, I have no way to know if the pre-teleported person is just going to die, full stop, or not.

These are different questions. For the sake of the discussion I'm willing to accept we are all "Chinese rooms", but with a design which utterly escapes us. Just yesterday, for instance, I learnt that a well-connected neuron can have 15 thousand synapses. *One* neuron. IDK, the scales are somewhat incredible.
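
Back of the envelope, in Python (the neuron count is a rough public figure, not something from this thread):

    neurons_in_brain = 86e9   # rough public estimate, ~8.6 x 10^10 neurons
    synapses_per     = 15e3   # the per-neuron figure I just mentioned
    print(f"{neurons_in_brain * synapses_per:.1e}")   # ~1.3e+15 connections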
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 10:09:41 am
Yeah, that's good ****. Okay —

Quote
A) Can the destruction of "Fork B" be called "murder"? It seems that it can. If this person is not exterminated, he can continue existing in the world; if he is exterminated, the Cosmos is accounting for one less Consciousness in it. There's blood on the ground, there's a killer, there is an energy discharged to destroy this fork. I don't see how it is not murder;

 B) Is the murder of "Fork B" dependent on the pre-knowledge of Fork B that he is going to die? That is, does the determination that what happens to him is murder depend on the words the scan operator tells you? If the scan operator is silently killing you, does that stop being murder? Clearly, that is ridiculous. People don't get out of jail sentences for silently killing people.

 C) Is the murder of "Fork B" dependent on the speed of his death? If he is immediately killed after the scan, can we say that the operator is therefore innocent of his actions? This is absurd: if he does not kill the Fork, the Fork lives as the laws of physics allow him to. It is the action by the operator that causes the extermination of the Fork. The speed with which he does this is irrelevant: he could wait an hour, he could wait a minute, he could wait a second, he could wait a micro-second. The murder is murder nevertheless; a Consciousness *has* been eradicated nevertheless.

 D) Something has been hinted about the "suffering" of Fork B. Oh, the humanity, so concerned with the "suffering". It's a total strawman. You can easily depict a scenario where this person was given a drug before being scanned that prevented his psychological suffering. The administration of this drug does not absolve anyone from the crime of murdering him.

A) Sure, I guess you could call it murder, but I think that's because our concepts of 'death' and 'murder' aren't adapted to the realities of what we are — not single continuous entities with persistent existence, but a mind-state that changes every instant. I'd call this eradication of Fork B the same as day-to-day life (my favorite argument horse). Your mind state yesterday could've gone on living in any number of ways. But it was overwritten, it was lost, and only one way survived. What about the other ways it could have gone? They were lost.

I don't think you'll find this satisfactory: Fork B has no causal descendants if vaporized after Fork A diverges. If you find that distasteful, it's pretty simple to ensure that Fork A only comes into existence after Fork B is gone. But to be honest, I simply don't think that the loss of a couple milliseconds (or even seconds) of existence is a major loss, or constitutes subjectivity death. I would use that 'roll your brain back ten seconds' machine with only a very little thrill of worry.

In my mind, our qualia and subjectivity emerge from the physical brain. Consciousness follows mechanism. If we wind the mechanism back, if we get drunk, if we take a head blow or get viral encephalitis, even if we get vaporized and then rebuilt from a couple seconds ago, we change. But we don't die.

B and C) These are good questions, and really interesting. I think they probe at the inadequacies of current ethics. If we do not tell the scan target he's about to be scanned and duplicated, no distress will occur, even if we wait ten seconds to vaporize one fork. The pre-scan individual has survived. One of the post-scan forks has died. I'll get into whether this is murder after I handle speed.

As for speed: it's all a matter of what you think about divergence. I'm not worried about the fact that the fork could continue to exist, because your mind state yesterday might have continued to exist in any number of ways but it only got one. But I am, like you, uncomfortable with the idea of letting a fork diverge and then killing it.

Here is my cold-blooded fork answer: the person who steps into the teleporter can be absolutely assured that they will live. If one fork persists for long enough between the scan and the vaporization to experience subjectivity, then any divergence they undergo will be irretrievably lost. I'd probably be okay with this, but I think many wouldn't, and for that reason I think the teleporter should be built to avoid it. How long is too long? I don't know. You'd argue that any time is too long. You might be right.

D) Why not be concerned with suffering? If I am tranquilized and rendered unconscious before I go into the teleporter, I'm honestly...pretty okay with that, even if one of my surviving forks is vaporized hours later (as long as I don't wake up). All I'm scared of is the subjectivity of knowing that I'm about to die and leave no causal descendants.

Imagine that I fall asleep one night and develop an alternate personality in my dreams. I live a hundred years in dreamland. However, none of these experiences make it out of my short-term memory buffer, and when I wake up they have no causal effect on me. I never know I had this other life. From the post-dream fork's perspective, what was lost? Nothing. From the dream fork's perspective, what was lost? They lived, they died, but they never thought 'oh no, I am about to be murdered.'

What we are afraid of, all of us, is not the actual fact of stopping, failing to propagate our brainstate forward. Whatever. That's not inherently frightening. What we are afraid of is creating forks who have to live with this knowledge, right? Or even creating forks who have experiences that will be lost, even if they don't know they're about to die. We don't want to be that person because we will definitely be that person, even as we are also definitely going to be the other fork.

If I'm not conscious when I'm teleported/scanned/whatever, I can be sure there are no qualia getting lost. When my qualia reboot, they'll be rebooting from only my surviving fork. I'm cool with that.

Is that too freaky and posthuman to be sympathetic? I can understand how it'd be freaky and posthuman and disturbing. But I think it's an unintuitive but inevitable consequence of looking at this rigorously.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 10:11:21 am
Some of that post might not be totally nailed down to rigor, I might have to cogitate on it a bit more. Those were great questions, Luis, thank you.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 10:37:06 am
I really don't expect anyone to agree with me on 'it's cool to be tele-copied and then vaporized, as long as you're asleep', and probably you are right to disagree, I gotta chew on that one. It violates a lot of my own firmly monist principles.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 19, 2015, 10:45:29 am
So, just to clarify two things:

1. My issue with "suffering" as not mattering is not to say that I don't care about suffering as a thing, rather that I don't care about it as an argument.

2. My thing about the whole "brain creates consciousness, so it's fine if we build a new brain with the exact same brain state elsewhere", is not that I'm "worried" about forks or what happens in the meanwhile or whatever. What worries me is that I just end. That is, there is nothing in Physicalism that guarantees that my own, personal experiencing of living will continue in a very similar brain elsewhere. I don't give a rat's ass if that brain is exactly like mine, behaving like I do, having a consciousness equal to mine.

My problem is simpler: Is it me doing the travelling or is it me giving birth to a clone to the place I bought a ticket into?

Physicalism - as you seemingly define it - treats both stories as one and the same, while I do think that the difference is between you experiencing death and blankness forever (while giving birth to a clone) and actually travelling to the other side of the wall.


Physicalism tells me there's no real difference between my own conscious experience and any other's. Except that I'm stuck in mine. Everyone's stuck in their own. And this stuckness is ill-defined as yet. We still do not understand it very well. We might say "Oh but you see it's all due to what is connected to your synapses and so on", ok, sure, it's still very unclear.

So my point is, if I'm stuck in my Consciousness, and I'm the Person who is being scanned and killed, then it logically follows that my Consciousness is stuck to die. The only technological miracle that is independent of this basic murder story is that, somehow, through a process, a new Consciousness will be born out of an equal Chinese room in another room.

From the point of view of that guy who is just 2 seconds old, it's good. It's been fun! He just travelled thousands of whatever, closer to his goals. He will even gladly pay what he owes and shake the hands of the operators. And when he comes back again in a week, his life span will have been merely a week.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 19, 2015, 10:51:46 am
Treating consciousness as "an emanation of the luminiferous aether" (which isn't an accurate description of dualism, but never mind) is better than not addressing consciousness at all. A god of the gaps argument would claim that physicalism doesn't currently explain consciousness, and hence consciousness is special. The actual situation is much worse: physicalism doesn't even address consciousness.

We haven't "introduced an acausal mechanism into the universe". The phenomenon is inescapable.

But you did! Dualism postulates that consciousness is something that cannot be completely described in terms of physical interactions. But since consciousness has undeniable physical side effects, consciousness has to appear acausal from the point of view of a purely physical examiner.

Yes, dualism postulates that consciousness cannot be completely described in terms of physical interactions. (In fact, I would argue that this is a logical necessity.) It does not, however, "postulate" the existence of consciousness: this is a brute fact. I think our disagreement is purely terminological.
----------
@Battuta: You're still claiming that consciousness is the brain, ignoring my counterargument.

Now for the rest of my paragraph: if you agree that existence and consciousness are brute facts, and you agree that these are the only brute facts, our definitions must coincide (though mine is more precise). Hence the brain cannot be consciousness, because the brain violates our definition: it is not a brute fact. This is simple logic.

Here's what I think is going on. You observe that changes in consciousness are always accompanied by changes in the brain. You then jump to the conclusion that consciousness is the brain (which is certainly tempting, because it "explains away" the hard problem). But this you cannot do, because the brain violates our agreed-upon definition of consciousness.

In short, you're trying to fit a square peg into a round hole. This is exactly what I mean by first-level and third-level knowledge.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 11:08:02 am
No, I've explained how we get from consciousness as brute fact to consciousness as physical.

Quote
We have a consciousness. We can do a lot of stuff with it. One thing we can do is poke the environment around us and see what happens.

As we do this, we begin to detect causal logic in how the environment behaves. This leads us to the choice between solipsism and objective reality. Solipsism is not useful: it undercuts itself.

Once we have chosen objective reality, we must begin trying to build models of how it works. And when we build a model that, in the end, explains ourselves, when it contains only the necessary and sufficient factors to explain our own consciousness, then we have come full circle. We know what we are, where we come from, and what our minds do. We have demoted consciousness from brute fact to a mundane subsystem of a universe built out of quantum fields, and we know that our own illusory certainty that consciousness comes first is only that: an illusion.

We are the laser told to search for the darkness. Wherever we look, we see qualia, so we assume that qualia have primacy. But consciousness comes last.

This is physicalism. It is the only account of consciousness with any value. It tells us why we have qualia.

Consciousness is a calculation conducted by the human brain. Consciousness is the brain. This is the only logic.

This is not jumping to a conclusion, unless you call the entire development of human science 'jumping to conclusions'. Rather, it is using our initial 'brute fact' definition to explore the possibility space for more information, locating one thread that works, and using it to update our definition of consciousness.

We have discovered that our peg is, in fact, square. It was never round at all.

Quote
2. My thing about the whole "brain creates consciousness, so it's fine if we build a new brain with the exact same brain state elsewhere", is not that I'm "worried" about forks or what happens in the meanwhile or whatever. What worries me is that I just end. That is, there is nothing in Physicalism that guarantees that my own, personal experiencing of living will continue in a very similar brain elsewhere. I don't give a rat's ass if that brain is exactly like mine, behaving like I do, having a consciousness equal to mine.

My problem is simpler: Is it me doing the travelling or is it me giving birth to a clone to the place I bought a ticket into?

Physicalism - as you seemingly define it - treats both stories as one and the same, while I do think that the difference is between you experiencing death and blankness forever (while giving birth to a clone) and actually travelling to the other side of the wall.

Physicalism tells me there's no real difference between my own conscious experience and any other's. Except that I'm stuck in mine. Everyone's stuck in their own. And this stuckness is ill-defined as yet. We still do not understand it very well. We might say "Oh but you see it's all due to what is connected to your synapses and so on", ok, sure, it's still very unclear.

So my point is, if I'm stuck in my Consciousness, and I'm the Person who is being scanned and killed, then it logically follows that my Consciousness is stuck to die. The only technological miracle that is independent of this basic murder story is that, somehow, through a process, a new Consciousness will be born out of an equal Chinese room in another room.

From the point of view of that guy who is just 2 seconds old, it's good. It's been fun! He just travelled thousands of whatever, closer to his goals. He will even gladly pay what he owes and shake the hands of the operators. And when he comes back again in a week, his life span will have been merely a week.

Yeah, I get you. But I am making an end run around this whole problem. I think that our 'stuckness' is simply a story we tell ourselves because we are never forced to confront what we really are: the epiphenomenal experience of being a material brain, one Planck instant at a time. We actually aren't stuck. We are endlessly teleporting: giving birth to a clone who travels forward one tick.

I believe that everything we are, including our own subjective experience of being me, having been me since I was born — in short, our credentials, our qualia — is physical. If we rebuild the physical we rebuild that subjective experience.

This is why it is important to remember that we can only claim continuous identity retrospectively. We can say 'tomorrow I will', but we cannot remember ourselves doing it. We are only planning. Our credentials haven't been established yet.

To your worry about the man who takes a teleporter trip and lives only a week, I would say that he lives far longer than the man who sleeps, and lives only a day. He lives longer than the man who gets blackout drunk, and exists as a drunken and transient bubble of joy and vomit for only half an hour before he passes out.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 11:10:27 am
We are not one core identity moving forward through time. We're more like a reel of film. Pull out one instantaneous frame and let it pop, like a hologram, and from it emerge all the qualia and sense of me-ness that existed at that moment. In that frame is consciousness. But the consciousness is within the frame, not vice versa: it emerges only from the physical states of the brain.
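
Here's the same picture as a toy Python sketch (hypothetical names; a cartoon, not a model of anything): each frame is a complete physical state, and 'identity' is nothing but each new frame's memory of the ones before it.

    from dataclasses import dataclass, field

    @dataclass
    class Frame:
        state: str
        memories: list = field(default_factory=list)

    def tick(frame: Frame, new_state: str) -> Frame:
        # The successor frame is built fresh; nothing "moves" between frames.
        return Frame(new_state, frame.memories + [frame.state])

    me = Frame("waking up")
    me = tick(me, "reading HLP")
    me = tick(me, "posting")
    # Continuity is claimed only retrospectively, by reading the memories:
    print(me.memories)   # ['waking up', 'reading HLP']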
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 19, 2015, 11:26:56 am
Yeah, I get you. But I am making an end run around this whole problem. I think that our 'stuckness' is simply a story we tell ourselves because we are never forced to confront what we really are: the epiphenomenal experience of being a material brain, one Planck instant at a time. We actually aren't stuck. We are endlessly teleporting: giving birth to a clone who travels forward one tick.

Well that's fine. That's a good story. I never said it wasn't a coherent story. I'd gladly read that novel and have a real blast with it (perhaps I've done so in all Star Trek series so far). What I said was, you have no way to test if that story is true. It is therefore not physics. It's not science. At most, it's metaphysics, and a tad in need of rigorous checks. It does seem to run into basic Aristotelian roadblocks of essences and forms, for instance.

The testability is fundamental here. Look at your next paragraphs:

Quote
I believe that everything we are, including our own subjective experience of being me, having been me since I was born — in short, our credentials, our qualia — is physical. If we rebuild the physical we rebuild that subjective experience.

This is why it is important to remember that we can only claim continuous identity retrospectively. We can say 'tomorrow I will', but we cannot remember ourselves doing it. We are only planning. Our credentials haven't been established yet.

To your worry about the man who takes a teleporter trip and lives only a week, I would say that he lives far longer than the man who sleeps, and lives only a day. He lives longer than the man who gets blackout drunk, and exists as a drunken and transient bubble of joy and vomit for only half an hour before he passes out.

I like how they are phrased now. They espouse your beliefs. But consider this: when you speak of how such a person "will live longer than the man who gets blackout drunk" and so on, I have the sensation I'm not reading anything really rational or scientific, merely poetical. As an analogy, it's like reading someone saying "I'm going to live forever through my sons and daughters, I'll live forever through my work". At the end of the day, if I'm to decide whether to teleport myself or not, I will ponder all these poetical things only after considering the true existence of my stuck Consciousness in that world. Like Woody Allen said:

“I don't want to achieve immortality through my work; I want to achieve immortality through not dying. I don't want to live on in the hearts of my countrymen; I want to live on in my apartment.”

Substitute "work" for clones, substitute "hearts of my countrymen" for "teleported copies of my own".
Title: Re: The "hard problem of consciousness"
Post by: Scotty on October 19, 2015, 11:39:59 am
GhylTarvoke: In the metaphor of the button and the circuitry, the brain is not the button and consciousness is not the circuitry.  Qualia, experiences, are the button.  The brain is the circuitry.  Consciousness is the outcome.

I've noticed that this is something you've continually misapplied as an argument in your favor that there must be something else.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 19, 2015, 11:40:11 am
That's all very nice, and completely dodges the question. I'm not sure how to say it more plainly, but I'll try.
----------
Assumptions
1. "Existence", "consciousness", and "brain" are meaningful words.
2. Whatever they are, existence and (the existence of) consciousness are brute facts.
3. The existence of brains is not a brute fact.
4. "Bruteness" is a property.
5. If two things do not have the same properties, then they are not the same.

If you're denying 1 or 2 (and I'm pretty sure you're not), we have reached an impasse.
If you're using the standard definition of "brain", 3 is clear. We may be digital simulations, and hence have no brains.
You've referred to brute facts repeatedly, so I'm pretty sure you're not denying 4.
If you're denying 5, you must be using an extremely nonstandard definition of "sameness".

Claim: With these assumptions, consciousness cannot be the brain.
Proof:
By 1, the words "existence", "consciousness", and "brain" are meaningful, and we can use them.
By 2, consciousness is a brute fact. By 3, the existence of brains is not.
By 4, bruteness is a property. Hence consciousness has a property (bruteness) that the brain does not.
By 5, this implies that consciousness and the brain are not the same.
----------
The proof is logically valid, so if you disagree with the conclusion, you must disagree with one of the premises. I can't figure out which one you disagree with.
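
If it helps: the skeleton of the proof is nothing but Leibniz's law. Here is my transcription as a sketch in Lean 4 notation (the names are mine, and "bruteness" is left as an opaque predicate; the logic is exactly the five assumptions above):

    -- Assumption 1: the words are meaningful (here: they denote terms of a type).
    variable {Thing : Type}
    variable (consciousness brain : Thing)
    -- Assumption 4: bruteness is a property (a predicate on things).
    variable (brute : Thing → Prop)

    -- Assumptions 2 and 3 are hypotheses; assumption 5 (Leibniz's law)
    -- is exactly what the rewrite `▸` provides.
    theorem consciousness_ne_brain
        (h2 : brute consciousness)
        (h3 : ¬ brute brain) :
        consciousness ≠ brain :=
      fun h => h3 (h ▸ h2)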

EDIT: This is in response to Battuta.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 19, 2015, 11:44:47 am
Battuta is not equating Consciousness to Brains prima facie. He's concluding that Consciousness = Brains through several findings both a priori and empirical.

I don't agree with the sentence "Consciousness = Brain", but I think he was just sacrificing rigor for summary there. The point he's making is that Consciousness is a product of the Brain, it's a physical process that happens within the Brain.
Title: Re: The "hard problem of consciousness"
Post by: Scotty on October 19, 2015, 11:51:15 am
Indeed.  A brain is not a consciousness.  A brain is a mechanism for interpreting experiences and producing consciousness from that data.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 11:52:36 am
Well that's fine. That's a good story. I never said it wasn't a coherent story. I'd gladly read that novel and have a real blast with it (perhaps I've done so in all Star Trek series so far). What I said was, you have no way to test if that story is true. It is therefore not physics. It's not science. At most, it's metaphysics, and a tad in need of rigorous checks. It does seem to run into basic Aristotelian roadblocks of essences and forms, for instance.

The testability is fundamental here. Look at your next paragraphs:

I like how they are phrased now. They espouse your beliefs. But consider this: when you speak of how such a person "will live longer than the man who gets blackout drunk" and so on, I have the sensation I'm not reading anything really rational or scientific, merely poetical. As an analogy, it's like reading someone saying "I'm going to live forever through my sons and daughters, I'll live forever through my work". At the end of the day, if I'm to decide whether to teleport myself or not, I will ponder all these poetical things only after considering the true existence of my stuck Consciousness in that world. Like Woody Allen said:

“I don't want to achieve immortality through my work; I want to achieve immortality through not dying. I don't want to live on in the hearts of my countrymen; I want to live on in my apartment.”

Substitute "work" for clones, substitute "hearts of my countrymen" for "teleported copies of my own".

I think these things are as testable as whether we'll still be ourselves tomorrow, which is the only criterion we need. If I sound poetical it's because our ideas of 'self', 'dying', 'consciousness' and so on are just poetry — dressed-up terms to disguise the fact that we're reels of film.

I do not share your fear that living on through teleporters is like living on through your work. If you rebuild the object, you rebuild the subject. This is not an untestable article of faith but the default conclusion of every single piece of evidence we have about the universe. I do not see any risk to the philosophical teleporter.

That's all very nice, and completely dodges the question. I'm not sure how to say it more plainly, but I'll try.
----------
Assumptions
1. "Existence", "consciousness", and "brain" are meaningful words.
2. Whatever they are, existence and (the existence of) consciousness are brute facts.
3. The existence of brains is not a brute fact.
4. "Bruteness" is a property.
5. If two things do not have the same properties, then they are not the same.

If you're denying 1 or 2 (and I'm pretty sure you're not), we have reached an impasse.
If you're using the standard definition of "brain", 3 is clear. We may be digital simulations, and hence have no brains.
You've referred to brute facts repeatedly, so I'm pretty sure you're not denying 4.
If you're denying 5, you must be using an extremely nonstandard definition of "sameness".

Claim: With these assumptions, consciousness cannot be the brain.
Proof:
By 1, the words "existence", "consciousness", and "brain" are meaningful, and we can use them.
By 2, consciousness is a brute fact. By 3, the existence of brains is not.
By 4, bruteness is a property. Hence consciousness has a property (bruteness) that the brain does not.
By 5, this implies that consciousness and the brain are not the same.
----------
The proof is logically valid, so if you disagree with the conclusion, you must disagree with one of the premises. I can't figure out which one you disagree with.

EDIT: This is in response to Battuta.

It answers the question nose on. We begin with 1, proceed to 2, between 2 and 3 we conduct a search for systems of logic that use 1 and 2 to explain our perceptions. We stumble on mathematics, physics, and all their consequences: the belief in an objective reality that obeys causal logic. We reach 3 knowing that brains are consciousness: the existence of brains is the same as the existence of consciousness. We are our brains, and whatever we are is material. Any brute facts of our existence are material. The entire universe and all its rules are physical. Failing to accept this sends us back to the search between 2 and 3, which we repeat, and find no better (necessary and sufficient) model to explain our own existence.

qed~

e: seeing Luis and Scotty's posts I will happily say that consciousness is a material process within the brain, a function. It's true that the whole brain isn't devoted to consciousness. All consciousnesses are in brains.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 11:56:32 am
Even if we're digital simulations, we still have brains; the simulation is computing us as little blobs of flesh. Brains are as brutally factual (:megadeth:) as consciousness.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 19, 2015, 12:40:06 pm
I think these things are as testable as whether we'll still be ourselves tomorrow, which is the only criterion we need. If I sound poetical it's because our ideas of 'self', 'dying', 'consciousness' and so on are just poetry — dressed-up terms to disguise the fact that we're reels of film.

But surely you see the problem there: whenever terms are poetic, that means we have no real synthetic / scientific grasp on them, let alone engineering-wise. I see this as a gradient from "Mere Intuition -> Insight -> Poetic semantics -> Insight -> Philosophizing -> Pre-scientific terminology -> Insight -> More grounded Philosophizing -> Hypothesis -> Testability, testing, tinkering, empirical feedback -> Scientific terminology -> Technical insight -> Engineering -> Technology".

When you tell me to dismiss all this poetic language because what "we really are" (which is a phrase that should be followed by some technical thing) is reels of film, it just shows how deep in the shadows of Plato's Cave we really are in discussing these things. No, we're not "reels of film", although I do get your "poetic point" - isn't it all we have at this point anyway?

Quote
I do not share your fear that living on through teleporters is like living on through your work. If you rebuild the object, you rebuild the subject. This is not an untestable article of faith but the default conclusion of every single piece of evidence we have about the universe. I do not see any risk to the philosophical teleporter.

You are claiming that the continuation of my own Consciousness, which prima facie can be said to be stuck inside this brain of mine, can be followed through the destruction and re-building of its support, anywhere, any time. That's fine. What's not fine is your claim that this is "the default conclusion". As established by whom? As tested by whom? How can you ever test this thing?

To recap, let's just focus on the testability of this. Imagine two possible metaphysics here:

1. Consciousness works like you say and you can transfer it through teleportation (let's ignore the unanswered murder aspects):

    a. I go to a room, and my brain and body are scanned;
    b. I'm immediately executed, and the information is passed through a channel to its proper place;
    c. A clone is built and all the required information is inserted;
    d. This clone wakes up and you ask this clone: "Who are you, and are you really *you*, the one who just came from the other side?"
    e. The clone will answer unequivocally: "Of course I am, I feel myself, I am conscious, I remember I was just scanned and here I am now, now let me go about my business will you?"

2. Consciousness does *not* work like you say, but it is "copiable". In this metaphysical scenario, every time you kill a brain, you kill a Consciousness:

    a. I go to a room, and my brain and body are scanned;
    b. I'm immediately executed, and the information is passed through a channel to its proper place;
    c. A clone is built and all the required information is inserted;
    d. This clone wakes up and you ask this clone: "Who are you, and are you really *you*, the one who just came from the other side?"
    e. The clone will answer unequivocally: "Of course I am, I feel myself, I am conscious, I remember I was just scanned and here I am now, now let me go about my business will you?"

There's no way to distinguish the two scenarios. There's no way to test it. And that's why you'll be left with mere beliefs. Always. But science does not deal with beliefs. It deals with predictions and tests. Replication. Falsification. None of what you have said meets these criteria; therefore it is not Science, it is just... your beliefs.
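
You can even write the two metaphysics down as code (a toy sketch in Python, all names hypothetical): they compile to literally the same procedure, so no measurement on the output will ever separate them.

    from copy import deepcopy

    def teleport_metaphysics_1(person):
        # "The Consciousness travels along with the information."
        scan = deepcopy(person)   # a. scan
        person.clear()            # b. immediate execution
        return scan               # c-e. the clone wakes up and answers as above

    def teleport_metaphysics_2(person):
        # "A new Consciousness is born; the old one simply ends."
        scan = deepcopy(person)
        person.clear()
        return scan

    traveller = {"memories": ["I was just scanned, and here I am now"]}
    out1 = teleport_metaphysics_1(deepcopy(traveller))
    out2 = teleport_metaphysics_2(deepcopy(traveller))
    assert out1 == out2   # no experiment on the output tells the stories apart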
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 01:08:38 pm
But surely you see the problem there: whenever terms are poetic, that means we have no real synthetic / scientific grasp on them, let alone engineering-wise. I see this as a gradient from "Mere Intuition -> Insight -> Poetic semantics -> Insight -> Philosophizing -> Pre-scientific terminology -> Insight -> More grounded Philosophizing -> Hypothesis -> Testability, testing, tinkering, empirical feedback -> Scientific terminology -> Technical insight -> Engineering -> Technology".

When you tell me to dismiss all this poetic language because what "we really are" (which is a phrase that should be followed by some technical thing) is reels of film, it just shows how deep in the shadows of Plato's Cave we really are in discussing these things. No, we're not "reels of film", although I do get your "poetic point" - isn't it all we have at this point anyway?

There are lots of words and concepts that refer to useful human abstractions — things that aren't real, but which represent concepts it's not useful to reduce to their actual components. 'Morality', for instance, is easier to talk about than 'sets of rules which, when obeyed, produce socially adaptive behavior.' 'Society' is easier to talk about than 'networks of human interaction which do not exist outside of individuals but appear to have influence on behavior that goes beyond the individual.' See where I'm rolling? 'Consciousness' is a lot easier to say than 'the retrospective illusion of continuity created by memory.'

Quote
There's no way to distinguish the two scenarios. There's no way to test it. And that's why you'll be left with mere beliefs. Always. But science does not deal with beliefs. It deals with predictions and tests. Replication. Falsification. None of what you have said meets these criteria; therefore it is not Science, it is just... your beliefs.

These are cleanly designed scenarios, and I wholly understand your point, but I think this actually points back to the utility of ~my model~. Consciousness is always claimed retrospectively. It is never claimed prospectively. It doesn't matter whether scenario 1 or 2 is true: they are the same! The only valid claim of consciousness we ever experience is a 'copy' looking backwards and saying 'Yes, I am still me.'
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 19, 2015, 01:23:56 pm
e: Regarding the retrospective scenario, I'll ponder over it. I think it's an evasion of my point, but a clever one.

The only way you can arrive at that conclusion is through deduction, but I don't see us having enough knowledge of the world to do such a thing. What I do recognize you doing is making a set of presuppositions based on some other presuppositions, making some good deductions, and inferring (inducing) other, less clear concepts from other concepts. I do think you're doing the best you can, but unless a proof comes that you aren't letting something "off the hook", I'm perfectly willing to believe there are millions of (unseen as yet) loopholes that could render your idea wrong.

Quote
See where I'm rolling? 'Consciousness' is a lot easier to say than 'the retrospective illusion of continuity created by memory.'

Thing is, the latter is gruesomely grotesque still. It assumes an illusion. It assumes someone is being "fooled". In a half-baked manner, the semantics point to Cartesian theatrics, even if you didn't mean them to (obviously you didn't). You say "created by memory", and I get what you mean (different levels of memory, the most basic one giving you the impression of continuity, etc.), but these are merely low-level mechanisms. Leaves. Are you sure you are understanding the forest, or just clinging to the latest neuro-scientific findings to feel you have a grasp on what is going on?

I do think that this Being that Is, This Thing that I cannot but Be, isn't reflected anywhere in Science. I think that Science merely tells me how other things are related to this Being that Is Me, how the world around me can affect me through material ends, whatever. I will admit it will even say how the "Me" works, in a physical manner. It will predict my behavior and all my experiences. But I'm still Me, and that experience is ineffable from the scientific point of view.

(I said this previously, but) I do think there are some paths to poke this apparently unbreakable gap, and we will arrive at very curious experiments in that direction in a couple of decades.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 02:02:34 pm
I think our disagreements are clear. I'd like to hear your thoughts about experiments!

I was going to ask Ghyl what he thought of the Cotard delusion and knowing for sure that you don't exist.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 02:36:26 pm
This is a good threadnaught.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 19, 2015, 04:17:40 pm
I can't believe I now find it necessary to address the meaning of the word "is".

The point he's making is that Consciousness is a product of the Brain, it's a physical process that happens within the Brain.

Then Battuta is not being clear or even honest when he says "consciousness is the brain". If he means, "consciousness is a product of the brain", then I agree 100%; I have never said otherwise.

Indeed.  A brain is not a consciousness.

Hooray!

It answers the question nose on. We begin with 1, proceed to 2, between 2 and 3 we conduct a search for systems of logic that use 1 and 2 to explain our perceptions. We stumble on mathematics, physics, and all their consequences: the belief in an objective reality that obeys causal logic. We reach 3 knowing that brains are consciousness: the existence of brains is the same as the existence of consciousness. We are our brains, and whatever we are is material. Any brute facts of our existence are material. The entire universe and all its rules are physical. Failing to accept this sends us back to the search between 2 and 3, which we repeat, and find no better (necessary and sufficient) model to explain our own existence.

You are once again not addressing the post. Do you understand the concept of a proof? You have only three options, none of which you have yet taken: 1) show that the proof is not logically valid; 2) accept that "consciousness is the brain" is false; and 3) deny one of the assumptions.

... unless your postscript is your way of taking option 2. If so, we are finally making progress.

Even if we're digital simulations, we still have brains; the simulation is computing us as little blobs of flesh. Brains are as brutally factual (:megadeth:) as consciousness.

I was hoping we could avoid this, but I must now ask you to define "brain". I think this will reveal your error.

I was going to ask Ghyl what he thought of the Cotard delusion and knowing for sure that you don't exist.

The hint is in the name. If they do exist, then those people are deluded. What's the problem?
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 04:24:04 pm
Then Battuta is not being clear or even honest when he says "consciousness is the brain". If he means, "consciousness is a product of the brain", then I agree 100%; I have never said otherwise.

This feels like sophistry. Consciousness is in the brain. It is only meat. It is entirely physical.

Quote
You are once again not addressing the post. Do you understand the concept of a proof? You have only three options, none of which you have yet taken: 1) show that the proof is not logically valid; 2) accept that "consciousness is the brain" is false; and 3) deny one of the assumptions.

Writing five bullet points does not a proof make. Your logic's broken! I fixed it by pointing out that we can begin at a 'brute truth' and use that truth to derive a model of the universe which qualifies and corrects our starting point.

Quote
I was hoping we could avoid this, but I must now ask you to define "brain". I think this will reveal your error.

https://en.wikipedia.org/wiki/Brain

A brain is a brain. The substrate is irrelevant as long as the constituents behave functionally. This is elementary physicalism: what matters is not the raw stuff, the bits in the ship of Theseus, but their behavior. One atom can be exchanged for another. An atom can be exchanged for a nanite. A neuron can be replaced by a synthetic alternate or a simulation in a computer. It's all a brain.

This is why the teleporter is safe. Atoms are interchangeable.
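
A toy Python sketch of that claim (hypothetical names, nothing like real neurons): swap one unit for another with the same input/output behavior and the network cannot tell.

    class MeatNeuron:
        def fire(self, x: float) -> float:
            return 1.0 if x > 0.5 else 0.0

    class SiliconNeuron:
        def fire(self, x: float) -> float:
            return float(x > 0.5)   # different substrate, same input/output behavior

    def run(chain, signal: float) -> float:
        for neuron in chain:
            signal = neuron.fire(signal)
        return signal

    meat  = [MeatNeuron(), MeatNeuron(), MeatNeuron()]
    mixed = [MeatNeuron(), SiliconNeuron(), MeatNeuron()]
    assert run(meat, 0.9) == run(mixed, 0.9)   # the swap is functionally invisible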

Quote
The hint is in the name. If they do exist, then those people are deluded. What's the problem?

No. By your argument these people have access to a brute truth. The only thing they can be sure of is that they don't exist.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 19, 2015, 04:40:34 pm
Consciousness is in the brain.

This is a separate claim, which I'll address later. "Consciousness is the brain" is provably false (unless you take option 1 or 3), and you've been making that statement again and again.

Writing five bullet points does not a proof make. Your logic's broken! I fixed it by pointing out that we can begin at a 'brute truth' and use that truth to derive a model of the universe which qualifies and corrects our starting point.

Is this option 1? If so, please pinpoint the logical error in the proof. Again, when arguing against a proof, you only have three options. (Don't make me define "proof".)

Quote
I was hoping we could avoid this, but I must now ask you to define "brain". I think this will reveal your error.

A brain is a brain. The substrate is irrelevant as long as the constituents behave functionally. This is elementary physicalism: what matters is not the raw stuff, the bits in the ship of Theseus, but their behavior. One atom can be exchanged for another. An atom can be exchanged for a nanite. A neuron can be replaced by a synthetic alternate or a simulation in a computer. It's all a brain.

This is why the teleporter is safe. Atoms are interchangeable.

If your definition of a brain is seriously "a brain", I despair.

Quote
The hint is in the name. If they do exist, then those people are deluded. What's the problem?

No. By your argument these people have access to a brute truth. The only thing they can be sure of is that they don't exist.

There are only two brute truths: existence, and consciousness. I thought we agreed on this.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 04:53:50 pm
Your argument here is 'it's this way because I say it is'. Your proof isn't a proof, consciousness is wholly and only physical, we both know exactly what a brain is, and your 'brute truth' definition needs updating if it can't account for those whose conscious experience is a self-negating insistence that they don't exist. Apparently the brute truth is actually contingent on physical circumstances!
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 04:55:08 pm
We've goalposted from 'teleporters might destroy your qualia' to debates about wording and definitions. What exactly are you reading in 'consciousness is the brain' if it's not 'consciousness is monist and physical'?
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 04:59:24 pm
Actually I have a good idea for another tack if we continue to make no progress here. It will involve ~TMS GUNS~
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 19, 2015, 05:11:07 pm
No goalposts are moving. There are bigger fish to fry, but I didn't expect such a hangup on this fairly simple point.

You've made the following statement over and over, apparently using it as a mantra or a substitute for an actual argument: "Consciousness is the brain". (I don't feel like going back and counting the instances of this statement.) Hence, the truth of this statement is extremely relevant.

Definitions are absolutely necessary for discussion. I think we've succeeded in defining "consciousness" precisely, which is great. If your definition of "brain" is either "brain" or a link to a Wikipedia page, there's a serious problem, especially since you seem to be using the word in an unusual way.

Finally, you've once again taken a fourth option by saying, "it's not a proof". Are you really forcing me to define "proof" here? This is the foundation of logical argument.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 05:15:17 pm
What are you trying to discuss here? Dualism versus monism, or semantics? You know exactly what point I'm making about consciousness, and you have my objection to your proof: point 1 disproves itself as soon as you use it to examine the world. Engage or change topics.

What kind of dualist do you identify as, exactly?
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 05:23:10 pm
Are you concerned that consciousness cannot be the brain because there are segments of the brain that are not functionally accessible to consciousness? That seems a perfectly fair clarification to me, if not exactly a major logical stumbling point — it's just a matter of how you parse the wording.

This is what I'd point to as clarification, from a way back:

"Consciousness is a calculation conducted by the human brain. Consciousness is the brain. This is the only logic."

I might even insert 'Calculations are physical processes in which inputs are manipulated to produce a result.'
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 19, 2015, 06:17:49 pm
What are you trying to discuss here? Dualism versus monism, or semantics? You know exactly what point I'm making about consciousness, and you have my objection to your proof: point 1 disproves itself as soon as you use it to examine the world. Engage or change topics.

What kind of dualist do you identify as, exactly?

I'm trying to discuss consciousness (or dualism vs. monism, if you like), but this is very difficult if we throw semantics out the window.

Phew, finally. Okay. I assume that by "point 1" you mean assumption 1, so you're taking the third option: namely, rejecting an assumption. (This rigmarole could have been avoided with the simple statement: "I reject assumption 1.") But as I said in the proof, we have now reached an impasse. If the words "existence", "consciousness", or "brain" are not meaningful, then the discussion ends immediately, and we've been talking about nothing this entire time.

I thought I knew what point you were making, but then "consciousness is the brain" completely confused me. I don't think you know what point I'm making, either. I still blame the language barrier.

I'd probably identify as a type-F dualist, the view in the OP. (Strangely enough, it's commonly called type-F monism, but I consider this a misnomer.) I don't view science's silence on the topics of existence and consciousness as a weakness; rather, I consider those topics to be strictly philosophical, and hence outside science's purview.

Are you concerned that consciousness cannot be the brain because there are segments of the brain that are not functionally accessible to consciousness? That seems a perfectly fair clarification to me, if not exactly a major logical stumbling point — it's just a matter of how you parse the wording.

This is what I'd point to as clarification, from a way back:

"Consciousness is a calculation conducted by the human brain. Consciousness is the brain. This is the only logic."

I might even insert 'Calculations are physical processes in which inputs are manipulated to produce a result.'

This is not my concern at all. Consciousness and the brain aren't just different, they're in completely different categories. That they still manage to be intimately connected is the hard problem.
----------
As you predicted, I think this thread is rapidly approaching the point where we both say, "we'll never understand each other", and leave it at that, unless someone here understands both arguments and gives a translation. Nevertheless, I'm grateful for the discussion - it clarified my views, and I concede several points that you made regarding teleportation.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 19, 2015, 06:28:30 pm
It's a good thread.

I don't think there is a hard problem. I think the existence of a hard problem is a case of begging the question caused by a category error. In much the same way you see consciousness as in a separate category from the brain, I think that experiences and explanations are separate categories. Sure, we can't ever experience what it's like to be a bat, but we can come up with a complete and bounded account of how a bat brain works, and if we ran it in simulation or fabricated a brain and let it go, we would be certain we had real bat qualia. Hell, we might even be able to build a virtual machine inside our own brains to experience bat-ness in the first person!

I am a deflationizer. I think that all mental events are identical to and reducible to physical events.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 19, 2015, 07:17:12 pm
@Battuta: Even with my fear of mind machines (which you say is illogical, I know), I might consider experiencing a bat simulation.

I just now read your and Luis' exchange about teleportation. As you probably guessed, I'm mostly on Luis' side, but my feelings aren't very strong or clear in this regard.

GhylTarvoke: In the metaphor of the button and the circuitry, the brain is not the button and consciousness is not the circuitry.  Qualia, experiences, are the button.  The brain is the circuitry.  Consciousness is the outcome.

I've noticed that this is something you've continually misapplied as an argument in your favor that there must be something else.

@Scotty: At the risk of reopening this can of worms - I agree that consciousness is the light, but saying that qualia are the button makes no sense to me. Our mindsets may be irreconcilable.
Title: Re: The "hard problem of consciousness"
Post by: watsisname on October 19, 2015, 07:28:40 pm
Excellent thread, everyone.  Very good job keeping it civil.
Title: Re: The "hard problem of consciousness"
Post by: Bobboau on October 19, 2015, 10:43:18 pm
Consciousness is (as of the current state of technology) wholly dependent on a brain, but it is not a brain, in the same way that a program is not a computer: it merely requires a computer to run. Different hardware will run it just fine, though if it's not a perfect replica it might result in different executions. The program is a pattern; it could be on a hard drive, in memory, or printed out on paper. It's like a circle: if you draw a circle with a pencil, the graphite is what the circle is made out of, but it is not the circle itself. Likewise, the neurons of your brain are the medium that your consciousness is written on, but your consciousness is not the sum of the parts that make it up.
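
Toy version in Python (a made-up example, nothing fancy): the same pattern stored in three different media is still one pattern.

    pattern = [1, 0, 1, 1]                    # "the circle"

    in_memory = list(pattern)                 # one medium
    on_paper  = "".join(map(str, pattern))    # another medium
    on_disk   = bytes(pattern)                # yet another

    # The media differ completely; the pattern read back from each is the same.
    assert in_memory == [int(c) for c in on_paper] == list(on_disk)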

I see a lot of disagreements here, and people are saying they are differences of belief, but a lot of them seem to be more differences of semantics.

I think this is relevant to this thread
Title: Re: The "hard problem of consciousness"
Post by: AtomicClucker on October 20, 2015, 12:24:29 am
Depending on how one wishes to take it, this almost reminds me of Kant's distinction between the synthetic and the a priori (this discussion finally caused me to start reading up on consciousness and jumping back into Metaphysical knowledge). I'm a little rusty, but this vigorous discussion is refreshing.

Coming from my angle, I'm going to side with Battuta and agree that much of consciousness is a physical process that can be measured and, to an extent, recreated with the same effects. As for the knowledge that is generated in consciousness? That's going to be a mix between synthetic (empirically drawn) and a priori (definitionally drawn) knowledge. What's interesting is that among the aspects of consciousness is the ability to engage with knowledge that is, quite literally, outside synthetic bounds - but quite inside the synthetic a priori. For example, we know that mathematics works and is true, but you can't necessarily experiment on it - it is, in a Kantian sense, metaphysical, yet quite empirical despite having no physical properties. 1+1=2, even if it actually has no physical presence.

Going to say that the teleporter argument is an interesting principle - even if you're "killed" and "reborn", the sense of "you" persists, regardless of the question of consciousness. Thanks for that video, Bobboau - it brought a warm fuzzy smile back. Between the ship of Theseus and my recent replay of Diaspora, all is good on dat ship.
Title: Re: The "hard problem of consciousness"
Post by: Scotty on October 20, 2015, 02:02:49 am
Luis, would it make more sense and/or make you feel better if you instead thought of the teleporter as killing you, and then an almost immeasurable amount of time later bringing you back to life in another location?  I feel like there's a sense here where there's a perceived interruption of the continuity of thought, when no such thing takes place.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on October 20, 2015, 03:06:04 am
On a slightly different note, anyone played SOMA? I loved it. It's sometimes billed as a game about consciousness, and teleporter stuff comes up a lot.
Title: Re: The "hard problem of consciousness"
Post by: Bobboau on October 20, 2015, 09:00:16 am
Well, for some of the modes of operation, in a way it's more like the teleporter is reviving you somewhere else before killing you.

But I guess if these sorts of things bug you, then you've got what the intergalactics call a very planetary mindset.
Title: Re: The "hard problem of consciousness"
Post by: Grizzly on October 20, 2015, 09:04:53 am
On a slightly different note, anyone played SOMA? I loved it. It's sometimes billed as a game about consciousness, and teleporter stuff comes up a lot.
It's on my to play list, esp. after reading this (http://www.haywiremag.com/columns/storyplay-human-machines/). Spoilers though!
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 20, 2015, 09:36:25 am
Luis, would it make more sense and/or make you feel better if you instead thought of the teleporter as killing you, and then an almost immeasurable amount of time later bringing you back to life in another location?  I feel like there's a sense here where there's a perceived interruption of the continuity of thought, when no such thing takes place.

That's not the alternative scenario we are dealing with here; that's the scenario I'm describing.

Of course I would not have problems with it if it worked exactly like waking up from sleep, as a complete continuation of my consciousness. My problem is that I have no way to decide beforehand whether this is the case or not.

The cloning problem makes this even clearer. Scotty, imagine the following scenario, which is allowed by the philosophical teleportation device we've been dealing with here:

Person A is scanned and killed. All the required information goes to a place 1000 km away, and he is rebuilt. Person A wakes up sensing a total continuation of his Consciousness. Everything happened as you described in your question.

Now let's add a variable. Person A is scanned and killed. All the required information goes to 1000 places, 1000 km apart. One thousand Persons "A" wake up sensing a total continuation of their Consciousnesses.

So my question is: which of them is "you"? Did your Consciousness really transfer itself to another place without significant interruption, or was it merely copied?

Clearly, it was copied. And a thousand new Consciousnesses were "born". But if they were copied, they were not "transferred". Which means that in the first scenario, what happened was a copy mechanism, not a transfer mechanism.
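
A crude way to picture the difference in code (a toy sketch only, obviously not a model of minds; the fields are made up):

Code:
import copy

# The person about to step into the teleporter.
person_a = {"memories": ["childhood", "this morning"], "location": "origin"}

# The "teleporter": scan, destroy the original, rebuild at 1000 sites.
clones = [copy.deepcopy(person_a) for _ in range(1000)]
del person_a  # the original is gone

# Each clone wakes up with the same memories, so each senses continuity...
assert clones[0]["memories"] == clones[999]["memories"]
# ...but they are a thousand distinct objects: copies, not one transferred thing.
assert clones[0] is not clones[999]

If the thousand-site case is clearly a copy, then the one-site case used the very same mechanism.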

In Battuta's philosophy, these are one and the same; these words describe the same phenomenon - they are de facto synonyms. And he insists that everything we know attests to this fact. But I submit that we don't, in fact, know this. We can speculate, we can guess, we can write interesting scenarios to test all these ideas. We can bring up Theseus' ship and all sorts of stuff. This is all legitimate.

What I do not find legitimate is to assert that we know Consciousness works exactly as described here in order for this teleportation device to work.
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 20, 2015, 10:01:16 am
How else would it work? What causal mechanism could support another account? There doesn't seem to be any grounds for deviation.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 20, 2015, 10:44:05 am
The burden of proof in saying that you can kill people and rebuild them and it will all work out with "continuity" for the first person is not on me, Battuta. I'm sure you are really really sure of your ideas, but the stakes are a tad too high IMHO.

Regardless, I haven't read it yet (my eyes are very tired today), but I'm sure this paper by Dennett goes over the same issues we've been dealing with; it might help disentangle some concepts, ideas, or semantics we might be misusing. It's a short lecture, and it sounds like a good one: http://www.lehigh.edu/~mhb0/Dennett-WhereAmI.pdf
Title: Re: The "hard problem of consciousness"
Post by: General Battuta on October 20, 2015, 10:53:16 am
I don't actually think much of continuity - it's illusory and retrospective.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 20, 2015, 11:01:59 am
Continuity seems to me to be the core basis of every one of our concerns for our own future being. If "continuity" is false, why should my consciousness "care" about my future self at all? Why should I sacrifice one second of my life, or even plan for a future state of being, if there is no such thing as "Continuous"? Perhaps we are using different meanings for the same words.
Title: Re: The "hard problem of consciousness"
Post by: Bobboau on October 20, 2015, 11:13:05 am
Saying it's illusory is meaningless if you consider that consciousness is in large part the perception itself.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 20, 2015, 11:28:52 am
Let's be careful in the discussion. What is being said to be illusory is *not* Consciousness, but the "Continuity" of Consciousness. Consciousness is a brute fact, here I wholeheartedly agree with GhylTarvoke (the Cotard delusion question notwithstanding, which does indicate I should ponder a bit more on the technicality of this issue). It is obviously ridiculous to say that the entire subjective experience is an "illusion", for that already assumes that "someone" is being "deluded" - it's a contradiction.

What people mean when they say that (say) the Self, or the "Continuity" of the Self, or any other attribute of Consciousness is an illusion is that while the first-order experience is not an illusion, some or all of these attributes are - constructed by the brain, and so on.
Title: Re: The "hard problem of consciousness"
Post by: AtomicClucker on October 20, 2015, 04:19:01 pm
Consciousness is physical; the concept of self isn't. While the self is a byproduct of consciousness and a form of a priori knowledge, I'd argue it's both illusory and metaphysical, yet at the same time "synthetic" once one realizes the self is "true."

Even if you peg personhood to purely physical, systemic definitions, the knowledge of self is still a metaphysical transfer of knowledge from one clone to the next.

Since I'm defining the self as a form of knowledge, even if your "clone" at the starting point is killed and rebuilt at the other end, the "self" persists from clone to clone - perhaps coming into jeopardy if more than one clone were living, or if one knew that another was alive. However, I maintain that the "self" is illusory, but that as a form of knowledge it isn't bound by physical restraints.

Case in point: 1+1=2 regardless of time and place. It's math, and it's metaphysical. In my piss-poor argument, as long as the clone persists in a singular fashion, that transfer of self-knowledge persists, held as a self-evident truth.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on October 20, 2015, 05:52:29 pm
We had already clarified the algebraic nature of that type of definition of Self.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on February 10, 2016, 08:02:06 pm
I'd like to revive this discussion! For the past month, I've been turning an argument over and over in my head, and I think I'm now ready to present it.

I format the argument as a monologue, a la Descartes; if you like, you can imagine yourself saying the same things. I occasionally make "notes to the reader", like so:  [Here's a note.]

Part 1: ConsciousnessMe


Ideally, I'd begin by defining consciousness - but as I've seen, this is tricky. I instead begin with some simpler definitions, given from my perspective.

----------
Definition: We say that something certainly exists (in actuality, not merely as an abstract notion) if it's impossible for it not to exist.
----------

For example, the light I'm seeing right now certainly exists, because it's a fundamental part of my current experience. Even if I'm living in a digital simulation, the light certainly exists within the simulation. One could say that it doesn't even matter if I'm living in a simulation; for me, the light still exists in every way that matters. Similarly, if I'm dreaming, the light still exists as part of my dream.

On the other hand, Pluto does not certainly exist, because I've never seen it. Maybe everyone's been lying to me about its existence, or maybe an even greater deception is going on.

----------
Definition: ConsciousnessMe (i.e. "my consciousness") is the collection C satisfying the following criteria: (1) everything in C certainly exists; (2) everything that certainly exists is in C.
----------

There are three possible concerns with this definition: such a collection C might not exist; it might not be unique; and it might not capture what I mean by "my consciousness".

The first two concerns are relatively easy to address. First of all, something certainly exists.  [If you truly believe it's possible that nothing exists, then I can't help you.]  Let C be the collection of all objects that certainly exist. Then by construction, C satisfies both criteria (which addresses the first concern), and C is unique (which addresses the second).

Finally, I think ConsciousnessMe is exactly what I mean by "my consciousness": it's the accumulation of everything in my awareness. Furthermore, it agrees with almost every quasi-definition that I've seen - e.g. "my subjective experience" or "what-it-is-like to be me" - because almost everybody who talks about consciousness agrees that its existence is a brute fact.  [This includes General Battuta, but excludes people who deny the phenomenon entirely.]  Hence my definition is sound.
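
[For the notation-minded: the construction compresses to one line, reading the box as "necessarily", the diamond as "possibly", and E(x) as "x exists":

C := \{\, x : \Box E(x) \,\}, \qquad \Box E(x) \iff \neg \Diamond \neg E(x).

Any collection satisfying both criteria has exactly the members of C, which is the uniqueness claim above.]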

I now observe that scientific theories play the same role in ConsciousnessMe that they play in the "real world" (which, unlike ConsciousnessMe, might not exist). ConsciousnessMe appears to obey certain laws. For example, if "I" "let go of" "an apple", "the apple" "falls" (where scare quotes indicate that everything takes place within ConsciousnessMe). Science thus allows me to make predictions with great accuracy. Incidentally, it also predicts the existence of Pluto.

One last definition:

----------
Definition: The constituents of ConsciousnessMe are QualiaMe.
----------

Two rough examples of QualiaMe: (my perception of) "the quality of deep blue", and (my perception of) "the sensation of middle C". My computer, as a combination of many different QualiaMe, is more complex.

I'll be back with the punchline soon.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on February 13, 2016, 08:41:10 am

Part 2: ConsciousnessBob


When I try to generalize my definition of ConsciousnessMe, I encounter some semantic difficulties. There are certain combinations of QualiaMe that I refer to as "people". By definition, they certainly exist - but I can't be sure that they exist (or anything exists) in a "real" sense, i.e. in a sense that's independent of ConsciousnessMe. The entire universe might actually be ConsciousnessMe. Now, this would be a bizarre and inelegant state of affairs, and I'd be supremely egocentric to take the possibility seriously - but nothing precludes it. With that in mind, I'll refer to people freely, without using quotation marks.

One important property of QualiaMe is that they appear to have a "locus", or point of view; they appear to center on a particular person.  [In my case, the person happens to be a twenty-year-old student.]  This property allows me to hypothesize the existence of QualiaMe-like objects. Conveniently, there's a person standing in my room right now. His name is Bob. I define QualiaBob by way of analogy:

----------
Definition: QualiaBob are the objects that resemble QualiaMe, but have Bob as their locus instead of me.
----------

This definition is completely natural; to go from QualiaMe to QualiaBob, I simply change the subscript. But it's not clear that I've actually defined anything, for the following reason. Since QualiaMe and QualiaBob have different loci, they can't be the same things. Thus, by my definition of QualiaMe, I can't be sure that QualiaBob exist.

Nevertheless, the abstract notion of QualiaBob isn't hard to grasp. I just "put myself in Bob's shoes".  [Empathetic human beings do it all the time.]  More than that, I firmly believe QualiaBob exist, and most people appear to share this belief. As before, the alternative is bizarre, inelegant, and egocentric.

Finally, my definition of ConsciousnessBob should be obvious:

----------
Definition: ConsciousnessBob is the collection of QualiaBob.
----------

Remark: At this point, I can define Consciousness (without a subscript) to be the collection of all QualiaL, where L is a locus, and I can define Qualia to be the constituents of Consciousness. But I won't be needing these definitions for my argument.
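
[In symbols, with L ranging over loci:

\mathrm{Consciousness} := \bigcup_{L} \mathrm{Qualia}_L, \qquad \mathrm{Qualia} := \text{the constituents of Consciousness}.

Again, nothing below depends on this.]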

In the third and final part, I'll take another look at Bob, who has agreed to stay in my room.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on February 13, 2016, 12:46:31 pm

Part 3: The Argument


With the basic definitions taken care of, there are at least two possible models of reality. The first one is more natural - in fact, it's usually taken for granted - but both models completely account for my observations. Additionally, both models incorporate science in its entirety.

First Model: I'm not special. QualiaBob exist; Bob exists in the same way that I do, and is conscious in the same way that I am. Bob and I both live in an external universe. Scientific theories explain not only the behavior of QualiaMe, but also the behavior of the external objects that QualiaMe represent.

Second Model: I'm special. QualiaBob don't exist; Bob only exists as a combination of QualiaMe. The universe is ConsciousnessMe. Scientific theories merely explain the behavior of QualiaMe.

Okay. So what?

Here's the point. Although the models have differing perspectives, they both incorporate science in its entirety: for every scientific argument that goes through in one model, the same argument also goes through in the other model. (To reiterate my example from Part 1, science predicts the existence of Pluto in both models; the models only disagree on the metaphysical issue of Pluto's "true nature".) Furthermore, both models are sound: they account for all of my observations, and lead to no logical contradictions.  [Unless our current understanding of science is self-contradictory.]  But ConsciousnessBob exists in the first model, whereas ConsciousnessBob does not exist in the second model. Thus, ConsciousnessBob (unlike virtually everything else, including Pluto) is logically independent of science, in the sense that science says nothing about its existence. Even if I assume the inviolate truth of science, I can't conclude anything about the existence of ConsciousnessBob.

This is not the same as saying that science and ConsciousnessBob are unrelated! Based on my own experience, there's a strict correspondence between ConsciousnessMe and (if it exists) the external universe.  [As General Battuta writes: wherever the object, the subject.]  It's reasonable to hypothesize a similar correspondence between ConsciousnessBob (if it exists) and the external universe (if it exists). My argument is that science and ConsciousnessBob are logically independent, which undermines every scientific attempt to "explain" consciousness.
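
[The argument has the shape of a standard independence proof. Write S for "science plus all of my observations" and \varphi for "ConsciousnessBob exists". The First Model satisfies S together with \varphi, the Second Model satisfies S together with \neg\varphi, and a sound theory cannot derive a statement that is false in one of its models, so

M_1 \models S \cup \{\varphi\} \;\text{ and }\; M_2 \models S \cup \{\neg\varphi\} \implies S \nvdash \varphi \;\text{ and }\; S \nvdash \neg\varphi.

That, and nothing more, is what "logically independent of science" means here.]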

Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on February 13, 2016, 05:11:13 pm
Addendum: Science is the branch of knowledge dealing with testable predictions, and only observations are testable. So when I say (in either model) that science predicts Pluto's existence, I really mean that science predicts observations of Pluto's existence. The assertion that Pluto "really exists" isn't testable.
Title: Re: The "hard problem of consciousness"
Post by: watsisname on February 13, 2016, 08:07:55 pm
My argument is that science and ConsciousnessBob are logically independent, which undermines every scientific attempt to "explain" consciousness.

I can make a model of stars wherein they are lights shone through holes in a black tarp by very clever demons.  This undermines every scientific attempt to explain stars.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on February 13, 2016, 10:10:08 pm
Stars may only exist as qualia, or as part of a simulated reality, or as illusions created by very clever demons. Nevertheless, they exist - and as far as we know, science can explain every facet of their behavior.
Title: Re: The "hard problem of consciousness"
Post by: watsisname on February 13, 2016, 11:46:26 pm
And is the clever tarp demon model potentially falsifiable with current abilities?  What is its motivation?  How does it fit into the framework of the rest of our understanding of nature?

This is what I and a few others have been trying to help you to understand.  You have repeatedly said that dualism and physicalism are both models of reality and they make the same set of predictions.  But I really don't think you understand what the predictive power of a model means, or why we should want to favor one model over another even if they both are consistent with observations.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on February 14, 2016, 12:55:04 am
The demon's existence is not falsifiable. Science says nothing about its existence, so Occam's Razor advises me to favor a model without the demon.

The problem is that ConsciousnessBob holds exactly the same status as the demon! Science says nothing about its existence, so Occam's Razor advises me to favor a model in which I am the only locus of experience.
Title: Re: The "hard problem of consciousness"
Post by: Scotty on February 14, 2016, 01:00:52 am
Assuming that consciousness exists allows us to form predictive models of behavior and experience.  Occam's Razor is not a proof; invoking it and ignoring that the assumption grants us the ability to use (falsifiable) predictive models is fallacious.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on February 14, 2016, 01:18:29 am
If by "consciousness", you mean ConsciousnessMe, then it exists by definition. If you mean ConsciousnessBob, then how does it help me form predictive models? Positing the existence of multiple loci is completely unnecessary, and doesn't help explain anything. Like everything else, Bob's behavior can be explained purely by reductive arguments.
Title: Re: The "hard problem of consciousness"
Post by: NGTM-1R on February 14, 2016, 03:13:26 am
Why are you ultimately different from Bob?
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on February 14, 2016, 12:48:26 pm
Maybe I am, or maybe I'm not. The "difference" under consideration is irrelevant when it comes to forming predictive models.

Thus, ConsciousnessBob (unlike virtually everything else, including Pluto) is logically independent of science, in the sense that science says nothing about its existence. Even if I assume the inviolate truth of science, I can't conclude anything about the existence of ConsciousnessBob.
Title: Re: The "hard problem of consciousness"
Post by: watsisname on February 14, 2016, 04:36:52 pm
The demon's existence is not falsifiable. Science says nothing about its existence, so Occam's Razor advises me to favor a model without the demon.

The problem is that ConsciousnessBob holds exactly the same status as the demon! Science says nothing about its existence, so Occam's Razor advises me to favor a model in which I am the only locus of experience.

Man, you were so close to getting it that I could almost taste it.

This model of ConsciousnessBob holds exactly the same status as the demon model.  This is exactly why you should not subscribe to it!

You have another model, where ConsciousnessBob is an emergent property of the system Bob, in very much the same way that ConsciousnessMe is an emergent property of the system Me.  Like the astrophysical models of stellar structure, it has excellent explanatory power.  It fits in the context of prior understanding of the universe (it's all just physics!), and it acts as a pathway to further understanding.

Dualism as a model of mind is the very antithesis of how we approach modelling in a scientific perspective.  Like supposing that the celestial light is just a trick by clever tarp demons, or that dinosaur bones were put in the ground by Satan to fool us, or that all of reality exists only in Newt Gingrich's lifetime... it's kind of dumb.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on February 14, 2016, 05:01:35 pm
You have another model, where ConsciousnessBob is an emergent property of the system Bob, in very much the same way that ConsciousnessMe is an emergent property of the system Me.

This model still assumes the existence of ConsciousnessBob - an unnecessary assumption for which I have no evidence. Despite my intuition, the very notion of ConsciousnessBob may be incoherent.

Like the astrophysical models of stellar structure, it has excellent explanatory power.

What additional explanatory power does it offer? Why must ConsciousnessBob exist at all, if I can explain Bob's behavior using purely reductive arguments?

It fits in the context of prior understanding of the universe (it's all just physics!), and it acts as a pathway to further understanding.

"It's all just physics" is a metaphysical claim about the nature of reality. Regardless, the model in which Bob is not conscious also satisfies the claim.

If you're asking me to take the existence of ConsciousnessBob on faith, your argument sounds very similar to dualism.
Title: Re: The "hard problem of consciousness"
Post by: The E on February 14, 2016, 05:13:15 pm
If your consciousness is an emergent property of your construction, then it follows that objects of similar construction will have similar emergent properties.


I mean, I am not really following these arguments of yours; it all reads so very solipsistic. Your claim that you can explain another's behaviour through purely reductive reasoning is already a sign of you massively overstepping the bounds of your confidence. Furthermore, if you posit that a being that is in all important aspects completely identical to you and that acts as if conscious actually isn't, then what evidence do you have that you are conscious? Yes, you claim it to be self-evident, but is it really?
Title: Re: The "hard problem of consciousness"
Post by: NGTM-1R on February 14, 2016, 05:44:29 pm
What additional explanatory power does it offer? Why must ConsciousnessBob exist at all, if I can explain Bob's behavior using purely reductive arguments?

Why must you exist at all, in this paradigm? The moment you reduce all others' consciousness to a reductive argument, yours must be reduced as well - unless you are capable of laying out a coherent argument for why you are a special case.

You are using a very simplistic argument-by-definition, one which holds that your viewpoint is unique. Bob can make the exact same argument you are making for his viewpoint being the unique one, and his argument will be just as valid as yours, unless you can establish some qualitative difference between you and Bob.

You have not done this. Your argument offers nothing over Bob's argument; both cannot be true at the same time but are otherwise in all respects identical; both are therefore likely false.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on February 14, 2016, 06:06:25 pm
If your consciousness is an emergent property of your construction, then it follows that objects of similar construction will have similar emergent properties.

"My consciousness is an emergent property of my construction" is an unnecessary assumption that offers no additional explanatory power.

The problem becomes even thornier when I try to define "similar". Okay, let's say Bob is conscious, since he and I are both human (whatever that means). Is a dog conscious? What about a plant? I face exactly the same difficulty that I faced with Bob, and only faith lets me draw a line.

I mean, I am not really following these arguments of yours; it all reads so very solipsistic. Your claim that you can explain another's behaviour through purely reductive reasoning is already a sign of you massively overstepping the bounds of your confidence.

The "Bob is not conscious" model is not solipsism. Science still applies, and everything still exists - except for ConsciousnessBob. If the model contains no contradictions, then studying consciousness in a scientific way becomes extremely difficult. In particular, I have no way of testing whether something is conscious.

Science may not currently explain Bob's behavior, but it can in theory. If you disagree, you seem to believe that there's something acausal about free will.

Furthermore, if you posit that a being that is in all important aspects completely identical to you and that acts as if conscious actually isn't, then what evidence do you have that you are conscious? Yes, you claim it to be self-evident, but is it really?
Why must you exist at all, in this paradigm?

See my definition of ConsciousnessMe. Bob and I both exist.

The moment you reduce all others' consciousness to reductive argument, so yours must be as well; unless you are capable of laying out a coherent argument for why you are a special case.

You are using a very simplistic argument-by-definition, one which holds that your viewpoint is unique. Bob can make the exact same argument as you are for his viewpoint being the unique one and his argument will be just as valid as yours is, unless you can establish some qualitative difference between you and Bob.

You have not done this. Your argument offers nothing over Bob's argument; both cannot be true at the same time but are otherwise in all respects identical; both are therefore likely false.

"Reducing others' consciousness" presupposes that the notion of others' consciousness is coherent. Bob may make the same statements I make, but his statements may not even make sense. Furthermore (unless free will is acausal), I can theoretically use reductive arguments to explain why he makes those statements.

The only justification you've offered for positing others' consciousness is a massive extrapolation: based on a sample size of one (myself), I draw conclusions about a population of seven billion, and that's when I only consider human beings. Against Occam's Razor, this reasoning seems flimsy.
Title: Re: The "hard problem of consciousness"
Post by: The E on February 15, 2016, 01:01:21 am
Quote
"My consciousness is an emergent property of my construction" is an unnecessary assumption that offers no additional explanatory power.

Then explain why people under the influence of drugs behave differently compared to when they are sober.

Quote
The problem becomes even thornier when I try to define "similar". Okay, let's say Bob is conscious, since he and I are both human (whatever that means). Is a dog conscious? What about a plant? I face exactly the same difficulty that I faced with Bob, and only faith lets me draw a line.

There is plenty of medical research, and quite a few successful product lines, based around the idea that human brains are similar enough that drugs will produce reproducible effects. Therefore, we have to assume that human brains share properties, and that if something is true for one brain, it will be true for any number of other brains as well.

Following from that, since we know that brains and consciousness are intimately connected, we have to assume that if one brain is conscious, others have to be too. This is basic inductive reasoning.

Quote
The "Bob is not conscious" model is not solipsism. Science still applies, and everything still exists - except for ConsciousnessBob. If the model contains no contradictions, then studying consciousness in a scientific way becomes extremely difficult. In particular, I have no way of testing whether something is conscious.

Yes, it is. Let's quote the Wiki, shall we:
Quote
Solipsism (from Latin solus, meaning "alone", and ipse, meaning "self") is the philosophical idea that only one's own mind is sure to exist. As an epistemological position, solipsism holds that knowledge of anything outside one's own mind is unsure; the external world and other minds cannot be known and might not exist outside of the mind. As a metaphysical position, solipsism goes further to the conclusion that the world and other minds do not exist.

That does seem to fit your stance rather well, doesn't it?

Quote
Science may not currently explain Bob's behavior, but it can in theory. If you disagree, you seem to believe that there's something acausal about free will.

Science can explain Bob's behaviour. It can also explain yours, using the same assumptions. It follows, then, that on some level you and Bob are more or less indistinguishable.

Quote
See my definition of ConsciousnessMe. Bob and I both exist.

Yes, but you have explicitly said that while you assume yourself to be conscious, no such assumption can be made for others; all we're asking is why you believe your assumption about yourself to be true.

Quote
"Reducing others' consciousness" presupposes that the notion of others' consciousness is coherent. Bob may make the same statements I make, but his statements may not even make sense. Furthermore (unless free will is acausal), I can theoretically use reductive arguments to explain why he makes those statements.

If Bob makes the same arguments you do, but Bob's do not make sense, then it follows that your arguments do not make sense either.

Quote
The only justification you've offered for positing others' consciousness is a massive extrapolation: based on a sample size of one (myself), I draw conclusions about a population of seven billion, and that's when I only consider human beings. Against Occam's Razor, this reasoning seems flimsy.

And the assumption that you alone are the only conscious entity in the universe somehow fulfills Occam's simplicity criterion? That isn't flimsy?
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on February 15, 2016, 05:27:36 am
Then explain why people under the influence of drugs behave differently compared to when they are sober.

The difference between stoned Bob's behavior and sober Bob's behavior can be explained purely by reductive reasoning. Asking about the difference between ConsciousnessStoned Bob and ConsciousnessSober Bob only makes sense if I assume a model in which Bob is conscious.

There is plenty of medical research, and quite a few successful product lines, based around the idea that human brains are similar enough that drugs will produce reproducable effects. Therefore, we have to assume that human brains share properties, and that if something is true for one brain, it will be true for any number of other brains as well.

Following from that, since we know that brains and consciousness are intimately connected, we have to assume that if one brain is conscious, others have to be too. This is basic inductive reasoning.

This is question-begging. Your reasoning only applies if I assume a model in which other people are conscious.

Yes, it is. Let's quote the Wiki, shall we:
Quote
Solipsism (from Latin solus, meaning "alone", and ipse, meaning "self") is the philosophical idea that only one's own mind is sure to exist. As an epistemological position, solipsism holds that knowledge of anything outside one's own mind is unsure; the external world and other minds cannot be known and might not exist outside of the mind. As a metaphysical position, solipsism goes further to the conclusion that the world and other minds do not exist.

That does seem to fit your stance rather well, doesn't it?

Yes, you're right. I should have said that my stance isn't metaphysical solipsism.

Science can explain Bob's behaviour. It can also explain yours, using the same assumptions. It follows, then, that on some level you and Bob are more or less indistinguishable.

Science can indeed explain my behavior (where "my" refers to the locus of ConsciousnessMe). Bob and I are distinguishable because I am the locus of ConsciousnessMe, whereas Bob is not.

Yes, but you have explicitly said that while you assume yourself to be conscious, no such assumption can be made for others; all we're asking is why you believe your assumption about yourself to be true.

ConsciousnessMe exists by definition. I define "me" to be the locus of ConsciousnessMe. But I can't prove to you that I am conscious, and vice versa.

If Bob makes the same arguments you do, but Bob's do not make sense, then it follows that your arguments do not make sense either.

If Bob defines "me" to be the locus of ConsciousnessMe, then his argument makes sense. If he defines "me" to be the locus of ConsciousnessBob, then his argument may not make sense.

And the assumption that you alone are the only conscious entity in the universe somehow fulfills Occam's simplicity criterion? That isn't flimsy?

Your model posits the existence of seven billion entities that explain nothing. If that doesn't qualify for Occam's Razor, I don't know what does.
Title: Re: The "hard problem of consciousness"
Post by: The E on February 15, 2016, 07:09:23 am
The difference between stoned Bob's behavior and sober Bob's behavior can be explained purely by reductive reasoning. Asking about the difference between ConsciousnessStoned Bob and ConsciousnessSober Bob only makes sense if I assume a model in which Bob is conscious.

And what happens when you take drugs? Do they have similar effects on you?


Quote
This is question-begging. Your reasoning only applies if I assume a model in which other people are conscious.

No, it applies if your physiological makeup is more or less identical to that of an entity you presume to be unconscious. Which it is. To a ridiculous degree.

So, to restate: Assuming you have a brain, and assuming that altering your brain alters your consciousness in ways similar to the alterations observed when another's brain is so altered, then it follows that there are similar mechanisms at work. Since it is undeniable that the brain is the seat of your consciousness, and since it is provable that other people have brains of largely similar construction and complexity, you have to prove that you are in some way fundamentally different to others for your assumptions to work.

Basically, if you start with the axiom that you are conscious and then hypothesize that others aren't, you need to identify the key difference between you and others. You have so far failed to do so; despite your proclaimed beliefs in the scientific method, you aren't following it.

Quote
Science can indeed explain my behavior (where "my" refers to the locus of ConsciousnessMe). Bob and I are distinguishable because I am the locus of ConsciousnessMe, whereas Bob is not.

Sure. But the seat of your consciousness, if extracted from its skullprison, is indistinguishable from Bob's. We can do fine structure scans and see differences in the connectome, but overall, the differences are really minor and not enough to explain why you should be conscious and he isn't.


Quote
ConsciousnessMe exists by definition. I define "me" to be the locus of ConsciousnessMe. But I can't prove to you that I am conscious, and vice versa.

Of course you can. Just do things while I am sleeping.

Quote
If Bob defines "me" to be the locus of ConsciousnessMe, then his argument makes sense. If he defines "me" to be the locus of ConsciousnessBob, then his argument may not make sense.

No, let's posit this. Bob makes the exact same statements you have made. He puts forth the same arguments you put forth to prove that he, not you, is the only conscious entity present. What do you do? Do you prove him wrong? Do you agree with him? Do you two get into a big fight about who the conscious one in this relationship is?

Quote
Your model posits the existence of seven billion entities that explain nothing. If that doesn't qualify for Occam's Razor, I don't know what does.

But they explain a whole lot of things. For example, the appearance of roads in my vicinity. Or parking tickets. I can prove to my satisfaction that the house I am in exists. I can further prove that I had nothing to do with its construction. Therefore, other agencies must be present, and astonishingly, there are entities all around me that are fundamentally similar to me, that share many of my qualities and therefore can be safely assumed to be grossly similar to me. Thus I have proven to my satisfaction that consciousness is a universal quality found in many different places.


I would like to ask you something though. If we buy into your theory, that you are the only conscious being in the universe, why do you wear clothes?
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on February 15, 2016, 08:24:16 am
And what happens when you take drugs? Do they have similar effects on you?

If I take drugs, the difference in behavior between stoned me and sober me can be explained by reductive reasoning. The change in ConsciousnessMe can also be explained by reductive reasoning: based on previous observations, there's a strong connection between ConsciousnessMe and its locus.

No, it applies if your physiological makeup is more or less identical to that of an entity you presume to be unconscious. Which it is. To a ridiculous degree.

So, to restate: Assuming you have a brain, and assuming that altering your brain alters your consciousness in ways similar to the alterations observed when another's brain is so altered, then it follows that there are similar mechanisms at work. Since it is undeniable that the brain is the seat of your consciousness, and since it is provable that other people have brains of largely similar construction and complexity, you have to prove that you are in some way fundamentally different to others for your assumptions to work.

Basically, if you start with the axiom that you are conscious and then hypothesize that others aren't, you need to identify the key difference between you and others. You have so far failed to do so; despite your proclaimed beliefs in the scientific method, you aren't following it.

Yes, Bob and I are extremely similar. If my brain is altered, I can observe the effects on both my behavior and ConsciousnessMe in general. If Bob's brain is altered, I can observe the effects on Bob's behavior, but saying that I can observe the effects on ConsciousnessBob is wrong on two counts. First, the statement presupposes that Bob is conscious; second, even if QualiaBob exist, they cannot be QualiaMe, which prevents me from "observing" them.

The key difference between me and Bob is that I am the locus of ConsciousnessMe, whereas Bob is not. Science says nothing about the existence of ConsciousnessBob. I can neither prove nor disprove it.

Sure. But the seat of your consciousness, if extracted from its skullprison, is indistinguishable from Bob's. We can do fine structure scans and see differences in the connectome, but overall, the differences are really minor and not enough to explain why you should be conscious and he isn't.

Asking why ConsciousnessMe exists is the same as asking why anything exists. Answering the question is impossible and unnecessary. To explain why ConsciousnessBob might not exist, I need only demonstrate the possibility of its nonexistence, which is exactly what the solipsist model demonstrates.

Quote
ConsciousnessMe exists by definition. I define "me" to be the locus of ConsciousnessMe. But I can't prove to you that I am conscious, and vice versa.

Of course you can. Just do things while I am sleeping.

I'm not sure what this would prove. You seem to be using the pedestrian definition of "consciousness".

No, let's posit this. Bob makes the exact same statements you have made. He puts forth the same arguments you put forth to prove that he, not you, is the only conscious entity present. What do you do? Do you prove him wrong? Do you agree with him? Do you two get into a big fight about who the conscious one in this relationship is?

I can't prove him wrong, nor can I prove him correct.

Suppose he makes exactly the same statements I've made. Suppose I also assume that his speech carries meaning. What could he mean by ConsciousnessMe? If he and I are referring to the same entity, then Bob's argument makes sense. Otherwise, he's referring to something that may not exist, so his argument may not make sense.

But they explain a whole lot of things. For example, the appearance of roads in my vicinity. Or parking tickets. I can prove to my satisfaction that the house I am in exists. I can further prove that I had nothing to do with its construction. Therefore, other agencies must be present, and astonishingly, there are entities all around me that are fundamentally similar to me, that share many of my qualities and therefore can be safely assumed to be grossly similar to me. Thus I have proven to my satisfaction that consciousness is a universal quality found in many different places.

The existence of roads, parking tickets, my house, and even other people is not in dispute. As for the similarity argument, see above. It's nothing more than a massive extrapolation.

I would like to ask you something though. If we buy into your theory, that you are the only conscious being in the universe, why do you wear clothes?

The fact that I wear clothes can theoretically be explained by reductive reasoning, but that's probably not what you meant.

I don't believe in the solipsist model. If I did, I'd be in a mental institution instead of debating with you. What the solipsist model shows is that science and the existence of ConsciousnessBob are logically independent, which means that scientific investigation of consciousness has fundamental limits. In particular, I have no way of testing whether something is conscious.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on February 16, 2016, 02:29:32 am
Anyway, if you accept that scientific investigation of consciousness is limited (and in particular, there's no way to test for consciousness), that doesn't mean consciousness is fundamentally different from everything else. "In reality", consciousness and other phenomena may hold the same status. My argument only shows that this claim is independent of science.
Title: Re: The "hard problem of consciousness"
Post by: The E on February 16, 2016, 02:33:28 am
The existence of roads, parking tickets, my house, and even other people is not in dispute. As for the similarity argument, see above. It's nothing more than a massive extrapolation.

Explain why any of these things exist, then.

I don't believe in the solipsist model. If I did, I'd be in a mental institution instead of debating with you. What the solipsist model shows is that science and the existence of ConsciousnessBob are logically independent, which means that scientific investigation of consciousness has fundamental limits. In particular, I have no way of testing whether something is conscious.

Only if you subscribe to the idea that consciousness is unexplainable. I do not. I strongly believe that given time, it is possible to fully map the processes running in a mammalian brain, and that we'll find consciousness to be an emergent property of sufficiently complex neural networks (with the lower bound for complexity probably being far lower than we would think).

That's the big problem your model has: By declaring yourself to be conscious, your argument essentially becomes "If consciousness is a quality of me, and other people are not me, then I cannot be sure they're conscious". You make no attempt to explain the hows and whys of consciousness, you accept it as a given. The solipsistic model isn't useful in any way, because it does not provide a framework to examine yourself. In it, objectivity is unattainable and science (or rather, the scientific method) cannot be used to explain yourself to you.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on February 16, 2016, 03:08:09 am
The existence of roads, parking tickets, my house, and even other people is not in dispute. As for the similarity argument, see above. It's nothing more than a massive extrapolation.

Explain why any of these things exist, then.

They exist as combinations of QualiaMe. If you're asking whether they "really" exist, this is a metaphysical question about the nature of reality.

I strongly believe that given time, it is possible to fully map the processes running in a mammalian brain, and that we'll find consciousness to be an emergent property of sufficiently complex neural networks (with the lower bound for complexity probably being far lower than we would think).

If by "consciousness" you mean ConsciousnessL for some locus L, then the claim "consciousness is an emergent property of sufficiently complex neural networks" is untestable and independent of science. If you mean something else, then I agree.

That's the big problem your model has: By declaring yourself to be conscious, your argument essentially becomes "If consciousness is a quality of me, and other people are not me, then I cannot be sure they're conscious". You make no attempt to explain the hows and whys of consciousness, you accept it as a given.

Not accepting ConsciousnessMe as a given is the same as denying that something certainly exists. If you truly believe it's possible that nothing exists, then I can't help you.

The solipsistic model isn't useful in any way, because it does not provide a framework to examine yourself. In it, objectivity is unattainable and science (or rather, the scientific method) cannot be used to explain yourself to you.

Any scientific argument that goes through in the "Bob is conscious" model also goes through in the solipsistic model. Science explains the behavior of "me", and it also explains the behavior of ConsciousnessMe. Science says nothing about the existence or behavior of ConsciousnessBob.
Title: Re: The "hard problem of consciousness"
Post by: The E on February 16, 2016, 03:30:50 am
If by "consciousness" you mean ConsciousnessL for some locus L, then the claim "consciousness is an emergent property of sufficiently complex neural networks" is untestable and independent of science. If you mean something else, then I agree.

I am not going to mess up perfectly good words with subscript madness. Defining consciousness as the product of a sufficiently capable neural net, backed by enough memory storage to gather and evaluate experiential data, processing inputs into outputs - that is a perfectly sufficient and testable definition of consciousness, and it is the one I subscribe to.
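
In caricature, the functional loop I mean looks like this (a deliberately silly sketch; nobody is claiming the toy below is conscious, and every name in it is made up):

Code:
# Toy sketch of the functional picture: inputs -> memory -> outputs.
class Agent:
    def __init__(self):
        self.memory = []                 # store of experiential data

    def step(self, observation):
        self.memory.append(observation)  # gather experience
        # evaluate experience: the output depends on the whole input history
        return f"seen {len(self.memory)} inputs; latest: {observation}"

agent = Agent()
print(agent.step("red light"))
print(agent.step("middle C"))

The test is behavioral: feed inputs, watch outputs, and check whether they depend on accumulated experience in the right ways.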

Quote
Not accepting ConsciousnessMe as a given is the same as denying that something certainly exists. If you truly believe it's possible that nothing exists, then I can't help you.

Where, in this entire topic, have I ever even come close to the idea that nothing exists?

My criticism of your starting point is that you declare something to be axiomatically true that doesn't need to be. By using a completely physicalist definition of consciousness, I can devise tests to see if something is conscious, I can even arrive at a complete model of it and replicate it. In your approach, you seemingly throw your hands in the air and say that science is great, but there's this barrier right here that it can't cross. It is no wonder then that you cannot prove another's consciousness exists; You can't even prove that you are conscious (Just as we cannot prove that |x| + |y| > |x| and |x| + |y| > |y|). Solipsism, to me, is intellectually lazy. It's a capitulation.

Quote
Any scientific argument that goes through in the "Bob is conscious" model also goes through in the solipsistic model. Science explains the behavior of "me", and it also explains the behavior of ConsciousnessMe. Science says nothing about the existence or behavior of ConsciousnessBob.

Except it does if you're not using a solipsistic model.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on February 16, 2016, 04:20:49 am
I am not going to mess up perfectly good words with subscript madness. Defining consciousness as the product of a sufficiently capable neural net, backed by enough memory storage to gather and evaluate experiential data, processing inputs into outputs - that is a perfectly sufficient and testable definition of consciousness, and it is the one I subscribe to.

"Consciousness" is an extremely overloaded word; the subscripts make my meaning precise. I agree that science can eventually explain consciousness by your definition. However, the solipsistic model shows that science says nothing about ConsciousnessBob. So here are your choices:

1. You can either accept or not accept the definition of ConsciousnessMe. If you don't accept it, then you deny that something exists. Let's assume you accept it.
2. You can either accept or not accept the definition of ConsciousnessBob. If you don't accept it, then you're in the solipsistic model. Let's assume you accept it.
3. You can either consider or not consider the question: "Is science independent of ConsciousnessBob?" If you don't consider it, then you're sticking your head in the sand. If you do consider it, then the solipsistic model shows that the answer is "yes".

Where, in this entire topic, have I ever even come close to the idea that nothing exists?

My criticism of your starting point is that you declare something to be axiomatically true that doesn't need to be. By using a completely physicalist definition of consciousness, I can devise tests to see if something is conscious, I can even arrive at a complete model of it and replicate it. In your approach, you seemingly throw your hands in the air and say that science is great, but there's this barrier right here that it can't cross. It is no wonder then that you cannot prove another's consciousness exists; You can't even prove that you are conscious (Just as we cannot prove that |x| + |y| > |x| and |x| + |y| > |y|). Solipsism, to me, is intellectually lazy. It's a capitulation.

Regarding the definition of consciousness, see above. ConsciousnessMe exists by definition, so "proving its existence" makes no sense.

Again, I don't believe in the solipsistic model (though I have no justification for my disbelief). It's a tool that demonstrates the independence of science and ConsciousnessBob.

Except it does if you're not using a solipsistic model.

If science "explains" ConsciousnessBob in the "Bob is conscious" model, then it also does so in the solipsistic model. But ConsciousnessBob doesn't even exist in the solipsistic model, so this is a contradiction.
Title: Re: The "hard problem of consciousness"
Post by: The E on February 16, 2016, 04:56:55 am
1. You can either accept or not accept the definition of ConsciousnessMe. If you don't accept it, then you deny that something exists. Let's assume you accept it.
2. You can either accept or not accept the definition of ConsciousnessBob. If you don't accept it, then you're in the solipsistic model. Let's assume you accept it.

I do not accept your definitions, plural. I do not "deny the existence of something". I do not accept your initial setup as valid, and consider it to be deeply flawed and misguided. In case that wasn't clear enough already.
 
Quote
3. You can either consider or not consider the question: "Is science independent of ConsciousnessBob?" If you don't consider it, then you're sticking your head in the sand. If you do consider it, then the solipsistic model shows that the answer is "yes".

I considered the question. I considered the setup. I found your setup to be flawed on a conceptual level, and thus conclusions drawn from it to be similarly invalid.

Quote
Regarding the definition of consciousness, see above. ConsciousnessMe exists by definition, so "proving its existence" makes no sense.

Oh for ****'s sake.

Let's be absolutely clear here: Your setup defines consciousness as an intrinsic quality of you, and you alone. By that definition, sure, no statements can be made about others. But that definition is deeply, fatally, flawed, as pointed out above. You are trying to win this argument by forcing everyone to play by the rules you set up, and if I or someone else points out to you that your rules are not making sense and do not lead to a place that allows for meaningful inquiry about the hows and whys of human cognition, you keep retreating to your definition.

This isn't fun. Not for me, not for anyone else still reading this topic, I imagine.

What exactly are you trying to learn here anyway? What is the point of this discussion? What are your goals for it?
 
Quote
Again, I don't believe in the solipsistic model (though I have no justification for my disbelief). It's a tool that demonstrates the independence of science and ConsciousnessBob.
No, it's not a tool. It's a desire to not have to deal with others dressed up in pretty philosophical language (in this regard, it shares qualities with libertarianism).

Quote
If science "explains" ConsciousnessBob in the "Bob is conscious" model, then it also does so in the solipsistic model. But ConsciousnessBob doesn't even exist in the solipsistic model, so this is a contradiction.

Which is why the solipsistic model is invalid.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on February 16, 2016, 05:34:36 am
I do not accept your definitions, plural. I do not "deny the existence of something". I do not accept your initial setup as valid, and consider it to be deeply flawed and misguided. In case that wasn't clear enough already.

... okay, let's start with step 1. Exactly how do you not accept the definition of ConsciousnessMe, and simultaneously believe that something exists? Would you prefer to give it a different name?

Oh for ****'s sake.

Let's be absolutely clear here: Your setup defines consciousness as an intrinsic quality of you, and you alone. By that definition, sure, no statements can be made about others. But that definition is deeply, fatally, flawed, as pointed out above. You are trying to win this argument by forcing everyone to play by the rules you set up, and if I or someone else points out to you that your rules are not making sense and do not lead to a place that allows for meaningful inquiry about the hows and whys of human cognition, you keep retreating to your definition.

Consciousness is not an intrinsic quality of me. ConsciousnessMe is an intrinsic property of me, which makes perfect sense (unless you believe that ConsciousnessMe is the collective consciousness of the entire human race, or something). I can make (and test!) lots of statements about other people. What I cannot do is test the existence or behavior of ConsciousnessBob.

I'm not forcing you to do anything. You're free to believe that ConsciousnessMe doesn't exist, or ConsciousnessBob doesn't exist. But I think we both believe that they do exist.

The hows and whys of human cognition are perfectly susceptible to the scientific method. It's an extremely interesting subject in its own right.

This isn't fun. Not for me, not for anyone else still reading this topic, I imagine.

What exactly are you trying to learn here anyway? What is the point of this discussion? What are your goals for it?

I think this is fun. It might be the most challenging discussion I've ever had.

There's a lot of fuss about "consciousness". My first goal is to extract the really difficult part of that concept. My second goal is to show that the really difficult part isn't susceptible to the scientific method, and that people are waiting for a scientific (rather than philosophical) answer in vain.

No, it's not a tool. It's a desire to not have to deal with others dressed up in pretty philosophical language (in this regard, it shares qualities with libertarianism).

Okay, this is getting ridiculous. Again: even if I believed in the solipsistic model - which I don't - you, Bob, and I still exist.

Quote
If science "explains" ConsciousnessBob in the "Bob is conscious" model, then it also does so in the solipsistic model. But ConsciousnessBob doesn't even exist in the solipsistic model, so this is a contradiction.

Which is why the solipsistic model is invalid.

Invalid in what sense? Not in a way that contradicts science, because science works perfectly fine in the solipsistic model. If you're saying it "feels wrong", then I agree, but that's not a scientific argument.
Title: Re: The "hard problem of consciousness"
Post by: The E on February 16, 2016, 06:32:47 am
Let's go back to your initial punchline. Most things leading up to it are valid, but the conclusion you draw here is wrong:
Quote
Here's the point. Although the models have differing perspectives, they both incorporate science in its entirety: for every scientific argument that goes through in one model, the same argument also goes through in the other model. (To reiterate my example from Part 1, science predicts the existence of Pluto in both models; the models only disagree on the metaphysical issue of Pluto's "true nature".) Furthermore, both models are sound: they account for all of my observations, and lead to no logical contradictions.  [Unless our current understanding of science is self-contradictory.]  But ConsciousnessBob exists in the first model, whereas ConsciousnessBob does not exist in the second model. Thus, ConsciousnessBob (unlike virtually everything else, including Pluto) is logically independent of science, in the sense that science says nothing about its existence. Even if I assume the inviolate truth of science, I can't conclude anything about the existence of ConsciousnessBob.

In the solipsistic model, science cannot exist. There is no way to prove that anything outside your immediate perceptions exists, because there is no external data to be had; every piece of information that reaches you second-hand is suspect, because the agencies bringing you that information are impossible to verify. This invalidates your assertion that the two models are complete and effectively equivalent.

Consciousness is not an intrinsic quality of me. ConsciousnessMe is an intrinsic property of me, which makes perfect sense (unless you believe that ConsciousnessMe is the collective consciousness of the entire human race, or something). I can make (and test!) lots of statements about other people. What I cannot do is test the existence or behavior of ConsciousnessBob.

That's because you cling to the belief that consciousness and its constituent parts are nonphysical entities. As far as we can tell, they're not; we can observe action in a brain that corresponds to input it receives. By excluding nonphysical nonsense, we can arrive at a definition of and test for consciousness that undermines your assertion that it is impossible to prove other people are conscious.

Quote
I'm not forcing you to do anything. You're free to believe that ConsciousnessMe doesn't exist, or ConsciousnessBob doesn't exist. But I think we both believe that they do exist.

Not forcing me to do anything?

Quote
1. You can either accept or not accept the definition of ConsciousnessMe. If you don't accept it, then you deny that something exists. Let's assume you accept it.
2. You can either accept or not accept the definition of ConsciousnessBob. If you don't accept it, then you're in the solipsistic model. Let's assume you accept it.
3. You can either consider or not consider the question: "Is science independent of ConsciousnessBob?" If you don't consider it, then you're sticking your head in the sand. If you do consider it, then the solipsistic model shows that the answer is "yes".

You are setting up rhetorical questions here that are forcing me to conform to your model in its entirety. I refuse to do so.

Quote
The hows and whys of human cognition are perfectly susceptible to the scientific method. It's an extremely interesting subject in its own right.

And consciousness is an intrinsic part of it. By proclaiming consciousness to be off-limits to science, you are limiting any model for human cognition in such a way as to make research into it useless.

Quote
There's a lot of fuss about "consciousness". My first goal is to extract the really difficult part of that concept. My second goal is to show that the really difficult part isn't susceptible to the scientific method, and that people are waiting for a scientific (rather than philosophical) answer in vain.

Then you've failed. You have so far not shown any reason why a model based purely on biology, chemistry, physics and math is insufficient.

Quote
Invalid in what sense? Not in a way that contradicts science, because science works perfectly fine in the solipsistic model. If you're saying it "feels wrong", then I agree, but that's not a scientific argument.

You are again trying to win an argument by retreating to your definitions. Stop it, and at least try to consider that your definitions are off.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on February 16, 2016, 10:32:55 am
Let's go back to your initial punchline. Most things leading up to it are valid, but the conclusion you draw here is wrong:
Quote
Here's the point. Although the models have differing perspectives, they both incorporate science in its entirety: for every scientific argument that goes through in one model, the same argument also goes through in the other model. (To reiterate my example from Part 1, science predicts the existence of Pluto in both models; the models only disagree on the metaphysical issue of Pluto's "true nature".) Furthermore, both models are sound: they account for all of my observations, and lead to no logical contradictions.  [Unless our current understanding of science is self-contradictory.]  But ConsciousnessBob exists in the first model, whereas ConsciousnessBob does not exist in the second model. Thus, ConsciousnessBob (unlike virtually everything else, including Pluto) is logically independent of science, in the sense that science says nothing about its existence. Even if I assume the inviolate truth of science, I can't conclude anything about the existence of ConsciousnessBob.

In the solipsistic model, science cannot exist. There is no way to prove that anything outside your immediate perceptions exists, because there is no external data to be had; every piece of information that reaches you second-hand is suspect, because the agencies bringing you that information are impossible to verify. This invalidates your assertion that the two models are complete and effectively equivalent.

There isn't a way to prove that anything outside your perception exists (I invite you to try). The statement that something "really exists" is a metaphysical claim about the nature of reality. Fortunately, it's irrelevant to the scientific method, which deals with testable predictions.

If you want to assume that something "really exists", you run into two problems. First, the assumption is completely unnecessary; science deals with relations, and doesn't give a fig about the "true nature" of things. (Battuta said something similar on the first page of the thread.) Second, what do you assume "really exists"? Do parking tickets really exist? What about roads or houses? And if you simply assume that everything in your perception "really exists", you're back to square one.
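To restate the punchline quoted above in formal terms (a minimal sketch; the symbols S, C, M1, and M2 are just my own shorthand, nothing standard):

\[
S = \text{the conjunction of all scientific claims}, \qquad C = \text{``ConsciousnessBob exists''}.
\]
\[
M_1 \models S \land C \quad \text{(the ``Bob is conscious'' model)}, \qquad
M_2 \models S \land \lnot C \quad \text{(the solipsistic model)}.
\]
\[
\therefore \quad S \nvdash C \quad \text{and} \quad S \nvdash \lnot C.
\]

In other words, if both models are sound and both incorporate science in its entirety, then science alone neither proves nor refutes C. This is the same pattern by which one shows an axiom independent of a theory: exhibit one model where it holds and one where it fails.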

Consciousness is not an intrinsic quality of me. ConsciousnessMe is an intrinsic property of me, which makes perfect sense (unless you believe that ConsciousnessMe is the collective consciousness of the entire human race, or something). I can make (and test!) lots of statements about other people. What I cannot do is test the existence or behavior of ConsciousnessBob.

That's because you cling to the belief that consciousness and its constituent parts are nonphysical entities. As far as we can tell, they're not; we can observe action in a brain that corresponds to input it receives. By excluding nonphysical nonsense, we can arrive at a definition of and test for consciousness that undermines your assertion that it is impossible to prove other people are conscious.

You and I are in complete agreement on consciousness as you define it. If you insist on pretending that I never mentioned ConsciousnessMe (perhaps because you're clinging to the belief that science can explain everything), nothing more can be said.

Quote
I'm not forcing you to do anything. You're free to believe that ConsciousnessMe doesn't exist, or ConsciousnessBob doesn't exist. But I think we both believe that they do exist.

Not forcing me to do anything?

Quote
1. You can either accept or not accept the definition of ConsciousnessMe. If you don't accept it, then you deny that something exists. Let's assume you accept it.
2. You can either accept or not accept the definition of ConsciousnessBob. If you don't accept it, then you're in the solipsistic model. Let's assume you accept it.
3. You can either consider or not consider the question: "Is science independent of ConsciousnessBob?" If you don't consider it, then you're sticking your head in the sand. If you do consider it, then the solipsistic model shows that the answer is "yes".

You are setting up rhetorical questions here that are forcing me to conform to your model in its entirety. I refuse to do so.
You are again trying to win an argument by retreating to your definitions. Stop it, and at least try to consider that your definitions are off.

I'm trying my best to read your mind here. What do you mean by "your definitions are off"? Are you saying that they don't actually define anything? That they're meaningless gibberish? I mean, if you're going to pretend I never said anything, there's no point in continuing.

Your argument seems to be something like this: "Both accepting and rejecting the definition of ConsciousnessMe lead to conclusions that threaten my worldview. Therefore, I refuse to take a stance." This is hardly arguing in good faith.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on February 16, 2016, 05:08:33 pm
Okay, I think I know how to make this more palatable. The solipsistic model (in which ConsciousnessMe is "everything") is only one model among unimaginably many. So if you find it too strange, here are some other possibilities.

Preliminary: Let M be "everything". Note that M must contain ConsciousnessMe, by definition. Hence the solipsistic model is minimal.

Option 1 (Physicalism): M is a physical universe that closely resembles ConsciousnessMe. One difference is that Pluto may not actually exist in ConsciousnessMe (though science predicts observations of Pluto under the right circumstances), whereas it does exist in M. Another difference is that M has no special relationship with any one of its inhabitants.

Option 2 (The Matrix): M is a physical universe that obeys basic laws similar to those in ConsciousnessMe, but is otherwise quite different. ConsciousnessMe is a digital simulation within M.

Option 3 (The Clever Demons): M is populated with clever demons that enjoy manipulating ConsciousnessMe. M itself could be a version of hell.

Option 4 (???): M is utterly different from ConsciousnessMe, in ways that are impossible to fathom.
Title: Re: The "hard problem of consciousness"
Post by: watsisname on February 16, 2016, 08:27:08 pm
Any one of these descriptions could be correct.  None of them can be verified or falsified.

I want to emphasize the difference between model as "interpretation of the ultimate nature of reality" and model as "description of how observable phenomena function according to causal rules, with the purpose of having predictive and explanatory power over them".

We could live in a Matrix, and all of our science is still perfectly valid.  In the event that some aspect of the simulation changes, we'll make more observations and update the models to explain that change.  Science still functions.  Meanwhile, we still have no way of proving or disproving that we live in a Matrix.  Any bug or change in the system still looks like the "laws of nature".  Maybe black holes are just a bug.

Maybe the Moon was just a simulated ball of light until humans first landed and walked on its surface.  This is observationally indistinguishable from the model wherein it is an astrophysical object produced from a collision with Earth billions of years ago.  But the astrophysical model has wonderful explanatory power.  It fits within the framework of our understanding of the solar system.  The idea that the Moon was just a simulated ball of light until we landed on it has no explanatory power at all.  It has no motivation from prior knowledge, nor does it further our understanding of anything.  That is not a model in any scientific sense.  It is the antithesis of a model.

We cannot prove to you that "ConsciousnessMe is everything" is wrong.  But we can examine consciousness and formulate explanations for how it arises and operates with the scientific method.  I get the sense that you think these are mutually exclusive (they're not) and that they also have equal footing (they don't).  These may be the most difficult things to wrap your head around.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on February 16, 2016, 08:59:50 pm
Any one of these descriptions could be correct.  None of them can be verified or falsified.

I want to emphasize the difference between model as "interpretation of the ultimate nature of reality" and model as "description of how observable phenomena function according to causal rules, with the purpose of having predictive and explanatory power over them".

We could live in a Matrix, and all of our science is still perfectly valid.  In the event that some aspect of the simulation changes, we'll make more observations and update the models to explain that change.  Science still functions.  Meanwhile, we still have no way of proving or disproving that we live in a Matrix.  Any bug or change in the system still looks like the "laws of nature".  Maybe black holes are just a bug.

Maybe the Moon was just a simulated ball of light until humans first landed and walked on its surface.  This is observationally indistinguishable from the model wherein it is an astrophysical object produced from a collision with Earth billions of years ago.  But the astrophysical model has wonderful explanatory power.  It fits within the framework of our understanding of the solar system.  The idea that the Moon was just a simulated ball of light until we landed on it has no explanatory power at all.  It has no motivation from prior knowledge, nor does it further our understanding of anything.  That is not a model in any scientific sense.  It is the antithesis of a model.

Yes, well put!

We cannot prove to you that "ConsciousnessMe is everything" is wrong.  But we can examine consciousness and formulate explanations for how it arises and operates with the scientific method.  I get the sense that you think these are mutually exclusive (they're not) and that they also have equal footing (they don't).  These may be the most difficult things to wrap your head around.

Of the models I listed, solipsism and physicalism raise the fewest questions. Solipsism doesn't explain the existence of ConsciousnessMe, because explaining "why anything exists" is impossible. At the cost of economy, physicalism does explain the existence of ConsciousnessMe, in the sense that it describes ConsciousnessMe as an emergent property. (But like all non-solipsistic models, it just pushes "the impossible question" one level higher.)

Two more remarks about physicalism. First, it posits a perspective from which Bob and I are essentially the same; I consider this an attractive feature. Second, it seems that ConsciousnessMe would simply be "my real brain". I'm not sure whether this implies that everything is conscious in some sense, or that nothing is conscious.
Title: Re: The "hard problem of consciousness"
Post by: Nyctaeus on February 18, 2016, 08:00:18 am
Wow guys...

Consciousness is nothing but the very basic foundation of the self. I mean, if a man had only consciousness, the only thing he could say would be something like "I am" or "I exist"... Or rather, he would only be aware of his existence, but unable to describe it in any language, because he wouldn't know one. Our memory, emotions, character, etc. depend strictly on brain structure and genetics. Our brain and senses are the means by which consciousness perceives the world around it.

The way we perceive everything around us is a matter of perspective. You should see what we experience when the pineal gland releases a dose of N,N-Dimethyltryptamine. The world is soooo colorful and so cool in that state :P. I mean natural DMT, not synthetic DMT or other drug crap. The pineal gland produces psychedelics in the delta stage of sleep to create dreams, and rarely in some other circumstances. The way we see or hear everything around us differs from person to person.

The world around us is not any kind of Matrix. The laws of physics are solid, science is valid... And I'm sure that the tree in front of me is real, and that everything is not some kind of illusion. Perceptions differ depending on brain state. When we are mad, sad, depressed, or any other crap, we see the world around us as gray, cold, and mostly uninteresting. When we are under all the good hormones like oxytocin or serotonin, we see everything as colorful, beautiful, and cool! Natural DMT gives the best result. I call that a "spiritual high" :P.

Solipsism is, in short, a very radical version of all that I mentioned above. A happy person would see the tree as colorful, green, and so on; a depressed person would see the tree as gray, ugly, etc., but it's still the same tree. And a dozen other people would see the same tree as well, though their experiences would differ from each other. I can see where the author of this view was coming from, but it's actually over-interpretation. The examples of the happy and sad man are just the tip of the iceberg, because there are people in psychotic states, and heavy drug users, who would see the tree in some kind of abstract way... Or see a tentacle monster or some other crap instead. But as I said, those are drugs and psychoses of all kinds. We may debate why all of these people have hallucinations and a corrupted perspective, and how that really affects their perception of reality. I would rather say that psychosis is psychosis, and almost nothing they say can be taken seriously.

The other thing is brainwaves and how they affect us. There are five commonly named frequency bands (the ranges below are approximate and vary by source; see the code sketch after this list):
- Gamma - roughly 30-100 Hz - fast activity, associated with attention and higher cognition
- Beta - 13-30 Hz - standard, casual waking activity
- Alpha - 8-13 Hz - relaxation, calm, resting
- Theta - 4-8 Hz - first and last stages of sleep, hypnosis, meditation
- Delta - 0.5-4 Hz - the deepest stage of sleep
While people operate mostly in beta (with bursts of gamma) during waking hours, there are also the lower-frequency bands, theta and delta. Theta is present during the first and the last stage of sleep, in hypnosis, and in meditation. Delta is the slowest known band, present during the deepest stage of sleep and in long meditation. The way of perceiving reality there is completely different: during deep meditation, people experience no time, no space, no matter... And I guess no trees, but that's probably because a person in meditation that deep has their eyes closed :P.
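Just as an illustration, here is a toy sketch of those bands in code (my own example; the boundaries follow the approximate ranges listed above, and published ranges vary from source to source):

Code:
def eeg_band(freq_hz):
    """Classify an EEG frequency (in Hz) into a rough band.

    Boundaries are the approximate ones listed above; sources differ.
    """
    bands = [
        ("delta", 0.5, 4.0),    # deepest sleep, long meditation
        ("theta", 4.0, 8.0),    # first/last sleep stages, hypnosis, meditation
        ("alpha", 8.0, 13.0),   # relaxed, calm, resting
        ("beta",  13.0, 30.0),  # standard, casual waking activity
        ("gamma", 30.0, 100.0), # fast activity: attention, higher cognition
    ]
    for name, low, high in bands:
        if low <= freq_hz < high:
            return name
    return "outside the usual EEG range"

print(eeg_band(10))  # alpha
print(eeg_band(2))   # delta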

I don't know whether consciousness is a direct product of brain activity, or whether it can exist without the brain on other principles. The only thing I know about consciousness is that I am the consciousness. I'm experiencing the world around me through my senses, and my brain structure, my emotions, my memories and such are filters for the information I'm perceiving. I'd like to think that my brain is some kind of quantum computer, as some new scientific theories suggest. I can't wait for science to understand and describe this phenomenon. Both solipsism and physicalism are true at some point, but that doesn't depend on the actual laws of physics or anything else. These terms were created because people used to think about everything in a binary, dualistic, and [very often] radical way. They are two ways of describing human perception.

Well... I can say that when I see the tree, it's definitely there, and I can say that after years of meditation :P
Title: Re: The "hard problem of consciousness"
Post by: The E on February 18, 2016, 08:20:21 am
Well... I can say that when I see the tree, it's definitely there, and I can say that after years of meditation :P

Wrong. It is impossible to prove that the reality you're experiencing is actually real and not the product of an elaborate simulation in a computer somewhere (see this (https://en.wikipedia.org/wiki/Simulation_hypothesis) for reference).

That being said, it's a hypothesis with very little bearing on day-to-day life unless evidence is discovered that exploiting the simulation is possible.
Title: Re: The "hard problem of consciousness"
Post by: Bobboau on February 18, 2016, 10:22:11 am
No, the tree is real; it is just that your understanding of the nature of reality is possibly inaccurate. The simulation has an entity in it you can interact with: you can pick its fruit, you can lie in its leaves, you can chop it down. If the entire universe is simulated on some massively powerful supercomputer in another dimension, it still exists. When I type out the character "A", it still exists; the fact that its existence is defined by a pattern of electrical charges in a computer's memory doesn't make it unreal. When I submit this post and that pattern is transferred to a data packet, it still exists; when it gets stored in the database as a pattern of magnetic polarization, it still exists. If everything is simulated, you can still know things about that simulation, and learning that it is a simulation is within that purview.
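A toy illustration of that point (my own example, and the layers shown are just a few of many possible ones): the "same" character A persists across very different physical encodings, and is no less real for it.

Code:
ch = "A"
print(ch)                      # the glyph as text: A
print(ord(ch))                 # its Unicode code point: 65
print(ch.encode("utf-8"))      # as bytes in a data packet: b'A'
print(format(ord(ch), "08b"))  # as a bit pattern in memory: 01000001
# Four descriptions of the same entity; none of them makes "A" unreal.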
Title: Re: The "hard problem of consciousness"
Post by: watsisname on February 18, 2016, 01:47:04 pm

The way we perceive everything around us is a matter of perspective. You should see what we experience when the pineal gland releases a dose of N,N-Dimethyltryptamine. The world is soooo colorful and so cool in that state :P. I mean natural DMT, not synthetic DMT or other drug crap. The pineal gland produces psychedelics in the delta stage of sleep to create dreams, and rarely in some other circumstances. The way we see or hear everything around us differs from person to person.

The world around us is not any kind of Matrix. The laws of physics are solid, science is valid... And I'm sure that the tree in front of me is real, and that everything is not some kind of illusion. Perceptions differ depending on brain state. When we are mad, sad, depressed, or any other crap, we see the world around us as gray, cold, and mostly uninteresting. When we are under all the good hormones like oxytocin or serotonin, we see everything as colorful, beautiful, and cool! Natural DMT gives the best result. I call that a "spiritual high" :P.

NN-DMT is pretty neat. :)

However, I've never found credible evidence that the tiny amounts of DMT present in the human body are produced in the pineal gland, or that they are responsible for our dreams or altered states of consciousness in any meaningful way. If that were true, I'd expect someone to have found a pretty obvious change in people who have undergone pinealectomy.

And, yeah, what Bob and E said.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on December 23, 2017, 05:48:39 pm
Apologies for necroposting yet again! I figured I'd keep the consciousness talk in one place; however, please feel free to split the thread.

I recently read three articles, all of them fascinating. (Biased as I am, the last two articles discuss the challenge that consciousness poses to science.) For convenience, I've linked and copy-pasted the articles. Bear in mind that I omitted some stuff when copy-pasting, e.g. footnotes and tangential material.

First, a collision between two of my favorite things: Peter Watts, marine biologist and author of "Blindsight", reviews deep-sea horror game "SOMA". (http://www.rifters.com/crawl/?p=6953)
Spoiler:
“If there’s an afterlife, is my place taken? Is heaven full of people who would call me an imposter?”
— Simon Jarrett, upon realizing that he is a digitized copy.

Ever since the turn of the century I’ve had a— well, not a love/hate relationship with video games so much as a love/indifference one. I’ve worked on several game projects that never made it to market, wrote a tie-in novel for a game that did. Occasionally my work has inspired games I’ve had nothing to do with; the creators of Bioshock 2 and Torment: Tides of Numenera cite me as an influence, for example. There’s a vampire in The Witcher 3 named Sarasti. Eclipse Phase, the paper-based open-source role-playing game, names me in their references. And so on.

For one reason or another, I’ve never got around to actually playing any of these games. But a fan recently gifted me with a download of Frictional Games’ SOMA, whose creators also cite me as inspirational (alongside Greg Egan, China Miéville, and Philip K. Dick). And in the course of the occasional egosurf I’ve stumbled across various blogs and forums in which people have commented on the peculiar Wattsiness of this particular title. So what the hell, I figured; I needed something to write about this week, and it was either gonna be SOMA or my first acid trip.

Major Spoilers after the graphic, so stop reading if you’re still saving yourself for your own run at the game. (Although if you’re still doing that a solid year after its release, you’re even further behind the curve than I am.)

In SOMA you play Simon: a regular dude from 2015 Toronto, who— following a brain scan at the notoriously-disreputable York University— suddenly finds himself a hundred years in the future, just after a cometary impact has wiped out all life on the surface of the earth. Simon doesn’t have to worry about that, though— not in the short term, at least— because he’s not on the surface of the Earth. He’s stuck in a complex of derelict undersea habitats near a  geothermal vent, where (among other things) he is attacked by giant mutant viperfish and caught up in a story centering around the nature of consciousness. “I’d really like to know who thought sending a Canadian to the bottom of the sea was a good idea,” he blurts out at one point. “I miss Toronto. In Toronto I knew who I was.”

So yeah, I can see a certain Watts influence. Maybe even a bit of homage.

If I was feeling especially egotistical I could really push it. Those subway stations Simon cruises through on his way to York— not that far from where I used to live. His in-game buddy Catherine once mistakenly remarks that he comes from Vancouver, where I lived before that. Hell, if I wanted to pull out all the stops I could even point out that Jesus Christ’s Number Two Man (and the first of the popes) was called Simon Peter. Coincidence?

Yeah, probably. That last thing, anyway. Then again, any game whose major selling point was its Peter Watts references would be shooting for a pretty limited market. Fortunately, SOMA is more substantive. In fact, it may not be so much inspired by my writing (or Dick’s, or Egan’s, or Miéville’s) as we all are inspired by the same scary-cool stuff that underlies human existence. We’re all drinking from the same well, we all lie awake at night haunted by the same existential questions: how can meat possibly wake up? Where does subjective awareness come from? What is it like to be attacked by giant mutant viperfish at four thousand meters?

SOMA’s influences extend beyond the usual list of authors you’ll find online (or quoted at the top of this post, for that matter). The biocancer that infests and reshapes everything from people to anglerfish seems more than a little reminiscent of the Melding Plague in Alastair Reynolds’ Revelation Space, for example. And while Simon’s belated discovery that he’s basically a digitized brain scan riding a corpse in a suit of armor might seem lifted directly from the Nanosuit in my Crysis 2 tie-in novel, I lifted that idea in turn from Richard Morgan’s game script.

So much for the parts SOMA cannibalized.  How does it stitch them together?

For starters, the game is gorgeous to behold and insanely creepy to hear. The murk of the conshelf, the punctuated blackness of the abyss, the clanks and creaks of overstressed hull plating just this side of implosion keep you awestruck and on-edge in equal measure. Of course, these days that’s true for pretty much any game worth reviewing (Alien: Isolation comes to mind— you might almost describe SOMA as an undersea Alien: Isolation with a neurophilosophy filling). SOMA’s technology seems strangely antiquated for the 22nd century — flickering fluorescent light tubes, seventies-era video cameras, desktop computers that look significantly less advanced than the latest offerings down at Staples— but that’s also true for a lot of games these days. (Alien: Isolation gets a pass on that because it was honoring the aesthetic of the movie. The Deus Ex franchise, not so much.)

There’s not much of an interface to interfere with the view, no hit points or health icons cluttering up the edges of your display. You know you’ve been injured when your vision blurs and you can’t run any more.  You have no weapons to keep track of. The inventory option is a joke: for 90% of the game, you’re completely empty-handed except for a glorified door-opener to help you get around. It’s way more minimalist than most player interfaces, and the better for it.

Likewise, dialog options are pretty much nonexistent. Now and then you can choose to start a conversation, but from that point on you’re essentially listening to a radio play. I think Frictional made the right choice here, too. All those clunky dialog menus that pop up in Fallout or Mass Effect— those same four or five options offered up time after time, regardless of context (Really?  I want to ask Piper about our relationship now?)— offer just enough conversational flexibility to really drive home how little conversational flexibility you have. It’s one of the inherent weaknesses of computer games as an art form— game tech just isn’t advanced enough to improvise decent dialog on the fly.

SOMA cuts the player out of the loop entirely during the talky bits. The cost is that we lose the illusion of control (which is actually kind of meta if you think about it); the benefit is that we get richer dialog, deeper characters, shock and tantrums and emotional investment to go along with the thought experiment. Simon isn’t some empty vessel for the player to pour themselves into; he’s a living character in his own right.

I’ll grant that he’s not a very bright one. He mentions at one point that he used to work in a bookstore, but given how long it takes him to catch on to certain things I’m willing to bet that its SF and pop-science sections were pretty crappy.  Simon’s a nice guy, and I really felt for him— but if his home town was, in fact, a nod to my own, I can only hope the same cannot be said for his intellect.

On the other hand, who’s to say I’d be any quicker on the draw if I was the dusty photocopy of a long-dead brain, thrown headlong and without warning into Apocalypse? I don’t know if anyone would be firing on all synapses under those conditions; and the languid pace at which Simon clues in does provide a convenient opportunity to hammer home certain philosophical issues to which a lot of players won’t have given much prior thought.  The fact that Simon’s sidekick Catherine grows increasingly impatient with his “bull****”, with the fact that she has to keep repeating herself, suggests that this was a deliberate decision on Frictional’s part.

But if Simon’s a bit slow on the uptake, SOMA isn’t. Even the scenery is smart. Wandering the seabed, at depths ranging from a few hundred meters to four thousand, the fauna just looks right: spider crabs, rattails, tiny bioluminescent squid and tube worms and iridescent, gorgeous ctenophores (ctenophores! How many of you even know what those are?) Inside one of the habitats, a dead scientist’s lab notes remark upon the sighting of a Chauliodus (“viperfish” to you yokels): “Not usually found at this depth— anomaly”. I wet myself a little when I read that. Writing Starfish back in the nineties, I too had to grapple with the fact that viperfish don’t foray into the deep abyss. I had to come up with my own explanation for why they did so at Channer Vent.

Smart or dumb, though, the ocean floor is mere setting: SOMA’s story revolves around issues of consciousness. Frictional did their homework here too. Sure, there’s the usual throwaway stuff— one model of sapience-compatible drone is dubbed “Qualia-class”— but stuff like the Body Transfer and Rubber Hand Illusions aren’t just name-checked; they actively inform vital elements of the plot.  People come equipped with “black boxes” in their brains that can be forensically data-mined post-mortem. (This proves useful in figuring out SOMA’s backstory, an ingenious new twist on the usual Let’s find personal diaries lying around everywhere more commonly employed in such games.) Most of the lynchpin events in this story occur not to affect the course of the plot, but to make you think about its underlying themes.

By way of comparison, look to SOMA’s spiritual cousin, Bioshock. For all its explicit in-your-face references to Ayn-Randian ideology,  Bioshock fails as analysis. (At best, its analysis amounts to Objectivism is bad because when capitalism runs amok, genetically-engineered nudibranchs will result in widespread insanity and the ability to shoot live bees out of your hands.) Andrew Ryan’s political beliefs serve as mere backdrop to the action, and as wall-paper rationale for the setting; but the events of the story could have just as easily gone down in a failed socialist utopia as a capitalist one. Bioshock was brilliant in the way it used the mechanics of game play to inform one of its themes (I’ve yet to see its equal in that regard), but that particular theme revolved around the existence of free will, with no substantive connection to Objectivist ideology. SOMA, in contrast, actually grapples with the issues it presents; it makes them part of the plot.

In fact, you could argue that SOMA is actually more rumination than game, an extended scenario that systematically builds a case towards an inevitable, nihilistic conclusion (two nihilistic conclusions actually, the second superficially brighter and happier than the first but actually way more depressing if you stop to think about it). If there’s a problem with this game, it’s that the story is so tight, the rumination so railbound, that it can’t afford to give the player much freedom for fear they’ll screw up the scenario. There’s really only one way to play SOMA. Discoveries and revelations have to happen in a specific order, conversations must proceed in a certain way. The obligatory monsters— justified as failed prototypes, built by an AI trying to create Humanity 2.0— don’t really do anything story-wise. You can’t kill them. You can’t talk to them. You can’t scavenge their carcasses for booty, or fashion a makeshift cannon from local leftovers and  blow them away. Your interactive options consist exclusively of run and hide. SOMA’s monsters serve no real purpose except to creep you out, and slow your progress along a narrative monorail.

There are choices to be made— surprisingly affecting ones— but they don’t affect the outcome of the plot. Your reaction to the last surviving human— wasting away in some flickering half-lit locker at the bottom of the sea, IV needle festering in her arm, pictures of her beloved Greenland (gone now, along with everything else) scattered across the deck— who only wants to die. The repeated activation and interrogation of an increasingly panicky being who doesn’t know he’s digitized (although he sure as **** knows something‘s wrong), a being you simply discard once you have what you need from him. The treatment of your own earlier iterations, still inconveniently extant after your transcription into a new host. These powerful moments exist not so much to further the story as to inspire reflection upon a story already decided— and they might be missed entirely by a player with too much freedom, able to go where they will and when. It’s the age-old tension between sandbox and narrative, autonomy and storytelling. Frictional has sacrificed one for the other, so— as immersive as this game is— it’s bound to suck at replay value.

It’s easy enough to justify such creative decisions in principle; in practice, the result sometimes feels like a cheat. I spent half an hour tromping around the seabed looking for a particular item among the wreckage— a computer chip— that would spare me the need to kill a sapient drone for the same vital part. It would have been easy enough for Frictional to give me that option; they’d already littered the seabed with wrecked drones, it wouldn’t have killed them to leave me some usable salvage. But no. The only way forward was to slaughter an innocent being. It made the point, philosophically, but it felt wrong somehow. Forced.

This would normally be the point at which I ***** and moan about how, for all the “inspiration” game developers attribute to me, it would be really nice if they might someday be inspired to actually hire me instead of just mining my stories. It would be an utterly bull**** whinge—  I’ve admitted to gaming gigs in my past on this very post— but I’d make it anyway because, Hey: if one of your inspirations is sitting right there in the corner next to the potted philodendron, why not ask him for a dance? He might just teach you a couple of new steps.[1]

This time, though, I’m going to restrain myself. SOMA could not have been an easy assignment; I could ***** about the monorail gameplay constraints or the intermittent dimness of the protagonist, but given the limitations of the medium I don’t know that I could do any better without compromising mission priorities.  SOMA is a game in the straight-up survival-horror mode, but the horror is more existential than visceral. And those conventional mechanics serve the most substantive theme I’ve ever encountered in a video game.

Bottom line, I think they did a damn fine job.

[1] This metaphor is in no way meant to imply that I am any kind of dancer.  My most recent memories of dancing involve jumping wildly up and down and slapping my thighs in approximate time to Money for Nothing.

Next, while reviewing one of Scott Bakker's books, Edward Feser describes the "lump under the rug" fallacy. (http://edwardfeser.blogspot.com/2015/01/post-intentional-depression.html) ("If everything else can be explained by science, why should consciousness be any different?")
Spoiler:
Bakker wonders why we are “so convinced that we are the sole exception, the one domain that can be theoretically cognized absent the prostheses of science.”  After all, other aspects of the natural world have been radically re-conceived by science.  So why do we tend to suppose that human nature is not subject to such radical re-conception -- for instance, to the kind of re-conception proposed by eliminativism?  Bakker’s answer is that we take ourselves to have a privileged epistemic access to ourselves that we don’t have to the rest of the world.  He then suggests that we should not regard this epistemic access as privileged, but merely different.

Now, elsewhere I have noted the fallaciousness of arguments to the effect that neuroscience has shown that our self-conception is radically mistaken.  For instance, in one of the posts on Rosenberg alluded to above, I respond to claims to the effect that “blindsight” phenomena and Libet’s free will experiments cast doubt on the reliability of introspection.  Here I want to focus on the presupposition of Bakker’s question, and on another kind of fallacious reasoning I’ve called attention to many times over the years.  The presupposition is that science really has falsified our commonsense understanding of the rest of the world, and the fallacy behind this presupposition is what I call the “lump under the rug” fallacy.

Suppose the wood floors of your house are filthy and that the dirt is pretty evenly spread throughout the house.  Suppose also that there is a rug in one of the hallways.  You thoroughly sweep out one of the bedrooms and form a nice little pile of dirt at the doorway.  It occurs to you that you could effectively “get rid” of this pile by sweeping it under the nearby rug in the hallway, so you do so.  The lump under the rug thereby formed is barely noticeable, so you are pleased.  You proceed to sweep the rest of the bedrooms, the bathroom, the kitchen, etc., and in each case you sweep the resulting piles under the same rug.  When you’re done, however, the lump under the rug has become quite large and something of an eyesore.  Someone asks you how you are going to get rid of it.  “Easy!” you answer.  “The same way I got rid of the dirt everywhere else!  After all, the ‘sweep it under the rug’ method has worked everywhere else in the house.  How could this little rug in the hallway be the one place where it wouldn’t work?  What are the odds of that?”

This answer, of course, is completely absurd.  Naturally, the same method will not work in this case, and it is precisely because it worked everywhere else that it cannot work in this case.  You can get rid of dirt outside the rug by sweeping it under the rug.  You cannot get rid of the dirt under the rug by sweeping it under the rug.  You will only make a fool of yourself if you try, especially if you confidently insist that the method must work here because it has worked so well elsewhere.

Now, the “Science has explained everything else, so how could the human mind be the one exception?” move is, of course, standard scientistic and materialist shtick.  But it is no less fallacious than our imagined “lump under the rug” argument.

Here’s why.  Keep in mind that Descartes, Newton, and the other founders of modern science essentially stipulated that nothing that would not fit their exclusively quantitative or “mathematicized” conception of matter would be allowed to count as part of a “scientific” explanation.  Now to common sense, the world is filled with irreducibly qualitative features -- colors, sounds, odors, tastes, heat and cold -- and with purposes and meanings.  None of this can be analyzed in quantitative terms.  To be sure, you can re-define color in terms of a surface’s reflection of light of certain wavelengths, sound in terms of compression waves, heat and cold in terms of molecular motion, etc.  But that doesn’t capture what common sense means by color, sound, heat, cold, etc. -- the way red looks, the way an explosion sounds, the way heat feels, etc.  So, Descartes and Co. decided to treat these irreducibly qualitative features as projections of the mind.  The redness we see in a “Stop” sign, as common sense understands redness, does not actually exist in the sign itself but only as the quale of our conscious visual experience of the sign; the heat we attribute to the bathwater, as common sense understands heat, does not exist in the water itself but only in the “raw feel” that the high mean molecular kinetic energy of the water causes us to experience; meanings and purposes do not exist in external material objects but only in our minds, and we project these onto the world; and so forth.  Objectively there are only colorless, odorless, soundless, tasteless, meaningless particles in fields of force.

In short, the scientific method “explains everything else” in the world in something like the way the “sweep it under the rug” method gets rid of dirt -- by taking the irreducibly qualitative and teleological features of the world, which don’t fit the quantitative methods of science, and sweeping them under the rug of the mind.  And just as the literal “sweep it under the rug” method generates under the rug a bigger and bigger pile of dirt which cannot in principle be gotten rid of using the “sweep it under the rug” method, so too does modern science’s method of treating irreducibly qualitative, semantic, and teleological features as mere projections of the mind generate in the mind a bigger and bigger “pile” of features which cannot be explained using the same method.

This is the reason the qualia problem, the problem of intentionality, and other philosophical problems touching on human nature are so intractable.  Indeed, it is one reason many post-Cartesian philosophers have thought dualism unavoidable.  If you define “material” in such a way that irreducibly qualitative, semantic, and teleological features are excluded from matter, but also say that these features exist in the mind, then you have thereby made of the mind something immaterial.  Thus, Cartesian dualism was not some desperate rearguard action against the advance of modern science; on the contrary, it was the inevitable consequence of modern science (or, more precisely, the inevitable consequence of regarding modern science as giving us an exhaustive account of matter).

So, like the floor sweeper who is stuck with a “dualism” of dirt-free floors and a lump of dirt under the rug, those who suppose that the scientific picture of matter is an exhaustive picture are stuck with a dualism of, on the one hand, a material world entirely free of irreducibly qualitative, semantic, or teleological features, and on the other hand a mental realm defined by its possession of irreducibly qualitative, semantic, and teleological features.  The only way to avoid this dualism would be to deny that the latter realm is real -- that is to say, to take an eliminativist position.  But as I have said, there is no coherent way to take such a position.  The eliminativist who insists that intentionality is an illusion -- where illusion is, of course, an intentional notion (and where no eliminativist has been able to come up with a non-intentional substitute for it) -- is like the yutz sweeping the dirt that is under the rug back under the rug while insisting that he is thereby getting rid of the dirt under the rug.

Finally, Sam Harris on the weirdness of consciousness.

Part one. (https://www.samharris.org/blog/item/the-mystery-of-consciousness)
Spoiler:
You are not aware of the electrochemical events occurring at each of the trillion synapses in your brain at this moment. But you are aware, however dimly, of sights, sounds, sensations, thoughts, and moods. At the level of your experience, you are not a body of cells, organelles, and atoms; you are consciousness and its ever-changing contents, passing through various stages of wakefulness and sleep, and from cradle to grave.

The term “consciousness” is notoriously difficult to define. Consequently, many a debate about its character has been waged without the participants’ finding even a common topic as common ground. By “consciousness,” I mean simply “sentience,” in the most unadorned sense. To use the philosopher Thomas Nagel’s construction: A creature is conscious if there is “something that it is like” to be this creature; an event is consciously perceived if there is “something that it is like” to perceive it. ⁠Whatever else consciousness may or may not be in physical terms, the difference between it and unconsciousness is first and foremost a matter of subjective experience. Either the lights are on, or they are not.

To say that a creature is conscious, therefore, is not to say anything about its behavior; no screams need be heard, or wincing seen, for a person to be in pain. Behavior and verbal report are fully separable from the fact of consciousness: We can find examples of both without consciousness (a primitive robot) and consciousness without either (a person suffering “locked-in syndrome”).

It is surely a sign of our intellectual progress that a discussion of consciousness no longer has to begin with a debate about its existence. To say that consciousness may only seem to exist is to admit its existence in full—for if things seem any way at all, that is consciousness. Even if I happen to be a brain in a vat at this moment—all my memories are false; all my perceptions are of a world that does not exist—the fact that I am having an experience is indisputable (to me, at least).  This is all that is required for me (or any other conscious being) to fully establish the reality of consciousness. Consciousness is the one thing in this universe that cannot be an illusion.

As our understanding of the physical world has evolved, our notion of what counts as “physical” has broadened considerably. A world teeming with fields and forces, vacuum fluctuations, and the other gossamer spawn of modern physics is not the physical world of common sense. In fact, our common sense seems to be stuck somewhere in the 16th century. We have also generally forgotten that many of the patriarchs of physics in the first half of the 20th century regularly impugned the “physicality” of the universe. Nonreductive views like those of Eddington, Jeans, Pauli, Heisenberg, and Schrödinger seem to have had no lasting impact. In some ways we can be thankful for this, for a fair amount of mumbo jumbo was in the air. Wolfgang Pauli, for instance, though one of the titans of modern physics, was also a devotee of Carl Jung, who apparently analyzed no fewer than 1,300 of the great man’s dreams. Pauli’s thoughts about the irreducibility of mind seem to have had as much to do with Jung’s least credible ideas as with quantum mechanics.

Such numinous influences eventually subsided. And once physicists got down to the serious business of building bombs, we were apparently returned to a universe of objects—and to a style of discourse, across all branches of science and philosophy, that made the mind seem ripe for reduction to the “physical” world.

The problem, however, is that no evidence for consciousness exists in the physical world. Physical events are simply mute as to whether it is “like something” to be what they are. The only thing in this universe that attests to the existence of consciousness is consciousness itself; the only clue to subjectivity, as such, is subjectivity. Absolutely nothing about a brain, when surveyed as a physical system, suggests that it is a locus of experience. Were we not already brimming with consciousness ourselves, we would find no evidence of it in the physical universe—nor would we have any notion of the many experiential states that it gives rise to. The painfulness of pain, for instance, puts in an appearance only in consciousness. And no description of C-fibers or pain-avoiding behavior will bring the subjective reality into view.

If we look for consciousness in the physical world, all we find are increasingly complex systems giving rise to increasingly complex behavior—which may or may not be attended by consciousness.  The fact that the behavior of our fellow human beings persuades us that they are (more or less) conscious does not get us any closer to linking consciousness to physical events.  Is a starfish conscious? A scientific account of the emergence of consciousness would answer this question. And it seems clear that we will not make any progress by drawing analogies between starfish behavior and our own. It is only in the presence of animals sufficiently like ourselves that our intuitions about (and attributions of) consciousness begin to crystallize. Is there “something that it is like” to be a cocker spaniel? Does it feel its pains and pleasures? Surely it must. How do we know? Behavior, analogy, parsimony.

Most scientists are confident that consciousness emerges from unconscious complexity. We have compelling reasons for believing this, because the only signs of consciousness we see in the universe are found in evolved organisms like ourselves. Nevertheless, this notion of emergence strikes me as nothing more than a restatement of a miracle. To say that consciousness emerged at some point in the evolution of life doesn’t give us an inkling of how it could emerge from unconscious processes, even in principle.

I believe that this notion of emergence is incomprehensible—rather like a naive conception of the big bang. The idea that everything (matter, space-time, their antecedent causes, and the very laws that govern their emergence) simply sprang into being out of nothing seems worse than a paradox. “Nothing,” after all, is precisely that which cannot give rise to “anything,” let alone “everything.” Many physicists realize this, of course. Fred Hoyle, who coined “big bang” as a term of derogation, is famous for opposing this creation myth on philosophical grounds, because such an event seems to require a “preexisting space and time.” In a similar vein, Stephen Hawking has said that the notion that the universe had a beginning is incoherent, because something can begin only with reference to time, and here we are talking about the beginning of space-time itself. He pictures space-time as a four-dimensional closed manifold, without beginning or end—much like the surface of a sphere.

Naturally, it all depends on how one defines “nothing.” The physicist Lawrence Krauss has written a wonderful book arguing that the universe does indeed emerge from nothing. But in the present context, I am imagining a nothing that is emptier still—a condition without antecedent laws of physics or anything else. It might still be true that the laws of physics themselves sprang out of nothing in this sense, and the universe along with them—and Krauss says as much. Perhaps that is precisely what happened. I am simply claiming that this is not an explanation of how the universe came into being. To say “Everything came out of nothing” is to assert a brute fact that defies our most basic intuitions of cause and effect—a miracle, in other words.

Likewise, the idea that consciousness is identical to (or emerged from) unconscious physical events is, I would argue, impossible to properly conceive—which is to say that we can think we are thinking it, but we are mistaken. We can say the right words, of course—“consciousness emerges from unconscious information processing.” We can also say “Some squares are as round as circles” and “2 plus 2 equals 7.” But are we really thinking these things all the way through? I don’t think so.

Consciousness—the sheer fact that this universe is illuminated by sentience—is precisely what unconsciousness is not. And I believe that no description of unconscious complexity will fully account for it. It seems to me that just as “something” and “nothing,” however juxtaposed, can do no explanatory work, an analysis of purely physical processes will never yield a picture of consciousness. However, this is not to say that some other thesis about consciousness must be true. Consciousness may very well be the lawful product of unconscious information processing. But I don’t know what that sentence means—and I don’t think anyone else does either.

Part two. (https://www.samharris.org/blog/item/the-mystery-of-consciousness-ii)
Spoiler:
The universe is filled with physical phenomena that appear devoid of consciousness. From the birth of stars and planets, to the early stages of cell division in a human embryo, the structures and processes we find in Nature seem to lack an inner life. At some point in the development of certain complex organisms, however, consciousness emerges. This miracle does not depend on a change of materials—for you and I are built of the same atoms as a fern or a ham sandwich. Rather, it must be a matter of organization. Arranging atoms in a certain way appears to bring consciousness into being. And this fact is among the deepest mysteries given to us to contemplate.

Many readers of my previous essay did not understand why the emergence of consciousness should pose a special problem to science. Every feature of the human mind and body emerges over the course of development: Why is consciousness more perplexing than language or digestion? The problem, however, is that the distance between unconsciousness and consciousness must be traversed in a single stride, if traversed at all. Just as the appearance of something out of nothing cannot be explained by our saying that the first something was “very small,” the birth of consciousness is rendered no less mysterious by saying that the simplest minds have only a glimmer of it.

This situation has been characterized as an “explanatory gap” and the “hard problem of consciousness,” and it is surely both. I am sympathetic with those who, like the philosopher Colin McGinn and the psychologist Steven Pinker, have judged the impasse to be total: Perhaps the emergence of consciousness is simply incomprehensible in human terms. Every chain of explanation must end somewhere—generally with a brute fact that neglects to explain itself. Consciousness might represent a terminus of this sort. Defying analysis, the mystery of inner life may one day cease to trouble us.

However, many people imagine that consciousness will yield to scientific inquiry in precisely the way that other difficult problems have in the past. What, for instance, is the difference between a living system and a dead one? Insofar as the question of consciousness itself can be kept off the table, it seems that the difference is now reasonably clear to us. And yet, as late as 1932, the Scottish physiologist J.S. Haldane (father of J.B.S. Haldane) wrote:

"What intelligible account can the mechanistic theory of life give of the…recovery from disease and injuries? Simply none at all, except that these phenomena are so complex and strange that as yet we cannot understand them. It is exactly the same with the closely related phenomena of reproduction. We cannot by any stretch of the imagination conceive a delicate and complex mechanism which is capable, like a living organism, of reproducing itself indefinitely often."

Scarcely twenty years passed before our imaginations were duly stretched. Much work in biology remains to be done, of course, but anyone who entertains vitalism at this point stands convicted of basic ignorance about the nature of living systems. The jury is no longer out on questions of this sort, and more than half a century has passed since the earth’s creatures required an élan vital to propagate themselves or to recover from injury. Are doubts that we will arrive at a physical explanation of consciousness analogous to doubts about the feasibility of explaining life in terms of processes that are not alive?

The analogy is a bad one: Life is defined according to external criteria; Consciousness is not (and, I think, cannot be). We would never have occasion to say of something that does not eat, excrete, grow, or reproduce that it might nevertheless be “alive.” It might, however, be conscious.

But other analogies seem to offer hope. Consider our sense of sight: Doesn’t vision emerge from processes that are themselves blind? And doesn’t such a miracle of emergence make consciousness seem less mysterious?

Unfortunately, no. In the case of vision, we are speaking merely about the transduction of one form of energy into another (electromagnetic into electrochemical). Photons cause light-sensitive proteins to alter the spontaneous firing rates of our rods and cones, beginning an electrochemical cascade that affects neurons in many areas of the brain—achieving, among other things, a topographical mapping of the visual scene onto the visual cortex. While this chain of events is complicated, the fact of its occurrence is not in principle mysterious. The emergence of vision from a blind apparatus strikes us as a difficult problem simply because when we think of vision, we think of the conscious experience of seeing. That eyes and visual cortices emerged over the course of evolution presents no special obstacles to us; that there should be “something that it is like” to be the union of an eye and a visual cortex is itself the problem of consciousness—and it is as intractable in this form as in any other.

But couldn’t a mature neuroscience nevertheless offer a proper explanation of human consciousness in terms of its underlying brain processes? We have reasons to believe that reductions of this sort are neither possible nor conceptually coherent. Nothing about a brain, studied at any scale (spatial or temporal), even suggests that it might harbor consciousness. Nothing about human behavior, or language, or culture, demonstrates that these products are mediated by subjectivity. We simply know that they are—a fact that we appreciate in ourselves directly and in others by analogy.

Here is where the distinction between studying consciousness and studying its contents becomes paramount. It is easy to see how the contents of consciousness might be understood at the level of the brain. Consider, for instance, our experience of seeing an object—its color, contours, apparent motion, location in space, etc. arise in consciousness as a seamless unity, even though this information is processed by many separate systems in the brain. Thus when a golfer prepares to hit a shot, he does not first see the ball’s roundness, then its whiteness, and only then its position on the tee. Rather, he enjoys a unified perception of a ball. Many neuroscientists believe that this phenomenon of “binding” can be explained by disparate groups of neurons firing in synchrony. Whether or not this theory is true, it is perfectly intelligible—and it suggests, as many other findings in neuroscience do, that the character of our experience can often be explained in terms of its underlying neurophysiology. However, when we ask why it should be “like something” to see in the first place, we are returned to the mystery of consciousness in full.

For these reasons, it is difficult to imagine what experimental findings could render the emergence of consciousness comprehensible. This is not to say, however, that our understanding of ourselves won’t change in surprising ways through our study of the brain. There seems to be no limit to how a maturing neuroscience might reshape our beliefs about the nature of conscious experience. Are we fully conscious during sleep and merely failing to form memories? Can human minds be duplicated or merged? Is it possible to love your neighbor as yourself? A precise, functional neuroanatomy of our mental states would help to answer such questions—and the answers might well surprise us. And yet, whatever insights arise from correlating mental and physical events, it seems unlikely that one side of the world will be fully reduced to the other.

While we know many things about ourselves in anatomical, physiological, and evolutionary terms, we do not know why it is “like something” to be what we are. The fact that the universe is illuminated where you stand—that your thoughts and moods and sensations have a qualitative character—is a mystery, exceeded only by the mystery that there should be something rather than nothing in this universe. How is it that unconscious events can give rise to consciousness? Not only do we have no idea, but it seems impossible to imagine what sort of idea could fit in the space provided. Therefore, although science may ultimately show us how to truly maximize human well-being, it may still fail to dispel the fundamental mystery of our mental life. That doesn’t leave much scope for conventional religious doctrines, but it does offer a deep foundation (and motivation) for introspection. Many truths about ourselves will be discovered in consciousness directly, or not discovered at all.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on December 24, 2017, 03:22:25 am
Thank you, those were great!
Title: Re: The "hard problem of consciousness"
Post by: The E on December 24, 2017, 03:36:30 am
I laughed out loud at this:

Quote
This is the reason the qualia problem, the problem of intentionality, and other philosophical problems touching on human nature are so intractable.  Indeed, it is one reason many post-Cartesian philosophers have thought dualism unavoidable.  If you define “material” in such a way that irreducibly qualitative, semantic, and teleological features are excluded from matter, but also say that these features exist in the mind, then you have thereby made of the mind something immaterial.  Thus, Cartesian dualism was not some desperate rearguard action against the advance of modern science; on the contrary, it was the inevitable consequence of modern science (or, more precisely, the inevitable consequence of regarding modern science as giving us an exhaustive account of matter).

This is quite a hilarious (and somewhat desperate) rearguard action to portray dualism as something necessary when it really isn't.

More hilarious wrongness:
Quote
However, many people imagine that consciousness will yield to scientific inquiry in precisely the way that other difficult problems have in the past. What, for instance, is the difference between a living system and a dead one? Insofar as the question of consciousness itself can be kept off the table, it seems that the difference is now reasonably clear to us. And yet, as late as 1932, the Scottish physiologist J.S. Haldane (father of J.B.S. Haldane) wrote:

"What intelligible account can the mechanistic theory of life give of the…recovery from disease and injuries? Simply none at all, except that these phenomena are so complex and strange that as yet we cannot understand them. It is exactly the same with the closely related phenomena of reproduction. We cannot by any stretch of the imagination conceive a delicate and complex mechanism which is capable, like a living organism, of reproducing itself indefinitely often."

There are quite a few misconceptions in this. I wonder if you can spot them.

Yeah, Ghyl, sorry to say but Dualism is still very thoroughly dead. Or rather, nothing in what you quoted here makes a compelling case that Dualism is necessary.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on December 24, 2017, 08:59:11 am
I laughed out loud at this:

Quote
This is the reason the qualia problem, the problem of intentionality, and other philosophical problems touching on human nature are so intractable.  Indeed, it is one reason many post-Cartesian philosophers have thought dualism unavoidable.  If you define “material” in such a way that irreducibly qualitative, semantic, and teleological features are excluded from matter, but also say that these features exist in the mind, then you have thereby made of the mind something immaterial.  Thus, Cartesian dualism was not some desperate rearguard action against the advance of modern science; on the contrary, it was the inevitable consequence of modern science (or, more precisely, the inevitable consequence of regarding modern science as giving us an exhaustive account of matter).

This is quite a hilarious (and somewhat desperate) rearguard action to portray dualism as something necessary when it really isn't.

The quoted paragraph is Feser's conclusion. His reasoning is in a previous paragraph:

Quote
Here’s why.  Keep in mind that Descartes, Newton, and the other founders of modern science essentially stipulated that nothing that would not fit their exclusively quantitative or “mathematicized” conception of matter would be allowed to count as part of a “scientific” explanation.  Now to common sense, the world is filled with irreducibly qualitative features -- colors, sounds, odors, tastes, heat and cold -- and with purposes and meanings.  None of this can be analyzed in quantitative terms.  To be sure, you can re-define color in terms of a surface’s reflection of light of certain wavelengths, sound in terms of compression waves, heat and cold in terms of molecular motion, etc.  But that doesn’t capture what common sense means by color, sound, heat, cold, etc. -- the way red looks, the way an explosion sounds, the way heat feels, etc.  So, Descartes and Co. decided to treat these irreducibly qualitative features as projections of the mind.  The redness we see in a “Stop” sign, as common sense understands redness, does not actually exist in the sign itself but only as the quale of our conscious visual experience of the sign; the heat we attribute to the bathwater, as common sense understands heat, does not exist in the water itself but only in the “raw feel” that the high mean molecular kinetic energy of the water causes us to experience; meanings and purposes do not exist in external material objects but only in our minds, and we project these onto the world; and so forth.  Objectively there are only colorless, odorless, soundless, tasteless, meaningless particles in fields of force.

As a side note, I think Feser is referencing Newton's work in optics. Newton wrote about the mechanisms of vision (light, the retina, the optic nerve, etc.), but purposely avoided the experience of vision:

Quote
But, to determine more absolutely, what Light is, after what manner refracted, and by what modes or actions it produceth in our minds the Phantasms of Colours, is not so easie.

More hilarious wrongness:
Quote
However, many people imagine that consciousness will yield to scientific inquiry in precisely the way that other difficult problems have in the past. What, for instance, is the difference between a living system and a dead one? Insofar as the question of consciousness itself can be kept off the table, it seems that the difference is now reasonably clear to us. And yet, as late as 1932, the Scottish physiologist J.S. Haldane (father of J.B.S. Haldane) wrote:

"What intelligible account can the mechanistic theory of life give of the…recovery from disease and injuries? Simply none at all, except that these phenomena are so complex and strange that as yet we cannot understand them. It is exactly the same with the closely related phenomena of reproduction. We cannot by any stretch of the imagination conceive a delicate and complex mechanism which is capable, like a living organism, of reproducing itself indefinitely often."

There are quite a few misconceptions in this. I wonder if you can spot them.

You realize that Harris is describing a triumph of science, right? Vitalists believed that science could never explain life, but they were wrong. Harris is comparing vitalism with dualism.
Title: Re: The "hard problem of consciousness"
Post by: The E on December 24, 2017, 10:03:24 am
The quoted paragraph is Feser's conclusion. His reasoning is in a previous paragraph:

Quote
...snipped...

As a side note, I think Feser is referencing Newton's work in optics. Newton wrote about the mechanisms of vision (light, the retina, the optic nerve, etc.), but purposely avoided the experience of vision:

Quote
But, to determine more absolutely, what Light is, after what manner refracted, and by what modes or actions it produceth in our minds the Phantasms of Colours, is not so easie.

And that reasoning is bad.

It seems to me that he's saying that there can be no scientific definition of what "heat" or "red" or stuff like that means because we infuse those terms with meaning beyond mere physical attributes, and that this in turn means that there can be no scientific definition of "consciousness".

That's just plain stupid. We can define terms scientifically. We can measure the impact sensory perceptions have on the brain. There is no magic point at which a water temperature of 40+ degrees C suddenly turns into "Warm" and is thus imbued with metaphysical aspects. The point here is that, again, none of these writings make a clear case that dualism is a necessary hypothesis without which we cannot explain consciousness. They assign undue meaning to what is, to my mind, just the brain adding metadata to sensory perceptions based on previous experiences.


More hilarious wrongness:
Quote
However, many people imagine that consciousness will yield to scientific inquiry in precisely the way that other difficult problems have in the past. What, for instance, is the difference between a living system and a dead one? Insofar as the question of consciousness itself can be kept off the table, it seems that the difference is now reasonably clear to us. And yet, as late as 1932, the Scottish physiologist J.S. Haldane (father of J.B.S. Haldane) wrote:

"What intelligible account can the mechanistic theory of life give of the…recovery from disease and injuries? Simply none at all, except that these phenomena are so complex and strange that as yet we cannot understand them. It is exactly the same with the closely related phenomena of reproduction. We cannot by any stretch of the imagination conceive a delicate and complex mechanism which is capable, like a living organism, of reproducing itself indefinitely often."

There are quite a few misconceptions in this. I wonder if you can spot them.

You realize that Harris is describing a triumph of science, right? Vitalists believed that science could never explain life, but they were wrong. Harris is comparing vitalism with dualism.

I do. I also realize that he's utterly wrong here:
Quote
But couldn’t a mature neuroscience nevertheless offer a proper explanation of human consciousness in terms of its underlying brain processes? We have reasons to believe that reductions of this sort are neither possible nor conceptually coherent. Nothing about a brain, studied at any scale (spatial or temporal), even suggests that it might harbor consciousness. Nothing about human behavior, or language, or culture, demonstrates that these products are mediated by subjectivity. We simply know that they are—a fact that we appreciate in ourselves directly and in others by analogy.
Title: Re: The "hard problem of consciousness"
Post by: GhylTarvoke on December 24, 2017, 09:44:38 pm
It seems to me that he's saying that there can be no scientific definition of what "heat" or "red" or stuff like that means because we infuse those terms with meaning beyond mere physical attributes, and that this in turn means that there can be no scientific definition of "consciousness".

That's just plain stupid. We can define terms scientifically. We can measure the impact sensory perceptions have on the brain. There is no magic point at which a water temperature of 40+ degrees C suddenly turns into "Warm" and is thus imbued with metaphysical aspects. The point here is that, again, none of these writings make a clear case that dualism is a necessary hypothesis without which we cannot explain consciousness. They assign undue meaning to what is, to my mind, just the brain adding metadata to sensory perceptions based on previous experiences.

It's necessary to clarify what you mean by "red". If you define it "in terms of a surface’s reflection of light of certain wavelengths", then you're speaking objectively, using the language of science. Your experience of redness, on the other hand, is a subjective phenomenon. You know it exists, but you have no way of comparing it with other people's experiences of redness. (In fact - going down the rabbit holes of solipsism and the simulation hypothesis - you can't even verify that other people have experiences of redness.) Because this particular aspect of redness cannot be analyzed or verified objectively, science sweeps it under "the rug of the mind", along with other subjective phenomena.
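To make the objective half of that distinction concrete, here's a minimal sketch in Python (the 620-750 nm band is just the conventional one, picked for illustration; it isn't from Feser or Harris):

Code:
# The "scientific" definition of red: a statement about wavelengths.
def is_red(wavelength_nm: float) -> bool:
    """Classify light as 'red' by wavelength alone."""
    return 620.0 <= wavelength_nm <= 750.0

print(is_red(650.0))  # True
print(is_red(480.0))  # False

# Note what never appears in this definition: the way red *looks*.
# The function is complete as physics, yet it says nothing about
# the subjective experience of redness.

That is the entire content of the objective definition; the quale appears nowhere in it.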

I also realize that he's utterly wrong here:
Quote
But couldn’t a mature neuroscience nevertheless offer a proper explanation of human consciousness in terms of its underlying brain processes? We have reasons to believe that reductions of this sort are neither possible nor conceptually coherent. Nothing about a brain, studied at any scale (spatial or temporal), even suggests that it might harbor consciousness. Nothing about human behavior, or language, or culture, demonstrates that these products are mediated by subjectivity. We simply know that they are—a fact that we appreciate in ourselves directly and in others by analogy.

See above. You can only assume that the subjective aspect of redness exists in other people by analogy. And when you consider other lifeforms, even analogy breaks down. From part one of Harris' article:

Quote
If we look for consciousness in the physical world, all we find are increasingly complex systems giving rise to increasingly complex behavior—which may or may not be attended by consciousness.  The fact that the behavior of our fellow human beings persuades us that they are (more or less) conscious does not get us any closer to linking consciousness to physical events.  Is a starfish conscious? A scientific account of the emergence of consciousness would answer this question. And it seems clear that we will not make any progress by drawing analogies between starfish behavior and our own. It is only in the presence of animals sufficiently like ourselves that our intuitions about (and attributions of) consciousness begin to crystallize. Is there “something that it is like” to be a cocker spaniel? Does it feel its pains and pleasures? Surely it must. How do we know? Behavior, analogy, parsimony.

To be clear, I don't think Harris is a dualist. I think his view is epistemological. Consciousness isn't necessarily special in a cosmic sense, but it might lie beyond the grasp of the scientific method. (Like, for instance, questions about the nature of reality.)

Quote
Perhaps the emergence of consciousness is simply incomprehensible in human terms. Every chain of explanation must end somewhere—generally with a brute fact that neglects to explain itself. Consciousness might represent a terminus of this sort.
Title: Re: The "hard problem of consciousness"
Post by: Mikes on December 27, 2017, 04:30:12 am
It's necessary to clarify what you mean by "red". If you define it "in terms of a surface’s reflection of light of certain wavelengths", then you're speaking objectively, using the language of science. Your experience of redness, on the other hand, is a subjective phenomenon. You know it exists, but you have no way of comparing it with other people's experiences of redness. (In fact - going down the rabbit holes of solipsism and the simulation hypothesis - you can't even verify that other people have experiences of redness.) Because this particular aspect of redness cannot be analyzed or verified objectively, science sweeps it under "the rug of the mind", along with other subjective phenomena.

/sigh "Red" is nothing more than "learned behavior".
Wipe out the collective memories of the human race and start fresh and the label "red" becomes meaningless.
But the phenomenon that is being described as red will still exist. Maybe it will get a different name like "rot" or "blau" or "black", when people start communicating with each other again and decide to give a name to what they perceive.

Language is also learned behavior. Now try enjoying your consciousness for a while without the use of language. Try having some conscious thoughts without the use of language... Notice something? Now make an informed guess about what that tells us about the nature of consciousness.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on January 08, 2018, 11:26:24 am
I do. I also realize that he's utterly wrong here:
Quote
But couldn’t a mature neuroscience nevertheless offer a proper explanation of human consciousness in terms of its underlying brain processes? We have reasons to believe that reductions of this sort are neither possible nor conceptually coherent. Nothing about a brain, studied at any scale (spatial or temporal), even suggests that it might harbor consciousness. Nothing about human behavior, or language, or culture, demonstrates that these products are mediated by subjectivity. We simply know that they are—a fact that we appreciate in ourselves directly and in others by analogy.

I had previously commented on this thread, but my phone went sour on me and deleted everything and I gave up. I don't even remember what I was going to say.

Nevertheless, let me jump in here. What Harris is pointing at here is something that was also noticed by Dennett (despite himself), who cleverly dubbed it the zimbo problem. There's nothing we can scientifically detect that won't be exactly like something that exists without consciousness. That is to say, we can posit beings that are just like humans in every conceivable way, except they are not conscious like we are. Such beings would be a bit like zombies, the sort of Chinese-room monsters Peter Watts is obsessed with. We may well define and know all about their inner machinations, but seemingly nothing in that analysis can point towards the very experience we all have while living our own lives, and thus such beings would be totally indistinguishable from us human beings.

And yet, we wouldn't even call such beings alive if we knew the "lights were off" within their brains, and that all that was going on was a bunch of hacks on top of hacks (Chinese rooms).

The problem isn't even whether consciousness exists in such zimbo brains, but the fundamental epistemological inability of science to ever detect it at all.

This isn't usually a problem, because we just assume that brains harbor consciousness, period. But we're about (within a hundred years) to bring forth highly intelligent artificial brains into the world. We should know whether we're also bringing some form of consciousness to life through some emergent quality that we just don't understand, because if we are, then we run the risk of committing incredible crimes against these new forms of life. There's also the opposite problem: being deceived into thinking such artificial brains *do* have consciousness when they don't, which would bring its own terrible unintended consequences into our world.
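To illustrate the epistemological point, here's a toy sketch (everything in it is made up for illustration; it's not a claim about how real AI works):

Code:
# A "zimbo": behaviorally plausible answers produced by blind lookup.
RULEBOOK = {
    "Does it hurt?": "Yes, terribly.",
    "Are you conscious?": "Of course I am.",
}

def zimbo_reply(question: str) -> str:
    """Answer by rote lookup; no understanding, no experience."""
    return RULEBOOK.get(question, "I'd rather not say.")

print(zimbo_reply("Are you conscious?"))  # "Of course I am."

# Any behavioral test only ever sees the answers. Whether there is
# "something it is like" to be the system never shows up in the data.

Scale the rulebook up as far as you like; the observable data never settles whether the lights are on inside.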
Title: Re: The "hard problem of consciousness"
Post by: The E on January 08, 2018, 11:49:24 am
But do Zimbos exist in reality?
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on January 08, 2018, 11:54:17 am
Well, that depends on what kind of language game we're using, doesn't it? Harris' point is that the language of science can only talk about zimbos. It cannot really talk about people.
Title: Re: The "hard problem of consciousness"
Post by: The E on January 08, 2018, 12:27:31 pm
But that's based on the assumption that there actually is a difference between a complete human being and a complete human being as described by science.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on January 08, 2018, 12:44:54 pm
Well, that goes without saying. To say otherwise is to proclaim that science has established every single fact about humans, which is obviously false.

The relevant question here is whether science can ever, not only in the future but in principle, resolve the question of consciousness itself (the experience of qualia and all of that), or whether there's some kind of inherent ontological barrier that prevents science from dealing with it.

I'm completely on board with trying to (and I think there are some future prospects of doing so), but I do think this hasn't been resolved yet.

This becomes patently obvious when you distill what Dennett (the best promoter of the collapse of this entire idea) says about these subjects. He just dismisses all of this like you do, saying something to the effect of "if you can't differentiate between a zombie and a human being, what does it matter", or saying that, obviously, the brain is a bunch of hacks anyway, and consciousness is the equivalent of a user interface, filled with simplified experiences that hide what the brain is actually doing. The problem always lies in the wording here. Who is being "fooled" here? Whose experience is being "simplified"?

We do have to talk about this being that is being fooled about having free will and so on. To just ignore it is precisely to admit that science cannot talk about it.
Title: Re: The "hard problem of consciousness"
Post by: Luis Dias on January 09, 2018, 06:11:00 am
Sam Harris just posted his newest podcast episode, wherein he tackles this very problem with Anil Seth. It's a great conversation/debate on the subject. I highly recommend it.

https://soundcloud.com/samharrisorg/113-consciousness-and-the-self