There is a serious linguistic problem with the discussion you wish to have. There are those who understand what Nagel is alluding to with his description of conscious experience as the "what-it-is-like-to-be-me", and those who are blind to it, and members of these two groups have no common ground that would permit such a debate. Members of the former group have had the experience of being perplexed after a kind of reversal of attention back upon itself, leading them to 'notice' something that has not been noticed by members of the latter group. This perplexity has an analogy with the principal question of metaphysics -- "why is there anything rather than nothing at all?" -- and in this sense: that question normally arises in respect of what we might want to refer to as the "objective world", whereas for members of the former group the question also arises in respect of what we might want to refer to as the "subjective world". Now, language has evolved in the objective world and has utility therein, but this idea of p-consciousness (of the synchronic entirety of its constituents) has no utility in the objective world, and so language can gain no real traction upon it. Consequently members of the latter group, who are blind to the issue, are also blind to their error in any claims they make about it. There can be no common ground between these two groups, one claiming that the other is seeing something that isn't there, the other claiming that the first is not seeing something that is there.
It is tempting to note that all sorts of puzzling phenomena have eventually turned out to be explainable in physical terms. But each of these was a problem about the observable behavior of physical objects, coming down to problems in the explanation of structures and functions. Because of this, these phenomena have always been the kind of thing that a physical account might explain, even if at some points there have been good reasons to suspect that no such explanation would be forthcoming. The tempting induction from these cases fails in the case of consciousness, which is not a problem about physical structures and functions. The problem of consciousness is puzzling in an entirely different way. An analysis of the problem shows us that conscious experience is just not the kind of thing that a wholly reductive account could succeed in explaining.
Here we can exploit an idea that was set out by Bertrand Russell (1927), and which has been developed in recent years by Grover Maxwell (1978) and Michael Lockwood (1989). This is the idea that physics characterizes its basic entities only extrinsically, in terms of their causes and effects, and leaves their intrinsic nature unspecified. For everything that physics tells us about a particle, for example, it might as well just be a bundle of causal dispositions; we know nothing of the entity that carries those dispositions. The same goes for fundamental properties, such as mass and charge: ultimately, these are complex dispositional properties (to have mass is to resist acceleration in a certain way, and so on). But whenever one has a causal disposition, one can ask about the categorical basis of that disposition: that is, what is the entity that is doing the causing?
One might try to resist this question by saying that the world contains only dispositions. But this leads to a very odd view of the world indeed, with a vast amount of causation and no entities for all this causation to relate! It seems to make the fundamental properties and particles into empty placeholders, in the same way as the psychon above, and thus seems to free the world of any substance at all. It is easy to overlook this problem in the way we think about physics from day to day, given all the rich details of the mathematical structure that physical theory provides; but as Stephen Hawking (1988) has noted, physical theory says nothing about what puts the "fire" into the equations and grounds the reality that these structures describe. The idea of a world of "pure structure" or of "pure causation" has a certain attraction, but it is not at all clear that it is coherent.
So we have two questions: (1) what are the intrinsic properties underlying physical reality?; and (2) where do the intrinsic properties of experience fit into the natural order? Russell's insight, developed by Maxwell and Lockwood, is that these two questions fit with each other remarkably well. Perhaps the intrinsic properties underlying physical dispositions are themselves experiential properties, or perhaps they are some sort of proto-experiential properties that together constitute conscious experience. This way, we locate experience inside the causal network that physics describes, rather than outside it as a dangler; and we locate it in a role that one might argue urgently needed to be filled. And importantly, we do this without violating the causal closure of the physical. The causal network itself has the same shape as ever; we have just colored in its nodes.
Everything is explained by physics! Deflationary monist compatibilism is the answer! Inventing magical thoughts to explain unnecessary intuitions is just an onanistic way to hide from the meat machine truth!
Most philosophical questions I find pointless and uninteresting. A few I find interesting, and I feel as if I can actually make progress thinking about them. And then there's one which I find interesting but impossible to even begin to touch in any sort of coherent manner. Guess which one that is. :(
The distinguished physicist Murph Goldberger was once asked by a television interviewer why he had never worked in this area. He answered that every time he decided to think about these questions, he would sit down, get out a clean piece of paper, sharpen his pencil - and then he just couldn't think of anything to put down.
Related homework: The Semantic Apocalypse (https://speculativeheresy.wordpress.com/2008/11/26/the-semantic-apocalypse/)
I've never seen anything that calls for a special, fundamental consciousness — except our desire to believe consciousness is important. As far as we can tell, qualia can be manipulated by manipulating the brain. There doesn't seem to be any parsimonious reason to look for anything else going on.
Here's the second aspect I'm open to. I'm open to an idea similar to a Dunning-Kruger effect, related to consciousness. We might just be too numbified by an unknown process that prevents us from understanding how we ourselves *really* work.
but I do think it would be handy to have some way to determine if something had the quality of "consciousness". but I think there are better words to use for this. Person-hood is a good one. Agency is also a nice $64 word for this thing.
Nothing has any 'intrinsic nature.' Everything is physical, and all traits are physical traits.
We know that whatever consciousness is, it is physical: we can infer this by altering the brain and altering consciousness.
I don't understand your point. Imagine an 'intrinsic trait' that has no physical properties and affects nothing in the world. Who cares? The only relevant properties of anything are those with causal effect.
We know that whatever consciousness is, it is physical: we can infer this by altering the brain and altering consciousness. We know that consciousness is the result of physical operations in the brain.
(i.e., let's bury Aristotle really deep into the ground and blast him with a nuke to make sure)
All properties are physical properties, and physicalism is a complete depiction of everything.
The reality is that nothing has any "physical nature". Everything is experiential, and all traits are experiential traits. There's no parsimonious reason to posit an external world; it's an extraneous assumption. The universe is more like a great thought than like a great machine.
No one could have been born as anyone else — their consciousness is their particular brain at a particular moment, and the illusion of continuity and 'selfness' is provided by memory.
Alter the brain and you alter the person.
Deal insult to the brain and you can change anything: you can alter a man's personality, erase his loves, trick him into confabulating a new identity and history, fool him into believing he controls something he doesn't, make him experience divine presence. There is no self except the moment-by-moment meat of the brain.
Experiential properties may supervene on physical properties.
I think experience comes last, not first. The physical universe is the most parsimonious explanation - discarding it leaves us with a bunch of useless and uninteresting ideas and no possible grounds for reasoning or evaluation.
I think the problem of qualia kind of answers itself - we see the world in the first person because we do. It's an anthropic issue. I think qualia probably emerge from mechanisms evolved to model social behavior in others - our self is a model used to integrate information and generate adaptive responses for the physical and social environment. But it may turn out to be something else, like a learning workspace or a way to resolve conflicting motor impulses. It's a question I'm interested in - but I don't think the answer will break the so-far universal monism of everything.
Before you go away, I'd like to reference Terrence Deacon's work regarding this attempt to bridge what appears to be a difficult gap (which Battuta dismisses as an easy problem, well, good for him).
His idea is that the Self is an emergent property that stems from symbiotic relationships between certain pattern structures in the physical world.
I'm on my phone, so I have trouble easily linking you stuff. But do Google it. There's a good interview on YouTube with him talking about his thesis for over half an hour.
ITT: arrangements of molecules with the ability to feel special
Doesn't this sum up the entire human race?
And cats?
:p
Why does #4 cause the descendant fork to experience agony? (Honest question; I don't understand it).
It's the untestability of this idea that gives rise to the horror of it all.
Or, alternatively, dispels the horror. Something that no one will ever know about or experience the consequences of can't be horrible, and blinking out of existence in a teleporter isn't any different.
So if you are going to die but you don't know anyway, it's no biggie. I mean, if this isn't the endgame of misanthropy and nihilism, I don't know what is.
"Of course it's no biggie", well if you wanted to convince me you actually don't care for your life, you surely did a tremendous job.
I guess your argument is 'even if there is a vanishing chance a philosophical teleporter would kill you, it's not worth the risk.' My reply is 'any framework in which the teleporter kills also makes day to day life tremendously more risky and fatal, which is a nonsensical outcome.'
...Luis is using terms like "Edgelordyness" and "Misanthropy", both of which are terms used to describe a lack of empathy towards other human beings in the person the term is applied to.
For those of you who would use #3, but not #4: would you be okay with #4 if you took a gun into the transporter, then blew your brains out after the scan?
I meant "blowing your brains out" to be an instantaneous death, so there would be no physical pain. If you mean the psychological pain of committing suicide after the scan, what if (before entering the transporter) you rig a gun to blow your brains out after the scan?
Our understanding of consciousness is not complete, but it is bounded. We know the explanation falls within a certain territory. The territory has parameters: it is monist, it is physical, it is causally closed. These parameters speak to the risk of teleportation. It's not like claiming that thunder must come from God, because what else could it be; it's like claiming that thunder must be a physical event, because we have no evidence for nonphysical events.
The objection to the engineering parameters of the teleporter is valid, which is why I've been careful to specify a philosophical teleporter: arbitrarily precise reconstruction of the physical body.
Internalizing the world-view that consciousnesses are 'interchangeable and material' is hardly a formula for suicide. It's not a disturbing attitude! It loses you nothing. You are a subprocess of the universe, computing yourself forward. Your own behavior is contingent on your past experiences, your knowledge, and beliefs. If anything it's humanizing.
The fact that my statements seem controversial or misanthropic is, I think, evidence of how deeply our society is still predicated on illusions about what we are. The notion that we are only a pattern of information, stored in meat and endlessly mutable, is somehow radical and depressing. Yet it requires only that we give up things we never actually had at all!
Nonono. The idea that you shouldn't care if you live or not has nothing to do with how one values the contents of subjective experience; it's precisely because only subjective experience matters that the concern becomes irrelevant.
The question of whether you live or not is a question of whether your subjective experience exists at all. I hold that it would be paradoxical to care about it ending, because the only way you can make a value/preference judgement is from within that subjective viewpoint. To prefer existence over non-existence would require being able to compare them from some kind of objective viewpoint, which you cannot do, making "I prefer to exist" something of an oxymoron.
So, there is a disregard for the existence of subjective experience, but not for what subjective experience is like. You can call that misanthropic (although the word is obviously unnecessarily anthropocentric) if you want, but it seems like a simplistic and misleading characterization.
I am a human being and I stand for my initial answer: I would NOT enter those teleporters, not in a million years.
Joshua, I won't be tone moderated by you, so kindly drop it. The fact that you misinterpreted those two words as the things you said is proof enough that you are incapable of performing that duty anyway. I will hereby explain to you what I actually did. By "edgelording" I mean that I do believe Battuta is trying too much to be "edgy" in his philosophical conclusions. That is, I sensed (decreasingly so, I must add) that he was just placing his most controversial statements out there and keeping his more moderate caveats to himself. The misanthropic note is a purely philosophical critique. The idea that you shouldn't care if you are about to die or not is necessarily predicated in a disregard for human subjective experience, which is all we have. Do I believe zookeeper is a misanthrope? No, I don't know enough about him to say that, but his idea clearly was.
Nor is there a risk of reductio ad absurdum, because it's trivial to note that NOT all mental states are equivalent — what's vital is the preservation of information, a logical causal pathway. The teleporter cannot kill you and substitute someone completely different. That would be a critical loss of information.
The 'acceptable delta' (in answer to both above posts) is the preservation of information, namely retaining the ability of a brainstate to propagate itself forward by causal rules.
Vaporizing your brain introduces entropy into the system, destroying it: the brainstate cannot copy itself forward without becoming very lossy. Teleporting your brain may vaporize it, but no information is lost: the brainstate propagates through the teleporter.
Sure, it has to survive the process, but that is a small detail we all agree on. What is more important is that this criterion fails to explain why an output like a "horse" would be unacceptable. If the only criterion is survival of the end product, we could even substitute "you" with a copy of anyone else.
Can you elaborate on what you do not understand?
I apologize for harping on this, but I'm still confused. How perfect does the copy need to be to allow your brainstate to propagate, and how do teleportation imperfections differ from day-to-day stimuli?
The human brain can suffer some pretty severe traumas without breaking the continuity of the "self" (though the trauma may dramatically change the character of the person). Or, we might say that the continuity of the self is an illusion, produced by the pattern of information propagating forward in time. This pattern changes with every stimulus. As long as the pattern maintains some essential history of your world line, your "memory" and "thoughts", you are still "you".
The hypothetical teleporter here is assumed to scan and reconstruct the particles of your body (whether by using the very same particles or not) flawlessly. This is why some are arguing that the transporter is safer than ordinary day-to-day life.
By what sensible causal rule could a person's brain spontaneously become a horse's? You're trying to argue that a catastrophic failure is somehow equivalent.
The teleporter is safe because it allows your brain state to propagate forward by its ordinary causal rules. It does not change or distort the information present. It is wholly unlike the examples you're floating.
You become some other you by the introduction of stimuli which are processed according to the brain's logic.
Death is trivially defined: irrecoverable loss of information. If the brain state isn't lost and can keep firing itself forward on its own power, you're not dead.
Luis it's important to realize that the end product Battuta refers to isn't "the consciousness of a human being". It's "You". You yourself. An arbitrary human being that undergoes the process and remains the same arbitrary human being, not another human being. If the end result was not identical in all ways including thought and memory, it is not a real teleportation. And if it is identical in all ways including thought and memory, then "You" are still alive, and arguably safer than any single other instance of your entire life.
Is anyone following the all delta is the same argument and able to crumple it down for my greasy brain
I think Luis is saying that, since we're always changing, information is never preserved. So it's no big deal if the transporter reassembles us differently.
Okay, but what if the reconstruction isn't flawless? What if the copy is different from the original - but only by a single atom?
In this case (assuming that the flawless teleporter doesn't kill you), saying that the flawed teleporter kills you is even more absurd than saying that you die from moment to moment; the teleporter's imperfection is even more harmless than the effects of day-to-day stimuli. But now we reach a contradiction, because proceeding by induction, the teleporter can change every atom in your body without killing you, which is also absurd. The conclusion that our original assumption (the flawless teleporter doesn't kill you) was wrong seems inescapable.
I understood Luis to be making one of the following arguments:
I can't follow you at all. I feel like my (phone, alas) response would just be a series of emptyquotes of past statements in the thread.
Your stance reminds me of arguments for theism or quantum karma - we don't know exactly how it works, ergo magic! The delta argument in particular I find confounding, since you seem to be attacking yourself. If the teleporter introduces less drift than day to day life, how is saying 'what if it introduced MORE drift? All drift is the same!' in any way logical or useful to your stance?
Calling death what it is is not 'edgelording'. Death is only death when it leads to irretrievable loss of the brainstate's ability to copy forward under its own power. That's the only sensible idea of death we have.
These are not just 'beliefs', they are the only coherent hypotheses given the evidence we have. The Subjective You is physical because there is nothing but the physical. Wherever the Object You arises, so does the Subject. No other hypothesis has any support at all.
Just 'well subjectivity might work differently than the entire rest of the cosmos, for reasons we cannot begin to guess and which fall outside physicalism in some unanticipated way.'
IOW, there are more possibilities than either "Spirits!" or "Consciousness is just a pattern of information". I can't make it any simpler than this, so you have to work this through on your end, I'm sorry.
TL;DR: to say that the brain is dead because it can't "copy" forward is as silly as saying that muscles are overheated because they need to vent some "steam". It's a kind of metaphor that is chronocentric to the fads of our age and does not reflect the true realities of our brains and consciousnesses.
For instance, imagine that you can copy yourself into a thousand Battutas. Do you believe that your own consciousness will be "transferred" to any one of those? Or will it remain inside of yourself? And if you are still "inside of yourself", the original, do you see yourself as "interchangeable" between all those Battutas, or will you still prefer your life over all of those other Battutas?
For instance, take this scenario: imagine 9 copies of Battuta are instantly created, resulting in 10 Battutas. However, because of a strange sequence of events, only one Battuta will survive, and you are to choose which one. This choice is not to be discussed between you and the other Battutas. You know they are exactly like you, but you alone are to choose (and are free to do so) whether *you* are to survive or some other Battuta is to survive. Once you decide, 9 Battutas are immediately vaporized through a process that takes exactly 2 Planck-o-seconds.
Now, according to your thesis, it does not matter *which* Battuta survives. So what will you do here? Notice how you feel about your decision. Beware of sensations of "generosity" or "fairness" or "altruism", for they already subtly imply a "sacrifice". But there is no sacrifice involved. This decision shouldn't be difficult at all and should be totally random: it's like breathing, after all, and aren't you doing *that* every single second?
Consciousness is epistemologically primary. It's the only thing we can be certain of; "everything else" may only exist as a constituent of consciousness. This decisively sets consciousness apart from "everything else", at the deepest possible level.
No invocation of computers or engineering required. All we need to do is point to the constraints that circumscribe all knowledge of the universe: the brain is physical, so consciousness must be physical. We only have one viable, coherent model with predictive power, placing consciousness not at an epistemologically primary stage but far down at the end of the train. Our own subjectivity is explained by this model.
Some materialists believe that with science, there is no more need for philosophy. I believe that they are very naive, and I don't see anyone expressing that opinion here.
OK, saying that the Lego construct died is reasonable. But in ethics, you have to apply human concepts at some point. Do you just take "human lives are worth more than Lego lives" as an axiom?
I wouldn't volunteer for any machine that changes my brain. From a philosophical standpoint, however, I doubt it would kill me.
Yet you allow your brain to change every day. You go to sleep or get anesthetized and trust that the matter of your brain will recreate your subjectivity. What's the difference?
The 'choice' argument is flimsy. You think day to day life is as dangerous as teleporters, you just choose not to use the teleporter because you're helplessly resigned to constantly undergoing the same process?
Remember, in physicalism, saying 'I am the sum of my past configurations' is exactly the same as 'there is a causal connection between my past brainstates and my current one.'
Unless you are a dualist, the past has no influence on the present except for the information transmitted forward by causal rules.
If your objection is that dualist models of consciousness have no additional predictive power, then I agree - but this objection also applies to all of metaphysics and philosophy. There's nothing wrong with restricting oneself to topics of practical value, but it's the scientist's mindset, not the philosopher's mindset.
You make the scientific observation that physicalism is all we need to make predictions. I agree. Nevertheless, physicalism has no handle on consciousness, for fundamental reasons.
The third level is the source of all predictions, while much of philosophy concerns the first and second.
This is physicalism. It also neatly predicts the mechanisms underpinning consciousness.
Is using acceleration to conclude anything about velocity or position a logical error (albeit with a few assumed or known things, much like this thread)? I'm on a phone, so I trust the parallel is apparent.
So the objection becomes "It is because it is and it isn't because it isn't"?
There is no point to discussing philosophy that holds as a core tenet that it may not be defined. It is useless, both to us and to the world.
GhylTarvoke, can you prove you're conscious?
Saying that nothing in life is provable beyond "I exist" doesn't explain anything.
Why should our own minds be any more certain than anything else? "We" as people may not even exist in the way we are inclined to think that we do.
If we're ever able to fully model consciousness, it will still be totally unexplained by your argument, because it's "first level."
How are you even defining the word?
We can build models of consciousness, predict how consciousness would change if we alter the brain, and test those predictions. Consciousness is nothing special.
If being a brain in a vat or a digital simulation has no consequences, in what way is it real? Something that does nothing and cannot be known doesn't exist. It's causally decoupled from us.
Consciousness is already 'solved,' in that we know where it is and what makes it happen. We can alter our own consciousness in the first person using 'third order' effects, and we do it every day. Philosophical examination of consciousness is doomed: it depends on the invention of a problem where in fact there is only a clear and inevitable identity.
It's really easy to say something is unsolvable to physical science when it has no form, definition or requirements. It just "is true."
"I can't prove it to you. I don't need to prove it to myself, because it's a brute fact, i.e. true by default - just as my own existence is true by default."
There's no reason for those things to be true by default.
barring dualism
This is the crux of the problem. If we assume monism/physicalism, I think your claims follow straightforwardly.
Well, it's also easy to pretend to be a P-zombie and claim to not understand what the other person is referring to, simply because it cannot be defined or demonstrated. Which sure seems like what's been going on here (among other things) for a few pages. It's one thing to argue about qualia, but what some people here seem to do is deny its existence (in the "why is the world perceived through this brain and not some other" sense) just because they can.
I don't remotely deny the experience of qualia, I just think they're trivially explicable and clearly identified with physical processes - we turn them off every night!
Dualism is fantasy. There are no grounds for conversation if you're a dualist.
Chalmers also denies that there is anything mystical about positing consciousness as fundamental. He compares it to positing gravity as fundamental.
I could develop schizophrenia, and I would still have a sense of being "me" and I would imagine other people - but the sense of being "me" would be incorrect, because, at least in late stages of the disease, I would act like a completely different person.
We've been over this already.
The part where we "went over this" consisted of neither the General nor myself being particularly impressed with your interpretation, which seems to rely on its own ignorance to function.
Please note, that's not a reflection on you, nor intended to be a negative reflection on your personal interpretation of the situation. On one hand we have physicalism, which we can discuss and make predictions about. On the other, we have dualism, which has no room for predictive or experimental processes. One of these things is useful to this discussion.
Right. Dualism seems as unnecessary as positing that consciousness is piped out to another dimension, illustrated on paper, and piped back. What does that add?
Again, you're arguing from a practical/utilitarian/predictive standpoint, which ignores virtually every branch of philosophy and hence has no bearing on philosophical issues.
What reason do we have to begin to believe it? Nothing whatsoever points to it. It's an idea for the sake of having an idea. It is based on nothing and explains nothing.
Qualia are physical. That seems to me to be pretty straight to the point.
That said, one benefit of accepting dualism is that we won't be waiting endlessly for neuroscience to give us an answer.
The existence of consciousness, yes. But the mechanism for consciousness is most certainly not as abstract as you're trying to say it is. Human brains generate consciousness in the patterns of atoms and molecules that make them up. This consciousness is mutable; it can be changed by experience, it can be surgically altered (however barbaric some of those surgeries are), it is wholly rationally explainable as a result of a biological process.
There's an issue that hasn't yet been raised though: practical experience. I fall asleep every night, and I wake up the next morning with a high degree of confidence that I'm the same me who went to sleep the night before. One can argue that I don't have objective evidence of such, and that's probably true, but I do remember what I did the day before, and the day before that, and so on, and for general purposes that's good enough for me to get out of bed and on with my day. Sure, I know that something can biologically break and throw the whole process off, but so far (knock on wood) it hasn't, and I feel pretty good about that track record.
What we should all REALLY be afraid of, in terms of hard problems, is cosmology!
Again, you're inventing a question so that you can propose an answer. We have not only the button and the light, but all the tracery of circuits in between; we have the power grid and the transformers; we have all the physics of electrons and resistors; we cannot yet build ourselves a button-light circuit, but we see nothing intrinsically unachievable about it.
There is no distinction between 'consciousness', 'metacognition', 'self-awareness', or anything else. These are all one and the same. Qualia are the first-person experience of these processes, nothing more.
The mechanism is the why: we experience awareness in the first person because we are brains with mechanisms for generating first-person experience.
There are no levels of concepts. We can propose any system of truth we like, beginning with our own self-awareness. All of them compete in the same arena: can they use our perceptions to explain who we are, where we come from, and what we do?
Only one system is internally consistent, powerful, parsimonious, and useful. All others inevitably undercut themselves or spiral around into needlessness. Models like 'I am a Boltzmann brain' or 'I exist in a simulation' either produce the same results without that complication, or fizzle out into solipsism.
Physicalism explains qualia. Qualia are not the first level, they are the last.
Consciousness is not a fundamental feature of the universe. Chalmers' notions of 'panpsychophysics' are, mildly put, masturbatory. They make no predictions, offer no solutions, explain nothing: they are as relevant and useful to the problem of consciousness as the Tooth Fairy. The idea of the ontologically autonomous consciousness is fatally testable: if we can account for everything happening in the brain, and if the brain is causing mental states, the ontologically autonomous 'consciousness' has no effect, it is causally decoupled from the universe, it is nothing, it does nothing, it does not exist.
The brain is as mysterious as a billiard table with a little quantum fuzz.
An organism with a human brain cannot be a p-zombie. It must have qualia. Qualia are created by the brain.
The deflationary solution is the solution.
Quote: "What we should all REALLY be afraid of, in terms of hard problems, is cosmology!"
This is absurd, and has already been addressed.
Quote: "The brain is as mysterious as a billiard table with a little quantum fuzz."
The brain, while incredibly complex, is not what's being discussed. It does appear to be correlated with consciousness.
Quote: "This consciousness is mutable; it can be changed by experience, it can be surgically altered (however barbaric some of those surgeries are), it is wholly rationally explainable as a result of a biological process."
How do you explain the changes to consciousness that result from a lobotomy?
I did read your story, and honestly what I mostly took away from it is that one should be careful about eating spicy foods before falling asleep. But in all seriousness, the brain is a complex and often self-contradicting thing, and **** happens. I can't count how many times I've walked into a room to get something and then stood there dumbfounded, because I have no ****ing idea what I initially walked in there to get. But that single memory is no more all of "me" than the strangely-aborted dream you had was all of "you." Overall, my me-ness seems to do a decent enough job of perpetuating itself, and I have no reason to believe that a process that involves taking a snapshot of the brain's physicality plus quantum state plus what have you and then recreating it ex-situ would do any better. In fact, re: the aforementioned Trek episodes, I have several reasons to believe that it could quite easily do a demonstrably worse job. Better the devil you know, right?
Read my short story on page 6. It is a true story!
You can decide if my waking self died and was replaced by a new self. You can decide if this implies all of us die every night and are replaced with people who only have the memories of our previous selves. If you are not disquieted by sleep, then you have no reason to fear the teleporter.
Quote: "The existence of consciousness, yes. But the mechanism for consciousness is most certainly not as abstract as you're trying to say it is. [...]"
Addressed here.
Take all that **** out of the absurd GenDisc quote stacks I thought we'd left behind years ago and make it something readable.
All I can get out of it is that you've retreated to an argument that we have some trait which does nothing, is nothing, has no effect, and is unrelated to what we're discussing (the brain, which is our selves, which is consciousness).
Qualia are physical processes in the brain. There are only physical properties. Dualism offers no predictive power: it postulates a ghostly presence with no causal consequences, no connection to anything observable, no effects, and...no reason to exist.
Respond to arguments with something beyond 'you can't mean this, I don't understand it, you must have meant something different'.
I can't see how Mars' quote is relevant. In any case, the physical mechanism isn't the hard problem. It's the gulf between the mechanism and consciousness, as illustrated by the analogy.
It seems that your arguments always boil down to "it's not practical".
Dualism offers precisely as much predictive power as monism.
@watsisname: Dualism makes exactly the same predictions as physicalism, because it subsumes physicalism. Any prediction that physicalism makes, dualism also makes.
But that was watsisname's objection: If Theory A makes the exact same predictions as Theory B, and Theory A is more complicated (in this case, dualism being the more complicated one by introducing things that are immeasurable), why keep Theory A around?
I am indeed examining all of this very rigorously, "from all the yous involved", but I'm about to go out so I will develop the idea later. I'll just leave this hint if you want to guess where I'm going with it: a crime of murder does not necessitate that you prove your victim was aware it would die, or that it suffered in any way, or that it had "new memories". Killing people in their sleep is still murder, for example. But I'll be more specific, and, as you correctly put it, "very rigorous" later on.

Hm, then there's definitely something I didn't get in this discussion, sorry.
@Meneldil, nowhere in anything I wrote did I say that Consciousness is not copiable. That idea is completely orthogonal to my concerns.
There is a joke about how Consciousness is indeed something different from other "body parts", although it does not clarify the discussion I was having one bit (it's still funny though): we are all perfectly fine with having several of our organs transplanted, substituted by new ones. I will guess that you would mind having a brain transplant.
Physicalism tells us why we have first-person experiences, why we all experience the world in first person as a particular brain: because each brain's physical structure computes qualia. It's the simplest thing in the world. We are all ourselves.
Pluralitas non est ponenda sine necessitate (https://en.wikipedia.org/wiki/Occam's_razor).
Quote: "But that was watsisname's objection: If Theory A makes the exact same predictions as Theory B, and Theory A is more complicated (in this case, dualism being the more complicated one by introducing things that are immeasurable), why keep Theory A around?"
Quote: "Dualism offers no predictive power"
Quote: "it has no predictive power"
It's important to note that of those four features (consistency, power, parsimony, and utility), the only one that dualism might not have is parsimony. Crudely, dualism is physicalism + 1; it's internally consistent and generates exactly the same predictions as physicalism. In fact, dualism is also parsimonious: physicalism has no handle on first-level concepts, and ignores the manifest.
Quote: "Again, you're arguing from a practical/utilitarian/predictive standpoint, which ignores virtually every branch of philosophy and hence has no bearing on philosophical issues."
This is false to an extreme extent; just as an example, let's take what Wikipedia lists as branches of philosophy, as peculiar as it may be:
Aesthetics, Epistemology, Ethics, Legal philosophy, Logic, Metaphysics, Political philosophy, Social philosophy.
Ok, last attempt: physicalism explains why we have first person experiences, yet physicalism doesn't explain why the world is perceived and experienced through your brain, and not mine. As far as physicalism goes, the world ought to be perceived and experienced through both, but funnily enough, it's only perceived and experienced through your brain.
Subjective experience is like a non-Turing-complete language. It has limits and just can't do some things, no matter how straightforward and provable they might be in and of themselves. And the persistence of subjective experience in teleportation might be one of those things that a human mind cannot really grasp in first person, even when that does not prevent it from understanding and agreeing with it on a rational level. If an objection to safety of teleportation is a result of the former, then throwing more of the latter at it isn't going to do anything.
In order to accept Dualism as a superior theory to Physicalism, it must demonstrate greater predictive power. If, by your own admission, its predictive power is the same as that of Physicalism, how can we choose which theory is correct? Occam's Razor is the only criterion we have: In the absence of differences in predictive power, the simpler theory is to be preferred.
And again, dualism has no predictive power.
Again, dualism has exactly as much predictive power as physicalism.
Okay, now we're getting somewhere. Consciousness appears to have no causal effect. Does this mean that we should pretend it doesn't exist? No, because it does exist; it is a brute fact. This is the sense in which physicalism is incomplete. Physicalism is completely silent on consciousness, and can only address third-level correlates like "the brain".
Dualism predicts nothing.
Physicalism is the opposite of silent on consciousness. It shouts that there is no correlation. Consciousness is the brain.
Dualism, on the other hand, postulates an external, unmeasurable, unseeable agent that imbues a quality of consciousness onto an object. By definition, we cannot test this, we cannot measure this, there's no way to derive consistency for this. Therefore, dualism has to be rejected.
Imagine a device like Maxwell's Demon which tracks every particle in the brain, watched by a highly intelligent system that knows how to translate particle movements into a thought. This is a complete account of the brain. What's more, it leaves no room for consciousness as a causal force: if consciousness acted, it would betray itself in the changed motion of particles. If consciousness does not act, it is not real.
Quote: "Dualism predicts nothing."
This would imply that physicalism predicts nothing, because dualism predicts everything that physicalism does. The E's objection was that dualism predicts no more than physicalism, and this is something that I agree with.
Quote: "Physicalism is the opposite of silent on consciousness. It shouts that there is no correlation. Consciousness is the brain."
Consciousness is not the brain. The brain fails the definition of consciousness.
Quote: "Dualism, on the other hand, postulates an external, unmeasurable, unseeable agent that imbues a quality of consciousness onto an object. By definition, we cannot test this, we cannot measure this, there's no way to derive consistency for this. Therefore, dualism has to be rejected."
Consciousness is (in a sense) unmeasurable and unseeable, but not external. Nevertheless, it exists; this is a brute fact. Physicalism makes the mistake of either ignoring consciousness, or confusing it with something like "the brain".
You've posted all these statements over and over. I hate to post about posting, but you need to engage with the arguments being made against you.
If you think that dualism predicts everything physicalism does, make a prediction in which dualism is necessary and sufficient.
If you think that consciousness is not the brain, explain why altering the brain alters consciousness.
If you think that identifying consciousness as the brain is a mistake, explain why in a falsifiable fashion.
I don't mean to be a mega prick (it's 7:05 sorry) but I feel like we've been stuck there for a while.
Quote: "What's more, it leaves no room for consciousness as a causal force: if consciousness acted, it would betray itself in the changed motion of particles. If consciousness does not act, it is not real."
Causal effectiveness is not a requirement for existence.
I also hate to post about posting, but I've explained these statements over and over. You appear not to be listening.
Ironically this also seems like the final defeat of the 'teleporters are dangerous' argument. Even if you accept that consciousness is somehow an exceptional fact, the only absolutely certain and incontrovertible fact, then you now must concede the teleporter will be safe: barring dualism, you know that whoever comes out the other side has exactly the same first-person capacity to say 'I am conscious' and 'I exist'. The alternative is postulating that these capabilities somehow arise from nonphysical fantasy.
From the pre-teleport person's perspective, they only know that their body's going to be scanned. If they knew the actual terms of the deal, they would say 'wait, wait, if we do this, one of my causal descendants is going to die! They'll diverge and then terminate irretrievably! Sure, the other fork will survive, but I don't want my child subjectivity to experience that!'
From Fork A's perspective, on the far side of the wall, they've suddenly jumped into an identical room but without the presence of the scan operator. Weird!
From Fork B's perspective, they have been given a body scan, and now suddenly they're going to be murdered! They are causally divergent from Fork A, and their brainstate is about to be eradicated. It will not feed forward through ordinary causality or through a teleporter. It's just done, gone, leaving.
Quote: "Causal effectiveness is not a requirement for existence."
Why not? How can anything acausal exist? If it did, why would we care? This is the core of the problem: we can say that qualia and consciousness are acausal, floating out there, just being as brute facts...yet they seem to be subject to causality really hard, and even if consciousness is just an executive summary issued post facto, we're burning calories on it. Evolution tells us it's there for an adaptive purpose.
Broman, you can tell we're listening because we keep writing elaborate thought experiments to disprove them and try to push the conversation forward!
If dualism is unnecessary to make predictions about the universe, how can we distinguish it from the Newt Gingrich hypothesis? Don't they seem equally likely?
If there is a hard problem, if we don't know why altering the brain alters consciousness, why should we avoid the deflationary answer? Why don't we conclude that, hey, the brain is consciousness, it's in there, like a picture in a camera?
If the existence of consciousness is a brute fact, how do you answer the past several pages of people pointing out that the brute fact goes away and becomes untrue for big chunks of our lives? I don't know I'm conscious for ~30% of my existence and yet when I wake up my consciousness has changed. ****'s no prime mover.
I don't think there's a language barrier. I think you're using definitions as arguments. Restating a definition doesn't protect it.
What competing theories are there? Dualism, with its "what he said, but faeries did it!" approach?
Right now, to the best of our knowledge, physicalism is the only game in town. It's the only theory that is completely testable; believing in it or treating it as the absolute truth seems like a fairly safe bet in the absence of a complete or even partial disproval.
I know you were being facetious, but dualism is not "faeries did it". There are many types of dualism, some of which make no attempt to "explain" consciousness. What they all have in common is that they include consciousness - the most obvious, familiar, intimate phenomenon in our lives - and physicalism does not.
I must admit, I don't quite get what you are on about, Luis. Is it wrong to prefer one theory over the other? Wrong to argue for it? Wrong to assume a theory is fact when there are no indications that it can't be?
What's vital is that dualism can be attached to physicalism, as some adjunct. But dualism can never lead to physicalism: it has no explanatory power, it's useless. It's a rear-guard action.
Note that the Newt Gingrich hypothesis does explain consciousness to the same standards as dualism - it happens because it happens. Both fall short of physicalism because they cannot explain where consciousness (the most familiar fact in our lives) comes from, what it's for, or why it exists. Physicalism provides a simple and powerful solution. Consciousness is the brain.
It's very similar to the anthropic principle. We know we exist in a universe that permits existence. But we don't say 'well, we know we exist in a universe, that's a brute fact - so our existence must be somehow special and bicameral.' We work to know what kind of processes create universes, and how ours settled on these values. We know that our first-person existence occurs because we live in a universe that permits first person existence, but we don't treat that as exceptional.
But what does treating consciousness as some sort of emanation of the luminiferous aether allow us to do? Does it offer any insight into the formation of consciousness? Does it offer any insight into the how and why of consciousness? Does it allow us to make better medicine, better therapies?
The monist approach, at the very least, allows us to say that any or all of these things can be within our grasp, provided we keep studying.
Dualism, to me, always sounds like a manifestation of the god of the gaps. "There must be something special about us because we are capable of metacognition, there must be something unexplainable, unmeasurable, unquantifiable that makes us human"; that's what I hear. Let's just gloss over the fact that we've just introduced an acausal mechanism into a universe that (to the best of our knowledge) can be completely described in terms of a limited set of interactions between physical entities, because consciousness must be special.
To me, that's not acceptable. Certainly not very useful.
Treating consciousness as "an emanation of the luminiferous aether" (which isn't an accurate description of dualism, but never mind) is better than not addressing consciousness at all. A god of the gaps argument would claim that physicalism doesn't currently explain consciousness, and hence consciousness is special. The actual situation is much worse: physicalism doesn't even address consciousness.
We haven't "introduced an acausal mechanism into the universe". The phenomenon is inescapable.
Luis, I'm curious whether you think a set of really verbose quantum field equations describing a human body would be conscious (if worked out, say, by hand in an arbitrarily large colony of scriveners).
I guess I don't see a way to get around this fundamental disconnect right now:
As far as I am concerned, there's no sense talking about anything that doesn't contribute to a single, coherent, unified explanation of everything. 'The only thing we can be sure of is that we're conscious' is something I can agree with — but what do we do with that?
We look around, observe the universe, search out causal logic, and if we eventually arrive at a causal model that begins with nothing and ends up explaining us, including our consciousness, we say 'this model is useful and predictive, and unlike any other, it seems to provide an account that explains everything we see. We thus consider it to be a model of the universe, of which we are a subsystem.'
You seem to say, 'we can do that, but when we're done we shrug and say, well, we might also have ghostly dualist voodoo which has no detectable effect, is unnecessary to explain anything, and is not suggested by anything except our own cultural traditions and desire to believe we're special...but we can't disprove that consciousness is special somehow...'
Those of you who would argue that physicalism will never explain qualia must contend with the fact that it already has. Physicalism says that we each have our own subjectivity for the same reason cameras take pictures from their own perspectives: that's what the machine does. It monitors itself, models itself and others, manipulates symbols in a workspace, applies global states like 'emotion' to modify function in response to adaptive challenges, and generally does a lot of stuff which requires it to have a 'this is me' concept. The brain needs to be able to model itself from someone else's perspective, and to integrate conflicting motor responses, and to do all kinds of **** which, it turns out, we experience as subjectivity. How else would we experience those things? Like the man inside the Chinese Room, blindly manipulating symbols? We're not the man. We're the room.
A brain is a meat machine. You build the machine, you build everything in the brain. We are in our brains. We are meat.
Quote: "Luis, I'm curious whether you think a set of really verbose quantum field equations describing a human body would be conscious (if worked out, say, by hand in an arbitrarily large colony of scriveners)."
I'll take your silly Chinese Room bait after you answer my points regarding murdering Fork B, kthnks.
A) Can the destruction of "Fork B" be called "murder"? It seems that it can. If this person is not exterminated, he can continue existing in the world; if he is exterminated, the Cosmos accounts for one less Consciousness in it. There's blood on the ground, there's a killer, there is an energy discharged to destroy this fork. I don't see how it is not murder;
B) Is the murder of "Fork B" dependent on Fork B's foreknowledge that he is going to die? That is, does the determination that what happens to him is murder depend on the words the scan operator tells you? If the scan operator kills you silently, does that stop being murder? Clearly, that is ridiculous. People don't get out of jail sentences for killing people silently.
C) Is the murder of "Fork B" dependent on the speed of his death? If he is killed immediately after the scan, can we say that the operator is therefore innocent of his actions? This is absurd: if he does not kill the Fork, the Fork lives as the laws of physics allow him to. It is the action of the operator that causes the extermination of the Fork. The speed at which he does this is irrelevant: he could wait an hour, he could wait a minute, he could wait a second, he could wait a micro-second. The murder is murder nevertheless; a Consciousness *has* been eradicated nevertheless.
D) Something has been hinted about the "suffering" of Fork B. Oh, the humanity, so concerned with the "suffering". It's a total strawman. You can easily depict a scenario where this person was given a drug before being scanned that prevented his psychological suffering. The administration of this drug does not absolve anyone of the crime of murdering him.
Quote: "We haven't 'introduced an acausal mechanism into the universe'. The phenomenon is inescapable."
But you did! Dualism postulates that consciousness is something that cannot be completely described in terms of physical interactions. But since consciousness has undeniable physical side effects, consciousness has to appear acausal from the point of view of a purely physical examiner.
Now for the rest of my paragraph: if you agree that existence and consciousness are brute facts, and you agree that these are the only brute facts, our definitions must coincide (though mine is more precise). Hence the brain cannot be consciousness, because the brain violates our definition: it is not a brute fact. This is simple logic.
We have a consciousness. We can do a lot of stuff with it. One thing we can do is poke the environment around us and see what happens.
As we do this, we begin to detect causal logic in how the environment behaves. This leads us to the choice between solipsism and objective reality. Solipsism is not useful: it undercuts itself.
Once we have chosen objective reality, we must begin trying to build models of how it works. And when we build a model that, in the end, explains ourselves, when it contains only the necessary and sufficient factors to explain our own consciousness, then we have come full circle. We know what we are, where we come from, and what our minds do. We have demoted consciousness from brute fact to a mundane subsystem of a universe built out of quantum fields, and we know that our own illusory certainty that consciousness comes first is only that: an illusion.
We are the laser told to search for the darkness. Wherever we look, we see qualia, so we assume that qualia have primacy. But consciousness comes last.
This is physicalism. It is the only account of consciousness with any value. It tells us why we have qualia.
Consciousness is a calculation conducted by the human brain. Consciousness is the brain. This is the only logic.
2. My thing about the whole "brain creates consciousness, so it's fine if we build a new brain with the exact same brain state elsewhere", is not that I'm "worried" about forks or what happens in the meanwhile or whatever. What worries me is that I just end. That is, there is nothing in Physicalism that guarantees that my own, personal experiencing of living will continue in a very similar brain elsewhere. I don't give a rat's ass if that brain is exactly like mine, behaving like I do, having a consciousness equal to mine.
My problem is simpler: is it me doing the travelling, or is it me giving birth to a clone at the place I bought a ticket to?
Physicalism - as you seemingly define it - treats both stories as one and the same, while I do think that the difference is between you experiencing death and blankness forever (while giving birth to a clone) and actually travelling to the other side of the wall.
Physicalism tells me there's no real difference between my own conscious experience and any other's. Except that I'm stuck in mine. Everyone's stuck in their own. And this stuckness is ill-defined as of yet. We still do not understand it very well. We might say "Oh but you see it's all due to what is connected to your synapses and so on", ok, sure, but it's still very unclear.
So my point is, if I'm stuck in my Consciousness, and I'm the person who is being scanned and killed, then it logically follows that my Consciousness is stuck to die. The only technological miracle that is independent of this basic murder story is that, somehow, through a process, a new Consciousness will be born out of an identical Chinese Room in another room.
From the point of view of that guy who is just 2 seconds old, it's good. It's been fun! He just travelled thousands of whatever, closer to his goals. He will even gladly pay what he owes and shake the hands of the operators. And when he comes back again in a week, his life span will have been merely a week.
Yeah, I get you. But I am making an end run around this whole problem. I think that our 'stuckness' is simply a story we tell ourselves because we are never forced to confront what we really are: the epiphenomenal experience of being a material brain, one Planck instant at a time. We actually aren't stuck. We are endlessly teleporting: giving birth to a clone who travels forward one tick.
I believe that everything we are, including our own subjective experience of being me, having been me since I was born — in short, our credentials, our qualia — is physical. If we rebuild the physical we rebuild that subjective experience.
This is why it is important to remember that we can only claim continuous identity retrospectively. We can say 'tomorrow I will', but we cannot remember ourselves doing it. We are only planning. Our credentials haven't been established yet.
To your worry about the man who takes a teleporter trip and lives only a week, I would say that he lives far longer than the man who sleeps, and lives only a day. He lives longer than the man who gets blackout drunk, and exists as a drunken and transient bubble of joy and vomit for only half an hour before he passes out.
Well that's fine. That's a good story. I never said it wasn't a coherent story. I'd gladly read that novel and have a real blast with it (perhaps I've done so in all Star Trek series so far). What I said was, you have no way to test if that story is true. It is therefore not physics. It's not science. At most, it's metaphysics, and a tad in need of rigorous checks. It does seem to run into basic Aristotelian roadblocks of essences and forms, for instance.
The testability is fundamental here. Look at your next paragraphs:
I like how they are phrased now. They espouse your beliefs. But consider this: when you speak of how such a person "will live longer than the man who gets blackout drunk" and so on, I have the sensation I'm not reading anything really rational or scientific, merely poetical. As an analogy, it's like reading someone saying that "I'm going to live forever through my sons and daughters, I'll live forever through my work". At the end of the day, if I'm to decide whether to teleport myself or not, I will ponder those poetical things only after considering the true existence of my stuck consciousness in that world, not all of these poetical things. Like Woody Allen said:
“I don't want to achieve immortality through my work; I want to achieve immortality through not dying. I don't want to live on in the hearts of my countrymen; I want to live on in my apartment.”
Substitute "clones" for "work"; substitute "teleported copies of myself" for "hearts of my countrymen".
That's all very nice, and completely dodges the question. I'm not sure how to say it more plainly, but I'll try.
----------
Assumptions
1. "Existence", "consciousness", and "brain" are meaningful words.
2. Whatever they are, existence and (the existence of) consciousness are brute facts.
3. The existence of brains is not a brute fact.
4. "Bruteness" is a property.
5. If two things do not have the same properties, then they are not the same.
If you're denying 1 or 2 (and I'm pretty sure you're not), we have reached an impasse.
If you're using the standard definition of "brain", 3 is clear. We may be digital simulations, and hence have no brains.
You've referred to brute facts repeatedly, so I'm pretty sure you're not denying 4.
If you're denying 5, you must be using an extremely nonstandard definition of "sameness".
Claim: With these assumptions, consciousness cannot be the brain.
Proof:
By 1, the words "existence", "consciousness", and "brain" are meaningful, and we can use them.
By 2, consciousness is a brute fact. By 3, the existence of brains is not.
By 4, bruteness is a property. Hence consciousness has a property (bruteness) that the brain does not.
By 5, this implies that consciousness and the brain are not the same.
----------
The proof is logically valid, so if you disagree with the conclusion, you must disagree with one of the premises. I can't figure out which one you disagree with.
EDIT: This is in response to Battuta.
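For what it's worth, the skeleton of that proof can be checked mechanically. Below is a minimal Lean 4 sketch (my own formalization, with invented names; nothing here comes from the thread itself), treating "bruteness" as an arbitrary predicate and taking premise 5 in its usual form as Leibniz's law:

```lean
-- Sketch only: premises 2 and 3 become hypotheses, premise 4 is modeled
-- by letting `brute` be a predicate on things, and premise 5 is Lean's
-- built-in indiscernibility of identicals (rewriting along an equality).
variable {Thing : Type} (brute : Thing → Prop)
variable (consciousness brain : Thing)

theorem not_identical
    (h2 : brute consciousness)   -- premise 2: consciousness is brute
    (h3 : ¬ brute brain) :       -- premise 3: the brain is not
    consciousness ≠ brain :=
  fun h => h3 (h ▸ h2)           -- identity would transfer bruteness
```

The checker confirms only validity, not soundness: whether bruteness is a genuine property of the right kind (premise 4), and whether premises 2 and 3 are true, is exactly where the disagreement in the thread lives.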
I think these things are as testable as whether we'll still be ourselves tomorrow, which is the only criterion we need. If I sound poetical it's because our ideas of 'self', 'dying', 'consciousness' and so on are just poetry — dressed-up terms to disguise the fact that we're reels of film.
I do not share your fear that living on through teleporters is like living on through your work. If you rebuild the object, you rebuild the subject. This is not just an untestable article of faith but the default conclusion of every single piece of evidence we have about the universe. I do not see any risk to the philosophical teleporter.
But surely you see the problem there: whenever terms are poetic, that means we have no real synthetic / scientific grasp on them, let alone an engineering one. I see this as a gradient from "Mere Intuition -> Insight -> Poetic semantics -> Insight -> Philosophizing -> Pre-scientific terminology -> Insight -> More grounded Philosophizing -> Hypothesis -> Testability, testing, tinkering, empirical feedback -> Scientific terminology -> Technical insight -> Engineering -> Technology".
When you tell me to dismiss all this poetic language because what "we really are" (which is a phrase that should be followed by some technical thing) is reels of film, it just shows how deep into the shadows of Plato's Cave we really are in discussing these things. No, we're not "reels of film", although I do get your "poetic point" - isn't it all we have at this point anyway?
There's no way to distinguish both scenarios. There's no way to test it. And that's why you'll be left with mere beliefs. Always. But science does not deal in beliefs. It deals with predictions and tests. Replication. Falsification. None of what you have said meets these criteria; therefore it is not Science, it is just... your beliefs.
See where I'm rolling? 'Consciousness' is a lot easier to say than 'the retrospective illusion of continuity created by memory.'
The point he's making is that Consciousness is a product of the Brain, it's a physical process that happens within the Brain.
Indeed. A brain is not a consciousness.
It answers the question head-on. We begin with 1, proceed to 2, between 2 and 3 we conduct a search for systems of logic that use 1 and 2 to explain our perceptions. We stumble on mathematics, physics, and all their consequences: the belief in an objective reality that obeys causal logic. We reach 3 knowing that brains are consciousness: the existence of brains is the same as the existence of consciousness. We are our brains, and whatever we are is material. Any brute facts of our existence are material. The entire universe and all its rules are physical. Failing to accept this sends us back to the search between 2 and 3, which we repeat, and find no better (necessary and sufficient) model to explain our own existence.
Even if we're digital simulations, we still have brains, the simulation is computing us as little blobs of flesh. Brains are as brutally factual (:megadeth:) as consciousness.
I was going to ask Ghyl what he thought of the Cotard delusion and knowing for sure that you don't exist.
Then Battuta is not being clear or even honest when he says "consciousness is the brain". If he means, "consciousness is a product of the brain", then I agree 100%; I have never said otherwise.
You are once again not addressing the post. Do you understand the concept of a proof? You have only three options, none of which you have yet taken: 1) show that the proof is not logically valid; 2) accept that "consciousness is the brain" is false; or 3) deny one of the assumptions.
I was hoping we could avoid this, but I must now ask you to define "brain". I think this will reveal your error.
The hint is in the name. If they do exist, then those people are deluded. What's the problem?
Consciousness is in the brain.
Writing five bullet points does not a proof make. Your logic's broken! I fixed it by pointing out that we can begin at a 'brute truth' and use that truth to derive a model of the universe which qualifies and corrects our starting point.
QuoteI was hoping we could avoid this, but I must now ask you to define "brain". I think this will reveal your error.
A brain is a brain. The substrate is irrelevant as long as the constituents behave functionally. This is elementary physicalism: what matters is not the raw stuff, the bits in the ship of Theseus, but their behavior. One atom can be exchanged for another. An atom can be exchanged for a nanite. A neuron can be replaced by a synthetic alternate or a simulation in a computer. It's all a brain.
This is why the teleporter is safe. Atoms are interchangeable.
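The functional-substitution claim above can be illustrated with a toy sketch (my own construction, with invented function names, not anything from the thread): two procedures built from different "substrates" that compute the same function are behaviorally interchangeable, which is all this elementary physicalism asks of a brain's constituents.

```python
# Toy illustration of substrate independence: two implementations of the
# same function, built from different mechanisms, cannot be told apart
# by their input/output behavior.

def add_by_counting(a: int, b: int) -> int:
    # "Substrate" 1: repeated increment, like counting on fingers.
    total = a
    for _ in range(b):
        total += 1
    return total

def add_native(a: int, b: int) -> int:
    # "Substrate" 2: the hardware adder.
    return a + b

# Every observable behavior matches, so either can stand in for the other.
for a in range(20):
    for b in range(20):
        assert add_by_counting(a, b) == add_native(a, b)
```

The analogy is deliberately crude: it shows only what "the substrate is irrelevant as long as the constituents behave functionally" means, not that brains actually satisfy it.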
QuoteThe hint is in the name. If they do exist, then those people are deluded. What's the problem?
No. By your argument these people have access to a brute truth. The only thing they can be sure of is that they don't exist.
What are you trying to discuss here? Dualism versus monism, or semantics? You know exactly what point I'm making about consciousness, and you have my objection to your proof: point 1 disproves itself as soon as you use it to examine the world. Engage or change topics.
What kind of dualist do you identify as, exactly?
Are you concerned that consciousness cannot be the brain because there are segments of the brain that are not functionally accessible to consciousness? That seems a perfectly fair clarification to me, if not exactly a major logical stumbling point — it's just a matter of how you parse the wording.
This is what I'd point to as clarification, from a way back:
"Consciousness is a calculation conducted by the human brain. Consciousness is the brain. This is the only logic."
I might even insert: "Calculations are physical processes in which inputs are manipulated to produce a result."
GhylTarvoke: In the metaphor of the button and the circuitry, the brain is not the button and consciousness is not the circuitry. Qualia, experiences, are the button. The brain is the circuitry. Consciousness is the outcome.
I've noticed that this is something you've continually misapplied as an argument in your favor that there must be something else.
On a slightly different note, anyone played SOMA? I loved it. It's sometimes billed as a game about consciousness, and teleporter stuff comes up a lot. It's on my to-play list, esp. after reading this (http://www.haywiremag.com/columns/storyplay-human-machines/). Spoilers though!
Luis, would it make more sense and/or make you feel better if you instead thought of the teleporter as killing you, and then an almost immeasurably short time later bringing you back to life in another location? I feel like there's a sense here in which there's a perceived interruption of the continuity of thought, when no such thing takes place.
My argument is that science and ConsciousnessBob are logically independent, which undermines every scientific attempt to "explain" consciousness.
Thus, ConsciousnessBob (unlike virtually everything else, including Pluto) is logically independent of science, in the sense that science says nothing about its existence. Even if I assume the inviolate truth of science, I can't conclude anything about the existence of ConsciousnessBob.
The demon's existence is not falsifiable. Science says nothing about its existence, so Occam's Razor advises me to favor a model without the demon.
The problem is that ConsciousnessBob holds exactly the same status as the demon! Science says nothing about its existence, so Occam's Razor advises me to favor a model in which I am the only locus of experience.
You have another model, where ConsciousnessBob is an emergent property of the system Bob, in very much the same way that ConsciousnessMe is an emergent property of the system Me.
Like the astrophysical models of stellar structure, it has excellent explanatory power.
It fits in the context of prior understanding of the universe (it's all just physics!), and it acts as a pathway to further understanding.
What additional explanatory power does it offer? Why must ConsciousnessBob exist at all, if I can explain Bob's behavior using purely reductive arguments?
If your consciousness is an emergent property of your construction, then it follows that objects of similar construction will have similar emergent properties.
I mean, I am not really following these arguments of yours; it all reads so very solipsistic. Your claim that you can explain another's behaviour through purely reductive reasoning is already a sign of you massively overstepping the bounds of your confidence.
Furthermore, if you posit that a being that is in all important aspects completely identical to you and that acts as if conscious actually isn't, then what evidence do you have that you are conscious? Yes, you claim it to be self-evident, but is it really?
Why must you exist at all, in this paradigm?
The moment you reduce all others' consciousness to a reductive argument, yours must be reduced as well, unless you are capable of laying out a coherent argument for why you are a special case.
You are using a very simplistic argument-by-definition, one which holds that your viewpoint is unique. Bob can make the exact same argument as you are for his viewpoint being the unique one and his argument will be just as valid as yours is, unless you can establish some qualitative difference between you and Bob.
You have not done this. Your argument offers nothing over Bob's argument; both cannot be true at the same time but are otherwise in all respects identical; both are therefore likely false.
The problem becomes even thornier when I try to define "similar". Okay, let's say Bob is conscious, since he and I are both human (whatever that means). Is a dog conscious? What about a plant? I face exactly the same difficulty that I faced with Bob, and only faith lets me draw a line.
The "Bob is not conscious" model is not solipsism. Science still applies, and everything still exists - except for ConsciousnessBob. If the model contains no contradictions, then studying consciousness in a scientific way becomes extremely difficult. In particular, I have no way of testing whether something is conscious.
Solipsism (from Latin solus, meaning "alone", and ipse, meaning "self") is the philosophical idea that only one's own mind is sure to exist. As an epistemological position, solipsism holds that knowledge of anything outside one's own mind is unsure; the external world and other minds cannot be known and might not exist outside of the mind. As a metaphysical position, solipsism goes further to the conclusion that the world and other minds do not exist.
Science may not currently explain Bob's behavior, but it can in theory. If you disagree, you seem to believe that there's something acausal about free will.
See my definition of ConsciousnessMe. Bob and I both exist.
"Reducing others' consciousness" presupposes that the notion of others' consciousness is coherent. Bob may make the same statements I make, but his statements may not even make sense. Furthermore (unless free will is acausal), I can theoretically use reductive arguments to explain why he makes those statements.
The only justification you've offered for positing others' consciousness is a massive extrapolation: based on a sample size of one (myself), I draw conclusions about a population of seven billion, and that's when I only consider human beings. Against Occam's Razor, this reasoning seems flimsy.
Then explain why people under the influence of drugs behave differently compared to when they are sober.
There is plenty of medical research, and quite a few successful product lines, based around the idea that human brains are similar enough that drugs will produce reproducible effects. Therefore, we have to assume that human brains share properties, and that if something is true for one brain, it will be true for any number of other brains as well.
Following from that, since we know that brains and consciousness are intimately connected, we have to assume that if one brain is conscious, others have to be too. This is basic inductive reasoning.
Yes, it is. Let's quote the Wiki, shall we: QuoteSolipsism (from Latin solus, meaning "alone", and ipse, meaning "self") is the philosophical idea that only one's own mind is sure to exist. As an epistemological position, solipsism holds that knowledge of anything outside one's own mind is unsure; the external world and other minds cannot be known and might not exist outside of the mind. As a metaphysical position, solipsism goes further to the conclusion that the world and other minds do not exist.
That does seem to fit your stance rather well, doesn't it?
Science can explain Bob's behaviour. It can also explain yours, using the same assumptions. It follows then that, on some level, you and Bob are more or less indistinguishable.
Yes, but you have explicitly said that while you assume yourself to be conscious, no such assumption can be made for others; all we're asking is why you believe your assumption about yourself to be true.
If Bob makes the same arguments you do, but Bob's do not make sense, then it follows that your arguments do not make sense either.
And the assumption that you alone are the only conscious entity in the universe somehow fulfills Occam's simplicity criterion isn't flimsy?
The difference between stoned Bob's behavior and sober Bob's behavior can be explained purely by reductive reasoning. Asking about the difference between ConsciousnessStonedBob and ConsciousnessSoberBob only makes sense if I assume a model in which Bob is conscious.
This is question-begging. Your reasoning only applies if I assume a model in which other people are conscious.
Science can indeed explain my behavior (where "my" refers to the locus of ConsciousnessMe). Bob and I are distinguishable because I am the locus of ConsciousnessMe, whereas Bob is not.
ConsciousnessMe exists by definition. I define "me" to be the locus of ConsciousnessMe. But I can't prove to you that I am conscious, and vice versa.
If Bob defines "me" to be the locus of ConsciousnessMe, then his argument makes sense. If he defines "me" to be the locus of ConsciousnessBob, then his argument may not make sense.
Your model posits the existence of seven billion entities that explain nothing. If that doesn't qualify for Occam's Razor, I don't know what does.
And what happens when you take drugs? Do they have similar effects on you?
No, it applies if your physiological makeup is more or less identical to that of an entity you presume to be unconscious. Which it is. To a ridiculous degree.
So, to restate: Assuming you have a brain, and assuming that altering your brain alters your consciousness in ways similar to the alterations observed when another's brain is so altered, then it follows that there are similar mechanisms at work. Since it is undeniable that the brain is the seat of your consciousness, and since it is provable that other people have brains of largely similar construction and complexity, you have to prove that you are in some way fundamentally different to others for your assumptions to work.
Basically, if you start with the axiom that you are conscious and then hypothesize that others aren't, you need to identify the key difference between you and others. You have so far failed to do so; despite your proclaimed beliefs in the scientific method, you aren't following it.
Sure. But the seat of your consciousness, if extracted from its skullprison, is indistinguishable from Bob's. We can do fine structure scans and see differences in the connectome, but overall, the differences are really minor and not enough to explain why you should be conscious and he isn't.
QuoteConsciousnessMe exists by definition. I define "me" to be the locus of ConsciousnessMe. But I can't prove to you that I am conscious, and vice versa.
Of course you can. Just do things while I am sleeping.
No, let's posit this. Bob makes the exact same statements you have made. He puts forth the same arguments you put forth to prove that he, not you, is the only conscious entity present. What do you do? Do you prove him wrong? Do you agree with him? Do you two get into a big fight about who the conscious one in this relationship is?
But they explain a whole lot of things. For example, the appearance of roads in my vicinity. Or parking tickets. I can prove to my satisfaction that the house I am in exists. I can further prove that I had nothing to do with its construction. Therefore, other agencies must be present, and astonishingly, there are entities all around me that are fundamentally similar to me, that share many of my qualities and therefore can be safely assumed to be grossly similar to me. Thus I have proven to my satisfaction that consciousness is a universal quality found in many different places.
I would like to ask you something though. If we buy into your theory, that you are the only conscious being in the universe, why do you wear clothes?
The existence of roads, parking tickets, my house, and even other people is not in dispute. As for the similarity argument, see above. It's nothing more than a massive extrapolation.
I don't believe in the solipsist model. If I did, I'd be in a mental institution instead of debating with you. What the solipsist model shows is that science and the existence of ConsciousnessBob are logically independent, which means that scientific investigation of consciousness has fundamental limits. In particular, I have no way of testing whether something is conscious.
QuoteThe existence of roads, parking tickets, my house, and even other people is not in dispute. As for the similarity argument, see above. It's nothing more than a massive extrapolation.
Explain why any of these things exist, then.
I strongly believe that given time, it is possible to fully map the processes running in a mammalian brain, and that we'll find consciousness to be an emergent property of sufficiently complex neural networks (with the lower bound for complexity probably being far lower than we would think).
That's the big problem your model has: By declaring yourself to be conscious, your argument essentially becomes "If consciousness is a quality of me, and other people are not me, then I cannot be sure they're conscious". You make no attempt to explain the hows and whys of consciousness, you accept it as a given.
The solipsistic model isn't useful in any way, because it does not provide a framework to examine yourself. In it, objectivity is unattainable and science (or rather, the scientific method) cannot be used to explain yourself to you.
If by "consciousness" you mean ConsciousnessL for some locus L, then the claim "consciousness is an emergent property of sufficiently complex neural networks" is untestable and independent of science. If you mean something else, then I agree.
Not accepting ConsciousnessMe as a given is the same as denying that something certainly exists. If you truly believe it's possible that nothing exists, then I can't help you.
Any scientific argument that goes through in the "Bob is conscious" model also goes through in the solipsistic model. Science explains the behavior of "me", and it also explains the behavior of ConsciousnessMe. Science says nothing about the existence or behavior of ConsciousnessBob.
I am not going to mess up perfectly good words with subscript madness. Defining consciousness as the result of a sufficiently capable neural net, backed by enough memory to gather and evaluate experiential data, processing inputs into outputs is a perfectly sufficient and testable definition of consciousness, and that is the one I subscribe to.
Where, in this entire topic, have I ever even come close to the idea that nothing exists?
My criticism of your starting point is that you declare something to be axiomatically true that doesn't need to be. By using a completely physicalist definition of consciousness, I can devise tests to see if something is conscious; I can even arrive at a complete model of it and replicate it. In your approach, you seemingly throw your hands in the air and say that science is great, but there's this barrier right here that it can't cross. It is no wonder then that you cannot prove another's consciousness exists; you can't even prove that you are conscious (just as we cannot prove that |x| + |y| > |x| and |x| + |y| > |y|). Solipsism, to me, is intellectually lazy. It's a capitulation.
Except it does if you're not using a solipsistic model.
1. You can either accept or not accept the definition of ConsciousnessMe. If you don't accept it, then you deny that something exists. Let's assume you accept it.
2. You can either accept or not accept the definition of ConsciousnessBob. If you don't accept it, then you're in the solipsistic model. Let's assume you accept it.
3. You can either consider or not consider the question: "Is science independent of ConsciousnessBob?" If you don't consider it, then you're sticking your head in the sand. If you do consider it, then the solipsistic model shows that the answer is "yes".
Regarding the definition of consciousness, see above. ConsciousnessMe exists by definition, so "proving its existence" makes no sense.
Again, I don't believe in the solipsistic model (though I have no justification for my disbelief). It's a tool that demonstrates the independence of science and ConsciousnessBob.
If science "explains" ConsciousnessBob in the "Bob is conscious" model, then it also does so in the solipsistic model. But ConsciousnessBob doesn't even exist in the solipsistic model, so this is a contradiction.
I do not accept your definitions, plural. I do not "deny the existence of something". I do not accept your initial setup as valid, and consider it to be deeply flawed and misguided. In case that wasn't clear enough already.
Oh for ****'s sake.
Let's be absolutely clear here: Your setup defines consciousness as an intrinsic quality of you, and you alone. By that definition, sure, no statements can be made about others. But that definition is deeply, fatally, flawed, as pointed out above. You are trying to win this argument by forcing everyone to play by the rules you set up, and if I or someone else points out to you that your rules are not making sense and do not lead to a place that allows for meaningful inquiry about the hows and whys of human cognition, you keep retreating to your definition.
This isn't fun. Not for me, not for anyone else still reading this topic, I imagine.
What exactly are you trying to learn here anyway? What is the point of this discussion? What are your goals for it?
No, it's not a tool. It's a desire to not have to deal with others dressed up in pretty philosophical language (in this regard, it shares qualities with libertarianism).
QuoteIf science "explains" ConsciousnessBob in the "Bob is conscious" model, then it also does so in the solipsistic model. But ConsciousnessBob doesn't even exist in the solipsistic model, so this is a contradiction.
Which is why the solipsistic model is invalid.
Here's the point. Although the models have differing perspectives, they both incorporate science in its entirety: for every scientific argument that goes through in one model, the same argument also goes through in the other model. (To reiterate my example from Part 1, science predicts the existence of Pluto in both models; the models only disagree on the metaphysical issue of Pluto's "true nature".) Furthermore, both models are sound: they account for all of my observations, and lead to no logical contradictions. [Unless our current understanding of science is self-contradictory.] But ConsciousnessBob exists in the first model, whereas ConsciousnessBob does not exist in the second model. Thus, ConsciousnessBob (unlike virtually everything else, including Pluto) is logically independent of science, in the sense that science says nothing about its existence. Even if I assume the inviolate truth of science, I can't conclude anything about the existence of ConsciousnessBob.
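The independence claim above has a simple formal shape, which a toy sketch can make explicit (my own construction; the model dictionaries and predicate names are invented for illustration): two "models" that agree on every observable prediction yet differ on one unobservable proposition cannot be separated by any test that consults only the observables.

```python
# Two toy world-models: identical on every observable prediction (the
# shared "science"), differing only on an unobservable flag. Any test
# function that reads only the observables must give the same verdict
# for both models.

def observables(model: dict) -> dict:
    # Project out everything except the unobservable proposition.
    return {k: v for k, v in model.items() if k != "consciousness_bob"}

model_bob_conscious = {"pluto_exists": True,
                       "bob_flinches_when_pinched": True,
                       "consciousness_bob": True}
model_solipsist = {"pluto_exists": True,
                   "bob_flinches_when_pinched": True,
                   "consciousness_bob": False}

# Science cannot separate the models...
assert observables(model_bob_conscious) == observables(model_solipsist)
# ...although they disagree on the unobservable proposition.
assert (model_bob_conscious["consciousness_bob"]
        != model_solipsist["consciousness_bob"])
```

Of course, the sketch builds in the assumption that ConsciousnessBob has no observable consequences, which is precisely the premise the other side of this thread rejects.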
Consciousness is not an intrinsic quality of me. ConsciousnessMe is an intrinsic property of me, which makes perfect sense (unless you believe that ConsciousnessMe is the collective consciousness of the entire human race, or something). I can make (and test!) lots of statements about other people. What I cannot do is test the existence or behavior of ConsciousnessBob.
I'm not forcing you to do anything. You're free to believe that ConsciousnessMe doesn't exist, or ConsciousnessBob doesn't exist. But I think we both believe that they do exist.
1. You can either accept or not accept the definition of ConsciousnessMe. If you don't accept it, then you deny that something exists. Let's assume you accept it.
2. You can either accept or not accept the definition of ConsciousnessBob. If you don't accept it, then you're in the solipsistic model. Let's assume you accept it.
3. You can either consider or not consider the question: "Is science independent of ConsciousnessBob?" If you don't consider it, then you're sticking your head in the sand. If you do consider it, then the solipsistic model shows that the answer is "yes".
The hows and whys of human cognition are perfectly susceptible to the scientific method. It's an extremely interesting subject in its own right.
There's a lot of fuss about "consciousness". My first goal is to extract the really difficult part of that concept. My second goal is to show that the really difficult part isn't susceptible to the scientific method, and that people are waiting for a scientific (rather than philosophical) answer in vain.
Invalid in what sense? Not in a way that contradicts science, because science works perfectly fine in the solipsistic model. If you're saying it "feels wrong", then I agree, but that's not a scientific argument.
Let's go back to your initial punchline. Most things leading up to it are valid, but the conclusion you draw here is wrong: QuoteHere's the point. Although the models have differing perspectives, they both incorporate science in its entirety: for every scientific argument that goes through in one model, the same argument also goes through in the other model. (To reiterate my example from Part 1, science predicts the existence of Pluto in both models; the models only disagree on the metaphysical issue of Pluto's "true nature".) Furthermore, both models are sound: they account for all of my observations, and lead to no logical contradictions. [Unless our current understanding of science is self-contradictory.] But ConsciousnessBob exists in the first model, whereas ConsciousnessBob does not exist in the second model. Thus, ConsciousnessBob (unlike virtually everything else, including Pluto) is logically independent of science, in the sense that science says nothing about its existence. Even if I assume the inviolate truth of science, I can't conclude anything about the existence of ConsciousnessBob.
In the solipsistic model, science cannot exist. There is no way to prove that anything outside your immediate perceptions exists, because there is no external data to be had; Every piece of information that reaches you second-hand is suspect because the agencies bringing you that information are impossible to verify. This renders your assertion that both models are complete and effectively equivalent invalid.
QuoteConsciousness is not an intrinsic quality of me. ConsciousnessMe is an intrinsic property of me, which makes perfect sense (unless you believe that ConsciousnessMe is the collective consciousness of the entire human race, or something). I can make (and test!) lots of statements about other people. What I cannot do is test the existence or behavior of ConsciousnessBob.
That's because you cling to the belief that consciousness and its constituent parts are nonphysical entities. As far as we can tell, they're not; we can observe action in a brain that corresponds to input it receives. By excluding nonphysical nonsense, we can arrive at a definition of and test for consciousness that undermines your assertion that it is impossible to prove other people are conscious.
Quote:
I'm not forcing you to do anything. You're free to believe that ConsciousnessMe doesn't exist, or ConsciousnessBob doesn't exist. But I think we both believe that they do exist.
Not forcing me to do anything?

Quote:
1. You can either accept or not accept the definition of ConsciousnessMe. If you don't accept it, then you deny that something exists. Let's assume you accept it.
2. You can either accept or not accept the definition of ConsciousnessBob. If you don't accept it, then you're in the solipsistic model. Let's assume you accept it.
3. You can either consider or not consider the question: "Is science independent of ConsciousnessBob?" If you don't consider it, then you're sticking your head in the sand. If you do consider it, then the solipsistic model shows that the answer is "yes".
You are setting up rhetorical questions that force me to conform to your model in its entirety. I refuse to do so.
You are again trying to win an argument by retreating to your definitions. Stop it, and at least try to consider that your definitions are off.
Any one of these descriptions could be correct. None of them can be verified or falsified.
I want to emphasize the difference between model as "interpretation of the ultimate nature of reality" and model as "description of how observable phenomena function according to causal rules, with the purpose of having predictive and explanatory power over them".
We could live in a Matrix, and all of our science is still perfectly valid. In the event that some aspect of the simulation changes, we'll make more observations and update the models to explain that change. Science still functions. Meanwhile, we still have no way of proving or disproving that we live in a Matrix. Any bug or change in the system still looks like the "laws of nature". Maybe black holes are just a bug.
Maybe the Moon was just a simulated ball of light until humans first landed and walked on its surface. This is observationally indistinguishable from the model wherein it is an astrophysical object produced from a collision with Earth billions of years ago. But the astrophysical model has wonderful explanatory power. It fits within the framework of our understanding of the solar system. The idea that the Moon was just a simulated ball of light until we landed on it has no explanatory power at all. It has no motivation from prior knowledge, nor does it further our understanding of anything. That is not a model in any scientific sense. It is the antithesis of a model.
We cannot prove to you that "ConsciousnessMe is everything" is wrong. But we can examine consciousness and formulate explanations for how it arises and operates with the scientific method. I get the sense that you think these are mutually exclusive (they're not) and that they also have equal footing (they don't). These may be the most difficult things to wrap your head around.
Well... I can say that when I see the tree, it's definitely there, and I can say that after years of meditation :P
The way we perceive everything around us is a matter of perspective. You should see what we experience when our pineal gland releases some dose of N,N-Dimethyltryptamine. The world is soooo colorful and so cool in that state :P. I mean natural DMT, not synthetic DMT and other drug crap. The pineal gland produces psychedelics in the delta stage of sleep to create dreams, and rarely in some other circumstances. The way we see or hear everything around us differs from person to person.
The world around us is not any kind of Matrix. The laws of physics are solid, science is valid... And I'm sure that the tree in front of me is real, and that everything is not some kind of illusion. Perceptions differ depending on brain state. When we are mad, sad, have depression or any other crap, we see the world around us as gray, cold and mostly uninteresting. When we are under all the good hormones like oxytocin or serotonin, we see everything as colorful, beautiful, and cool! Natural DMT gives the best result. I'm calling that a "spiritual high" :P.
This is the reason the qualia problem, the problem of intentionality, and other philosophical problems touching on human nature are so intractable. Indeed, it is one reason many post-Cartesian philosophers have thought dualism unavoidable. If you define “material” in such a way that irreducibly qualitative, semantic, and teleological features are excluded from matter, but also say that these features exist in the mind, then you have thereby made of the mind something immaterial. Thus, Cartesian dualism was not some desperate rearguard action against the advance of modern science; on the contrary, it was the inevitable consequence of modern science (or, more precisely, the inevitable consequence of regarding modern science as giving us an exhaustive account of matter).
However, many people imagine that consciousness will yield to scientific inquiry in precisely the way that other difficult problems have in the past. What, for instance, is the difference between a living system and a dead one? Insofar as the question of consciousness itself can be kept off the table, it seems that the difference is now reasonably clear to us. And yet, as late as 1932, the Scottish physiologist J.S. Haldane (father of J.B.S. Haldane) wrote:
"What intelligible account can the mechanistic theory of life give of the…recovery from disease and injuries? Simply none at all, except that these phenomena are so complex and strange that as yet we cannot understand them. It is exactly the same with the closely related phenomena of reproduction. We cannot by any stretch of the imagination conceive a delicate and complex mechanism which is capable, like a living organism, of reproducing itself indefinitely often."
I laughed out loud at this:

Quote:
This is the reason the qualia problem, the problem of intentionality, and other philosophical problems touching on human nature are so intractable. Indeed, it is one reason many post-Cartesian philosophers have thought dualism unavoidable. If you define “material” in such a way that irreducibly qualitative, semantic, and teleological features are excluded from matter, but also say that these features exist in the mind, then you have thereby made of the mind something immaterial. Thus, Cartesian dualism was not some desperate rearguard action against the advance of modern science; on the contrary, it was the inevitable consequence of modern science (or, more precisely, the inevitable consequence of regarding modern science as giving us an exhaustive account of matter).
This is a quite hilarious (and somewhat desperate) rearguard action to portray dualism as something necessary when it really isn't.
Here’s why. Keep in mind that Descartes, Newton, and the other founders of modern science essentially stipulated that nothing that would not fit their exclusively quantitative or “mathematicized” conception of matter would be allowed to count as part of a “scientific” explanation. Now to common sense, the world is filled with irreducibly qualitative features -- colors, sounds, odors, tastes, heat and cold -- and with purposes and meanings. None of this can be analyzed in quantitative terms. To be sure, you can re-define color in terms of a surface’s reflection of light of certain wavelengths, sound in terms of compression waves, heat and cold in terms of molecular motion, etc. But that doesn’t capture what common sense means by color, sound, heat, cold, etc. -- the way red looks, the way an explosion sounds, the way heat feels, etc. So, Descartes and Co. decided to treat these irreducibly qualitative features as projections of the mind. The redness we see in a “Stop” sign, as common sense understands redness, does not actually exist in the sign itself but only as the quale of our conscious visual experience of the sign; the heat we attribute to the bathwater, as common sense understands heat, does not exist in the water itself but only in the “raw feel” that the high mean molecular kinetic energy of the water causes us to experience; meanings and purposes do not exist in external material objects but only in our minds, and we project these onto the world; and so forth. Objectively there are only colorless, odorless, soundless, tasteless, meaningless particles in fields of force.
But, to determine more absolutely, what Light is, after what manner refracted, and by what modes or actions it produceth in our minds the Phantasms of Colours, is not so easie.
More hilarious wrongness:

Quote:
However, many people imagine that consciousness will yield to scientific inquiry in precisely the way that other difficult problems have in the past. What, for instance, is the difference between a living system and a dead one? Insofar as the question of consciousness itself can be kept off the table, it seems that the difference is now reasonably clear to us. And yet, as late as 1932, the Scottish physiologist J.S. Haldane (father of J.B.S. Haldane) wrote:
"What intelligible account can the mechanistic theory of life give of the…recovery from disease and injuries? Simply none at all, except that these phenomena are so complex and strange that as yet we cannot understand them. It is exactly the same with the closely related phenomena of reproduction. We cannot by any stretch of the imagination conceive a delicate and complex mechanism which is capable, like a living organism, of reproducing itself indefinitely often."
There are quite a few misconceptions in this. I wonder if you can spot them.
The quoted paragraph is Feser's conclusion. His reasoning is in a previous paragraph:

Quote:
...snipped...
As a side note, I think Feser is referencing Newton's work in optics. Newton wrote about the mechanisms of vision (light, the retina, the optic nerve, etc.), but purposely avoided the experience of vision:

Quote:
But, to determine more absolutely, what Light is, after what manner refracted, and by what modes or actions it produceth in our minds the Phantasms of Colours, is not so easie.
But couldn’t a mature neuroscience nevertheless offer a proper explanation of human consciousness in terms of its underlying brain processes? We have reasons to believe that reductions of this sort are neither possible nor conceptually coherent. Nothing about a brain, studied at any scale (spatial or temporal), even suggests that it might harbor consciousness. Nothing about human behavior, or language, or culture, demonstrates that these products are mediated by subjectivity. We simply know that they are—a fact that we appreciate in ourselves directly and in others by analogy.
It seems to me that he's saying that there can be no scientific definition of what "heat" or "red" or stuff like that means because we infuse those terms with meaning beyond mere physical attributes, and that this in turn means that there can be no scientific definition of "consciousness".
That's just plain stupid. We can define terms scientifically. We can measure the impact sensory perceptions have on the brain. There is no magic point at which a water temperature of 40+ degrees C suddenly turns into "Warm" and is thus imbued with metaphysical aspects. The point here is that, again, none of these writings makes a clear case that dualism is a necessary hypothesis without which we cannot explain consciousness. They assign undue meaning to what is, to my mind, just the brain adding metadata to sensory perceptions based on previous experiences.
If we look for consciousness in the physical world, all we find are increasingly complex systems giving rise to increasingly complex behavior—which may or may not be attended by consciousness. The fact that the behavior of our fellow human beings persuades us that they are (more or less) conscious does not get us any closer to linking consciousness to physical events. Is a starfish conscious? A scientific account of the emergence of consciousness would answer this question. And it seems clear that we will not make any progress by drawing analogies between starfish behavior and our own. It is only in the presence of animals sufficiently like ourselves that our intuitions about (and attributions of) consciousness begin to crystallize. Is there “something that it is like” to be a cocker spaniel? Does it feel its pains and pleasures? Surely it must. How do we know? Behavior, analogy, parsimony.
Perhaps the emergence of consciousness is simply incomprehensible in human terms. Every chain of explanation must end somewhere—generally with a brute fact that neglects to explain itself. Consciousness might represent a terminus of this sort.
It's necessary to clarify what you mean by "red". If you define it "in terms of a surface’s reflection of light of certain wavelengths", then you're speaking objectively, using the language of science. Your experience of redness, on the other hand, is a subjective phenomenon. You know it exists, but you have no way of comparing it with other people's experiences of redness. (In fact - going down the rabbit holes of solipsism and the simulation hypothesis - you can't even verify that other people have experiences of redness.) Because this particular aspect of redness cannot be analyzed or verified objectively, science sweeps it under "the rug of the mind", along with other subjective phenomena.
I do. I also realize that he's utterly wrong here:

Quote:
But couldn’t a mature neuroscience nevertheless offer a proper explanation of human consciousness in terms of its underlying brain processes? We have reasons to believe that reductions of this sort are neither possible nor conceptually coherent. Nothing about a brain, studied at any scale (spatial or temporal), even suggests that it might harbor consciousness. Nothing about human behavior, or language, or culture, demonstrates that these products are mediated by subjectivity. We simply know that they are—a fact that we appreciate in ourselves directly and in others by analogy.