Author Topic: The "hard problem of consciousness"  (Read 48583 times)


Re: The "hard problem of consciousness"
I do not accept your definitions, plural. I do not "deny the existence of something". I do not accept your initial setup as valid, and I consider it deeply flawed and misguided. In case that wasn't clear enough already.

... okay, let's start with step 1. Exactly how do you not accept the definition of ConsciousnessMe, and simultaneously believe that something exists? Would you prefer to give it a different name?

Oh for ****'s sake.

Let's be absolutely clear here: Your setup defines consciousness as an intrinsic quality of you, and you alone. By that definition, sure, no statements can be made about others. But that definition is deeply, fatally flawed, as pointed out above. You are trying to win this argument by forcing everyone to play by the rules you set up, and when I or someone else points out that your rules do not make sense and do not lead to a place that allows for meaningful inquiry into the hows and whys of human cognition, you keep retreating to your definition.

Consciousness is not an intrinsic quality of me. ConsciousnessMe is an intrinsic property of me, which makes perfect sense (unless you believe that ConsciousnessMe is the collective consciousness of the entire human race, or something). I can make (and test!) lots of statements about other people. What I cannot do is test the existence or behavior of ConsciousnessBob.

I'm not forcing you to do anything. You're free to believe that ConsciousnessMe doesn't exist, or ConsciousnessBob doesn't exist. But I think we both believe that they do exist.

The hows and whys of human cognition are perfectly susceptible to the scientific method. It's an extremely interesting subject in its own right.

This isn't fun. Not for me, not for anyone else still reading this topic, I imagine.

What exactly are you trying to learn here anyway? What is the point of this discussion? What are your goals for it?

I think this is fun. It might be the most challenging discussion I've ever had.

There's a lot of fuss about "consciousness". My first goal is to extract the really difficult part of that concept. My second goal is to show that the really difficult part isn't susceptible to the scientific method, and that people are waiting for a scientific (rather than philosophical) answer in vain.

No, it's not a tool. It's a desire to not have to deal with others dressed up in pretty philosophical language (in this regard, it shares qualities with libertarianism).

Okay, this is getting ridiculous. Again: even if I believed in the solipsistic model - which I don't - you, Bob, and I still exist.

Quote
If science "explains" ConsciousnessBob in the "Bob is conscious" model, then it also does so in the solipsistic model. But ConsciousnessBob doesn't even exist in the solipsistic model, so this is a contradiction.

Which is why the solipsistic model is invalid.

Invalid in what sense? Not in a way that contradicts science, because science works perfectly fine in the solipsistic model. If you're saying it "feels wrong", then I agree, but that's not a scientific argument.

 

Offline The E

  • He's Ebeneezer Goode
  • 213
  • Nothing personal, just tech support.
    • Steam
    • Twitter
Re: The "hard problem of consciousness"
Let's go back to your initial punchline. Most things leading up to it are valid, but the conclusion you draw here is wrong:
Quote
Here's the point. Although the models have differing perspectives, they both incorporate science in its entirety: for every scientific argument that goes through in one model, the same argument also goes through in the other model. (To reiterate my example from Part 1, science predicts the existence of Pluto in both models; the models only disagree on the metaphysical issue of Pluto's "true nature".) Furthermore, both models are sound: they account for all of my observations, and lead to no logical contradictions.  [Unless our current understanding of science is self-contradictory.]  But ConsciousnessBob exists in the first model, whereas ConsciousnessBob does not exist in the second model. Thus, ConsciousnessBob (unlike virtually everything else, including Pluto) is logically independent of science, in the sense that science says nothing about its existence. Even if I assume the inviolate truth of science, I can't conclude anything about the existence of ConsciousnessBob.

In the solipsistic model, science cannot exist. There is no way to prove that anything outside your immediate perceptions exists, because there is no external data to be had; every piece of information that reaches you second-hand is suspect, because the agencies bringing you that information are impossible to verify. This invalidates your assertion that the two models are complete and effectively equivalent.

Consciousness is not an intrinsic quality of me. ConsciousnessMe is an intrinsic property of me, which makes perfect sense (unless you believe that ConsciousnessMe is the collective consciousness of the entire human race, or something). I can make (and test!) lots of statements about other people. What I cannot do is test the existence or behavior of ConsciousnessBob.

That's because you cling to the belief that consciousness and its constituent parts are nonphysical entities. As far as we can tell, they're not; we can observe action in a brain that corresponds to input it receives. By excluding nonphysical nonsense, we can arrive at a definition of and test for consciousness that undermines your assertion that it is impossible to prove other people are conscious.

Quote
I'm not forcing you to do anything. You're free to believe that ConsciousnessMe doesn't exist, or ConsciousnessBob doesn't exist. But I think we both believe that they do exist.

Not forcing me to do anything?

Quote
1. You can either accept or not accept the definition of ConsciousnessMe. If you don't accept it, then you deny that something exists. Let's assume you accept it.
2. You can either accept or not accept the definition of ConsciousnessBob. If you don't accept it, then you're in the solipsistic model. Let's assume you accept it.
3. You can either consider or not consider the question: "Is science independent of ConsciousnessBob?" If you don't consider it, then you're sticking your head in the sand. If you do consider it, then the solipsistic model shows that the answer is "yes".

You are setting up rhetorical questions here that are forcing me to conform to your model in its entirety. I refuse to do so.

Quote
The hows and whys of human cognition are perfectly susceptible to the scientific method. It's an extremely interesting subject in its own right.

And consciousness is an intrinsic part of it. By proclaiming consciousness to be off-limits to science, you are limiting any model for human cognition in such a way as to make research into it useless.

Quote
There's a lot of fuss about "consciousness". My first goal is to extract the really difficult part of that concept. My second goal is to show that the really difficult part isn't susceptible to the scientific method, and that people are waiting for a scientific (rather than philosophical) answer in vain.

Then you've failed. You have so far not shown any reason why a model based purely on biology, chemistry, physics and math is insufficient.

Quote
Invalid in what sense? Not in a way that contradicts science, because science works perfectly fine in the solipsistic model. If you're saying it "feels wrong", then I agree, but that's not a scientific argument.

You are again trying to win an argument by retreating to your definitions. Stop it, and at least try to consider that your definitions are off.
If I'm just aching this can't go on
I came from chasing dreams to feel alone
There must be changes, miss to feel strong
I really need life to touch me
--Evergrey, Where August Mourns

 
Re: The "hard problem of consciousness"
Quote
Let's go back to your initial punchline. Most things leading up to it are valid, but the conclusion you draw here is wrong:
Quote
Here's the point. Although the models have differing perspectives, they both incorporate science in its entirety: for every scientific argument that goes through in one model, the same argument also goes through in the other model. (To reiterate my example from Part 1, science predicts the existence of Pluto in both models; the models only disagree on the metaphysical issue of Pluto's "true nature".) Furthermore, both models are sound: they account for all of my observations, and lead to no logical contradictions.  [Unless our current understanding of science is self-contradictory.]  But ConsciousnessBob exists in the first model, whereas ConsciousnessBob does not exist in the second model. Thus, ConsciousnessBob (unlike virtually everything else, including Pluto) is logically independent of science, in the sense that science says nothing about its existence. Even if I assume the inviolate truth of science, I can't conclude anything about the existence of ConsciousnessBob.

In the solipsistic model, science cannot exist. There is no way to prove that anything outside your immediate perceptions exists, because there is no external data to be had; every piece of information that reaches you second-hand is suspect, because the agencies bringing you that information are impossible to verify. This invalidates your assertion that the two models are complete and effectively equivalent.

There isn't a way to prove that anything outside your perception exists (I invite you to try). The statement that something "really exists" is a metaphysical claim about the nature of reality. Fortunately, it's irrelevant to the scientific method, which deals with testable predictions.

If you want to assume that something "really exists", you run into two problems. First, the assumption is completely unnecessary; science deals with relations, and doesn't give a fig about the "true nature" of things. (Battuta said something similar on the first page of the thread.) Second, what do you assume "really exists"? Do parking tickets really exist? What about roads or houses? And if you simply assume that everything in your perception "really exists", you're back to square one.

Consciousness is not an intrinsic quality of me. ConsciousnessMe is an intrinsic property of me, which makes perfect sense (unless you believe that ConsciousnessMe is the collective consciousness of the entire human race, or something). I can make (and test!) lots of statements about other people. What I cannot do is test the existence or behavior of ConsciousnessBob.

That's because you cling to the belief that consciousness and its constituent parts are nonphysical entities. As far as we can tell, they're not; we can observe action in a brain that corresponds to input it receives. By excluding nonphysical nonsense, we can arrive at a definition of and test for consciousness that undermines your assertion that it is impossible to prove other people are conscious.

You and I are in complete agreement on consciousness as you define it. If you insist on pretending that I never mentioned ConsciousnessMe (perhaps because you're clinging to the belief that science can explain everything), nothing more can be said.

Quote
I'm not forcing you to do anything. You're free to believe that ConsciousnessMe doesn't exist, or ConsciousnessBob doesn't exist. But I think we both believe that they do exist.

Not forcing me to do anything?

Quote
1. You can either accept or not accept the definition of ConsciousnessMe. If you don't accept it, then you deny that something exists. Let's assume you accept it.
2. You can either accept or not accept the definition of ConsciousnessBob. If you don't accept it, then you're in the solipsistic model. Let's assume you accept it.
3. You can either consider or not consider the question: "Is science independent of ConsciousnessBob?" If you don't consider it, then you're sticking your head in the sand. If you do consider it, then the solipsistic model shows that the answer is "yes".

You are setting up rhetorical questions here that are forcing me to conform to your model in its entirety. I refuse to do so.
You are again trying to win an argument by retreating to your definitions. Stop it, and at least try to consider that your definitions are off.

I'm trying my best to read your mind here. What do you mean by "your definitions are off"? Are you saying that they don't actually define anything? That they're meaningless gibberish? I mean, if you're going to pretend I never said anything, there's no point in continuing.

Your argument seems to be something like this: "Both accepting and rejecting the definition of ConsciousnessMe lead to conclusions that threaten my worldview. Therefore, I refuse to take a stance." This is hardly arguing in good faith.

 
Re: The "hard problem of consciousness"
Okay, I think I know how to make this more palatable. The solipsistic model (in which ConsciousnessMe is "everything") is only one model among unimaginably many. So if you find it too strange, here are some other possibilities.

Preliminary: Let M be "everything". Note that M must contain ConsciousnessMe, by definition. Hence the solipsistic model is minimal.

Option 1 (Physicalism): M is a physical universe that closely resembles ConsciousnessMe. One difference is that Pluto may not actually exist in ConsciousnessMe (though science predicts observations of Pluto under the right circumstances), whereas it does exist in M. Another difference is that M has no special relationship with any one of its inhabitants.

Option 2 (The Matrix): M is a physical universe that obeys basic laws similar to those in ConsciousnessMe, but is otherwise quite different. ConsciousnessMe is a digital simulation within M.

Option 3 (The Clever Demons): M is populated with clever demons that enjoy manipulating ConsciousnessMe. M itself could be a version of hell.

Option 4 (???): M is utterly different from ConsciousnessMe, in ways that are impossible to fathom.

 

Offline watsisname

Re: The "hard problem of consciousness"
Any one of these descriptions could be correct.  None of them can be verified or falsified.

I want to emphasize the difference between model as "interpretation of the ultimate nature of reality" and model as "description of how observable phenomena function according to causal rules, with the purpose of having predictive and explanatory power over them".

We could live in a Matrix, and all of our science is still perfectly valid.  In the event that some aspect of the simulation changes, we'll make more observations and update the models to explain that change.  Science still functions.  Meanwhile, we still have no way of proving or disproving that we live in a Matrix.  Any bug or change in the system still looks like the "laws of nature".  Maybe black holes are just a bug.

Maybe the Moon was just a simulated ball of light until humans first landed and walked on its surface.  This is observationally indistinguishable from the model wherein it is an astrophysical object produced from a collision with Earth billions of years ago.  But the astrophysical model has wonderful explanatory power.  It fits within the framework of our understanding of the solar system.  The idea that the Moon was just a simulated ball of light until we landed on it has no explanatory power at all.  It has no motivation from prior knowledge, nor does it further our understanding of anything.  That is not a model in any scientific sense.  It is the antithesis of a model.

We cannot prove to you that "ConsciousnessMe is everything" is wrong.  But we can examine consciousness and formulate explanations for how it arises and operates with the scientific method.  I get the sense that you think these are mutually exclusive (they're not) and that they also have equal footing (they don't).  These may be the most difficult things to wrap your head around.
In my world of sleepers, everything will be erased.
I'll be your religion, your only endless ideal.
Slowly we crawl in the dark.
Swallowed by the seductive night.

 
Re: The "hard problem of consciousness"
Quote
Any one of these descriptions could be correct.  None of them can be verified or falsified.

I want to emphasize the difference between model as "interpretation of the ultimate nature of reality" and model as "description of how observable phenomena function according to causal rules, with the purpose of having predictive and explanatory power over them".

We could live in a Matrix, and all of our science is still perfectly valid.  In the event that some aspect of the simulation changes, we'll make more observations and update the models to explain that change.  Science still functions.  Meanwhile, we still have no way of proving or disproving that we live in a Matrix.  Any bug or change in the system still looks like the "laws of nature".  Maybe black holes are just a bug.

Maybe the Moon was just a simulated ball of light until humans first landed and walked on its surface.  This is observationally indistinguishable from the model wherein it is an astrophysical object produced from a collision with Earth billions of years ago.  But the astrophysical model has wonderful explanatory power.  It fits within the framework of our understanding of the solar system.  The idea that the Moon was just a simulated ball of light until we landed on it has no explanatory power at all.  It has no motivation from prior knowledge, nor does it further our understanding of anything.  That is not a model in any scientific sense.  It is the antithesis of a model.

Yes, well put!

We cannot prove to you that "ConsciousnessMe is everything" is wrong.  But we can examine consciousness and formulate explanations for how it arises and operates with the scientific method.  I get the sense that you think these are mutually exclusive (they're not) and that they also have equal footing (they don't).  These may be the most difficult things to wrap your head around.

Of the models I listed, solipsism and physicalism raise the fewest questions. Solipsism doesn't explain the existence of ConsciousnessMe, because explaining "why anything exists" is impossible. At the cost of economy, physicalism does explain the existence of ConsciousnessMe, in the sense that it describes ConsciousnessMe as an emergent property. (But like all non-solipsistic models, it just pushes "the impossible question" one level higher.)

Two more remarks about physicalism. First, it posits a perspective from which Bob and I are essentially the same; I consider this an attractive feature. Second, it seems that ConsciousnessMe would simply be "my real brain". I'm not sure whether this implies that everything is conscious in some sense, or that nothing is conscious.

 

Offline Nyctaeus

  • The Slavic Engineer
  • 212
  • My "FS Ships" folder is 582gb.
    • Minecraft
    • Exile
Re: The "hard problem of consciousness"
Wow guys...

Consciousness is nothing but the very basic foundation of the self. I mean, if a man had only consciousness, the only thing he would say is something like "I am" or "I exist"... Or rather, he would only be aware of his existence, but he would not be able to describe it with any language, because he would not know any. Our memory, emotions, character, etc. depend strictly on brain structure and genetics. Our brain and senses are the way that consciousness perceives the world around it.

The way we perceive everything around us is a matter of perspective. You should see what we experience when our pineal gland releases a dose of N,N-Dimethyltryptamine. The world is soooo colorful and so cool in that state :P. I mean natural DMT, not synthetic DMT or other drug crap. The pineal gland produces psychedelics in the delta stage of sleep to create dreams, and rarely in some other circumstances. The way we see or hear everything around us differs from person to person.

The world around us is not any kind of Matrix. The laws of physics are solid, science is valid... And I'm sure that the tree in front of me is real, and that everything is not some kind of illusion. Perceptions differ depending on brain state. When we are mad, sad, depressed, or any other crap, we see the world around us as gray, cold, and mostly uninteresting. When we are under all the good hormones like oxytocin or serotonin, we see everything as colorful, beautiful, and cool! Natural DMT gives the best result. I call that a "spiritual high" :P.

Solipsism is, in short, a very radical version of all that I mentioned above. A happy person would see the tree as colorful, green, and so on; a depressed person would see the tree as gray, ugly, etc., but it's still the same tree. And a dozen other people will see the same tree as well, but their experiences will differ from each other. I can see where the author of this view was coming from, but it's actually over-interpretation. The examples of the happy and the sad man are just the tip of the iceberg, because there are people in psychotic states and heavy drug users who would see the tree in some kind of abstract way... Or see a tentacle monster or some other crap instead. But as I said, those are drugs and psychoses of all kinds. We may debate why all of these people have hallucinations and a corrupted perspective, and how that really affects their perception of reality. I would rather say that psychosis is psychosis, and almost nothing such people say can be taken seriously.

The other thing is brainwaves and how they affect us. We have five known frequency bands:
- Gamma - 80-100 Hz - Most motor functions
- Beta - 12-28 Hz - Standard, casual brain activity
- Alpha - 8-13 Hz - Relaxation, calm, resting
- Theta - 4-7 Hz
- Delta - 0.5-3 Hz
While people usually operate in the beta or gamma bands most of the time, there are also the lower bands - theta and delta. Theta is present during the first and last stages of sleep, in hypnosis, and in meditation. Delta is the lowest known band, present during the deepest stage of sleep and long meditation. The way of perceiving reality there is actually completely different. During deep meditation, people experience no time, no space, no matter... And I guess no trees, but that's probably because a man in meditation that deep has his eyes closed :P.
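Purely as an illustrative aside: the band list above can be written as a small lookup table. This is a sketch using the approximate ranges quoted in this post (which leave gaps between some bands and differ somewhat from other sources); the names `EEG_BANDS` and `classify_band` are made up for the example, not any standard library:

```python
# Approximate EEG frequency bands as listed in the post above.
# Illustrative only: boundaries vary by source, and this list
# leaves gaps (e.g. 28-80 Hz is assigned to no band here).
EEG_BANDS = [
    ("delta", 0.5, 3.0),
    ("theta", 4.0, 7.0),
    ("alpha", 8.0, 13.0),
    ("beta", 12.0, 28.0),
    ("gamma", 80.0, 100.0),
]

def classify_band(freq_hz):
    """Return the first band whose range contains freq_hz, or None."""
    for name, low, high in EEG_BANDS:
        if low <= freq_hz <= high:
            return name
    return None

print(classify_band(10))   # alpha
print(classify_band(2))    # delta
print(classify_band(50))   # None (falls in the gap between beta and gamma)
```

Note that alpha and beta overlap slightly in the quoted ranges (12-13 Hz), so the lookup simply returns the first match.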

I don't know if consciousness is a direct product of brain activity, or whether it can exist without the brain on other principles. The only thing that I know about consciousness is the fact that I am the consciousness. I'm experiencing the world around me through my senses, and my brain structure, my emotions, my memories, and such are filters for the information that I'm perceiving. I'd like to think that my brain is some kind of quantum computer, as some new scientific theories say. I can't wait for science to understand and describe this phenomenon. Both solipsism and physicalism are true at some point, but that doesn't depend on the actual laws of physics or anything else. These terms were created because people used to think about everything in a bipolar, dualistic, and [very often] radical way. They are two ways of describing human perception.

Well... I can say that when I see the tree, it's definitely there, and I can say that after years of meditation :P
« Last Edit: February 18, 2016, 08:04:05 am by Betrayal »
Exile | Shadow Genesis | Inferno | Series Resurrecta  | DA Profile | P3D Profile

Proud owner of NyctiShipyards. Remember - Nyx will fix it!

All of my assets including models, textures, skyboxes, effects may be used under standard CC BY-NC 4.0 license.

 

Offline The E

Re: The "hard problem of consciousness"
Quote
Well... I can say that when I see the tree, it's definitely there, and I can say that after years of meditation :P

Wrong. It is impossible to prove that the reality you're experiencing is actually real and not the product of an elaborate simulation in a computer somewhere (see this for reference).

That being said, it's a hypothesis with very little bearing on day-to-day life unless evidence is discovered that exploiting the simulation is possible.

 

Offline Bobboau

  • Just a MODern kinda guy
    Just MODerately cool
    And MODest too
  • 213
Re: The "hard problem of consciousness"
No, the tree is real; it is just that your understanding of the nature of reality is possibly inaccurate. The simulation has an entity in it you can interact with: you can pick its fruit, you can lie in its leaves, you can chop it down. If the entire universe is simulated on some massively powerful supercomputer in another dimension, it still exists. When I type out the character "A", it still exists; just because that existence is defined by a pattern of electrical charges in a computer's memory doesn't make it unreal. When I submit this post and that pattern is transferred to a data packet, it still exists; when it gets stored in the database as a pattern of magnetic polarization, it still exists. If everything is simulated, you can still know things about that simulation, and learning that it is a simulation is within that purview.
Bobboau, bringing you products that work... in theory
learn to use PCS
creator of the ProXimus Procedural Texture and Effect Generator
My latest build of PCS2, get it while it's hot!
PCS 2.0.3


DEUTERONOMY 22:11
Thou shalt not wear a garment of diverse sorts, [as] of woollen and linen together

 

Offline watsisname

Re: The "hard problem of consciousness"
Quote

The way we perceive everything around us is a matter of perspective. You should see what we experience when our pineal gland releases a dose of N,N-Dimethyltryptamine. The world is soooo colorful and so cool in that state :P. I mean natural DMT, not synthetic DMT or other drug crap. The pineal gland produces psychedelics in the delta stage of sleep to create dreams, and rarely in some other circumstances. The way we see or hear everything around us differs from person to person.

The world around us is not any kind of Matrix. The laws of physics are solid, science is valid... And I'm sure that the tree in front of me is real, and that everything is not some kind of illusion. Perceptions differ depending on brain state. When we are mad, sad, depressed, or any other crap, we see the world around us as gray, cold, and mostly uninteresting. When we are under all the good hormones like oxytocin or serotonin, we see everything as colorful, beautiful, and cool! Natural DMT gives the best result. I call that a "spiritual high" :P.

NN-DMT is pretty neat. :)

I have also never found credible evidence that the tiny amounts of DMT present in the human body are produced in the pineal gland, or that this is responsible for our dreams or altered states of consciousness in any meaningful way.  If this were true then I'd expect someone to have found a pretty obvious change in people who have gone through pinealectomy.

And, yeah, what Bob and E said.

 
Re: The "hard problem of consciousness"
Apologies for necroposting yet again! I figured I'd keep the consciousness talk in one place; however, please feel free to split the thread.

I recently read three articles, all of them fascinating. (Biased as I am, the last two articles discuss the challenge that consciousness poses to science.) For convenience, I've linked and copy-pasted the articles. Bear in mind that I omitted some stuff when copy-pasting, e.g. footnotes and tangential material.

First, a collision between two of my favorite things: Peter Watts, marine biologist and author of "Blindsight", reviews deep-sea horror game "SOMA".
Spoiler:
“If there’s an afterlife, is my place taken? Is heaven full of people who would call me an imposter?”
— Simon Jarrett, upon realizing that he is a digitized copy.

Ever since the turn of the century I’ve had a— well, not a love/hate relationship with video games so much as a love/indifference one. I’ve worked on several game projects that never made it to market, wrote a tie-in novel for a game that did. Occasionally my work has inspired games I’ve had nothing to do with; the creators of Bioshock 2 and Torment: Tides of Numenera cite me as an influence, for example. There’s a vampire in The Witcher 3 named Sarasti. Eclipse Phase, the paper-based open-source role-playing game, names me in their references. And so on.

For one reason or another, I’ve never got around to actually playing any of these games. But a fan recently gifted me with a download of Frictional Games’ SOMA, whose creators also cite me as inspirational (alongside Greg Egan, China Miéville, and Philip K. Dick). And in the course of the occasional egosurf I’ve stumbled across various blogs and forums in which people have commented on the peculiar Wattsiness of this particular title. So what the hell, I figured; I needed something to write about this week, and it was either gonna be SOMA or my first acid trip.

Major Spoilers after the graphic, so stop reading if you’re still saving yourself for your own run at the game. (Although if you’re still doing that a solid year after its release, you’re even further behind the curve than I am.)

In SOMA you play Simon: a regular dude from 2015 Toronto, who— following a brain scan at the notoriously-disreputable York University— suddenly finds himself a hundred years in the future, just after a cometary impact has wiped out all life on the surface of the earth. Simon doesn’t have to worry about that, though— not in the short term, at least— because he’s not on the surface of the Earth. He’s stuck in a complex of derelict undersea habitats near a  geothermal vent, where (among other things) he is attacked by giant mutant viperfish and caught up in a story centering around the nature of consciousness. “I’d really like to know who thought sending a Canadian to the bottom of the sea was a good idea,” he blurts out at one point. “I miss Toronto. In Toronto I knew who I was.”

So yeah, I can see a certain Watts influence. Maybe even a bit of homage.

If I was feeling especially egotistical I could really push it. Those subway stations Simon cruises through on his way to York— not that far from where I used to live. His in-game buddy Catherine once mistakenly remarks that he comes from Vancouver, where I lived before that. Hell, if I wanted to pull out all the stops I could even point out that Jesus Christ’s Number Two Man (and the first of the popes) was called Simon Peter. Coincidence?

Yeah, probably. That last thing, anyway. Then again, any game whose major selling point was its Peter Watts references would be shooting for a pretty limited market. Fortunately, SOMA is more substantive. In fact, it may not be so much inspired by my writing (or Dick’s, or Egan’s, or Miéville’s) as we all are inspired by the same scary-cool stuff that underlies human existence. We’re all drinking from the same well, we all lie awake at night haunted by the same existential questions: how can meat possibly wake up? Where does subjective awareness come from? What is it like to be attacked by giant mutant viperfish at four thousand meters?

SOMA’s influences extend beyond the usual list of authors you’ll find online (or quoted at the top of this post, for that matter). The biocancer that infests and reshapes everything from people to anglerfish seems more than a little reminiscent of the Melding Plague in Alastair Reynolds’ Revelation Space, for example. And while Simon’s belated discovery that he’s basically a digitized brain scan riding a corpse in a suit of armor might seem lifted directly from the Nanosuit in my Crysis 2 tie-in novel, I lifted that idea in turn from Richard Morgan’s game script.

So much for the parts SOMA cannibalized.  How does it stitch them together?

For starters, the game is gorgeous to behold and insanely creepy to hear. The murk of the conshelf, the punctuated blackness of the abyss, the clanks and creaks of overstressed hull plating just this side of implosion keep you awestruck and on-edge in equal measure. Of course, these days that’s true for pretty much any game worth reviewing (Alien: Isolation comes to mind— you might almost describe SOMA as an undersea Alien: Isolation with a neurophilosophy filling). SOMA’s technology seems strangely antiquated for the 22nd century — flickering fluorescent light tubes, seventies-era video cameras, desktop computers that look significantly less advanced than the latest offerings down at Staples— but that’s also true for a lot of games these days. (Alien: Isolation gets a pass on that because it was honoring the aesthetic of the movie. The Deus Ex franchise, not so much.)

There’s not much of an interface to interfere with the view, no hit points or health icons cluttering up the edges of your display. You know you’ve been injured when your vision blurs and you can’t run any more.  You have no weapons to keep track of. The inventory option is a joke: for 90% of the game, you’re completely empty-handed except for a glorified door-opener to help you get around. It’s way more minimalist than most player interfaces, and the better for it.

Likewise, dialog options are pretty much nonexistent. Now and then you can choose to start a conversation, but from that point on you’re essentially listening to a radio play. I think Frictional made the right choice here, too. All those clunky dialog menus that pop up in Fallout or Mass Effect— those same four or five options offered up time after time, regardless of context (Really?  I want to ask Piper about our relationship now?)— offer just enough conversational flexibility to really drive home how little conversational flexibility you have. It’s one of the inherent weaknesses of computer games as an art form— game tech just isn’t advanced enough to improvise decent dialog on the fly.

SOMA cuts the player out of the loop entirely during the talky bits. The cost is that we lose the illusion of control (which is actually kind of meta if you think about it); the benefit is that we get richer dialog, deeper characters, shock and tantrums and emotional investment to go along with the thought experiment. Simon isn’t some empty vessel for the player to pour themselves into; he’s a living character in his own right.

I’ll grant that he’s not a very bright one. He mentions at one point that he used to work in a bookstore, but given how long it takes him to catch on to certain things I’m willing to bet that its SF and pop-science sections were pretty crappy.  Simon’s a nice guy, and I really felt for him— but if his home town was, in fact, a nod to my own, I can only hope the same cannot be said for his intellect.

On the other hand, who’s to say I’d be any quicker on the draw if I was the dusty photocopy of a long-dead brain, thrown headlong and without warning into Apocalypse? I don’t know if anyone would be firing on all synapses under those conditions; and the languid pace at which Simon clues in does provide a convenient opportunity to hammer home certain philosophical issues to which a lot of players won’t have given much prior thought.  The fact that Simon’s sidekick Catherine grows increasingly impatient with his “bull****”, with the fact that she has to keep repeating herself, suggests that this was a deliberate decision on Frictional’s part.

But if Simon’s a bit slow on the uptake, SOMA isn’t. Even the scenery is smart. Wandering the seabed, at depths ranging from a few hundred meters to four thousand, the fauna just looks right: spider crabs, rattails, tiny bioluminescent squid and tube worms and iridescent, gorgeous ctenophores (ctenophores! How many of you even know what those are?) Inside one of the habitats, a dead scientist’s lab notes remark upon the sighting of a Chauliodus (“viperfish” to you yokels): “Not usually found at this depth— anomaly”. I wet myself a little when I read that. Writing Starfish back in the nineties, I too had to grapple with the fact that viperfish don’t foray into the deep abyss. I had to come up with my own explanation for why they did so at Channer Vent.

Smart or dumb, though, the ocean floor is mere setting: SOMA’s story revolves around issues of consciousness. Frictional did their homework here too. Sure, there’s the usual throwaway stuff— one model of sapience-compatible drone is dubbed “Qualia-class”— but stuff like the Body Transfer and Rubber Hand Illusions aren’t just name-checked; they actively inform vital elements of the plot.  People come equipped with “black boxes” in their brains that can be forensically data-mined post-mortem. (This proves useful in figuring out SOMA’s backstory, an ingenious new twist on the usual Let’s find personal diaries lying around everywhere more commonly employed in such games.) Most of the lynchpin events in this story occur not to affect the course of the plot, but to make you think about its underlying themes.

By way of comparison, look to SOMA’s spiritual cousin, Bioshock. For all its explicit in-your-face references to Ayn-Randian ideology, Bioshock fails as analysis. (At best, its analysis amounts to Objectivism is bad because when capitalism runs amok, genetically-engineered nudibranchs will result in widespread insanity and the ability to shoot live bees out of your hands.) Andrew Ryan’s political beliefs serve as mere backdrop to the action, and as wallpaper rationale for the setting; but the events of the story could have just as easily gone down in a failed socialist utopia as a capitalist one. Bioshock was brilliant in the way it used the mechanics of game play to inform one of its themes (I’ve yet to see its equal in that regard), but that particular theme revolved around the existence of free will, with no substantive connection to Objectivist ideology. SOMA, in contrast, actually grapples with the issues it presents; it makes them part of the plot.

In fact, you could argue that SOMA is actually more rumination than game, an extended scenario that systematically builds a case towards an inevitable, nihilistic conclusion (two nihilistic conclusions actually, the second superficially brighter and happier than the first but actually way more depressing if you stop to think about it). If there’s a problem with this game, it’s that the story is so tight, the rumination so railbound, that it can’t afford to give the player much freedom for fear they’ll screw up the scenario. There’s really only one way to play SOMA. Discoveries and revelations have to happen in a specific order, conversations must proceed in a certain way. The obligatory monsters— justified as failed prototypes, built by an AI trying to create Humanity 2.0— don’t really do anything story-wise. You can’t kill them. You can’t talk to them. You can’t scavenge their carcasses for booty, or fashion a makeshift cannon from local leftovers and blow them away. Your interactive options consist exclusively of run and hide. SOMA’s monsters serve no real purpose except to creep you out, and slow your progress along a narrative monorail.

There are choices to be made— surprisingly affecting ones— but they don’t affect the outcome of the plot. Your reaction to the last surviving human— wasting away in some flickering half-lit locker at the bottom of the sea, IV needle festering in her arm, pictures of her beloved Greenland (gone now, along with everything else) scattered across the deck— who only wants to die. The repeated activation and interrogation of an increasingly panicky being who doesn’t know he’s digitized (although he sure as **** knows something‘s wrong), a being you simply discard once you have what you need from him. The treatment of your own earlier iterations, still inconveniently extant after your transcription into a new host. These powerful moments exist not so much to further the story as to inspire reflection upon a story already decided— and they might be missed entirely by a player with too much freedom, able to go where they will and when. It’s the age-old tension between sandbox and narrative, autonomy and storytelling. Frictional has sacrificed one for the other, so— as immersive as this game is— it’s bound to suck at replay value.

It’s easy enough to justify such creative decisions in principle; in practise, the result sometimes feels like a cheat. I spent half an hour tromping around the seabed looking for a particular item among the wreckage— a computer chip— that would spare me the need to kill a sapient drone for the same vital part. It would have been easy enough for Frictional to give me that option; they’d already littered the seabed with wrecked drones, it wouldn’t have killed them to leave me some usable salvage. But no. The only way forward was to slaughter an innocent being. It made the point, philosophically, but it felt wrong somehow. Forced.

This would normally be the point at which I ***** and moan about how, for all the “inspiration” game developers attribute to me, it would be really nice if they might someday be inspired to actually hire me instead of just mining my stories. It would be an utterly bull**** whinge—  I’ve admitted to gaming gigs in my past on this very post— but I’d make it anyway because, Hey: if one of your inspirations is sitting right there in the corner next to the potted philodendron, why not ask him for a dance? He might just teach you a couple of new steps.[1]

This time, though, I’m going to restrain myself. SOMA could not have been an easy assignment; I could ***** about the monorail gameplay constraints or the intermittent dimness of the protagonist, but given the limitations of the medium I don’t know that I could do any better without compromising mission priorities.  SOMA is a game in the straight-up survival-horror mode, but the horror is more existential than visceral. And those conventional mechanics serve the most substantive theme I’ve ever encountered in a video game.

Bottom line, I think they did a damn fine job.

[1] This metaphor is in no way meant to imply that I am any kind of dancer.  My most recent memories of dancing involve jumping wildly up and down and slapping my thighs in approximate time to Money for Nothing.

Next, while reviewing one of Scott Bakker's books, Edward Feser describes the "lump under the rug" fallacy. ("If everything else can be explained by science, why should consciousness be any different?")
Spoiler:
Bakker wonders why we are “so convinced that we are the sole exception, the one domain that can be theoretically cognized absent the prostheses of science.”  After all, other aspects of the natural world have been radically re-conceived by science.  So why do we tend to suppose that human nature is not subject to such radical re-conception -- for instance, to the kind of re-conception proposed by eliminativism?  Bakker’s answer is that we take ourselves to have a privileged epistemic access to ourselves that we don’t have to the rest of the world.  He then suggests that we should not regard this epistemic access as privileged, but merely different.

Now, elsewhere I have noted the fallaciousness of arguments to the effect that neuroscience has shown that our self-conception is radically mistaken.  For instance, in one of the posts on Rosenberg alluded to above, I respond to claims to the effect that “blindsight” phenomena and Libet’s free will experiments cast doubt on the reliability of introspection.  Here I want to focus on the presupposition of Bakker’s question, and on another kind of fallacious reasoning I’ve called attention to many times over the years.  The presupposition is that science really has falsified our commonsense understanding of the rest of the world, and the fallacy behind this presupposition is what I call the “lump under the rug” fallacy.

Suppose the wood floors of your house are filthy and that the dirt is pretty evenly spread throughout the house.  Suppose also that there is a rug in one of the hallways.  You thoroughly sweep out one of the bedrooms and form a nice little pile of dirt at the doorway.  It occurs to you that you could effectively “get rid” of this pile by sweeping it under the nearby rug in the hallway, so you do so.  The lump under the rug thereby formed is barely noticeable, so you are pleased.  You proceed to sweep the rest of the bedrooms, the bathroom, the kitchen, etc., and in each case you sweep the resulting piles under the same rug.  When you’re done, however, the lump under the rug has become quite large and something of an eyesore.  Someone asks you how you are going to get rid of it.  “Easy!” you answer.  “The same way I got rid of the dirt everywhere else!  After all, the ‘sweep it under the rug’ method has worked everywhere else in the house.  How could this little rug in the hallway be the one place where it wouldn’t work?  What are the odds of that?”

This answer, of course, is completely absurd.  Naturally, the same method will not work in this case, and it is precisely because it worked everywhere else that it cannot work in this case.  You can get rid of dirt outside the rug by sweeping it under the rug.  You cannot get rid of the dirt under the rug by sweeping it under the rug.  You will only make a fool of yourself if you try, especially if you confidently insist that the method must work here because it has worked so well elsewhere.

Now, the “Science has explained everything else, so how could the human mind be the one exception?” move is, of course, standard scientistic and materialist shtick.  But it is no less fallacious than our imagined “lump under the rug” argument.

Here’s why.  Keep in mind that Descartes, Newton, and the other founders of modern science essentially stipulated that nothing that would not fit their exclusively quantitative or “mathematicized” conception of matter would be allowed to count as part of a “scientific” explanation.  Now to common sense, the world is filled with irreducibly qualitative features -- colors, sounds, odors, tastes, heat and cold -- and with purposes and meanings.  None of this can be analyzed in quantitative terms.  To be sure, you can re-define color in terms of a surface’s reflection of light of certain wavelengths, sound in terms of compression waves, heat and cold in terms of molecular motion, etc.  But that doesn’t capture what common sense means by color, sound, heat, cold, etc. -- the way red looks, the way an explosion sounds, the way heat feels, etc.  So, Descartes and Co. decided to treat these irreducibly qualitative features as projections of the mind.  The redness we see in a “Stop” sign, as common sense understands redness, does not actually exist in the sign itself but only as the quale of our conscious visual experience of the sign; the heat we attribute to the bathwater, as common sense understands heat, does not exist in the water itself but only in the “raw feel” that the high mean molecular kinetic energy of the water causes us to experience; meanings and purposes do not exist in external material objects but only in our minds, and we project these onto the world; and so forth.  Objectively there are only colorless, odorless, soundless, tasteless, meaningless particles in fields of force.

In short, the scientific method “explains everything else” in the world in something like the way the “sweep it under the rug” method gets rid of dirt -- by taking the irreducibly qualitative and teleological features of the world, which don’t fit the quantitative methods of science, and sweeping them under the rug of the mind.  And just as the literal “sweep it under the rug” method generates under the rug a bigger and bigger pile of dirt which cannot in principle be gotten rid of using the “sweep it under the rug” method, so too does modern science’s method of treating irreducibly qualitative, semantic, and teleological features as mere projections of the mind generate in the mind a bigger and bigger “pile” of features which cannot be explained using the same method.

This is the reason the qualia problem, the problem of intentionality, and other philosophical problems touching on human nature are so intractable.  Indeed, it is one reason many post-Cartesian philosophers have thought dualism unavoidable.  If you define “material” in such a way that irreducibly qualitative, semantic, and teleological features are excluded from matter, but also say that these features exist in the mind, then you have thereby made of the mind something immaterial.  Thus, Cartesian dualism was not some desperate rearguard action against the advance of modern science; on the contrary, it was the inevitable consequence of modern science (or, more precisely, the inevitable consequence of regarding modern science as giving us an exhaustive account of matter).

So, like the floor sweeper who is stuck with a “dualism” of dirt-free floors and a lump of dirt under the rug, those who suppose that the scientific picture of matter is an exhaustive picture are stuck with a dualism of, on the one hand, a material world entirely free of irreducibly qualitative, semantic, or teleological features, and on the other hand a mental realm defined by its possession of irreducibly qualitative, semantic, and teleological features.  The only way to avoid this dualism would be to deny that the latter realm is real -- that is to say, to take an eliminativist position.  But as I have said, there is no coherent way to take such a position.  The eliminativist who insists that intentionality is an illusion -- where illusion is, of course, an intentional notion (and where no eliminativist has been able to come up with a non-intentional substitute for it) -- is like the yutz sweeping the dirt that is under the rug back under the rug while insisting that he is thereby getting rid of the dirt under the rug.

Finally, Sam Harris on the weirdness of consciousness.

Part one.
Spoiler:
You are not aware of the electrochemical events occurring at each of the trillion synapses in your brain at this moment. But you are aware, however dimly, of sights, sounds, sensations, thoughts, and moods. At the level of your experience, you are not a body of cells, organelles, and atoms; you are consciousness and its ever-changing contents, passing through various stages of wakefulness and sleep, and from cradle to grave.

The term “consciousness” is notoriously difficult to define. Consequently, many a debate about its character has been waged without the participants’ finding even a common topic as common ground. By “consciousness,” I mean simply “sentience,” in the most unadorned sense. To use the philosopher Thomas Nagel’s construction: A creature is conscious if there is “something that it is like” to be this creature; an event is consciously perceived if there is “something that it is like” to perceive it. ⁠Whatever else consciousness may or may not be in physical terms, the difference between it and unconsciousness is first and foremost a matter of subjective experience. Either the lights are on, or they are not.

To say that a creature is conscious, therefore, is not to say anything about its behavior; no screams need be heard, or wincing seen, for a person to be in pain. Behavior and verbal report are fully separable from the fact of consciousness: We can find examples of both without consciousness (a primitive robot) and consciousness without either (a person suffering “locked-in syndrome”).

It is surely a sign of our intellectual progress that a discussion of consciousness no longer has to begin with a debate about its existence. To say that consciousness may only seem to exist is to admit its existence in full—for if things seem any way at all, that is consciousness. Even if I happen to be a brain in a vat at this moment—all my memories are false; all my perceptions are of a world that does not exist—the fact that I am having an experience is indisputable (to me, at least).  This is all that is required for me (or any other conscious being) to fully establish the reality of consciousness. Consciousness is the one thing in this universe that cannot be an illusion.

As our understanding of the physical world has evolved, our notion of what counts as “physical” has broadened considerably. A world teeming with fields and forces, vacuum fluctuations, and the other gossamer spawn of modern physics is not the physical world of common sense. In fact, our common sense seems to be stuck somewhere in the 16th century. We have also generally forgotten that many of the patriarchs of physics in the first half of the 20th century regularly impugned the “physicality” of the universe. Nonreductive views like those of Eddington, Jeans, Pauli, Heisenberg, and Schrödinger seem to have had no lasting impact. In some ways we can be thankful for this, for a fair amount of mumbo jumbo was in the air. Wolfgang Pauli, for instance, though one of the titans of modern physics, was also a devotee of Carl Jung, who apparently analyzed no fewer than 1,300 of the great man’s dreams. Pauli’s thoughts about the irreducibility of mind seem to have had as much to do with Jung’s least credible ideas as with quantum mechanics.

Such numinous influences eventually subsided. And once physicists got down to the serious business of building bombs, we were apparently returned to a universe of objects—and to a style of discourse, across all branches of science and philosophy, that made the mind seem ripe for reduction to the “physical” world.

The problem, however, is that no evidence for consciousness exists in the physical world. Physical events are simply mute as to whether it is “like something” to be what they are. The only thing in this universe that attests to the existence of consciousness is consciousness itself; the only clue to subjectivity, as such, is subjectivity. Absolutely nothing about a brain, when surveyed as a physical system, suggests that it is a locus of experience. Were we not already brimming with consciousness ourselves, we would find no evidence of it in the physical universe—nor would we have any notion of the many experiential states that it gives rise to. The painfulness of pain, for instance, puts in an appearance only in consciousness. And no description of C-fibers or pain-avoiding behavior will bring the subjective reality into view.

If we look for consciousness in the physical world, all we find are increasingly complex systems giving rise to increasingly complex behavior—which may or may not be attended by consciousness.  The fact that the behavior of our fellow human beings persuades us that they are (more or less) conscious does not get us any closer to linking consciousness to physical events.  Is a starfish conscious? A scientific account of the emergence of consciousness would answer this question. And it seems clear that we will not make any progress by drawing analogies between starfish behavior and our own. It is only in the presence of animals sufficiently like ourselves that our intuitions about (and attributions of) consciousness begin to crystallize. Is there “something that it is like” to be a cocker spaniel? Does it feel its pains and pleasures? Surely it must. How do we know? Behavior, analogy, parsimony.

Most scientists are confident that consciousness emerges from unconscious complexity. We have compelling reasons for believing this, because the only signs of consciousness we see in the universe are found in evolved organisms like ourselves. Nevertheless, this notion of emergence strikes me as nothing more than a restatement of a miracle. To say that consciousness emerged at some point in the evolution of life doesn’t give us an inkling of how it could emerge from unconscious processes, even in principle.

I believe that this notion of emergence is incomprehensible—rather like a naive conception of the big bang. The idea that everything (matter, space-time, their antecedent causes, and the very laws that govern their emergence) simply sprang into being out of nothing seems worse than a paradox. “Nothing,” after all, is precisely that which cannot give rise to “anything,” let alone “everything.” Many physicists realize this, of course. Fred Hoyle, who coined “big bang” as a term of derogation, is famous for opposing this creation myth on philosophical grounds, because such an event seems to require a “preexisting space and time.” In a similar vein, Stephen Hawking has said that the notion that the universe had a beginning is incoherent, because something can begin only with reference to time, and here we are talking about the beginning of space-time itself. He pictures space-time as a four-dimensional closed manifold, without beginning or end—much like the surface of a sphere.

Naturally, it all depends on how one defines “nothing.” The physicist Lawrence Krauss has written a wonderful book arguing that the universe does indeed emerge from nothing. But in the present context, I am imagining a nothing that is emptier still—a condition without antecedent laws of physics or anything else. It might still be true that the laws of physics themselves sprang out of nothing in this sense, and the universe along with them—and Krauss says as much. Perhaps that is precisely what happened. I am simply claiming that this is not an explanation of how the universe came into being. To say “Everything came out of nothing” is to assert a brute fact that defies our most basic intuitions of cause and effect—a miracle, in other words.

Likewise, the idea that consciousness is identical to (or emerged from) unconscious physical events is, I would argue, impossible to properly conceive—which is to say that we can think we are thinking it, but we are mistaken. We can say the right words, of course—“consciousness emerges from unconscious information processing.” We can also say “Some squares are as round as circles” and “2 plus 2 equals 7.” But are we really thinking these things all the way through? I don’t think so.

Consciousness—the sheer fact that this universe is illuminated by sentience—is precisely what unconsciousness is not. And I believe that no description of unconscious complexity will fully account for it. It seems to me that just as “something” and “nothing,” however juxtaposed, can do no explanatory work, an analysis of purely physical processes will never yield a picture of consciousness. However, this is not to say that some other thesis about consciousness must be true. Consciousness may very well be the lawful product of unconscious information processing. But I don’t know what that sentence means—and I don’t think anyone else does either.

Part two.
Spoiler:
The universe is filled with physical phenomena that appear devoid of consciousness. From the birth of stars and planets, to the early stages of cell division in a human embryo, the structures and processes we find in Nature seem to lack an inner life. At some point in the development of certain complex organisms, however, consciousness emerges. This miracle does not depend on a change of materials—for you and I are built of the same atoms as a fern or a ham sandwich. Rather, it must be a matter of organization. Arranging atoms in a certain way appears to bring consciousness into being. And this fact is among the deepest mysteries given to us to contemplate.

Many readers of my previous essay did not understand why the emergence of consciousness should pose a special problem to science. Every feature of the human mind and body emerges over the course of development: Why is consciousness more perplexing than language or digestion? The problem, however, is that the distance between unconsciousness and consciousness must be traversed in a single stride, if traversed at all. Just as the appearance of something out of nothing cannot be explained by our saying that the first something was “very small,” the birth of consciousness is rendered no less mysterious by saying that the simplest minds have only a glimmer of it.

This situation has been characterized as an “explanatory gap” and the “hard problem of consciousness,” and it is surely both. I am sympathetic with those who, like the philosopher Colin McGinn and the psychologist Steven Pinker, have judged the impasse to be total: Perhaps the emergence of consciousness is simply incomprehensible in human terms. Every chain of explanation must end somewhere—generally with a brute fact that neglects to explain itself. Consciousness might represent a terminus of this sort. Defying analysis, the mystery of inner life may one day cease to trouble us.

However, many people imagine that consciousness will yield to scientific inquiry in precisely the way that other difficult problems have in the past. What, for instance, is the difference between a living system and a dead one? Insofar as the question of consciousness itself can be kept off the table, it seems that the difference is now reasonably clear to us. And yet, as late as 1932, the Scottish physiologist J.S. Haldane (father of J.B.S. Haldane) wrote:

"What intelligible account can the mechanistic theory of life give of the…recovery from disease and injuries? Simply none at all, except that these phenomena are so complex and strange that as yet we cannot understand them. It is exactly the same with the closely related phenomena of reproduction. We cannot by any stretch of the imagination conceive a delicate and complex mechanism which is capable, like a living organism, of reproducing itself indefinitely often."

Scarcely twenty years passed before our imaginations were duly stretched. Much work in biology remains to be done, of course, but anyone who entertains vitalism at this point stands convicted of basic ignorance about the nature of living systems. The jury is no longer out on questions of this sort, and more than half a century has passed since the earth’s creatures required an élan vital to propagate themselves or to recover from injury. Are doubts that we will arrive at a physical explanation of consciousness analogous to doubts about the feasibility of explaining life in terms of processes that are not alive?

The analogy is a bad one: Life is defined according to external criteria; consciousness is not (and, I think, cannot be). We would never have occasion to say of something that does not eat, excrete, grow, or reproduce that it might nevertheless be “alive.” It might, however, be conscious.

But other analogies seem to offer hope. Consider our sense of sight: Doesn’t vision emerge from processes that are themselves blind? And doesn’t such a miracle of emergence make consciousness seem less mysterious?

Unfortunately, no. In the case of vision, we are speaking merely about the transduction of one form of energy into another (electromagnetic into electrochemical). Photons cause light-sensitive proteins to alter the spontaneous firing rates of our rods and cones, beginning an electrochemical cascade that affects neurons in many areas of the brain—achieving, among other things, a topographical mapping of the visual scene onto the visual cortex. While this chain of events is complicated, the fact of its occurrence is not in principle mysterious. The emergence of vision from a blind apparatus strikes us as a difficult problem simply because when we think of vision, we think of the conscious experience of seeing. That eyes and visual cortices emerged over the course of evolution presents no special obstacles to us; that there should be “something that it is like” to be the union of an eye and a visual cortex is itself the problem of consciousness—and it is as intractable in this form as in any other.

But couldn’t a mature neuroscience nevertheless offer a proper explanation of human consciousness in terms of its underlying brain processes? We have reasons to believe that reductions of this sort are neither possible nor conceptually coherent. Nothing about a brain, studied at any scale (spatial or temporal), even suggests that it might harbor consciousness. Nothing about human behavior, or language, or culture, demonstrates that these products are mediated by subjectivity. We simply know that they are—a fact that we appreciate in ourselves directly and in others by analogy.

Here is where the distinction between studying consciousness and studying its contents becomes paramount. It is easy to see how the contents of consciousness might be understood at the level of the brain. Consider, for instance, our experience of seeing an object—its color, contours, apparent motion, location in space, etc. arise in consciousness as a seamless unity, even though this information is processed by many separate systems in the brain. Thus when a golfer prepares to hit a shot, he does not first see the ball’s roundness, then its whiteness, and only then its position on the tee. Rather, he enjoys a unified perception of a ball. Many neuroscientists believe that this phenomenon of “binding” can be explained by disparate groups of neurons firing in synchrony. Whether or not this theory is true, it is perfectly intelligible—and it suggests, as many other findings in neuroscience do, that the character of our experience can often be explained in terms of its underlying neurophysiology. However, when we ask why it should be “like something” to see in the first place, we are returned to the mystery of consciousness in full.

For these reasons, it is difficult to imagine what experimental findings could render the emergence of consciousness comprehensible. This is not to say, however, that our understanding of ourselves won’t change in surprising ways through our study of the brain. There seems to be no limit to how a maturing neuroscience might reshape our beliefs about the nature of conscious experience. Are we fully conscious during sleep and merely failing to form memories? Can human minds be duplicated or merged? Is it possible to love your neighbor as yourself? A precise, functional neuroanatomy of our mental states would help to answer such questions—and the answers might well surprise us. And yet, whatever insights arise from correlating mental and physical events, it seems unlikely that one side of the world will be fully reduced to the other.

While we know many things about ourselves in anatomical, physiological, and evolutionary terms, we do not know why it is “like something” to be what we are. The fact that the universe is illuminated where you stand—that your thoughts and moods and sensations have a qualitative character—is a mystery, exceeded only by the mystery that there should be something rather than nothing in this universe. How is it that unconscious events can give rise to consciousness? Not only do we have no idea, but it seems impossible to imagine what sort of idea could fit in the space provided. Therefore, although science may ultimately show us how to truly maximize human well-being, it may still fail to dispel the fundamental mystery of our mental life. That doesn’t leave much scope for conventional religious doctrines, but it does offer a deep foundation (and motivation) for introspection. Many truths about ourselves will be discovered in consciousness directly, or not discovered at all.

 

Offline Luis Dias

  • 211
Re: The "hard problem of consciousness"
Thank you, those were great!

 

Offline The E

  • He's Ebeneezer Goode
  • 213
  • Nothing personal, just tech support.
    • Steam
    • Twitter
Re: The "hard problem of consciousness"
I laughed out loud at this:

Quote
This is the reason the qualia problem, the problem of intentionality, and other philosophical problems touching on human nature are so intractable.  Indeed, it is one reason many post-Cartesian philosophers have thought dualism unavoidable.  If you define “material” in such a way that irreducibly qualitative, semantic, and teleological features are excluded from matter, but also say that these features exist in the mind, then you have thereby made of the mind something immaterial.  Thus, Cartesian dualism was not some desperate rearguard action against the advance of modern science; on the contrary, it was the inevitable consequence of modern science (or, more precisely, the inevitable consequence of regarding modern science as giving us an exhaustive account of matter).

This is quite a hilarious (and somewhat desperate) rearguard action to portray dualism as something necessary when it really isn't.

More hilarious wrongness:
Quote
However, many people imagine that consciousness will yield to scientific inquiry in precisely the way that other difficult problems have in the past. What, for instance, is the difference between a living system and a dead one? Insofar as the question of consciousness itself can be kept off the table, it seems that the difference is now reasonably clear to us. And yet, as late as 1932, the Scottish physiologist J.S. Haldane (father of J.B.S. Haldane) wrote:

"What intelligible account can the mechanistic theory of life give of the…recovery from disease and injuries? Simply none at all, except that these phenomena are so complex and strange that as yet we cannot understand them. It is exactly the same with the closely related phenomena of reproduction. We cannot by any stretch of the imagination conceive a delicate and complex mechanism which is capable, like a living organism, of reproducing itself indefinitely often."

There are quite a few misconceptions in this. I wonder if you can spot them.

Yeah, Ghyl, sorry to say, but dualism is still very thoroughly dead. Or rather, nothing in what you quoted here makes a compelling case that dualism is necessary.
If I'm just aching this can't go on
I came from chasing dreams to feel alone
There must be changes, miss to feel strong
I really need life to touch me
--Evergrey, Where August Mourns

 
Re: The "hard problem of consciousness"
I laughed out loud at this:

Quote
This is the reason the qualia problem, the problem of intentionality, and other philosophical problems touching on human nature are so intractable.  Indeed, it is one reason many post-Cartesian philosophers have thought dualism unavoidable.  If you define “material” in such a way that irreducibly qualitative, semantic, and teleological features are excluded from matter, but also say that these features exist in the mind, then you have thereby made of the mind something immaterial.  Thus, Cartesian dualism was not some desperate rearguard action against the advance of modern science; on the contrary, it was the inevitable consequence of modern science (or, more precisely, the inevitable consequence of regarding modern science as giving us an exhaustive account of matter).

This is quite a hilarious (and somewhat desperate) rearguard action to portray dualism as something necessary when it really isn't.

The quoted paragraph is Feser's conclusion. His reasoning is in a previous paragraph:

Quote
Here’s why.  Keep in mind that Descartes, Newton, and the other founders of modern science essentially stipulated that nothing that would not fit their exclusively quantitative or “mathematicized” conception of matter would be allowed to count as part of a “scientific” explanation.  Now to common sense, the world is filled with irreducibly qualitative features -- colors, sounds, odors, tastes, heat and cold -- and with purposes and meanings.  None of this can be analyzed in quantitative terms.  To be sure, you can re-define color in terms of a surface’s reflection of light of certain wavelengths, sound in terms of compression waves, heat and cold in terms of molecular motion, etc.  But that doesn’t capture what common sense means by color, sound, heat, cold, etc. -- the way red looks, the way an explosion sounds, the way heat feels, etc.  So, Descartes and Co. decided to treat these irreducibly qualitative features as projections of the mind.  The redness we see in a “Stop” sign, as common sense understands redness, does not actually exist in the sign itself but only as the quale of our conscious visual experience of the sign; the heat we attribute to the bathwater, as common sense understands heat, does not exist in the water itself but only in the “raw feel” that the high mean molecular kinetic energy of the water causes us to experience; meanings and purposes do not exist in external material objects but only in our minds, and we project these onto the world; and so forth.  Objectively there are only colorless, odorless, soundless, tasteless, meaningless particles in fields of force.

As a side note, I think Feser is referencing Newton's work in optics. Newton wrote about the mechanisms of vision (light, the retina, the optic nerve, etc.), but purposely avoided the experience of vision:

Quote
But, to determine more absolutely, what Light is, after what manner refracted, and by what modes or actions it produceth in our minds the Phantasms of Colours, is not so easie.

More hilarious wrongness:
Quote
However, many people imagine that consciousness will yield to scientific inquiry in precisely the way that other difficult problems have in the past. What, for instance, is the difference between a living system and a dead one? Insofar as the question of consciousness itself can be kept off the table, it seems that the difference is now reasonably clear to us. And yet, as late as 1932, the Scottish physiologist J.S. Haldane (father of J.B.S. Haldane) wrote:

"What intelligible account can the mechanistic theory of life give of the…recovery from disease and injuries? Simply none at all, except that these phenomena are so complex and strange that as yet we cannot understand them. It is exactly the same with the closely related phenomena of reproduction. We cannot by any stretch of the imagination conceive a delicate and complex mechanism which is capable, like a living organism, of reproducing itself indefinitely often."

There are quite a few misconceptions in this. I wonder if you can spot them.

You realize that Harris is describing a triumph of science, right? Vitalists believed that science could never explain life, but they were wrong. Harris is comparing vitalism with dualism.

 

Offline The E
Re: The "hard problem of consciousness"
The quoted paragraph is Feser's conclusion. His reasoning is in a previous paragraph:

Quote
...snipped...

As a side note, I think Feser is referencing Newton's work in optics. Newton wrote about the mechanisms of vision (light, the retina, the optic nerve, etc.), but purposely avoided the experience of vision:

Quote
But, to determine more absolutely, what Light is, after what manner refracted, and by what modes or actions it produceth in our minds the Phantasms of Colours, is not so easie.

And that reasoning is bad.

It seems to me that he's saying that there can be no scientific definition of what "heat" or "red" or stuff like that means because we infuse those terms with meaning beyond mere physical attributes, and that this in turn means that there can be no scientific definition of "consciousness".

That's just plain stupid. We can define terms scientifically. We can measure the impact sensory perceptions have on the brain. There is no magic point at which a water temperature of 40+ degrees C suddenly turns into "Warm" and is thus imbued with metaphysical aspects. The point here is that, again, none of these writings make a clear point that dualism is a necessary hypothesis without which we cannot explain consciousness. They assign undue meaning to what is to my mind just the brain adding metadata to sensory perceptions based on previous experiences.


More hilarious wrongness:
Quote
However, many people imagine that consciousness will yield to scientific inquiry in precisely the way that other difficult problems have in the past. What, for instance, is the difference between a living system and a dead one? Insofar as the question of consciousness itself can be kept off the table, it seems that the difference is now reasonably clear to us. And yet, as late as 1932, the Scottish physiologist J.S. Haldane (father of J.B.S. Haldane) wrote:

"What intelligible account can the mechanistic theory of life give of the…recovery from disease and injuries? Simply none at all, except that these phenomena are so complex and strange that as yet we cannot understand them. It is exactly the same with the closely related phenomena of reproduction. We cannot by any stretch of the imagination conceive a delicate and complex mechanism which is capable, like a living organism, of reproducing itself indefinitely often."

There are quite a few misconceptions in this. I wonder if you can spot them.

You realize that Harris is describing a triumph of science, right? Vitalists believed that science could never explain life, but they were wrong. Harris is comparing vitalism with dualism.

I do. I also realize that he's utterly wrong here:
Quote
But couldn’t a mature neuroscience nevertheless offer a proper explanation of human consciousness in terms of its underlying brain processes? We have reasons to believe that reductions of this sort are neither possible nor conceptually coherent. Nothing about a brain, studied at any scale (spatial or temporal), even suggests that it might harbor consciousness. Nothing about human behavior, or language, or culture, demonstrates that these products are mediated by subjectivity. We simply know that they are—a fact that we appreciate in ourselves directly and in others by analogy.

 
Re: The "hard problem of consciousness"
It seems to me that he's saying that there can be no scientific definition of what "heat" or "red" or stuff like that means because we infuse those terms with meaning beyond mere physical attributes, and that this in turn means that there can be no scientific definition of "consciousness".

That's just plain stupid. We can define terms scientifically. We can measure the impact sensory perceptions have on the brain. There is no magic point at which a water temperature of 40+ degrees C suddenly turns into "Warm" and is thus imbued with metaphysical aspects. The point here is that, again, none of these writings make a clear point that dualism is a necessary hypothesis without which we cannot explain consciousness. They assign undue meaning to what is to my mind just the brain adding metadata to sensory perceptions based on previous experiences.

It's necessary to clarify what you mean by "red". If you define it "in terms of a surface’s reflection of light of certain wavelengths", then you're speaking objectively, using the language of science. Your experience of redness, on the other hand, is a subjective phenomenon. You know it exists, but you have no way of comparing it with other people's experiences of redness. (In fact - going down the rabbit holes of solipsism and the simulation hypothesis - you can't even verify that other people have experiences of redness.) Because this particular aspect of redness cannot be analyzed or verified objectively, science sweeps it under "the rug of the mind", along with other subjective phenomena.
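To make the objective half of that distinction concrete, here is a minimal sketch of what "defining red in terms of wavelengths" amounts to. The function name and the exact band boundaries are my own illustrative choices (the ranges are rough, commonly cited conventions for the visible spectrum, not precise physics): science can specify this mapping completely, yet it says nothing about what red looks like.

```python
# A purely "objective" definition of color: map a wavelength in
# nanometers to a conventional color name. The band boundaries are
# rough, commonly cited ranges, chosen here only for illustration.
def color_name(wavelength_nm: float) -> str:
    bands = [
        (380, 450, "violet"),
        (450, 495, "blue"),
        (495, 570, "green"),
        (570, 590, "yellow"),
        (590, 620, "orange"),
        (620, 750, "red"),
    ]
    for low, high, name in bands:
        if low <= wavelength_nm < high:
            return name
    return "outside the visible spectrum"

print(color_name(650))  # a 650 nm stimulus gets the label "red"
```

The point of the sketch is that the entire mapping is third-person checkable with a spectrometer, while the quale the label attaches to is not.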

I also realize that he's utterly wrong here:
Quote
But couldn’t a mature neuroscience nevertheless offer a proper explanation of human consciousness in terms of its underlying brain processes? We have reasons to believe that reductions of this sort are neither possible nor conceptually coherent. Nothing about a brain, studied at any scale (spatial or temporal), even suggests that it might harbor consciousness. Nothing about human behavior, or language, or culture, demonstrates that these products are mediated by subjectivity. We simply know that they are—a fact that we appreciate in ourselves directly and in others by analogy.

See above. You can only assume that the subjective aspect of redness exists in other people by analogy. And when you consider other lifeforms, even analogy breaks down. From part one of Harris' article:

Quote
If we look for consciousness in the physical world, all we find are increasingly complex systems giving rise to increasingly complex behavior—which may or may not be attended by consciousness.  The fact that the behavior of our fellow human beings persuades us that they are (more or less) conscious does not get us any closer to linking consciousness to physical events.  Is a starfish conscious? A scientific account of the emergence of consciousness would answer this question. And it seems clear that we will not make any progress by drawing analogies between starfish behavior and our own. It is only in the presence of animals sufficiently like ourselves that our intuitions about (and attributions of) consciousness begin to crystallize. Is there “something that it is like” to be a cocker spaniel? Does it feel its pains and pleasures? Surely it must. How do we know? Behavior, analogy, parsimony.

To be clear, I don't think Harris is a dualist. I think his view is epistemological. Consciousness isn't necessarily special in a cosmic sense, but it might lie beyond the grasp of the scientific method. (Like, for instance, questions about the nature of reality.)

Quote
Perhaps the emergence of consciousness is simply incomprehensible in human terms. Every chain of explanation must end somewhere—generally with a brute fact that neglects to explain itself. Consciousness might represent a terminus of this sort.

 

Offline Mikes

  • 29
Re: The "hard problem of consciousness"
It's necessary to clarify what you mean by "red". If you define it "in terms of a surface’s reflection of light of certain wavelengths", then you're speaking objectively, using the language of science. Your experience of redness, on the other hand, is a subjective phenomenon. You know it exists, but you have no way of comparing it with other people's experiences of redness. (In fact - going down the rabbit holes of solipsism and the simulation hypothesis - you can't even verify that other people have experiences of redness.) Because this particular aspect of redness cannot be analyzed or verified objectively, science sweeps it under "the rug of the mind", along with other subjective phenomena.

/sigh "Red" is nothing more than "learned behavior".
Wipe out the collective memories of the human race and start fresh and the label "red" becomes meaningless.
But the phenomenon being described as red will still exist. Maybe it will get a different name, like "rot" or "blau" or "black", when people start communicating with each other again and decide to give a name to what they perceive.

Language is also learned behavior. Now try enjoying your consciousness for a while without the use of language. Try having some conscious thoughts without the use of language... Notice something? Now make an informed guess about what that tells us about the nature of consciousness.
« Last Edit: December 27, 2017, 04:34:37 am by Mikes »

 

Offline Luis Dias
Re: The "hard problem of consciousness"
I do. I also realize that he's utterly wrong here:
Quote
But couldn’t a mature neuroscience nevertheless offer a proper explanation of human consciousness in terms of its underlying brain processes? We have reasons to believe that reductions of this sort are neither possible nor conceptually coherent. Nothing about a brain, studied at any scale (spatial or temporal), even suggests that it might harbor consciousness. Nothing about human behavior, or language, or culture, demonstrates that these products are mediated by subjectivity. We simply know that they are—a fact that we appreciate in ourselves directly and in others by analogy.

I had previously commented on this thread, but my phone went sour on me and deleted everything and I gave up. I don't even remember what I was going to say.

Nevertheless, let me jump in here. What Harris is mentioning here is something that was also detected by Dennett (despite himself), who cleverly dubbed it the zimbo problem. There is nothing we can scientifically detect about a being that wouldn't be exactly the same in a being without consciousness. That is to say, we can posit beings that are just like humans in every conceivable way, except that they are not conscious like we are. Such beings would be a bit like zombies, the sort of Chinese room monsters Peter Watts is obsessed with. We may well define and know all about their inner machinations, but there is seemingly no point in that analysis that can point toward the very experience we all have while living our own lives, so such beings would be totally indistinguishable from us human beings.

And yet we wouldn't even call such beings alive if we knew the "lights were off" inside their brains, and that all that was going on was a bunch of hacks on top of hacks (Chinese rooms).

The problem isn't even whether consciousness exists in such zimbo brains, but the fundamental epistemological inability of science to ever detect it at all.

This isn't usually a problem, because we just assume that brains harbor consciousness, period. But within a hundred years or so we will bring highly intelligent artificial brains into the world. We should know whether we're also bringing some form of consciousness to life through some emergent quality we just don't understand, because if we are, then we run the risk of committing incredible crimes against these new forms of life. There's also the opposite problem: being deceived into thinking such artificial brains *do* have consciousness when they don't, with its own terrible unintended consequences.

 

Offline The E
Re: The "hard problem of consciousness"
But do Zimbos exist in reality?

 

Offline Luis Dias
Re: The "hard problem of consciousness"
Well, that depends on what kind of language game we're using, doesn't it? Harris's point is that the language of science can only talk about zimbos. It cannot really talk about people.