Sunday 31 October 2021

If IIT is true, IIT is false (The Unfolded-Tononi Paradox)

A lot has been written about the unfolding argument against Integrated Information Theory. Herzog et al.’s recent article provides a much needed summary of the debate, as well as responses to the criticisms this argument has raised. 
 

In their section on “Dissociative Epiphenomenalism”, Herzog et al. note the following consequence of the unfolding argument for IIT:


For example, we may wire this robot to always experience the smell of coffee, independently of what it claims to perceive. The robot may perform complex tasks and report about them, but, according to IIT, consciously never experiences anything other than the smell of coffee. Alternatively, we can wire up the robot so that conscious percepts permanently change, completely independently from the robot’s behavior (for example, the robot may always report that it is smelling coffee, but according to IIT its experiences are ever-changing). … Since IIT’s consciousness is truly dissociated from [Input/Output] functions, we may all be such creatures. We may in truth all be perpetually experiencing the smell of coffee but be unable to express it, because neither the level nor the content of consciousness has any impact on reports or any other third-person data. ... We may all verbally agree that the scenario is wrong but, as just shown, third-person verbal agreement is no evidence about our true conscious states (according to IIT) because consciousness is fully dissociated from what we can behaviorally express.


I believe that Herzog et al. have made their argument slightly weaker than it could have been. That’s because they only focus on the psychological function of providing reports. But the unfolding argument implies that IIT-consciousness dissociates from any function – not just reports. Here’s the thing: thinking about one’s own mental states, forming beliefs about them, and evaluating whether those beliefs are true or not, are all psychological functions.


Because of that, following the unfolding argument, conscious experiences dissociate not only from reports about those experiences, but also from judgments and beliefs about those experiences. It follows that if IIT is correct, a large set of weird entities are not only conceivable but actually possible. Ultimately, I believe that one of those cases leads to a paradox, which I call the Unfolded-Tononi paradox.


Let me start with some examples of weird possible entities. There’s a possible entity that only experiences the smell of coffee but believes with absolute certainty that it has a wide variety of conscious experiences.


For all we know, you and I could be that entity. Sure, you can pound your fist on the table and insist that you’re different from that entity. You really have a wide variety of conscious experiences. But an entity like this one would be just as convinced as you are. After all, being absolutely convinced of something is a functional state. And according to IIT what you experience is entirely dissociable from your functional states. So the phenomenal properties you actually experience could be totally different from those you believe you experience (with whatever level of certainty you want).


Here’s another fun Dennett-style case. An entity could have the worst pain experience you can possibly imagine but at the same time functionally desire being in that state of intense pain more than anything else in the world. In addition, that entity could also be absolutely convinced that experiencing this intense pain is the best experience it can possibly enjoy. Again, this case is possible according to IIT because believing and desiring are functional states, and functional states can dissociate from the phenomenal character of experience.


There’s also a possible entity that has no consciousness whatsoever but believes with absolute certainty that it is conscious – an entity akin to a philosophical zombie (except that this entity would only be a functional duplicate). Just as there’s an entity with rich conscious experiences who is absolutely convinced that it experiences nothing at all. It’s not just that it reports experiencing nothing. It believes that it experiences nothing, with absolute certainty – the same certainty with which you believe that you experience something.


If some of those entities are inconceivable to you, then you should hold that IIT is wrong, because the theory predicts that those entities are not only conceivable but possible. And by that I don’t mean just metaphysically possible, but physically possible given the laws of nature as they currently stand. To build one of those entities, one ‘just’ needs to replicate the relevant functional states with whatever structure is required to get the desired phenomenal character. This is a direct result of what Doerig et al. (2019) have shown in Appendix C of the paper introducing the unfolding argument – all functions can be implemented with arbitrary phenomenal character.
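To make the technical claim behind this concrete, here is a minimal illustrative sketch (not code from Doerig et al.’s paper; the network sizes and variable names are my own assumptions) of the “unfolding” idea: over a finite input horizon, a recurrent network’s input/output behavior can be exactly reproduced by a purely feedforward system that uses a separate copy of the weights at each time step, so that no unit’s output ever loops back into itself.

```python
# Illustrative sketch of "unfolding" a recurrent network into a
# feedforward one. All sizes and names are assumptions for the example.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))   # recurrent (feedback) weights
U = rng.normal(size=(4, 3))   # input weights
V = rng.normal(size=(2, 4))   # readout weights

def recurrent(xs):
    """Recurrent system: the state feeds back into itself each step."""
    h = np.zeros(4)
    ys = []
    for x in xs:
        h = np.tanh(W @ h + U @ x)   # feedback loop: h depends on h
        ys.append(V @ h)
    return ys

def unfolded(xs):
    """Feedforward 'unfolding': one distinct layer per time step.
    Each layer passes its output strictly forward to the next layer,
    so the causal graph contains no cycles."""
    layers = [(W.copy(), U.copy(), V.copy()) for _ in range(len(xs))]
    h = np.zeros(4)
    ys = []
    for (Wt, Ut, Vt), x in zip(layers, xs):
        h = np.tanh(Wt @ h + Ut @ x)  # output goes forward only
        ys.append(Vt @ h)
    return ys

xs = [rng.normal(size=3) for _ in range(5)]
# Identical input/output behavior, radically different causal structure:
assert all(np.allclose(a, b) for a, b in zip(recurrent(xs), unfolded(xs)))
```

The two systems are functionally indistinguishable from the outside, yet the unfolded one has no feedback – and according to IIT, a purely feedforward system has Φ = 0 and is therefore unconscious. That structural difference with no functional difference is what the weird entities above exploit.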


Ultimately you can invent as many of those weird scenarios as you want. They’re fun. But perhaps that’s not really a problem because proponents of IIT could just answer that they’re indeed possible, but very unlikely to occur. We should start by taking people’s reports and beliefs about their own experiences at face value, and we’ll worry about weird cases later. (Although I guess I’d still feel uncomfortable with the idea that I don’t really know whether all I ever experience is the smell of coffee or not until someone has checked the structure of my brain!)


So, if you think those entities are possible, I have a thought experiment for you.


There’s a possible world where some unfolded entities (purely feedforward networks) have developed general intelligence – call it the Unfolded-World.


In this world, there’s a creature called ‘Unfolded-Tononi’. This entity has all the same functional states as Tononi in our world. Like Tononi, Unfolded-Tononi believes in the axioms of IIT with a high degree of certainty. And of all the axioms, the one he is most certain of is the axiom of existence. “Consciousness exists: each experience is actual—indeed, that my experience here and now exists (it is real) is the only fact I can be sure of immediately and absolutely.” This axiom seems just as self-evident to Unfolded-Tononi as it does to Tononi in our world.


In the Unfolded-World, Unfolded-Tononi developed IIT, just as it was developed in our world. Unfolded-Tononi is fairly certain that his theory is true since it is based on axioms that he regards as self-evident.


But Unfolded-Tononi later makes a discovery that will change his life forever. He discovers that his theory isn’t true of him! After analyzing the architecture of his own brain, he realizes that, being an unfolded system, he is in fact a zombie. He never had any conscious experience. Since he is convinced that IIT is true, Unfolded-Tononi accepts this unfortunate conclusion – he is, himself, an unfolded zombie.


Unfolded-Tononi then realizes that the possibility of his own situation gives rise to a puzzling consequence for his theory. 


He started from the axiom of existence, which says that “that my experience here and now exists (it is real) is the only fact I can be sure of immediately and absolutely.” At the time, this axiom seemed self-evident to him, which is why he developed IIT in the first place. In fact, he remembers saying to his skeptical colleagues: “IIT has to be true, since the axioms are true!” Yet, his own theory led him to deny what he was previously so sure of. So, he exclaimed: “The existence axiom is false, since IIT is true!” To which his colleagues answered: “But IIT can’t be true, since the existence axiom is false!”


That’s the Unfolded-Tononi paradox. If IIT is true, the existence axiom has to be true. But IIT also says that unfolded entities are possible. These entities can be just as certain of the axiom of existence as you are, since being certain of something amounts to being in a certain functional state, which unfolded entities can replicate. So what makes you so sure that you’re not one of them? The answer can’t be that you know that you’re conscious. Unfolded entities would be just as certain that they’re conscious! Their functional state of certainty would simply be realized in an unfolded architecture. So the only way for you to know that you’re conscious is if you know that you’re not an unfolded entity. This consequence is rather puzzling in itself: if IIT is correct, I can’t know for sure whether I’m conscious or not without checking the architecture of my own brain.


But there’s also a more profound consequence for the theory. Indeed, if IIT is true, the axiom of existence is false: you can’t be absolutely and immediately certain that you’re conscious! Instead, to know that you’re conscious, you have to know that you’re not an unfolded network. Which means that you can’t immediately know whether you’re conscious or not. And you can’t know it with absolute certainty either. After all, your mind could be implemented in a feedforward simulation that just makes you think that you have a non-unfolded brain. That simulation would be akin to Descartes’ Evil Demon, except that in that case it would successfully deceive you into thinking that you’re conscious when in fact you’re not.


If you’re not convinced that the truth of IIT implies that the axiom of existence is wrong, think about the following case – the unfolding-machine.


There’s a possible world where scientists develop an unfolding-machine. When you enter into that machine, it (randomly) either unfolds your brain and leaves all your functional states the same, or does nothing. You come out of that machine. You can’t check your brain. Are you phenomenally conscious or not? Your beliefs about your own mental life as well as all your other functional states are exactly the same in both scenarios. So there’s no way for you to know what the machine did. Thus, there’s no way for you to know whether you’re phenomenally conscious or not when you come out. Therefore, if IIT is true, the axiom of existence is wrong.


So, we said that if IIT is true, the axiom of existence has to be true. But we’ve also just seen that if IIT is true, the axiom of existence is false. So, if IIT is true, then IIT is false. That’s a funny paradox.


I’m not sure about this, but I guess the more general lesson we can draw is that one can’t both hold that zombies (or unconscious functional duplicates) are possible and hold that something like the existence axiom is true. A functional duplicate would have whatever beliefs you have (with whatever level of certainty you have) about your own mental life. Whatever you think or believe separates you from a zombie, a zombie would think and believe the same thing. (Maybe you’ll say that, while zombies are possible, you’re sure that you’re conscious because you’re immediately acquainted (whatever that means) with your own conscious states, and acquaintance is a non-functional relation; but then zombies could also be absolutely certain that they have that special relation to their own mental states, so I’m not sure that’d help.)


Of course if unconscious functional duplicates aren’t possible then you’re fine.



I thank Adrien Doerig for discussing his paper with me.


UPDATE: I just learned that Murray Shanahan previously made a very similar argument. As far as I can tell, the main difference is that we now know (since the unfolding argument) that cases like the unfolded-Tononi are not only conceivable, but also physically possible according to the integrated information theory.

8 comments:

  1. Very fun to read - thanks! It seems here that you are on a slippery slope towards illusionism. If beliefs are functional states, then so is the belief that there is something "else" to explain about internal states, other than the mechanics of neuronal networks, information processing, meta-cognition, etc...

    1. All physicalists should be committed to denying that there's something 'extra' over and above the physical stuff you mentioned (not just illusionists), right? Functionalists believe that being conscious amounts to performing some functions, and so from there you can't get the kind of scenarios I talk about in the post (you obviously can't perform all the same functions, but without consciousness).

  2. Interesting! Such a vivid exploration of the disconnection between IIT's results and phenomenal judgments. I think the same could be said for results that turn on the Exclusion Postulate, since whether a system is embedded in, or embeds, a higher phi system can in principle be entirely dissociable from how it functions in generating introspective judgments. For example: https://schwitzsplinters.blogspot.com/2014/07/tononis-exclusion-postulate-would-make.html

    1. Thanks Eric, that's a very good point. I never thought of that. I suppose we could get a similar thought experiment where Tononi discovers that he is in fact embedded in a system with a higher Phi value than his own Phi value (especially given that the unfolding argument shows that we can vary the Phi value of any system arbitrarily without any change in behavior)! So if that's right there are actually multiple threats to the 'existence axiom' from within IIT itself.

  3. Hi Matthias, very interesting blog post! I appreciate you taking the time to flesh this out, as I was unaware that beliefs are considered functional states. Is this a relatively uncontroversial assumption? I have always purposefully avoided including judgments and beliefs in any sort of topological description of a system because I assumed they were too hard to precisely pin down. If they are simply functional states, as you describe, then I definitely agree there is a paradox inherent in the existence axiom.

    1. Hi, I think that's a relatively uncontroversial assumption, although as with everything else I'm assuming you could find some people who disagree with that haha. I agree with you that they're hard to precisely pin down empirically, but mental states like beliefs, judgments and desires seem to be good examples of mental states that can be described functionally, at least in principle. For more on the specific case of beliefs, you can check here: https://plato.stanford.edu/entries/belief/#Func .

  4. This comment has been removed by the author.

  5. (1) Skeptical scenarios do not require the unfolding argument. Any physicalist theory of consciousness that identifies consciousness with the realizer of a functional role rather than with the role itself is subject to some such skeptical scenarios.
    (2) But when you look at plausible candidates for the realizers of our functional states, e.g. ones that involve recurrent networks or global broadcasting, the neural plausibility that those states could freely vary over functional states as per your skeptical scenarios drops to near zero. Maybe IIT has more of a problem here than other realizer theories, but I would like to see the argument for this.
    (3) You say "the phenomenal properties you actually experience could be totally different from those you believe you experience (with whatever level of certainty you want)." One kind of belief about what one experiences INCORPORATES the experience itself. One might express such beliefs as "I am experiencing this now." Realizer theories do not dictate that such beliefs can vary independently of conscious states.
    (4) In any case, skeptical scenarios don't show anything. Consider: If the brain determines all your experiences, then it could have been wired up to input and output organs by mad scientists so that when you think you are relaxing at the beach you are drowning in arctic waters and when you think you are eating dinner you are running a marathon. Such skeptical scenarios are extremely unlikely but not inconceivable. Of course the disconnect between phenomenology and belief in your scenarios is even weirder but I don't see an argument for inconceivability.

