Sunday 31 October 2021

If IIT is true, IIT is false (The Unfolded-Tononi Paradox)

A lot has been written about the unfolding argument against Integrated Information Theory. Herzog et al.’s recent article provides a much-needed summary of the debate, as well as responses to the criticisms that have been raised against the argument.
 

In their section on “Dissociative Epiphenomenalism”, Herzog et al. note the following consequence of the unfolding argument for IIT:


For example, we may wire this robot to always experience the smell of coffee, independently of what it claims to perceive. The robot may perform complex tasks and report about them, but, according to IIT, consciously never experiences anything other than the smell of coffee. Alternatively, we can wire up the robot so that conscious percepts permanently change, completely independently from the robot’s behavior (for example, the robot may always report that it is smelling coffee, but according to IIT its experiences are ever-changing). … Since IIT’s consciousness is truly dissociated from [Input/Output] functions, we may all be such creatures. We may in truth all be perpetually experiencing the smell of coffee but be unable to express it, because neither the level nor the content of consciousness has any impact on reports or any other third-person data. ... We may all verbally agree that the scenario is wrong but, as just shown, third-person verbal agreement is no evidence about our true conscious states (according to IIT) because consciousness is fully dissociated from what we can behaviorally express.


I believe that Herzog et al. have made their argument slightly weaker than it could have been. That’s because they only focus on the psychological function of providing reports. But the unfolding argument implies that IIT-consciousness dissociates from any function – not just reports. Here’s the thing: thinking about one’s own mental states, forming beliefs about them, and evaluating whether those beliefs are true or not, are all psychological functions.


Because of that, following the unfolding argument, conscious experiences dissociate not only from reports about those experiences, but also from judgments and beliefs about those experiences. It follows that if IIT is correct, a large set of weird entities are not only conceivable but actually possible. Ultimately, I believe that one of those cases leads to a paradox, which I call the Unfolded-Tononi paradox.


Let me start with some examples of weird possible entities. There’s a possible entity that only experiences the smell of coffee but believes with absolute certainty that it has a wide variety of conscious experiences.


For all we know, you and I could be just like that entity. Sure, you can pound your fist on the table and insist that you’re different from that entity. You really have a wide variety of conscious experiences. But an entity like this one would be just as convinced as you are. After all, being absolutely convinced of something is a functional state. And according to IIT what you experience is entirely dissociable from your functional states. So the phenomenal properties you actually experience could be totally different from those you believe you experience (with whatever level of certainty you want).


Here’s another fun Dennett-style case. An entity could have the worst pain experience you can possibly imagine but at the same time functionally desire being in that state of intense pain more than anything else in the world. In addition, that entity could also be absolutely convinced that experiencing this intense pain is the best experience it can possibly enjoy. Again, this case is possible according to IIT because believing and desiring are functional states, and functional states can dissociate from the phenomenal character of experience.


There’s also a possible entity that has no consciousness whatsoever but believes with absolute certainty that it is conscious – an entity akin to a philosophical zombie (except that this entity would only be a functional duplicate, not a physical one). Likewise, there’s a possible entity with rich conscious experiences that is absolutely convinced it experiences nothing at all. It’s not just that it reports experiencing nothing. It believes that it experiences nothing, with absolute certainty – the same certainty with which you believe that you experience something.


If some of those entities are inconceivable to you, then you should hold that IIT is wrong, because the theory predicts that those entities are not only conceivable but possible. And by that I don’t mean just metaphysically possible, but physically possible given the laws of nature as they currently are. To build one of those entities, one ‘just’ needs to replicate the relevant functional states with whatever structure is required to get the desired phenomenal character. This is a direct result of what Doerig et al. (2019) have shown in Appendix C of the paper introducing the unfolding argument – all functions can be implemented with arbitrary phenomenal character.
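
If you want to see concretely why such entities are physically constructible, here’s a minimal sketch of the unrolling idea behind the unfolding argument. Everything in it (network size, weights, the numpy setup) is my own toy illustration, not Doerig et al.’s actual construction:

```python
import numpy as np

# Toy sketch of "unfolding" (my simplification, not Doerig et al.'s actual
# Appendix C construction): a recurrent network run for T time steps computes
# the same input/output function as a feedforward network with T layers,
# where each "layer" is a fresh copy of the recurrent units.

rng = np.random.default_rng(0)
n, T = 4, 3
W_rec = 0.5 * rng.normal(size=(n, n))   # recurrent (feedback) weights
W_in = rng.normal(size=(n, n))          # input weights

def recurrent_net(inputs):
    """The same n units feed back onto themselves at every time step."""
    h = np.zeros(n)
    for x in inputs:
        h = np.tanh(W_rec @ h + W_in @ x)
    return h

def unfolded_net(inputs):
    """T distinct layers, each a copy of the weights: purely feedforward."""
    layers = [(W_rec.copy(), W_in.copy()) for _ in range(T)]
    h = np.zeros(n)  # layer t only receives layer t-1's output: no loops
    for (Wr, Wi), x in zip(layers, inputs):
        h = np.tanh(Wr @ h + Wi @ x)
    return h

inputs = [rng.normal(size=n) for _ in range(T)]
print(np.allclose(recurrent_net(inputs), unfolded_net(inputs)))  # True
```

The two systems share every functional state you could probe from the outside, yet only the recurrent one contains feedback loops – so, according to IIT, only the recurrent one can have non-zero integrated information.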


Ultimately you can invent as many of those weird scenarios as you want. They’re fun. But perhaps that’s not really a problem because proponents of IIT could just answer that they’re indeed possible, but very unlikely to occur. We should start by taking people’s reports and beliefs about their own experiences at face value, and we’ll worry about weird cases later. (Although I guess I’d still feel uncomfortable with the idea that I don’t really know whether all I ever experience is the smell of coffee or not until someone has checked the structure of my brain!)


So, if you think those entities are possible, I have a thought experiment for you.


There’s a possible world where some unfolded entities (purely feedforward networks) have developed general intelligence – call it the Unfolded-World.


In this world, there’s a creature called ‘Unfolded-Tononi’. This entity has all the same functional states as Tononi in our world. Just like Tononi, Unfolded-Tononi believes in the axioms of IIT with a high degree of certainty. And of all the axioms, the one he is most certain of is the axiom of existence: “Consciousness exists: each experience is actual—indeed, that my experience here and now exists (it is real) is the only fact I can be sure of immediately and absolutely.” This axiom seems just as self-evident to Unfolded-Tononi as it does to Tononi in our world.


In the Unfolded-World, Unfolded-Tononi developed IIT, just as it was developed in our world. Unfolded-Tononi is fairly certain that his theory is true since it is based on axioms that he regards as self-evident.


But Unfolded-Tononi later makes a discovery that will change his life forever. He discovers that his theory isn’t true of him! After analyzing the architecture of his own brain, he realizes that, being an unfolded system, he is in fact a zombie. He never had any conscious experience. Since he is convinced that IIT is true, Unfolded-Tononi accepts this unfortunate conclusion – he is, himself, an unfolded zombie.


Unfolded-Tononi then realizes that the possibility of his own situation gives rise to a puzzling consequence for his theory. 


He started from the axiom of existence, which says that “that my experience here and now exists (it is real) is the only fact I can be sure of immediately and absolutely.” At the time, this axiom seemed self-evident to him, which is why he developed IIT in the first place. In fact, he remembers saying to his skeptical colleagues: “IIT has to be true, since the axioms are true!” Yet, his own theory led him to deny what he was previously so sure of. So, he exclaimed: “The existence axiom is false, since IIT is true!” To which his colleagues answered: “But IIT can’t be true, since the existence axiom is false!”


That’s the Unfolded-Tononi paradox. If IIT is true, the existence axiom has to be true. But IIT also says that unfolded entities are possible. These entities can be just as certain of the axiom of existence as you are, since being certain of something amounts to being in a certain functional state, which unfolded entities can replicate. So what makes you so sure that you’re not one of them? The answer can’t be that you know that you’re conscious. Unfolded entities would be just as certain that they’re conscious! Their functional state of certainty would simply be realized in an unfolded architecture. So the only way for you to know that you’re conscious is if you know that you’re not an unfolded entity. This consequence is rather puzzling in itself: if IIT is correct I can’t know for sure whether I’m conscious or not without checking the architecture of my own brain.


But there’s also a more profound consequence for the theory. Indeed, if IIT is true, the axiom of existence is false: you can’t be absolutely and immediately certain that you’re conscious! Instead, to know that you’re conscious, you have to know that you’re not an unfolded network. Which means that you can’t immediately know whether you’re conscious or not. And you can’t know it with absolute certainty either. After all, your mind could be implemented in a feedforward simulation that just makes you think that you have a non-unfolded brain. That simulation would be akin to Descartes’ Evil Demon, except that in that case it would successfully deceive you into thinking that you’re conscious when in fact you’re not.


If you’re not convinced that the truth of IIT implies that the axiom of existence is wrong, think about the following case – the unfolding-machine.


There’s a possible world where scientists develop an unfolding-machine. When you enter that machine, it (randomly) either unfolds your brain, leaving all your functional states the same, or does nothing. You come out of that machine. You can’t check your brain. Are you phenomenally conscious or not? Your beliefs about your own mental life, as well as all your other functional states, are exactly the same in both scenarios. So there’s no way for you to know what the machine did. Thus, there’s no way for you to know whether you’re phenomenally conscious or not when you come out. Therefore, if IIT is true, the axiom of existence is wrong.


So, we said that if IIT is true, the axiom of existence has to be true. But we’ve also just seen that if IIT is true, the axiom of existence is false. So, if IIT is true, then IIT is false. That’s a funny paradox.


I’m not sure about this, but I guess the more general lesson we can draw is that one can’t both hold that zombies (or unconscious functional duplicates) are possible and hold that something like the existence axiom is true. A functional duplicate would have whatever beliefs you have (with whatever level of certainty you have) about your own mental life. Whatever you think or believe separates you from a zombie, a zombie would think and believe the same thing. (Maybe you’ll say that, while zombies are possible, you’re sure that you’re conscious because you’re immediately acquainted (whatever that means) with your own conscious states, and acquaintance is a non-functional relation – but then zombies could also be absolutely certain that they have that special relation to their own mental states, so I’m not sure that’d help.)


Of course if unconscious functional duplicates aren’t possible then you’re fine.



I thank Adrien Doerig for discussing his paper with me.

 

 

UPDATE: I just learned that Murray Shanahan previously made a very similar argument. As far as I can tell, the main difference is that we now know (since the unfolding argument) that cases like Unfolded-Tononi’s are not only conceivable, but also physically possible according to Integrated Information Theory.

Monday 16 November 2020

Inferotemporal cells represent both conscious and unconscious percepts in binocular rivalry

 

Hesse & Tsao just published this great study. It’s a binocular rivalry (BR) study. In BR, incompatible stimuli are presented to each eye – for instance, Obama’s face and a taco. Conscious perception alternates between the two every few seconds. So, researchers can compare conscious perception of a stimulus (e.g. Obama’s face) with unconscious processing of the same stimulus (e.g. Obama’s face when it is suppressed and the subject is conscious of the taco instead).


What’s new about Hesse & Tsao’s study? They developed a new no-report paradigm. It’s probably better than other no-report paradigms. But I’m not sure the no-report aspect of the study is that important. They focus mostly on neuronal activity in the inferotemporal (IT) cortex, which shouldn’t exhibit much report-related activity – compared to PFC, for instance. So I won’t talk about the “no-report” aspect of this study (despite the hype, no-report is not new, and quite often unnecessary, in my humble opinion).


From a methodological perspective, the main new thing with this study – compared for instance to previous work by Logothetis et al. addressing similar questions – is more advanced recording and decoding. Hesse & Tsao recorded from multiple (mostly face-selective) IT neurons simultaneously during BR, and could decode the representational contents carried by those populations on a trial-by-trial basis.


Now the result. As expected, IT neurons modulate their activity with BR alternations. But the main new and exciting finding is that these neurons encode both the conscious percept and the suppressed percept at the same time. That’s a case of mixed selectivity. IT neurons do track the conscious percept, but they also track the unconscious percept.
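
To see how mixed selectivity makes both percepts decodable from one and the same population, here’s a toy simulation (entirely made up – not Hesse & Tsao’s data, stimuli, or analysis pipeline) in which each simulated neuron’s firing rate depends on both the conscious and the suppressed stimulus:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy simulation of mixed selectivity (my own sketch, not Hesse & Tsao's
# pipeline): each neuron's rate depends on BOTH the identity of the conscious
# percept and the identity of the suppressed stimulus, so two decoders can
# read out both from the same population at the same time.
rng = np.random.default_rng(1)
n_trials, n_neurons = 400, 50
conscious = rng.integers(0, 2, n_trials)   # identity of the dominant percept
suppressed = rng.integers(0, 2, n_trials)  # identity of the suppressed stimulus
w_c = rng.normal(size=n_neurons)           # tuning to the conscious percept
w_s = 0.5 * rng.normal(size=n_neurons)     # weaker tuning to the suppressed one
rates = (np.outer(conscious, w_c) + np.outer(suppressed, w_s)
         + rng.normal(size=(n_trials, n_neurons)))   # trial-by-trial noise

for label, y in [("conscious", conscious), ("suppressed", suppressed)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), rates, y, cv=5).mean()
    print(f"decoding the {label} percept: {acc:.2f}")  # both far above chance (0.5)
```

Both decoders perform far above chance on held-out trials, even though they read from exactly the same simulated neurons – that’s the signature of mixed selectivity.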


Since unconscious contents can be decoded from IT neurons, what correlates with the conscious content is a special kind of population code in IT – not just any neuronal activity. That’s a really cool finding.


On Twitter, Doris Tsao contrasted these findings with a common – but incorrect – interpretation of earlier single-neuron recordings by Sheinberg & Logothetis.


Here’s, for instance, what Koch et al. (2016) write about this: “Single-neuron recordings in monkeys, carried out during paradigms such as binocular rivalry, suggest that activity in most V1 neurons is linked to the identity of the physical stimulus rather than the percept. This contrasts with the activity of neurons higher up in the visual hierarchy, which correlates with the percept rather than the stimulus.”


To the best of my knowledge, Logothetis et al. never claimed that IT neurons track only the conscious percept. In any case, Hesse & Tsao have now proved this story wrong. They also convincingly argue that there’s no subset of IT neurons dedicated only to the representation of the conscious percept.


So that’s it? We’ve found that the neural correlate of consciousness (of faces) is a special kind of population coding in IT neurons?


Probably not. Binocular rivalry can happen for unconscious percepts. There’s also evidence that binocular rivalry – at least for vertical vs. horizontal gratings – operates at an earlier level than other forms of suppression like dichoptic metacontrast masking. (And metacontrast masking likely operates earlier than crowding, which itself probably operates earlier than object substitution masking.)


So, the mechanism responsible for rivalry is upstream from the mechanism responsible for consciousness of contents, which means that whatever contrast you get from comparing conscious vs. suppressed contents encompasses both neural correlates of consciousness and a set of prerequisites of consciousness.


Now, there’s a big difference between finding the neural correlate of a content of consciousness, and finding the neural correlate of consciousness of that content. 


Hesse & Tsao have convincingly shown that the conscious perceptual content correlates with a specific population code in IT. They have also shown that the same neuronal population can carry both conscious contents and unconscious contents at the same time. That’s a nice discovery.


Does it mean that it is in virtue of being coded in a specific manner by IT neurons that the content is conscious rather than non-conscious? We don’t really know. We still don’t know whether this kind of population coding in IT would be sufficient on its own for consciousness of the representational content carried by this neuronal population. But with more studies like Hesse & Tsao’s we’ll surely get there.



I thank Janis Hesse & Doris Tsao for discussing their study with me and for reading a draft of this blog post.

Thursday 10 September 2020

The unconscious lag between reality and conscious perception

It seems that we consciously perceive events exactly as they unfold in time. But that’s probably wrong. Here’s an example.


On the left, a green disk and a red disk are successively presented at the same location for 20 milliseconds each. On the right, the red and green disks are successively presented in different locations. The disks on the right are clearly visible. On the left, however, if the effect works as planned, you don’t perceive the green and red disks, but only a single yellow-ish disk. In this case the effect partly depends on the properties of your screen – it’s better if the disks are equiluminant – so it’s possible that the demonstration above doesn’t work so well for you. In any case, this phenomenon is called “color fusion”, and in proper lab settings, subjects can’t discriminate between the fused green/red disks and a yellow disk.


If we perceived events exactly as they unfold in time, we’d perceive a green disk quickly followed by a red disk. That’s not what we see.


It’s possible to show that the green and red disks are processed in your visual cortex. That’s evidence that you do see them. But it doesn’t feel like it. Why? A reasonable hypothesis is that your brain/mind unconsciously integrates the green and red disks over 40 milliseconds, and then you become conscious of the result of this unconscious processing as a single, integrated yellow disk.
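
Here’s how minimal that integration hypothesis can be made. In this toy sketch (mine, not a model of the visual system), “integration” is just a duration-weighted average of the two frames – and averaging red and green light gives yellow:

```python
import numpy as np

# Minimal sketch of the temporal-integration hypothesis (my toy model, not
# a vision model): if the two 20 ms frames are averaged over the 40 ms
# window, a pure red frame and a pure green frame fuse into yellow.
red = np.array([1.0, 0.0, 0.0])       # frame 1: red disk, 20 ms (linear RGB)
green = np.array([0.0, 1.0, 0.0])     # frame 2: green disk, 20 ms
fused = (20 * red + 20 * green) / 40  # duration-weighted average
print(fused)                          # [0.5 0.5 0. ] -- red + green light = yellow
```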


This is an example of a postdictive effect. The presentation of the red disk causes a change in the way you see the green disk, even if the red disk is presented after the green disk. A wide variety of postdictive effects have been documented in the 100-150 millisecond range. The way you consciously perceive something at time t depends on what’s going to happen at time t + 150 milliseconds – approximately the time of an eye blink.


I find these effects fascinating, but they’re not revolutionary. Unless you’re lucky enough to have an immaterial soul doing that job for you, signal processing performed by a material object like the brain has to take some time. As you’re reading this, it already takes approximately 50 or 60 milliseconds for visual signals to go from your retina to your visual cortex. So, I don’t find it so hard to believe that conscious perception can be preceded by 150 milliseconds of unconscious processing.


Some postdictive effects, however, can operate over a longer time range. In a recent article, Herzog, Drissi-Daoudi, and Doerig review long-lasting postdictive effects in the 350-450 millisecond range. I’ll talk about some of their main findings here, but I really encourage you to read the article to get the full picture. With Adrien Doerig, we also talked about some of the consequences of this research for theories of consciousness, so if you're interested you can check this article.


[Figure from Herzog, Drissi-Daoudi & Doerig: the sequential metacontrast paradigm, illustrating the V-PV3 and V-AV3 conditions discussed below.]

Here’s an example with the “Sequential Metacontrast Masking Paradigm”. This paradigm involves Vernier stimuli (the vertical bars). These stimuli integrate very well when presented one after the other, a bit like the colored disks above. If a Vernier with an offset to the right is followed by a Vernier with an offset to the left (call that an anti-Vernier), what observers report seeing is just a single Vernier with no offset at all (a “neutral Vernier”). This is a postdictive effect: the anti-Vernier changes the appearance of the Vernier even if the former is presented after the latter.


This integration also works when the Verniers appear to move. As illustrated in the figure above (V-PV3), the offset of the first Vernier is “transported” in the stream of Verniers, even if the subsequent Verniers are neutral. Participants just report seeing a Vernier moving to the right, and no neutral Verniers.


Here’s the main finding now. As depicted in the figure above (V-AV3), if a Vernier is followed by neutral Verniers, themselves followed by an anti-Vernier, observers report perceiving a stream of neutral Verniers. It’s a bit as if the anti-Vernier reached back in time to change the way in which the entire stream of Verniers is perceived.


How far back in time? Well, that’s where things get a little crazy… An anti-Vernier can change the way in which the entire stream of Verniers is perceived even if it appears 450 milliseconds after the first Vernier has been presented. This means that the reported appearance of the stream of Verniers depends on what will be presented 450 milliseconds after the stream has started. Yes, you read that correctly. 450 milliseconds. This finding suggests that, at least in this case, there’s a window of 450 milliseconds of unconscious processing before the entire stream is perceived.
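
A crude way to summarize the finding (my simplification, with illustrative timings rather than the exact experimental parameters): treat perception as summing all vernier offsets that fall within one unconscious integration window of up to ~450 milliseconds:

```python
# Toy reading of long-lasting integration in the sequential metacontrast
# paradigm (my simplification of Herzog et al.'s findings): vernier offsets
# falling within one unconscious integration window are summed, and the
# whole stream is reported with the summed offset.
def perceived_offset(stream, window_ms=450):
    return sum(offset for t_ms, offset in stream if t_ms <= window_ms)

# (time in ms, offset): +1 = right-offset vernier, -1 = anti-vernier, 0 = neutral
V_PV3 = [(0, +1), (100, 0), (200, 0), (300, 0)]
V_AV3 = [(0, +1), (100, 0), (200, 0), (300, 0), (400, -1)]
print(perceived_offset(V_PV3))  # +1: the first vernier's offset dominates the stream
print(perceived_offset(V_AV3))  #  0: the late anti-vernier cancels it -> neutral
```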


This experiment also provides evidence that conscious perception is constituted of discrete episodes, instead of being continuous. But I won’t focus on this aspect here. Nor will I focus on all the other cases of long-lasting postdiction reported by Herzog et al. Instead, let me finish with a short reflection on what this research teaches us about consciousness, and baseball.


Baseball is a game of milliseconds. A fastball thrown at 100 mph (161 km/h) reaches the batter in about 375 milliseconds. If Herzog et al. are right, it takes 400 ms for the batter to construct a conscious percept of the ball. Which means that the batter’s conscious visual percept of the ball lags far behind its actual location. In fact, it lags so far behind that the batter hits the ball before she consciously perceives the pitch leaving the pitcher’s hand.
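
The numbers check out with simple arithmetic (the release distance below is my rough assumption):

```python
# Back-of-the-envelope check of the timing. The release distance is my rough
# assumption (pitchers release the ball well in front of the 60.5 ft rubber);
# it yields the ~375 ms figure quoted above.
MPH_TO_MS = 0.44704                 # 1 mph in m/s
speed = 100 * MPH_TO_MS             # 100 mph fastball ~ 44.7 m/s
release_to_plate = 16.8             # metres, roughly 55 ft
travel_ms = 1000 * release_to_plate / speed
print(f"ball travel time: {travel_ms:.0f} ms")  # ~376 ms
print(f"percept ready after ~400 ms -> contact precedes the conscious percept")
```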

 

To illustrate a similar – though less extreme – scenario, Nijhawan & Wu (2009) made this great picture:

[Picture from Nijhawan & Wu (2009): the actual vs. perceived location of a moving tennis ball.]

The white circle represents the hypothetical perceived location of a tennis ball travelling at 60 mph, compared to its actual location, assuming a processing latency of just 100 milliseconds. I’ll let you imagine the perceived location of the ball if the latency is actually 400 milliseconds, as Herzog et al. convincingly argue.
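
For the curious, the arithmetic is simple – the perceived position lags the actual one by speed times latency:

```python
# Same arithmetic for the tennis picture: the perceived position lags the
# actual one by (speed x latency). Numbers are mine, computed from the
# figures quoted in the text.
speed = 60 * 0.44704                  # 60 mph ~ 26.8 m/s
for latency_s in (0.1, 0.4):          # 100 ms (as in the figure) vs. 400 ms
    lag_m = speed * latency_s
    print(f"{1000*latency_s:.0f} ms latency -> ball is {lag_m:.1f} m ahead")
# 100 ms -> 2.7 m; 400 ms -> 10.7 m ahead of where it is consciously seen
```

At 400 milliseconds of latency, the ball would be about ten meters ahead of where it is consciously seen.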

That's just unbelievable. I don’t know much about the phenomenology of hitting a 100 mph fastball. But my guess is that baseball players don’t feel like they swing the bat before consciously seeing the ball leave the pitcher’s hand. Instead, they probably feel like they swing the bat at the moment they consciously see the ball approaching home plate.


If Herzog et al. are correct, and if that’s indeed how baseball players feel, baseball players are deeply wrong. It’s impossible for them to actually hit the ball at the moment they consciously see it approaching home plate. Instead, they unconsciously process the type of pitch the pitcher has thrown, whether it’s going to end up in the strike zone or not, whether they should swing or not, and how. All that, long before they consciously perceive the actual trajectory of the ball.


This leaves us with a puzzle. How can we, baseball players, and Roger Federer of all people, be so wrong? How is it possible that we don’t realize that our conscious experiences can lag 400ms behind reality? I think that two phenomena can independently contribute to the fact that we don’t realize it. Just a caveat before I continue: I’m not an expert on this, so that’s entirely speculative and I’m probably wrong.


First, most of the time the visual world isn’t like an unpredictable stream of Vernier stimuli, and most of us don’t spend our lives guessing what the next pitch is going to be. In ordinary situations, conscious percepts could be constituted before the entire bottom-up visual processing is over, based on predictions of what’s likely to come next. So, your stream of consciousness could “catch up” with reality if predicting what’s coming requires less processing time than processing incoming sensory information from scratch.


Second, consciousness could be constituted of discrete episodes, and the intuition that the stream of consciousness is a continuous succession of feelings is an illusion. That’s the view held by Herzog et al. I still have a hard time understanding it, even if I’m starting to see the appeal, so I apologize if what comes next is a bit confusing.

 

Let's start with an analogy. A picture can represent 3-dimensional spatial relations between depicted objects, even if the picture itself only has two dimensions. Just as a picture doesn’t need to be 3-dimensional to represent spatial relations in a 3D space, your experience doesn’t need to be continuous to represent a succession of events. According to Herzog et al.'s view, your experience represents a succession of events, which gives rise to a feeling of succession, even if the experience itself is not constituted by a succession of feelings.


Here’s what actually happens if this view is correct. In the case of the stream of Verniers, for instance, we do not have a succession of experiences – each experience representing a single Vernier in the stream. There’s no succession of feelings. Instead, there’s just a feeling of succession. The entire stream of Verniers is experienced in a single conscious experience with temporal properties assigned to each element of the stream. The fact that elements of the experience are “tagged” with these temporal properties gives us a feeling of succession in a single, discrete conscious episode, even if there’s no actual succession of feelings.


According to this view, in between those discrete conscious episodes – which give you the impression that consciousness is continuous – you’re not conscious of anything. We live most of our lives as zombies, unconsciously accumulating information, and reconstructing conscious experiences of what just happened after the fact.


It’s starting to be a bit too vertiginous for my taste, so I’ll stop here. There’s a lot to think about in Herzog et al.’s article. I thank Adrien Doerig for discussing these issues with me, and for reading a previous draft of this blog post.
