The User Illusion (TUI) by Tor Nørretranders is a high-quality pop-science book, first published in Danish in the early 90s and in English in the late 90s. It examines consciousness from the perspective of information theory and riffs in an original way on various implications that can (or perhaps could) be drawn from this approach. Despite the book-jacket blurb, it doesn’t explain consciousness, but it’s an enjoyable, thought-provoking read, and its treatment of information theory catches something metaphorically very vibey, very fruitful. If you’re interested, I’d recommend it, but read the Amazon reviews first, in particular a critical review on Amazon.com by a Joao Leao which I think is rather good. My two main (and lasting) takes from the book are these. First, in terms of information processing, conscious awareness runs at a very low rate – 16 bits/second tops – so something qualitative (subjective awareness) is slipping through the information theory (IT) conceptual net. Second, more positively, there’s something in IT that makes a nice, fruitful metaphor for pondering mind and creativity.
But what does IT say?
Lots. But the bit that matters here is the idea of randomness, and how we know whether something even is random. The classic example that catches people out is tossing a coin. If you ask people not in on IT to imagine tossing a coin, writing down 0 for heads and 1 for tails, they are very likely to produce a sequence that isn’t actually random. This is because in real life, when you toss a coin, you’ll sometimes get a whole run of heads or tails in a row, and if you don’t include runs like that, your imaginary coin-tossing session won’t be properly random after all. Randomness goes with entropy, with noise. Nørretranders gives the example of the information contained in dirty dishes – it’s just not interesting to us. We discard the unwanted information, the noise. (And do the dishes.) Nørretranders calls the discarded stuff exformation.
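You can check the runs point for yourself with a few lines of Python (my own minimal sketch, not from the book): simulate fair tosses and look at the longest streak of identical results. In 100 genuinely random tosses the longest run is typically six or seven in a row – far longer than most people include when they make up a “random” sequence by hand.

```python
import random

def longest_run(bits):
    """Length of the longest run of identical consecutive symbols."""
    best = run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(42)  # fixed seed so the demo is repeatable
tosses = [random.randint(0, 1) for _ in range(100)]  # 0 = heads, 1 = tails
print(longest_run(tosses))
```

Run it a few times with different seeds and the long streaks keep showing up – that’s what real randomness looks like.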
There’s then the related idea of compressing information. The fraction 3/7 written out in full continues forever – 0.428571428571428571… – but if you write it as 3/7, that’s a whole lot less information. Note too that 3/7 is exactly right, whereas any finite decimal expansion can only ever be an approximation, since the digits continue without end. Also, if you toss a coin (imaginary or otherwise) 12 times, the result contains more information than 3/7 does, because a truly random sequence can’t be compressed. The coin-tossing example reminds me of the more prolix French continental philosophers – lots of verbiage, not so much precision or clarity. It’s not so much that it’s meaningless (although sometimes I do wonder – see the Sokal affair) as that the information is quite resistant to compression. It would be like a very lengthy decimal expansion that nonetheless can’t be compressed very much – 1,528,248/2,661,993, say. Or just a load of incompressible noise. Though to be fair, precision and clarity don’t feature much in everyday life either; we often go through life guessing, intuiting, going on hunches – and philosophy really ought to be about our life here on this planet, so let’s not be too hard on those thinkers. But their thinking can get messy and opaque, appealing only to other similarly minded philosophers, which takes it out of our lives and into the halls of academe, and that’s a shame.
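The endless expansion of 3/7 is just schoolbook long division, and it’s easy to sketch in Python (my illustration, not the book’s): three characters of fraction generate as many decimal digits as you care to ask for.

```python
def decimal_digits(num, den, n):
    """First n decimal digits of num/den, via long division."""
    digits = []
    rem = num % den
    for _ in range(n):
        rem *= 10
        digits.append(rem // den)  # next digit after the decimal point
        rem %= den                 # carry the remainder forward
    return digits

print("".join(map(str, decimal_digits(3, 7, 18))))  # 428571428571428571
```

The tiny expression “3/7” is the compressed form; the repeating block 428571 is what it unpacks into, forever.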
Anyway. Clarity. Another way of looking at this is to consider zipped computer files. The basic idea is that the zipping software analyses where, say, there’s a run of 1s in a row and tidies it up into something like “4,536 1s in a row here”, which takes far less information to state. The very fact that huge files can be shrunk so drastically shows how powerful this technique is.
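That run-counting idea is called run-length encoding, and it can be sketched in a few lines of Python (my own illustration – real zip files use the more sophisticated DEFLATE scheme, which combines dictionary matching with Huffman coding, but the spirit is the same):

```python
from itertools import groupby

def rle(data):
    """Run-length encode a sequence into (symbol, count) pairs."""
    return [(sym, len(list(grp))) for sym, grp in groupby(data)]

data = "1" * 4536 + "0" * 3          # 4,539 characters of raw data
print(rle(data))                     # [('1', 4536), ('0', 3)]
```

Two pairs stand in for four and a half thousand characters – and the original can be reconstructed exactly, nothing lost.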
Meanwhile, TUI states “Intelligence is thus not about remembering lots of microstates at once in sequence. Intelligence is being able to see which macrostates combine all the microstates”.
At which point I think it’s fair to ask – what is this ‘seeing’ then? How does that work? This failure to address ‘seeing’ persists in AI, on and on, decade after decade, leading intelligent researchers and philosophers astray. It’s a real blind spot (so to speak).
Which leads to the next point. So far all this has had a certain passivity to it. You discard the exformation and keep the information, it’s all compressed nicely, then you unzip it and – et voilà – there it all is again. But there’s a mystery in how this process can result in new ideas so heavily loaded with new information that they change whole paradigms, in art or science. Doing that requires vast amounts of information and exformation – and an intuitive leap that is an inherent part of the mystery of creation. And that happens in the subjective. No amount of computing, no matter how clever, ever results – and I would say ever can result – in profound new insight on its own. There always needs to be a human mind involved somewhere.
In TUI psychologist David Hargreaves, who has written extensively on the psychology of music and musicians, is quoted as saying “The theory [of musical preference] has its base in information theory, but the important insight comes from the distinction between this conception of ‘information’ and its psychological counterpoint. Fundamentally, the coding of physical information contained in a musical composition, as in information theory, predicts very little of interest, but coding the information in ‘subjective’ terms predicts quite a lot. Whether a person likes a particular piece or not depends on the information they are able to take out of it, rather than the information that is already ‘in there’.”
‘Macrostates’ are what you end up with when a great deal of exformation has been discarded, compressed into notions that encompass all that discarded material. The mystery is that this is even possible. How can certain ideas contain so much precisely by having discarded so much? And how are we able to ‘see’ the outline of Big Ideas as such in the first place? Big Ideas start out looking simple, but they are the result of an enormous amount of discarded information that they still, paradoxically, somehow contain or imply. After new Big Ideas appear, they are unpacked at great length by armies of scientists and/or artists, which is only possible because those Ideas contain so much novelty. They resonate. They have a kind of interiority that can be explored, and those explorations uncover all manner of new treasures as we shine consciousness on them – treasures we couldn’t see before. As TUI puts it, what we experience has acquired meaning before we become conscious of it. Perhaps this is connected with how we somehow intuit that there’s something Big there. It’s not necessarily immediately obvious, either – usually when something Big comes along there’s a huge amount of reflexive attack from certain quarters before wider acceptance is found. Which is interesting in itself, but perhaps for another article.
The second big take for me from TUI is the small – tiny, even – number of bits per second that are processed consciously. This is a strong comeback to the whole ‘reign of quantity’ idea that measuring and counting is all. The few bits/sec of subjective conscious awareness are utterly, profoundly different from all that incoming raw data. But why should we even be surprised at that low bitrate? Perhaps because we’re not used to putting the qualitative first instead of the quantitative. But it’s not the number of bits/sec that matters; it’s the fact that those 16 or so bits are processed ‘in’ (whatever that means) or ‘through’ (whatever that means) subjective awareness.
And that’s what (the) TUI metaphor is, for me at any rate. I had to discard a lot of information trying to catch the essence of it, and I hope it hasn’t been too confusing as a result. As ever, I’m trying to be clear while also putting across an intuition, a vibe, a feel for the idea. But here it has a particular extra level of ‘meta’-ness, so I can only apologise if anybody’s feeling a bit dizzy. Maybe it’s time for a cup of tea.