Speed and The Great Asymptote

One morning I put the TV on to watch while I had my breakfast, always a risky proposition given the ever-present danger of the sudden random appearance of yet another sick-makingly vile politician, and sure enough I found the same particularly noisome individual popping up at the same time on both BBC and ITV, so I switched to Channel 4 as they usually show comedy in the mornings these days. And found myself watching a curious slo-mo scenario where a key character – called Raymond I think – sailed over a settee as a soft toy headed in the opposite direction over his arcing body as he dived towards what looked like a front door. To the soundtrack of Beethoven’s Ode to Joy, the camera cut towards a hand reaching slowly to the outside doorknob, then cut back to Raymond continuing on his journey to the other side of the door, presumably to lock the door and stop this dreaded person from getting in, then back to the threatening hand outside. The cuddly toy then hit the hi-fi, causing it to somewhat theatrically explode, emitting sparks and stopping the music, perhaps to increase the psychological tension. Raymond managed to make it to the door and lock it, sliding down the door to the floor with a look of relief on his face, now in Beethoven-free normal time. But there was a twist – the hand of dread came through the letterbox and ruffled his hair. The credits rolled.

This was obviously the denouement of a comedic scenario, but devoid of the setup it was just strange. There was a sense of something hanging in the air, a gnomic code without the necessary key to unlock its secrets, to ‘get’ it. And there was no way in principle of working out just what had led up to this scenario. I could guess that it was very important that the somebody on the outside didn’t get in, although the way this somebody affectionately played with Raymond’s hair was perhaps a clue that they weren’t that bad really, but the cuddly toy was a mystery that would require deep speculation. And why was the hi-fi playing Ode to Joy? Presumably it was to add an ironically dramatic soundtrack to the scenario, but what was it doing on the hi-fi in the first place? Was that glossed over for the purposes of the scene or did it in fact have a deeper significance that could be found only by finding out what the rest of the script was? Was the cuddly toy a comedic flourish or a MacGuffin or a leitmotif throughout the rest of the show? I’m guessing a flourish, but who knows. 

There would certainly be ways of applying an enormous amount of guesswork to create possible lead-in scenarios, setting them off against each other to see how they pan out, but for the humour to work fully as created by the scriptwriters, what would be required is what the scriptwriters actually wrote – or maybe, at the most extreme, a group of skilled comedy writers could as a creative challenge write different lead-ups to the same endgame. Perhaps as part of the scriptwriting process that’s just what happened – quite likely, in fact. But the writers would have to have that extra skill that only comes through learning, through practice, as creativity is particularly sensitive to subtlety, ambiguity, allusion, irony, character…

Character. Also found when the art’s overloaded with depth, with resonant communication from the soul.

Of course we are asked to believe that the soul doesn’t exist, barrages of intellectual ideas aimed at the concept until it (supposedly) goes away, destroyed by all the cleverness. What might it mean in practice? Well the best sort of cleverness is sciencey cleverness, precisely because it does away with the fogs of superstition and woolly thinking and shows us how things really are which enables us to make and do properly. Which is all very clever of course… in a way. To consider what way that might be, it’s worth examining what happens when we use clever science to examine intellect. Which means computers of course.

There is an almost touching tendency abroad to suppose that if you add computers to things, it will somehow make them better. It doesn’t matter how many expensive failures this approach is responsible for; we feel that somehow next time it will be different. Sometimes next time is different, and things are improved, which keeps us going back to computers. But if you examine in what way things have improved, or how often they’re genuinely improved, it isn’t quite that simple. One curious aspect of it all is the way that the apparent ‘improvement’ always seems to go with speed for some reason. Things are quicker, which is more convenient; there’s less time spent with that awful frustrating pain of waiting. Which brings us to the first problem:

SPEED = GOOD

This is the problem of impatience, which leads to the idea that speed is inherently good. It’s taken a long time to attain the proven discoveries of science up to now, and this apparent slowness offends the Big Data evangelists for some reason. The ego is excellent at inventing false timetables, where it decrees that something somehow ‘ought’ to be done by a certain time, a time it often conjures out of thin air due to its disembodied disconnect from the concrete flow of life. But because scientific progress has always included data crunching based on observation, somehow the impatient Big Data gang have let themselves fall for the idea that that’s all that’s needed… so if we just speed that up and do away with unreliable scientists, we’ll get all our goodies in triple quick time. The futility of massive speed for speed’s sake. Land speed records are always set on the flat, but Life is not flat. Life has bumps, twists and turns. Life lives; speed for speed’s sake is not actually life. Scrape enough data for AI art and the creepiness that results from flat pure speed becomes visible, permeating the toxic imagery that emerges. Think of that nauseating oddness as a visual presentation of something that is normally invisible. Ponder it a while, but perhaps not too long. (I find much AI art reminds me of when I was a kid and had eaten something that disagreed with me – I knew something was wrong when unwanted memories of the food in question started popping back into my mind. The memories would be partly visual, partly conceptual, and it was so striking that I still remember it now, all these decades later. Vomiting would usually follow – but when you’ve ingested visual poisoned food, what exactly do you do to rid yourself of that poison? You can’t vomit it out the way you can with food. Bear that in mind when making choices over what to visually ‘eat’.)

As science is a real-world project, it might be instructive to look at what happens out there in the world if you believe in speed and do science accordingly. A good example would be the Human Brain Project, which launched in 2013 with a $1.3 billion award from the EU and involved around 150 organisations. It was going to provide us with profound insights into how the brain works. There were going to be ‘breakthroughs’. But the more data the Human Brain Project consumed, the more chimerical those breakthroughs became. After a few years of non-success, it was quietly downgraded to a software project providing research tools for scientists. The original leader of the project, Dr Henry Markram of the Swiss Federal Institute of Technology in Lausanne, claimed in a 2009 TED talk that within 10 years we would have a complete computer simulation of the human brain. Does it look like we’ve made much – or any – real progress there? This is just one major example, but it’s easy to find others, to find yourself reading the same hype tropes of how the Breakthrough is coming down the line for sure this time… But these misguided projects have money momentum, and are thus subject to sunk cost fallacies, and we have human pride and ego in the mix, as is always the way with real-world science. Ever more data is fed into colossally expensive projects seeking to show that data in itself can somehow give rise to creative insight – and if you make foundational assumptions of that sort then you lock yourself into a sterile circularity, a wasteful running on the spot that will drain money and resources for either nothing or next to nothing.

By rushing to get there, we get nowhere. Where even is ‘there’? Can there even be a there when we are speeding to get ‘there’? ‘There’ existentially is the here and now, a here and now that contains the past and is faced towards the future. We can conceptualise ‘there’ for practical purposes but seem irresistibly to fall into impatience in terms of the journey ‘there’. Very wanty and aggressively acquisitive. But the uniquely human pathology of speed for speed’s sake disconnects us, and we move out of our bodies and into our heads, seemingly unaware that this is even happening, so we continue to get faster and faster, rushing to get ‘there’, never arriving, only starting to feel something’s wrong when we develop mind/body illnesses…

Speed goes with anger. Real life is never impatient, for when we’re impatient it’s a kind of haze that fades life out. Rushing often leads to things being done less well than when they’re done in their right course, i.e. mindfully and completely. There is a nexus here between ‘objective’ research and human psychology. To rush is to be out of accord with the Tao.

Speed = Good masks the mystery of the concept, how the hypothesis comes from and through intuition. This masking can be seen with the next big problem:

INHERENT LIMITS OF DATA ITSELF

There are two limitations inherent in data that can never be escaped, namely data saturation and overfitting.

Data saturation refers to the point at which collecting more data becomes futile. Knowing when we’ve reached that point is where the human comes in. Data collection can be an attempt to sidestep human subjectivity in a controlled, contained way, in the hope of bringing through new objective theories. But the human is nonetheless a requirement in the decision-making process – how do we decide which data to collect, and when to stop collecting? Naturally there is much disagreement in various fields of science on this topic – as there really ought to be, in order to keep science ‘well oiled’ and working properly. But what is not in doubt is that data saturation is a serious problem in AI research – for serious researchers, at least. The manbaby Musks of this world seem to think that by ignoring it it’ll somehow go away. Close your eyes, and the world disappears! Meanwhile, back in the real world, we see here an echo of the problem that invokes the Great Asymptote – ever deeper diving into ever more detailed data doesn’t actually lead to the intuitive leap, because the human is wholly absent from the machine calculation. But the ‘just shut up and calculate’ crew claim that the Beatles were wrong – pragmatism is all you need. Love, intuition and all that woolly stuff is all very nice (if a bit girly), but we’re talking Clever here, so move over you intuitive silly-minded wusses and let’s get calculating.

The assumption that number crunching is all that is needed is now widespread. It has its own culture, its own set of beliefs – dataism. You can find books by low-truth high-sensationalism authors such as Yuval Noah Harari in WHSmiths. They seem very popular with a lot of science fans despite not being that scientific when examined more closely. They always seek to put us in our place somehow by appeals to science without being particularly scientific. (Years ago we had to make do with Desmond Morris’s The Naked Ape, which again is full of contentious ‘science’ while being at heart more a matter of ‘us humans think we’re so clever but really we’re just apes’ stuff.) Amongst various other important-sounding futurey things, Harari’s books claim, based on nothing particularly scientific, that we’re ‘clouds of data’. (As for how scientifically valid YNH’s writing is, you might want to read this.) But what is it that watches the data? What is it that perceives unbroken, unsampled movement? What is it that understands the data? What is it that separates the data into true and false?

For but one example of the vulnerabilities that can occur as a result of data saturation, consider how the AlexNet large-scale visual recognition system was tricked into confirming with high probability that images of a school bus, a temple, a praying mantis and a shih tzu were all ostriches. This was done by making tiny, well-chosen changes to the pictures’ pixel values. Those changes amounted to a minuscule quantity of data, and the altered pictures looked identical to the human eye, yet those nudged pixels rendered an award-winning AI recognition system powerless. But life too can change drastically due to its own ‘few pixels’ – it’s inherently got a factor X that comes seemingly from nowhere, disasters or happy accidents that come from leftfield, things that suddenly break, or go unpredictably wrong in a way that’s never happened before. Humans appear to have something of a knack for dealing with the unexpected, up to a point at least. We don’t need to learn all possible scenarios by rote in order to deal with them. But what we’re actually discovering as time goes on is that there is no way in principle any AI system can deal with these scenarios.
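
The mechanism behind this kind of trick can be sketched in miniature. The toy below is not AlexNet – just a hypothetical linear classifier applied to a made-up ‘image’ of 100,000 pixels – but it shows the arithmetic of the attack (the ‘fast gradient sign’ idea): nudge every pixel by an amount far too small to see, each in whichever direction pushes the classifier’s score the wrong way, and the tiny nudges add up across the whole image into an enormous swing.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100_000                                # number of "pixels" in our toy image

# A toy linear classifier: score > 0 means class A, score < 0 means class B.
w = rng.normal(0.0, 1.0, d)

# A made-up "image" with pixel intensities in [0, 1], nudged so the
# classifier scores it confidently as class A (score of about +3).
x = rng.uniform(0.0, 1.0, d)
x = x + w * (3.0 - w @ x) / (w @ w)

# Adversarial step: move every pixel by at most 1% of the intensity range,
# each in whichever direction lowers the score.
eps = 0.01
x_adv = x - eps * np.sign(w)

print(w @ x)                       # ≈ +3: confidently class A
print(w @ x_adv)                   # large and negative: now "class B"
print(np.max(np.abs(x_adv - x)))   # no pixel moved by more than 0.01
```

The score drops by eps × Σ|wᵢ|, which grows with the number of pixels even though each individual change stays imperceptible. (The nudged intensities may stray fractionally outside [0, 1]; a real attack would clip them, but the additive arithmetic is the point.)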

This has turned out to be a serious problem for self-driving cars, another area of scientific research always over-optimistically hyped, always generating made-up-out-of-thin-air predictive timeframes that imply some kind of sentience or quasi-sentience 10 years or so in the future – predictions that of course never materialise in the here and now. In 2018 in Tempe, Arizona, Elaine Herzberg died because she wheeled her bike out into the road with bags on the handlebars, an eventuality that wasn’t in the coding of the recognition sensors of an Uber self-driving car. There was also the issue of the safety driver being distracted by watching a TV talent show on her phone. Safety driver? Funny how they’re always there – looks like they always will be, too. Which drastically curtails the whole idea of self-driving cars in the first place. Yet so many supposedly scientifically-minded types are so gullible here, so eager to parrot tech bro hype. For some reason. A human always needs to be present at some end point to breathe life into the data, to bring it into the world of human life, with their human mind. The very fact that this is so is taken as something to overcome, perhaps because it’s a bit of an insult to those seeking to eliminate human unreliability from shiny pure tech. But this necessity for human involvement reminds the science-gullible that there’s something interesting about the human mind that resists being pinned down by science as it presently is, and they get uncomfortable with that. Which hints at a kind of worldview that perhaps is not particularly healthy, or human…

As for overfitting, this refers to when a mathematical model contains too many parameters for the amount of data.  This results in a reduction in the ability to predict future behaviour – the very thing that data collection is meant to be for.  Some of that data may be noise, some may be crucial, but we can’t tell merely by digging ever further into the measuring – see the horror cascades below.  The fuel for progress remains intuition.
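
Overfitting is easy to demonstrate in miniature. The sketch below (made-up numbers, illustrative only) fits the same ten noisy samples of a straight line twice – once with a two-parameter line, once with a ten-parameter polynomial flexible enough to pass through every training point exactly. The flexible model ‘explains’ the training measurements perfectly, noise and all, and for precisely that reason predicts held-out points far worse:

```python
import numpy as np

# Ten noisy samples of an underlying straight line y = 2x.
x_train = np.linspace(-1, 1, 10)
noise = np.array([1, -1, 1, -1, 1, -1, 1, -1, 1, -1], dtype=float)
y_train = 2 * x_train + noise

# Held-out points between the training points, with their true values.
x_test = (x_train[:-1] + x_train[1:]) / 2
y_test = 2 * x_test

def mse(model, x, y):
    """Mean squared prediction error of a fitted polynomial."""
    return float(np.mean((np.polyval(model, x) - y) ** 2))

simple = np.polyfit(x_train, y_train, deg=1)    # 2 parameters: about right
flexible = np.polyfit(x_train, y_train, deg=9)  # 10 parameters for 10 points

# The 10-parameter fit hugs the training noise almost perfectly...
assert mse(flexible, x_train, y_train) < mse(simple, x_train, y_train)
# ...and is far worse at predicting anything it hasn't seen.
assert mse(flexible, x_test, y_test) > mse(simple, x_test, y_test)
```

The extra parameters memorise the noise – which is exactly the loss of predictive power described above, and no amount of further digging into the same measurements can tell you which wiggles are signal and which are noise.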

Data on its own ultimately always kills itself. To the excessively rationalistic this creates a kind of horror that we are lost in an ultimately meaningless universe. To those who retain their awareness of the numinous, it’s a huge relief, and results in deeper understandings of our freedom and the obligations that come with it.

Without a theory there can be no hypotheses to test, to collect suitable data for in the first place, and to have a sane idea of where to stop looking, when to stop counting.  But something about the creation or discovery of theory doesn’t fit with mere calculation.  Theories never originate from machine code, they come from intuitive insight.  Denial of this grounding, foundational truth is ultimately what has led to this ever-faster onward AI charge.  There is no concept behind the idea that more data will lead to more insight – all we have is empty speed as if that will for some reason make things happen when the data’s big enough.

This all creepily displays back to us our crisis of meaning, where we can’t tell what’s significant, or if anything at all is significant, or if there’s such a thing as morality, or if anything or nothing is true – because we confuse counting with meaning.

That clue leads to the heart of science, to an outstanding feature of the mind itself that AI is currently unable, in principle, to deal with – and which it is unlikely ever to be able to deal with, barring an ultra-profound scientific breakthrough that would inherently change the character of science.

This sounds grandiose but only because the understanding of what science is has become so distorted. There are legions of misguided people out there who simply keep repeating that science will somehow sort this like it sorted so much else that initially seemed mysterious, as if just repeating that will draw attention away from the billions wasted so far on getting precisely nowhere with AI in terms of either creativity or conscious self-awareness or real world common sense. How on earth could common sense knowledge turn out to be a colossal granite-hard problem for AI research when science is supposedly so pragmatic and down-to-earth? Hold to that, for it’s important…

Meanwhile, let’s examine scientific method a little, to see if the ‘science has solved other mysterious things so it’ll sort this as well’ crew have a decent understanding of how science actually progresses. 

SCIENCE AND ITS NEAR ENEMY

Twenty years or so ago, I remember reading snide comments in books and online about how science comes up with truths, with cold hard facts (never soft fluffy facts), unlike those woolly vague philosophers, so I take some pleasure from seeing that just as I thought would happen, AI researchers and neuroscientists alike are beginning to look to philosophy, and even taking philosophers on board for help (see the Introduction to Phenomenology book referenced in the Cynthia Cruz article for various examples of current scientific research inherently including concepts from philosophers like Bergson and Heidegger). More insidiously, philosophy is getting into scientists’ minds directly as scientists are increasingly realising that they have to consider philosophical aspects of consciousness if they want to progress further and more fully, and this is resulting in changes of worldview. Funny how something as supposedly epiphenomenal, trivial and vague as consciousness turns out to have the heft to expel false concepts to the point where new worldviews are created – worldviews that change scientific behaviour. One example would be erstwhile reductive materialist Christof Koch changing his mind and now espousing a version of panpsychism. It would be lovely if more of the ‘just keep calculating’ brigade could look up from their coding and take note of how the same sort of thing keeps happening to AI researchers and neuroscientists, and perhaps have a little wonder as to what that might indicate. This would require looking past the sludgy floods of bullshit hype, of course. 

Those sludgy floods are (yet another) example of a striking phenomenon of this world whereby there is always a true version of something and another version that seems to follow it around that somehow parodies it, feeds off it, or is otherwise fake while resembling the real thing. This concept is found in Buddhism as the near enemy – pity instead of compassion, for instance.

So let us assume that there is a science-1, which is true science, and a science-2 which is its near enemy, a fake that looks like the real thing but is ultimately lacking a centre, a ‘truthiness’ (a neatly imprecise term sometimes used by scientists who have more of an idea of the inherent necessity of intuition as fuel for the scientific enterprise). Despite being fake, science-2 is commonplace and often presented as, thought of as, and defended as science-1. This may seem a bold claim, as science is supposed to be all about truth, about the way things really are. But the whole point of the near enemy concept is that it often appears to be the genuine concept it’s parodying. Parody is only possible when there’s a real version. To turn to a western concept, we have all the stories of Satan’s con tricks, which work by way of surface-attractive counterfeit. And there’s the more widespread idea that the Earth is the Trickster’s domain.

We know the difference between compassion and pity, and which is the right, the ‘good’ version. We know the difference between cruelty and kindness, we know the difference between good behaviour and bad. We know when somebody’s basically a decent person, or not. We know the truthiness of truth. Yet we can’t easily explain why this is so. Still we know. We may be initially mistaken, we may come to a kind of mixed picture, but still we have the true and the fake, the good and the bad – and this is how intuition works. Some kind of pondering or consideration based on real-world, embodied experience. Compare and contrast with the watery vacuity of ‘it’s all data’.

But how do we discriminate between science-1 and science-2? Let us assume the commonplace description of science as being all about objective measurement is true. Cold hard facts all the way. With a heavy dose of ‘you may not like the facts but the facts don’t care about your wimpy, sentimental human-ness so you’re gonna have to man up, cissy’. Which isn’t objective at all, obviously, but offers wonderful opportunities for people (usually young men, it must be said) to posture online (see also Nietzsche when it comes to philosophy). Science is cold hard facts, all about measurements, not about humans, apparently. This is the science of number crunching – and not just number crunching, either. In the (high-quality, recommended) pop-science book The Knowledge Machine, author Michael Strevens writes about research scientist Andrew Schally needing to ‘process’, as in grind up, the hypothalami of 160,000 pigs in order to obtain less than a milligram of the hormone LRF in order to discover its structure. Here we already see something of the Machine, mechanical and grinding, non-biological even as it engages with the biological. And as Strevens addresses, something is particularly tedious about it too. Vast quantities of scientific work, the large majority of it, consist of repetitious lab work, testing and testing, hours a day, on and on for years, with no guarantee of any sort of success. Strevens compares the results of tried-and-tested, truthful scientific research to a coral reef, which with its reference to (albeit microscopic) skeletons also nods to something non-living about the whole enterprise. (As an aside, the Finnish word for computer, tietokone, is a compound consisting of ‘tieto’, knowledge, and ‘kone’, machine. And it’s quite surprising to see that there currently aren’t any minimal wave bands called Knowledge Machine.)

This grindingly dull, uninspired aspect of scientific work, Machine work, is of course widespread in many other fields. (And much office drone work has a similarly lifeless, repetitious, robotic quality. More on this later.)

And this scientific grind has of course given rise to the thought ‘Wouldn’t it be nice if we could use our ultimate fast repetition machines to speed things up and reduce the drudgery?’. Which is where computers come in. And speed=good, after all. 

‘Speed = Good’ is perhaps so attractive due to complexity. Indeed science is currently at a point where the vast complexity of the human organism as a whole, and in particular the brain, could perhaps be seen as taking on a somewhat daunting aspect. Biosciences in general and neuroscience in particular seem to be getting mired in extreme complexity at all turns due to the ever-burgeoning realisation of the large-scale interconnectedness of living biological systems. To take but one example, a team of molecular biologists in Brussels decided to investigate the interactions between four cellular cascades (Dumont, Pécasse & Maenhaut 2001). They discovered that ‘With four cascades of five steps, the number of possible positive and negative interactions is 760. This does not take into account the multiplicity of different isoforms of proteins at the different levels of the cascades, the multiplicity of effects of each intermediate in each cascade, the stimulation by a cascade of the secretion of extracellular signals, or feedback or feedforward controls within cascades. In fact, so many interactions are now described (everything does everything to everything) that it is difficult to reconcile this concept with the known specificity of action of signals in each cell.’ It’s been nicknamed the horror cascade, for some reason.
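
The paper’s figure of 760 can be recovered with a little counting – on the reading (my assumption, not spelled out in the quote) that any of the twenty steps can act on any other step, and each such ordered interaction can be either positive or negative:

```python
# Four cascades of five steps each: 20 steps in total.
steps = 4 * 5

# Every ordered pair of distinct steps (a step acting on a different step)
# can interact either positively or negatively.
interactions = 2 * steps * (steps - 1)

print(interactions)  # → 760
```

Note that this count only matches if interactions within a cascade are included as well as those between cascades – and, as the authors say, it is before isoforms, secreted signals and feedback loops multiply the possibilities further.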

Four cascades. How many cells would, say, your body contain? Then there’s the active way that DNA behaves in living organisms in an environment. And of course the brain, the most complex object in the known universe, estimated to contain 86 billion neurons, each possibly with tens of thousands of synaptic connections. Which is where we start to see the outlines of the near enemy… The brain’s so complex the answers must be in there somewhere, mustn’t they? So let’s use AI to speed up the quest. Genomics and neuroscience, perhaps the two sciences most affected by trying to find a way through extreme complexity, are natural candidates for AI research. Which in itself is actually fine – as long as we understand our conceptual limits.

The near enemy doesn’t even generate itself from science-1 per se. Science-1 consists of the hypothesis aspect, which we can term science-1(H), which gives rise to the ‘FAFO’ testing aspect, science-1(E), where E is for experiment of course. Experiments can sometimes find themselves enhanced by human insight, which is a particularly unmediated form of science-1(H) fuelling science-1(E), right there in the lab, seemingly spontaneously. But science-2 claims data is all you need these days, confusing science-1(E) with some kind of non-human number crunching, as if insight can emerge from pure data. That very assumption is the problem.

The reason why science-2 is fake is precisely because we have no working models of consciousness. And we have to have some kind of working model to get the hypotheses and testing of science-1 up and running – it’s not an option. This is how we can spot science-2 – it looks like science but it’s more that the lights are on but nobody’s at home. It’s a vacuous parody lacking an essential interiority found in the real thing.

Yet somehow the inanely overconfident idea that we can discover or generate valid hypotheses by pure data crunching has got a lot of AI research in its talons. This false (E) = (D) view – experiment reduced to mere data – has already resulted in an incomprehensibly vast waste of money and resources, none of which has remotely got anywhere near the breakthroughs that have been sought for so long. It’s a fool’s expedition because we don’t even know what the breakthrough could be. There are no hypotheses, unless you count a bunch of ideas that various groups of scientists and researchers just argue over amongst themselves. Scientists argue over hypotheses too, of course, but hypotheses lead on to experiment design, to research.

Hypotheses are in themselves somewhat strange. Where do they come from? They can get people to believe, or at least entertain, all manner of weird and wonderful ideas. This seems to be linked with the way that radical hypotheses in science are so often slammed as ‘woo’, cranky, bizarre. There is a trope so often brought into play along the lines of ‘yes, while scientific breakthroughs slammed as nutty have often turned out to be right in the past, far more that were attacked as peculiar turned out to be nonsense’. But this is back-to-front. The trope puts the emphasis in the wrong place. The issue is more that for a breakthrough, especially a deep, wide-ranging breakthrough, to be valid it is necessary that it contain something of the strange, even if that of itself is not sufficient for its validity. The attacking of breakthroughs tends to veer towards the idea of ‘rationality’, of being able to clean out the weird, as if progress could be made without it, purely by following rational processes… and here again we start to see the near enemy. The fear of the outlandish does have a role to play in preventing wastes of time – but where does it come from? And why is it so often so superior and contemptuous? And why is it a matter of saying ‘well this time your weirdness worked, so we’ll make an exception for it this time, but…’ over and over and over again for all genuine breakthroughs? This implies that next time, pure rationality will somehow sort us out, no woo required, while ignoring the necessary strangeness of true breakthroughs. This is a kind of mindset that refuses to look at the strangeness of intuition, of the creativity inherent in coming up with theories or hypotheses, arguing instead that all that stuff’s ‘woo’. Science-2, the fake one, insists that FAFO on its own will drive science forward. As if.
With regard to AI we can see that so far at least, this has very expensively not been the case – but the built-in ‘optimism’ (in fact a self-replicating design flaw akin to a machine not having an off switch) means that the ‘it’s just round the corner’ aspect continues unabated, which in turn brings in sunk cost fallacies, which keeps it all going, and going, and going…

(Science seems to regard its woo-phobic aspect as really good for the way it stringently strips out timewasting concepts. Well up to a point sure, but a bit more modesty here wouldn’t go amiss. Consider eugenics, for example – commonplace amongst bien-pensant intellectuals on the left for decades, and would still be today if it wasn’t for the Nazis. Indeed as a thought experiment it might be worth considering an alternative history where Nazism never happened – is there anything in science per se that would have raised any ethical concerns with eugenics? I haven’t been able to spot anything myself. Oh sure there’s lots of ethics around – but when you love science and buy into ‘it’s all just self-replicating DNA in a meaningless universe’ and scoff at those silly-minded religious sorts and their backwards superstitions, and you have the ever-growing worship of data and that entirely genuine desire to make life better for everybody, and you’re not able to give any kind of account of morality other than preservation of genes… well… try running with those ideas for a while…)

Without a theory, there can be no hypotheses to test, to collect data for. And something about the creation or discovery of theory inherently isn’t found in mere calculation. Theories never originate from machine code; they come from intuitive insight. But because the origin of theories is not out there in the light of objective reality, it’s easy to deny that there’s even an issue. One common denial is that the place in the mind where theories come from is ‘just’ the subconscious. This implies that it’s really merely the same process as our rational, analytical intellect, which makes it all seem less mysterious, and thus no doubt less threatening to the supposedly ‘scientifically’ minded. But this glosses over the curious coincidence that the sub/unconscious aspect of the process just happens to be where the key difference in ratiocination lies. How convenient. The mystery remains. There is no concept behind the idea that more data will lead to more insight – all we have is empty speed, as if that will for some reason make it happen when the data’s big enough.

Science-2 eventually meets the brick wall of reality and gets smashed up by it, but somehow Terminator-style rearranges itself to continue its pestilential attacks. Which brings us to the next issue:

ANALOGUE vs DIGITAL, SOLVE vs COAGULA

This has cropped up before on this blog, in terms of ∑ versus ∫.

On a more esoteric (or subjective) level, we appear to have forgotten that our mind has analytical and intuitive aspects.  This could perhaps also be expressed neurologically as the left hemisphere and right hemisphere of the brain.  There is a fair bit of noise around concerning how this particular dichotomy is supposedly false, but as is the way with science things move on and the left hemisphere/right hemisphere (LH/RH) hypothesis is looking to be a goer again (see Iain McGilchrist’s The Matter With Things if you harbour any doubts on this).  But to avoid getting tangled in arguments let’s for now use a modified take on Jung’s concept of solve et coagula, where solve is intellectual analysis, focus and clarity and the like, and coagula is intuition, seeing as a whole, and so on.  The coagula term needs modifying however, as the alchemical term coagula means ‘solidify’, whereas for whatever reasons in practice our human mind has it the other way round, solidifying the solve, turning analysis into coral reefs of dead data, while our intuition, our coagula is of something that flows.  You never step into the same river twice – reality is not static. Yet we have fallen in love with our measurements, our measurements that act against the flow, because they help us make tools to do stuff, and that is automatically good (apparently).

Why exactly we solidify what we have ‘dissolved’ into data, and view the intuitive as somehow vague and needing to be pinned down, is another matter.  Consider the various myths of the Fall, and consider how precisely because that Fall can be thought of as a takeover of the heart/intuition by the head/intellect, the ways back out of the mess we’ve fallen into must therefore be communicated through intuition first, intellect second, and that the intellect will nonetheless constantly seek to barge in and take over, thus trapping us again.  It is a vulnerability of the intuitive that the intuitive must include the rational – which then constantly seeks to override intuition, to take its insights and reduce them to feed the arrogant, insecure ego. This hijacking process perhaps suggests why we appear to be drifting off into a pop-science-2 dreamworld where solve has all the answers that we could ever need to all our questions if only we’d just keep pressing on, further into the dream, sinking ever further downwards into the bottomless abyss of complexity while telling ourselves things are getting brighter because we’re getting rid of ‘superstition’.  We are in danger of forgetting that solve may have brought us the many wonders of science – but only when it’s been in the service of intuition, of ethics, of our sense of value, of our sense of meaning.  Without the warmth of the heart, of our inherently compassionate nature, science immediately becomes a blitz-strength toxin.  The very fact that this is so contains in itself a clue to our way out. The very neutrality of science with regard to ethics shows that science per se cannot solve issues found in subjectivity – including the social, the political.  The kind of issues that have their own fields of research that are snootily dismissed as ‘soft sciences’ or not even science at all by those labouring in the illusion that ‘science’ can solve everything.  
Solve may be necessary but it isn’t of itself sufficient for true progress in any area of life.

Henri Bergson wrote uncannily presciently about the excessive solve, data based approach 100 years ago, but more positively he also made a valiant attempt at an alternative worldview, an alternative metaphysics, that would at least show some promise for both philosophy and science, creating – or rather restoring – the bridge that used to exist between them which then somehow got badly degraded from the second half of the 19th century onwards. In doing so he gives us a flavour, or a feel, of what an analogue approach to consciousness might be like. There is a tone of metaphor here, of provisionality, but also therefore of a certain philosophical power of insight. Science arose from philosophy, was birthed by it, and we can thus call philosophy the mother of insight, and we thus must point out that this ‘female’ aspect leaves it vulnerable to violation by aggressive, inhumane ‘masculine’ intellect, showing its negative principle of domination, control, violence. This shows in modern discourse as patronising contempt for philosophy, the assumption that philosophy must bend to science instead of the other way round (which latter way is of course the right way), and just as violators love to laugh at and mock their victims, so we see the most precious aspects of the human scorned, held in contempt, maliciously parodied, by a sadistic, bitter, loveless intellect, which on one level only acts in that way out of a deep fear. This badness skulks in the shadows, creeping around and about in the background of the discourses of scientism, semi-hidden because its proponents do know somewhere inside what they’re doing and how evil it is. This stuff is therefore deadly serious. So bear this in mind when reading the following.

In his Introduction to Metaphysics, Bergson zeroes in on memory as a particular mystery. He describes its operation as more like the fixing of an image on a photographic plate than the collation of a series of snapshots. This is crucial. The ‘snapshots’ concept is of course ∑. The Asymptote is where it has brought us. It’s also brought us some very cool, and life-changingly good, technologies along the way, but we’re still waiting for it to produce anything of merit in respect of consciousness per se, or of the nature of AGI, and assuming that it can and will do so has cost us a lot of money and research hours. The fixing of a photographic plate is of course analogue, and using this analogy for memory dodges the serious problems caused by the infinite gap/total qualitative difference between the summing, ‘bitty’ ∑ and the integral ‘whole’ ∫. If memory were a series of snapshots, Bergson asks, how many millions of images would need to be summed to create a specific memory? How exactly does nature decide how often to take those snapshots in a way that will sum them sufficiently? Obviously nature doesn’t do this. Furthermore, Bergson points out that the various forms of aphasia show that despite an apparent disconnection from memory, emotion or shocks or jolts (including physical jolts) can suddenly bring back memories supposedly lost forever. This suggests the brain is not a massive storage device. Bergson points to the way we actually seek to remember things, which is more a kind of casting about involving both mind and body. This has an echo in Michael Polanyi’s classic metaphor, in ‘The Tacit Dimension’, of progressing the way a blind man learns to use a cane: by a process of feeling one’s way, which gradually forms a kind of intuitive unity between the subject and the world, the cane eventually becoming an inherent part of the person using it. Which metaphor of course is intuitive, not all about some kind of supposed pure ‘rationality’ or data.

The analogue, holistic nature of memory can be seen in an experience we’ve all had, namely redintegration. Back in 2011 I found myself attending a conference in East Grinstead, a place I had no recollection of ever having visited before. I turned a corner into the high street, only to suddenly find myself looking at a familiar scene from childhood, with the immediate realisation that this was a familiar place somehow. After my parents divorced in the mid 70s, sometimes during access visits my father used to take me and my sister around and about various places in Sussex, and in 2011 it all came back – we were here regularly once. Furthermore, I had a visual memory of being in the back of the car as we went past the fire station training tower near the high street, and automatically knew that this was a regular occurrence, and indeed knew that in some way it was a kind of ‘sign’, I think that we were going back home for tea, though that bit’s not quite clear. But the whole complex of associations appeared as a unified piece, not as something constructed out of some kind of data pixels. If I’d been asked to remember the places I went back in the mid-70s with my Dad on access visits, I’d’ve turned merely to a few well-worn memories, none of which included East Grinstead, which I would never have even remembered visiting. Haywards Heath (and various other more rural locations), yes, East Grinstead, no. (It’s not a particularly arresting scene to be fair.) Visiting a place I’d forgotten I’d ever been prompted an instantaneous re-development of the ‘photographic plate’ of my memory, involving not just the visuals but a whole set of childhood memories they were part of.

But where, and how, does life become memory?

Bergson had the crucial insight that we are analogue and our approximations of measurement, being data pointillist, are qualitatively profoundly different. Memory is of course intimately part of time – it’s through time that we build up memories, and time is analogue. Insuperable difficulties appear when we seek to account for memory while using measured time and its associated pixelated approach to brain function. Our clock time depends on repetition of division in a kind of circular schema with repeating labelling (seconds, minutes, hours), whereas our mind, our inner nature, our very humanity, depends on analogue repetition, also in a ‘circular’ schema but one which Bergson compares to the base of a cone, a base that gets ever wider the more memories accumulate in our lives, with the point of the cone an ever-onward-moving point sensing its way as we go forward, intuitively probing like the cane of Polanyi’s blind man.

Bergson uses another analogy for the way that memories are ‘fixed’, one that, being intuitive and deep, can bring on a sense of vertigo – appropriately so, given the profundity of the issue. He describes two train tracks curving in different directions that nonetheless for a time are joined before they curve away again. Where they join is where matter becomes memory. And this is a metaphor, and a suggestion of a starting point, not a call to start ever more minutely dividing up two supposedly separate substances like ‘mind’ and ‘matter’ in order to get one to somehow imprint on the other. To even attempt to think about the metaphor in those terms is to immediately fail to understand it. The whole point of the analogy is that it is a different way of thinking from that chopping-up, clock-time approach.

Bergson’s Matter and Memory outlines in minutely thorough detail the processes of trial-and-error in life, of neurological/neuromuscular circular/centripetal feedback gaining ever greater complexity as life evolves, the whole process being unbroken, flowing, developing memory somehow as it gains in complexity.   It is noteworthy in this regard how Matter and Memory isn’t ivory tower philosophy, and refers throughout to a great deal of then current neurological research and discovery. This is embodied philosophy. It’s also noteworthy that as time goes on, at least some AI and consciousness researchers are turning to philosophers such as Bergson for inspiration. Those issues outlined by Bergson at such length are still very much live. ‘Snapshot’-based research has still provided us with some seriously useful aids to the reduction of human misery so let’s not be too down on it – but as the Asymptote gets ever more sharp-edged, a pressure is developing to get something concrete to happen about consciousness instead of just pretending that we’re ‘nearly there’ or just copping out of the whole thing and saying it doesn’t matter, and that pressure is only intensifying the profound gap between where we’re seeking to be and the never-quite-there place we find ourselves with our summing approach.

Due to the trial-and-error nature of how we ‘tune in’, we gradually build up a worldview that is based on making our way in this world.  Bergson states that this way that we build up understanding based on how we live as humans means that we have a natural tendency to confuse metaphysics with what works for us as living beings – which insight potentially offers a really good, sound link between science-1 and philosophy.  But this embodied nature of memory leads to it being inherently based on potential, or virtual action in the world.  This in turn means that we get metaphysically confused, one prime example being the way that we regard Platonic ideals as supposedly more real than the real-world version.  Or indeed regard numbers as somehow transcendent. (More on this later.)

This can all be found in quite mundane aspects of life, as you would expect if this whole tuning-in is inherently based on sussing out the practicalities of life. Anybody who’s had a desk job will know how a system might be set up to deal with, say, incoming paperwork. The system starts to run in the real world, issues come up in practice that weren’t foreseen, including the occasional unusual situation or outlier, the system is modified, sometimes drastically, and this process of trial and error continues for a while until the best possible system in practice is found, ‘best possible’ not meaning perfect. (Complete with the issue of how much ‘weighting’ to give to outliers – precisely because they’re rare, how much do you change the system to allow for them, bearing in mind the various time/effort/money costs of changing systems? It’s very easy to get this sort of thing very wrong and either not notice it or deny it.) A set of provisional ideas is tested in the real world and subject to a process of modification.
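That trial-and-error loop is simple enough to sketch in a few lines of Python – a minimal toy, with all the rule names and trays invented purely for illustration:

```python
# A minimal sketch of the trial-and-error loop described above: a filing
# rule set that starts simple and gets patched as real post exposes cases
# it didn't foresee. All rule names and trays here are invented.

def route(letter, rules):
    """Apply the first matching IF...THEN rule; unforeseen mail goes to 'misc'."""
    for condition, tray in rules:
        if condition(letter):
            return tray
    return "misc"  # outlier: flag for a human, and maybe modify the system

# Version 1 of the system, designed at the desk...
rules = [(lambda l: "invoice" in l, "accounts")]

# ...meets the real world: a complaint mentioning an invoice is misfiled.
print(route("complaint about invoice", rules))  # → accounts (wrong tray)

# The system is modified: the more specific rule now runs first.
rules.insert(0, (lambda l: "complaint" in l, "customer_service"))
print(route("complaint about invoice", rules))  # → customer_service
```

The point is that the ordering fix above could only come from watching the system fail in practice – it wasn’t deducible from the version-1 rules themselves.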

And there it is again – analogue, embodied common sense, right there amidst the quotidian world of office work.  It doesn’t have to be all high-minded concepts pertaining to metaphysics and philosophy – except of course, the curious twist is that actually, once you start addressing this everyday stuff in AI/consciousness research it does… if you want to get anywhere with deeper progress in understanding consciousness, that is.  There’s a mystery in the heart of the mundane, at the heart of the human…

Science-1 works in a similar way. A hypothesis is mooted, ideas for experimental testing are put forward, and based on a kind of intuitive taking of a punt, small, ‘low-quality’ research projects check the ideas out, the lack of quality being perhaps more a matter of sample size and various technical issues to do with double or triple blinding and the like. It’s a tentative tap of Polanyi’s cane, a feeling the way in the dark based effectively on informed guesswork (not out-of-thin-air guesswork). If the results suggest (note that word) that things may be worth following up, better quality research is planned and performed. Science-2, on the other hand, will tell you that if there’s no peer-reviewed research then there must be nothing to it, whatever ‘it’ is. Science-2 implies that rough-and-ready initial scientific work isn’t really a thing. It’s all pure calculation, which error also leads it to tout some kind of supposed perfect predictivity, because with enough data points we can become omniscient. But all the scientific work that has given us high-accuracy predictivity always started as well-educated intuitive hunches and low-quality research. Scientific progress cannot happen any other way. Meanwhile Science-2 has created nothing radically new at all. All the AI projects that have helped (such as with protein-folding research) have been applied to Science-1 systems. Nobody ever created new ideas out of thin air that magically worked first time, and perfect prediction is impossible; yet perfect prediction is sought after (by driverless car designers, for example), and it’s using up a lot of mental energy and money, for nothing.

Life ‘tunes in’ through repetition.  To learn to play a musical instrument well takes years of practice, even (or especially) for prodigies.  Now consider how this applies to neurotic overprediction.  It seems strangely analogous to the way repetitive practice is a must in order to ‘get the hang’ of music, or art in general – but also basic life survival and social skills.  Skill contains the learning but not just the learning.  It don’t mean a thing if it ain’t got that swing. 

The ‘spirit’, the swing, differs from ‘outlier pixels’ in one key aspect however – it is found throughout the melody, the art, the joke and the way it’s told, the context.  Chasing after outlier pixels as if collecting all of them will catch spirit only makes spirit ever more elusive. Yet Big Data AI is deliberately set on a path of trying to capture, or even just convincingly mimic, that swing – the swing of our everyday human understanding and our more unusual intuition and creativity – through ultra-exact predictive techniques, as if the boring office filing system could be implemented immediately and perfectly, with no testing in the field.  And of course this can never happen, so we’re all being dragged into online bot training for systems that must have real-world feedback in order to ‘learn’. One tiresome example of this would be the way Facebook introduced all manner of ‘profanity’ or abuse filters that landed people in Facebook jail for, say, using the word ‘ho’ at Christmas as part of Santa-based jollity posts. As of the date of this article, it’s still hiding and flagging as ‘potentially offensive’ posts that are anything but. These bots have no sense of humour either, and are unlikely ever to allow for it. And when you get a Captcha with images of, say, lorries that include ambiguous images and you have to try again, that’s bot training too. But the random unforeseen, the factor X, is also what stops prediction from ever being perfect, as prediction in principle misses the spirit.  Data saturation and overfitting are but externalised examples, found in AI research, of a more general power-seeking, neurotic mindset that tirelessly seeks to finally pin things down once and for all.  Which you can do by killing things, but then they’re permanently out of context.

And again, if you think perfect prediction is possible, consider the possibility that given the dictates of what we already see outlined by the Great Asymptote, as we seek ever greater refinements of prediction, due to the impassability of the sought-after breakthrough wall, AI may instead just warp and bend and spread over time, and thus by the time it reaches that sought-after point of perfection, it will have changed so greatly that it would consist qualitatively of what we already see – a mix of rote repetition and deeper learning that can’t actually ‘predict’ in the way originally intended as it still lacks the human sense of context. 

To return to the difference between the analogue and digital approach to science and philosophy, in ‘Matter and Memory’ Bergson uses the analogy of a poem and how analysing it word by word cannot in principle then result in the poem’s meaning being discovered, or recreated in the mind of the reader.  We’ve all seen what happens with Google Translate. In other writings he uses the analogy of music, whereby analysing the notes of a melody can never explain anything about the melody – all it can provide is separate data points.  The simplicity of this point paradoxically makes its profundity easy to overlook.  The melody exists across the entire melody.  The hilarity of the comedic scene I caught by accident on the TV inherently includes the whole scene, the whole episode, to some extent the whole series, due to the way the series as a whole sets up various kinds of ‘resonances’ or characters or vibes that then form part of the new comedy situations.  Life’s meaning is only found in life as a whole.  But of course that whole goes back, and back, and back… into the vast echoing abyss of history.  My first cat’s name was written nowhere yet was somehow in the minds of the people who looked after her, who met her. It’s still in my mind. So is my second cat’s name, which I’ve never told anybody and never used as the answer to a security question. You’ll never know what it is. We know we’re born, we know we’re going to die, but we’re inherently a part of something unimaginably vaster than us that nonetheless somehow gave rise to us. ‘Ex nihilo nihil fit’ – nothing comes from nothing – yet existentially that is just what we all apparently did. And if you grab too hard with your analytic intellect, you risk the unbearable lightness of being as you try to sever yourself from that vast organismic growth and flow, losing its counterbalancing weight that normally keeps you grounded, suddenly grasping for the solid in an existence where everything moves, launching you into panic –

And more prosaically, you risk throwing a lot of computer power at the wall with little to show for it. 

Bertrand Russell and George Santayana are interesting here. Russell disagreed for years with Santayana’s view that the beautiful panoply of mathematics was probably after all based on what we need to live, but in later life changed his mind and admitted to Santayana that he’d come round to his point of view. Which is ironic, considering Russell’s earlier slamming of Bergson’s philosophy – a philosophy which suggests a view so similar to Santayana’s.

But Bergson’s philosophy puts the idea that numbers are based on what we need to live in its proper place. As memory is virtual, and exists in consciousness, there is a strange squaring of the circle here whereby numbers being in consciousness gain a kind of transcendentality due to their existence in virtual memory. Numbers may be based on the practicalities of life, but the practicalities of life become suffused by the mysterious flame of consciousness, which is transcendent in a way which we know deep down inside yet find tantalisingly difficult to even define.

Of course repetition, so innately linked with counting and therefore number, is also inherently found in time, that great mystery that we all know so well yet cannot easily give an account of to ourselves.

Consciousness is somehow, we still don’t know how, not a matter of mere adding.  It is not found in numbers, or division of any sort.  It transcends all of that, completely, all the way through.  Could this perhaps be a kind of bridge between the Platonists and numbers-are-just-what-we-need-to-get-by-in-the-worldists?  Note that the latter like to be skeptical, to reduce, to remove ‘specialness’, to deny transcendence is even a thing.  And note how the Platonists posit an unchanging transcendental eternity of forms.  Now consider that coagula in truth is not static, and how flow, Bergson’s duration, is what contains counting, and how in the inyo, the Japanese yin/yang, the feminine principle encircles the male and contains it, and consider now how when two electrons approach each other perpendicularly (and here one axis represents counting and one axis represents flow), the counterintuitive result is that they start to dance, spiralling around each other…

The systems that tune in to what works have this remarkable paradoxicality that they are precise, and they are not, they deal with discrete quanta – of letters coming in that need filing in the right place, and perhaps then moving to another place dependent on ‘IF’… ‘THEN’ type instructions which themselves can have self-modifying loops to take various factors into account, for example – yet they also flow. There are bits in them – bits of paper in an office, say – but there is a seamless flow too. This curious dual nature of points and flow leads us now to the concept of…

THE TAU

This is defined as the ratio of the size of a projected image to the rate of change of the image’s size and is an important aspect to understanding optic flow. Optic flow is for example found when playing a first person video game, or when piloting a plane to land. It’s called ‘centrifugal’ as everything’s coming straight at you from a virtual central point. Here’s a music video that starts with centrifugal optic flow:

Laurie Anderson – Sharkey’s Day (youtube.com)

Tau is a feature of the light available at the eye and has been shown in animal research with gannets to guide their (diving) behaviour without internal calculations (David Lee 1980, Lee and Reddish 1981). Tau doesn’t give information about absolute distance, it’s about time-to-impact. Note the inherent importance of time to this concept, as opposed to some kind of timeless datapoint. It is available at the retina – no need for complex calculation. That’s another key thing to note. It’s crucial to you when you cross the road, as you need time-to-impact – not ‘how far away in metres is this car which is going at x km/h and what is its acceleration if any so I can calculate’. Tau is immediate. Tau is a ratio that exists in duration. Tau does not depend on epicycles, on adding ever more bits of data to bits of data. We are at one with the tau.
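The arithmetic behind this is worth seeing once. Under a small-angle approximation, the ratio of an object’s angular size to its rate of expansion works out to exactly the time-to-contact, and the observer only ever needs the two retinal quantities. A sketch (the object size, speed and distance below are purely illustrative, used only to simulate the retinal signal):

```python
# Time-to-contact from retinal information alone - a sketch of Lee's tau.
# tau = (angular size) / (rate of change of angular size); no distance or
# speed variables ever reach the function itself.

def tau(theta, theta_dot):
    """Estimated time-to-contact in seconds, straight from the image."""
    return theta / theta_dot

# Simulate the retinal signal for an object of size s at distance d,
# closing at speed v (small-angle approximation: theta ~ s / d).
s, v, d = 2.0, 10.0, 50.0
theta = s / d                 # angular size 'now'
theta_dot = s * v / d ** 2    # expansion rate (since d shrinks at rate v)

print(tau(theta, theta_dot))  # → 5.0, i.e. d / v - but tau never saw d or v
```

Algebraically the object’s size cancels out of the ratio: (s/d) / (sv/d²) = d/v. That cancellation is the whole trick – the metric quantities never need to be reconstructed.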

Ponder now that tau is simultaneously mathematical – a ratio – yet it moves. Data of itself can only bring us static points of valuation – to even speak of data as flowing is to fatally subvert the essential deadness of mere number counting, yet instead of seeing this, far too often animated data is presented as if it was really flowing.  When watching film or TV we don’t see frames per second, as our minds provide the flow. Platonic ideals are lifeless yet number lives, number is intimately part of how we live our human lives – but not in a reductive way, as our concepts of number (and of species, of types of things in general) come through repetition occurring in lived duration, not just abstract time. As with the tau, our sense of mathematical ratio is inherently of living biological life, yet is found in something entirely transcendent of that life – it’s contained in consciousness.

And in this paradox we see a fingerprint of our truest nature. And again it is to do with our being in the world, our human living, and all living on this earth, while also knowing something else.

Intuition comes first, the flow envelops the discrete, not the other way round. 

What happens when we take a ‘millions of snapshots’ approach, with its ever-increasing need for more data labelling?

THE FRAME PROBLEM

Always adding a bit more data to correct something left out, we end up with epicycles added to epicycles, a known sign in science that the model is probably wrong, and possibly completely wrong. I still remember seeing on the telly back in the 70s comedian Kenny Everett proudly displaying a gadget from Japan for sticking to the edge of a toilet seat so that you wouldn’t need to touch a potentially dirty seat with your fingers. But as Everett gleefully pointed out, once you’d touched that gizmo, you’d need another gizmo to attach to that in order to avoid getting your fingers dirty, then another one to attach to that… The younger me found it hilarious, but also fascinating.

Here’s a typical version of the problem from a paper called ‘Cognitive Wheels: The Frame Problem of AI’, by a certain Daniel Dennett no less, published all the way back in 1987 (in ‘The Robot’s Dilemma: The Frame Problem in Artificial Intelligence’, ed. Z. W. Pylyshyn, pub. Ablex). Dennett asks us to imagine a robot that needs to move its spare battery out of a room in which a bomb is about to explode. The robot formulates a plan to roll a cart with its power supply out of the room. But the bomb is also on the cart. How can the robot assess what to do given that:

  1. Moving the cart would change the position of the power supply
  2. Moving the cart would change the position of the bomb
  3. Moving the cart would change the position of the robot
  4. Moving the cart would change the position of the shadow cast by the cart
  5. Moving the cart would not change the colour of the carpet
  6. Moving the cart would not change the mass of the cart
  7. Moving the cart would change the distance between the cart and the Eiffel Tower
  8. Moving the cart would change the distance between the power supply and the Eiffel Tower…

… and so on, forever. The point being that the robot needs to know which side effects of its actions are or aren’t relevant, in order to consider only the relevant ones. This was in 1987, and of course Dennett still insists that consciousness doesn’t exist, and still we have the frame problem, pristine, untouched despite vast increases in computing speed and power.
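The explosion Dennett is gesturing at can be made concrete: in a naive logical representation, every action needs an explicit statement for every fact it does *not* change, so the ‘doesn’t change’ bookkeeping alone grows as actions × facts. A toy sketch (all the fact and action names are invented, loosely echoing Dennett’s list):

```python
# The frame problem in miniature: count the explicit "this action does
# NOT change this fact" statements (frame axioms) a naive planner needs.
# Every fact and action name here is invented for illustration.

facts = ["battery_position", "bomb_position", "robot_position",
         "cart_shadow", "carpet_colour", "cart_mass",
         "cart_to_eiffel_distance"]

actions = ["move_cart", "lift_battery", "open_door"]

# What each action actually changes (everything else is a non-effect).
effects = {
    "move_cart": {"battery_position", "bomb_position", "robot_position",
                  "cart_shadow", "cart_to_eiffel_distance"},
    "lift_battery": {"battery_position"},
    # "open_door" changes none of the listed facts
}

# One frame axiom per (action, unchanged fact) pair.
frame_axioms = [(a, f) for a in actions
                for f in facts
                if f not in effects.get(a, set())]

print(len(frame_axioms))  # → 15 axioms, just for 3 actions and 7 facts
```

With a realistic world model the fact list is effectively unbounded (distance to the Eiffel Tower, and on forever), so the table of non-effects is unbounded too – which is exactly the relevance problem the robot was supposed to be solving.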

It’s data epicycles all the way.

Diagrammatic lists proliferate when the frame problem is around. Often with many, many arrows busily to-ing and fro-ing between their multifarious items, curiously reminiscent of the horror cascades above. And to even notice these schemas, these layouts, and to ponder them, is to be outside the machine. Alas, the mind feels lost trying to consider what an integral, ∫ science could even be, we’re so mired in bit-ty calculation… but in a funny sort of way perhaps we can hope that the Great Asymptote might eventually bring about a more widespread fruitful culture of communication and collaboration between scientists, philosophers and artists. This would require excluding the mindset of denial of transcendence, of dataism, of scientism, though, and indeed require artists not to show unwarranted obeisance to science. Maybe it might yet happen.

All this cross-pollination would of course involve Science-1. What would the Science-2 version look like? How could we sniff it out? We can say that it would be lifeless, it would be overconfident, and it would be scared of the intuitive. As luck would have it, we have an embarras de richesses of examples all showing just these character traits, brought to us by

THE SMUG BLANKETS

You will have encountered smug blanketeers at some point. They’re all over the internet. On a more mundane level, they’re often found talking quasi-sciencey-sounding rubbish about AI and consciousness on social media, but unfortunately they are also sometimes to be found writing books or worse still, driving scientific research. They insist that concepts are not needed for hypotheses, and hypotheses are not needed for testing, in direct contradiction of the history of science to date.

We’re currently amidst a vast proliferation of a science-2 culture where the ‘just so’ story rules, ironically in direct contradiction to the supposed all-encompassing explanatory power of science.  Just so stories are by definition not proof, yet they proliferate in smug blanket culture as if they were. It’s a heavily loaded worldview which paradoxically depends for its social transmission on a nice, neat simple denial that it is so – the very lightness of that denial is crucial as it makes scary non-rationality seem small and irrelevant, as if it wasn’t massively powerful. Its predominant emotional tone is a kind of condescending smugness.

The smug blankets’ constant numb-minded repetition of ‘we just need more data’, usually combined with a kind of patronising contempt for the (supposedly) woolly-minded, has the distinctive scent of a suffocator.  It’s creepily similar to the suffocating idea that data moving through circuits in a computer will somehow ‘see’ things by its own action in itself.  The seemingly irresistible pseudoreason here resembles the idea that by drawing a kind of circle in the right way, that will somehow as if by magic give rise to the Answer, as if drawing a 2-D line in the right kind of circle could create a 3-D image, and furthermore that that image will somehow show us an Answer to a question we don’t know how to articulate in the first place.  2-D drawings can create the illusion of 3-D, but that’s the point – they are always 2-D.  And smug blanketeers get round this by insisting that 3-D is ‘really’ 2-D, and by insisting the 3-D image in this metaphor will be some kind of Qualitatively New Breakthrough. Watch for how often science-2 peddlers refer to circular analogies for consciousness – things like car security systems that sum inputs from different parts of the car, for example – without stopping to consider what circles have to do with this issue in the first place. It does seem to be quite a common theme. And ironically it does seem a good metaphor for what happens when you’re conceptually out of your depth but insist that more of the same will somehow come good one day.

Consciousness is a particularly deep mystery and thus a particularly deep insult to the surface-fixated minds of the smug blankets, so it results in particularly peculiar denials – the smug blanket acolyte will blandly state ‘consciousness is not significant’, or ‘consciousness is merely an evolutionary adaptation’, with no proof, no philosophical backup, just as bare fact. It must be so! Because it just is so. No need to concern yourself with it.  Online scientific and/or philosophical articles about the problem of consciousness always garner a crop of robotic-automatic ‘what problem?’ comments from blanketian denialists.

The scientistic distortion of science pushed by the smug blanketeers insists it’s all about practical facts and claims to be pragmatic, but it chooses what to examine in the first place based on a set of metaphysical principles it claims are sound because they bring results and are based on fact. This creates a curious circular reasoning whereby there is ‘no proof’ of certain phenomena but those phenomena are not studied in the first place as we already know they’re illusory… because there’s no proof for them. This doesn’t need to be ‘psychic’ stuff, either – Daniel Dennett (see above) believes consciousness doesn’t exist. As for (perhaps) less way-out ideas, such as our supposed lack of free will, despite there being no proof for this (how could there be?) it is nonetheless taken as a given. Here we are now in the world of assumptions and intuitions, yet this is denied or ignored or waved away. This has non-abstract implications in the world of human behaviour. One example would be the recent publication of a pop-sci book by neuroscientist Robert Sapolsky that ‘explains’ why we don’t have free will (‘Determined: A Science of Life Without Free Will’). As (in this sense at least) it’s impossible to prove a negative, the book has to take determinism as a given, and the book is thus constructed around this unprovable assumption, which somehow is supposed to gain a kind of truth due to the clouds of ideas, of data, that form around it. Of course it’s garnered plentiful 5-star reviews on Amazon, presumably from people who like to consider themselves as non-gullible. The self-nihilising psychology of it all is quite strange, resembling a kind of cheap knock-off near enemy fake version of Buddhism. This appeals to a lot of westerners for some reason.

The dataist approach confuses quantity with quality. To bring home how serious the problem is, consider that when you add a lot of ‘small’ together you get ‘big’, which is a kind of change of quality – but then look at what’s been happening with AI research for so long now, where addressing change of quality in this ‘change of size’ way has got us precisely nowhere despite exponentially increasing computer power. But the change from small to big is a change of size, and it is a change purely of static ‘Platonic’ concepts. Successfully addressing the change from slow to fast – now that would be a massive breakthrough. But it would involve movement in time…

Just as it is not immediately obvious that an escape velocity must be reached in order to break free of the Earth’s gravitational field – it has to be worked out with the aid of maths before its necessity is proven – so it is with the ongoing failure of AI to attain real-world viability, a viability that requires a hypothesis in order for Science-1(E) to proceed.  Escape velocity must be attained – but the twist is that when it comes to data mere speed is not ‘velocity’ in this analogy, as it represents still going up a staircase as if that in itself will get us there, as if running up steps instead of walking up them will enable us to break free and start to fly.  Running up steps will never get us there.  Something different is required.  Unlike the escape velocity concept, which can be shown with some reasonably easy maths, we are entirely in the dark here.  We’re effectively casting about at random hoping something comes up.  One current of thought in AI is that we need to address abductive inference.  Or perhaps quantum computers will help, if they’re ever viable for proper research, because they’ll be far faster, because speed is good apparently.
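The ‘reasonably easy maths’ behind escape velocity can indeed be sketched in a few lines. This is a toy calculation of my own, not from the text, using standard published values for the constants:

```python
import math

# Standard physical constants (assumed values, not from the text):
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24  # mass of the Earth, kg
R_EARTH = 6.371e6   # mean radius of the Earth, m

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    """Speed at which kinetic energy equals gravitational binding energy:
    (1/2) m v^2 = G M m / r  =>  v = sqrt(2 G M / r)."""
    return math.sqrt(2 * G * mass_kg / radius_m)

v = escape_velocity(M_EARTH, R_EARTH)
print(f"Escape velocity at Earth's surface: {v / 1000:.1f} km/s")  # ~11.2 km/s
```

The point of the analogy survives the arithmetic: the threshold is not visible from the staircase itself; it only emerges from the governing equations.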

But no amount of huffing and puffing about woolliness or vagueness or woo can even slightly dent the immovable requirement that science rests on a metaphysical foundation. Science so often claims to be the royal road to Truth while belittling philosophy, but without a certain input from philosophy science ossifies into a series of ‘just so’ stories, as if explaining how something works or where it comes from exhausts all explanation. This very ossification shows that philosophy has a power to renew science, and is thus in a profound way superior to science. It’s certainly humbling science these days in matters of consciousness and AGI. Science tends to the rigid and dead (the coral reef of the Strevens book) and thus periodically needs the enlivening influx of philosophical understanding to free it up to explore more fully and further. Science is contained within philosophy, and when you start seeing philosophy being mocked by allegedly ‘scientific’ people you know their understanding isn’t as deep as they presumably think it is.

Perhaps smug blanketeers are just a little scared of what they see as the bottomless pit of unreason that is the intuitive.  Assuming it’s nothing really makes it seem small.  Scientism seeks to put intuition in a box, as if it was the servant of reason instead of the other way round.  But intuition is always there at the heart of science.  To give but one example, the Strevens book goes into quite a bit of detail concerning the way Eddington ‘fudged’ (or really, just altered) the data on his 1919 Brazilian expedition due to it containing blurred images from his telescope and ‘blurred’ data, but there are countless examples in research where the intuition that a hypothesis is correct leads to ‘tweaking’ of data to get it to fit.  This is how actual science, Science-1, progresses in the real world, away from dataist fantasies. (There is a serious concern here that the mindset of Science-2 worshippers with their denial of the role of intuitive insight at the heart of science then gives rise by reaction to its shadow form – a rejection of science altogether as being inherently unreliable or dishonest or even evil. This can have disastrous effects for example with anti-vaxx garbage. It is worth at least considering how much scientistic posturing and aggression has caused vaccine scepticism, and indeed various other bad health outcomes by driving people towards pseudoscience.)

Ironically, the smug blankets’ dull, matter-of-fact droning takes place in slow, real time, the time of Bergson’s duration, the time where the intuitive somehow lives. Speed everything up, and intuition disappears. This is a deep aspect of the mystery. Of course the smug blankets regard ‘appeals to mystery’ as somehow invalid. But how exactly is the word ‘mystery’ being used here? In practice it appears to mean ‘appeals to the supernatural’, with the associated assumption that the ‘supernatural’, whatever that is, doesn’t exist. But science progresses through mystery, and science-1 will always throw off any restrictive blanket that seeks to limit its intuitive powers.

Smug blanketeers use a false dichotomy framing where they already ‘know’ better and the correctness of their worldview is taken as a given.  Any attempts at genuine discourse with them are first forced into the frame of this procrustean bed, then after they have been fatally mutilated, the results are held up as ‘proof’ that the ideas were wrong in their unmutilated form.  It’s what happens when the LH, or the egoic intellect, takes what doesn’t belong to it (data that originally was in an intuitive habitat), keeps it for itself, then assembles it into a false worldview that, being false, will always attack all that is precious, as that which is precious will never be found in mere data. The whole idea is precisely to erase the numinous by forcing it out of discussion.  This often shows in two stages. As erasing the numinous wholesale would create horrors visible in broad daylight along the way (on a pointless journey to the impossible in any case), there is a first stage where the metaphysics of reductive materialism are taken as a given while constant appeals are made to a sense of ‘wonder’. This ‘wonder’ – really just a kind of mean-spiritedly reluctant (and ironically mandatory) acceptance of the intuitive – is corralled into the scientistic approach with anything explicitly numinous derided, awaiting its transport to the antiseptic scientistic abattoir. Naturally the smug blanketeers protest – they love poetry, they love music, they love literature – but they are working hard to recast that ‘love’ as an empty hallucination, a mere epiphenomenon that’s all very nice but ultimately an evanescently frothing meaninglessness set against the absolute zero hard cold of a dead cosmos, a cosmos that we can somehow only genuinely decode through cold hard reason.  Quite Schopenhauerian that, although not-so-cuddly uncle Arthur’s worldview was more of energy than cold, with the blind energy of Will constantly writhing.
Still, the inescapability of metaphysics is quite something to behold. You’re going to choose a metaphysics whether you like it or not, though where and how the choice takes place may not be easy to discern.

Wonder, like consciousness, can’t be simultaneously non-mysterious yet also wonder. But the smug blanket project is stubbornly determined to ignore this and proceed at all costs – anything rather than open the floodgates to the dreaded irrational.  But despite the supposed ‘rationality’ (as evidenced by the numbness of the smug blanketeers), false dichotomies always depend on emotional charging (note a similarity here with the contrast between the objective notes of music and the emotionality of the actual music the notes represent) and the smug blanket nihilist worldview thus does very well amongst those who are easily pleased by having their prejudices tweaked.

Smug blanketeers will reinvent life as a machine, then showing you this strange, somewhat ugly unliving clockwork puppet parody, this faintly nauseating AI image exuded from the depths of uncanny valley, will with no sense of irony or humour and a completely straight face, explain to you that this is what life ‘really’ is.  Their only response to your protestations that it seems utterly otherwise will be to take their little diddy two-bit ideas and try to bring them to life, which will necessarily involve nihilistic pronouncements that it ‘just’ seems that life is precious, that it’s all ‘really’ a kind of mistake from the human point of view, that humans are a kind of spandrel, our consciousness the accidental creation of an illusion as a byproduct of an evolutionary process that blindly falls into an ever-more efficient way of prolonging its transmission in a universe heading for the eternal heat death triumph of entropy.  Which is a philosophical bewitchment, a trickery that steals our truest inner knowing of who we really are, but which is enjoying a certain ascendancy in the west these days.  Trickle-down economics is of course bullshit, but trickle-down philosophies can be remarkably effective precisely because we all have this profound philosophical aspect as a key part of being human. Always remember however that the determinism of Hume – a key aspect of the smug blanketeers’ worldview – is unable to confirm that the sun will come up tomorrow and in principle it never can.  Science proceeds by deduction, not induction.  An inconvenient truth requiring diversionary tactics.

Claims that science just ‘tells it as it is’ would perhaps be more convincing if so many scientists and gullible science fans didn’t so often make grandiose metaphysical pronouncements to do with matters such as free will, the nature of nature, the nature of matter, consciousness, meaning and so on. These statements are usually emotionally tinged, using terms such as ‘woo’, ‘nonsense’ or ‘delusion’, or at other times a kind of negative appeal to the numinous (“and then in trillions of years’ time the last remaining red giant will burn out, leaving the universe cold, entropic, dead, forever…” as Brian Cox solemnly intones in a pop astronomy programme on the BBC) and they are by design destructive of various important aspects of what we feel – or rather know – to be human. All of these behaviours betray hidden metaphysical assumptions – and, as can be seen from the quick tendency to rage when challenged, belief systems too. And it’s important that they are denied as such, which is why we have the effortlessly supercilious-grand claims that the physical world is the only world there is, that the supernatural is mere woo, and so on. Of course these metaphysically-infused claims are precisely there because of scientism, and if they’re not there, neither is scientism.  They specifically rely on the straw man versions of ‘spiritual’ stuff, which of course also follow the real thing around, feeding off it to create their own counterfeit.  As soon as any half-way respectable evidence for certain phenomena is produced a hard rush commences to blitz the unwanted data, move those goalposts, and attempt to subject it to granulisation in order to make it go away.

Who’d’ve thought that the metaphysical is intimately linked with emotion – or whatever it is in us that gives rise to emotion in the first place.  Maybe emotion is more important than we’ve been led to believe.  Maybe it’s linked with our sense of aesthetics and morality…

The repetition of learning to play an instrument, to paint and so on enables you to bring through what these things are really about – they are channels.  But as poems are play, as music is play, so the repetition of work is deadeningly uncreative. That soul-sapping cruelty of being worked ever-harder, subject to ever more surveillance, ever more monitoring. Again that suspicion of the creative person – ironic, because as previously mentioned humans are inherently creative anyway – a suspicion that can then turn into the various cringey work cultures whereby people are persuaded to get enthusiastic about their job. Though if you’re forced to work in order to live, a kind of Stockholm syndrome does make sense, really.  As long as you leave it behind when you go home…

For now, the ticking of clocks gets ever louder. Clock time is where unruly life is trammelled and trained, made to jump through hoops by a powerful, cold intellect. And this leads to…

THE ILLUSIONS OF THE ALIAS FREQUENCIES

Let’s continue in a metaphorical way with Bergson’s concepts of clock-regulated time (temps) and our lived sense of duration (durée). Duration is of course where we live. It’s unbroken and unbreakable. No amount of trying to catch it in the net of clock counting will ever get us anywhere with it. Confusing clock time with duration gives us Zeno’s paradox, and a load of futile neurological research on consciousness and memory. It also enables people to convince themselves that free will doesn’t exist, and write books to that effect where they don’t come near to proving it, and everybody praises them for having proved it. This has worrying aspects (the ever-increasing seepage of destructive nihilism away from the ivory towers of academia into the everyday life of humanity) but once again shows how that pesky metaphysics just won’t go away and thus offers at least a gesture to the way out.

Clock time is the time in which data is made to move by a rapid succession of separate datapoints. There is always a clock in the background in data flows. It’s how the flow’s synchronised, how it’s sampled. Despite the name, data ‘flows’ are never analogue – they are the near-enemy of the analogue. Something in us sees rapid data change, like the frames of a film, as flow… and that something is not digital. The fake ‘flow’ of digital data brings to mind the viewing devices of the grimly dualist gnostics of Theodore Roszak’s novel Flicker. (Which features a secret anti-Cathar Catholic group called Oculus Dei and a protagonist with the surname Gates – fill yer boots, conspiracy theorists!)

Meanwhile, when there’s a clock there’s the possibility of illusions created by interference patterns between the clock and regularly repeating data points. We see this sort of thing when we’re in a vehicle going past two sets of railings and shimmering light and dark bands appear. In digital sound recording a similar effect is called aliasing. (It’s worth just looking at the ‘alias’ photo on that wiki page as it’s reminiscent of the way that the back of Lt Frank Drebin’s suit jacket in Police Squad has wide bands of interference actually sewn into the pattern, which is (a) an utterly hilarious visual gag, and (b) seems not to have been noticed by anybody else, which I find incredible.)
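The aliasing effect can be demonstrated in a few lines of Python. This is a minimal sketch with illustrative frequencies of my own choosing, not anything from the text: a 7 Hz sine sampled at only 10 Hz (below the Nyquist rate of 14 Hz) yields samples that are indistinguishable from those of a phase-inverted 3 Hz sine – the ‘alias frequency’ the clock conjures up.

```python
import math

# sin(2*pi*7*n/10) == sin(2*pi*(7-10)*n/10) == -sin(2*pi*3*n/10),
# so the sampled data cannot tell the true tone from its alias.
F_SIGNAL = 7.0   # true frequency, Hz (illustrative, not from the text)
F_SAMPLE = 10.0  # sampling rate, Hz
F_ALIAS = F_SAMPLE - F_SIGNAL  # 3 Hz

true_samples = [math.sin(2 * math.pi * F_SIGNAL * n / F_SAMPLE) for n in range(50)]
alias_samples = [-math.sin(2 * math.pi * F_ALIAS * n / F_SAMPLE) for n in range(50)]

# The two sample sets agree to within floating-point rounding error.
worst = max(abs(a - b) for a, b in zip(true_samples, alias_samples))
print(f"Largest difference between the two sample sets: {worst:.2e}")
```

The alias frequency is, in exactly the sense used here, not really there – it is an artefact of the interaction between the clock and the signal.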

As for analogue interference patterns, the classic example is of dropping two stones a certain distance apart into water, which creates outspreading concentric circles. This is the difference – the interference patterns thus created are really there. The alias artefacts aren’t.

Consider now the idea of the near-enemy and how it acts against beauty, and consider now the gambit used for the problem of evil whereby it’s said that no matter how terrible its effects it ultimately isn’t really there. Consider also the classic advaita analogy of seeing a snake in the twilight that turns out when fully lit to be a rope. Consider furthermore that the near-enemy of science, science-2, can’t actually create anything new no matter how much computer power is thrown at it, and how the demons of the night side of the Tree of Life can only challenge, never create…

… and how all of this is not new and always tends to spread, abstractions filtering always downwards into the concrete world of duration and entangling it, and how it’s always spreading, which brings us to…

THE ILLNESS OF COLD HARD REASON

Cold Hard Reason does not exist in the world (it’s metaphorically an ‘alias frequency’) and we should not submit to its supposed acolytes’ empty-headed demands to somehow justify the noumenal, not least because the phantom of CHR always sets things up so that it can never have what it demands.  CHR is a hungry ghost, ever leading us onwards to our breakage upon the adamantine wall of the Asymptote. CHR is the near enemy of intellect. Reason that works, reason that is grounded and embodied, always has an emotional aspect, in the same way that the notes of a musical score are just a code for music that must be heard by a human to be appreciated, to be understood – and understood in a specifically musical way.  To analyse the notes of a song will never lead to the appreciation of the song as a song, i.e. musically.  Just as breaking a poem down into single words will destroy it, so it is with what it is to be human – our inherent creativity. Taking things to bits, or converting them to bits, will inevitably result in the impossibility of making them truly live again. Creativity is part of us whether we’re ‘artists’ or not. Cracking a joke to affectionately take the piss out of a workmate shows this creativity.

Of course the ultimate triumph of some kind of Cold Hard Reason is thus impossible – but never underestimate how deeply humanity can be damaged while heading for that unreachable destination. We are each of us a galaxy, and through giving away our connection with the numinous we allow ourselves to get ever more closed off from that galaxy. To even announce that you are an atheist puts a kind of suffocator over a key aspect of our intuitive vision.  Just as how the most sophisticated facial recognition system can be rendered useless by wearing a simple cardboard mask, so does the suffocator of self-declared ‘atheism’ place an uncomplicated but highly effective block over the place where the light gets in, rendering everything permanently twilit. The gate is narrow, and merely asserting a positive atheism will close it off and render the multifarious wildlife of the phenomenological jungle ultimately meaningless and dead and dying.

This particular fascination with ‘Atheism’ is a peculiarly Western phenomenon, as perhaps can be seen by watching what happens when some supposedly ‘sensible’ or ‘clever’ westerners encounter Eastern systems such as Buddhism that are non-theistic, or Vedanta, which is technically theistic but not in the way that the west understands this (for example, Bhagavad Gita ch. 13 v. 13 states that the Brahman is beyond existence or non-existence). The arrogance of the west is that somehow through ‘science’ of some sort, we have proved that the material world is all there is – this being a metaphysical view at heart, not proven by science-1 and ultimately not provable. But as the West apparently knows best on this score, we have the (in fact entirely unearned) right to ‘westsplain’ to profound systems of philosophical insight and learning that have been around for literally thousands of years, barging in and taking over, correcting anything in there that we deem to be ‘superstition’, confidently stating that our metaphysics is the best, there’s no supreme being, there’s no such thing as transcendence, and we can thus affirm ‘Atheism’. It’s nothing to do with science, either – it’s a kind of arrogance born of the same worldview that has given rise to science-2. It’s a destructive take on an unimaginably profound system of insights into what it is to be human, and it’s a fantastic way to screw up the Dharma as a result. And it’s

COLONIAL CONSCIOUSNESS

Having messed up many countries and cultures with its unjustified overconfidence, and now facing at least some reckoning with this, the west has moved this strain of colonial thought away from terrestrial lands into the mind itself.

The spiritual sickness of the West is to obsessively seek a settled finality, so that the subject of this grasping can be turned into a failsafe method, a gadget, a machine. (Which is perhaps the mindset that enables Yuval Noah Harari to be an accomplished Vipassana meditator while spouting specious dataist guff – for YNH, meditation is a useful tool to get somewhere, nothing else.) As Thich Nhat Hanh once pointed out, westerners are fascinated by machines. But while machines that, say, keep hearts beating properly are obviously good, there is a grim tendency abroad to make machines of everything else – including the human mind.  Worse still, the illusion of smooth motion, like the frames of a film or ultra-fast processing, is taken as having somehow ‘solved’ the innate difference between ∑ and ∫.
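The innate difference between ∑ and ∫ mentioned above can be made concrete with a toy example of my own (not the author’s): the left Riemann sum of f(x) = x on [0, 1] equals (n−1)/2n, which approaches the integral’s value of 1/2 as n grows but equals it for no finite n, however fast the summing is done.

```python
def left_riemann_sum(n: int) -> float:
    """Sum of f(k/n) * (1/n) for f(x) = x over k = 0 .. n-1."""
    return sum((k / n) * (1 / n) for k in range(n))

INTEGRAL = 0.5  # exact value of the integral of x dx from 0 to 1

for n in (10, 1_000, 100_000):
    gap = INTEGRAL - left_riemann_sum(n)
    # gap = 1/(2n): it shrinks as n grows but is never zero for finite n
    print(f"n = {n:>7}: gap = {gap:.2e}")
```

The limit is a different kind of object from any member of the sequence approaching it – which is the whole force of the ∑/∫ distinction.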

Just remember that in the name of some kind of uniquely Western ‘progress’ we’ve had (for example) asbestos, CFCs, DDT, nuclear waste and microplastics, all of which created long-term destructive messes that have taken and still are taking huge long-term effort to clean up.

We do not (if we’ve got any proper understanding of Science-1 that is) deny climate change, and indeed in this respect it’s always worth referring to the incredible coincidence of the industrial revolution with the onset of this new and potentially gravely destructive phenomenon on earth.

Yet we affect not to notice, or to properly look at, the incredible coincidence of the widespread appearance of the ‘Atheist’ mindset amongst the ‘educated’ in the West around the same time as the industrial revolution. Whatever Marx’s role was or wasn’t here, the real issue is that ‘Atheism’ really took hold at that point in (western) history. It’s also around the time that lies were propagated to the effect that people used to believe the Earth was flat (we surely know this is bullshit now, right?), or that the Copernican revolution was perceived as a tragedy when in fact the Earth was regarded as a muddily chaotic warzone between light and dark. Montaigne expressed it thus:

“Man is the most blighted and frail of all creatures and, moreover, the most given to pride. This creature knows and sees that he is lodged down here, among the mire and shit of the world, bound and nailed to the deadest, most stagnant part of the universe, in the lowest storey of the building, the farthest from the vault of heaven; his characteristics place him in the third and lowest category of animate creatures, yet, in thought, he sets himself above the circle of the Moon, bringing the very heavens under his feet. The vanity of this same thought makes him equal himself to God; attribute to himself God’s mode of being; pick himself out and set himself apart from the mass of other creatures…”

The ‘Copernican Revolution’ and its supposed devastating effect on religion was bullshit emanating from self-satisfied Victorian ‘Atheists’ with a (false) point to prove. Mere ‘education’, the accumulation of facts, would apparently dispel the darkness of ignorance… which seems strangely familiar when watching Big Data overconfident pillocks insisting that just fact-truths will bring about paradise on earth. This is the idea already mentioned here that ideas can take on different garb in different times while still having strangely atemporal aspects, aspects more real than their evanescent manifestations in culture, to do with Good and Evil, Dark and Light, War and Peace, Destruction and Creation…

This so-called ‘progress’ still continues, but now as part of the deliberate colonisation of the numinous with ‘reasonable’ counterfeits in the name of leaving supposed ‘superstition’ behind.  But once it becomes apparent that these approaches are the asbestos of the spirit, complete with brutal side-effects that weren’t apparent at the time, there will then once again begin the long-term project of trying to undo the damage caused by the sophistupidity of overconfident ‘rationalists’ in the name of this supposed ‘progress’ that poisons the whole world, including the world of the mind, of the spirit.

This patronising western assumption that the numinous is either ‘superstition’ or of the nature of a machine or gadget like every other aspect of the brain and mind and everything else is no better than the arrogantly scornful response of the first western explorers of the East when they encountered ‘pagan idols’ with their gods and goddesses with multiple heads and hands and arms and legs… before finally later it came to light just how profoundly ignorant the western invaders were, in so many different ways. After which the repair work once again needed to be undertaken, and the West began learning from the East.

The west is going to learn yet more, though.  The world is not a machine. After error comes correction.  After a haughty attitude comes the fall.  This looks to be already well underway, showing as the world’s ecosystems becoming more wrathful in aspect. Put the power of the intuitive in a box as if it was your pet to torment and mistreat at will, and it will start to get wilder, and of course more powerful in its wildness. The intuitive will ultimately never be boxed and it will have its way in the end. It will reclaim its rightful place and its energy will break through to do so by any means necessary.

The personal is the metaphysical.  This isn’t about AI as such, it’s about what is reflected back to us when we are under a collective illusion of thought of a particular kind that is valorised, regarded as the lodestone for some kind of ‘progress’, that in turn through the technologies it creates erodes our attentivity and our very sense of embodied being in the world, making us ever more demanding, impatient, angry, volatile, disconnected, while gradually wrecking the environment on which we depend for our lives.

Outside of AI research, a far more all-encompassing, and all-trapping, manifestation of the western colonial mindset is of course the world of work, where we are all being screwed to produce ever more work ever more efficiently for ever less financial reward.  Ever more intense, exhausting, cognitive running on the spot. Your life is being marshalled into following the same outlines as the asymptote.  You are expected to work ever faster without ever quite getting ‘there’.  Your true life, Life-1, is being colonised by its near enemy, Life-2, a life based on empty speed and ‘efficiency’. This includes your ‘leisure’ time, which following this process becomes a matter of never-ending projects.  Should you actually manage the rare feat of completing a clock time project, you will find that the promised satisfaction, the closure, evaporates quite quickly, leaving you feeling incomplete.  The issue is ‘once I get X then the itch will be permanently soothed’, a compelling illusion that we all fall for again and again. It’s the near-enemy version of enjoying a challenge and feeling due satisfaction after a success, or of say enjoying eating and eating properly. It’s the overdrive itch that’s the false version.

Be wary of ‘goals’ – they so easily lead to this feeling of still being hungry after you’ve got there. A far better approach is that of learning an art. This requires long-term self-discipline without a goal in mind, thus evading the emptiness/hungriness issues found in goal-based projects. You just keep going, enjoying it for its own sake, patiently gaining in artistry and skill and intuitive understanding year on year on year. And you never feel the sugar rush of a goal achieved followed by the post-crash emptiness that comes afterwards. And on top of that, long-term non goal-based discipline is much less likely to be harshly applied. Firmly, yes; harshly, never. Repetition can be like learning a musical instrument, or it can have a desperate, driven quality. The former joins objective and subjective time in the way that we live when we are in accord with the Tao, the latter urges us out of our bodies, into our minds, setting us on paths of aggressive overreaction, of cruelty to others and to ourselves, leaving us permanently hungry. And this false path is what gives rise to science-2, which is then fed into a feedback loop of science-2 culture which then reinforces itself and disseminates and develops that culture of injury. And it spreads into our daily work, and it drags on our hearts. Once again, ponder the difference in tone between firm but realistic discipline, which can teach you a lot about yourself into the bargain as you discover what it’s like to lean into yourself, and harsh discipline, which is more like treating yourself as a wild animal, or a donkey, to be whipped and beaten into good behaviour. You can guess which of these approaches is the one that sells apps, courses, classes and the like, which also has an associated dazzlingly vacuous online culture of showing off, of a coldly glittering ecstasy of surface-to-surface communication, and all the injuring vacuity and fakery of it all…

As we move out of our bodies and into our intellect, we lose kindness, empathy, intersubjectivity, compassion, and move towards the cruel.  This in itself shows us we’re going in the wrong direction, away from the true humanity that resides in our heart, yet we’re being relentlessly trained into denying our own innately embodied compassionate nature.

To move in the right direction, however, the direction of ever-deepening humanity, will always require…

THE LEAP OF FAITH

The arrogant LH ‘scientific’ intellect likes to mock the leaps of faith of religion, portraying them as a simple indoctrination into untruths. In a sense they often are, but once again appeal must be made to the true and the false version that is found in all matters of the mind. There is a real version of the leap of faith, and indoctrination into falsehoods is the fake version, of course.

When considering what the idea of the leap of faith might truthfully refer to, we consider that in consciousness there is the apprehension of the whole, the RH, the intuitive.  To break free of the downward pull of reductionism, of ever greater data granularity and its entrapment of our life in its rigid nets, we have to let go and make a kind of jump. This ‘leap of faith’ is actually just us acting intuitively, letting go of the handrail and exercising our wings, and it’s not good enough to say that because sometimes we’re wrong, that means we must therefore turn to ever more complex numerical tables.  This is fear of flying.  All it will do is shackle us to the earth, whereby we will lose the capacity to ‘fly’, to live as we ultimately truly are, with that ‘intuitive’ aspect that contains everything including data while being ultimately beyond all things. We actually ‘fly’ any time we act ‘from within’, making our way in the world based on informed vibes, or sudden hunches, or negotiating our way through the world of work, or enjoying the companionship of friends, but always from intuition. Here again the everyday is infused with the numinous. The transcendent is reached through realising that we live amidst conceptual approximations and successfully freeing ourselves from those approximations – which thus makes it fully a matter of the here and now, and more real than the merely approximate. Indeed, it’s completely real in a way that mere concepts, being approximations, only obscure. Bear this in mind when you’re encountering claims that there’s no transcendence.

The profound issue here is that the destructiveness of the mindset that gave us dataism has come about as a result of denying that a ‘leap of faith’ is necessary – or even possible.  But no matter how much data is collected we must act, we must live, and in that acting, that living and being, the data is inherently more than just summed because it is in a greater context of mind.  Just adding ever more datapoints is more likely to cause data saturation of the mind, a kind of paralysis over what actually matters to us.  OCD in one sense represents an overload of attempts at pinning down, attempts that act against the flow of life, against our innate tendency to live in flow rather than through excessive analysis.

Like fledglings leaving the nest, we have to make that jump, and in the end, despite the risk, it’s entirely natural – it’s actually just the way we are.

APPROACHING THE END

As the approach to the unattainable axis of breakthrough into meaning gets supposedly ever closer, ever more demandingly frustrating even as ever greater computational power is pumped into it, we are seeing ever more clearly the outlines of a pathology ever more furiously denied as time has rolled on.  This problem may seem somehow purely objective, yet it is always accompanied by a feeling tone of rage and desperation, a tendency to arrogant control-mania, just under the surface, ready to come out for example whenever the more scientistically-minded are reminded of the lack of progress – and the shape of that lack of progress – towards the goal that is apparently so important.  This problem is found in our personal cognition, our group cognition, our collective scientific strivings, our non-scientific urgent seekings of a better world, and our very sense of meaning. Meaning is found in time, as with music (and film – let’s not forget that other inherently timebound art). But time is unbroken, the drama requires movement, and time and movement are utterly alien to the left hemisphere egoic intellect, a hungry and ultimately terrified ghost that finds itself defensively occupying but a small part of our greater embodied consciousness that lives and loves and dies…

It’s not even just the repeated failures, it’s also the way they all fail in the same kind of way, with the same ever-more-clearly-delineated outline…

For now, we seem morbidly unable to break free from the machine, the rigidly robotic repetition we’ve all been herded into.  We know it’s wrong, but we go along with it because we have to in order to survive, and we can’t see an alternative. Just as with AI research, we think that endless repetition of the same will somehow magically lead to profound breakthroughs in life.  We’re chasing a chimera, always on the horizon, never getting nearer – the failure of Big Data AI to become creative, to gain common sense knowledge, is just a reflection of what’s happening to all of us, of the slipstream we’ve been pulled into, and which we’re increasingly desperate to wriggle out of.  And if this way of living is so great, why all the suffering?

Be agnostic instead.  But stay agnostic through and through, and keep going on that journey, never surrendering to western-style static certainty, to the western tendency to treat everything as a tool, to the western disdain for ambiguity.  Use tools, but don’t give them undue significance. Approach the journey in a way more like learning to play a musical instrument, and forget any goal. Something beyond the still and moving will eventually come into view.

Despite the inane witterings of the ‘just so’ story-tellers, we are an interface between time and eternity, between the limited and the infinite, and we know this in our innermost core.  The increasingly profound loss of this understanding and its subsequent ways of living is effected via techniques of blocking the ways this understanding works, the behaviours that are inherently of the way it works, the way of its aliveness.  Ritual, living symbolism, community – all are being subjected to Big Data and the destruction that goes with it.  Big Data barges in and takes over because it knows best. It’s microplastics of the mind, set to cause a similar deep wreckage of the human that will take vast effort to undo, but this time in the ecosystems of the human spirit rather than the outside world.

The understanding of the heart is that of something moving, living, and thus innately a matter of behaving and behaviours.  These have their own concepts – but the heart, the begetter of behaviour and of living, is in truth primary, not the concepts.  We have it the wrong way round, always falling into being driven by concept.  Our entanglement in concepts is why we have the Asymptote, as we can never break through to a point where the concepts gain life, which they can never do as they are not concepts of the heart and thus life but instead concepts of the disembodied intellect.  Bergson invented the concepts of temps (clock time) and durée (subjectively perceived duration) precisely because he intuited that there is a profound difference between abstracted regularities of clock time and the seamless flow of life. 

Just as explaining a joke destroys it, so the ‘all is data’ mindset destroys life.  Currently the grand project to shackle humans to the Machine is well advanced, but it too will one day hit the wall.  The extent to which you’ve immersed yourself in dataist culture, in terms of action (e.g. using apps, becoming ever-more impatient and demanding of convenience) and in the abstract (e.g. buying into the reductive philosophies associated with dataism) will affect how hard the wall hits you.  You therefore ought to take steps now to cultivate a disengagement from dataist, online culture and a re-engagement with real life involving real people in the real world. (It’s worth remembering that this ‘project’ doesn’t need any kind of shadowy elite to run it, either – it’s much more likely to be stigmergy.)

This article is a series of showings of the limitations of the dataist project. It’s written from the RH, integral (∫) viewpoint because it has to be, because the LH, summing (∑) approach is incapable in principle of any real understanding of those issues, instead preferring to create its own fake clockwork version. ∫ resembles redintegration – seeing things as a whole, in one go. It’s an active consideration that’s far more likely to bring us fruitful routes of research than neurotically zeroing in ever more microscopically on a part of the whole.

But precisely because this has been a pointing, a showing, the smug blanketeers (who emanate a strangely matter-of-fact ‘blank’ness) will naturally just continue to argue that more data will explain it all.  There are deep reasons why this will ultimately fail, but because those reasons lie outside of the counting-obsessed mind which always denies there’s even an issue, all that can be done is to show, to point, to try communicating a feel, and this leaves the depth of the intuitive vulnerable to cold hard dataist violation. 

How can we get people to break their hypnotised gaze upon data and look up? Perhaps we can start by asking: what is it that watches the data? What is it that understands it, that contextualises it? What is it that perceives movement?  We need to be aware that there are two answers to these sorts of questions, one in the metaphysical abstract, through which we could find our liberation, and the other in terms of everyday life, which represents our imprisonment.  The first answer is a kind of koan and leads to transcendence – it is found by putting the heart, intuition first, with the intellect in proper relation to it and thus functioning powerfully and rightly.  The second answer is more mundane and not good.  It is that a human is required to recognise the data.  And data these days is power and money, and arcane – do you know where your data’s kept, where it’s taken from and what use is made of it?  As with money, humans have a thing about power, and the more they get, the more it tends to go to their heads. (It scarcely needs stating here that money and power are near enemies of… you work it out.)

But there’s a twist. Our true being will always escape surveillance, monitoring, assessments and tick-boxing, as long as we make the ‘leap of faith’, because we ‘fly’ always in our pure consciousness, and we always have some kind of access to it, from meditation through to helping somebody in need, comforting the grieving, painting, making music… We have our primary intuitive aspect throughout all our quotidian doings – it’s always there and, as the Asymptote shows, it can never be entirely imprisoned by the Machine…

PS – An article from the everyday world of football:

Jürgen Klopp is right: man-management skills are being lost in a rush of data | Jürgen Klopp | The Guardian
