“Philosophy is a battle against the bewitchment of our intelligence by means of language.” – Wittgenstein
Language is a strange thing. There are words with multiple meanings – in some cases even words that are their own opposites: contronyms, such as sanction, oversight, and dust. Despite this, we are generally pretty good at figuring out the meaning of even these words from context.
It becomes trickier when words have close but distinct meanings. Names in particular have power, and sometimes a name can exploit ‘adjacent’ meanings of a word, or bake in an assumption. Here are some examples I’ve collected over the years.
Social Media. John B highlighted this to me back in 2008. As Web 2.0 was becoming a thing and the mainstream started to find ways to be social online, ‘social media’ became the term of choice. This baked in the assumption that social content could function as a new ‘media’ in the sense that it was a kind of content that you could put adverts in, like TV or newspapers. It took quite a few years to make that work financially, but that is exactly what happened. I do think it’s unlikely it would have gone a different way if we had called it something else – but sometimes I wonder.
Influencers. In a similar way, as the power-law curve kicked in for social media, some people became a lot more visible than others. That meant they could be used to sell stuff in a different way! By terming them ‘influencers’, the message is that the main thing they do is influence their viewers/readers – most probably to buy products that they just happen to mention. But is this really the best way to think about them?
Web 3. Somehow in all the hype and froth of the crypto frenzy, the idea was fomented that this represented a paradigm shift similar to Web 2.0, and the end result would be a collection of services and algorithms we should call Web 3. But the parallels are strange – in particular blockchain technology, while clever in many ways, does not naturally have the kind of scaling properties we would want for anything that looks like the web as we know it. What’s fun here is that the number ‘3’ has now been effectively reserved, so assuming blockchain doesn’t live up to the name ‘web 3’, the next big internet thing will have to find another way to go – my money is on ‘Internet 3.1’.
Crypto Winter. Speaking of blockchain, this is perhaps the most obvious example of a name with an assumption baked in. The metaphor of seasons is completely assumed: a winter will naturally be followed by a spring, and eventually a summer just as glorious as the last. But that’s not how it always goes – sometimes things just die! A more apt framing here is probably the ‘trough of disillusionment’ from the Gartner hype cycle – but that’s certainly less catchy. (Side-note: the value of bitcoin itself is having a bit of a ‘spring’ right now, but I’m less sure about the wider blockchain paradigm.)
Fan Service. Moving out of tech, in manga/anime and now beyond, the term “fan service” arose to describe… let’s say moments in which the sexual gaze of the (usually presumed hetero male) reader/viewer is titillated by a particular choice of camera angle or staging of action. I suspect this term generally spread half-ironically, but the way it bakes in an assumption of who a fan is and what they want is not ideal, and can reinforce the implied gatekeeping of communities discussing this sort of content.
(There’s another meaning which is just ‘give the fans what they want’ in the sense of “see the cool super-powered person use their powers to the max!!”, which is a bit less problematic)
Statistical Significance. In statistics the term ‘significance’ has a very specific meaning; it tends to mean that the results of some sort of test ‘signify’ that two test populations are different in some way. But in everyday language, if we describe a difference or change as ‘significant’, we usually mean that it is large! Two things that have a ‘statistically significant difference’ may not be very different at all, or different in a way that is very unimportant, but the term’s connotations say otherwise. I think it may even be plausible that this ‘bug’ was viewed as a feature by the founders of these sorts of statistics, as it turns out they were a bunch of eugenics enthusiasts very keen to find ways to show that one group of people is different to another, as this long article quite fascinatingly lays out.
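The gap between ‘significant’ and ‘large’ is easy to demonstrate. Here’s a minimal sketch (all numbers invented for illustration): a difference of just 0.02 standard deviations between two groups – trivially small by any rule of thumb – is nowhere near significant at n=100, but becomes overwhelmingly ‘significant’ at n=1,000,000, using a plain two-sample z-test built from the standard library.

```python
# A tiny difference can be "statistically significant" given enough data.
# Hypothetical numbers: two groups whose means differ by 0.02 standard
# deviations -- negligible in practice -- tested at small and huge n.
import math

def two_sample_z_test(mean_a, mean_b, sd, n):
    """Two-sided z-test for equal means, assuming a known common sd
    and equal group sizes n."""
    se = sd * math.sqrt(2.0 / n)               # standard error of the difference
    z = (mean_a - mean_b) / se
    # two-sided p-value from the normal CDF, via the error function
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# Effect size of 0.02 sd: the groups are, for all practical purposes, the same.
mean_a, mean_b, sd = 100.02, 100.00, 1.0

for n in (100, 1_000_000):
    z, p = two_sample_z_test(mean_a, mean_b, sd, n)
    print(f"n={n:>9}: z={z:6.2f}, p={p:.4f}")
```

Same effect size both times; only the sample size changed. ‘Significant’ here signifies only that the difference is unlikely to be zero, not that it matters.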
Smart Anything. Emergently, describing an object as ‘smart’ now means that it is connected to the internet. That isn’t always going to be a good idea, but the connotations of ‘smart’ suggest that it is.
Artificial Intelligence. The temptation with computers or even simple algorithms is to think of them like our own brains: taking some input, evaluating it, and taking an action as a result. We consider ourselves intelligent (arguably homo sapiens could also be on this list as a biased name), so it feels natural to describe a process that looks like this as some kind of intelligence. But like the two meanings of ‘significance’, intelligence spans a spectrum of behaviour (from low to high), yet if we describe someone as ‘intelligent’ we mean they are at the higher end. So while it is arguably fair to describe even fairly simple algorithms as some form of ‘intelligence’, the term AI carries the connotation of high intelligence. Great for anyone who wants to impress people – perhaps to gain funding – about some sort of tech endeavour. More on that later.
Natural Gas. Moving outside of digital technology, describing methane as ‘natural gas’ is a great piece of propaganda. It exploits the fact that ‘natural’ has positive connotations, while technically also being anything that occurs in nature – which includes a lot of things that aren’t nice at all. Looking it up, it does not seem as if the term was coined for this reason, but those connotations have more recently been leveraged to encourage use of gas instead of renewable energy.
This is all very well, but can I come up with better names for these things? Honestly, probably not. But here are my suggestions anyway:
- Social Media -> Digital socialisation
- Influencers -> Social hubs
- Web 3 -> On-chain paradigm
- Crypto Winter -> Crypto disillusionment
- Fan Service -> Titillation
- Statistical Significance -> Statistically Signified
- Smart anything -> Online anything
- Natural gas -> Methane gas (technically there are impurities so it isn’t just methane, but you get the idea)
Turing Test reductio ad absurdum
In pondering an approach to the question of whether machines could ‘think’, Turing proposed a test that eventually took his name: can a machine convince a human interacting with it through text that it is actually human?
Some extrapolate this rather too far and conclude that if a machine can do this, it proves that it can “think” or is “intelligent” (in the colloquial sense). Existential Comics deploys a beautiful reductio ad absurdum against this argument that you should definitely read in full here.
(I tweeted this a long time ago but it’s well worth re-visiting, especially in the age of generative AI!)
As I’m certain Things readers will have noticed, AI became the new hot thing after crypto.
The ability to generate surprisingly plausible images from a text prompt surprised a lot of people, and the advances in that tech since have also been rapid and impressive. At first it was easy to laugh at how the ‘machine’ struggled to understand how hands worked or render scenes with multiple people in them convincingly, and then very quickly that became a solved problem (for the better models, anyway).
Just as that was happening, Large Language Models took hold, through ChatGPT in the most mainstream case. John B (him again, 15 years later!) pointed me at this purported ‘leaked Google memo’ on the topic which concludes with an excellent timeline of events describing how this came about.
This brought the ambiguity of ‘intelligence’ and the Turing Test quite suddenly to the fore. LLMs solve some of the obvious weaknesses of previous language-generating algorithms in that they can hold a pretty convincing thread of conversation. With a few guidance prompts and a less obviously superhuman typing speed, an LLM could very likely pass the Turing Test in many cases. But it is a big mistake to conclude that it is ‘intelligent’ or actually ‘thinking’.
First, there are what are called ‘hallucinations’. (Note again the bias of the word – the term most commonly describes something that humans experience, tacitly encouraging us to think of an LLM as a mind.) These are cases where the output states something completely fictional. I asked ChatGPT to list the solstices and equinoxes of all the planets in the solar system, and while it did a beautiful job of laying out the answers (much better than a Google search), it got quite a few of them completely wrong. I wouldn’t be too surprised if the most egregious examples of this can be fixed, but the problem runs deep, because ultimately there is no algorithm for truth. It doesn’t necessarily show something isn’t ‘thinking’, but it can very quickly undermine an impression of high intelligence.
Second, and more significantly, there is no actual reasoning. It’s just a language model! It’s just producing words that look plausible in context! The fact it can give smart answers to some difficult questions does not mean any thinking is taking place. This can be tested by proposing simple riddles. My colleague Ben H challenged ChatGPT to figure out how someone could reach an object given some restrictions and a few objects to use (including a pencil and chair), and got back a sequence of steps that included “straighten the pencil by placing it between two sturdy objects such as the legs of the chair and gently pushing down on the middle of the pencil until it is straight”. There are layers of problems there: pencils are already straight; you only need one sturdy object to straighten something; if you did need two, they would presumably be close together in a way that chair legs are not.
It has taken me so long to finish this issue of Things that it feels like the generative AI hype has settled into a – perhaps shallow – trough of disillusionment, and I think the above concerns are now widely recognised. The use-case of someone who is already competent at writing code using ChatGPT to help them seems pretty strong.
Generative AI + Metcalfe’s law = massively expanded collaboration
In terms of interesting new paradigms that are unlocked, this is quite frivolous but may be a sign: the Mona Lisa AI Cinematic Universe.
First, an emergent format on the ChatGPT subreddit is to generate an image from a prompt, then generate more in sequence, incrementing something each time (e.g. a cool dude who gets cooler each time, or a marshmallow that gets angrier each time).
Then people subvert that format by deviating from the stated rubric to give a twist ending of some sort. Someone did “Average day in France“, so the increment is time – but the man ends up stealing the Mona Lisa. People then started expanding on that story with a day in the life of different nations, and the whole thing spiralled out – see the diagram above.
What I think is interesting here is you have a collaborative silly comic, but many more people than usual can contribute much faster, because anyone can write a prompt. It’s not a terribly amazing new emergent art form, at least not yet, but it’s something I think is categorically new!
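The Metcalfe’s law part of the headline can be sketched with a back-of-envelope calculation. Metcalfe’s law counts the potential pairwise links in a network as n(n−1)/2, so value grows roughly with the square of the number of participants – and prompting lowers the barrier to participate. The participant counts below are invented purely for illustration.

```python
# Hypothetical illustration of Metcalfe's law applied to collaborative memes:
# the number of possible pairwise collaborations grows quadratically with the
# number of people able to contribute. All figures below are made up.

def potential_links(n: int) -> int:
    """Number of distinct pairs among n participants: n*(n-1)/2."""
    return n * (n - 1) // 2

# Suppose a community has 50 people who can draw a comic panel,
# but 5,000 who can write an image prompt.
skilled_artists = 50
prompt_writers = 5_000

print(potential_links(skilled_artists))   # 1,225 possible collaborations
print(potential_links(prompt_writers))    # 12,497,500 -- roughly 10,000x more
```

A 100x increase in contributors yields roughly a 10,000x increase in possible collaborations, which is one way to see why lowering the skill barrier can make an emergent format spiral out so quickly.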
Video Games: Superliminal
Superliminal has a bit of a tough time because the closest reference game is Portal: a fairly short, linear, mind-boggling puzzle experience with a cute narrative framing. But Portal was a ludicrously good game, setting the bar very high. Superliminal unsurprisingly can’t reach that bar, and felt to me like it took a little while to find its feet, but it gets close enough that I think it’s well worth the time.
It takes perhaps just 3 hours to play through, which I found to be ideal. I recommend diving into it knowing nothing else, but if you need more convincing you can watch this trailer that lets you know what kind of approach it takes to puzzles.
Video Games: Tangle Tower
I much prefer media that is outstanding in a few areas with a few flaws to anything that is uniformly good (but not great). I also love to see innovation in what a video game can be. This is exactly what I found in Tangle Tower.
Superficially the game most closely resembles a point-and-click adventure, but with a locked-room murder mystery framing. The ‘real’ game, though, is finding various clues, and talking to the nine suspects. You can talk to any suspect about any clue or any other suspect. That possibility space multiplies pretty quickly, and this is what enables you to try to be a ‘proper’ detective: by asking the most meaningful questions out of the very wide possible range. That can still get a bit overwhelming, but there’s a nice in-game hint system if you find yourself baffled or overwhelmed at any point.
What really sets the game apart is that even though the above design makes it dialogue heavy, every line is voiced, and the writing is great and the voice acting is brilliant, and the art and animation of the characters is stylised and fantastic! This completely elevates what could easily have been a slog (I have seen a lot of bad writing in games) to something I found consistently entertaining.
The ending was a bit disappointing, but I did not mind this at all as the journey was far more important than the destination. At around 6 hours to get through, I found this another highly enjoyable and reasonably short indie game.
TV series: Star Wars – Andor (Disney+)
Although I don’t have it in writing, I’ll always let people know that I anticipated the Star Wars universe as ripe for TV series from around the release of Episode I in 1999. It’s such a rich playground for stories of all kinds. What I didn’t properly understand then was that the budgets required to pull that off were not reasonable until the last few years, when the streaming wars pushed budgets up and advances in technology pushed the cost of special effects down to actually meet in the middle.
That said, despite being a weirdly huge fan of all of the Star Wars films (aaalllll of them!!!), I didn’t understand the hype around The Mandalorian, I found The Book of Boba Fett infantile (even for the kid-focussed Star Wars universe), and Obi-Wan astonishingly non-compelling. I was about ready to give up on the whole concept until people started saying how great Andor was.
It took a few episodes to get there but those people were absolutely right. Andor does what some of the best TV series manage to do (going right back to The Wire), introducing interesting characters on all sides of a conflict and playing things out in a compelling way.
I really hope the upcoming seemingly endless stream of Star Wars TV series continue to explore new tones and themes, as my original optimism for the whole endeavour is now fully reignited.
[Update: this Things has been so long in the writing that another series came and went: Ahsoka. It was… okay.]
Film: Spider-Man: Across the Spider-verse (2023)
Back in 2018, Spider-Man: Into the Spider-verse finally broke the mould in feature-length animation, introducing some brilliant stylistic innovation that has since been widely copied. I wasn’t sure how they could up the ante in a sequel, but they found a way – actually multiple ways. Anyone at all interested in animation, or in superhero stories with a bit of a meta theme, should seek it out.
Film: The Adventures of Buckaroo Banzai Across the 8th Dimension (1984)
This is something I can only recommend if you like weird/cult 80’s films and want an amazing example of how not to tell a story and introduce a world. Stand-out features:
- Peter Weller (Robocop) as Buckaroo, a does-it-all hero (musician, brain surgeon, scientist…), like a nicer but more violent Dr Who
- Also stars Christopher Lloyd and Jeff Goldblum!
- Had a budget similar to Star Wars (1977)… not all of which shows up on screen, but allows it to be a lot weirder than other bad films
- Features a sci-fi car accelerating to break a law of physics, and came out around 5 months before Back to the Future started filming. Interesting!
If that sounds at all interesting to you then do check it out. And if you do, I highly recommend following it up with the 7th episode of Guillermo del Toro’s Cabinet of Curiosities, ‘The Viewing’. Directed by Panos Cosmatos, this is similarly quite weird (although a lot more stylish and competently put together), but more importantly stars Peter Weller again, nearly 40 years on, in a role I enjoyed imagining as a much older Buckaroo Banzai after decades of weird adventures and a bit of time travel.
Podcast: The Sound: Mystery of Havana Syndrome
I’ve not got much into podcasts but this one was well worth seeking out. Nicky Woolf gets quite seriously investigative into exactly what is going on with the Anomalous Health Incidents (AHI) widely termed ‘Havana Syndrome’, with interviews with a very impressive range of relevant folks.
AHI have been variously attributed to sonic or electro-magnetic weapons, or psychogenic effects triggered by the sound of particularly loud crickets. I was left with the strong impression that, quite amazingly, all of those explanations are probably true to different degrees (although it’s EM rather than sonic weaponry that looks most likely).
The documentary also features some excellent original music, and while it occasionally veers into an overhyped sense of “what a dramatic new twist to this mystery!! This overturns everything we thought we knew!!!” it’s overall as clear and thorough as you could reasonably hope for in such a complicated topic.
Check it out here. (The name is not sufficiently distinct to just say ‘find it wherever you get your podcasts’)
Book: The Vegan Baking Bible
This book was not named lightly. Karolina Tegelaar is extremely intense on the subject of vegan baking, and from what I’ve seen of it so far the book lives up to the name. I particularly enjoyed her foreword, which Clare pointed out to me, and which reads like a mission statement carved into a stone tablet – as likely to scare someone off as it is to convince them to buy the book! Here’s an abridged version of it:
I hate the low standards that are so common in vegan baking. I have hated them ever since I became a vegan over a decade ago, when I realized what people would accept and what was served as vegan. The whole point of baking is that it should be luxurious and decadent. My feeling is that anything you bake that doesn’t taste really good is pointless. Therefore, this book is not just one baking book among many. It is not just about feel-good baking, it is packed with information. It does not just want you to bake cakes, it wants you to learn a new way of baking and make the world a better place at the same time. […]
There has never been a basic book about vegan baking, but one like this couldn’t have been written before as the methods needed to succeed did not exist until now. […]
I have developed and test baked all the recipes in this book many times so that you can succeed when using them. However, as I also discovered and developed many of the methods used, it is important that you read the instructions at the beginning of the book so that you understand how they work. Particularly important is the section on the different stages of whisking aquafaba, as otherwise it is easy to overwhip the aquafaba and the sugar, which produces poor results.
Wow! That’s really how it ends too. Aquafaba is critical.
Awful abbreviated aphorisms
Language is determined by usage, and the same thing is true of sayings or aphorisms. But what I find particularly fascinating is when that usage turns a saying completely on its head. When this happens, it tells us something about human nature.
Here are the examples I have collected so far.
Build it and they will come
People think this is a line from Field of Dreams, and it is used as short-hand for the idea that if you make or build something great, the world will notice and appreciate it. But in the film, nobody says this – the line is actually “If you build it, he will come”, referring to the ghost of Costner’s character’s father. Still, the idea of it does sort-of happen in the movie.
I think as humans we love the idea of this meritocracy. The problem is it’s just not true. My favourite example of this is the game Among Us, which is wildly popular, but existed for 2 years before it actually got picked up among streamers and became popularised. If the saying was true, the game would have taken off much sooner.
One bad apple…
The original “one bad apple spoils the bunch” gets shortened to “one bad apple” or “a few bad apples” – and in so doing completely loses the original meaning. When an organisation is found to have a few corrupt members, those in leadership like to characterise the problem as not pervasive. It seems unintentionally revealing that this is the phrase they fall back on: by describing the problem as limited to “a few bad apples”, they inadvertently invite us to consider that the actions of the corrupt few will spread to the rest.
Hell hath no fury like a woman scorn’d
From William Congreve’s 1697 play, the original phrasing is “Heav’n has no Rage, like Love to Hatred turn’d, Nor Hell a Fury, like a Woman scorn’d”. By adopting a fragment, the quote seems to be about women specifically, and takes on a vaguely derogatory and perhaps misogynistic tone. But if we remember the quotation in full we actually have a much greater and more important truth that tells us something about the kind of toxic fandom we see today.
Great minds think alike
Many aphorisms are not so much great truths as they are short-hand for an idea. For example, “Many hands make light work” sounds good but if you want to argue the opposite you say “Too many cooks spoil the broth”. In this particular example, the aphorism and counter-aphorism are wrapped up in one when given in full, the full thing being “great minds think alike, but fools seldom differ”. Taken as a whole, it tells us that agreement does not imply rightness or wrongness. But it seems we like the idea of social proof so much that we only keep the first half.
Last time I listed some songs where the motif of repetition implied endorsement, sometimes to weird effect. More generally, it’s easy to assume any topic sung about is an implied endorsement of whatever the lyrics are saying. This doesn’t work if a singer is being sarcastic or satirical.
Randy Newman wrote a song that was very mean about Short People (sample lyrics: “They got little cars that go beep, beep, beep; they got little voices goin’ peep, peep, peep”). The song is of course meant to be a satire about prejudice, and indeed has lyrics in the bridge running against this prejudice, but some people still took it seriously and he even received threats about it.
(People don’t notice the countervailing bridge lyrics in much the same way as they don’t notice Meat Loaf giving the exact list of things he won’t do for love: the lyrics are simply less audibly / catchily delivered)
Janelle Monáe’s “Americans” swings between two very different viewpoints, which will confuse an incautious listener, including things you wouldn’t expect her to say such as “I like my woman in the kitchen, I teach my children superstitions”. In her case, I think her general vibe makes it pretty clear these statements are not intended sincerely.
Dire Straits’ ‘Money for Nothing’ has some very catchy turns of phrase intended to denigrate rock stars: they get “money for nothing, and chicks for free”, and so on. Mark Knopfler has described how he was inspired by a man working in an appliance store commenting on the music videos playing on MTV on the display televisions. It seems that from Knopfler’s position these remarks were amusing since they are pithily expressed but untrue, coming ultimately from a place of envy. However, if this is the ‘joke’ it certainly looks like an example of ‘punching down’, and I would say… that ain’t working.
- Transmission ends