
Things 2025 Q3: Neuromancer, Expertise, Consciousness

Brian David Gilbert (BDG)

BDG makes the kind of weird videos I wish I had my act together enough to make. Here are two examples that show his range.

A short song about hats:

A slightly spooky promotional video about get-rich-quick schemes:

I think we need more of this kind of thing, so please go watch all of his videos and even consider his Patreon.

(Side-note: BDG reminds me of Spike Jonze both as a person and for his creative output.)

Neuromancer and the Prism of Hindsight

I recently read William Gibson’s 1984 debut novel ‘Neuromancer’, the foundational cyberpunk text.

It projects ahead to an unspecified time in which everything is online, and hackers enter some sort of cyberspace ‘matrix’ to conduct various shenanigans. It’s also very much about cybernetic enhancement, with some consideration of AI and some space business. It’s noirish and fast-paced but also dense and intense. There’s a lot going on.

My first thought was that being written in 1984, it seems astoundingly prescient about the future online world.

Then I read that Gibson didn’t really know much about computers or networks; he just liked the language of it. So my second thought was that perhaps you need to be far enough removed from a thing, as he was, to see where it will lead.

Then I paid attention to the cover of the copy I had been lent (by Nick H), and realised something strange: it seemed to have some of the hallmarks of an AI-generated image.

Examples:

  • The core composition is a bit odd
  • The shape of the hairline is dramatically and weirdly asymmetrical
  • There’s a strange artefact on the hand that doesn’t seem specific or prominent enough to represent anything
  • The cityscape in the background has some repeating patterns that a human artist would probably try to avoid
  • Some of the domes in the cityscape seem unintentionally asymmetrical

Of course, this thing was published in 1984, and this art is by a human (Steve Crisp). I’m looking at something from the past through the prism of hindsight; in the context of a “futuristic” image, I’m primed to look for AI signifiers.

So comes my third thought: my thoughts on the book being astoundingly prescient also come through the prism of hindsight.
– I’m discounting everything that doesn’t really add up (the 3D visual interface isn’t realistic or sensible; there’s an eye-hacking thing (I think?) that doesn’t really make sense; the stuff in space seems very fanciful)
– I’m over-reading the things that were prescient (the everything-is-online aspect, the ability to leverage that fact to achieve powerful feats with ‘hacking’)
– I’m under-reading the parts that really weren’t prescient, at least so far (mostly the cyber-business and simulation aspects).

This doesn’t really diminish the book – it’s a fascinating and impressive work, building out its own strange reality, and inspiring The Matrix (1999) even more directly than I had assumed. You just have to be a bit careful when judging prescience.

Very Short Animal Videos

Thanks to the Reddit algorithm for serving me these tasty and very short animal videos. They are optimised for portrait, though, and I embed things as YouTube videos, so I’m not sure how well this will work:

“My cat will eat anything”:

Eating anything
by u/TheHenanigans in r/Unexpected

“Cat tries ice-cream for the first time”

he tried ice cream for the first time
by u/tuanusser in r/holdmycatnip

Sound needed for these:

Surprise:

Trying out a new de-corker when..
by u/fpotw in r/funny

The Paradox of Expertise

An exchange I saw recounted online and can no longer find went something like this:

A: Oh, do you consider yourself some sort of expert in vaccines then?

B: Well yes, I studied medicine and specialise in vaccines

A: Don’t you think that makes you biased?

Humans are prone to confirmation bias. We tend to give heavier weight to things that support what we already believe, and lighter weight – or none at all – to those that contradict it.

What I find even more insidious is a kind of second-degree confirmation bias: we discount someone’s remarks as being due to their confirmation bias… due to our own confirmation bias. For example, someone might doubt a particular bit of well-evidenced medicine, but when they hear a medical expert defend that thing, they assume the expert is only defending it due to the expert’s own confirmation bias.

Without getting deep into the concept of hierarchical trust networks, this is quite difficult to cleanly dissect, because confirmation bias is a real thing.

For example, you may recall that Researcher Bias exists*: a researcher who believes that an experiment will yield a certain outcome is more likely to end up getting that outcome, even if they are not intentionally manipulating the experiment to that end.

*But aren’t the studies looking into Researcher Bias suspect? As I wrote about in Things 133, a meta-analysis and even a meta-meta-analysis cannot satisfyingly answer this question.

You also see this in Planck’s principle: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it” – colloquially and bleakly paraphrased as: Science progresses one funeral at a time.

Or the Upton Sinclair quote:

It is difficult to get a man to understand something, when his salary depends on his not understanding it

If all this sounds a little vague and theoretical, I recently faced it head-on: a small stand of bamboo began to spread in my garden. I know that some species of bamboo can spread very aggressively and do real damage. So what I need is an expert who can identify what kind of bamboo it is, and then I’ll know whether I need to pay some other expert to help get rid of it.

The trouble is that those two experts are the same company. They will assess the bamboo for you, and then, if they think it needs to be removed, they will offer you the (quite expensive) service of removing it. The obvious question is: can I trust them to diagnose it correctly if they know they can make money from one particular diagnosis? (My best answer was to at least consider the opinions of two different experts.)

The same can be said of any product where the amount you should use is unclear. How much aspirin should you take? How much sunscreen should you apply, and how often? Which collection of skincare products do you actually need? The people who make these things should really know the answers, but they also make more money if they can convince you to use more than you need.

Infamously, Alka-Seltzer increased sales by normalising the use of two tablets instead of one through their advertising (and tagline, ‘plop plop, fizz fizz’, or ‘plink plink fizz’ in the UK). Still, the origin story (Snopes link) does at least suggest this did originate with a doctor suggesting two would work better than one.

This also runs the other way – a product could offer a legitimate advantage, but by default we don’t believe it when they tell us. I recall the story of a certain battery manufacturer having a significant research breakthrough making their batteries as much as 20% more efficient, an advantage they kept for a few years. Unfortunately from the consumer perspective, all batteries are claiming some kind of mysteriously special efficacy, so it’s hard to trust any one of them as being particularly meaningful. (I wish I could remember who this actually was!)

One possible answer here is Which?, who try to assess consumer product effectiveness with scientific tests. Of course, when they find a product doesn’t do what it should, the manufacturer will usually counter that they didn’t test it properly, and claim that they have a better understanding and more accurate test of their own product. Depending on the product in question I tend to find this more or less compelling.

So what, really, should we do about this?

In some cases, as I alluded to earlier, there are ‘trust networks’. I don’t need to trust a single vaccine expert on their effectiveness, because they are endorsed by thousands of disparate experts, and disparaged by a small number of non-experts (who can also have their own biases, if for example they are selling an alternative).

In other cases the direct incentive structure seems to run very strongly one way – it doesn’t seem to me that climate scientists finding evidence for climate change benefit from that conclusion anywhere near as much as climate-deniers trying to sell you an online course about their views benefit from people believing their denial.

For substances such as sunscreen and painkillers, the proper quantity to use tends to be endorsed by professional bodies, not just the people who sell them. In the case of painkillers you are of course free to experiment with a lower dose and judge the results for yourself.

When it comes to academic research, you can often look into the funding source. If a study casting doubt on climate change is funded by a big oil company, maybe it’s worth looking for other studies.

It feels like I’ve climbed all the way up a mountain of concern only to climb all the way back down again, so, er, maybe it’s fine???

Emel – the Man Who Sold the World

I enjoy David Bowie much more as an actor (Labyrinth, The Prestige) than as a musician, but this cover of ‘The Man Who Sold the World’ stopped me in my tracks. Emel’s delivery seems much more suitable for the slightly spooky lyrics than Bowie’s, and the extended glissando vocal at the end was so compelling I bought a Theremin (this one) in an ultimately misguided attempt to find a way to make a similar sound.

The Science of Consciousness

Here’s the ‘hard problem of consciousness’ as David Chalmers puts it:

It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to such a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.

I had previously thought there was nothing interesting here. We only have our own experience of consciousness to go on, so it seems unjustified to consider it “objectively unreasonable”; this is just how it turns out and there’s nothing more to say. (Previously I wrote about Chalmers’ other formulation, the meta-problem of consciousness, although perhaps I misread his intent.)

I read the book “Being You: A New Science of Consciousness” by Anil Seth, and I’m excited to have slightly changed my mind as a result!

An argument against there being anything about consciousness to dig into is the ‘philosophical zombie’: a creature that in every way resembles and reacts like a normal human but lacks consciousness. This is easy to imagine, and suggests there’s nothing you can “do science on” because there’s no way to distinguish the zombie from a human that does have consciousness.

Seth makes this counter-argument: “Can you imagine an A380 flying backwards?” In one sense, this is easy – just picture a large plane in the air moving backwards. But “the more you know about aerodynamics and aeronautical engineering, the less conceivable it becomes”. The plausibility of the argument is “inversely related to the amount of knowledge one has”.

One could imagine the same thing applies to consciousness – it does seem like if you deeply understood the way in which consciousness arises, “imagining” a philosophical zombie would be a lot harder. That does seem fair to me!

But still, how do you find a way in to this topic?

Seth’s answer is what he calls the ‘real problem of consciousness’: to explain, predict and control the phenomenological properties of conscious experience. Still difficult, but at least something specific to aim for.

His first way in is to consider how we might measure how conscious someone is – specifically the level of awareness rather than wakefulness. The diagram below shows how different states sit across these two axes.

So we’re looking for some kind of measurement that would show regular conscious wakefulness as having a similar level to lucid dreaming, for example.

He talks about some interesting research showing that a measure of the complexity of electrical signals in the brain seems to correlate well with what we think of as consciousness. Even better, there are tests that can distinguish someone with ‘locked-in syndrome’ (conscious and aware but unable to move any part of the body) from someone in a ‘vegetative state’.

A simpler precursor to the complexity measure is this: simply imagining playing tennis produces a detectably different pattern of brain activity to imagining navigating a house. These two kinds of thought can therefore be mapped to ‘yes’ and ‘no’, enabling someone with locked-in syndrome to communicate. This dramatically debunks my thought that there was nothing useful to look into here!

Unfortunately, the rest of the book gets quite a bit heavier and less compelling.

First there is Giulio Tononi’s “Integrated Information Theory” (IIT) of consciousness. Put tersely, it posits that consciousness is integrated information – a huge claim, as it arguably means even atoms are perhaps a ‘little bit’ conscious. It proposes a very specific measure of consciousness, Φ (phi): essentially, how much an information system is more than the sum of its parts.
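
As a toy illustration of the ‘more than the sum of its parts’ idea – emphatically not Tononi’s actual Φ, which is far more involved – here is a sketch (in Python, my choice) of a much simpler quantity, total correlation: zero when a system’s parts are statistically independent, and positive when the whole carries structure the parts alone don’t capture.

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (in bits) of the empirical distribution of samples."""
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def total_correlation(rows):
    """Sum of per-variable entropies minus the joint entropy.

    Zero when the variables are independent; positive when the joint
    system is 'more than the sum of its parts'.
    """
    n_vars = len(rows[0])
    marginals = sum(entropy([row[i] for row in rows]) for i in range(n_vars))
    joint = entropy([tuple(row) for row in rows])
    return marginals - joint

# Two perfectly coupled bits: each marginal carries 1 bit,
# but jointly there is only 1 bit -> integration = 1.0
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
# Two independent bits: joint entropy equals the sum of marginals -> 0.0
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]

print(total_correlation(coupled))      # 1.0
print(total_correlation(independent))  # 0.0
```

Actual Φ additionally asks how much information is lost under the worst-case partition of the system, which is part of what makes it so hard to compute in practice.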

This theory doesn’t seem to go very far just yet. Seth’s summary of where it is at:

…some predictions of IIT may be testable […] there are alternative interpretations of IIT, more closely aligned with the real problem than the hard problem, which are driving the development of new measures of conscious level that are both theoretically principled and practically applicable.

So it seems we just have to wait to hear a bit more about that.

Next is Karl Friston’s “Free Energy Principle” (FEP). Here, the term ‘free energy’ can be thought of as a quantity that approximates sensory entropy. The clearest summary Seth makes is this:

Following the FEP, we can now say that organisms maintain themselves in the low-entropy states that ensure their continued existence by actively minimising this measurable quantity called free energy. But what is free energy from the perspective of the organism? It turns out, after some mathematical juggling, that free energy is basically the same thing as sensory prediction error. When an organism is minimising sensory prediction error, as in schemes like predictive processing and active inference, it is also minimising this theoretically more profound quantity of free energy.

This is not really a theory of consciousness but, Seth considers, something that will help explain consciousness eventually. I get the impression Seth understands this enough to see how it might be of value, but not well enough to explain it so others can see that – at least not me.

Finally Seth considers the possibilities of animal and machine consciousness, and largely concludes it’s very hard to say anything about these, which is a bit disappointing but is also quite fair.

To summarise, I thought there was nothing useful to say or do about consciousness, but after reading ‘Being You’ I now think that’s wrong; it seems like there is something to dig into here, but so far our theories are only just scratching the surface of it.

(If you want a more detailed recounting of the book with added commentary, not all of which I agree with, you can read this long review by ‘Alexander’ on LessWrong.)

  • Transmission ends

Things 2025 Q2: Constraints, Polarised light, Kuleshov, Infohazards

Artists vs. Constraints

The medium of any given art form imposes constraints on the art itself, or encourages certain properties in it.

For example, movies and theatrical productions are somewhat constrained in duration by the capacity of the human bladder. Paintings tend to be painted at certain scales that are easier to perceive, to distribute, and to display. Writers paid for serialised content (such as Charles Dickens or people writing for a long-running TV series) can often see better financial returns if they can find a way to spin the story out for longer.

Here are three examples of this I find particularly pointed.

1 – Tintin

The Tintin comics were originally serialised in a newspaper supplement one page at a time. To entice people to purchase the next newspaper, it helps if there’s a cliffhanger of some sort at the end of each page. I found it very hard to read a collected Tintin comic in full once I spotted this pattern, because it turns out just about every single page contrives a cliffhanger, sometimes in a very silly way.

Two classic end-of-page cliffhangers
Two of the milder end-of-page cliffhangers
Example of a decoy end-of-page cliffhanger

2 – Lubalin internet drama

Lubalin is a musician who (among other things) sets internet drama to music – that is, he takes slightly deranged exchanges (typos and all), and arranges them to music. Execution is everything, so check out part 1 as an example:

You can check out the rest in the series here.

The constraint here is that the currently dominant algorithms (specifically TikTok and YouTube Shorts) really favour one-minute videos. In a ‘making-of’ video you can see the edited highlights of Lubalin constructing one of these songs. The notable moment comes at 8m 42s (link to that timestamp) when he plays what he has developed in full and is horrified to find it is quite a bit longer than a minute! So he has to adapt.

Noooo!! It’s like… double the length!

You get to see the constraint impact the art directly – and I think you can see how it slightly helps, but also slightly hinders. (You can skip to the final 1m song here)

3 – Calvin and Hobbes

Finally, Bill Watterson’s comic ‘Calvin and Hobbes’ was syndicated to newspapers between 1985 and 1995. The constraints on the Sunday supplement colour format were particularly harsh: panels 1 and 2 had to be optional (as they are sometimes dropped for space), and there had to be panel breaks in certain places so the comic could be reconfigured as necessary to fit a full, third or quarter-page format.

Panel breaks must fall where specified (but you can have more). Panels 1 and 2 must be optional. Sheesh!

Eventually the strip grew popular enough that Watterson was able to mandate a single full-page format – which is still a constraint! – with some excellent results. Check the 25 examples here to see the range possible (note that only the colour ones are Sunday strips, where these specific rules applied or were eventually escaped).

What do we conclude from this?

I like the two opposite reactions one can have:
– Take something wildly inappropriate for the constraints of the medium and try to cram it in anyway.
– When you can, question those constraints and see what you can achieve if you break some of them.

Perceiving Polarised Light with Haidinger’s brushes

This is incredible to me: it’s actually possible to perceive polarised light with your very own human eyes! Find an area of pure white on a polarised LCD screen (very likely whatever you are reading this on), then tilt your head from side to side. Faint bow-tie-shaped areas of yellow (and apparently blue, though not for me?) will be dimly visible for a moment as you tilt your head, caused by the polarised light. Read more here:

https://theconversation.com/polarised-light-and-the-super-sense-you-didnt-know-you-had-44032

An advert where a bear directs a film

This is one of those adverts where the creative agency has so much fun with the execution that the purpose of the ad seems a bit of an afterthought. I love it!

This thing is 13 years old now and I keep going back to it every few years so I’m giving it official Thing status:

Bonus bear content via Clare:

Wildheart Animal Sanctuary on the Isle of Wight recently rescued two bears that had grown up knowing only cages and concrete – article here. They raised the funds for a purpose-built large natural enclosure. The bears arrived recently and have been exploring their new home. You can try to spot the bears via one of the live-streamed cameras, or watch them explore their new habitat in recent videos such as this one.

The Kuleshov Effect (via Josie)

Terse description from Wikipedia:

“The Kuleshov effect is a film editing (montage) effect demonstrated by Russian film-maker Lev Kuleshov in the 1910s and 1920s. It is a mental phenomenon by which viewers derive more meaning from the interaction of two sequential shots than from a single shot in isolation.”

When moving pictures first became possible, it was not obvious that a ‘cut’ would be something a viewer would accept, and early movies were often presented with very few cuts following the established art form of the play.

It turns out several dramatic things happen in the human mind with a cut:
1) An instant change in perspective is … just completely acceptable!
2) Much like Grice’s maxim of relation in language, we assume a cut within a scene has meaning – for example, if we see a character notice something off-screen, then cut to a thing, we assume they are looking at that thing. The mild version of the Kuleshov effect.
3) Building on top of the above point, the full surprise of the Kuleshov effect is that we may even overinterpret the images either side of an edit to have them make more sense!

You get essentially the same effect in comics or any sequence of panels, for example just putting these two images together implies they are related, and our mind invents a story:

You can also see the same effect distilled in GIF form:

You also get a version of this when giving a slide presentation – you can accompany some text or spoken word with an illustration, and the mind of the viewer will automatically interpret them together. This is why a presentation can be improved even with some barely relevant images!

Beyond that, I think there are even weirder effects somehow going on that relate to how we process movies:
4) Non-linear editing, in which edits go forward and back in time (as in Memento, The Prestige, Speed Racer), is also fully comprehensible, if handled carefully
5) Non-diegetic music (music that is not happening in the scene, e.g. the sound of a John Williams score in a space battle) weirdly doesn’t seem weird

One of my favourite examples of this: from pure sound design and Kuleshov effect, Rian Johnson conveyed Rey and Kylo communicating with one another across space in The Last Jedi with nothing more than an edit.

Another corollary is that you can splice new footage into old, and if you take advantage of how we interpret edits, you can recontextualise the old footage to make it seem as if, for example, Star Wars characters are drinking Cristal Beer.

Anti-basilisks

Before we begin, some necessary context: an “infohazard” is information that could harm people if known. There are many tweaks, shades and nuances to this; you can find ‘fun’ examples over at the SCP Foundation (their wiki page on the topic; search for SCP entries tagged as infohazard here), or this very enticing trailhead I have not followed at Lesswrong: “a typology of infohazards”.

Below I’m going to describe something that some consider to be an infohazard, or just dangerously adjacent to one. I will go on to explain why actually I don’t think it is (although the adjacency remains possible). This is the last Thing in the post, so if that freaks you out you can just stop now!

I will give you some space to do so.

I’m referring to Roko’s basilisk, which is nicely summarised on Wikipedia. It’s a theoretical superintelligent AI that we could build which would “punish anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivize said advancement”.

So, you see the problem. It’s clearly a very stupid thing to build, but now you know about it, so you can infer that other people know about it, which means that some of them might eventually have the means and motivation to build it, which in turn maybe means you should help them do that to avoid this ‘punishment’ – probably at minimum by telling more people about it!

The first version I encountered contained the threat that even if this basilisk does not come about until long after I’m dead, nonetheless it would review history and then simulate the minds of the non-contributors in order to punish them. At first I felt no concern about what may happen to some theoretical simulated me, but I suppose the real threat is that the ‘me’ right now might actually be the simulation it is running rather than the original me, and as such I am not safe!

So, maybe if you squint hard you should worry about this a bit, but my counter is that if such a thing is possible, many other similar things are possible that could easily cancel this out.

The anti-Roko’s-basilisk

A superintelligent AI that destroys any instance of Roko’s basilisk it can find, and will reward (or retroactively simulate rewarding) anyone who helps bring it about.

The double anti-Roko’s-basilisk

It’s like the anti-Roko’s-basilisk but it also specifically rewards anyone that would be punished by a Roko’s basilisk twice over (perhaps by simulating two instances of them having a great time), and anyone who helps build it three times over. So you should definitely build this one, and even if you don’t, maybe other people will, and it more than cancels out the antics of Roko’s basilisk.

At this point I think it’s pretty clear we’re just inventing imaginary monsters having imaginary fights, making the whole thing seem quite childish and inconsequential, and certainly not motivating enough to start worrying about making or not making any such things.

Anyway, just to keep things sufficiently spooky, let’s talk about adjacency.

LessWrong was founded by Eliezer Yudkowsky, and when LessWrong user Roko posted the original formulation of the basilisk, Yudkowsky gave an uncharacteristically blunt response and banned the topic for 5 years (as recounted in the Wikipedia article). It did look like a bit of an overreaction, but as that article goes on to recount, he later explained (or post-rationalised) his reaction, most notably with this:

“The problem was that Roko’s post seemed near in idea-space to a large class of potential hazards, all of which, regardless of their plausibility, had the property that they presented no potential benefit to anyone.”

I find that idea quite compelling. It’s very hard to know how large this hazardous part of idea-space is, but it seems like it could be non-zero. I can imagine using the basilisk concept as a springboard into conceiving the most hazardous ideas possible, but at this time I am choosing not to do that, because there really is no potential benefit to anyone.

What about the opposite… the infobeneficial? Well, hopefully that’s everything else you find in Things, at least to some extent.

  • Transmission ends

Things 2025 Q1: Muji music, Russian disinformation, Reincarnation

Exercise for the reader, part 1

I found this very simple two-part thought exercise incredibly powerful. It’s more effective to separate the parts out, so I’ll post the first here and the second at the end, along with the source.

In your next life you can choose to be reborn as anyone, with any job, anywhere and any time – what do you choose?

I recommend giving this some thought! Feel free to think of a silly answer and a serious one, or just as many as come to mind.

Unused Muji soundtrack

In 1983 for the opening of their first store in Tokyo, Muji commissioned Japanese music legend Haruomi Hosono to create a soundtrack. As Jen Monro sums up in this excellent overview, the tracks he produced are “not as neutral, or even chipper, as one might expect for storefront use: they willfully stray into eerie, dissociative territory, suggesting hypnosis and foggy, dreamlike states.”

That’s great, but the cherry on top is that a YouTube upload of these tracks has inspired a pattern of upvoted comments in which people provide whimsical descriptions of what the music (specifically the first track, I suspect) sounds like to them. Samples below, best read while listening to it:

“can only assume this is what it feels like to be a fungus”

fadesblue

“This is the music that plays when you get to the end of youtube.”

Carcosahead

“this is the music that plays as the credits roll on the movie of your entire life, the theater is dark and empty except for you and you know you’re going to have to get up and leave soon and you’re okay with that but you want to sit and watch all those familiar names scroll past for just a little longer.”

midnightcthulhu5551

Use-cases for text-based AI

I remember back when Wolfram Alpha was released in 2009, I tried to figure out how to fit it into my mental model of online tools. For example, as well as conveniently solving some maths problems, you can also just ask it “How old was Mark Hamill when Star Wars came out” rather than go to IMDb/Wikipedia and do some maths yourself. Well, I didn’t manage to adapt to it very well and hardly ever remember to use it in practice.

Now we have a dramatic increase in capability with AI tools of many different kinds, and once again we need to work out how best to use these new tools. (There’s also a huge rabbit-hole of data-source ethics and workforce implications which I am putting to one side for now; if you want a blog that gets more into AI stuff John B recommended Interconnected, so try that!). I feel like I’m being quite slow at picking this up, so I thought I’d share my pretty basic use-cases and ask you, the Things readers – what do you use AI for? Let’s just focus on text for now.

Examples:

  • Answering vague queries, like “what was that film from the ’80s where there was a portal and weird monsters came out of it”
  • Summarising long text (although I’ve found the compromises in accurate insight too great to rely on this)
  • Generating a terse summary of leads on a research topic that you can then follow up via more reliable means (e.g. what are some considerations for building an interstellar spaceship)
  • Code (taught to me by Beinn): Use Windsurf to get some quick game prototypes up and running – in practice I am still so far out of my depth on this that even incredibly helpful AI can’t really help me make what I want!
  • Weirdly, technical help (e.g. I was struggling to find a certain system option on my Macbook, and even Google’s AI had the solution better covered than regular Google search which surfaced irrelevant answers for different make/model/OSes)

Examples I know of others using that I can’t quite get my head around:

  • Using AI to coach you on challenging conversations
  • Giving AI several complicated documents (e.g. small print of different insurance options) and asking it to make a decision for you that relies on understanding the full contents of the documents

So, how do you use text AI?

Animated film sequels: getting worse, doing better

I remember the old rule-of-thumb for sequels was that they would make about 2/3 the box office of the original. This might have been a bit of a self-fulfilling prophecy as studios might invest less in the sequel given how reliable that revenue could be regardless.

In recent times, with franchises making so much money, some of that calculus has changed, and my sense is the success of a sequel is much less predictable.

Most notably though I realised animated sequels seemed to almost always make more money than the original. To test out that hunch, I charted the difference in global box office for each of the top grossing animations with sequels, and put it against the difference in IMDb rating. The results are pretty dramatic:

Sure enough, every animated sequel made more money than the original – and, with the exception of wild outlier Ne Zha and also Spiderverse, each was also rated worse on IMDb!

It feels like animated films in particular are being chosen by parents who have a strong desire to find something reliably entertaining for their children. The fact a film got a sequel is an endorsement (Ian’s suggestion), and children can also show a very strong interest in the original on home media, which gives parents more confidence to take them to the cinema for the sequel. These effects may even artificially reduce the box office of animated originals!

The largely consistent decline in IMDb rating of sequels could be covered by the effects I wrote about in Paradoxic Fandom.

Russian Disinformation

I read a long time ago that Russia had state-funded ‘troll farms’ generating content on social media with the intent to manipulate the Western audience towards their own ends. Having worked in marketing, I was doubtful about how effective this could be as I knew how hard it is to shift anyone’s opinion.

But first, a weird tangent before we go on:

  • Web analytics tools will frequently use extra text in a link in order to report information about it, for example adding something like “source=potatoes” to indicate a link came from this blog
  • By default this data is not sanitised, so you can manually edit the text of a URL (for example change ‘potatoes’ to ‘hello-world’), and when you then follow that URL, you pass through a fake campaign name which an analyst may then see. For example, when I worked at Skype and looked at website visitors by source, I saw 1 (one) visit from a campaign called “i-hate-bill-gates”!
  • On my own websites, I would routinely see clicks from these manually-faked campaigns where the text they had added was for some kind of website they wanted me to purchase things from – it’s a spam vector!
  • This problem got so bad I even started to see spam links selling the ability to stop this from happening (meta-spam!)
  • … but to return to the original point, in 2016 I got a wave of these fake campaigns all saying words to the effect of ‘elect Trump’
  • The fact someone somewhere was doing that to such an extent that even my tiny websites were caught up in it tends to make me believe a larger operation was at work, but of course I can’t infer who.
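To make the tangent concrete, here’s a minimal Python sketch of how such a campaign parameter gets read, and how an allow-list could filter out spoofed values (the parameter name and allow-list entries here are hypothetical – real analytics tools vary):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical parameter name and allow-list – analytics tools vary.
ALLOWED_SOURCES = {"newsletter", "potatoes", "twitter"}

def campaign_source(url: str) -> str:
    """Read the 'source' campaign parameter from a URL.

    Because anyone can type anything into a URL, values not on the
    allow-list are reported as 'unknown' rather than trusted."""
    params = parse_qs(urlparse(url).query)
    source = params.get("source", [""])[0]
    return source if source in ALLOWED_SOURCES else "unknown"

print(campaign_source("https://example.com/?source=potatoes"))           # legitimate
print(campaign_source("https://example.com/?source=i-hate-bill-gates"))  # spoofed
```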

End tangent

The idea that Russia’s online efforts might actually be effective gains credibility for me when I recall two things:

  • Social media has a strong pareto effect: a very small number of people account for a very large number of posts.
  • We often form our ideas about what is happening not by careful consideration of credible sources, but by what we tend to see evidence of repeatedly (e.g. a newspaper repeatedly reporting on a particular type of crime makes people think it’s a big deal, rather than a careful consideration of crime statistics)

If you then combine that with social media’s built-in tendency of pushing inflammatory content (because algorithms prioritise engagement and this is one of the easiest ways to get it), it suddenly becomes much more credible to me that this sort of campaign could help drive the kind of increasing polarisation we’ve seen in the West.

Despite all of that, it still felt a little bit like a conspiracy theory to me, which is why this Reddit post is very helpful in substantiating the idea – it gives a long sequence of examples with credible citations for each, making its primary contention, “You’re being targeted by disinformation networks”, very credible:

Even having read that, I think it’s still easy to forget. For example, I saw a Reddit post by someone saying they worked for the US Government and will soon be fired; their Republican-voting parents’ response was that “there are plenty of jobs at McDonalds”. How heartless! Their own child!! These Republicans!!!

Now, it remains possible that this is a true story (actually because of this very problem), but it does also seem like exactly the kind of thing you might fabricate if you wanted to further polarise things.

At minimum we should remember that this sort of content is anecdotal evidence of behaviour at best, so should be considered relatively low in terms of how much it shapes your opinion on what is really going on.

(Of course, this is just one aspect of Russia’s grey-zone aggression; this Observer summary of interference in democracy in Moldova is quite salutary and has this excellent quote: “Moscow wants to show that it can use all measures short of outright invasion to keep nations it sees in its “zone of influence” chronically destabilised.”)

An ethical interaction with Sugar Gliders

I found this promotional flyer for ‘Cuddly Colony’ in Brighton. I just really love the way they promote this thing – looking at cute animals in a very serious and carefully considered way:

Do note that it’s about £45 for this ethical interaction and I can’t vouch for it personally.

Exercise for the reader, part 2

As a reminder, part 1 was as follows:

In your next life you can choose to be reborn as anyone, with any job, anywhere and any time – what do you choose?

Do you have something in mind? If not, just think of something now! The first thing that pops into your mind may well be springing from your subconscious!

With that fresh in your mind, here is the second part:

Given you desire that alternative life on some level, and that reincarnation is not real*, what could you do in your own life to get some way towards that desire?

For example, if you imagined being a cowboy, could you at least go horse-riding? If you wanted to be an astronaut, perhaps you could get a telescope? If you want to be an author, could you just write a short story?

The world is so full of incredible possibility (as a random example, you could go to Asda and buy an item for each letter of the alphabet), that it can be very hard to work out what to do, especially over the long run. I found this exercise extremely helpful.

I encountered it in Julia Cameron’s The Artist’s Way, a book that Clare got me and that I found very inspirational. It is based on Cameron’s long-running workshops to help people be more creative, and to overcome creative block (or really anything that acts as a block to creativity). Like Alcoholics Anonymous, it is a 12-step program, and, also like AA, it weirdly involves God – but Cameron helpfully outlines how the process can be made useful even for an atheist. For example, she might say it’s helpful to have as a mantra “Great Creator, I will take care of the quantity. You take care of the quality”, but instead of God/Creator you can just make it a general trust in the process, or a faith in your subconscious, and the effect is much the same.

I highly recommend the whole book, to everyone, but if you were to take just 3 things from it, one is the above exercise, and the other two are these simple habits that facilitate the creative process:

  1. Morning Pages
    Each morning, write three sides of hand-written text. There is no goal of what to write about, and you should not re-read it or share it afterwards. You just write and see what comes out. My experience is this kind of cleans out the preoccupations that cloud your mind, often turning them into concrete actions for a to-do list, and this then leaves space for more creative thoughts. (In practice I only make time for this twice a week, but it is still useful!)
  2. The ‘Artist’s Date’
    Cameron’s mental model is that you have an ‘inner artist’ which is very childlike and needs pampering before it can create. To do this, you should take your inner artist (i.e. yourself) on a ‘date’: 2 hours each week doing something nice – something that appeals to you intuitively, something you do on your own, that can be inspirational or just expose you to new things, distracting you just enough that ideas can come to you naturally. This could be watching a film, going for a walk somewhere you haven’t been before, going to an art gallery – or (I think) a lot of the things you might do while listening to a podcast, just without the podcast so you are free to think. This better enables the inner artist / subconscious to create moments of inspiration.


So, when being reincarnated, what do I want to do? A lot of things, but notably I really want to write stories and make weird games. I do already do a little bit of both, but I should do more!

*Reincarnation without memory is indistinguishable from no reincarnation in the life we lead today. Reasons to doubt reincarnation in general are left as an exercise for the reader.

  • Transmission ends

Things 2024 Q4: Reddit highlights, repetition, meteorites

LCD Soundsystem track of the quarter

For a moment there, Things appeared quarterly, and it just so happened I got particularly into one single LCD Soundsystem track each quarter: Q1 was New Body Rhumba, Q2 the music video for Oh Baby, and this then continued, but Things did not.

My 2024 Q3 LCD Soundsystem obsession was “How Do You Sleep”, which is widely read as an authentically emotional lament for a friend and business partner who went rather dramatically off the rails. After a sparse percussion-heavy intro, an incredible airy synth bass hits at 3’38”, ultimately carrying the lyrics to the dramatic denouement signalled up-front by the track’s name. I recommend listening to it once without concentrating too much, then reading the high-level story behind the lyrics, then later listening more carefully with the story in mind for maximum effect.

Then in Q4, out of nowhere Kottke recommended this 2011 manual mashup of “New York I Love You, But You’re Bringing Me Down” with a Miles Davis improvisation. Many mashups rely on a baseline familiarity with one or ideally both tracks, but this fully stands alone and is worth a listen (and watch, to see the combining at work on two Youtube instances, no longer possible due to a copyright strike against the Miles Davis track).

Repetition Legitimises

While we’re jazz-adjacent, this two-word phrase captures a brilliant nugget of human nature: repetition legitimises! If you hear a strange collection of notes it might not seem like music, but if they then repeat it legitimises it as a piece of music in the mind. Here’s where I first saw the phrase:

You get this even more dramatically with Steve Reich tracks like “It’s Gonna Rain”, in which the repetition of a spoken-word sample transforms the way you perceive it into something with a rhythm and a melody. Repetition legitimises!

https://www.youtube.com/watch?v=vugqRAX7xQE

Of course, this doesn’t just go for music. A lot of people are wearing masks for COVID, so people feel like they should too. A lot of people stop wearing masks for COVID, so people feel weird about wearing a mask again and stop. Repetition legitimises!

Your newspaper / social-media algorithm of choice keeps showing you stories about Outgroup X doing bad things. Wow, sure seems like Outgroup X just does bad things all the time! Better elect someone who says they will clamp down on Outgroup X. Repetition legitimises!

Meanwhile in my algorithmic feed

Given enough of your viewing data, Youtube – like a lot of short-form online content services – gets pretty good at recommending stuff you would like. One side-effect is that you might see something you think is really great and share it with friends – who are then not that interested, because it was actually algorithmically perfect for you and not many others. For example, I linked to a bunch of Youtube videos above that I love, but you probably only clicked on one at most and found it merely slightly interesting!

My other source of algorithmically supplied content is Reddit, which by now is fairly well-tuned at showing me things I’m either interested in or can’t help but pause to look at because they’re awful.

So, here are some fairly random things that turned up there that I think are pretty great. I’m curious how much your mileage may vary:

But, we can go deeper! Here’s some things that I also think are pretty great, based on the way they build on another weird thing I’m already familiar with. If you’re not familiar with the original thing these probably only half make sense, but you can probably decode what’s going on anyway? I’ll give some pointers after each image if you want to check.

Some Minnesota state flag submissions included the ‘laser loon’
Vince McMahon reaction format + baseline X-me familiarity
Repurposing of woman yelling at cat

Meteorite size vs. Recency

Meteors hit the earth all the time, and those that make it to the surface without burning up are called meteorites. Their size follows a power-law distribution (tiny ones are common, rather large ones are very rare), and as a result a plot of impact frequency against meteor size is roughly a straight line on log-log axes – exponentially larger meteors hit us exponentially less often:
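As a sketch of why a power law looks straight on log axes: if impacts of diameter at least d occur roughly k·d⁻ᵃ times per year, then log N = log k − a·log d, which is linear in log d. The constants below are illustrative, not fitted values:

```python
import math

# Hypothetical power-law model: impacts of diameter >= d metres occur about
# N(d) = k * d**(-a) times per year. k and a are illustrative, not fitted.
k, a = 1e5, 2.0

def impacts_per_year(d_metres: float) -> float:
    return k * d_metres ** (-a)

# Equal ratios in size give equal steps in log-frequency: a straight line.
for d in (1, 10, 100, 1000):
    n = impacts_per_year(d)
    print(f"{d:>5} m: {n:>10.1f}/yr  (log10 = {math.log10(n):+.1f})")
```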

Brought to my attention by Chris Impey’s book “How It Ends” (page 136) is a cute corollary: we can expect to find craters on earth of different sizes, and larger craters will usually be from meteors that hit longer ago (more on that later).

Here’s a few notable examples.

The Tunguska Event happened in 1908, about 116 years ago, when a 30m object exploded 6–10km above the earth, causing widespread devastation in a remote area and shattering windows hundreds of kilometres away.

Barringer Crater (aka “Meteor Crater”) in Arizona is estimated to be 50,000 years old and caused by a 50m meteorite, leaving a 1.2km diameter crater.

The Manicouagan Reservoir is an annular lake formed in the remains of a 100km diameter impact crater formed 214 million years ago by a 5km meteorite.

Finally, the Chicxulub crater is not as pleasingly obvious on a map, but was inferred from NASA’s Shuttle Radar Topography Mission STS-99. The diameter is 200km, the age is only 66 million years, and the asteroid that caused it is estimated to be about 10km. You may note the age aligns with the extinction of the dinosaurs, and indeed this is thought to be the origin of the Cretaceous–Paleogene extinction event.

So, I took these and a few other notable craters and made a chart of size vs recency:

You can check my data table on Google Drive here

The log-log axes are doing a lot of work to squash these points towards a line, but you get the general idea.

But I know what you, as Things readers, are thinking: this doesn’t quite add up. In particular, there are some other factors at work:

  • The upper right of the chart pretty much can’t be populated or you wouldn’t be reading this right now (you probably don’t get to reading-Things-level biology even 1m* years after a 10km meteorite crater event)
  • The bottom left ‘should’ be more populated as those smaller meteorites are more common – but we lose the ability to see those over 1m+ years due to natural erosion and tectonic plate shifting
  • Even accounting for the above point, this is not at all a complete sampling. I just picked the top obvious craters I could find. In particular, there will also have been many large meteorites that landed in the sea (or on land that is now undersea). See this nice map for reference.
  • Perhaps over a billion years the solar system is in general calming down, with fewer giant impact events in general (Josie pointed this out).

*Speculation by me

All told, just based on my chart of obvious craters and age, it’s tempting to think we’re kind of “safe” and that big meteorites are a thing of the past, and maybe we are a bit, but with all the above caveats that seems unlikely.

As some nice context, at the time of writing a 90m meteor has an estimated 2.3% chance of hitting the earth in 2032, with an impact similar to the Tunguska event.

Sauropod Gigantism

Dinosaurs ruled the earth for a very long time indeed, and many of them were enormous. Since the most recent giant meteorite helped put an end to them, we haven’t seen such gigantism on land. Why is that?

Making that question even tougher to answer is this great study, helped by recent decades of substantially more fossil finds, which shows sauropod gigantism evolved repeatedly over millions of years. The full article is here, but this is the lovely visualisation of that pattern – each coloured line is an example of gigantism emerging:

The direction of causality means in this case we can’t really say repetition legitimises. The article sketches out a few factors that push sauropods towards growth, but the question raised by the above chart remains, ultimately, unanswered.

My Album of the Year 2024

Inspired by a tree (specifically Tāne Mahuta), this John Metcalfe album fills me with wonder and hope even in the face of probable cosmic annihilation, especially when it gets into Night.

Youtube Tree album playlist

  • Transmission ends