The scarcity of novelty

i

It’s only natural to focus on the abundance side of the new tech we now have. It’s exciting — we can now do so much more with AI! We can build ultra‑specialised software; ship faster than ever before; the entire world’s expertise is at our fingertips. Abundance is great. But historically, fortunes were not made from newly found sources of abundance. All the great success stories of the past were about scarcity rather than abundance.

Groundbreaking new technology appears and solves a class of problems that were impossible or expensive to solve before; it spreads like wildfire; that changes how people do things; and that, in turn, makes something scarce that wasn’t scarce before. That second‑order scarcity is often the much bigger opportunity, or at least the more promising direction to explore if you are picking one, because everyone and their dog is already trying to benefit directly from the new tech, whereas few are looking for the less obvious things.

This writeup is an attempt to explore this idea, which tbh isn’t even fully formed in my head yet. I don’t yet have comprehensive historical references at hand, only a vague intuition that the great innovation arcs of the past, like railroads, electricity, and automobiles, might have shared similar properties with what is unfolding now. Let’s see what we arrive at.

It’s worth noting that the current flood of AI‑generated stuff didn’t start with LLMs; it started much earlier. Long before the full‑cycle entertainment machine was complete, we had most of the moving parts in place. Recommendation at scale is arguably the harder part of it, because it requires knowing every user’s preferences intimately, which takes time. This tech was pioneered by the social media majors when they moved away from pure social graphs to recommendation, and later perfected in short‑form video feeds. The only human‑generated part remaining was the post content itself, and now that is gone too.

So we now have an abundance of personalised entertainment on tap. I’d rather not go into the ethical side of it; many implications are fairly obvious, and the ones that aren’t receive sufficient attention already. What’s of particular interest to me here is what this newly found abundance makes scarce, which, if my historical intuitions are right, is fertile ground for new business ideas, because a) the new tech, which today is AI, is a tailwind, and b) there is much less competition, since most entrepreneurs are looking for first‑order opportunities to apply the new tech directly.

I got the first hint of this idea when I noticed that some people I have deep respect for started posting stuff on X that clearly didn’t sound like them. It was coherent and the ideas made sense; it just wasn’t their voice. If you read even in moderate amounts, you probably know that you can often name the author simply from reading a short excerpt, because people have unique writing styles just like they have a voice or a gait. So noticing people I know suddenly speaking in a different voice was a bit unexpected.

Then I started noticing that many of the popular posts on X began displaying something that can be described as “purposeful carelessness”. All lowercase, with lots of grammar mistakes and complete disregard for proper punctuation: all of that suddenly wasn’t a sign of carelessness, but rather proof that the post was genuinely written by a human. People adapt very quickly; as soon as polished writing becomes cheap, it stops being perceived as valuable.

This led me to start experimenting with “proof of human authorship” myself. For a while I’ve been writing long‑form pieces like this one in one go, recording the screen as I type. It also helps me focus and finish the piece much quicker than I otherwise would, because there’s simply no room for overthinking every word, a bit like public speaking. It comes out as it comes out; I just generate the next token as if I were an LLM myself and move on to the next one. But the unintended consequence of this approach was that I could now easily prove my writing is genuine by sharing a link to the screen recording. And only after writing several pieces like this did I realise a deeper implication of all this.

ii True novelty is now scarce

It has become so easy to create “new‑looking” information that very few people are going to bother to do it themselves without AI assistance. If it’s already happening on X, where people who pride themselves on being original thinkers are losing their voices, I doubt that any other platform or format would be immune. But “new‑looking” and “novel” are not the same thing, and people are already noticing the difference and voting with their attention.

The difference might not even be structural; let’s assume for the sake of the argument that LLMs are already good enough to produce original ideas better than humans (I don’t think they are, btw, but this line of thought benefits more from the opposite view). People come to platforms like X to exchange and develop ideas together with other people, and you cannot remove “people” entirely from the equation. I’m sure I’m not the only one who completely loses interest in a post as soon as I start suspecting that it’s LLM‑generated. This isn’t true for all posts; I can remember a few distinct occasions when I found a bot’s response genuinely insightful, adding value to the conversation. But most posts, at least for me, at least on X, are much more about who said it than what is said. People are social animals, blah blah.

Which leads me to believe there’s enormous alpha right now in “proof of human authorship”, and it is only going to grow in the near term. Simply demonstrating that the post you wrote was in fact hand‑written will go a long way. I also suspect there’s a hard limit on the originality of ideas that can be produced by LLMs, at least in their current form, though many people believe otherwise. Regardless of LLM capability, people seem to value even non‑original ideas created by other humans more highly than content they know is AI‑generated.

I don’t even know how to conclude this; this is, I guess, the “conclusion”. It feels incomplete, admittedly; perhaps there is some writing technique I’m missing here, idk. What I do know is that I started writing this to explore the “scarcity of novelty” idea, and it now feels much more explored than before. One possible outcome was realising halfway through that it’s a completely wrong take; that didn’t happen.

I’m building opencomputer.dev, a sandbox for running AI agents.

Written by a human · Igor Zalutski

iii Postscriptum

I’m adding this the following day, so it’s not on the proof‑of‑authorship video. The piece felt incomplete and I couldn’t stop thinking about why. At the same time, if the underlying assumption is true, then incompleteness shouldn’t matter. So what’s missing? Today I figured it out: nowhere does the piece clearly spell out the point I was trying to make.

It goes as follows: for a brief period of time, proof of human authorship alone is both necessary and sufficient to stand out. This is clearly not good writing; it’s just a stream of consciousness, unedited, with lots of typos and a questionable narrative structure. It’s a first draft at best; lots of editing is needed to improve the reader experience. At a minimum, it should be far more concise. And yet, if the underlying assumption is correct, then the proof alone makes it good enough despite all the other shortcomings.