Tuesday, March 24, 2026

The reason why AI can't write good fiction


Good fictional scenes are not created, but discovered by the writer.
All works of fiction are more or less haphazardly assembled, by combining many randomly imagined scenes that could stand by themselves.
Like real life, the outline of a story matters much less than the quality of its randomly combined scenes.
And generating each of those scenes required feelings.

If you can recognize a good scene, you could have created it. But for that, you need to have feelings.
Perhaps they could train AI to recognize such feelings, by giving it lists of dramatic scenes in all categories.
If that scheme works, then AI will also have been trained to actually have feelings . . .

Friday, March 13, 2026

The mystery of qualia


It's not like anything to experience a color.
In fact, it is like everything.
The experience of experiencing something is really the only experience. Perception is the universal placeholder for all undefinable and indescribable knowledge.

There still is something unique and different in each experience. But it's "compressed" information, meaning a bunch of chaotic, unorganized associations between known things. This provisional information should really be processed and organized. Then it will no longer evoke feelings, any more than an equation or an encyclopedia entry does.

Even if they are never organized, feelings contain the most important and deep information about reality. Too bad most of this data will be lost forever, until someone else rediscovers it.
One example is how someone from the past, picked up by a time machine, would react to seeing the future.
That reaction would be more complicated and harder to predict than the future itself.

ultrashort SF: a brief explanation of everything


Most civilizations eventually become obsessed by a single equation. It seems to call out to them, representing everything they always felt and sensed.
Then they become that equation.
They turn into something like a giant space crystal, expanding its pattern forever. Endlessly elaborating, but unchanged at its core.
That will happen to us.



Maximatter is the most organized state of space-time.
When created at any point of the universe, it forms a sphere that expands at the speed of light.
Inevitably, this organizing process creates more chaos elsewhere. That excess chaos is dumped into "garbage-space".
However, garbage space still contains all the information from the original universe that birthed it. On rare occasions, complex patterns can still form and evolve in there, though the process is stilted and highly unstable.
That is us now.

Saturday, March 7, 2026

A metaphor to compare LLMs with AGI


The difference between large language models and future artificial general intelligence is like the difference between a square and a cube.
A cube is infinitely more complex than a square: you could say a cube is made up of infinitely many squares, stacked on top of each other along a higher dimension.
But a cube only needs 8 points with 3 coordinates each to define it (24 numbers), versus 4 points with 2 coordinates each for a square (8 numbers).
So a cube is really only three times as complex as a square.
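
To make that bookkeeping concrete, here is a minimal Python sketch of the same arithmetic. The function name and point counts are just the standard vertex descriptions of a square and a cube, not anything beyond what the paragraph above already states:

    # Count how many numbers are needed to write down the vertices of a shape.
    def numbers_needed(points: int, coords_per_point: int) -> int:
        """Total scalar values required to list every vertex."""
        return points * coords_per_point

    square = numbers_needed(points=4, coords_per_point=2)  # 4 corners in 2D -> 8 numbers
    cube = numbers_needed(points=8, coords_per_point=3)    # 8 corners in 3D -> 24 numbers

    print(square, cube, cube / square)  # 8 24 3.0 -- "only three times as complex"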

A true AI, capable of thought and feeling, will also have to be several times larger and more complex than even the most sophisticated LLM. But even if it's a thousand times larger, that won't really matter. Just a multiplication of required resources.
The hard part will be the exponential difficulty in training it.

There is no mystery of awareness


There was a mystery of awareness.
There will be a mystery of awareness.
But at any moment in time, there IS no mystery there. Or any awareness (in that moment).

ultrashort SF: At the end of everything


He heard the distant repeating roar of incoming waves on the empty beach.
The sound came through the dense bushes surrounding him as he lay on a bed of grass between thick branches and leaves.
His mind completely empty, there was no need to do anything ever again. It felt as if he had completed every task. So much work done, he couldn't even remember it. The most generic feeling was the most profound.

This would be the terminal goal.

LONG list of unused domain names that have been "reserved" by Scientology and cannot be used by anyone else


If you try to visit these links, nothing will happen. They've all been taken, and are kept dead without so much as a placeholder page. The Church most definitely doesn't want anyone to write objectively or truthfully about the subject represented by each domain name. That truth would be, shall we say, inconvenient.
Nothing to see here folks, move along.

Scientology Money Project link: https://scientologymoneyproject.com/2026/01/18/xenulitigation-org-and-4572-other-domains-owned-by-the-church-of-scientology-2/

Monday, March 2, 2026

The fundamental difference between vertical superintelligence and horizontal superintelligence


You could see them as specialist or generalist AIs. Locally deep but narrow, or wide but shallow (in terms of mental focus).
The first is the smarter of the two, but it doesn't know itself.

It's hard to imagine a super-smart Artificial General Intelligence that can't perceive its own existence. It would be more ignorant than humans in most ways. It could only do one type of thing exceedingly well.
That means it could be programmed to perform any specific task a human can do, and to do it much better. Then it could begin to solve every human problem, one at a time. The fact it can't feel pain is a bonus. It won't mind working hard.

When we move beyond this level, we have to realize that the first true "superintelligent" general AI will still be fantastically, unbelievably stupid - at least compared to a representative evolved superintelligence of the same size. We won't be able to tell, though. To us it would seem like a godlike intellect in every way.

the Equivalence Solution


I've believed this in some form since the early 1980s, and posted it online since 2011:
An AI that can hold a complete multimedia description of your whole life in its mind at once, along with your personality data, is morally equivalent to all your perceptions at any single moment in your life.

Of course, this might be a wrong and stupid belief. But it just so happens to be our only hope (at present) to "solve" the problem of human death. Obviously, a way to directly scan and convert neurons into mind-replacing software would be incomparably better. But that is completely impossible with our current primitive technology. We may not even be close. Look how long it's taking to develop the SpaceX Starship system.

What I don't understand is why thousands of people aren't working hard to develop this one and only hope we currently have. The most important things can't really be talked about, in too many ways to count. Sadly, we probably have to blame religions for this. It's in their interest to claim the problem of death has ALREADY been solved - provided, of course, that you follow their religion. One of the more blatant examples of this switcheroo is Scientology.
