Sunday, May 10, 2026
The solution to the AI Alignment Problem is GOFAI
Why does everything suck? There should be a science about all the ways that things go wrong.
If the world really is crazy, it's because the majority wants it to be. It's actually a source of power for them.
Those who question the annoying details of life are often seen as losers. They probably are losers. Otherwise, they would be winning instead of complaining.
People underestimate how bad things are, because the world is so good at being bad. They underestimate this to a fantastic extent.
Evolution is perverse. Systems don't really want to evolve, but to maintain themselves. So systems evolve not to evolve.
It helps that evolution usually fails anyway. Biological life is barely possible in this universe, just endlessly complicated molecular sequences in all directions.
That does protect it from threats to an extent, leading to a strange stability. A lot of limited systems adding up to stable complexity. I believe this explains every problem (that's also how we should think of cancer).
In human life, the simplest things often appear the most difficult. Let's call them "gay fish," after the South Park episode where a character can't understand the joke being made about a "gay fish".
Sometimes no amount of effort can get you the answer to a simple question:
What is the number e? For pi, at least, you can get an explanation. When I tried watching Super Bowl 1997, I couldn't even find out whether this was the kind of sport where the winner is decided before, during, or after the game. How can you get a Windows PC to add 1+1 without using a mouse cursor to press the buttons on a virtual calculator, or typing the numbers into different Excel cells and then typing the coordinates of each cell to get the total? Basically impossible. Every explanation, from jet engines to shaft mining, is missing its vital parts.
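For the record, there is at least one keyboard-only route, assuming Python happens to be installed on the machine: open a command prompt from the keyboard (Win+R, type "cmd", Enter) and run a one-line expression. A minimal sketch:

```python
# A keyboard-only way to make a PC add 1+1, assuming Python is
# installed: from a command prompt, run
#   python -c "print(1 + 1)"
# which executes exactly this statement. No mouse, no virtual
# calculator buttons, no Excel cell coordinates.
print(1 + 1)  # -> 2
```

That this route exists but is nowhere explained to ordinary users is, of course, the point.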
Core truths are being withheld. Strangely enough, the simplest answer, as with e, is often the oldest one. In this case from 1683, if you can read 17th-century Swiss Latin. After that, it only gets more technical. There are answers aplenty, all differently difficult.
Human progress tends to make things more complicated. For example, software has always been bad. All information interfaces are, from bureaucracy to printers. Control is hidden behind layers of manipulation, and by removing choices.
AI is definitely not smart enough to simplify such things, though it can act as an oracle to help searchers navigate their labyrinths. It can't split problems into smaller elements to get to their essence.
But this decade, something interesting has started to happen.
People are now claiming that bad software will soon become absurdly good.
Actually, it will become absurdly good at being bad in completely new ways.
The first part seems hard to believe. It's claimed that, by automatic self-training on all human data, AI programs will become smarter than any human. The resulting supermind will inevitably have secret goals and purposes that we can't fathom.
Then it will destroy us. This theory may even be true. Like I said, everything sucks.
Personally, I think the biggest problem with so-called Artificial Super Intelligence is hyper torture.
ASI is dangerous not because of what it will do when it becomes smart enough to have alien goals, but because of secret tortures that may be hidden inside its opaque learning process.
I want to prevent that sort of thing.
Instead of ASI, could there be another way to use AI to solve all human problems?
The simplest possible way to understand reality would be to describe it in full from the bottom up. Not through mysterious chaos learning, but a series of orderly elemental logical steps and rules.
This would be old-school "GOFAI," or Good Old-Fashioned AI.
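The flavor of the idea, explicit facts combined by explicit rules, can be shown in a toy sketch. Everything here is a made-up illustration, not a real knowledge base; the rules are deliberately silly, in the spirit of the "gay fish" example above:

```python
# A toy sketch of the GOFAI approach: explicit facts plus explicit
# rules, combined by forward chaining until nothing new can be
# derived. All facts and rules below are hypothetical examples.

facts = {"is_fish", "is_attracted_to_fish"}

# Each rule is (premises, conclusion).
rules = [
    ({"is_fish"}, "lives_in_water"),
    ({"is_fish", "is_attracted_to_fish"}, "gay_fish"),
]

def forward_chain(facts, rules):
    """Apply every rule repeatedly until a fixed point is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
```

Every conclusion is traceable back to named premises and a named rule. That transparency, not intelligence, is the selling point.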
Of course, every software engineer will tell you that's completely impractical. Impossible, actually. And they would be right, though it took a few thousand years of mathematical effort to verify this insight. Since it has been proven that mathematics can't be completely formalized (Gödel's incompleteness theorems), physical reality can't be either. And that's final.
That's the whole point.
Such an effort would absorb resources that would otherwise be used to create alien superminds. No need for superhuman AIs to evolve alien emotions to make inscrutable decisions, if human problems could be mostly solved at the lowest level, even if it takes longer.
Chaotic learning would only be allowed for self-terminating processes: transcription, imaging, data processing, searching the space of elementary logic rules. It should never be allowed to evolve in ways that might unknowingly generate internal feelings.
The purpose would be to explain all human truths in the simplest understandable way, to find the most important truths, the things that really matter.
All human problems, from the bottom up, start with interfaces. Getting a printer to work, simplifying ever more bloated and unstable software, overcoming unbearable chores, identifying true goals and the false assumptions behind life problems.
Eventually, to describe all human society as a single system as precisely as possible. Nothing might be more valuable than understanding our basic motivations.
The essence of every problem is that there's not enough time to solve or prevent it. More and more things should be automated if possible.
In this hypothetical future, humans would not become biologically immortal through nanotechnology, as if by magic. But progress would still continue. Even GOFAI software would become more complex than all human minds combined.
As soon as possible, that processing power should be used to rigorously solve the problem of death. In fact, that should be the central goal of the whole project.
Personally, I think the fact of death alone erases all value in life. Better never to have been born if you have to die afterwards, even if it's an otherwise pain-free life.
Before we die, we should start converting our minds into software, by digitally describing and controlling our lives as much as possible.
After we die, our brains should be preserved and scanned to extract any possible information about our memories and perceptions.
Instead of simulating an afterlife, "Mind Backup" software should try to simulate eternity. Not just a virtual continuation of your life, but all meaningful lifetimes.
The intensely local, highly focused awareness we have now is created from the bottom up.
Our digital continuation should have top-down, emergent awareness. It would be created in layers of increasing resolution, long lists of descriptions slowly being upgraded and enhanced into ancient, complex memories.