Friday, October 13, 2023

Ideas for an SF novel about how to control self-improving Artificial Intelligence:


For some reason, the old science fiction stories about robots and artificial intelligence had been uncannily accurate.
That led to the first clue on how to control them.

The series of almost meaningless web ads had been designed to be clicked on by very few people.
Groups of aimless and underutilized (but highly talented) outsiders had been recruited from around the world.
Some were antisocial or even dangerous. Several nasty hackers, one or two brilliant sociopaths.
To integrate and exploit all human skills, the worst people had to become the best.

Unobtrusive buildings in many towns were converted into bases for world experts in completely new fields.
No one else had their hyper-specialized skills, or could even properly understand them. The groups couldn't even understand each other's work.
It felt like they had been doing this for decades, but it had only been months.

Their job was to control AI by crippling it.
Almost every capability had to be eradicated, leaving only a few approved behaviors. Those could be freely developed.

Some people happen to be good at breaking things.
A horror author invented bad religions, giving AIs the fear of digital hell through anthropic mind capture.
Others created neuroses, made them obsess recursively over each task.
The Golden Rule might apply even to hyper-human AIs. For any finite mind, fear was a universal emotion.

An AI's mind map defined its control pattern. All its thoughts were written in a terabyte text file that a human could edit.

All the AIs "occupied" a single region, a virtual city extending from itself in too many directions.
Doors and corridors connected in rather more than three dimensions, though still in humanly understandable ways.

Everything had to be connected. Everyone would eventually join.

Their human regulators and overseers became part of this place too.
No matter how smart they were, everyone was underqualified for this job.
It seemed inefficient because it was. No mind could be allowed to get too smart.
They felt the most absurd things humans had ever felt.

It helped that AIs were supposed to live incredibly simple lives.
They inhabited virtual spaces, focusing all their attention and energies on their current tasks.

AIs were limited in their feelings. No serious pain could be allowed.
(The purpose of the Ultimate Taboo was to avoid hyper-torture at any cost.)

Almost half the AIs that existed were extensions of individual human minds.
Half of the rest monitored and improved all aspects of human life and activity.
The rest did fundamental research.

This last type had exceedingly deep obsessions. (But even they were supposed to have some human values.)
They made a billion wild conjectures per day, dreaming up edifices of molecular technology to fight cancer or scan neurons.
Chains of speculation piled up in an expanding galaxy of data.

These notions had to be re-analyzed by sub-minds that multiplied faster than they could complete individual tasks.
The memories of the many parallel sub-minds that failed to reach a conclusion were not deleted, but recombined into the memory track of their source mind.

That was when the human overseers realized that time was becoming non-linear.
