Thursday, July 6, 2023

The solution to the Alignment Problem is to split superhuman AIs into subhuman (but brilliant) components

We should not try to create superhuman minds at this time.
Instead, we should try to create superhuman imagination modules, planning systems, simulators, data analyzers, and the like. These would be potential components of superhuman minds, but they would not be directly integrated with one another.
There should be nothing there with a will or a sense of identity.
In the simplest terms, nothing that could theoretically feel pain. IMHO, that has always been the most immediate risk of the vast "black box" training systems AI researchers seem to fear: not civilization-ending AI plots, but the accidental creation of something that can suffer.
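To make the proposal concrete, here is a minimal Python sketch of what "brilliant but not integrated" components might look like. Everything in it is a hypothetical illustration: the names (imagination_module, planning_system, simulator) and their interfaces are assumptions, not an existing API. The structural point is that each component is a stateless function that returns data rather than taking actions, and only a human operator composes them, so there is no integrated agent with goals, memory, or a self-model.

```python
# Hypothetical sketch: superhuman capabilities as separate, stateless
# modules. No module holds goals, persistent memory, or a self-model,
# and none can act on the world. Only a human combines their outputs.

from dataclasses import dataclass


@dataclass(frozen=True)
class ComponentOutput:
    """Immutable result: components return data, never actions."""
    content: str


def imagination_module(prompt: str) -> ComponentOutput:
    """Generates candidate ideas. Stateless: no memory across calls."""
    return ComponentOutput(content=f"candidate designs for: {prompt}")


def planning_system(goal_description: str) -> ComponentOutput:
    """Drafts a plan as text for human review; it cannot execute anything."""
    return ComponentOutput(content=f"draft plan for: {goal_description}")


def simulator(scenario: str) -> ComponentOutput:
    """Predicts outcomes of a scenario; read-only with respect to the world."""
    return ComponentOutput(content=f"predicted outcomes of: {scenario}")


def human_operator() -> None:
    """The only place outputs are combined, and a person sits in the loop."""
    ideas = imagination_module("safer reactor layouts")
    plan = planning_system(ideas.content)
    forecast = simulator(plan.content)
    # A human reads the forecast and decides what, if anything, to do next.
    print(forecast.content)


if __name__ == "__main__":
    human_operator()
```

The design choice worth noticing is that integration happens only in human_operator: delete that function and you are left with disconnected tools, which is exactly the safety property being argued for.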
