Thursday, July 6, 2023

The solution to the Alignment Problem is to split superhuman AIs into subhuman (but brilliant) components

We should not try to create superhuman minds at this time.
Instead, we should try to create superhuman imagination modules, planning systems, simulators, data analyzers, etc. These would be potential components of superhuman minds, but they would not be directly integrated with one another.
There should be nothing there with a will or a sense of identity.
In the simplest terms, nothing that could theoretically feel pain. IMHO, accidentally creating such a thing has always been the most immediate risk of the vast "black box" training systems that AI researchers seem to fear - not civilization-ending AI plots.
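One way to picture the proposal, purely as an illustration (every module and function name here is invented for the sketch, not taken from any real system): each component is a stateless tool that does one narrow job, they share no memory or goal loop, and the only place a decision gets made is an explicit human checkpoint.

```python
# Conceptual sketch only: narrow, stateless "component" modules composed
# through an explicit human checkpoint. No component holds persistent state,
# goals, or a self-model - nothing that could add up to an agent.
from __future__ import annotations
from dataclasses import dataclass


@dataclass(frozen=True)
class Plan:
    steps: tuple[str, ...]


def imagination_module(goal: str) -> list[Plan]:
    """Stateless: proposes candidate plans for a goal, nothing more."""
    return [Plan(steps=(f"draft approach {i} for: {goal}",)) for i in range(3)]


def simulator_module(plan: Plan) -> float:
    """Stateless: scores a plan. It never executes anything."""
    return 1.0 / (1 + len(plan.steps))


def human_review(ranked: list[tuple[float, Plan]]) -> Plan | None:
    """The only place a decision happens; stands in for an actual human."""
    return ranked[0][1] if ranked else None


def pipeline(goal: str) -> Plan | None:
    # Components are wired together only here, per call, by the caller -
    # there is no resident "mind" that owns the loop.
    candidates = imagination_module(goal)
    ranked = sorted(
        ((simulator_module(p), p) for p in candidates),
        key=lambda scored: scored[0],
        reverse=True,
    )
    return human_review(ranked)
```

The design point is the wiring, not the toy internals: each module could be superhumanly good at its one task, yet the integration step stays in human hands and exists only for the duration of a call.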

