Thursday, July 6, 2023

The solution to the Alignment Problem is to split superhuman AIs into subhuman (but brilliant) components

We should not try to create superhuman minds at this time.
Instead, we should try to create superhuman imagination modules, planning systems, simulators, data analyzers, and so on. These would be potential components of superhuman minds, but they would not be directly integrated with one another.
There should be nothing there with a will or a sense of identity.
In the simplest terms, nothing that could theoretically feel pain. IMHO, that has always been the most immediate risk of the vast "black box" training systems that AI researchers seem to fear: not civilization-ending AI plots, but the accidental creation of something that can suffer.
