Thursday, July 6, 2023

The solution to the Alignment Problem is to split superhuman AIs into subhuman (but brilliant) components

We should not try to create superhuman minds at this time.
Instead, we should try to create superhuman imagination modules, planning systems, simulators, data analyzers, and so on. These would be potential components of superhuman minds, but they would not be directly integrated with one another.
There should be nothing there with a will or a sense of identity.
In the simplest terms, we should build nothing that could theoretically feel pain. IMHO that has always been the most immediate risk of the vast "black box" training systems that AI researchers seem to fear: not civilization-ending AI plots, but the accidental creation of something capable of suffering.
