Thursday, July 6, 2023

The solution to the Alignment Problem is to split superhuman AIs into subhuman (but brilliant) components

We should not try to create superhuman minds at this time.
Instead, we should try to create superhuman imagination modules, planning systems, simulators, data analyzers, and so on. These would be potential components of superhuman minds, but they would not be directly integrated (a sketch of what that separation might look like follows below).
There should be nothing there with a will or a sense of identity.
In the simplest terms: nothing that could, even in theory, feel pain. IMHO, accidentally creating such a thing has always been the most immediate risk of the vast "black box" training systems that AI researchers seem to fear, not civilization-ending AI plots.
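
To make "not directly integrated" concrete, here is a minimal sketch in Python, with entirely hypothetical module names and interfaces. The point is architectural: each component is a stateless, narrow tool with no memory, no goals, and no actuators, and the only loop that composes them runs through a human.

```python
# A minimal sketch of the "split components" idea. All module names and
# interfaces here are hypothetical illustrations, not a real system.
# Each tool is stateless and narrow; composition happens only through
# an explicit human decision, never inside the system itself.

from dataclasses import dataclass


@dataclass(frozen=True)
class Plan:
    steps: list[str]  # ordered actions proposed to the human


def imagine(prompt: str) -> list[str]:
    """Hypothetical imagination module: returns candidate ideas only."""
    return [f"idea derived from: {prompt}"]


def plan(goal: str) -> Plan:
    """Hypothetical planner: proposes steps but cannot execute them."""
    return Plan(steps=[f"step toward: {goal}"])


def simulate(proposal: Plan) -> dict:
    """Hypothetical simulator: predicts outcomes, has no actuators."""
    return {"predicted_outcome": f"effects of {len(proposal.steps)} step(s)"}


def human_review(candidates: list[str], prediction: dict) -> bool:
    """The only place where judgment lives: a person, not the system."""
    print("Candidates:", candidates)
    print("Prediction:", prediction)
    return input("Approve? [y/N] ").strip().lower() == "y"


if __name__ == "__main__":
    ideas = imagine("reduce datacenter energy use")
    proposal = plan(ideas[0])
    forecast = simulate(proposal)
    if human_review(ideas, forecast):
        print("A human carries out the plan outside the system.")
```

Nothing in this sketch persists between calls, so there is nowhere for a will or identity to accumulate; removing the `human_review` step and wiring the modules into a closed loop is exactly the integration the post argues against.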

