Monday, May 29, 2023
Why LessWrong is MoreWrong
The elitist crowd at LessWrong is obsessed with super advanced technology they can't describe that will likely exterminate humanity.
So that sucks. At the same time, they completely refuse to allow criticism of technology that blatantly sucks right now.
Like, maybe find ways to make current technology suck less, to learn how to prevent things from sucking in general?
For me, that was the fastest way to get banned from LessWrong. It was simply taboo, as if by raising a completely different problem I had made an already unsolvable problem exponentially harder.
But that is the only way to make progress. You can't solve a big problem without first understanding the smaller problems that make it up.
Now LessWrong has become part of the problem instead of the solution.
For me, the world has always been evil beyond comprehension, but normal people have evolved not to see this. It's their greatest strength, the thing that often makes them invincible, and what really makes the world go round.
But with all the theoretical AI dangers that LessWrong keeps blathering about, it's time to finally face this evil. IF the AI danger is as real as they claim, and humanity is really at risk, then they should be willing to leave their respectable comfort zone and allow controversial speculation and brainstorming.
This almost certainly won't happen on LessWrong, which is about respectable heroism. No gadflies or comic relief characters are allowed.