These posts below got me kicked out from their "web community". My crime was to claim that their focus on future AI risks completely ignores the issue of current and future complexity sclerosis. They want to solve the ultimate problem of evil by banning certain discussions of actual evil.
The problem is not chaotic overcapability but predatory dysfunction. What I'm saying is that instead of advanced technology failing malevolently because it works too well, the reality is advanced technology failing malevolently because it works ever more inefficiently. Instead of the Death Star, it's like the failure of communism. Observed reality and hype diverge ever more at the lowest level.
As a system gets more complex, more things can go wrong, but you can't complain about it because YOU are supposed to be (a subservient) part of the system. Stating the absurd truth doesn't get you downvoted but banned outright from LessWrong. In retrospect I should have said it differently though, and not held back so much:
Insufficient awareness of how everything sucks
There are so many theories about how future AIs will take over the world, when in the real world software is evil because it DOESN'T WORK.
It takes almost a minute to open a damn txt file on my Windows 10 desktop PC. Things were actually slightly faster in Windows 95. Websites keep loading slower and slower. Bloatware keeps expanding for less functionality. Click and click and click and nothing happens.
Software isn't evil because it evolves beyond the user's goals toward alien goals, but because the computer programmers' goals are opposed to the user's goals.
Microsoft hired every devil in hell to work overtime to make it keep crashing and freezing while unstoppably downloading more malware.
I cannot describe in words how hypersatanically evil the world's programmers are, malevolent, diabolical, demonic (that last one not literally). It's been going on for decades, the constant problem of my daily life.
My Pixma scanner is held hostage by the shitbastards at Canon corporation because an attached ink cartridge apparently expired. My h2owireless cellphone sim card stopped working with $7.50 on account and they won't respond in any way. There is NEVER NEVER NEVER any useful info on their websites.
I understand people on this site are deeply worried that future software, instead of being able to do almost nothing for the user, will suddenly be able to do everything for itself.
But everything goes wrong so badly that even things going wrong will go wrong.
People probably WILL use AIs eventually to invent designer neurotoxins or interacting retroviruses or nano shrapnel or something.
Seems misleading to me, because IME the premise is untrue. My Windows crashes much more rarely than it used to, my PC boots much faster, most websites I use work more smoothly than they used to, etc.
I suspect observations about regular software aren't all that relevant for Machine Learning.
I don't think that your experience with consumer software is at all similar to Engineer/scientist/modeler experience with large language and predictive models. To be clear, those suck too, and require an insane amount of effort and thought to get anything useful done. But they suck in ways that can be fixed over time, and in ways that (seem to) correlate with the underlying complexity of the world.
Complaining about corporate decisions that happen to be implemented in software doesn't quite connect, at least by that pathway. Worrying that consumer software usually seems adversarial to the consumer, and that there may be a similar problem where AI is adversarial to everyone but the "owner" of the AI, is probably justified.
But that's not "software sucks", it's "software creators are a mix of evil and stupid".
Yes, but it does show a tendency of huge complex networks (operating system userbases, the internet, human civilization) to rapidly converge to a fixed level of crappiness that absolutely won't improve, even as more resources become available.
Of course there could be a sudden transition to a new state with artificial networks larger than the above.
So I was banned from commenting on LessWrong...
(My final "diary" post that led to my total ban after already being banned from the comments sections for the previous post.)
My whole life I've been ranting about how incomprehensibly evil the world is. Maybe I'm the only one who thinks things shouldn't be difficult in the way they are.
Evil is things that don't work, but can't be avoided. A type of invincible stupidity.
For example, software is almost supernaturally evil. I've been tortured for a quarter century by computer systems that are inscrutable, deliberately dysfunctional, unpredictable; and above all the freezing and crashing.
The unusability of software is a kind of man-made implacability. It can't be persuaded or reasoned with. Omnimalevolence as an emergent property.
Software is just a microcosm of society.
The reaction to my decades of online rants and hate-filled screeds has been very consistent: the Silence or the Bodysnatchers. Meaning no reaction, or an extremely negative one (I'm not allowed to link either).
There seems to be a deep willingness among normal people to accept evil, which may be the source of their power.
When I was banned from LessWrong commenting (after two requests to be reinstated), they said such talk was "weird". Weird does NOT automatically mean wrong!
Studying the evilness of human-designed interfaces might reveal why the world has always sucked.
Seemingly simple things (like easy interfaces) are still absolutely impossible today. Only the illusion exists, and not for me.
Does that mean that seemingly impossible things (like an intelligence explosion) will turn out to be simple reality tomorrow?
Maybe. Heck PROBABLY. But maybe not.
The fact that it's so difficult to make even the simplest systems not suck, may mean that much larger systems won't work either.
In fact, it's certain that many unexpected things will go wrong before then.
The only way to get transhuman AIs to work MAY be by connecting many existing smaller systems, perhaps even including groups of humans.
I used to believe the world is so unimaginably horrible that we should do everything possible to accelerate AI progress, regardless of the risk, even if a runaway AI inadvertently turns the earth into a glowing orb dedicated to dividing by zero. I still believe that, but I also used to believe that in the past.
Raemon (Moderator Comment):
Hey Flagland, I feel a bit bad about how this played out, but after thinking more and reading this, the mod team has decided to fully restrict your commenting permissions. I don't really expect you posting about your interests here on shortform to be productive for you or for LW.
We're also experimenting more with moderating in public so it's clearer to everyone where our boundaries are. (I expect this to feel a bit more intense as a person-getting-moderated, but to probably be better overall for transparency.)
... "My whole life I've been ranting about how incomprehensibly evil the world is. Maybe I'm the only one who thinks things shouldn't be difficult in the way they are. [...] The reaction to my decades of online rants and hate-filled screeds has been very consistent: the Silence or the Bodysnatchers. Meaning no reaction, or an extremely negative one (I'm not allowed to link either)... There seems to be a deep willingness among normal people to accept evil, which may be the source of their power..."
... To be clear, I think your topics have been totally fine things to think about and discuss on LessWrong. The problem is that, well, ranting and hate-filled screeds just aren't very productive most of the time. If it seemed like you were here to think clearly and figure out solutions, that'd be a pretty different situation.
Thursday, March 16, 2023
Wednesday, March 1, 2023
Clear your calendars! A big event is coming to the Flag Land Base
The most important day of the year has traditionally been L. Ron Hubbard's birthday, celebrated each March 13th with a mass regging event.
After years of Covid lockdowns, this event will once again take place in a secure space somewhere at Flag sometime this month (TBD).
The video will also be shown at all other orgs.
Sea Org captain David Miscavige is expected to address and exhort the faithful in person, probably in the Fort Harrison auditorium.
There could of course be overflow capacity in the ballroom, and in the adjacent Flag Building's chapel.
In any case, security WILL be airtight, so get your RSVPs in now if you can.