Captain (Flag Service Organization) (Mark) Harvey Jacques passed away on October 9, 2022, at the age of 69:
Of course, any power or control Harvey Jacques might have had over the Flag Land Base was nothing compared to that of the real man in charge there (and everywhere else in the Scientology world): Sea Org captain David Miscavige. He is the Sea Org's only REAL captain; Jacques was merely a brevet captain.
We're only learning about this now due to the extreme level of secrecy at Flag. This very much includes Jacques's cause of death, and every other detail.
Notice that no one on staff even signed his funeral book. They had no time or OSA clearance to do so:
This all resembles gypsy culture. When someone dies, they are never spoken of again. All efforts must be focused on staying alive in the future.
At the Flag Land Base, every effort must be focused on making more money in the future.
Friday, February 24, 2023
Tuesday, February 21, 2023
Since 2011, I've been trying to get funding for a new tech startup. This has turned out to be harder than you might think. It's a serious effort on my part - if you have any suggestions, I would love to hear them!
It would be a unique high-tech service that some might say even has 'spiritual' overtones.
And yet this kind of "spiritual" technology would definitely NOT be offered or even acknowledged at the Flag Land Base.
The question is very simple: could you train a powerful computer program to act like an accurate copy of yourself?
Some LW members may have already tried this, but it would require super powerful software. Even if it's only text-based, GPT-3 wouldn't be enough. Maybe GPT-13.
If such a program could generate a perfect copy of someone's written responses to any query, a Super Turing Test, the original person would arguably no longer need to exist. That would be the whole point: such a program would be like a "mind backup", a solution to the problem of death.
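To make the "Super Turing Test" idea concrete, here is a toy sketch of how one might score an imitator against the original person: collect the person's actual written replies to a set of queries, collect the model's replies to the same queries, and measure how close they are. The function name and the use of simple string similarity are my own illustrative assumptions, not anything from an existing system; a real test would need far more sophisticated comparison.

```python
import difflib

def super_turing_score(real_replies, model_replies):
    """Toy 'Super Turing Test' scorer (hypothetical).

    Compares a person's actual replies with an imitator model's
    replies to the same queries, returning the mean string
    similarity in [0.0, 1.0]. A 'perfect copy' in the sense
    described above would score 1.0 on every query.
    """
    scores = [
        difflib.SequenceMatcher(None, real, fake).ratio()
        for real, fake in zip(real_replies, model_replies)
    ]
    return sum(scores) / len(scores)

# Identical replies score a perfect 1.0; divergent ones score lower.
print(super_turing_score(["I'd rather walk."], ["I'd rather walk."]))  # 1.0
```

Of course, matching the surface text is the easy part; the hard part is a model that produces those replies for queries it has never seen.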
This would take something like "Deeper Learning" software, requiring huge amounts of data. For starters, it could record its subject's digital activities. Then it might try to predict sleeping and working habits. To get more data it should take the form of an operating system, or virtual assistant, or digital mind extension.
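The data-collection step above could start as something very simple: a timestamped log of the subject's digital activity, from which crude habits (like waking and working hours) can be summarized. This is a toy event format I invented for illustration, not any real OS instrumentation API.

```python
import time

def log_event(log, kind, detail):
    """Append one timestamped record of the subject's activity.

    'kind' might be 'keystroke', 'app_focus', 'sleep', etc.
    (a hypothetical taxonomy, not a real logging standard).
    """
    log.append({"t": time.time(), "kind": kind, "detail": detail})

def active_hours(log):
    """Crude habit summary: the clock hours with any recorded
    activity - a first step toward predicting sleeping and
    working patterns."""
    return sorted({time.localtime(e["t"]).tm_hour for e in log})

log = []
log_event(log, "app_focus", "text_editor")
log_event(log, "keystroke", "h")
print(active_hours(log))
```

An operating system or virtual assistant would just be a vastly richer version of this same loop: observe, timestamp, summarize, predict.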
In my case, it could analyze all my old drawings, sorted by date and theme, with description and genre tags added. It could look up style data from the comic books my art is inspired by. Then it would start making similar art of its own. It could do the same thing even more easily with my fan fictions and other written texts. Obviously there would be no demand for any of this, but it would be possible.
Even more advanced, a human subject might start wearing a "sensor hat" or other clothing to allow the software to perceive everything they do. Then it would start to predict what they WILL do and experience.
An important function would be knowing what data to ignore. It should not try to predict everything in the subject's environment, like whatever appears on screens or the actions of other people. It could predict common themes like colors and shapes in programs and web layouts etc. These same patterns already exist in your brain.
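The "knowing what to ignore" step above amounts to a relevance filter: keep the events the subject generated themselves, and discard what merely appeared on their screens or was done by other people. The event categories here are a hypothetical taxonomy of my own, just to show the shape of such a filter.

```python
# Events the subject generated themselves (hypothetical taxonomy);
# everything else is treated as unpredictable environment and dropped.
SELF_KINDS = {"keystroke", "mouse", "speech", "gaze"}

def keep_event(event):
    """Return True only for events the imitator should try to model."""
    return event["kind"] in SELF_KINDS

events = [
    {"kind": "keystroke", "detail": "h"},
    {"kind": "screen_content", "detail": "news headline"},
    {"kind": "other_person", "detail": "coworker speaks"},
]
kept = [e for e in events if keep_event(e)]
print(len(kept))  # 1 - only the subject's own keystroke survives
```

A smarter filter could still model broad environmental patterns (common colors, layouts, themes) without trying to predict their specifics.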
Such imitator software might only render a low-bandwidth or text-based description of the simulated person's behavior. That wouldn't matter at all, as long as it's all-encompassing.
One application of such research would be software designed to understand what people are doing at any time. One problem with such research would be temporary glitches: if your coffee mug suddenly dissolved into a Slinky, you'd know you were actually the software simulation. That sort of thing is obviously many decades away.
The real question is: what is the most predictive mind imitator software that could be created today?
And this would take a research project that could comfortably fill up the Flag Land Base . . .
The program could identify where it has the lowest certainty of what the person would say or do, and directly ask the person to fill in those gaps. I wonder what the psychological impact of working with a program in this way would be. It seems like the program would likely discover inconsistencies and uncertainties in the actual person and force them to confront those, which could potentially be beneficial or detrimental depending on the circumstances.
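The gap-filling loop described in that comment could be sketched as a simple uncertainty query: given the imitator's estimated confidence that its answer matches the person's for each prompt, surface the least-certain prompts and ask the person directly. The confidence map and threshold here are purely hypothetical stand-ins for whatever a real model would report.

```python
def low_certainty_prompts(model_confidence, threshold=0.5):
    """Return the prompts where a (hypothetical) imitator model is
    least sure what the person would say, sorted least-certain
    first, so it can ask the person to fill in those gaps."""
    return sorted(
        (p for p, c in model_confidence.items() if c < threshold),
        key=lambda p: model_confidence[p],
    )

conf = {
    "favorite color?": 0.95,
    "view on free will?": 0.2,
    "coffee or tea?": 0.6,
}
print(low_certainty_prompts(conf))  # ['view on free will?']
```

Each answered question would shrink the model's uncertainty, and, as the commenter notes, might also force the person to confront inconsistencies they didn't know they had.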
If I noticed my coffee mug turning into a slinky, my first assumption would not be that I was in a simulation, but that I was lucid dreaming. I would react by attempting to reproduce whatever led to the glitch, and exploit it to recreationally violate the usual laws of physics, because that's a novel and fun thing to do when one finds it temporarily possible. This category of reaction, which I suspect I'm not alone in having, would certainly make life more interesting for whoever was running the simulation.