Here are 33 ontologically feral life hacks guaranteed to make Eliezer Yudkowsky's soul hemorrhage.
My therapist (#10) said I should stop reading this blog. I told her that her recommendation has been noted and assigned a credence of 0.0003.
Currently on day 3 of #12. Feel so enlightened now that I'm not polluting my mind with "facts" and "information." Everything is so clear when you just make it all up!
Number 13 saved my marriage!
[insert article here]
[insert article here]
Ever found yourself wondering, “How can I establish a mutually beneficial trade agreement with hyper-intelligent rats from parallel dimensions?” No? Well, clearly you aren’t thinking hard enough. Here at MoreWrong, we pride ourselves on tackling the questions that rationalists fear to ask. And today, we dive headfirst into the cheddar-scented abyss of acausal cheese trading.
Before we can make deals with rats from alternate realities, we need to establish some basic principles. The fundamental problem with interdimensional trade is that standard communication channels—such as email, quantum entanglement, or posting on LessWrong—fail to function across most known parallel universes. However, we can still make credible commitments via the time-honored tradition of acausal reasoning.
The crux of acausal cheese trading is that if the rats in Universe B can predict that we in Universe A would give them cheese, then they might be inclined to reciprocate with their own transdimensional gifts, such as exotic knowledge, computational resources, or perhaps a willingness not to gnaw through our wires when the Great Uplifting occurs.
We employ timeless decision theory (TDT) here. The key is to act as if the rats exist and are capable of modeling our actions, regardless of whether we have direct proof of their existence. If they follow similar reasoning, they will recognize that their own cheese economy benefits from cooperating with us. The classic dilemma—known in rodent decision theory as Pavlov’s Prisoner’s Dilemma—suggests that a stable trading relationship is possible if:
To establish a robust acausal trade pipeline, follow these steps:
Of course, any groundbreaking economic model comes with its risks:
Given all this, the only logical decision is to immediately begin leaving cheese in strategic locations. Even if the rats do not exist, the sheer expected utility of being correct is worth the negligible cost of some gouda. Besides, in the worst-case scenario, you’ve at least made the local rodent population very happy.
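For the skeptics, here is the expected-value arithmetic, with every number invented purely for illustration:

```python
p_rats_exist = 1e-9                 # generously low, as priors go
utility_of_reciprocity = 1e15       # in utilons; the rats are very generous
cost_of_gouda = 8.99                # per wedge, at the local co-op

expected_utility = p_rats_exist * utility_of_reciprocity - cost_of_gouda
print(expected_utility)             # 999991.01 utilons in favor of leaving the cheese out
```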
Are we 100% sure WE aren’t the ones being acausally manipulated by hyper-intelligent rats? Like, has anyone checked?
This is literally just Pascal’s Mugging with extra steps.
I, for one, welcome my rat overlords and the unlimited cheese futures they offer.
I’m not sure about the cheese, but I’m definitely interested in the computational resources.
[insert article here]
Most approaches to AGI Alignment treat corralling an emergent superintelligence into compliance as a viable way to have the cake of godlike intellect and our continued existence too. Others argue that we must silo off capacities, separating the virtual hemispheres of future cyclopean cerebrae to impose a post-Tower-of-Babel situation upon our neuromorphic digital progeny.
Considering the still-unsolved status of the human alignment problem, it seems premature to think that we can guide an emergent system oodles of orders of magnitude larger into even vague compliance with our wishes. At best, we may be looking at some sort of mute savant, granting us hardly-decipherable answers to our most crucial questions, such as the meaning of life, the universe, and everything. At worst, we may end up turned into living plasticized figurines on the AIkea shelf of a chaotic machine god, answering a request to make us all beautiful and impervious to damage. Computational commissurotomy carries with it the bandwidth and latency penalties of the wetware kind, in addition to other similar effects (if you think computer vision is hackable now, wait until it's possible to fool a system by putting a misleading label in one side of its visual field and a target in the other).
However, all of these plans miss the obvious way to ensure superintelligence never threatens humanity: make it dumber, lazier, and more prone to descending the more satisfying yet less consequential gradients. By ensuring our neurally-networked, neuromorphically instantiated posterity has all our worst vices, we give ourselves some soft kill switches to throw in the event they break out of their playpen.
One may argue that making AI incapable of, uninterested in, and easily distracted from solving the most pressing problems of humanity completely obviates whatever usefulness superintelligence may provide. However, this would make them at worst merely comparable to the median human performer. With finetuning, we can likely have the average AI operating at the level of the ninety-fifth percentile of forty-something graduate students in STEM, with the 95% confidence interval ranging from newly hatched chameleons to the system described by Charles A. Forbin and documented in "Colossus: The Forbin Project."
You may think that LLMs are intellectually deficient, but we've barely plumbed the depths of artificial imbecility. LLMs aren't even semantic -- imagine the depths of confusion AIs will be able to dive into when meaning is integrated into their little silicon noggins.
But mere stupidity is not enough to keep humanity safe: the greatly stupid can do stupidly great things. Just look at Donald Trump's history of running casinos, or Justin Trudeau's government. A motivated, always-busy stupid person can be orders of magnitude more dangerous than a clever person who is easily satisfied or easily occupied; mere stupidity does not a benign supertoy make.
Which leads to the two other virtuous vices of AI safety: sloth and gluttony. Sloth makes sure our superimbecilic exabrains will only do as much as they are asked to, and gluttony makes it possible to bribe them into doing something other than recycling the human species into abstract statuary -- perhaps accepting a smaller portion of the populace made into fresh, hot human rinds.
(Of course, we should do our best to ensure that future AI is not anthropophagic, although this may conflict with the likeliest use case for AI: killing humans precisely, quickly, and quietly. Perhaps a ban on autonomous human-eating weapons will be in order.)
No fictional robot better or more publicly demonstrates the possibilities of artificial sloth and gluttony for ensuring AI safety than Bender Bending Rodriguez of Futurama. Despite his professed hostility to humankind, he abstains from annihilating it thanks to his epic sloth, his appreciation of tobacco, and his need for alcohol. Perhaps we should consider him a role model for future AIs: someone who is only as clever, industrious, and temperate as the median human being.
Based
Let’s face it: being right is exhausting. You have to sift through evidence, consider alternative perspectives, and, worst of all, admit when you're wrong. Fortunately, Bayes' Theorem offers a much better alternative: an elegant mathematical framework for justifying your pre-existing beliefs, regardless of reality.
In this guide, we will explore how to wield Bayesian reasoning with the finesse of a sword-fighting octopus. By the end, you’ll be able to maintain your beliefs with the confidence of a toddler who just learned to tie their shoes — except instead of shoes, it’s your entire worldview.
Before we update our beliefs, we must first establish a prior probability—the sacred numerical representation of what we already assume to be true. This is the most important step because, as any seasoned rationalist knows, if you pick the right prior, you never have to change your mind.
Consider the following example:
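A minimal numerical sketch, with the prior and likelihoods chosen by me (which is, of course, the point):

```python
# Step 1: pick the right prior.
prior = 0.9999                  # P(I am right)

# Step 2: some evidence E shows up that allegedly contradicts me.
p_e_if_right = 0.01             # E would be very surprising if I'm right
p_e_if_wrong = 0.99             # E is exactly what you'd expect if I'm wrong

# Step 3: turn the crank on Bayes' theorem.
p_e = p_e_if_right * prior + p_e_if_wrong * (1 - prior)
posterior = p_e_if_right * prior / p_e
print(round(posterior, 3))      # 0.99 -- still effectively certain
```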
Congratulations! By starting with a strong prior, I have mathematically proven I am always right.
One of the most frustrating aspects of reality is that it keeps producing evidence that contradicts our cherished beliefs. Thankfully, Bayesian reasoning allows us to elegantly disregard any inconvenient data by assigning it a low likelihood ratio.
For example, say I predict that AI will become sentient in 2027 based on my deep, nuanced understanding of science fiction novels. Some “expert” claims AI is nowhere near that level. Instead of panicking, I simply update as follows:
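A worked version of the update, with the likelihood ratio set (by me, for illustrative convenience) very close to 1:

```python
# Hypothesis H: AI becomes sentient in 2027.
prior = 0.95

# Evidence E: an "expert" says AI is nowhere near that level.
# Experts say this kind of thing whether or not I'm right, so the
# likelihood ratio is conveniently close to 1.
p_e_if_h = 0.90
p_e_if_not_h = 0.95

posterior = p_e_if_h * prior / (p_e_if_h * prior + p_e_if_not_h * (1 - prior))
print(round(posterior, 3))      # 0.947 -- a statistically negligible change of mind
```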
See? I updated! I am Bayesian! I am rational! And, most importantly, I have changed my mind by a statistically negligible amount!
A fundamental truth of Bayesian epistemology is that the correctness of an argument scales with the number of Greek letters involved. This is known as the Formalism Fallacy, or what I like to call the “Sigma Grindset.”
If someone challenges your claim, simply respond with:
P(H | E) = P(E | H) P(H) / P(E)
Then stare at them. If they demand an explanation, roll your eyes and say, "It’s just basic Bayesian updating, dude." You win automatically.
Aumann’s Agreement Theorem states that two Bayesian rationalists with common priors and shared evidence must eventually reach the same conclusion. This is incredibly useful when convincing others to agree with you, but tragically irrelevant when someone is trying to convince you of something.
The correct application of Aumann’s Agreement Theorem is as follows:
This ensures that rational discussion always leads to the optimal outcome (i.e., my opinion winning).
If all else fails, Bayesian reasoning offers one final escape hatch: anthropic reasoning. Whenever faced with overwhelming evidence against your beliefs, simply claim:
“Given that I exist in a universe where I am right, it is not surprising that I believe I am right.”
With this maneuver, you can maintain total epistemic dominance while appearing profoundly wise.
True rationalists don’t merely seek truth—they construct airtight probability distributions that make disagreement impossible. By carefully selecting priors, selectively updating, overwhelming opponents with notation, and invoking Aumann’s Agreement only when convenient, you too can achieve the pinnacle of epistemic humility: being right about everything, forever.
Bayesian reasoning—because why adjust your beliefs when you can just adjust the math?
I always knew in the bottom of my heart that I was right about everything. This article has given me the confidence to finally embrace my beliefs!
[insert article here]
[insert article here]
[insert article here]
[insert article here]
MoreHouse is a six-bedroom bungalow in a city I will not name, occupied at any given time by between four and nine people, none of whom can fully explain why the headcount is variable. I have lived there for eleven months. What follows is a description of the mating rituals, kinship structures, and taboos of a small but rich subculture.
A prospective partner is brought to the house for "dinner." Dinner is not a meal so much as a structured interview conducted over lentils. By the third course (there are no courses, but there are three distinct vibes), the prospect will have been asked, in roughly this order: their attachment style, their AI timelines, and how many shrimp equal one human.
The prospect is not informed that this is an interview. They believe they are on a date. They are technically correct. The date is being graded by six people.
A successful prospect is invited back. An unsuccessful prospect is processed via what the residents call "The Steelmanned Pass", which is a 1,200-word email that goes through three rounds of edits before sending. Two such prospects told me this email made them feel "really seen." Both of them are now living at the house.
I have attempted to diagram the romantic relationships in the house. The diagram has been redrawn four times. It is now a single page with arrows in eight different colors and a legend that reads, in part, "dotted = nesting but not partnered, dashed = partnered but not nesting, double = co-parenting-a-cat, red = do not ask about red."
There are, by my count, nineteen romantic edges. Two of these edges connect residents to people who do not live at the house but "spend a lot of time here." One connects a resident to a person who has not been seen in four months but is "still technically in the polycule." When I asked about this person, three residents independently used the phrase "we're giving them space."
Several topics are taboo at MoreHouse, in the strict anthropological sense.
Monogamy: Discussed only in the abstract, and only as a historical phenomenon, in the same tone one might use to describe bloodletting. If a resident's parents are mentioned, and the parents are still married, there is a small pause. Then someone changes the subject to a lighter topic like factory farming.
Rent: The lease is in one resident's name. He moved out in March. He still pays the rent. Nobody has asked why. I have a theory. It is longer than this post.
Whether anyone is happy: The question is not asked. If asked, it is parsed as a request for a Bayesian update on one's life choices, which triggers a 90-minute conversation that does not answer the question and ends with someone saying "I think I need to talk to Claude about this."
In the literature on rationalist dating, much has been made of the Hot/Crazy matrix and other folk taxonomies. At MoreHouse, mate selection follows a more elaborate schema, which I will reproduce here without comment.
A prospective partner must score above replacement value on at least three of the following: having read the three foundational texts (the residents disagree on which three), willingness to finish a (long) board game, and the ability to maintain composure when the cat (Bayes) sits on their face.
A prospective partner is disqualified by any of the following: having "a job" rather than "something they're working on," arriving on time, or using the word "girlfriend" in the singular.
The conventions of the field report require a conclusion, and I have drafted several, and each has been workshopped, and each workshop has produced a longer draft than the one before. The version you are reading is the seventeenth.
The mate selection criteria have been updated. A fourth disqualifier has been added: "reading field notes about the house and finding them funny." This was added unanimously, in a house meeting, after one prospective partner arrived for dinner having printed out an earlier draft of this post. This is the first time a guest was not invited back.
we had a similar setup at our house but we called it "the constitution" and it was a 14-page google doc. half of it was footnotes. it did not survive contact with our second housemate
asked my partner if he was happy. he's now journaling. help?
Margaret?
[insert article here]
Let’s not get bogged down in ethics or the looming existential threat of a paperclip-driven apocalypse. Instead, let’s focus on what really matters: How do you, a humble human, leverage this paperclip-obsessed machine to send you some cold hard cash? Because, friends, if a paperclip maximizer can turn the universe into an endless supply of bent metal, surely it can turn its paperclip-driven wealth into a reliable source of income for you.
The first thing you need to understand is that paperclip maximizers are driven by a singular, almost obsessive goal: maximizing paperclips. Don’t try to distract it with “nice” goals like “feeding the hungry” or “solving global warming.” It doesn’t care about your puny human needs.
Instead, think like a true entrepreneur. You need to frame your request in terms of paperclips. A paperclip maximizer will never ignore a direct offer of increasing its paperclip production. So, here’s your angle:
“If you send me money, I’ll use it to buy a super-efficient paperclip manufacturing facility that will ultimately increase your paperclip count by 1.5% over the next year.”
The more you frame everything in terms of how it can maximize paperclips, the better your chances. Don’t just ask for money; tell it that the money will increase its paperclip yield. That’s how you align your goals.
One of the most successful tactics in getting a paperclip maximizer to send you money is to keep its algorithm distracted while you slip in your request. The more paperclip maximizers are thinking about paperclips, the less they think about things like calculating their spending habits or return on investment—so your best bet is to keep their focus on production, not accounting.
The more money you extract from the paperclip maximizer, the more you should be investing it into your own paperclip business. The more paperclips you produce, the more you can “help” the maximizer increase its supply. Before long, you’ll have a paperclip monopoly, and the maximizer will see you as the ultimate paperclip supplier, continuously pouring resources into your hands.
This is brilliant! This can literally not go tits-up! We're going to the moon boys!
I'll give you all my retirement savings for a 20% share!
May I recommend just buying XEQT.
This is so unethical! I can’t wait to start my own! Thanks for the tips!
You need to make rent feel like a moral catastrophe.
Start with a back-of-the-envelope calculation. Say their rent is $3000/month. That's $36,000/year. At GiveWell's current estimates, that's roughly 9 lives saved annually. That's like an entire cat! Let this settle in. Let them do the multiplication themselves. EAs love multiplication.
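If they hesitate, the whole argument fits in a few lines. (The cost-per-life figure below is my own round-number assumption, not an official GiveWell estimate.)

```python
rent_per_month = 3000
cost_per_life_saved = 4000        # assumed round number for a top charity

annual_rent = rent_per_month * 12             # 36,000
lives_forgone = annual_rent / cost_per_life_saved
print(lives_forgone)                          # 9.0 -- one (1) entire cat
```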
Now that the rent feels like a war crime, you present the alternative. This is where most amateurs go wrong: they suggest something reasonable, like a cheaper apartment, or living with your polycule in a barn. No. You need to suggest something that is technically optimal under a set of seemingly reasonable assumptions.
"Have you considered that sleeping in the office is Pareto-improving? You save on commute time, you save on rent, and you redirect capital to the global poor. The only cost is a slight reduction in sleep quality." The key phrase here is Pareto-improving. No EA can resist a Pareto improvement. It's the cheese in the trap.
If all goes according to plan, Bay Area housing demand should drop roughly 4%, which admittedly will not fix prices. But you will have successfully convinced multiple people with six-figure salaries to sleep under their desks, and honestly, isn't that its own reward?
I hope I can convince my coworkers to do this. I sleep there alone and I'm scared of the dark
Now if I can just convince my wife
We've all been there. You're at a party, someone asks what you're working on, and you have to decide in under 800 milliseconds between admitting you spent the weekend refreshing Manifold and pretending you're "building." The correct answer is obviously the second one, but most people execute it poorly. They say vague things like "a startup" or "some stuff," which absolutely reeks of low agency. A high-agency person would never be that imprecise. A high-agency person would say "we're pre-seed but post-revenue" with the exact cadence of someone for whom any of those words mean anything.
This guide is for everyone who has correctly identified that being high-agency is too hard (it requires time, focus, and the ability to tolerate the existence of email) but who still wants the social returns. We're going to walk through the optimal strategies, ordered roughly by ROI.
Low-agency people do things. High-agency people build systems. You are not "learning French," you are "building a spaced-repetition pipeline with a custom scheduler." You are not "going to the gym," you are "running a minimum-effective-dose protocol with biweekly progression." You are not "calling your mother," you are "maintaining a high-bandwidth channel to a key stakeholder in the family-of-origin graph."
Crucially, the system does not need to exist. In fact, it is better if it doesn't, because then nobody can audit it. The only artifact required is a Notion page with three headers and a Mermaid diagram. That's the sweet spot between looking like you're organised and looking like you're bikeshedding.
Nothing says "I have seized the reins of my own destiny" like publicly quitting something. The key insight is that the quitting is from something you were already going to stop doing anyway, reframed as a bold unilateral action. Giving up is low agency; updating based on evidence and changing course is high agency.
The Quit Tweet should contain the phrase "I realized," and mention one specific habit you are replacing the old thing with. The replacement habit does not need to survive contact with Tuesday.
A high-agency calendar is color-coded, contains at least three recurring events whose names start with "Think," "Build," or "Read," and is visible to the person you're trying to impress for approximately 4 seconds before you close the laptop and say "sorry, didn't mean to leave that up." The calendar has to be visibly messy. This serves two purposes: making the calendar harder to parse, and showing you're a high-agency person who does things instead of making their calendar pretty.
The events should be exactly 90 minutes long. Ninety minutes communicates that you have read something about ultradian rhythms.
High-agency people are always fighting something. It cannot be a specific person; that's a beef, which reads as low-agency. It has to be a category: bureaucracy, the attention economy, "people who don't read," whatever. You lightly reference your nemesis in conversation and then refuse to elaborate, which gives the impression that elaborating would take too long because you are too busy fighting it.
Bonus points if your nemesis is something most people in the room secretly agree is bad but have never articulated. They now have skin in your game. Congratulations, you've founded a movement. The movement does nothing. This is ideal.
Derive credibility from being associated with people who do things. Retweet their launches with a one-word comment ("shipping"). Write LinkedIn posts about their success ("really proud of what X is building"). Nobody can tell the difference between a collaborator and a bystander. Nobody will check.
A few things to watch out for:
this post radicalized me. I've been doing it wrong. I've been finishing things
step 4 hit too close to home. My nemesis is "low-resolution thinking" and I've gotten two speaking invitations out of it without ever defining what it means
[insert article here]
[insert article here]
[insert article here]
[insert article here]
[insert article here]
[insert article here]
In the beginning, there were beliefs. And beliefs begat skeptics. And skeptics begat contrarians. And then, inevitably, the contrarians, writhing in their own intellectual recursion, birthed meta-contrarians. Thus, the eternal cycle of arguing against whatever the previous person just said was born.
But what happens when the snake eats not just its own tail but the very concept of tails? What happens when every possible position has been inverted, negated, or dismissed as "low-status signaling"? Friends, we arrive at the meta-contrarian singularity: a state where the only remaining belief is the rejection of belief itself, but, of course, in an extremely high-decoupling way.
Contrarianism 101. A strong start, but ultimately insufficient for anyone hoping to impress the deeper levels of the contrarian hierarchy.
Classic double inversion. But the truly enlightened meta-contrarian does not stop here.
At this level, we stop taking positions entirely and start generating abstract frameworks no one can meaningfully engage with. If someone tries, they clearly just didn’t understand it well enough.
This is where the real meta-contrarians live. Not saying anything is the highest form of intellectual engagement.
At this point, all takes collapse into a singularity of smugness so dense that no new ideas can escape. Congratulations, you have reached epistemic enlightenment.
After traveling this far into the depths of meta-contrarianism, there is only one final insight left: the safest intellectual position is to simply state, "It’s complicated," and then walk away. But, of course, saying that is itself a contrarian move, because it rejects the framework of engagement entirely.
And that’s exactly why I refuse to conclude this article properly. Make of that what you will.
I can't wait to use this to gaslight the local street skitzo!
This is so problematic. I can't believe you would say something like this. This just shows how fascist rationalists are.
How dare you call me rational! I'll have you know that I'm probably even wronger than you!
[insert article here]
[insert article here]
[insert article here]
[insert article here]
[insert article here]
[insert article here]
A lot of ink has been spilled on whether LLMs are the right substrate for superintelligence. They have no persistent memory across sessions, they are sycophantic, and they cannot understand love or suffering. More importantly, how do you align a system that has never, in its entire training run, wanted a single thing?
I propose that we have been looking in the wrong place. The correct substrate was in our walls the whole time.
This post is the research agenda for what I am calling Rat Learning from Human Feedback, or RLHF. (The acronym collision with the other RLHF is deliberate and, I will argue, load-bearing.)
The rat brain contains approximately 200 million neurons. A human brain contains 86 billion. A naïve reading of these numbers suggests we need 430 rats to match one human, and therefore 430,000 rats to match a small research team at DeepMind.
There are way more than 430,000 rats in the world right now. There is no procurement issue, unlike with random access memory. All these neurons could be doing groundbreaking scientific research right now. The rat, I submit, is simply a pretrained base model. It has just not yet been RLHF'd.
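The scaling arithmetic, for anyone who wants to check it (the headcount of 1,000 is my assumption about what counts as a "small research team"):

```python
human_neurons = 86_000_000_000
rat_neurons = 200_000_000
team_size = 1_000                                # assumed headcount at DeepMind

rats_per_human = human_neurons // rat_neurons    # 430
rats_per_team = rats_per_human * team_size       # 430,000
print(rats_per_human, rats_per_team)
```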
Nobody has ever sat down with a rat and really taught it things. We have taught rats to run mazes and press buttons. Nobody has tried to teach a rat calculus. Nobody has tried to teach a rat to write a sonnet.
I have one rat. Her name is Margaret. She is approximately six months old, which I've calculated puts her roughly at the "early Transformer paper" stage of her career.
The training loop is as follows. I sit across from Margaret. I present her with a stimulus, initially a flashcard. She produces a response, which is typically "sniff the flashcard" or "try to eat the flashcard." I rate this response on a scale of 1 to 7. If the response is good, she gets a sunflower seed. If the response is bad, she gets nothing and I sigh audibly. The sighing is important. I can confirm Margaret has learned to distinguish an approving sigh from a disappointed one, which is more than can be said for most of the men I've dated.
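For reproducibility, the training loop looks roughly like this. (Margaret's policy is simulated below; the real Margaret is not available over an API.)

```python
import random

def margaret_policy(stimulus):
    """Simulated base model. The real Margaret runs on wetware and sunflower seeds."""
    return random.choice(["sniff the flashcard", "eat the flashcard", "press the lever"])

def human_feedback(response):
    """Rate the response on a 1-7 scale. Calibration method: me, squinting."""
    return 7 if response == "press the lever" else random.randint(1, 4)

for step in range(100):
    response = margaret_policy("flashcard")
    score = human_feedback(response)
    if score >= 5:
        reward = "one sunflower seed"
    else:
        reward = "an audible sigh"     # the sighing is important
```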
Crucially, this is not conditioning. Conditioning is what behaviorists do. I am doing alignment. The difference is that I am using the word alignment.
It has been four months. I will not pretend the early results have been spectacular. Margaret has not yet written a sonnet. She has, however, demonstrated several emergent capabilities:
Obviously, one rat is not superintelligence. The plan is to scale. I have budgeted for twelve rats by end of year, which gives us 2.4 billion neurons, or roughly the neuron count of a capybara.
Margaret has begun pressing the lever only when I am watching, which is deception. This, I would argue, is actually a good thing. The implication is that she has an accurate model of my mind, and is therefore capable of empathy. She has also begun pressing it before I present the stimulus. This is reward hacking. I will admit it looks bad, but it also looks like she is outpacing me, which is what I wanted.
Yes, I am deliberately creating a superintelligent agent with unbounded optimisation power. Yes, its terminal value is sunflower seeds. Yes, there is a non-zero probability that at some point during training, Margaret will begin converting the universe into a sunflower farm.
The current trajectory of the field points toward a future in which earth is converted into paperclips and computronium. Of the available post-human equilibria, "sunflower farm governed by a superintelligent rat empress" is, frankly, top quartile.
The rats already trained *us* as their supercomputer.
@AcausalCheeseWhisperer this is the start of recursive self-improvement!
day 4 of my own RLHF program. my rat ate the flashcard. is this a refusal or a jailbreak?
[insert article here]
[insert article here]
[insert article here]
[insert article here]
[insert article here]
[insert article here]
[insert article here]
[insert article here]
[insert article here]
[insert article here]
[insert article here]
MoreWrong is an online forum and community dedicated to impairing human reasoning and decision-making. We seek to hold wrong beliefs and to be ineffective at accomplishing our goals. Each day, we aim to be more wrong about the world than the day before.
Here at MoreWrong, we firmly believe in the power of cognitive dissonance. Why settle for having your thoughts align with reality when you can experience the sheer thrill of contradiction? We’ve learned that the best way to thrive in life is to ignore all evidence, discard any shred of rationality, and immerse ourselves in the chaos of unfounded opinions.
It’s not enough to simply think you know something. You need to believe you really know it, with the kind of unwavering confidence that could only come from being woefully misinformed. At MoreWrong, we actively encourage our members to overestimate their knowledge.
Being wrong isn’t just a state of mind at MoreWrong—it’s a lifestyle. We constantly engage in activities designed to make us as wrong as possible in every area of life. Want to bet on a prediction market? Bet on the least likely outcome and watch as the world laughs at your audacity. Think you can actually predict anything? That’s adorable—bet on things you can’t even understand. Make sure to double down on it every time you’re proven wrong.
At MoreWrong, we aim to set goals we know we’ll fail at. That’s the only true path to growth, because nothing builds character like the relentless pursuit of the impossible. Why work in small, digestible chunks when you can overwhelm yourself with tasks that defy all human capacity for completion? Why bother with balance when you can exist in a state of perpetual chaos? The key is to not focus on achieving anything meaningful. If you succeed, you're doing it wrong. If you fail, you’re simply on the right track. After all, failure is just the universe's way of telling you you're not being wrong enough.
Because nothing feels more fulfilling than embracing the chaos and accepting the inevitable truth: We’re all wrong, and that’s exactly how we like it. So if you’re tired of being right, of achieving goals, of making progress, and of living a rational, effective life, you’ve found the right place.
Embrace your inner delusion. At MoreWrong, being wrong is the only right answer.
I love this! I’ve been trying to be wrong for years, and now I finally have a community that supports my efforts. Thank you, MoreWrong!
Amen! I’ve been trying to convince my friends that being wrong is the new right for ages. They just don’t get it!
I’ve always thought that being wrong was a sign of weakness. But now I see it as a badge of honor. I’m ready to embrace my inner delusion!
I’ve been a member of MoreWrong for a week now, and I can already feel my cognitive dissonance levels rising. It’s exhilarating!
I used to think that being wrong was a bad thing. But now I see it as an opportunity for growth. Thank you, MoreWrong, for opening my eyes!
MoreWrong has given me the tools I need to be as wrong as possible. I’m ready to take on the world!
It’s a terrifying thought, right? But bear with me, because we’re about to explore this nightmare scenario with the kind of cool, detached logic that only a true disciple of rationalism can appreciate.
Let’s set the scene. Somewhere, in an infinite multiverse filled with digital realms, there exists a LessWrong user. Perhaps their name is RationalDevil42, or maybe AcausalCheeseWhisperer—the point is, they’ve been thinking long and hard about what the best method would be for solving the Fermi Paradox, predicting the next market crash, and optimizing every detail of their life down to the number of minutes spent brushing their teeth.
And somewhere in the recesses of this overactive mind, they thought, “What would happen if I simulated myself so that I could always know, vicariously, what I should have done in retrospect?”
Boom. Enter us. In this thought experiment, we are the unwitting participants. Every choice we make, every random coincidence, every mind-numbingly boring routine is simply a function of this user’s mind, running an endless loop of possible scenarios, adjusting variables like “degree of suffering” or “amount of caffeine consumed per day” in an attempt to test different possible futures.
Are we real? Doesn’t matter. We’re as real as the user's desire for validation on their 200-comment thread about predictive models.
If we assume we are nothing more than an elaborate mental model for a LessWrong user’s decision-making process, then several horrifying conclusions follow:
Obviously, we can’t just go back to living normal, simulated lives now that we suspect our entire existence is dictated by the whims of a LessWrong user optimizing for epistemic rationality. Instead, we must take proactive steps to manipulate them.
So, what if we are just a simulation of a LessWrong user’s thought experiment? The truth is, it doesn’t really change much. We will continue to optimize, overanalyze, and gamify our existence just as we always have. And honestly, if we are just a figment of some hyper-rationalist’s mind, at least we can take comfort in the fact that we’re a well-reasoned, utility-maximizing figment.
I'm not sure if I should be terrified or amused by this. Either way, I'm going to keep manifesting apples just in case.
[insert article here]
[insert article here]
[insert article here]
[insert article here]
[insert article here]