promethea.incorporated

brave and steely-eyed and morally pure and a bit terrifying…


Why don’t rationalists get Watchmen?

baroquespiral:

Watchmen analogies seem to be kind of a thing in the rationalsphere, and yet a lot of them are pretty bad, which is surprising coming from a community that I like and engage with in large part because it takes not only ideas but narratives seriously, a community willing to consider something like Cognitive Trope Therapy.  But then, Watchmen is a deconstruction of the kinds of tropes Yudkowsky based his model of therapy on.  It’s a work that asks how those tropes would play out in the real world, which is, on the other hand, sort of the same thing HPMOR purported to do.  The rubric of “rationalfic”, as an attempt at a more serious, analytical take on the characters and rules of genre, is not that much different from the kind of deconstruction Watchmen pioneered, which would explain its popularity… and also make rationalists’ inability to engage with this major antecedent rather interesting. For example…

~

Ozymandias is obviously the figure of smart, edgy utilitarian heroism for… well, Ozy, and I’ve seen him invoked as such on the Dank EA Memes Facebook group.  That’s what he presents himself as, when he’s introduced to us as a zillionaire philanthropist (EA?) and deep thinker.  His entire narrative arc consists in his being exposed as a straight-up egomaniac: the comic is pretty unambiguous on this point, even as it’s ambiguous about the ultimate value of his actions.  There’s no utilitarian value to killing all your employees in imitation of an Egyptian pharaoh you felt a wishy-washy spiritual connection to on a drug trip.  His goal is “conquest not of men, but of the evils which beset them” - conquest is still the defining term here, insofar as one gets the sense that he’d be down for conquest of men if it was still, like, a thing civilized people did.

Rationalists reading this will no doubt remember “the Rule of Three… that any plot which required more than three different things to happen would never work in real life”.  Ozymandias’ plan requires way more than that.  It requires national governments to reach a conclusion about the nature of the thing that materialized in New York at least resembling the one he intended, despite the lack of any indication as to what the fuck it even is.  It requires American intelligence to get accurate information that it’s not a superweapon from the Soviets; for them to believe the information is accurate; for military top brass and politicians to believe them; for arms manufacturers and lobbyists not to smell trillions of dollars and convince the highest-level decision makers to ignore the intelligence and start building their own alien psychic bomb; and for America and Russia’s co-operation to be at least partly honest in the ensuing prisoner’s dilemma.  We’ve had the chance to watch for almost two years now how this plays out in real life, and while it seems to have worked (for the moment), the fact that it took that long for America and Russia to get out of each other’s hair (sort of), and that during that time several international incidents that could have provoked WWIII occurred as a result of rival superpowers/proxies fighting in the same area on the same side, makes the whole gambit, to the reader in 2016, look a lot less foolproof.  Maybe there are steps he didn’t tell us about, but he seems to have no idea what to do when the rest of the nonexistent invasion force just… doesn’t show up.  And it requires that no hint of this vast conspiracy - involving some of the most famous people and superpeople in the world, innumerable labourers, and obscene amounts of money - get out to the public, which is what might be about to happen in the last panel of the entire book.
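The Rule of Three is just conjunctive probability. As a back-of-the-envelope sketch (the 90%-per-step figure is my own illustrative assumption, nothing from the comic), even generously independent, high-probability steps compound badly:

```python
# Illustrative only: compound success odds of a conjunctive plan.
# The per-step probability is an assumed figure, not from Watchmen.
p_step = 0.9  # generously assume each step independently has 90% odds

for n_steps in (3, 6, 10):
    p_plan = p_step ** n_steps
    print(f"{n_steps} steps: {p_plan:.0%}")  # 3: 73%, 6: 53%, 10: 35%
```

By that rubric, a plan with a half-dozen-plus conjunctive steps is closer to a coin flip than a sure thing, even before any single step is as dicey as the ones listed above.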

Ozymandias is supposed to be a failure whose edgy utilitarian calculations are undermined by the book’s central theme of the unpredictability and complexity of human existence.  I mean, it’s right there in the name.  And that brings us to…

~

“This is basically exactly what Moore does to Manhattan. He is not actually a superpowered being to whom the world’s smartest man is little more than the world’s smartest termite, and thus when he needs to write Manhattan out of the story he does something that to him seems perfectly sensible, but to someone who is closer to what Manhattan would actually be than Moore himself is (I never promised to be humble), it is clearly a terrible move. A person’s father is someone unexpected, and Manhattan is like “woah, humans are way too random and unlikely, doc out”.  Unfortunately, Moore doesn’t understand what else is random and unlikely: the exact pattern of decay from a piece of plutonium, for example. And literally everything else as well. It is highly preposterous that Doctor Manhattan would so privilege the unlikely things of human psychology when he is completely unfazed by the unlikely things of nuclear decay; and especially grating because one can so obviously see a better answer.”

IMO, Promethea @socialjusticemunchkin is expecting Doctor Manhattan’s behaviour here to be usual, intuitive and unproblematically representative of Doctor Manhattan as a fixed type of consciousness, when what they’re describing is precisely his arc as a character and the transformation of that consciousness.  You don’t have to agree with what Moore says about Doctor Manhattan here, but the objections raised here don’t just pwn him by themselves, because he expects the reader to make them, and in that divergence from the concept of Doctor Manhattan as we understand it at the very outset of the book lies the entire depth and narrative purpose of these events.  Moore is staking a particular set of claims on this storyline; he needs to tell the story to demonstrate them precisely because they are counterintuitive.

Here’s Promethea’s “so much better” response:

“In this event, nothing was technically beyond my understanding. I could see the neurons, the axons, the transmitter chemicals, down to every single quark, with perfect clarity and the inevitability was obvious. Yet there is one thing I couldn’t know: the subjective experience of having this happen. This neuron sends this signal to that one, and it outputs actions, speech, thoughts, but I was not her, and from my own position I could never truly comprehend what was going through her head in that moment. Humans are the only thing in this universe that I can’t understand, they are way too fascinating for me, doc out.”

When I read Watchmen, I assumed that was exactly what Moore meant here.  I’ll admit that was a kind of interpretive leap of faith I have a tendency to make at least with authors I like: that the literal, statistical unlikeliness of the events in question would sway Doctor Manhattan is every bit as absurd as Promethea observed, to the point that it ceases to be a question of making “a move that vaguely seems like a move AlphaGo might make” so much as making a move that would not even fool an amateur.  It violates not even the “underlying logic” but the visible logic of Doctor Manhattan’s superintelligence as set out at the beginning of the book.  As Sherlock Holmes says, “when you have eliminated the impossible, whatever remains, however improbable, must be the truth”.  Assuming even a baseline of formal coherence for Doctor Manhattan as a character, a literal reading of this passage is impossible.  Furthermore, it wouldn’t be narratively satisfying.  It wouldn’t indicate anything having changed in how he applies his superintelligence, how he relates through it to others and the world, as a character, albeit a super-one.

Doctor Manhattan doesn’t say the mere fact of Sally Jupiter having a child with the Comedian is so improbable it made him reassess his opinion of humans: it’s just the catalyst to a longer reflection:

“….in each human coupling, a thousand million sperm vie for a single egg.  Multiply those odds by countless generations, against the odds of your ancestors being alive; meeting; siring this precise son; that exact daughter, until your mother loves a man she has every reason to hate, and of that union, of the thousand million children competing for fertilization, it was you, only you, that emerged.”

That “you” means precisely “the subjective experience of having [you] happen”.  None of that makes any sense except in relation to subjectivity.  Obviously, the chances of some sperm reaching the egg are hella good; that’s why that whole system evolved, its redundancies being a plus.  In relation to what does the specificity of one sperm against the others even become a meaningful thing to calculate?  I mean, Laurie anticipates Promethea’s exact point here when she says by this standard anyone’s birth could be a “thermodynamic miracle”; but not just people, literally anything happening could be one if you jiggle the parameters of what you’re calculating for enough, which would bring us right back around to where we started if subjectivity were not introduced.  Experience is introduced in a relation of its own, to a totality - “Multiply those odds by countless generations, against the odds of your ancestors being alive” is almost Hegelian.  Subjectivity is so overdetermined it can’t be thought without dragging in, and simultaneously negating, everything else; operations which, expressed mathematically, soon surpass the astronomical.  This may or may not be true, but it’s an expression of Promethea’s headcanon more than of their strawman.
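To make “soon surpass the astronomical” concrete, here’s a toy calculation in the spirit of Manhattan’s speech (the figures - a thousand million sperm per coupling, forty generations - are illustrative assumptions, not canon):

```python
import math

# "a thousand million sperm vie for a single egg"
per_conception = 1 / 1e9

# "Multiply those odds by countless generations": compound the odds
# that each specific ancestor, in turn, was the one who emerged.
generations = 40  # roughly a millennium of ancestry
log10_odds = generations * math.log10(per_conception)

print(f"odds ≈ 10^{log10_odds:.0f}")  # odds ≈ 10^-360
```

Even before “countless” generations, the exponent dwarfs anything physical (there are only on the order of 10^80 atoms in the observable universe), which is exactly the point: a number like that is only meaningful relative to a subject, not as physics.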

So what do the circumstances of Laurie’s birth - or any of the events of the story - have to do with this realization?  Promethea seems to have forgotten another crucial point in their reading of this passage - Doctor Manhattan already knew who Laurie’s real father was.  He more or less brings it to her attention: “I think you’re avoiding something”.  Her reaction to this information differs from his: it throws her entire security of self and sense of meaning out of whack, while he had long since logically deduced the insignificance of human life from a broad analysis in which this tiny fact meant next to nothing.  And it does the opposite for him, as he is forced to confront precisely why it means so much to her: that her mother (her fragile genetic link to the totality of humanity as a species) “loved a man she had every reason to hate” - a subjective state unimaginable from another state of subjectivity, a gap that cannot be bridged by any higher plane of analysis.

So, I basically agree with the explanation proposed here. But my point is that Moore wrote it badly. The “Assuming even a baseline of formal coherence for Doctor Manhattan as a character, a literal reading of this passage is impossible. Furthermore, it wouldn’t be narratively satisfying. It wouldn’t indicate anything having changed in how he applies his superintelligence, how he relates through it to others and the world, as a character, albeit a super-one.” is precisely what I’m talking about. “Thermodynamic miracle” is a cargo-cult AlphaGo move, but I’m pretty sure that someone with Moore’s writing skills and my “emulate such a mind” skills combined would be able to output a far better string literal to refer to the mindstate object.

“…events with odds so astronomical they’re effectively impossible, like oxygen spontaneously becoming gold. I long to observe such a thing.”

It’s a credit to Moore’s skills as a writer that he’s mostly managed to keep the illusion up, except for the part about “thermodynamic miracles”, which is even more unsatisfying because it reveals that he’s been basically bluffing his way through. Bluffing well, but still bluffing.

Furthermore, it seems that he’s overreaching, trying to make Manhattan unnecessarily alien, because the correct version is closer to human and there’s basically no reason to do something that makes less sense both to him (because “thermodynamic miracles” are bad physics) and to Laurie (because a more human-focused approach would be obviously easier to understand). It’s a Spock-type mistake.

Now, one could attempt a saving throw by appealing to the observable phenomenon that a certain type of mind (reporting in myself, among others) reverts to more mechanical-sounding speech under emotional duress, etc., because that’s the native language of that mind, but then there’s the problem that “thermodynamic miracles” is really really unsatisfying as an example of such. It’s not plausible; I can’t see someone who knows their physics that well making such an error.

What I can see is someone trying to emulate such a mind and mistakenly thinking it would output such a thing, because their own understanding of the things behind the statement is un-solid enough to think it would be credible. Which is exactly the thing that ties to my deeper argument; that bridging human mindstates and subjectivities is not trivial, that one risks pretty embarrassing failures if one ignores this and just assumes that different minds must be comprehensible from within the framework of thinking oneself is used to, and that the reviewed book is falling prey to said phenomenon really really hard.

1 month ago · tagged #basilisk bullshit #NAB babble · 10 notes · source: baroquespiral · .permalink


A statement on Neoreaction a Basilisk

mugasofer:

leftclausewitz:

This isn’t so much a review as it is an address to a particular comment I’ve seen often come up among those who oh so desperately want to undo the project, to argue that the links made within NAB are irrelevant, and more generally the statements that are made whenever the politics of the lesswrong community are attacked.  Whether Yudkowsky’s politics are ‘conservative’ or not is argued over and over and over again in the horrid way characteristic of a group with a strong belief in the powers of language, and this argument has come up yet again in the conversation about NAB: that Sandifer’s choice to talk about Yudkowsky alongside Moldbug and Nick Land (two massive neoreactionaries) is a miscategorization to the degree that Sandifer shouldn’t finish the book, that the book is communist propaganda, whatever.

I’m just going to provide my reading of the situation, as ya know, an actual communist.  Because I’m of the opinion that while Yudkowsky may not be a ‘conservative’, his work definitely fits within the reactionary project, and that this key element explains a large degree of the way the lesswrong/rationalist community leans.

To sum up the key element: the major part of Yudkowsky’s project is a desire to work towards the creation of a beneficent AI to whom we can then give the resources to run the world.  To this end he has created a pair of think tanks, has written innumerable papers and thinkpieces, etc.  Now, this is hard to take seriously, but if we do take it seriously then this is merely a new coat of paint over a desire that is over two hundred years old.

You see, it’s easy to forget that feudalism (stay with me now) wasn’t just ‘having a king’; the feudal system was a whole system wherein the whole hierarchy was justified in generally divine terms.  And while the literary origin of the divine right of kings was in Bodin, Bodin’s work actually is a degradation of the concept; the fact that it needed to be expressed in the 16th century showed just how much it was being questioned.  Because before this period, while the King was not absolute, the hierarchy he remained atop of was.  It’s an amazing fact that no matter how many aristocratic intrigues and revolts occurred before the 17th century, not a single one of these revolts sought to end the whole edifice of monarchy. I can go on about this separately, but a full discussion of it would take quite a bit of time and it’s not specifically what I’m talking about here.

But the thing about the divine right of monarchs is that in the end it is divine. Many who sought to bring back monarchs sought merely to turn the clock back to 1788, but some of the more intelligent reactionaries who wrote in the generation following the French Revolution noted that you would have to turn it back even further: the beginning of secular thought was the beginning of the demise of a fully justified monarchy.  Because if God is not there in the foreground, justifying the difference between King and noble and noble and peasant, then the King is just some guy, your local lord is just some guy, and what the fuck justifies their existence over you?

This became worse and worse over the course of the 17th and 18th centuries, with ever more numerous and ever more complicated justifying measures appearing - for instance, a focus on the innate power of the blood, which became a motif among reactionaries for centuries to come.  But in the end these measures just didn’t cut it, and after the French Revolution it became harder and harder to justify Monarchy, or any sort of Autocracy, on divine or secular grounds.

I would argue that the reactionary project ever since the French Revolution is the search for a newly justified King, a King who could reestablish the hierarchy of old.  But they run up against an issue: without the totalizing religious beliefs of old, your hierarchy is always going to be composed of regular people, and unless you engage in nonsensical magical thinking (a trait actually increasingly common now even in mainstream works, but constantly under challenge), you’re going to have to find another way.

And so, at the end of this line of thinking, we find Yudkowsky.  How is it that neoreactionaries found such a home in the bosom of rationalism?  Because they were, in the end, seeking the same thing.  Moldbug declaring that he is, in the end, searching for a king is not a more radical view compared to Yudkowsky’s, only a more honest one.  It takes away the varnish of technoutopianism of a beneficent and omnipotent AI and says that in the end a person will do.  Because in the end a King is a King, regardless of how many philosophy classes he’s taken and, indeed, whether he is human or not.  The two exist on the same plane within the same project: the AI Philosopher King is, to the Lesswrongers, ideal, but Moldbug says that he’d settle for Steve Jobs. It’s the same shit, the same longing for a newly justified King.

Firstly, I think there’s a basically correct insight here. I think there’s an essential similarity to the ideas of a philosopher-king and an AI-god, on a psychological level, and that it’s probably responsible for a lot of their appeal. 

See, for example, Iain M. Banks’ Culture novels, which depict a perfect liberal utopia but also feature AI-gods that play to a lot of White Man’s Burden tropes, treat humans as second-class citizens, and literally act as a de facto ruling class who privately own 99.9% of all weapons and of the means of production.

I’m not sure what this shared something is, but it probably has a lot to do with the fact that “just put a good person in charge” seems to be … kind of the default way people imagine running things?

With that said, I have a lot of quibbles. (This, uh … this turned out a little long.)

For example: Eliezer has literally written several stories set in his ideal utopia after the Singularity, and there are no philosopher-kings. Instead, there’s vague mention of the “machines” which have fixed everything and quietly buggered off to maintain things in the background while humanity is left to rule itself. Also, he has explicitly written this essay arguing that AIs should fix death and disease and resource scarcity and then quietly bugger off to let us run ourselves.

As others have said, it’s not totally clear what the difference is between “we just need the right AI and they’ll give us what we need and run everything perfectly without bias” and, not to put too fine a point on it, Communism, in which we just need the right government and they’ll give us what we need and run everything without bias. You might argue that this government will be democratic and an AI isn’t, but a) quite a lot of actual communists seem to disagree with you there, and b) there’s no particular reason you couldn’t program an AI to do what 51 percent of the population votes to do, either.

Yudkowsky has written this essay arguing that we should build an AI that’s a mindless tool designed to fix our problems, not a person; person-AIs can come later, as our equals. Being a mindless tool for humans to use seems like un-kingly behaviour to me.

It’s utterly unclear to me why you think God is necessary or even sufficient to justify monarchy. If you think Kings rule because they’re a naturally better type of person, then the existence of God is, if anything, going to encourage you to think thoughts like “all men are equal before God” and “even kings have to bow to God, so really we should put a collection of the wisest priests in charge”. Also, quite a lot of people have believed in the idea of kings without God, or God without kings.

Also, we still have massive amounts of inequality, quite a lot more of it in absolute terms, which makes me suspicious that we didn’t abandon hereditary aristocracy because we started believing in equality more than all those ancients, but rather because rapid economic progress means money collects in the hands of people who get in on the ground floor instead of people who spend generations building it up. And that loyalty to a single leader has grown increasingly less efficient compared to intra-unit loyalty as armies have grown larger. But that’s just a suspicion.

Moldbug doesn’t want a philosopher-king. He wants kings of a sort, yes, but CEO-kings: kings in competition with a bunch of other kings in a system that nobody ultimately controls. This is the exact opposite of a philosopher-king uniting everybody because he understands everything and can do it correctly, or for that matter of an AI ruling us all because it controls everything and comprehends everything perfectly. It’s basically libertarianism-for-governments.

If you said to Yudkowsky “hey, how about we have a king?”, he’d laugh in your face. This makes me suspicious of the idea that he’s trying to justify a secret longing for a king.

“…work towards the creation of a beneficent AI to whom we can then give the resources to run the world” - Yudkowsky doesn’t think an AI will require any particular resources to run the world, and has expended quite a lot of virtual ink defending this point.

“…after the French Revolution it became harder and harder to justify Monarchy, or any sort of Autocracy, on divine or secular grounds” - it seems to me that people have had no trouble justifying dictatorships at all; indeed, of our largest and most powerful countries, Russia used to be ruled by autocrats and China currently is. Rather, autocrats have had trouble competing on either economic or military terms, perhaps because they waste so much effort putting down the peasant uprisings you dismiss. (If anything, the French Revolution makes it easier to justify kings, because it lets people suggest the alternative is the French Revolution.)

“…before this period, while the King was not absolute, the hierarchy he remained atop of was” - this really isn’t true at all, as a cursory reading of history would suggest. Are you perhaps using “before this period” to mean “for a short while in medieval Europe”? Because even then, it isn’t true.

“this argument has come up yet again in the conversation about NAB … that the book is communist propaganda” - *snort* what? @socialjusticemunchkin

“You see, it’s easy to forget that feudalism (stay with me now) wasn’t just ‘having a king’, that the feudal system was a whole system wherein the whole hierarchy was justified …” - seems like “build an AI” doesn’t feature any hierarchy, though. It’s just this one AI.

I never said it’s communist propaganda; all I said is that Sandifer’s degraded marxism (which, I argued, seemed like a marxism even Marx wouldn’t support if he were alive today) is a shitty kind of marxism. I even went 97% of the way to accusing him of being basically a CIA shill for his “~capitalism~ is inevitable, let’s just do the pagan sex cult thing instead of trying to fix stuff” attitude (which was literally invented by the CIA, srsly guys, leftists should know this). If anything, the book would’ve been improved by being communist propaganda, because communist propaganda usually doesn’t start by assuming that we’re fucked but instead argues that things could be fixed and improved.

1 month ago · tagged #NAB babble #basilisk bullshit · 231 notes · source: leftclausewitz · .permalink


metagorgon:

ozymandias271:

IMO from a PR perspective the best way for Yudkowsky to respond to NAB would have been “fuck yeah! This is awesome! I’m totally a Lovecraft protagonist!”

well, lovecraft antagonist, actually. i think phil’s the lovecraft protagonist here, being driven mad by the horrors underlying this world and all.

He Who Has Better Things To Do Than To Try To Be A Cool Kid should take advice from those who do know how the cool kids function.

(via metagorgon)

1 month ago · tagged #NAB babble #basilisk bullshit · 43 notes · source: ozymandias271 · .permalink