Anyone have any ideas on how Hofstadter’s law could be used for some really, really exploitable anthropic computing? Basically, assuming that any worlds which violate Hofstadter’s law too egregiously fail to be realized because they’re swallowed by paradoxes, how could one use this to weed out less desirable outcomes? And what would be reasonable preconditions for it? Obviously it only affects things you’re doing yourself, or are otherwise personally involved in or responsible for, but what else?
3 months ago · 9 notes · permalink