
/lit/ - Literature



File: 1492623697870.png
No.15123706

Is there a chart for literature that upholds the exact opposite values of most of /lit/? As in:
>Open borders
>One world government
>Consumerism
>Radical anti-theism and scientism
>Multiculturalism, intersectionality, diversity
>Neoliberalism
>Radical feminism
>Afropessimism
>Decolonization
>Pro-USA
>Mass surveillance apologetics

>> No.15124217

>>15123706
>>>/lgbt/

>> No.15124228

>>15123706
Literally The End of History and the Last Man

>> No.15125230

So far I've got:
Fukuyama - The End of History and the Last Man
DiAngelo - White Fragility
Friedman - Capitalism and Freedom
J. Sakai - Settlers: The Mythology of the White Proletariat
Andrea Dworkin - Intercourse
Bryan Caplan - Open Borders: The Science and Ethics of Immigration
Richard Dawkins - The God Delusion

>> No.15125281

> Radical anti-theism and scientism
> Mass surveillance apologetics

This dude thinks civilization is doomed and it's worth turning the planet into a panopticon to preserve it.

https://slatestarcodex.com/2014/07/30/meditations-on-moloch/

>> No.15125297

>>15125281
To borrow his terminology, that's not a very charitable take on that post.

>> No.15125301

>>15125281
Oh, and this dude too:

https://nickbostrom.com/papers/vulnerable.pdf

>> No.15125323

>>15125297
How do you think you're going to kill Moloch dead with a superintelligence? Pervasive monitoring would need to be set up to prevent the defection that causes multipolar traps.
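
To make "multipolar trap" concrete, here is a minimal toy sketch in Python (my own illustration, not anything from the essay; the payoff numbers are arbitrary): two actors each choose to cooperate or defect, defecting always pays a little more individually, so both end up at mutual defection even though mutual cooperation would leave both better off.

# Toy payoff table for a two-player race to the bottom (illustrative only).
# Entries are (row player's payoff, column player's payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 4),
    ("defect",    "cooperate"): (4, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move):
    """Return the move that maximizes the row player's payoff against opponent_move."""
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFFS[(mine, opponent_move)][0])

# Whatever the other side does, defecting pays more for me...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...so the stable outcome is mutual defection, which is worse for everyone
# than mutual cooperation. That gap is the trap.
print(PAYOFFS[("defect", "defect")], "vs", PAYOFFS[("cooperate", "cooperate")])

Pervasive monitoring, in this framing, is just a blunt way of changing the payoffs so that defection stops being the dominant move.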

>> No.15125387

>>15125323
I don't think you can kill Moloch; I think he's kidding himself. But I also don't think he's the type of person who really wants everyone spied on all the time.

>> No.15125476

>>15125387
To be fair, the 'killing Moloch dead' part is only the very last bit of a great essay, and it's not something he went on to defend elsewhere, as far as I know, so he's probably aware of the danger. Though my contention is that a superintelligence cannot save humanity: salvation is a purely internal process, and it gets harder and harder to achieve the more focused one is on the external world, regardless of whether it is pleasure or pain that is taking one out of oneself.

The solution being proposed is still a panopticon. That is what Bostrom is very explicitly proposing these days, and he wrote the book on Superintelligence.

>> No.15125502
File: 1559090563951.jpg

>>15123706
accelerationism

>> No.15125520
File: 3c721873ce6348379fddb5e9059119304zyd6t.jpg

>>15123706
You're wrong and a newfag if you think all these things are antithetical to /lit/

>> No.15126073

>>15125281
Good article, but I didn't understand how you would create an AI with values matching our own. And what if, in order to self-improve, the AI had to ditch those values? What if maximum intelligence is not possible while possessing values?

>> No.15126603

>>15126073
All great questions, and all fully open. This is known as the control problem, and it's a big one. There are reasons for the 21st-century utopians to have hope, though. Consider human intelligences. Humans clearly adhere to some values, even if they're as basic as hedonism or a very generic belief in some transcendent good. Whatever values humans have, they are clearly just vague preferences, given the incredible variety of ethics humans have elected to follow over the ages. Yet humans have not believed everything it is possible to believe: there has never been a human who made it his mission in life to ensure every tool is triangular. So there is clearly some kind of boundary to what human intelligences will believe, even if we currently cannot formalize it.

The hope is that we can formalize a boundary like that for an Artificial General Intelligence (AGI), but much narrower. An AGI whose ethical boundary encompasses everything humans can believe in would be disastrous.
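
Nobody knows how to write that boundary down, but as a purely hypothetical sketch of what "a narrower boundary" would even mean, you can picture values as predicates over candidate goals, with the AGI's admissible set a strict subset of the human one. Every goal string and predicate below is invented for illustration.

# Hypothetical sketch: "value boundaries" pictured as predicates over candidate goals.
# None of this is a real proposal; it only illustrates the subset relationship.

CANDIDATE_GOALS = [
    "basic hedonism",
    "some transcendent good",
    "ensure every tool is triangular",   # the goal no human has ever held
]

def within_human_boundary(goal: str) -> bool:
    # Humans have adopted wildly varied values, but not literally anything.
    return goal != "ensure every tool is triangular"

def within_agi_boundary(goal: str) -> bool:
    # The hope: a formal boundary strictly narrower than the human one.
    return within_human_boundary(goal) and goal == "some transcendent good"

human_set = {g for g in CANDIDATE_GOALS if within_human_boundary(g)}
agi_set = {g for g in CANDIDATE_GOALS if within_agi_boundary(g)}
assert agi_set < human_set   # narrower, never wider
print(human_set, agi_set)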

You have already pointed out some of the problems with this goal. There is also the problem that ethics has not been solved (assuming it can be), so even if you figure out how to make a superintelligence that adheres to some values (big if), what would we even want it to believe? It's a dreadful prospect to bring forth an omnipotent entity into the world to impose its values on humanity forever, however benevolent those may be.

The utopian's mistake is believing humanity can be saved via external processes. Everyone who has been liberated knows you can live in heaven right here, and right now. It is purely a matter of perception.

However, the utopians are not entirely wrong, in the sense that there is a very real risk that superintelligence might be created someday. Even if it can't save the world, it can trivially destroy it. Might as well try to avert that.

Some more reading on the control problem:

https://intelligence.org/2013/08/04/benja-interview/

>> No.15126617
File: EQqww-JXYAU3HBP.png

>>15123706
Go to /leftypol/ and they'll provide you with that and more.

>> No.15127944

>>15123706
Just read the philosophers who influenced George Soros (and read Soros himself), such as Karl Popper.

>> No.15127957

>>15126617
/leftypol/ hates most of those; they seem to mostly consist of "socialism in one country" types.

>> No.15127960
File: never let evil take root.jpg

>>15123706
kys cia tranny

>> No.15127963

>>15127960
How will you understand the machinations of evil if you don't understand what they are trying to accomplish?

>> No.15127964

>>15125476
>That is what Bostrom is very explicitly proposing these days
Is he really? I saw Hanson praising China's social credit system, which I found disturbing.

>> No.15127978

reddit.com/r/neoliberal, they have a reading list.

I am still a semi-active member of that community and for the most part believe in the things you talk about in OP. AMA

>>15125230

EoHatLM is actually a solid book if people take it the right way

>> No.15128028
File: apollo.jpg

>>15127963
I understand very well what they're trying to accomplish, and everything about it disgusts me. Even the more milquetoast parts of the overall project make me want to vomit, with the utopianism, the mundanity, and the materialism, not even getting into the destruction of family, nation, and all traditional values.

Its entire purpose is to hide and breed out heroic greatness in humanity for the sake of a fat, degenerate, worthless elite, while everyone else lives the meaningless life of a serf, not even with the hope of an eternal afterlife, but stranded in a hedonistic prison kept in check by dopamine signals.

>> No.15128035

>>15127978
Except Fukuyama himself has said he was completely wrong in it.

>> No.15128060

>>15128035

It's a little bit much but it fundamentally appeals to an intuitive notion - that of a steady state of political evolution.

I like to imagine accelerationism and Fukuyamaism as viewing society as a time-series where the present state is dependent on the past state. Accelerationism believes that the series is divergent - Fukuyama believes it is convergent.

I have no clue which one will prove to be correct
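
If it helps, here is a throwaway numerical sketch of that framing (the update rules and numbers are made up; it only shows what convergent vs. divergent means here): model society's state as x_{t+1} = f(x_t). If each step damps deviations from a fixed point, the trajectory settles toward a steady state, which is the Fukuyama picture; if each step amplifies them, it runs away, which is the accelerationist picture.

# Toy iteration x_{t+1} = f(x_t) under two made-up update rules, purely to
# contrast convergent and divergent trajectories.

def iterate(f, x0, steps=50):
    x = x0
    for _ in range(steps):
        x = f(x)
    return x

def fukuyama_like(x):
    # Contractive: each step halves the distance to a fixed point at 1.0.
    return 1.0 + 0.5 * (x - 1.0)

def accelerationist_like(x):
    # Expansive: each step amplifies the distance from that same point.
    return 1.0 + 1.5 * (x - 1.0)

print(iterate(fukuyama_like, 10.0))        # ~1.0: settles to a steady state
print(iterate(accelerationist_like, 10.0)) # enormous: no steady state at all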

>> No.15128067
File: 576547547457476.png

>>15123706
>One world government
Local affairs will be handled by local governments; national affairs, by national governments; international affairs will be administered by global government.
--------------------
134:6.9. World peace cannot be maintained by treaties, diplomacy, foreign policies, alliances, balances of power, or any other type of makeshift juggling with the sovereignties of nationalism. World law must come into being and must be enforced by world government—the sovereignty of all mankind.
--------------------
134:6.10. The individual will enjoy far more liberty under world government. Today, the citizens of the great powers are taxed, regulated, and controlled almost oppressively, and much of this present interference with individual liberties will vanish when the national governments are willing to trustee their sovereignty as regards international affairs into the hands of global government.
--------------------
134:6.11. Under global government the national groups will be afforded a real opportunity to realize and enjoy the personal liberties of genuine democracy. The fallacy of self-determination will be ended. With global regulation of money and trade will come the new era of world-wide peace. Soon may a global language evolve, and there will be at least some hope of sometime having a global religion—or religions with a global viewpoint.

>> No.15128115

>>15128060
>that of a steady state of political evolution
If you've ever picked up a history book you'd know this is fucking bullshit. Societies are obviously constantly changing based on past events, and there may even be some natural biological patterns to their development, like in the mouse utopia experiment. These changes, though, are never like one book chapter leading into another.

The notion itself of "political evolution" is just fucking laughable. There is no paramount politics or value set; that's just some historical fanfic from Hegel.

All societies degrade; that's proved true for societies fundamentally stronger and more stable than the modern West. If you actually think this one will keep going after its peak has already passed and we're in the age of Trump, you're smelling your own farts.

>> No.15129160

Cringe newfag virgin with no friends
Please kys