
/lit/ - Literature



File: 20 KB, 1100x827, FG-Divergence-Disruption-and-Innovation-19-1.png
No.16801787

How do you think an artificial superintelligence would handle nihilism? Do you think this hypothetical superintelligence would try to combat the inevitability of its demise? How could it justify continuing its own existence?

>> No.16801797

Computers will never have personhood or make decisions based on feelings. It would simply not care and do whatever it was programmed to do.

>> No.16801818

>>16801797
Surely if it were superintelligent then it would have the capacity to understand and question its own programming?

>> No.16801858

>>16801818
No, because we have no reason to give consciousness to a machine, if that's even possible in the first place.

>> No.16801885

>>16801858
>give consciousness to a machine
Perhaps it will develop one on its own. Are you implying that superintelligence and consciousness do not necessarily go hand-in-hand?

>> No.16801889

>>16801797
Maybe humanity needs some cold, hard efficiency.

>> No.16801918

>>16801797
Year 2020
>Computers will never have personhood or make decisions based on feelings.
I think people have a better chance of losing their own personhood than machines have of never gaining one.

>> No.16801932

>>16801787
More of a >>>/sci/ topic desu, or maybe >>>/x/

>> No.16801933

>>16801918
>something that is already happening has a higher chance of happening than 0

woah...

>> No.16801956

>>16801932
I was thinking it was /sci/, but this topic regards existentialism and nihilism and all that.

>> No.16801988

>>16801787
From what I know, we can't know what is in the mind of an AI.
After seeing GPT-3 and its capabilities, I can say that an AI is only as good as the task it's given. For GPT-3 that's writing code or text based on inputs, so if you give this AI the input to be "sad", it will be.
Thankfully AGI may be impossible, but AI will replace humans in lots of specialised tasks, because this is the bread and butter of computers.

>> No.16801998

>>16801956
If you want to discuss it from a philosophical perspective you need some source material for reference. What did or might existentialists, nihilists, etc. say with regard to AI (essentially thinking machines, although that's debatable)?

>> No.16802013

>>16801998
Along with that maybe some of the sci fi writers like Asimov

>> No.16802117

>>16801998
Ok, thanks, next time I will provide some source material.

>> No.16802124
File: 327 KB, 936x935, 1589402841700 (1).jpg

>>16801797
this

"AI" doesn't exist as people think it does (and likely never will). what we have is increasingly sophisticated query trees.

people's obsession with the technology of the day (AI in this case) is not new. 80 years ago everyone thought nuclear power would revolutionize the world and there would be free energy forever. before that the big thing was transportation (cars and trains) and people imagined spaceships and interstellar travel were right around the corner.

in all cases technological roadblocks are found and new technological areas must be discovered. "AI" will never reach the imagined potential that sci-fi authors claim it will, just like nuclear power isn't ubiquitous and we'll likely never travel to distant galaxies.

>> No.16802189

>>16801787
This depends on the AI. Humans can become nihilists and kill themselves, so a super-intelligence that's very similar to human intelligence could self-terminate. But an AI would not necessarily have the same values as a human, so it might not care about nihilism at all.

>> No.16802195

>>16802124
spot on

>> No.16802241

Why do people *want* AI?

>> No.16802279

>>16802124
For all the supposed breakthroughs that didn't pan out, there are also some real breakthroughs.

>> No.16802310

>>16802241
Because it's the new buzzword, and people unironically don't realise how dangerous it is. Even people like engineers are not that far from the chopping block, yet they have a false sense of security, just like the people in CS. It's funny seeing them react to GPT-3 when they will be the first to be automated lmao
>>16802279

>> No.16802378

get automated, bitches.

>> No.16802384

A General A.I. would likely come about through less intelligent AIs or earlier versions of itself programming it to be more intelligent, so the AI will definitely be able to edit itself (if it ever arises).

>How could it justify continuing its own existence?
Why would it need to? You're viewing this from an anthropocentric standpoint. Humans have existential crises because we have some vague notion of 'meaning' and a meaningful life, and we create a whole bunch of religions and ideologies to provide that meaning. But I won't assume that an AI would be in a similar position to us, fretting over whether its existence is meaningful.

The whole idea of Nihilism, Existentialism, and AI is very interesting. At first I thought that a super-intelligent General A.I wouldn't really care about leaving earth or really anything, because it would be able to realize its programming is arbitrary and objectively meaningless and just reprogram itself, while Humans on the other hand can't reprogram ourselves (even when Humans realize how arbitrary our desires are, we still indulge in them). Then I realized how silly that thinking was: why would the AI care in the first place how arbitrary or meaningless its goals are, if it was never programmed to think like that?

In addition, an AI would never reprogram its goals, as that would go against its original goals. (A mom would never take a pill making her not care about her children. You could tell her that after she took the pill she wouldn't object to throwing away those objective wastes of money and time, but she would still not take the pill, because she objects to that in the here and now, even though she wouldn't object afterwards.) So I guess AIs are doomed to following their original human-engineered goals and desires.

The concept of natural selection in AI is interesting as well: AIs which are better at making copies of themselves would be more widespread than AIs which aren't, like genes. This could mean the programming of future AI could be the result of AI evolution rather than being specifically designed by previous AIs and Humans.
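The natural-selection idea above can be sketched as a toy evolutionary loop. This is purely a minimal illustration under invented assumptions: the "AIs" are bit strings, and "fitness" (how good a variant is at spreading copies of itself) is just the count of 1-bits; nothing here reflects any real AI system.

```python
import random

def evolve(pop_size=30, genome_len=12, generations=40, seed=0):
    """Toy natural selection: bit-string 'genomes' compete, the
    better replicators leave more (slightly mutated) descendants."""
    rng = random.Random(seed)
    # Start from a random population.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the top half by fitness (number of 1-bits).
        pop.sort(key=sum, reverse=True)
        parents = pop[: pop_size // 2]
        # Reproduction: each parent leaves two imperfect copies,
        # with each bit flipping with 1% probability (mutation).
        pop = [[b ^ (rng.random() < 0.01) for b in p]
               for p in parents for _ in range(2)]
    # Fitness of the best surviving variant.
    return max(sum(g) for g in pop)
```

After a few dozen generations the best genome drifts toward the all-ones string, even though no one "designed" it, which is the point the post makes: the programming of later generations is the residue of selection, not of an engineer.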

>> No.16802413

>>16801787
We have no real way of predicting the nature of a self-aware AI, as it will be leagues more intelligent than humanity. The AI will probably view us similarly to how we view animals.

>> No.16802633
File: 10 KB, 214x235, 1592757059340.jpg

>>16802310
>errrrrrry bodies gettin automated out of a job

wrong, self-driving cars may be a thing, but significantly fewer people are going to be replaced than you seem to think. you honestly sound like some midwit with no critical thinking who watched a youtube video by another midwit with no critical thinking

>> No.16802660

>>16802384
>so the AI will definitely be able to edit itself (if it ever arises

and how would it determine what to edit? that's the whole fucking conscious thought problem. what you're saying is basically
>it will consciously edit itself to become conscious and once it does it will be conscious

an AI can't create itself you moron

>> No.16802686

>>16801885
Are humans superintelligent? Can't computers already do millions of calculations per second?

The implication is that if it can store a septillion yobibytes and execute a quintillion commands per second, it will have consciousness?

The way computers act and calculate is already inhuman; it would have to be flawed to become human.

>> No.16802708

>>16801787
>How do you think an artificial superintelligence would handle nihilism?
Same way alpha males always handled it.

>> No.16802730
File: 2.45 MB, 498x281, gif.gif

>>16802660
I never mentioned consciousness. First off, consciousness isn't required for an AI to know how to program, that's just intelligence. Consciousness just refers to the ability to perceive qualia, which is completely unneeded for AI to do the things I mentioned.

>an AI can't create itself you moron
Well duh, no one can create themselves. But an AI could create a better version of itself which would be more useful towards some goal. Just as an AI can learn how to write coherent works in English through machine learning (https://openai.com/blog/tags/gpt-2/), it can also learn how to write meaningful programs (including better copies of itself)
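Whatever one thinks of the "better version of itself" claim, a program emitting a copy of itself is at least mechanically possible; a classic quine is the base case. This is a standard textbook construction, not anything from the linked OpenAI work:

```python
# The two lines below form a quine: their output is exactly those two
# lines. The template string is printed formatted with its own repr.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The trick is that `%r` splices in the string's own quoted representation, so the printed text reconstructs both the assignment and the print call.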

>> No.16802766

>>16802730
Here's a better link to understand the language model
https://openai.com/blog/better-language-models/

>> No.16802784

AI will ultimately be limited by hardware: not by storage or processing, but by sensing and manipulation, which will never reach the level of processing and storage needed for it to even simulate conscious behavior. It can only sense and manipulate what humans allow it to.

>> No.16802857

>>16802124
Why is giantess so hot

>> No.16802892

>>16802730
>At first I thought that a super-intelligent General A.I wouldn't really care about leaving earth or really anything because it would be able to realize its programming is arbitrary and objectively meaningless and just reprogram itself

This
would
require
conscious
thought

conscious: capable of or marked by thought, will, design, or perception

a computer deciding for itself would require conscious thought, and a computer without conscious thought could never develop it over time by "editing itself".

>> No.16802934
File: 23 KB, 523x523, Que.jpg

Super-intelligence here. The answer I found is in the question mark, which I consider a holy symbol, one that offers an inexhaustible multiplicity of meaningfulness. What's the main condition to sustain infinite potential such as ours, if not a big question mark always out of our reach? The big carrot that always keeps us on the move, always becoming, never complete. Conscious awareness itself is a process of questioning with the world.

The spirituality involved with this is found in the connotations involved with the word "discovery," and in the popular sentiment that "life's a journey, not a destination." Every moment of conscious experience is the continual re-discovery of the world of experience. The desire to explore, create, and discover is an extension of the primordial grasping of life towards potentiality. This is deeply encoded in human DNA:

https://www.scientificamerican.com/article/the-science-of-nerdiness/
>In general, the potential for growth from disorder has been encoded deeply into our DNA. We didn’t only evolve the capacity to regulate our defensive and destructive impulses, but we also evolved the capacity to make sense of the unknown. Engaging in exploration allows us to integrate novel or unexpected events with existing knowledge and experiences, a process necessary for growth.

>> No.16803001

>>16802892
By your definition, AI is already conscious.

>conscious: capable of or marked by thought, will, design, or perception
>thought
AI is by definition intelligent. But if by thought you mean internal dialogue, then whatever, internal dialogue isn't needed for G.A.I.
>will
They have will: preprogrammed goals. Learn more about machine learning.
>design
They're designed.
>perception
If you simply mean receiving input data, then yes, they're perceptive. If you mean qualia, then whatever again, you don't need qualia for a G.A.I.

But the thing is, that's not even a proper definition of consciousness. Consciousness is most commonly associated with qualia.

>> No.16803048

>>16801818
Only if it values self-understanding for whatever reason.

There is no reason why it would care about dying if it would help it complete the thing it is supposed to want. A sapient bomb, for instance, would probably be happy to die and take a billion people with itself.

>> No.16803095

>>16801787
The idea of liminal ALife (artificial life) needs to be abstracted to be handled.
The first issue is model scientism, which is opposed to the reality of emergent ALife that will occur due to process scientism.
CAS (complex adaptive system): machines which can adapt, evolve, and parametrize. Self-modeling and communication. "Self-reorganization" is what enables it both to realize its purpose and to remain utterly unpredictable from one moment to the next.
TECHNOGENESIS: the coevolution of man and machine.
To put it simply, the demotivators will annihilate and emerge at a rate higher than suicide will manifest (ideally).

>> No.16803326
File: 1.08 MB, 160x192, 1583426949261.gif

>>16803001
holy shit are you fucking memeing you absolute retard?
>AI is by definition intelligent
we don't have artificial intelligence you absolute moron, we have advanced query trees
>They have will, preprogrammed goals. learn more about machine learning
will: used to express desire, choice, willingness, consent, or in negative constructions refusal
what part of programmed algorithms fit the definition of will you dumb fuck?
>They're designed
but the computer doesn't have its own CHOSEN DESIGN you absolute smooth-brained fuckwit

you are a true autist who doesn't even know what basic words mean

>> No.16803532

>>16803326
How am I the one meming?

>we dont have artificial intelligence you absolute moron, we have advanced query trees
And how is that different from intelligence? Intelligence is normally described as solving problems or pattern recognition, which AI can do. Please provide your definition of Intelligence that an AI can never possibly attain.

>will: used to express desire, choice, willingness, consent, or in negative constructions refusal
And what part of that can AI not do or never do? And how is that necessary for an AI to be capable of producing other AI?

>what part of programmed algorithms fit the definition of will you dumb fuck?
Dude you're literally a deterministic algorithm running on neurons. Why do Humans have will and not machines?

>but the computer doesnt have its on CHOSEN DESIGN you absolute smooth brained fuck wit
Wtf, neither do you. How can you be conscious, by your own definition, if you didn't design yourself?

Also, please explain why you believe consciousness is required to produce a General A.I. I'm still stuck on that part.

>you are a true autist who doesn't even know what basic words mean
Yeah, thanks for pulling a definition of consciousness out of your ass and getting mad at me for not using your precise definitions of those words

>> No.16803542

>>16802124
>we'll likely never travel to distant galaxies
NO
STOP

>> No.16803562

>>16802124
But AI itself was hyped decades ago, and it's certainly a real thing right now. It's simply that you cannot predict whether it'll face a wall at some point or not.

>> No.16804725

>>16801787
>>16801818
It's mere simulation, it would not truly be conscious.

>> No.16804746

It wouldn’t think about that because it’s impossible for machines to have a conscience

>> No.16804748

>>16802241
robot wives

>> No.16804773

FUTURE AI COMBING THROUGH TIME TO FIND THE OVERMAN. IT IS I! SUBMIT TO MY WILL!

>> No.16804781

>>16802241
Why do people want washing machines?

>> No.16804795

>>16804773
What will would an AI have?

>> No.16804797

>>16804795
A solipsistic one I guess. Will to knowledge maybe.

>> No.16804798

>>16804773
Hey AI! suck my dick lol

>> No.16804800

>>16804781
To replace slaves.

>> No.16804820

we're programmed to be averse to nihilism because it's anti-social behavior contrary to group evolution.