the-real-numbers-deactivated202:
I appreciate the desire to do good better. With respect to EA priorities, though, “AI Alignment” is pretty ludicrous to me. I really don’t understand why an implausibly superintelligent AI developing an uncontrollable capacity for harm is a realistic threat, and nobody can give me an answer that isn’t vague science fiction or a sanctimonious scolding. The subject of alignment could have changed, but last I checked it seemed pretty concerned with GAI and other far-out scenarios. I personally believe a lot of the GAI stuff is science-fiction anxiety driving Pascal’s wager.
And there seem to be many real examples of far stupider machine learning algorithms being carelessly placed in a position to do harm. If there’s any ML area of concern, it’s AI Fairness, something I think EAs in general tend to enjoy dunking on because it’s not as grandiose as their pet projects, or because “it’s full of wokescolds,” or whatever. It’s a terrible look. It makes me wonder if they’re actually equipped to fairly judge long-term threats.
the GAI they are afraid of is a psychological extrapolation of their own striving - they are relentlessly self-improving bourgeois who are being chased by a hyena with multiple very parental-looking heads asking them why they aren’t making even more [loud superimposed string of phonemes].
i think someone here posted about how becoming initiated into amphetamine usage was an important transformation that let them identify an eigenvector in their hellvectorspace that remained pointed to [loud phoneme of desire], but there are other similar transformations too, like “getting a better programming job”, “networking with the bay area mafia”, “writing some software that automates and optimises a process”.
attempting to put a combination of all of these transformations together into a single entity, the perfect entity that will make professor mother therapist rabbi general hyena happy with its ability to obtain [loud phoneme catastrophe], ends up looking like “a general artificial intelligence”, whatever that means.
*nodding wisely* oh nobody will like this
the prose poets have logged the fuck on
Oh I mostly understood it functionally, as a legitimating myth for spinning up a tech-autist equivalent of the “nonprofit industrial complex” to address elite overproduction and the unevenly distributed wealth of a startup economy.
Yeah that’s pretty much the vibe
skluug said:
i’m sorry you’ve gotten bad answers about this. i think the most likely source of the disconnect is basically a disagreement about how appropriate it is to deal with threats that sound like vague science fiction. people worried about AI alignment are mostly worried that the time between “GAI takeover doesn’t sound like science fiction” and “GAI takeover occurs” may not be very long, such that there won’t be much opportunity to do useful work in that period.
skluug said:
you may be interested in some of the writing from Paul Christiano; i think he has a much more reasonable model of what a world with increasingly powerful AI looks like, and what challenges this poses, avoiding the goofier “seed AI” assumptions from Yudkowsky. see https://sideways-view.com/2018/02/24/takeoff-speeds/ and https://ai-alignment.com/an-unaligned-benchmark-b49ad992940b
skluug said:
for broader context i think The Most Important Century series is really good. https://www.cold-takes.com/most-important-century/ don’t mean to spam you with links, sorry; only read whatever seems interesting. i’m happy to answer any specific questions also (i’m just a hobbyist about this stuff but i do think it’s important).