LessWrong (Curated & Popular)
Total duration: 11 h 13 min
“Please, Don’t Roll Your Own Metaethics” by Wei Dai
04:11
“Paranoia rules everything around me” by habryka
22:32
“Human Values ≠ Goodness” by johnswentworth
11:31
“Condensation” by abramdemski
30:29
“Mourning a life without AI” by Nikola Jurkovic
11:17
“Unexpected Things that are People” by Ben Goldhaber
08:13
“Sonnet 4.5’s eval gaming seriously undermines alignment evals, and this seems caused by training on alignment evals” by Alexa Pan, ryan_greenblatt
35:57
“Publishing academic papers on transformative AI is a nightmare” by Jakub Growiec
07:23
“The Unreasonable Effectiveness of Fiction” by Raelifin
15:03
“Legible vs. Illegible AI Safety Problems” by Wei Dai
03:29
“Lack of Social Grace is a Lack of Skill” by Screwtape
11:08
[Linkpost] “I ate bear fat with honey and salt flakes, to prove a point” by aggliu
01:07
“What’s up with Anthropic predicting AGI by early 2027?” by ryan_greenblatt
39:25
[Linkpost] “Emergent Introspective Awareness in Large Language Models” by Drake Thomas
03:00
[Linkpost] “You’re always stressed, your mind is always busy, you never have enough time” by mingyuan
04:17
“LLM-generated text is not testimony” by TsviBT
19:40
“Why I Transitioned: A Case Study” by Fiora Sunshine
17:21
“The Memetics of AI Successionism” by Jan_Kulveit
21:27
“How Well Does RL Scale?” by Toby_Ord
16:11
“An Opinionated Guide to Privacy Despite Authoritarianism” by TurnTrout
07:59
“Cancer has a surprising amount of detail” by Abhishaike Mahajan
23:54
“AIs should also refuse to work on capabilities research” by Davidmanheim
06:34
“On Fleshling Safety: A Debate by Klurl and Trapaucius.” by Eliezer Yudkowsky
142:21
“EU explained in 10 minutes” by Martin Sustrik
16:47
“Cheap Labour Everywhere” by Morpheus
03:38
[Linkpost] “Consider donating to AI safety champion Scott Wiener” by Eric Neyman
02:35
“Which side of the AI safety community are you in?” by Max Tegmark
04:18
“Doomers were right” by Algon
04:35
“Do One New Thing A Day To Solve Your Problems” by Algon
03:21
“Humanity Learned Almost Nothing From COVID-19” by niplav
08:45
“Consider donating to Alex Bores, author of the RAISE Act” by Eric Neyman
50:28
“Meditation is dangerous” by Algon
07:26
“That Mad Olympiad” by Tomás B.
26:41
“The ‘Length’ of ‘Horizons’” by Adam Scholl
14:15
“Don’t Mock Yourself” by Algon
04:10
“If Anyone Builds It Everyone Dies, a semi-outsider review” by dvd
26:01
“The Most Common Bad Argument In These Parts” by J Bostock
08:11
“Towards a Typology of Strange LLM Chains-of-Thought” by 1a3orn
17:34
“I take antidepressants. You’re welcome” by Elizabeth
06:09
“Inoculation prompting: Instructing models to misbehave at train-time can improve run-time behavior” by Sam Marks
04:06