LessWrong Curated Podcast
Total length: 17 h 32 min
[HUMAN VOICE] "How could I have thought that faster?" by mesaoptimizer (03:02)
[HUMAN VOICE] "My PhD thesis: Algorithmic Bayesian Epistemology" by Eric Neyman (13:07)
[HUMAN VOICE] "Toward a Broader Conception of Adverse Selection" by Ricki Heicklen (21:49)
[HUMAN VOICE] "On green" by Joe Carlsmith (75:13)
LLMs for Alignment Research: a safety priority? (20:46)
[HUMAN VOICE] "Social status part 1/2: negotiations over object-level preferences" by Steven Byrnes (50:08)
[HUMAN VOICE] "Using axis lines for good or evil" by dynomight (12:17)
[HUMAN VOICE] "Scale Was All We Needed, At First" by Gabriel Mukobi (15:04)
[HUMAN VOICE] "Acting Wholesomely" by OwenCB (27:26)
The Story of “I Have Been A Good Bing” (22:39)
The Best Tacit Knowledge Videos on Every Subject (14:44)
[HUMAN VOICE] "My Clients, The Liars" by ymeskhout (13:59)
[HUMAN VOICE] "Deep atheism and AI risk" by Joe Carlsmith (46:59)
[HUMAN VOICE] "CFAR Takeaways: Andrew Critch" by Raemon (09:10)
[HUMAN VOICE] "Speaking to Congressional staffers about AI risk" by Akash, hath (24:14)
Many arguments for AI x-risk are wrong (20:03)
Tips for Empirical Alignment Research (39:53)
Timaeus’s First Four Months (11:55)
Contra Ngo et al. “Every ‘Every Bay Area House Party’ Bay Area House Party” (07:43)
[HUMAN VOICE] "And All the Shoggoths Merely Players" by Zack_M_Davis (21:40)
[HUMAN VOICE] "Updatelessness doesn't solve most problems" by Martín Soto (25:15)
Every “Every Bay Area House Party” Bay Area House Party (07:28)
2023 Survey Results (112:47)
Raising children on the eve of AI (08:16)
“No-one in my org puts money in their pension” (15:01)
Masterpiece (07:46)
CFAR Takeaways: Andrew Critch (09:48)
[HUMAN VOICE] "Believing In" by Anna Salamon (25:17)
[HUMAN VOICE] "Attitudes about Applied Rationality" by Camille Berger (07:35)
Scale Was All We Needed, At First (15:50)
Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy (06:41)
[HUMAN VOICE] "A Shutdown Problem Proposal" by johnswentworth, David Lorell (12:20)
Brute Force Manufactured Consensus is Hiding the Crime of the Century (09:12)
[HUMAN VOICE] "Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI" by Jeremy Gillen, peterbarnett (101:22)
Leading The Parade (16:41)
[HUMAN VOICE] "The case for ensuring that powerful AIs are controlled" by ryan_greenblatt, Buck (64:07)
Processor clock speeds are not how fast AIs think (04:54)
Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI (116:55)
Making every researcher seek grants is a broken model (06:56)
The case for training frontier AIs on Sumerian-only corpus (06:42)