What if you were able to sit down with some of the greatest leaders in the world? What would you ask? What would they say? Welcome to the “Linch with a Leader” Podcast with Mike Linch, where you are invited to join us in learning the spiritual principles behind big success.
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “LessWrong (30+ karma)” feed.
Hosted by Chris Rivers, the CultureBus Audio Sessions features mini audio sessions on what happens when ministry leaders conquer the culture development challenge. Each episode features a big idea from Chris with guest interviews from ministry leaders that will inspire you to create a vibrant ministry culture. http://www.culturebus.cc

Takeaways: Understanding Your Leadership Anxiety - Steve Cuss (Ep. 234)
12:56
In this episode, Mike Linch and Casey Linch discuss the insights shared by Steve Cuss on leadership anxiety, the impact of unmet expectations in relationships, and the importance of processing trauma for personal growth. They explore how personal challenges can bleed into professional life and emphasize the significance of recognizing God's presenc…

“My ‘infohazards small working group’ Signal Chat may have encountered minor leaks” by Linch
10:33
Remember: There is no such thing as a pink elephant. Recently, I was made aware that my “infohazards small working group” Signal chat, an informal coordination venue where we have frank discussions about infohazards and why it would be bad if specific hazards were leaked to the press or public, was accidentally shared with a deceitful and discredite…

“Short Timelines don’t Devalue Long Horizon Research” by Vladimir_Nesov
2:10
Short AI takeoff timelines seem to leave no time for some lines of alignment research to become impactful. But any research rebalances the mix of currently legible research directions that could be handed off to AI-assisted alignment researchers or early autonomous AI researchers whenever they show up. So even hopelessly incomplete research agendas…

“Alignment Faking Revisited: Improved Classifiers and Open Source Extensions” by John Hughes, abhayesian, Akbir Khan, Fabien Roger
41:04
In this post, we present a replication and extension of an alignment faking model organism: Replication: We replicate the alignment faking (AF) paper and release our code. Classifier Improvements: We significantly improve the precision and recall of the AF classifier. We release a dataset of ~100 human-labelled examples of AF for which our classifi…
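Since this episode's summary centers on classifier precision and recall, here is a minimal refresher computation on toy labels; the function and data are illustrative only, not the authors' released code or dataset:

```python
def precision_recall(y_true, y_pred):
    """Precision and recall for binary labels (1 = alignment faking detected)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Toy labels: three real positives; this classifier catches two of them and
# raises one false alarm, so precision = recall = 2/3 here.
p, r = precision_recall([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

Improving both numbers at once, as the post claims, means fewer false alarms and fewer missed cases simultaneously.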

“METR: Measuring AI Ability to Complete Long Tasks” by Zach Stein-Perlman
11:09
Summary: We propose measuring AI performance in terms of the length of tasks AI agents can complete. We show that this metric has been consistently exponentially increasing over the past 6 years, with a doubling time of around 7 months. Extrapolating this trend predicts that, in under five years, we will see AI agents that can independently complet…
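The doubling-time extrapolation in that summary is easy to sanity-check; this sketch only reuses the quoted figures (a 7-month doubling time over a 5-year horizon) and is not METR's actual methodology:

```python
def growth_factor(months: float, doubling_time_months: float = 7.0) -> float:
    """Multiplicative growth in completable task length after `months`."""
    return 2.0 ** (months / doubling_time_months)

# Five years is 60/7, roughly 8.6 doublings, i.e. task lengths around 380x
# longer than today's if the exponential trend holds.
factor = growth_factor(60)
```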

“Why Have Sentence Lengths Decreased?” by Arjun Panickssery
9:08
“In the loveliest town of all, where the houses were white and high and the elm trees were green and higher than the houses, where the front yards were wide and pleasant and the back yards were bushy and worth finding out about, where the streets sloped down to the stream and the stream flowed quietly under the bridge, where the lawns ended in orc…

“AI 2027: What Superintelligence Looks Like” by Daniel Kokotajlo, Thomas Larsen, elifland, Scott Alexander, Jonas V, romeo
54:30
In 2021 I wrote what became my most popular blog post: What 2026 Looks Like. I intended to keep writing predictions all the way to AGI and beyond, but chickened out and just published up till 2026. Well, it's finally time. I'm back, and this time I have a team with me: the AI Futures Project. We've written a concrete scenario of what we think the f…

“OpenAI #12: Battle of the Board Redux” by Zvi
18:01
Back when the OpenAI board attempted and failed to fire Sam Altman, we faced a highly hostile information environment. The battle was fought largely through control of the public narrative, and the above was my attempt to put together what happened. My conclusion, which I still believe, was that Sam Altman had engaged in a variety of unacceptable co…

“The Pando Problem: Rethinking AI Individuality” by Jan_Kulveit
27:39
Epistemic status: This post aims at an ambitious target: improving intuitive understanding directly. The model for why this is worth trying is that I believe we are more bottlenecked by people having good intuitions guiding their research than, for example, by the ability of people to code and run evals. Quite a few ideas in AI safety implicitly us…

“You will crash your car in front of my house within the next week” by Richard Korzekwa
1:52
I'm not writing this to alarm anyone, but it would be irresponsible not to report on something this important. On current trends, every car will be crashed in front of my house within the next week. Here's the data: Until today, only two cars had crashed in front of my house, several months apart, during the 15 months I have lived here. But a few h…

“Leverage, Exit Costs, and Anger: Re-examining Why We Explode at Home, Not at Work” by at_the_zoo
6:16
Let's cut through the comforting narratives and examine a common behavioral pattern with a sharper lens: the stark difference between how anger is managed in professional settings versus domestic ones. Many individuals can navigate challenging workplace interactions with remarkable restraint, only to unleash significant anger or frustration at home…

“PauseAI and E/Acc Should Switch Sides” by WillPetillo
3:31
In the debate over AI development, two movements stand as opposites: PauseAI calls for slowing down AI progress, and e/acc (effective accelerationism) calls for rapid advancement. But what if both sides are working against their own stated interests? What if the most rational strategy for each would be to adopt the other's tactics—if not their ulti…

“VDT: a solution to decision theory” by L Rudolf L
8:58
Introduction Decision theory is about how to behave rationally under conditions of uncertainty, especially if this uncertainty involves being acausally blackmailed and/or gaslit by alien superintelligent basilisks. Decision theory has found numerous practical applications, including proving the existence of God and generating endless LessWrong comm…

“LessWrong has been acquired by EA” by habryka
1:33
Dear LessWrong community, It is with a sense of... considerable cognitive dissonance that I announce a significant development regarding the future trajectory of LessWrong. After extensive internal deliberation, modeling of potential futures, projections of financial runways, and what I can only describe as a series of profoundly unexpected coordin…

“We’re not prepared for an AI market crash” by Remmelt
3:46
Our community is not prepared for an AI crash. We're good at tracking new capability developments, but not so much the company financials. Currently, both OpenAI and Anthropic are losing $5 billion+ a year, while under threat of losing users to cheap LLMs. A crash will weaken the labs. Funding-deprived and distracted, execs struggle to counter coor…

Steve Cuss on Unmasking Leadership Anxiety | Episode 234
47:01
In this engaging conversation, Mike Linch and Steve Cuss explore the intricacies of leadership, the challenges of managing anxiety, and the impact of expectations in relationships. Steve shares his journey from pastoring to leadership coaching, emphasizing the importance of naming feelings and experiences to foster personal growth. They delve into …
Epistemic status: Reasonably confident in the basic mechanism. Have you noticed that you keep encountering the same ideas over and over? You read another post, and someone helpfully points out it's just old Paul's idea again. Or Eliezer's idea. Not much progress here, move along. Or perhaps you've been on the other side: excitedly telling a friend …

“Tracing the Thoughts of a Large Language Model” by Adam Jermyn
22:18
[This is our blog post on the papers, which can be found at https://transformer-circuits.pub/2025/attribution-graphs/biology.html and https://transformer-circuits.pub/2025/attribution-graphs/methods.html.] Language models like Claude aren't programmed directly by humans—instead, they're trained on large amounts of data. During that training process…
In this episode, Mike Linch discusses his conversation with Julie Shaffner, focusing on her journey and the mission of Voices to Connect. They explore themes of perseverance, humility, and the importance of stewarding influence in leadership. Julie's organization plays a crucial role in amplifying the voices of leaders, helping them share their sto…

“Recent AI model progress feels mostly like bullshit” by lc
14:29
About nine months ago, I and three friends decided that AI had gotten good enough to monitor large codebases autonomously for security problems. We started a company around this, trying to leverage the latest AI models to create a tool that could replace at least a good chunk of the value of human pentesters. We have been working on this project si…
(Audio version here (read by the author), or search for "Joe Carlsmith Audio" on your podcast app.) This is the fourth essay in a series that I’m calling “How do we solve the alignment problem?”. I’m hoping that the individual essays can be read fairly well on their own, but see this introduction for a summary of the essays that have been released t…

“Policy for LLM Writing on LessWrong” by jimrandomh
4:17
LessWrong has been receiving an increasing number of posts and comments that look like they might be LLM-written or partially-LLM-written, so we're adopting a policy. This could be changed based on feedback. Humans Using AI as Writing or Research Assistants Prompting a language model to write an essay and copy-pasting the result will not typically …

“Will Jesus Christ return in an election year?” by Eric Neyman
7:48
Thanks to Jesse Richardson for discussion. Polymarket asks: will Jesus Christ return in 2025? In the three days since the market opened, traders have wagered over $100,000 on this question. The market traded as high as 5%, and is now stably trading at 3%. Right now, if you wanted to, you could place a bet that Jesus Christ will not return this year…
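The trade described in that summary is plain arithmetic; a back-of-the-envelope sketch assuming binary shares that pay $1 at resolution (the 3% price is the figure quoted above; fees and the time value of money, which the post goes on to discuss, are ignored):

```python
yes_price = 0.03                      # market-implied probability of "Yes"
no_price = 1.0 - yes_price            # a "No" share costs about $0.97
raw_return = (1.0 - no_price) / no_price
# Buying "No" at 97 cents yields roughly a 3.1% gross return if it pays out.
```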

“Good Research Takes are Not Sufficient for Good Strategic Takes” by Neel Nanda
6:58
TL;DR Having a good research track record is some evidence of good big-picture takes, but it's weak evidence. Strategic thinking is hard, and requires different skills. But people often conflate these skills, leading to excessive deference to researchers in the field, without evidence that that person is good at strategic thinking specifically. Int…
When my son was three, we enrolled him in a study of a vision condition that runs in my family. They wanted us to put an eyepatch on him for part of each day, with a little sensor object that went under the patch and detected body heat to record when we were doing it. They paid for his first pair of glasses and all the eye doctor visits to check up…

“On the Rationality of Deterring ASI” by Dan H
9:03
I’m releasing a new paper “Superintelligence Strategy” alongside Eric Schmidt (formerly Google) and Alexandr Wang (Scale AI). Below is the executive summary, followed by additional commentary highlighting portions of the paper which might be relevant to this collection of readers. Executive Summary Rapid advances in AI are poised to reshape nearly…