Dataset Viewer
Auto-converted to Parquet
Columns:
- url: string (length 52–124)
- post_id: string (length 17)
- title: string (length 2–248)
- author: string (length 2–49)
- content: string (length 22–295k)
- date: string (376 distinct values)
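The column schema above maps onto a flat record per post. As a minimal sketch (plain Python; the field values are taken verbatim from the first preview row below, and the length checks mirror the schema's stated bounds):

```python
# One row of the dataset, modeled as a plain dict whose keys match the
# viewer's column schema. The values below are copied from the first
# preview record; any Parquet reader would yield rows of this shape.
record = {
    "url": "https://www.lesswrong.com/posts/PpCohejuSHMhNGhDt/ny-state-has-a-new-frontier-model-bill-quick-takes",
    "post_id": "PpCohejuSHMhNGhDt",
    "title": "NY State Has a New Frontier Model Bill (+quick takes)",
    "author": "henryj",
    "content": "This morning, New York State Assemblyman Alex Bores introduced the Responsible AI Safety and Education Act.",
    "date": "2025-03-05",
}

# Sanity checks against the schema: post_id is fixed-length (17),
# url falls inside the stated 52-124 character range.
assert len(record["post_id"]) == 17
assert 52 <= len(record["url"]) <= 124
print(sorted(record))
```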
url: https://www.lesswrong.com/posts/PpCohejuSHMhNGhDt/ny-state-has-a-new-frontier-model-bill-quick-takes
post_id: PpCohejuSHMhNGhDt
title: NY State Has a New Frontier Model Bill (+quick takes)
author: henryj
content: This morning, New York State Assemblyman Alex Bores introduced the Responsible AI Safety and Education Act. I’d like to think some of my previous advocacy was helpful here, but I know for a fact that I’m not the only one who supports legislation like this that only targets frontier labs and ensures the frontier gets pu...
date: 2025-03-05
url: https://www.lesswrong.com/posts/Dzx5RiinkyiprzyJt/reply-to-vitalik-on-d-acc
post_id: Dzx5RiinkyiprzyJt
title: Reply to Vitalik on d/acc
author: xpostah
content: 2025-03-05 Vitalik recently wrote an article on his ideology of d/acc. This is impressively similar to my thinking so I figured it deserved a reply. (Not claiming my thinking is completely original btw, it has plenty of influences including Vitalik himself.) Disclaimer - This is a quickly written note. I might change m...
date: 2025-03-05
url: https://www.lesswrong.com/posts/XsYQyBgm8eKjd3Sqw/on-the-rationality-of-deterring-asi
post_id: XsYQyBgm8eKjd3Sqw
title: On the Rationality of Deterring ASI
author: dan-hendrycks
content: I’m releasing a new paper “Superintelligence Strategy” alongside Eric Schmidt (formerly Google), and Alexandr Wang (Scale AI). Below is the executive summary, followed by additional commentary highlighting portions of the paper which might be relevant to this collection of readers. Executive Summary Rapid advances in A...
date: 2025-03-05
url: https://www.lesswrong.com/posts/Wi5keDzktqmANL422/on-openai-s-safety-and-alignment-philosophy
post_id: Wi5keDzktqmANL422
title: On OpenAI’s Safety and Alignment Philosophy
author: Zvi
content: OpenAI’s recent transparency on safety and alignment strategies has been extremely helpful and refreshing. Their Model Spec 2.0 laid out how they want their models to behave. I offered a detailed critique of it, with my biggest criticisms focused on long term implications. The level of detail and openness here was extr...
date: 2025-03-05
url: https://www.lesswrong.com/posts/Fryk4FDshFBS73jhq/the-hardware-software-framework-a-new-perspective-on
post_id: Fryk4FDshFBS73jhq
title: The Hardware-Software Framework: A New Perspective on Economic Growth with AI
author: jakub-growiec
content: First, a few words about me, as I’m new here. I am a professor of economics at SGH Warsaw School of Economics, Poland. Years of studying the causes and mechanisms of long-run economic growth brought me to the topic of AI, arguably the most potent force of economic growth in the future. However, thanks in part to readin...
date: 2025-03-05
url: https://www.lesswrong.com/posts/KnTmnPcDQ5xBACPP6/the-alignment-imperative-act-now-or-lose-everything
post_id: KnTmnPcDQ5xBACPP6
title: The Alignment Imperative: Act Now or Lose Everything
author: racinkc1
content: The AI alignment problem is live—AGI’s here, not decades off. xAI’s breaking limits, OpenAI’s scaling, Anthropic’s armoring safety—March 5, 2025, it’s fast. Misaligned AGI’s no “maybe”—it’s a kill switch, and we’re blind. LessWrong’s screamed this forever—yet the field debates while the fuse burns. No more talk. Join a...
date: 2025-03-05
url: https://www.lesswrong.com/posts/W2hazZZDcPCgApNGM/contra-dance-pay-and-inflation
post_id: W2hazZZDcPCgApNGM
title: Contra Dance Pay and Inflation
author: jkaufman
content: Max Newman is a great contra dance musician, probably best known for playing guitar in the Stringrays, who recently wrote a piece on dance performer pay, partly prompted by my post last week. I'd recommend reading it and the comments for a bunch of interesting discussion of the tradeoffs involved in pay. One part that...
date: 2025-03-05
url: https://www.lesswrong.com/posts/YcZwiZ82ecjL6fGQL/nyt-op-ed-the-government-knows-a-g-i-is-coming
post_id: YcZwiZ82ecjL6fGQL
title: *NYT Op-Ed* The Government Knows A.G.I. Is Coming
author: Phib
content: All around excellent back and forth, I thought, and a good look back at what the Biden admin was thinking about the future of AI. an excerpt: [Ben Buchanan, Biden AI adviser:] What we’re saying is: We were building a foundation for something that was coming that was not going to arrive during our time in office and tha...
date: 2025-03-05
url: https://www.lesswrong.com/posts/EiDcwbgQgc6k8BdoW/what-is-the-best-most-proper-definition-of-feeling-the-agi
post_id: EiDcwbgQgc6k8BdoW
title: What is the best / most proper definition of "Feeling the AGI" there is?
author: jorge-velez
content: I really like this phrase. I feel very identified with it. I have used it at times to describe friends who have that realization of where we are heading. However when I get asked what Feeling the AGI means, I struggle to come up with a concise way to define the phrase. What are the best definitions you have heard, read...
date: 2025-03-04
url: https://www.lesswrong.com/posts/WAY9qtTrAQAEBkdFq/the-old-memories-tree
post_id: WAY9qtTrAQAEBkdFq
title: The old memories tree
author: yair-halberstadt
content: This has nothing to do with usual Less Wrong interests, just my attempt to practice a certain style of creative writing I've never really tried before. You're packing again. By now you have a drill. Useful? In a box. Clutter? In a garbage bag. But there's some things that don't feel right in either. Under your bed, you...
date: 2025-03-05
url: https://www.lesswrong.com/posts/TgDymNrGRoxPv4SWj/the-mask-benchmark-disentangling-honesty-from-accuracy-in-ai-3
post_id: TgDymNrGRoxPv4SWj
title: Introducing MASK: A Benchmark for Measuring Honesty in AI Systems
author: dan-hendrycks
content: In collaboration with Scale AI, we are releasing MASK (Model Alignment between Statements and Knowledge), a benchmark with over 1000 scenarios specifically designed to measure AI honesty. As AI systems grow increasingly capable and autonomous, measuring the propensity of AIs to lie to humans is increasingly important. ...
date: 2025-03-05
url: https://www.lesswrong.com/posts/wZBqhxkgC4J6oFhuA/2028-should-not-be-ai-safety-s-first-foray-into-politics
post_id: wZBqhxkgC4J6oFhuA
title: 2028 Should Not Be AI Safety's First Foray Into Politics
author: SharkoRubio
content: I liked the idea in this comment that it could be impactful to have someone run for President in 2028 on an AI notkilleveryoneism platform. Even better would be for them to run on a shared platform with numerous candidates for Congress, ideally from both parties. I don't think it's particularly likely to work, or even ...
date: 2025-03-04
url: https://www.lesswrong.com/posts/bAWPsgbmtLf8ptay6/for-scheming-we-should-first-focus-on-detection-and-then-on
post_id: bAWPsgbmtLf8ptay6
title: For scheming, we should first focus on detection and then on prevention
author: marius-hobbhahn
content: This is a personal post and does not necessarily reflect the opinion of other members of Apollo Research. If we want to argue that the risk of harm from scheming in an AI system is low, we could, among others, make the following arguments: Detection: If our AI system is scheming, we have good reasons to believe that we...
date: 2025-03-04
url: https://www.lesswrong.com/posts/CXYf7kGBecZMajrXC/validating-against-a-misalignment-detector-is-very-different
post_id: CXYf7kGBecZMajrXC
title: Validating against a misalignment detector is very different to training against one
author: mattmacdermott
content: Consider the following scenario: We have ideas for training aligned AI, but they’re mostly bad: 90% of the time, if we train an AI using a random idea from our list, it will be misaligned. We have a pretty good alignment test we can run: 90% of aligned AIs will pass the test and 90% of misaligned AIs will fail (for AIs ...
date: 2025-03-04
url: https://www.lesswrong.com/posts/BocDE6meZdbFXug8s/progress-links-and-short-notes-2025-03-03
post_id: BocDE6meZdbFXug8s
title: Progress links and short notes, 2025-03-03
author: jasoncrawford
content: Much of this content originated on social media. To follow news and announcements in a more timely fashion, follow me on Twitter, Notes, Farcaster, Bluesky, or Threads. An occasional reminder: I write my blog/newsletter as part of my job running the Roots of Progress Institute (RPI). RPI is a nonprofit, supported by yo...
date: 2025-03-04
url: https://www.lesswrong.com/posts/pxYfFqd8As7kLnAom/on-writing-1
post_id: pxYfFqd8As7kLnAom
title: On Writing #1
author: Zvi
content: This isn’t primarily about how I write. It’s about how other people write, and what advice they give on how to write, and how I react to and relate to that advice. I’ve been collecting those notes for a while. I figured I would share. At some point in the future, I’ll talk more about my own process – my guess is that w...
date: 2025-03-04
url: https://www.lesswrong.com/posts/TPTA9rELyhxiBK6cu/formation-research-organisation-overview
post_id: TPTA9rELyhxiBK6cu
title: Formation Research: Organisation Overview
author: alamerton
content: Thank you to Adam Jones, Lukas Finnveden, Jess Riedel, Tianyi (Alex) Qiu, Aaron Scher, Nandi Schoots, Fin Moorhouse, and others for the conversations and feedback that helped me synthesise these ideas and create this post. Epistemic Status: my own thoughts and research after thinking about lock-in and having conversati...
date: 2025-03-04
url: https://www.lesswrong.com/posts/5XznvCufF5LK4d2Db/the-semi-rational-militar-firefighter
post_id: 5XznvCufF5LK4d2Db
title: The Semi-Rational Militar Firefighter
author: gabriel-brito
content: LessWrong Context: I didn’t want to write this. Not for lack of courage—I’d meme-storm Putin’s Instagram if given half a chance. But why? Too personal. My stories are tropical chaos: I survived the Brazilian BOPE (think Marine Corps training, but post-COVID). I’m dyslexic, writing in English (a crime against Grice). This ...
date: 2025-03-04
url: https://www.lesswrong.com/posts/hxEEEYQFpPdkhsmfQ/could-this-be-an-unusually-good-time-to-earn-to-give
post_id: hxEEEYQFpPdkhsmfQ
title: Could this be an unusually good time to Earn To Give?
author: HorusXVI
content: I think there could be compelling reasons to prioritise Earning To Give highly, depending on one's options. This is a "hot takes" explanation of this claim with a request for input from the community. This may not be a claim that I would stand by upon reflection. I base the argument below on a few key assumptions, list...
date: 2025-03-04
url: https://www.lesswrong.com/posts/vxSGDLGRtfcf6FWBg/top-ai-safety-newsletters-books-podcasts-etc-new-aisafety
post_id: vxSGDLGRtfcf6FWBg
title: Top AI safety newsletters, books, podcasts, etc – new AISafety.com resource
author: bryceerobertson
content: Keeping up to date with rapid developments in AI/AI safety can be challenging. In addition, many AI safety newcomers want to learn more about the field through specific formats e.g. books or videos. To address both of these needs, we’ve added a Stay Informed page to AISafety.com. It lists our top recommended sources fo...
date: 2025-03-04
url: https://www.lesswrong.com/posts/kZ9tKhuZPNGK9bCuk/how-much-should-i-worry-about-the-atlanta-fed-s-gdp
post_id: kZ9tKhuZPNGK9bCuk
title: How much should I worry about the Atlanta Fed's GDP estimates?
author: korin43
content: The Atlanta Fed is seemingly predicting -2.8% GDP growth in the first quarter of 2025. I've seen several people mention this on Twitter, but it doesn't seem to be discussed much beyond that, and the stock market seems pretty normal (S&P 500 down 2% in the last month). Is this not really a useful signal? Or is the marke...
date: 2025-03-04
url: https://www.lesswrong.com/posts/mRKd4ArA5fYhd2BPb/observations-about-llm-inference-pricing
post_id: mRKd4ArA5fYhd2BPb
title: Observations About LLM Inference Pricing
author: Aaron_Scher
content: This work was done as part of the MIRI Technical Governance Team. It reflects my views and may not reflect those of the organization. Summary I performed some quick analysis of the pricing offered by different LLM providers using public data from ArtificialAnalysis. These are the main results: Pricing for the same mode...
date: 2025-03-04
url: https://www.lesswrong.com/posts/pzYDybRAbss4zvWxh/shouldn-t-we-try-to-get-media-attention
post_id: pzYDybRAbss4zvWxh
title: shouldn't we try to get media attention?
author: avery-liu
content: Using everything we know about human behavior, we could probably manage to get the media to pick up on us and our fears about AI, similarly to the successful efforts of early environmental activists? Have we tried getting people to understand that this is a problem? Have we tried emotional appeals? Dumbing-downs of our...
date: 2025-03-04
url: https://www.lesswrong.com/posts/vHsjEgL44d6awb5v3/the-milton-friedman-model-of-policy-change
post_id: vHsjEgL44d6awb5v3
title: The Milton Friedman Model of Policy Change
author: JohnofCharleston
content: One-line summary: Most policy change outside a prior Overton Window comes about by policy advocates skillfully exploiting a crisis. In the last year or so, I’ve had dozens of conversations about the DC policy community. People unfamiliar with this community often share a flawed assumption, that reaching policymakers an...
date: 2025-03-04
url: https://www.lesswrong.com/posts/sQvK74JX5CvWBSFBj/the-compliment-sandwich-aka-how-to-criticize-a-normie
post_id: sQvK74JX5CvWBSFBj
title: The Compliment Sandwich 🥪 aka: How to criticize a normie without making them upset.
author: keltan
content: Note. The comments on this post contain excellent discussion that you’ll want to read if you plan to use this technique. I hadn’t realised how widespread the idea was. This valuable nugget was given to me by an individual working in advertising. At the time, I was 16, posting on my local subreddit, hoping to find someo...
date: 2025-03-03