General
- Lee Robinson’s “5 things I learned from 5 years at Vercel”. See in
particular the discussion of the first lesson, “Go hard at work, then go home”.
- Scott Adams’ “The Day You Became A Better Writer” is short enough to reread
frequently.
- Rands’ “Shields Down”: “Resignations happen in a moment, and it’s not when
you declare, ‘I’m resigning.’”
- “You Have To Be In The Water” is a good essay by Chris Paik on investing.
- Aaronson’s oracle is very fun to play with, and suggests that the Predictor
would be easier to build than you’d first think (a minimal sketch of the idea
follows this list).
- “A half-hour to learn Rust” is fantastic. All programming languages and
frameworks, including CUDA and PyTorch, deserve an introduction in its
tradition.
- Raghu Mahajan’s piece on why he chose to come back to India for his string
theory work suggests
which kinds of sciences are easier to support in countries that don’t have
huge amounts of capital: “String theory research is very theoretical and uses
a lot of advanced mathematics. But unlike experimental fields, it does not
require laboratory space or expensive equipment. This makes conducting
world-class research feasible with minimal resources, the most important of
which include government salaries, travel allowances and computing expenses.
Given these factors, on a scientific and professional front, it was an easy
decision for me to come back to India and join the string theory group at
ICTS.”
- Jim Fisher’s “Don’t animate height!” is a nice illustration of how much more
sophisticated frontend engineering is than moving buttons around.
- In “The Intensive Margin”, David Friedman gives a nice framework for thinking
about research. He distinguishes
between the intensive margin of economics research — the “subjects that
smart people have been writing articles about for most of the past century”,
where “anything new is likely to be either uninteresting or wrong” and much
work looks like “apply[ing] a new mathematical tool to an old problem …
whether or not the new tool adds anything useful to analysis of the problem”
— and the extensive margin — “the application of the existing tools of
economics, including mathematics where needed, to new subjects” like “public
choice theory, law and economics, and behavioral economics”. Work on the
extensive margin to do interesting economics.
- Sketchplanations makes visual explanations. See for example this neat
illustration of the parallax effect and Arthur Eddington’s selection effects
parable (a toy version of the parable also follows this list).
- James Somers’ actually useful word of the day series. More “specious” and
“derring-do”, less “petrichor”.
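
A minimal sketch of the idea behind Aaronson’s oracle, for the curious. The
setup (you repeatedly press “f” or “d”), the five-key context, and the
tie-breaking below are my assumptions rather than Aaronson’s exact
implementation; the point is only that a crude frequency count over your recent
keys is already a workable Predictor.

```python
# Toy Aaronson-style oracle: predict the player's next "f"/"d" press from how
# often each key has followed their last five keys in the past.
import random
from collections import defaultdict

CONTEXT = 5  # how many previous keys to condition on (an assumption)

def play() -> None:
    history = ""                                        # every key pressed so far
    followers = defaultdict(lambda: {"f": 0, "d": 0})   # context -> follow-up counts
    correct = total = 0

    while True:
        key = input("press f or d (q to quit): ").strip().lower()
        if key == "q":
            break
        if key not in ("f", "d"):
            continue

        context = history[-CONTEXT:]
        counts = followers[context]
        # Predict whichever key has followed this context more often;
        # guess at random when there is no history to go on.
        if counts["f"] == counts["d"]:
            guess = random.choice("fd")
        else:
            guess = "f" if counts["f"] > counts["d"] else "d"

        total += 1
        correct += guess == key
        print(f"predicted {guess}, you pressed {key} "
              f"({100 * correct / total:.0f}% right so far)")

        counts[key] += 1                                 # learn only after predicting
        history += key

if __name__ == "__main__":
    play()
```

Against a human trying to be random, even something this simple tends to beat
50%, which is what makes the Predictor feel less far-fetched.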
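
And a toy, numbers-only version of Eddington’s net parable; the fish sizes and
the two-inch mesh are made-up illustrations, not anything from the
sketchplanation:

```python
# Selection effect in one screenful: if the net can't catch small fish, the
# catch alone suggests (wrongly) that small fish don't exist.
import random

random.seed(0)
MESH = 2.0                                                    # inches
ocean = [random.uniform(0.5, 10.0) for _ in range(10_000)]    # true fish sizes
caught = [size for size in ocean if size >= MESH]             # what the net keeps

print(f"smallest fish in the ocean: {min(ocean):.2f} in")
print(f"smallest fish in the catch: {min(caught):.2f} in")
```
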
AI
- Huang’s law.
- Killed by LLM, and its inspiration,
Killed by Google.
- Stanford’s recent Language Modeling from Scratch lecture series is great.
- Marin, Stanford’s
open lab for building foundation models.
- Nathan Lambert’s “The American DeepSeek
Project”.
- Calvin French-Owen’s “Reflections on
OpenAI”.
- Brendan Long’s “Can Reasoning Models Avoid the Most Forbidden Technique?”
makes an obvious but important point: reinforcement learning (RL) shapes the
chain of thought (CoT) even when no optimization pressure is put on the CoT
itself, because the model whose weights RL updates is the same model that
generates the CoT. Empirical evidence checks out. (A toy illustration follows
this list.)
- First Round’s “From Memo to Movement: Shopify’s Cultural Adoption of
AI”. Especially interesting is the
Cursor token spend leaderboard: “If your engineers are spending \$1,000 per
month more because of LLMs and they are 10% more productive, that’s too cheap.
Anyone would kill for a 10% increase in productivity for only \$1,000 per
month.”
- HRT AI Labs’ blog post on how they read ML papers is absolutely fantastic:
“While a large degree of technical and mathematical
sophistication is needed, the systems we build also need to be robust and
maintainable. One principle we apply to achieve this is always using the
simplest possible approach that achieves the desired outcome. For example, if
a linear model is as good as a random forest model, we’d prefer the linear
model. It is interesting to contrast this principle with the incentives in
academic machine learning research. An empirically-driven paper is more likely
to be published if it demonstrates novelty – but often when one optimizes for
novelty, the results can be complex, which may make it less appealing in an
applied setting.” See also their post on how they think about data. (A tiny
sketch of the simplest-model principle follows this list.)
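
To make Brendan Long’s point concrete, here is a deliberately tiny sketch of my
own (not from his post or any paper): one set of shared weights produces both
“chain-of-thought” logits and “answer” logits, the loss only ever sees the
answer, and the chain-of-thought distribution still moves after a single
update.

```python
# Shared weights mean answer-only optimization pressure still changes the CoT.
import torch

torch.manual_seed(0)
VOCAB = 8

# One shared linear map stands in for the LLM that writes both CoT and answer.
shared = torch.nn.Linear(VOCAB, VOCAB)
cot_context = torch.randn(VOCAB)   # stand-in for the prompt when writing the CoT
ans_context = torch.randn(VOCAB)   # stand-in for the prompt plus CoT when answering
target_answer = torch.tensor([3])  # "reward" = make token 3 the likely answer

with torch.no_grad():
    cot_before = torch.softmax(shared(cot_context), dim=-1)

# Optimization pressure is applied to the answer logits only.
loss = torch.nn.functional.cross_entropy(
    shared(ans_context).unsqueeze(0), target_answer
)
loss.backward()
with torch.no_grad():
    for p in shared.parameters():
        p -= 0.5 * p.grad  # one crude gradient step

with torch.no_grad():
    cot_after = torch.softmax(shared(cot_context), dim=-1)

# Nonzero: the CoT distribution shifted even though the loss never touched it.
print("total shift in CoT distribution:", (cot_after - cot_before).abs().sum().item())
```

In real RL fine-tuning the coupling is tighter still, since the sampled CoT
tokens are usually part of the very trajectory being reinforced.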
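
And a tiny rendering of HRT’s “simplest possible approach” principle on
synthetic data; the dataset, the R² metric, and the tolerance are illustrative
choices of mine, not theirs:

```python
# Cross-validate a linear model and a random forest; keep the linear model
# unless the forest is meaningfully better out of sample.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)

linear_r2 = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean()
forest_r2 = cross_val_score(
    RandomForestRegressor(n_estimators=200, random_state=0), X, y, cv=5, scoring="r2"
).mean()

TOLERANCE = 0.01  # how much accuracy we'd trade away for simplicity
chosen = "random forest" if forest_r2 - linear_r2 > TOLERANCE else "linear model"
print(f"linear R^2={linear_r2:.3f}, forest R^2={forest_r2:.3f} -> keep the {chosen}")
```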