Many bloggers I follow maintain a tradition of regularly sharing links to content they've found online. I always find these collections valuable; they're like recommendations from a friend. This is my version: a handful of links that sparked my interest this past month. Some are works that have been on my mind for many years; others I found recently.
The Unbearable Slowness of Being: Why do we live at 10 bits/s? This is a paper I found after publishing my post, Humans are Overparameterized. It asks a fascinating question: what is the bandwidth, in bits per second, that humans appear to have in intentional tasks? The authors arrive at an answer of 10 bits/second, and they convinced me it is correct to within an order of magnitude. One implication is that humans produce only about 1-10 GB of total output over the course of a lifetime. The paper also lays out the massive disparity (about 8 orders of magnitude!) between our apparent input bandwidth and our apparent output bandwidth.
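As a sanity check on that lifetime figure, here is a back-of-the-envelope calculation. The 10 bits/s rate is the paper's; the 80-year lifespan and two-thirds waking fraction are my own illustrative assumptions:

```python
# Back-of-the-envelope check of the lifetime-output claim. The 10 bits/s
# rate comes from the paper; the 80-year lifespan and 2/3 waking fraction
# are my own assumptions for illustration.
SECONDS_PER_YEAR = 365 * 24 * 3600

def lifetime_output_gb(rate_bps=10, years=80, waking_fraction=2 / 3):
    """Total intentional output in gigabytes at `rate_bps` bits/second."""
    total_bits = rate_bps * years * SECONDS_PER_YEAR * waking_fraction
    return total_bits / 8 / 1e9  # bits -> bytes -> gigabytes

print(f"{lifetime_output_gb():.1f} GB")  # ~2.1 GB, inside the 1-10 GB range
```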
Vehicles, by Valentino Braitenberg. This is a throwback link to a book from 1984. It's a book that I read many years ago, and I think about it often. It walks through a series of the simplest organisms imaginable, asking how they might achieve their goals with simple, understandable wiring. Despite that focus on the simple, it has remarkable explanatory power over the behavior of many animals, including humans.
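To give a flavor of how simple that wiring can be, here is a toy simulation (my own sketch, not from the book) of one of Braitenberg's two-sensor vehicles: two light sensors, two motors, and a single crossed or uncrossed connection between them.

```python
import math

def step(x, y, heading, light, crossed=True, dt=0.5):
    """One time step of a two-wheeled vehicle with two light sensors."""
    # Sensors sit to the front-left and front-right of the vehicle.
    excitations = []
    for offset in (+0.5, -0.5):  # left sensor, then right sensor
        sx = x + math.cos(heading + offset)
        sy = y + math.sin(heading + offset)
        # Excitation falls off with distance to the light source.
        excitations.append(1.0 / (1.0 + math.dist((sx, sy), light)))
    left_sensor, right_sensor = excitations
    # The entire "nervous system": each sensor drives one motor. Crossed
    # wiring turns the vehicle toward the light; uncrossed turns it away.
    if crossed:
        left_motor, right_motor = right_sensor, left_sensor
    else:
        left_motor, right_motor = left_sensor, right_sensor
    heading += (right_motor - left_motor) * dt * 4  # differential steering
    speed = (left_motor + right_motor) / 2
    return (x + math.cos(heading) * speed * dt,
            y + math.sin(heading) * speed * dt,
            heading)

x, y, h = 0.0, 0.0, 0.0
for _ in range(500):
    x, y, h = step(x, y, h, light=(5.0, 5.0), crossed=True)
print(f"ended at ({x:.2f}, {y:.2f})")  # the crossed vehicle ends up near the light
```

Flip `crossed=False` and the same two wires produce flight instead of pursuit, which is the book's point: behavior that looks like fear or aggression from the outside needs almost no machinery on the inside.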
Perplexity AI. This is my favorite AI tool this week, replacing my use of Google Search. With web search, LLM hallucinations are becoming a thing of the past. It cites its sources.
Best AI for coding in 2025: 25 developer tools to use (or avoid). I use AI coding tools constantly, and I observe that many programmers don't. Top of their list is aider, which I love.
Levin’s Universal Search. Another idea from the 70s and 80s. It’s relevant to today’s discourse about AI, particularly reasoning AI, because it represents one answer to how to turn compute into solutions to hard problems in a general way. I had originally planned to share a Scholarpedia article on the subject, but I found Perplexity’s summary more helpful. Tell me in the comments which you prefer.
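For the flavor of the algorithm, here is a toy version (my own illustrative construction, not from either write-up). Programs are strings over a tiny instruction set, and in phase k every program of length l gets a budget of 2^(k-l) steps, so shorter programs get exponentially more time:

```python
from itertools import product

# Toy machine: each "instruction" transforms a single integer register.
INSTRUCTIONS = {"d": lambda x: x * 2, "i": lambda x: x + 1, "s": lambda x: x * x}
START, TARGET = 3, 100

def run(program, budget):
    """Run `program` for at most `budget` instructions; None if unfinished."""
    x = START
    for op in program[:budget]:
        x = INSTRUCTIONS[op](x)
    return x if budget >= len(program) else None

def levin_search(max_phase=20):
    # Phase k: every program of length l <= k gets 2**(k - l) steps, so the
    # total work per phase is roughly 2**k spread across all programs.
    for k in range(1, max_phase + 1):
        for length in range(1, k + 1):
            budget = 2 ** (k - length)
            for program in map("".join, product(INSTRUCTIONS, repeat=length)):
                if run(program, budget) == TARGET:
                    return program

print(levin_search())  # prints "sis": 3 -> 9 -> 10 -> 100
```

The wasted work on failed programs costs only a constant factor per phase, which is how Levin gets his guarantee: if some program of length l solves the problem in t steps, the search finds a solution in time on the order of 2^l · t.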
You are Using Cursor AI Incorrectly. I know this post seems like hacky vibe coding (and it is), but it’s really a commentary on the programming process. A core problem with LLM-based coding is that the AI fails to learn lessons from what works and what doesn’t. It will keep making the same kind of mistake, especially because your chat history with it will inevitably get truncated. The post offers a great solution: have the LLM coder maintain its own knowledge.
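For concreteness (the specifics below are my own hypothetical example, not taken from the post): the idea is to instruct the model to append hard-won lessons to a standing rules file, such as Cursor's .cursorrules, that gets injected into every session:

```
# .cursorrules (hypothetical example of a self-maintained lessons file)
- Run the test suite before declaring a fix complete.
- This repo uses pnpm; never suggest npm or yarn commands.
- LESSON: the date parser assumes UTC; always pass timezone-aware
  datetimes, or the nightly job silently shifts by one day.
```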
Complexity no Bar to AI by Gwern. This was shown to me after my recent post about NP-hard problems. Gwern is skeptical that the formal complexity of problems will slow down AI growth, pointing out that heuristics are often good enough, and that humans have the same limit anyway. I agree.
Short list this month! I hope you find them interesting.