Introducing elfeed-summarize
Published 2026-02-19
On consuming vs. generating
As I worked on my previous post about a Claude Code skill for interactive training sessions, a thought stuck with me: the secret superpower of large language models isn't generating content, it's consuming it. Writing an essay or drafting code gets most of the attention (and skepticism). But asking a model to dive deep into a book or a code base and then answer questions about it? That's quieter, more reliable, and surprisingly useful in practice.
This post is about one such use case: summarizing RSS entries to help decide whether I want to read them.
The problem
I follow a lot of feeds, and I don't read everything. Instead, I scan the headlines and decide what interests me. Scanning efficiently requires that the headlines actually tell you something, and, well, sometimes they don't.
Here are a few random recent examples from my feed:
- Is this a good sign?
- Deep blue
- Fragments: February 13
Opening an entry to find out is often self-defeating — at that point, you've already read parts of it.
Enter elfeed-summarize
I use the wonderful elfeed as a news reader: it's an extensible Emacs package that handles everything from fetching and storing feeds to reading, tagging, and searching them.
As it happens, the similarly wonderful llm library provides a unified interface to LLM providers in Emacs – both local ones via Ollama and cloud services like OpenAI or Gemini.
Connecting the dots seemed straightforward enough, and a couple of hours later, elfeed-summarize was born: it adds LLM-powered summaries to elfeed.
Press z on any entry and a brief summary appears inline — as an overlay below the headline in elfeed's "search" view, or as a Summary: header in its "show" view.
Press z again to hide it.
The entry's locally cached content is what gets summarized; no network request to the original article is needed.
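Under the hood, the idea fits in a few lines. Here's a minimal sketch of that flow using the elfeed and llm APIs directly; this is not the package's actual implementation, and the function name and one-sentence instruction are mine:

(require 'llm)
(require 'elfeed)

(defun my/summarize-entry (entry)
  "Return a one-sentence summary of ENTRY's locally cached content."
  ;; elfeed stores entry content behind a reference; elfeed-deref resolves it.
  (let* ((content (elfeed-deref (elfeed-entry-content entry)))
         (prompt (llm-make-chat-prompt
                  content
                  :context "Summarize this article in one sentence.")))
    ;; llm-chat blocks until the provider responds; a real package would
    ;; more likely use llm-chat-async to keep Emacs responsive.
    (llm-chat elfeed-summarize-llm-provider prompt)))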
Figure 1: An elfeed search buffer; one of the entries displays a summary
The goal of a summary here is modest and specific: to help you decide whether an article is worth reading. Not to replace reading it, not to give you a false sense of having read it — just to answer the question "should I open this?" a little faster.
Expanding incrementally
Sometimes, a single sentence isn't enough: press Z (capital Z) and the package generates an additional paragraph of detail.
Press it again for another.
Each expansion builds on what's already there, so you can dial in exactly as much context as you need before committing to the full article.
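In llm terms, one natural way to get this behavior is to keep the chat prompt around and treat each expansion as another turn in the same conversation. A hedged sketch, again with a made-up function name rather than the package's real internals:

(defun my/expand-summary (prompt)
  "Ask for one more paragraph on PROMPT, an ongoing llm conversation.
llm-chat records each response in PROMPT, so the model sees the
summary it has produced so far."
  (llm-chat-prompt-append-response
   prompt "Expand on that with one more paragraph of detail.")
  (llm-chat elfeed-summarize-llm-provider prompt))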
Choosing a provider with llm
For this use case — potentially summarizing many entries in a sitting — a local model through Ollama is the natural choice. It's free to run, it works offline, and your reading habits stay on your machine. A complete setup using Mistral looks like this:
(use-package elfeed-summarize
  :ensure t
  :after elfeed
  :config
  ;; make-llm-ollama lives in the llm-ollama provider module.
  (require 'llm-ollama)
  (setq elfeed-summarize-llm-provider
        (make-llm-ollama :chat-model "mistral:latest"))
  (elfeed-summarize-mode 1))
Of course, if you prefer a cloud provider — OpenAI, Gemini, Claude, or any other supported by llm — the configuration is equally simple.
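For instance, a setup with OpenAI might look like the following; the model name is only an illustration, and the API key is read from the environment rather than hard-coded:

(require 'llm-openai)

(setq elfeed-summarize-llm-provider
      (make-llm-openai :key (getenv "OPENAI_API_KEY")
                       :chat-model "gpt-4o-mini"))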
Closing thoughts
None of this would exist without the work of others. Christopher Wellons's elfeed is an excellent piece of software that makes extending it genuinely pleasant — its architecture invites exactly this kind of addition. Andrew Hyatt's llm package saved me from writing provider-specific API calls and from committing to a single service.
Finally, if you try elfeed-summarize and have thoughts — what works, what doesn't, what you'd want to see — I'd love to hear from you. Thanks!