One cool small invention in engineering management is the p0, p1, p2, p3, p4 priority scale.
It compresses a lot of social and operational context into two characters. Lower number means higher priority. More importantly, priority is tied to action. If something is p0, somebody needs to do something.
But there is another scale I want for personal knowledge work: i0, i1, i2, i3, i4.
The i stands for interest. Priority is for actions. Interest is for attention.
If p0 means “act now”, i0 means “do not lose this”.
If p1 means “schedule work”, i1 means “read soon”.
If p2 means “do later”, i2 means “useful context”.
If p3 means “low priority work”, i3 means “weak signal”.
If p4 means “almost never work”, i4 means “almost never revisit”.
This is useful when you need to rank interest concisely across many topics, sources, or articles.
For example, you might follow several sources about the same broad topic. One source is must-read, another is useful background, and another is only worth keeping around for occasional context. They are all about the same thing, but they do not deserve the same amount of attention.
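To make the mapping concrete, here is a minimal Python sketch of the scale. The enum name, the comments, and the example sources are mine, invented for illustration, not part of any particular tool:

```python
from enum import IntEnum

class Interest(IntEnum):
    """Interest scale: lower value = more attention deserved."""
    I0 = 0  # do not lose this
    I1 = 1  # read soon
    I2 = 2  # useful context
    I3 = 3  # weak signal
    I4 = 4  # almost never revisit

# Sorting sources by interest level puts must-reads first.
sources = [("background blog", Interest.I2), ("must-read feed", Interest.I0)]
sources.sort(key=lambda s: s[1])
```

Because it's an IntEnum, the levels compare and sort like plain integers, which is all a ranking needs.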
I use this for myself in Scoop, a news intelligence system I am building to collect articles, group related ones, and rank how much attention they deserve.
This is also my setup now, except
- Instead of a Mac mini, I have a DGX Spark (ASUS variant)
- I run openclaw alongside codex, and talk to my openclaw instance via discord
And what good is this for? It lets me program my claw @dutifulbob to extract signal from the noise and display it in Scoop, my personal open-source news aggregator
I feed it Discord messages, openclaw git history, and various other sources, and it's supposed to evaluate whether that content deserves my interest. It's still a work in progress, because the more you batch the processing, the less informative the results become
in the screenshot below, my claw underrepresented what Peter has done in one day 👎
on the other hand, it has also found a PR about local model discoverability 💪
Here is the system I use to aggregate all my info, still under development:
About creating an INTERESTS.md in OpenClaw
I use my openclaw instance to aggregate all my news and information sources, including work and maintainer stuff
Like: what did everyone do today? Did anyone have an issue with acpx today? Any complaints from users?
I have various interests like this across different projects, and I've found it's not helpful to have all the interest info dispersed throughout my openclaw workspace
To address this, I have created INTERESTS.md, which is automatically included in the context like AGENTS.md and SOUL.md. I define sections for each different context of interest, and in other news aggregation skills, I just tell it to "look at my openclaw interests in INTERESTS.md" and such
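To illustrate the idea, a hypothetical INTERESTS.md might look something like this — the section names and entries are invented for the example, not my actual file:

```markdown
# INTERESTS.md

## acpx maintenance
- i0: user-reported breakage, failing CI on main
- i2: dependency bumps, routine refactor PRs

## Team activity
- i1: what everyone shipped today
- i3: chore commits, formatting-only changes
```

Each section is one "context of interest" that other skills can be pointed at by name.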
People were asking at @clawcon singapore how to set up e.g. Gemma with OpenClaw, and I've realized for some time that there is no easy “1 click” local model deployment. The local model landscape is constantly changing, and there are a million different ways you can do something
For example, you can use LM Studio to load a model (llama.cpp), or you can use vLLM. Why would you choose one over the other? vLLM currently supports MTP speculative decoding, and it’s a work in progress in llama.cpp. There are so many knobs and dials you can adjust
A first-time end user of openclaw should of course not have to know about any of this! If you have sufficient hardware for an open model, and no OpenAI or Anthropic subscription, it should automatically give you the option to set up a fully functional local model with a single click!
If the current ease of setup of local models is around Gentoo or Arch Linux level of difficulty, we should aim for e.g. Ubuntu/Manjaro/Omarchy level of difficulty
i.e. an opinionated and easy first setup, with the ability to change all the configuration later on
until I make all of this possible, you can start with the following:
- read existing local models doc below
- create a new channel in telegram or discord for testing local models. you don’t want to change the global default model just yet
- tell your claw or coding agent to download and install LM Studio locally
- tell it to download gemma4-e4b or gemma4-e2b and set it up in openclaw for the new channel you have just created. tell it not to stop, looping until it gets a successful response from that channel
all these steps will be made redundant in the near future, but until then, this should get you going with experiments and getting a vibe check on the capabilities of open models. you can also copy and paste the contents of this tweet to your agent, and it should be able to set it up for you
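The "loop until success" step above can be sketched in Python. This is a hedged sketch, not openclaw code: `wait_until_ready` and `lm_studio_probe` are names I made up, and the probe assumes LM Studio's default OpenAI-compatible server on localhost:1234:

```python
import time
import urllib.request

def wait_until_ready(probe, max_attempts=30, delay=2.0):
    """Retry a probe until it reports success -- the loop the agent is told to run.
    Returns the number of attempts it took."""
    for attempt in range(max_attempts):
        if probe():
            return attempt + 1
        time.sleep(delay)
    raise TimeoutError("local model never answered successfully")

def lm_studio_probe(url="http://localhost:1234/v1/models"):
    """Cheap readiness check: listing models via LM Studio's local
    OpenAI-compatible API. URL and port are LM Studio defaults."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False
```

In practice the agent's "probe" would be sending a real message through the test channel, but the retry shape is the same.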
https://t.co/C0I9HK4Dj1
Emacsification of Software - Recommended read by @tqbf
"Until now, the Achilles heel of Emacs culture has been that, except for Magit, its packages tend to be wretched user experiences. Ugly, slow, and discoverable only after inflicting years of elisp cortical injuries on yourself.
But AI agents have fracked Emacs culture, and it’s leaking out into the wider world. Given access to a screen and inputs, agents reliably build native user interfaces. Native UI was the province of professionally packaged programs. Now it’s all as bespoke as your editor configuration. And, while I’m sure there’s an upper limit to how good those interfaces can be (with current frontier models), that ceiling is higher than anything you can do in a TUI."
https://t.co/sHuqued44Y
/goal in codex is an interesting choice of word. a junior namer would have named it /loop --- but that would be naming what the feature has to perform in an LLM context, and not the general idea
/goal alludes to @mhutter42's definition of AGI, "an agent’s ability to achieve goals or succeed in a wide range of environments"
continual learning is not there yet, but for this exact reason, I am feeling the AGI when I use /goal
Idea so stupid it could be smart: a spec manager? specman?
People maintain plain language instead of code. Implementation details are strictly prohibited; only high-level design and ideas
MVP would also be relatively easy to implement:
- Gather list of most popular 10k npm packages
- Scrape corresponding deepwiki repo pages (sorry cognition)
- Use heuristics to get rid of implementation details, leaving you with just the pure high-level spec
- “specman add coolpackage” then fetches corresponding spec automatically, and triggers the local coding agent to implement that
- could leave versioning out for MVP — how often does the idea behind a package change anyway
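The heuristic step is the fuzzy one. Here is a toy Python sketch of what "strip implementation details" could mean — the patterns and function names are invented for illustration, not a real specman:

```python
import re

# Heuristic filter: drop lines that smell like implementation detail,
# keep high-level prose. These patterns are illustrative guesses.
IMPL_PATTERNS = [
    re.compile(r"\bdef |\bclass |\bfunction\b"),  # code signatures
    re.compile(r"\.(js|ts|py|rs)\b"),             # source file references
]

def extract_spec(wiki_text: str) -> str:
    kept = []
    for line in wiki_text.splitlines():
        if any(p.search(line) for p in IMPL_PATTERNS):
            continue
        kept.append(line)
    return "\n".join(kept).strip()

page = (
    "A small date library.\n"
    "It parses ISO 8601 strings into structured dates.\n"
    "function parse(s) { return new Date(s) }\n"
    "Internals live in src/parser.ts"
)
spec = extract_spec(page)  # only the two prose lines survive
```

A real version would probably lean on an LLM rather than regexes, but even a crude filter shows the MVP is tractable.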
I don't have a 128 GB MacBook to run ds4 on, but I resonate with all the points in Armin's post
A month ago in London, he was telling me, @mervenoyann, and @cristinaponcela that local models need more polish. Today, I am happy to be given a chance and a shot at the problem!
I have a new job!
Excited to announce that I will be working with Hugging Face to make local models work great in OpenClaw and other open agent harnesses!
I will be building in public and documenting everything along the way, stay tuned!
I co-sign this. The fact that you generate slop doesn’t mean that you don’t know the difference between good and bad code
In non-mission-critical applications, slop lets you go from 0 to 1 very quickly
Let the code grow without too much attention first. If it proves itself, tear it down and write it anew, this time properly. This is the way
This is the idea behind acpx as well
acpx is a meta-harness. its main idea is to delegate harness development to others, because it is hard to match the full might of OpenAI or Anthropic when it comes to building a harness
so it takes the functionality other harnesses provide at face value, and lets you program them from the outside
flue, which is similar, came out the other day. it would be cool if flue could let me program over codex as well. it looks very interesting!