This is also my setup now, except
- Instead of a Mac mini, I have a DGX Spark (ASUS variant)
- I run openclaw alongside codex, and talk to my openclaw instance via discord
And what good is this for? It lets me program my claw @dutifulbob to extract signal from the noise and display it in scoop, my personal open source news aggregator
I feed it Discord messages, openclaw git history, and various other sources, and it's supposed to evaluate whether that content deserves my interest. It's still a work in progress, because the more you batch the processing, the less useful the resulting summaries become
in the screenshot below, my claw underrepresented what Peter has done in one day 👎
on the other hand, it has also found a PR about local model discoverability 💪
Here is the system I use to aggregate all my info, still under development:
About creating an INTERESTS.md in OpenClaw
I use my openclaw instance to aggregate all my news and information sources, including work and maintainer stuff
Like: what did everyone do today? Did anyone have an issue with acpx today? Any complaints from users?
I have various interests like this across different projects, and I've found it's not helpful to have all the interest info dispersed throughout my openclaw workspace
To address this, I have created INTERESTS.md, which is automatically included in the context like AGENTS.md and SOUL.md. I define sections for each different context of interest, and in other news aggregation skills, I just tell it to "look at my openclaw interests in INTERESTS.md" and such
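To make this concrete, here's a hypothetical sketch of what such a file might look like. The section names and projects below are made up for illustration; the only real convention from above is one section per context of interest, referenced by name from other skills:

```markdown
# INTERESTS.md

## acpx (maintainer)
- New issues or PRs, especially anything touching the plugin API
- User complaints on Discord or the issue tracker

## openclaw upstream
- Commits and PRs from the core team since my last digest
- Anything related to local model discoverability

## scoop (my news aggregator)
- Feedback from early testers
- Ideas worth adding to the roadmap
```

Then a news aggregation skill can just say "look at my acpx interests in INTERESTS.md" instead of repeating the criteria everywhere.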
People were asking at @clawcon singapore how to set up e.g. gemma with OpenClaw, and I've realized for some time that there is no easy "1 click" local model deployment. That's because the local model landscape is constantly changing, and there are a million different ways to do any one thing
For example, you can use LM Studio to load a model (llama.cpp under the hood), or you can use vLLM. Why would you choose one over the other? vLLM currently supports MTP speculative decoding, while it's still a work in progress in llama.cpp. There are so many knobs and dials to adjust
A first-time end user of openclaw should of course not have to know about any of this! If you have hardware that can run an open model, and no OpenAI or Anthropic subscription, openclaw should automatically offer to set up a fully functional local model with a single click!
If the current ease of setup of local models is around Gentoo or Arch Linux level of difficulty, we should aim for e.g. Ubuntu/Manjaro/Omarchy level of difficulty
i.e. an opinionated and easy first setup, with the ability to change all the configuration later on
until I make all of this possible, you can start with the following:
- read existing local models doc below
- create a new channel in Telegram or Discord for testing local models. you don't want to change the global default model just yet
- tell your claw or coding agent to download and install LM Studio locally
- tell it to download gemma4-e4b or gemma4-e2b and set it up on openclaw for the new channel you have just created. tell it not to stop, looping until it gets a successful response from that channel
all these steps will be made redundant in the near future, but until then, this should get you going with experiments and getting a vibe check on the capabilities of open models. you can also copy and paste the contents of this tweet to your agent, and it should be able to set it up for you
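If you want to vibe check the local model directly, before wiring it into a channel, here's a minimal sketch. It assumes LM Studio's local server is running on its default port (1234) with an OpenAI-compatible API; the model identifier is whatever your server reports for the model you downloaded (`"gemma4-e4b"` below is just a placeholder):

```python
import json
import urllib.request

# LM Studio serves an OpenAI-compatible API on localhost:1234 by default.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def vibe_check(model: str, prompt: str) -> str:
    """Send one prompt to the local server and return the reply text."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]

# Usage (with a local server running and a model loaded):
#   vibe_check("gemma4-e4b", "Summarize today's openclaw git log in one line")
```

If this round-trips, the same base URL and model name are what you'd point your test channel at.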
https://t.co/C0I9HK4Dj1