Towards 1-click setup for local models in OpenClaw
People were asking at @clawcon Singapore how to set up e.g. Gemma with OpenClaw, and I've known for a while that there is no easy "1-click" local model deployment. The local model landscape is constantly changing, and there are a million different ways to do the same thing. For example, you can use LM Studio to load a model (llama.cpp under the hood), or you can use vLLM. Why would you choose one over the other? vLLM currently supports MTP speculative decoding, while in llama.cpp it's still a work in progress. There are so many knobs and dials you can adjust.

The first-time OpenClaw user should of course not have to know about any of this! If you have hardware capable of running an open model and no OpenAI or Anthropic subscription, OpenClaw should automatically give you the option to set up a fully functional local model with a single click. If the current setup experience for local models is at Gentoo or Arch Linux level of difficulty, we should aim for Ubuntu/Manjaro/Omarchy level of difficulty, i.e. an opinionated, easy first setup with the ability to change all the configuration later on.

Until I make all of this possible, you can start with the following:

- read the existing local models doc below
- create a new channel in Telegram or Discord for testing local models. You don't want to change the global default model just yet
- tell your claw or coding agent to download and install LM Studio locally
- tell it to download gemma4-e4b or gemma4-e2b and set it up in OpenClaw for the new channel you just created. Tell it not to stop, and to keep looping until it gets a successful response from that channel (a quick manual check against LM Studio's local server is sketched at the end of this post)

All these steps will be made redundant in the near future, but until then, this should get you going with experiments and give you a vibe check on the capabilities of open models. You can also copy and paste the contents of this tweet to your agent, and it should be able to set it up for you https://t.co/C0I9HK4Dj1
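
While the 1-click flow doesn't exist yet, it helps to confirm the local model actually responds before pointing your OpenClaw channel at it. Here is a minimal sketch, assuming LM Studio's local server is running on its default port (1234, OpenAI-compatible API) and that the "model" value matches whatever identifier LM Studio shows for the model you loaded (the gemma4-e4b name below is a placeholder):

# Minimal vibe check: ask the locally running LM Studio server for a reply.
# Assumes LM Studio's server is started on its default port (1234) and a
# Gemma model has already been downloaded and loaded in LM Studio.
import requests

LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

payload = {
    "model": "gemma4-e4b",  # placeholder; use the exact id LM Studio lists
    "messages": [
        {"role": "user", "content": "Reply with 'ok' if you can read this."}
    ],
    "temperature": 0.2,
}

resp = requests.post(LMSTUDIO_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])

If that prints a sensible reply, the model side is working and anything left to debug is in the channel wiring on the OpenClaw side.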