<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
     xmlns:atom="http://www.w3.org/2005/Atom"
     xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Onur Solmaz blog</title>
    <description>Explorations in software, agentic systems, math, languages and more.</description>
    <link>https://solmaz.io/</link>
    <language>en-us</language>
    <managingEditor>no-reply@example.com</managingEditor>
    <ttl>60</ttl>
    <atom:link href="https://solmaz.io/feed.xml"
               rel="self" type="application/rss+xml"/>
    <pubDate>Wed, 15 Apr 2026 17:59:55 +0000</pubDate>
    <lastBuildDate>Wed, 15 Apr 2026 17:59:55 +0000</lastBuildDate>
    <generator>Jekyll v4.4.1</generator>

    
    
      
      <item>
        <title>Own your AI infrastructure</title>
        <description>When you build your company&apos;s workflows around Claude Cowork, you are betting against local models and against owning your own infra, and setting your company up for long-term exploitation

If I were Anthropic or OpenAI, the thing I would fear most is local AI proliferating

Let&apos;s do the math. A single top-tier big-lab subscription costs $200 x 12 = $2400 per year

If you want both OpenAI and Anthropic, that could cost $2400, $3600, or $4800 per year, depending on which combination of Pro and Max plans you choose

An ASUS Ascent GX10 costs $3000, and you can use it for many years. You don&apos;t get the same level of coding quality from open models yet, but maybe what you need today is simpler than coding... There are already plenty of people buying GPUs for exactly this reason
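
Back-of-envelope, using the figures above (prices taken from this thread, not quoted from any vendor; the 3-year amortization window is my own assumption):

```python
# Hypothetical cost comparison using the figures above.
one_plan_monthly = 200                    # assumed top-tier big-lab plan, $/month
one_plan_yearly = one_plan_monthly * 12   # $2400 per year, per provider

# Two providers, mixing Pro and Max tiers, lands somewhere in this range:
two_provider_yearly = (2400, 3600, 4800)

# One-time ASUS Ascent GX10 at $3000, amortized over an assumed 3-year life:
gx10_yearly = 3000 / 3                    # $1000 per year, and you own the box
```

Even on these rough assumptions, the owned box undercuts a single subscription once amortized.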

And we know big labs are selling some of these plans at a loss, so they will likely get more expensive

When you use Claude Cowork or similar, you are locking yourself into being a RENTER: once a company&apos;s workflows are set up, migrating to something else takes time, even with AI to help

Infra is sticky; that&apos;s how hyperscalers make their profit. Think about the difference between what you pay AWS and what you pay Hetzner. This is B2B SaaS 101: once you sell to a company, you are in for a long time, especially in Europe

So if you build your company&apos;s AI workflows around another company&apos;s proprietary product, you are basically saying &quot;Come exploit me as much as I can tolerate for the next 10 years, because switching will be too painful&quot;

It&apos;s a great business for Anthropic. And Claude is awesome too! The feedback from friends who use it has been great; it has made their lives a lot easier

But when you build your company on proprietary AI infra, you are making sure you will never be an OWNER, and signing up for the usual sorrow of RENTING from a monopolist: exploitation

This is not the case with open source agent infra. Whereas Anthropic is unlikely to let you use future open models in a future iteration of Claude Cowork, free and open source frameworks like OpenClaw, Open Agents, etc. let you drop-in replace providers, or switch to local hardware, if they start to upcharge you

Keep this in mind if you run a business</description>
        <dc:creator>Onur Solmaz</dc:creator>
        
          
        
        
          <category>x</category>
          <category>tweet</category>
        
        <pubDate>Wed, 15 Apr 2026 16:46:17 +0000</pubDate>
        <link>https://solmaz.io/x/2044457293069062530/</link>
        <guid isPermaLink="true">https://solmaz.io/x/2044457293069062530/</guid>
      </item>
    
      
      <item>
        <title>This is our moat 🤣</title>
        <description>This is our moat 🤣</description>
        <dc:creator>Onur Solmaz</dc:creator>
        
          
        
        
          <category>x</category>
          <category>tweet</category>
        
        <pubDate>Wed, 15 Apr 2026 11:46:55 +0000</pubDate>
        <link>https://solmaz.io/x/2044381956591280140/</link>
        <guid isPermaLink="true">https://solmaz.io/x/2044381956591280140/</guid>
      </item>
    
      
      <item>
        <title>People&apos;s AI</title>
        <description>You need to understand one fact about OpenClaw

People are biased and incentivized to spread disinformation about OpenClaw. That is because OpenClaw IS NOT PUMPING ANYONE’S BAGS, unlike most other projects

Literally every other for-profit agent product is incentivized to trash OpenClaw, BECAUSE OpenClaw is a neutral third party across the industry and geopolitical scene. They MAKE MONEY when OpenClaw loses

OpenClaw does not have to worry about making money for investors. Its founder @steipete is a successful, exited founder. He is motivated by having fun and, quite literally, by democratizing AI. That is why he is suddenly so loved by everyone. He cares about PEOPLE, not MONEY

“OpenClaw is bloated”
-&gt; Since the beginning of March, OpenClaw has been thinning its core and moving functionality into plugins behind a plugin SDK. Having many plugins to choose from is not bloat. Others have already copied this approach, and it is still a work in progress

“OpenClaw is not secure”
-&gt; OpenClaw has the most eyeballs on it and addresses security advisories as soon as they come in. By sheer pressure of scrutiny, it is the most secure agent

“OpenClaw is bought by OpenAI”
-&gt; Then why is my bank account so empty bro??? All maintainers are literally unpaid, working DOUBLE shifts beside their day jobs to ship features to you. Do you think VC money can buy that kind of commitment?

Once you understand these facts, you’ll like OpenClaw even more. Because OpenClaw is your AI, People’s AI

And you can join us too. OpenClaw is the easiest project to join in AI right now. Just start using it and start making good contributions. If you are competent, you can become a maintainer and join the rest of the team in making history!</description>
        <dc:creator>Onur Solmaz</dc:creator>
        
          
        
        
          <category>x</category>
          <category>tweet</category>
        
        <pubDate>Wed, 15 Apr 2026 08:49:38 +0000</pubDate>
        <link>https://solmaz.io/x/2044337339518828727/</link>
        <guid isPermaLink="true">https://solmaz.io/x/2044337339518828727/</guid>
      </item>
    
      
      <item>
        <title>&gt; Worked for 10 hours</title>
        <description>&gt; Worked for 10 hours
&gt; Selected model is at capacity

Model is gpt 5.4 high</description>
        <dc:creator>Onur Solmaz</dc:creator>
        
          
        
        
          <category>x</category>
          <category>tweet</category>
        
        <pubDate>Tue, 14 Apr 2026 10:40:40 +0000</pubDate>
        <link>https://solmaz.io/x/2044002894819541192/</link>
        <guid isPermaLink="true">https://solmaz.io/x/2044002894819541192/</guid>
      </item>
    
      
      <item>
        <title>Gemma 4 first impressions on OpenClaw</title>
        <description>This is pretty much the arc I have been going on in the 2 months since I bought my ASUS GX10 for 3k EUR

Use Whisper via the API -&gt; realize it charged me $$$ for just a few calls -&gt; migrate OpenClaw to local Whisper

Need to deduplicate news articles for my news engine -&gt; download qwen embedding 8b

And now, gemma4-e4b finally seems like a viable local model, running at around 20 tok/s

So I will install a Matrix client to use through Tailscale, and can finally build the social-life CRM I&apos;ve dreamed of for years.

100% private, zero data going out. Ever since ChatGPT came out, I&apos;ve been wary of giving AI any personal data. But now I can finally give more personal data to my own AI agent

And I will make sure @openclaw supports all of this and makes it dead easy

Fully self-owned AI begins now</description>
        <dc:creator>Onur Solmaz</dc:creator>
        
          
        
        
          <category>x</category>
          <category>tweet</category>
        
        <pubDate>Tue, 14 Apr 2026 08:35:38 +0000</pubDate>
        <link>https://solmaz.io/x/2043971427234140207/</link>
        <guid isPermaLink="true">https://solmaz.io/x/2043971427234140207/</guid>
      </item>
    
      
      <item>
        <title>gemma 4 is actually pretty decent and runs on my asus gx10 (128 gb vram)</title>
        <description>gemma 4 is actually pretty decent and runs on my asus gx10 (128 gb vram)

the original dense 31b runs slow, averaging around 3~4 tok/s. it&apos;s also using 80% of gpu memory

my previous experience with gemini 3 pro back in november was that it was too trigger-happy. but this one-shots the simple tasks I&apos;m giving it in the openclaw harness, and it&apos;s hard to tell apart from gpt 5.4 for my use cases so far

now off to try out smaller models, because 3 tok/s is too slow</description>
        <dc:creator>Onur Solmaz</dc:creator>
        
          
        
        
          <category>x</category>
          <category>tweet</category>
        
        <pubDate>Mon, 13 Apr 2026 22:23:52 +0000</pubDate>
        <link>https://solmaz.io/x/2043817471673586010/</link>
        <guid isPermaLink="true">https://solmaz.io/x/2043817471673586010/</guid>
      </item>
    
      
      <item>
        <title>@lucasmeijer one could actually periodically trigger an agent to propose...</title>
        <description>@lucasmeijer one could actually periodically trigger an agent to propose simplifications or new abstractions in a codebase, and I believe it would already work pretty well with the current models</description>
        <dc:creator>Onur Solmaz</dc:creator>
        
          
        
        
          <category>x</category>
          <category>tweet</category>
        
        <pubDate>Mon, 13 Apr 2026 15:56:56 +0000</pubDate>
        <link>https://solmaz.io/x/2043720098507018621/</link>
        <guid isPermaLink="true">https://solmaz.io/x/2043720098507018621/</guid>
      </item>
    
      
      <item>
        <title>Got tool calls to work, context size 65k tokens</title>
        <description>Got tool calls to work, context size 65k tokens</description>
        <dc:creator>Onur Solmaz</dc:creator>
        
          
        
        
          <category>x</category>
          <category>tweet</category>
        
        <pubDate>Mon, 13 Apr 2026 11:38:39 +0000</pubDate>
        <link>https://solmaz.io/x/2043655100686573715/</link>
        <guid isPermaLink="true">https://solmaz.io/x/2043655100686573715/</guid>
      </item>
    
      
      <item>
        <title>@grok does this exist</title>
        <description>@grok does this exist</description>
        <dc:creator>Onur Solmaz</dc:creator>
        
          
        
        
          <category>x</category>
          <category>tweet</category>
        
        <pubDate>Sun, 12 Apr 2026 20:38:44 +0000</pubDate>
        <link>https://solmaz.io/x/2043428626209604023/</link>
        <guid isPermaLink="true">https://solmaz.io/x/2043428626209604023/</guid>
      </item>
    
      
      <item>
        <title>Question for the community:</title>
        <description>Question for the community:
What is the best test observability and control tool you have used so far?

- Could be SaaS, could be open source
- To be used in @openclaw repo
- Should be compatible with vitest
- Ideally language agnostic

I need something that lets me run a very long-running test group multiple times on a specific commit or tag, without repeating the tests that have already finished

This matters because the 1-hour-long process might get interrupted due to flakiness. So I need to persist the progress of a run, and then not repeat the finished tests
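
The resume-on-rerun idea fits in a few lines. A hypothetical Python sketch (all names invented; a real tool would wrap the actual runner, e.g. vitest per file, and key the state file to a commit or tag):

```python
import json
from pathlib import Path

# Hypothetical state file; in practice, one per commit/tag so results
# never leak across versions.
STATE = Path(".test-progress.json")

def load_done():
    """Set of test ids that already passed in an earlier, interrupted run."""
    return set(json.loads(STATE.read_text())) if STATE.exists() else set()

def run_group(test_ids, run_one):
    """Run each test at most once across any number of interrupted attempts."""
    done = load_done()
    executed = []
    for tid in test_ids:
        if tid in done:
            continue  # finished in a previous attempt, skip it
        if run_one(tid):  # run_one: your real runner invocation
            done.add(tid)
            executed.append(tid)
            # persist after every pass, so a crash loses at most one result
            STATE.write_text(json.dumps(sorted(done)))
    return executed
```

On a flaky 1-hour run, each retry then only executes what earlier attempts did not finish.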

I have seen some paid SaaS for this, but none that really give me what I want

This is going to be especially important while working with agents: when you are committing 100x faster, you don&apos;t want to waste time and compute running the same things

I started building this already as an exercise. If this exists already in a satisfactory way, I will stop. Otherwise, I&apos;ll keep building</description>
        <dc:creator>Onur Solmaz</dc:creator>
        
          
        
        
          <category>x</category>
          <category>tweet</category>
        
        <pubDate>Sun, 12 Apr 2026 20:33:10 +0000</pubDate>
        <link>https://solmaz.io/x/2043427225824055675/</link>
        <guid isPermaLink="true">https://solmaz.io/x/2043427225824055675/</guid>
      </item>
    

    
    <image>
      <url>https://solmaz.io/assets/images/solmazio_logo.svg</url>
      <title>Onur Solmaz blog</title>
      <link>https://solmaz.io/</link>
    </image>
    

  </channel>
</rss>
