Open weights are quietly closing up - and that's a problem
14 points by martinald
This article is inaccurate with respect to Kimi. Some vendors are labeling releases "modified MIT" when they have huge restrictions, but Kimi K2.6 just has an advertising clause. I'd prefer it didn't, but it doesn't seem overly objectionable.
The claim in the article at the time of writing:
Kimi have imposed a license condition that you cannot use the open weights models if you have more than 100M monthly active users of your product or do more than $20m/month revenue
And the reality of the K2.6 license:
Our only modification part is that, if the Software (or any derivative works thereof) is used for any of your commercial products or services that have more than 100 million monthly active users, or more than 20 million US dollars (or equivalent in other currencies) in monthly revenue, you shall prominently display "Kimi K2.6" on the user interface of such product or service.
It's true that some vendors seem to be cutting back on open weight releases, but others are entering the scene as well. Right now Xiaomi, DeepSeek, Moonshot, and Z.ai all have fairly competitive large open weight models. In terms of smaller models, Gemma 4 moved to a standard open license (Apache), which is a win. I think the article is right to raise the concern, but for now at least, vendors moving away from open weight models have mostly been replaced by other vendors either entering the space or moving to a more conventional license.
Hi, author here, fair point - updated the article. I think I got confused with the Cursor Kimi stuff! But you're absolutely right(tm).
It will be interesting to see what happens. I didn't put this in the article because it was already too long, but my thinking is that the Chinese labs needed open weight models to get any real traction - it would have been very hard for them to get any 'global' (at least) mindshare without them. Sort of like Grok now: very poor uptake despite loads of compute and aggressive pricing. And the Chinese labs don't even have the compute that xAI has for inference.
And now that the models are getting great, there is much more incentive to close them up. But again, I could be wrong - perhaps there is a world where there is always a new entrant.
Thanks for updating it!
"It's tough to make predictions, especially about the future" :) I'm just glad that with Kimi K2.6, GLM 5.1, DeepSeek V4, and Mimo v2.5 we've had at least "the next round" of competitive large open weight models still made available. One thing I watch carefully is what happens next with Olmo from AllenAI - with open training scripts and a largely openly available dataset, it acts as a floor for what anyone with sufficient compute and a team should be able to achieve. Will we get another Olmo release that gets anywhere close to Gemma 4 / Qwen 3.6 27B, I wonder.
I ask myself, what is the market incentive to publish open weight LLMs?
Nvidia is releasing some models on Hugging Face, likely betting that it can sell more graphics cards if people run local LLMs instead of using subscriptions.
Maybe, if fewer open weight models are published, large inference providers will band together to form an "open weights foundation", similar to the Linux Foundation, that coordinates training data acquisition, training, and fine-tuning. Because without competitive models to serve, these companies don't have a business.
Somewhere I read this theory that the end of Moore's Law would lead to more open, repairable electronics. To stay on the cutting edge, companies are incentivized to keep their secret sauce to themselves. But when that edge stops moving, you're no longer competing on functionality: all products on the market are equally good, and the incentive disappears. (Or maybe the secret sauce shifts from the product to the manufacturing?)
I'm not sure if the current inference providers would go for an open weights foundation. But if LLMs hit a wall in the coming years, I could definitely see an incentive for their customers. "Hey, why are we paying OpenAnthropic $5 billion a month for something we could do ourselves?"
GLM 5.1 is a very competitive open weight model released just last month under the MIT license. There are already many companies providing it as a service. It is produced by Z.ai, which like the other Chinese companies might add restrictions in the future - but it doesn’t have any now.