I see a future in jj
184 points by steveklabnik
I use these same metrics in evaluating projects. Especially community: in the same way most technical problems are people problems, a healthy community has a good shot at fixing almost any problem. I'm real optimistic to see what collaboration workflow y'all come up with. Good luck!
Thank you! I'm glad that bit resonated. When writing the post, I was trying to think about how to make it more useful than just "hey, I'm changing jobs," and I realized that while I've been evaluating stuff this way for a long time, I've never actually written it down anywhere.
I used rather different metrics when evaluating "my next Version Control System":
(1) Does it bring a fundamental improvement over git (rather than just cosmetics, as Magit superbly achieves)? (2) Will this, in the long run, give a tool that is more pleasant to use for newcomers? (3) Does it offer a realistic migration path from git?

For (1), this requires that the author of the tool can articulate design choices clearly. jj had pretty good docs about its design and implementation choices from the start. (I suspect that this may come from a dual culture of git and hg, which avoids too-incremental tweaks over git.)
For me the answer to (1) is the handling of conflicts as a first-class state, the answer to (2) is yet to be determined, and for (3) the answer is the clever use of the git repo format as a backing store, which allows incremental adoption instead of requiring all collaborators to try a new tool at the same time.
I also looked at sapling and pijul, but found jj more convincing on these three points. sapling does not have this treatment of conflicts as first-class state, and pijul does not allow incremental adoption.
In which context do you assess a healthy community though? What I mean is, a healthy community "at small scale" doesn't translate to a healthy community at larger scales, when VC companies start getting involved, when there are a lot of people with very different ideas of where the project should go, and so on (hint: the Nix community).
I say this not as a critique but speaking from personal (and painful) experience of seeing a community that kinda seemed to work turn into a dumpster fire right as "success" started becoming more tangible.
It's also true though that in hindsight many of the issues plaguing this specific community were already incubating, so maybe it's just a matter of experience.
Also I realize now that it's ironic I make this comment about scale to you, who has to think hard about the health of this very community with respect to its size :)
You can only ever really evaluate it at the current moment, and hope to guess at its trajectory. It's absolutely true that communities change over time. This stuff is just really, really hard.
I think I got the idea from Simon Peyton Jones's concept of the "threshold of immortality" that he used to explain Haskell's very unusual journey from academia into a broader audience. There's a good recording of this at 12:04 here*. In short, most research languages get few users and fade out after a couple years, but Haskell attracted enough users that it formed a community to sustain it. It became a functioning gift economy, where participants felt they would receive more than they put in by contributing to it. That community then grew to reach a "threshold of immortality" with enough users and value that it will be viable for decades.
FWIW I think, absent disaster, jujutsu will hit that threshold next year. We'll see a book or two, a popular programming editor will support it out-of-the-box, and a couple of open source projects with low hundreds of annual contributors will make jujutsu their primary VCS. But mostly it'll just have attracted enough users and contributors that I can expect to read a jj repo in 2060 with the same convenience with which, today, I can install a single distro-provided package to read an RCS repo from 1990.
I'm really chewing on @Foxboron's subthread about CLAs, I think he made a compelling point about one of those potential disasters. I recently commented about how I see licensing as a tool for solving problems. But while a CLA is intended as insurance against future licensing problems, it empowers one entity who might attempt to seize all the value in the surrounding community. I shouldn't be so ready to toss out the baby with the bathwater, as much as the limitations of open source have really frustrated me the last few years.
So, coming back to your question about assessing communities, SPJ's concept is about size, but not really about health. I've been thinking about it since before I became the admin here (to pick one comment out of a couple dozen, but my first long meta comment aged well). I don't have a short, coherent answer yet. Psychological safety is a big part. And I look to see expression of old-fashioned-sounding values like charity, humility, decency, collaboration, hope, and nourishing pride and ambition - but hopefully to pro-social ends rather than into the familiar selfish disasters! I've been jotting down notes on how to nurture healthy online communities for a long time; perhaps I will even manage to refine some coherent principles and publish something before the jj repo succumbs to bit rot. :) In the absence of a rubric, I dunno, the vibes are good in jujutsu.
* Though I think I picked it up from his earlier use of these slides and concept in his July 2007 talk "A Taste of Haskell". I link the later talk because this talk's recording (part 1, part 2) is pretty rough and you have to manually follow the slides alongside.
This is a real concern, but I think in life you often have to make judgment calls with imperfect information. Steve started getting involved in Rust due to, among other reasons, Firefox. Did Rust become a big part of Firefox? Not really, but Rust took off elsewhere.
Going to miss you so much, Steve! Looking forward to jj taking over the world with your help.
I'll miss working with you too! And thanks again for introducing me to a tool I've ended up liking so much.
Have you made up any thoughts around the Google CLA situation and whether or not this can have any business impact when building on top of jj?
My next blog post will be about this topic more generally. I'm pretty anti-CLA, but I did sign this one. It's a hot topic in the jj community, and a lot of the team would prefer it not exist, but getting there will take some time and effort.
I'm not worried about it in terms of building a business, I'm more worried about it possibly driving away interested contributors to the project itself.
> My next blog post will be about this topic more generally. I'm pretty anti-CLA, but I did sign this one. It's a hot topic in the jj community, and a lot of the team would prefer it not exist, but getting there will take some time and effort.
Cool, looking forward to that.
> I'm not worried about it in terms of building a business, I'm more worried about it possibly driving away interested contributors to the project itself.
I've personally disregarded jj as a tool altogether because of it.
> I've personally disregarded jj as a tool altogether because of it.
I am very curious about this. Not to say you're wrong, but the CLA doesn't apply to users, only contributors. What's your concern there?
> I am very curious about this. Not to say you're wrong, but the CLA doesn't apply to users, only contributors. What's your concern there?
A project where you end up with one set of rules for "you" and one set of rules for "us" is inherently not something I'm willing to entertain. Especially not for tools I'll end up having around as "core" things where I expect myself to engage upstream.
This has become a huge point for me personally with the recent years where I've seen companies pull bullshit on both their communities and contributors. Canonical forcing the hand of the lxd developers, having to fork into incus. Minio doing license fuckery (not CLA related, but relevant still). Redis going proprietary, then doing a 180 back to FOSS again(????). There being a CLA, especially a Google one, signals to me it's not worth investing time into it. I will get disappointed and let down.
If jj becomes popular and Google wants a piece of that cake, they will have it. The playing field is not even.
(I'm literally still running i3 on X11, I haven't found a window manager where I personally like the developer enough to upheave a decade's worth of investment.)
Gotcha thanks. This mirrors why I generally don't like or contribute to projects with CLAs, but I've never extended that to being a user before. I can certainly understand this perspective, though. I appreciate it, knowing this sort of thing can help when trying to advocate for the removal of the CLA.
A (wild) example.
Say someone implements support for forgefed into jj and you get a cool jj webui tool to help you deal with incoming pull-requests. Say Google wants to capitalize on jj and they have their cool jhub.org thing. They realize that "wait, jj webui conflicts with our goals. So we'll take that code and only offer it as part of our proprietary product. The community edition can do fine without it".
Realistically, what do you do?
You can fork. But the code will diverge and Google can relicense future work under whatever license they want to prevent any competition from the fork.
You can maintain it as a separate tool, but also deal with increasingly diverging code. Google can keep taking the contributions one way. This is what Redis does to Valkey, and LXD does to Incus.
A CLA is a threat to you as a user, not only as a contributor.
> They realize that "wait, jj webui conflicts with our goals. So we'll take that code and only offer it as part of our proprietary product. The community edition can do fine without it".
> Realistically, what do you do?
> You can fork. But the code will diverge and Google can relicense future work under whatever license they want to prevent any competition from the fork.
But with a non-copyleft license, they can do that anyway, can't they?
> But with a non-copyleft license, they can do that anyway, can't they?
If they get permission from existing copyright holders, sure. This differs when there is a CLA at play, which is what I'm trying to illustrate.
IANAL, but: with the majority of non-copyleft open source licenses, you can modify existing code, make the result proprietary, and sell the modified version without releasing those changes. The CLA does not make a difference. As an obvious example: see the various BSDs and the commercial projects built on top of them.
Bit confused here, in your example even with BSD licenses would they not be able to remove jj webui from the main thing? Or is the issue there that they might not be able to offer jj webui in the proprietary thing along with advertising it as such in that way?
Right. I think that here, there's some significant mitigating factors, like Google employees don't even currently do the majority of development, so even if they did try to pull this, it's not clear they'd succeed. But the general thrust of what you're saying stands, for sure.
We've seen how corporations buy up FOSS developers and can effectively brain-drain the FOSS part of the project. I don't think the situation today is enough to predict the future. But I totes see that Google is not the main driver of jj today, of course.
Those are good points, in that the issue with legalese isn't in the fine points of the legalese, but in the barriers that companies can erect for others while "skating free" themselves.
I'm mainly acquainted with CLAs because of the Apache Foundation (now "ASF", since they're trying to move away from the "native American tribe" reference). The ASF has long relied on them, to ensure their ability to license under their fairly open and (business- and user-)friendly license: https://www.apache.org/licenses/LICENSE-2.0.html ... in a case like ASF, does the CLA make more sense and worry you less? Or is the same (ASF, Google, whatever) in your mind?
In other words, I'm trying to understand if the problem is Google (which I'd grok) vs. the problem being the CLA (which I don't grok so well).
Interestingly enough, Google claims their CLA was derived from Apache's: https://opensource.google/documentation/reference/cla/#whats_in_the_cla
Isn't the issue that the assignment is going to Google rather than the Apache Foundation? I generally will not sign CLAs, but the exceptions are when the assignment is going to a non-profit custodian with a specific interest in keeping the project open.
Oracle could use the Apache Foundation CLA, but that wouldn't make me trust them more. I'd still be assigning copyright to Oracle.
I was mostly just trying to point out an interesting fact, but the Google CLA does not assign copyright. It grants Google a license, and allows them to relicense if they want. But you still retain your copyright.
I agree that the Foundation and Google aren't the same regardless of that, and the difference might make someone okay with one and not the other.
> In other words, I'm trying to understand if the problem is Google (which I'd grok) vs. the problem being the CLA (which I don't grok so well).
To my understanding, the reason why we have CLAs today in FOSS communities is largely because of the SCO-Linux dispute during the 2000s.
https://en.wikipedia.org/wiki/SCO%E2%80%93Linux_disputes
The kernel settled on "Developer Certificate of Origin" (DCO), and some settled on CLAs.
I think it's more permissible for FOSS projects and non-profits to do copyright assignment. GNU has done this, Fedora does this, and the ASF (as mentioned) does this. But these projects don't have commercial interests, so it's finer? But I personally prefer the DCO approach.
I think my red line is a commercial entity, but it being Google obviously seals the deal for me.
But even without a CLA, JJ is licensed under Apache 2.0, which is non-viral and allows derivative works. Not having a CLA doesn’t prevent the primary contributor from choosing to relicense future versions. A CLA just documents the terms under which you contribute something.
Or is it the fact that they have a CLA implies to you they intend to do this?
> But even without a CLA, JJ is licensed under Apache 2.0, which is non-viral and allows derivative works. Not having a CLA doesn't prevent the primary contributor from choosing to relicense future versions. A CLA just documents the terms under which you contribute something.
Without a CLA the contributors would need permission from all former contributors to relicense. They could declare that all new contributions are under a new license though, and you would end up in "some parts are Apache and some parts are a different license" situation. With a CLA you can relicense the project wholesale and (as an example) say "it's all GPL".
At this point, if you fork the project before the relicense you obviously retain the previous license. But if the newer work is collectively licensed under GPL you can't look at nor use the patches from the CLAed upstream, as that could relicense your project under the GPL. This would not extend to the Apache2 code changes, so changes can go one way.
This is of course a bizarre example, but I think it illustrates my point about how the field is not even?
> Or is it the fact that they have a CLA implies to you they intend to do this?
No, but they can if they want. That is enough for me to disengage.
EDIT: I realized GPL is a weird example as you could poison pill your project with GPL contributions to relicense the project wholesale without a CLA! https://forgejo.org/2024-08-gpl/
> EDIT: I realized GPL is a weird example as you could poison pill your project with GPL contributions to relicense the project wholesale without a CLA! https://forgejo.org/2024-08-gpl/
You say weird, I say normal. This is how "re-licensing" a permissively licensed project to a restrictive license is going to work in general: you're not going to get a new license for the permissively licensed stuff, you're just going to keep using it under the permissive license and mix it with new restrictive stuff. Anyone who doesn't want to obey the terms of the restrictive license and who is making derivative works can't do so from the new combined restrictively licensed work (up to the boundaries of the relevant countries' fair-use-type laws).
The GPL is no more viral or restrictive than copyright in general is... it is just a license after all... it only grants additional rights.
> Without a CLA the contributors would need permission from all former contributors to relicense.
It depends a lot on the specifics of the licences. In general you can take open source code and combine it with code under a different licence provided the licences do not conflict. If the open source code has a permissive licence then you can make it more “free” by combining it with GPL code, or more closed by combining it with proprietary code. There’s no need to relicense if the existing licence gives you permission to redistribute the combination in the manner you choose.
You need to get permission from past contributors if you want to remove or substantively change any restrictions that they imposed when licensing their contributions. You might want to do this if the old licence conflicts with another licence you want to use for new code.
A CLA is not a copyright assignment, it’s a different flavour of licence that covers the relationship between the contributor and the project. The licence granted to the project might be more permissive to the project than the open source licence is to the world at large, in which case the project may have more scope for relicensing. Or the CLA might be concerned with other things, such as explicit permission from the contributor’s employer. A copyright assignment (which the FSF likes to use) gives the project complete freedom to relicense.
(I am not a lawyer and this is not legal advice.)
> Without a CLA the contributors would need permission from all former contributors to relicense.
I’m not a lawyer, but those I’ve read the opinion of don’t seem to support this interpretation. The differing opinion I’ve most often seen cited is that contributions should be understood to be licensed under the same terms as the project. For non-copyleft licenses this means project owners don’t require permission as you describe.
What terms do you believe open source contributions should be understood to use by default? Does that include the right to make derivative works (eg refactoring)?
> I'm not a lawyer, but those I've read the opinion of don't seem to support this interpretation.
You need to point out where in the article they discuss this. They do not mention anything related to relicensing, nor how the license given through the CLA interplays with the project license. Note the article is from 2018, and most CLA shenanigans have happened in recent years.
> The differing opinion I've most often seen cited is that contributions should be understood to be licensed under the same terms as the project.
By whom? You are giving the company a broad license to your contributions that is separate from the contractual obligations of the license in the repository.
> What terms do you believe open source contributions should be understood to use by default? Does that include the right to make derivative works (eg refactoring)?
I don't understand this question.
Sorry for linking the wrong article. I meant it more as a piece on the purpose of CLAs in general, but you're right it doesn't mention re-licensing. Here is a more recent article from the same author referencing the later shenanigans, if you're curious.
I also confused re-licensing existing code with changing the license of a project. @fanf has a good comment on the distinction.
The point I meant to make is that when you contribute something to an open source project, you must intend to grant them the right to use it somehow. It can't be taken as "all rights reserved", since then they would not be able to distribute it, build off it, etc. That is, you intend to give them a license to use it, under some conditions.
A CLA just documents that license. The anti-CLA counter-argument I've seen to this is that contributions should obviously be considered to be made to the project under the same terms that the project itself is made available. Github, for example, attempts to make this official by putting it in their site policy. This is, effectively, a CLA, which Github would claim you are agreeing to by using the service.
I think it is reasonable to have a separate document that combines the license with a certificate of origin. That document is a CLA, and by signing it explicitly everyone is on a lot more sure legal footing.
This is also, to be clear, an argument against CLAs being necessarily problematic. They are legal documents, and it is of course entirely reasonable to disagree with/refuse to sign any particular CLA based on its contents.
CLAs limit contributions - most large companies don't allow their employees to contribute to projects that have CLAs. Limiting contributions from the start is a signal that this is expected to be, at least from a cultural perspective, more of a source-available situation.
Disclaimer: I work for Google on an open source project that has a CLA, but these are my opinions.
You noted this much lower in the thread, but I think it’s worth highlighting for additional context: this CLA doesn’t assign copyright.
https://cla.developers.google.com/about/google-individual
In fact, it doesn’t appear to grant Google any rights that they don’t already have under the Apache license (which jj is licensed under); both include a copyright grant, a patent grant, and a disclaimer of warranty.
I need to re-get all of this into my mental cache, but I believe the issue with the Google CLA is "sublicense", which means they can re-license your contributions without a grant. I don't know if the Apache CLA says that or not, and don't have the time to dig into it right this moment.
Someone mentioned ersc.io in another jj-related thread and I signed up despite the sparse information on the website. Knowing now that @steveklabnik is going to be involved (and is excited about it) I'm... you have my attention. I'm looking forward to seeing what you build, god speed!
Sad to see steve leave oxide. Many of his articles and comments have changed me as a software engineer and as a person by challenging many of my beliefs. Excited to see where JJ goes hoping for the best for everyone.
Thanks so much! I'm sad to go too, but I just can't be in two places at once.
Can't you just create a new Steve on top of ERSC, and resolve the conflict at a later point in time?
(Seriously, though, best of luck with the new gig!)
I’m using jj every day because of sunshower’s advice here in the lobsters comments so I support your reasoning. :p
For what it's worth, Rain's endorsement of jj is what got me—one of the cofounders of East River Source Control—to start using jj and eventually leave a very cushy job working on Rust at Facebook :D
I resonate with this a lot. Around 2015, when I was still in college, I started using Rust. I'm not one to make bets on most things about the future, but it was immediately apparent to me that Rust had potential to be the future of systems programming. I recall never before or since being so certain about any bet I've ever made about the future.
So, as a result, I decided when I graduated I was going to work full time in Rust. And I did. I've been working in various Rust roles since I graduated in 2017. While some of them have been better or worse, I feel immensely lucky to be doing what I love at a company that I love. I can easily say without a doubt that level of certainty about the future does not come often, but when it does you need to chase it. You won't regret it.
Are you me? I also started using Rust around 2015, was blown away by its potential, and 10 years later find myself working professionally with Rust doing very cool stuff. We got lucky!
Steve… come on… give us more hints of what y'all are gonna do! Is it a forge?
I mean good luck etc but you gotta just tell us eventually
Ha, thanks. The thing I personally will be doing is the same stuff for jj that I did for Rust: trying to improve documentation, doing community stuff.
Something forge shaped is certainly directionally correct. We'll share more when there's something real to show!
Looking forward to it! I definitely would like some more docs to point people to (mainly of the "here's a feature I'm working on, and how I approach it with jj" rather than "here's how I do this operation in jj compared to git"), just to get onboarding to be a bit less involved.
Nothing like a big thing filled with some asciicinema to get a feel for the work in practice!
(The other day I got jj compiling to wasm/wasi by vendoring in a bunch of libs and no-oping a bunch of stuff... was only a couple hours of work to get there but I don't have a grander theory for it all so kinda dropped it. But given jj's model and relative newness, there's definitely some interesting opportunities for very interactive docs)
> But given jj's model and relative newness, there's definitely some interesting opportunities for very interactive docs
You are not the only person interested in this, for sure...
I fully understand why you would be interested in building a new forge, but at the same time I'd love to see effort put into Tangled, as it's a really promising development toward a distributed forge that offers jj as a first-class citizen, on top of AT Proto for all the "forge" parts.
But the world is big enough for multiple jj-focused forges to co-exist :)
Don't get me wrong: I'm psyched that Tangled is a thing. I hope it does well. Full agreement that the world is large enough for multiple ones, and in fact I think that's probably healthier for the industry overall. I tried to connect the jj and tangled communities even after I knew ERSC was a thing and that I was going to decide to join them. I'm glad it worked out :)
The main reason that ERSC is not just contributing to Tangled is that there are different goals. I want to see the atproto community grow, but being decentralized and being on top of atproto are non-goals for ERSC. Or let me put it this way: they're not front and center goals, never say never, I guess. But the priorities between the two are obviously very different, hence different projects.
As someone who wants good jj forge stuff but like for "normal" work... tangled really doesn't speak to me. Why am I meshing a forge with AT Proto of all things?
That's fine and cool as a bit of an experiment but honestly "interact with your code forge only through AT Proto" feels as much anathema to me as if Github was like "you can only create an account with Google".
I understand your point of view; however, let's note that being "backed" by AT Proto doesn't mean you have to create an account with a third party (Tangled allows you to create an account using them, or any deployed instance, as an account hosting provider).
It only means that you don't have to create an account again for each forge you contribute to, which in my opinion is a nice improvement over other GitHub alternatives. (And it also means that moving a project — and all of its non-code data like issues — from one forge to another is trivial, which is nice too!)
we have our own account provider too (PDS): all you'd need is an email id to join the platform and you can use it without being aware of the atproto bits.
what we are able to do with atproto is allow users to host their own repos, and still be open to collaboration with the rest of the world.
Hmm I hadn’t thought of that way of looking at it.
It does still feel a bit of a niche case (most companies don’t want to host their own repos I think) but “I want to host the repo but not host a whole forge” is definitely a rich niche in terms of the kind of interesting work that can happen there
That is definitely an interesting window where Github has announced a full-stop of feature development and is increasingly looking moribund. Neo-forges were on my shortlist of interesting new developments 6-12 months ago and turns out I wasn't the only one smelling the blood in the water.
> Github has announced a full-stop of feature development
Whoa, citation?
That's a bit misleading. They've announced that they're not prioritizing feature development for now, while they migrate to Azure. Definitely not a "full-stop on feature development"!
See the lobste.rs thread: https://lobste.rs/s/esvr7z/github_will_prioritize_migrating_azure
Sure. You can also say that jamming in Copilot into absolutely everything is "feature development".
I really wish there was more community uptake of Darcs/Pijul, where the Patch Theory-based approach is fundamentally solving problems brought on by snapshot-based VCSs instead of sidestepping them. More community tooling is exactly what these tools need to fill in gaps to make them competitive. I would prefer revolution over evolution, but it appears the broader dev communities aren't ready to give up their existing (often proprietary) forges & tooling.
I think that jj has a better chance than pijul of getting wide traction (in fact, it is already doing that) thanks to the incremental-adoption approach of being able to work on a git repository with jj locally. This technical choice is key to its success. Pijul might be a better design in absolute terms, but it is hard to tell due to our collective lack of experience using it, owing to this problem of being incompatible with existing projects or collaborators using git.
(There might be other long-term issues with pijul in terms of adoption. Its main developer has a tendency to disappear for months to reinvent something else and do it better, and does not have a good track record of attracting external contributors to work on their project. I wish pijul the best, and I am very happy that some people want to explore that space, but it is not clear to me that the project as it exists will ever be able to take off.)
But that is the thing. Revolution is actually changing the fundamentals, whereas Jujutsu is still designed for use atop Git & is limited by many of Git's design decisions. I believe Pijul's maker got Jujutsu to remove the claim that it had Darcs/Pijul-like patches, since it did not work under the same axiom they do, that patches commute, meaning the patch order should not matter if there are no conflicts.

Pijul has great ideas—specifically the identity server idea decoupling identity from commits—but absolutely is missing a ton of usability features. That said, Pijul is meant to be scriptable, & these tools can be written (which was the point I was trying to make). Darcs on the other hand actually provides a pretty good CLI experience (I like the rebase experience quite a bit) & has proper self-hostable forges & can be set up with low effort statically behind an HTTP server for distribution; but the cost is that Darcs has some design decisions that show its age (it predates Git), like tying identity to commits & having no concept of channels/branches.

Both VCSs could also be improved if something akin to Gerrit existed for reviews, & it could work even more smoothly by not having to jam its own hash mechanism into the commit log to circumvent Git's limitations; that UX is missing due to adoption, but it would be an even better experience than current Gerrit+Git.
> Jujutsu is still designed for use atop Git & is limited by many of Git's design decisions
This is both true and not true. Remember, git isn't the only backend: jj with git as the backend has some limitations, but with different backends, it does not. Right now the only other real backend is the one at google, built around their centralized VCS. But more backends will emerge.
(This is why jj tries to namespace commands that are specific to a backend, jj git in the public project, and whatever google does for their internal version, I forget.)
I believe Pijul’s maker got Jujutsu to remove the claim that it had Darcs/Pijul-like patches, since it does not work under the same axiom they do, that patches commute,
The removal of Pijul from the readme was about this, yes. Here's the commit if you or anyone else would like to see what it used to say: https://github.com/jj-vcs/jj/commit/0525dc9d860a5fff6a97d134acc2c325aa36df30
My personal take: the comparison was about first-class conflicts, not about commutativity of patches. But it is true that pijul's author read it that way, and so rather than argue about it, the simplest thing is to just respect his wishes and not make the comparison at all.
You may be interested in https://github.com/radarroark/xit , which is a DVCS project that (optionally, currently) does patch-theory based merging and cherry picking on top of a git-compatible repo. (I believe that it computes the patches from the snapshots, caches them, and then merges with patch theory, then applies those patches to create a new snapshot. This way you get the benefits of both git compatibility and patch theory advantages!) I'm only following it casually though, so I might be getting something wrong. This idea seems like the key to me though - I think git-the-datastructure will continue to be the common denominator and advancements will happen by sophisticated tools encoding extra information (like revisions) into git's datastructure.
I believe that a new on-disk repo format is critical to fixing many of git's limitations. In particular, I think better merging and better large file support can't be attained without moving to a new repo format.
- You could say that jujutsu does improve merging on the periphery, by making merge conflict resolution more ergonomic, but its actual merge algorithm is the same as in git: the three-way merge.
- If we want to reduce the number of merge conflicts that occur in the first place, we need to improve the way merges are done. That requires storing completely new data that the git repo format has no place for. A real database helps here a lot.
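To make the bullet about merging concrete, here is a minimal sketch of the line-based three-way merge family that git's algorithm belongs to: diff each side against the common base, combine non-overlapping edits, and turn overlapping edits into a conflict. This is only an illustration (the function name is made up, and git's real merge is far more sophisticated, with rename detection and recursive merge bases); it also assumes each edit overlaps at most one edit on the other side.

```python
# Minimal sketch of line-based three-way merge (NOT git's actual code).
from difflib import SequenceMatcher

def three_way_merge(base, ours, theirs):
    """Merge two lists of lines against a common base.
    Returns (merged_lines, conflict_count)."""
    def edits(side):
        # Represent each side's diff as replacements of base ranges.
        sm = SequenceMatcher(None, base, side)
        return [(i1, i2, side[j1:j2])
                for tag, i1, i2, j1, j2 in sm.get_opcodes()
                if tag != "equal"]

    ea, eb = edits(ours), edits(theirs)
    out, pos, conflicts = [], 0, 0
    ia = ib = 0
    while ia < len(ea) or ib < len(eb):
        a = ea[ia] if ia < len(ea) else None
        b = eb[ib] if ib < len(eb) else None
        if b is None or (a is not None and a[1] <= b[0]):
            # Only "ours" touches this region of the base: take it.
            out += base[pos:a[0]] + a[2]
            pos, ia = a[1], ia + 1
        elif a is None or b[1] <= a[0]:
            # Only "theirs" touches this region: take it.
            out += base[pos:b[0]] + b[2]
            pos, ib = b[1], ib + 1
        else:
            # Both sides changed an overlapping base region: conflict.
            start, end = min(a[0], b[0]), max(a[1], b[1])
            out += base[pos:start]
            out += (["<<<<<<< ours"] + a[2] + ["======="]
                    + b[2] + [">>>>>>> theirs"])
            pos, ia, ib = end, ia + 1, ib + 1
            conflicts += 1
    out += base[pos:]
    return out, conflicts
```

The point of the comment above is that this whole family of algorithms only ever sees three snapshots; storing richer history (as a patch-theory tool or a database-backed format could) gives the merge more to work with.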
This is more balanced toward revolution over evolution, yes. Keeping the network layer seems much more reasonable, since the storage mechanism still needs to change to do this properly. It would be nice if they had a mirror on a non-proprietary host, but it's an interesting project nonetheless—& a pleasant surprise, as I was skeptical.
yay! good luck!
I was looking at jj for a long time, but the tipping point was finding your tutorial: "if Steve finds it interesting, I definitely should spend my time on it". I've now been actively using it for a year. The world goes round via web of trust 😁
(which reminds me, that I should put more effort in blogging)
Good luck to ERSC! I was pitching "we should build something like a jj-native forge" to people around me a couple of months prior to the initial announcement of the project on Discord, but didn't find enough momentum around me. The need for something like that is in the air. Lots of risks as well.
Thumbs up. Nothing to add.
The commentary: I came to similar conclusions about jj through a different heuristic: do I have the impression, or see symptoms suggesting, that this is going to be great in two years?
Soon after I rewired my brain, it became self-evident that jj had enough escape velocity to succeed, at least technically.
And more recently, with the number of contributors going up, the last symptom of success showed up.
Will jj replace git as the tool of choice? I don't know. But I already see the jj-fication of git: the adoption of Rust, and a recent mailing list message from a key contributor thinking of porting some of jj's ergonomics onto git.
Great read.
I have a rule of thumb: if Rain likes something, I will probably like that thing, as we have similar technical tastes.
LOL @steveklabnik when I was looking for a web platform to adopt, and despairing over Docusaurus / React, I thought “gee, I wonder what Steve uses for his blog?”
And that is how I became a very happy user of Astro!
Cheers, and good luck with the future! ☺️
Amazing, haha. I haven't worked with Docusaurus much, but I did like the look of it when it released. I briefly thought about re-writing rustdoc to use it.
I'm using Astro on a side project as well, I'm pretty happy with it.
And thanks :)
I have a very similar outlook to how I choose projects. By similar methodology I was successful at predicting Go, Kubernetes, Rust, etc. I think Jujutsu has all of the leading indicators that it's going to be a smash.
Except, in the case of Jujutsu, isn't its success going to be its own downfall? If it's massively successful, won't the features just get pulled into Git directly? In the case of Rust, or K8s, there was nothing to pull the successful patterns into, so of course those projects have been successful. In the case of Jujutsu, everyone uses it on top of Git. Why not successively pull more of the model into Git?
I don't think there is a viable path to replacing git's staging area with making the working copy a commit. There is too much existing tooling that would break with such a change in fundamental assumptions. Sometimes you have to start over.
I'm sure 15 years down the road there will be another upstart seeking to improve on a now-burdened-by-BC jj.
But there's no need to replace the staging area, someone can just make a git extension (even a shell script) to add jj's workflow into git using just commits.
I don't think there's such a thing as "jj's workflow" that doesn't involve getting rid of the staging area and making the working copy a commit. There's git-branchless which provides a commit-first workflow on top of git which is maybe 40% of "jj's workflow", but its creator works on jj now.
There are also many other details here, like the operation log meaning jj doesn't have file locks on the working copy. No extension, let alone shell script, can remove file locks from git.
Huh, I wasn’t aware that git takes locks on the working copy. (I know there are locks on the repository internals to protect against concurrent invocations of git.) What are the working copy locks for?
My understanding of how git handles the working copy is covered in the racy git documentation which doesn’t mention locks. That makes sense because git can’t use locks to detect changes to the working copy on unix systems that only have advisory locks, but I guess git might be using locks for something else?
Git index operations take a file called index.lock. Jujutsu does not have an equivalent. See https://jj-vcs.github.io/jj/latest/technical/concurrency/
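For anyone curious what index.lock actually does: it is an instance of the classic lock-file pattern, where atomic exclusive file creation serves as the mutex, so a second process touching the index fails fast until the first finishes. A minimal sketch of that pattern in Python (git's implementation is in C, and the function names here are made up for illustration):

```python
# Sketch of the lock-file pattern behind .git/index.lock:
# O_CREAT | O_EXCL makes creation atomic, so exactly one process
# can hold the lock at a time.
import os

def acquire_lock(path):
    """Try to take the lock. Returns True on success, False if held."""
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

def release_lock(path):
    """Release the lock by deleting the lock file."""
    os.remove(path)
```

The concurrency doc linked above describes how jj sidesteps this entirely: instead of excluding concurrent writers, it records operations and merges divergent ones after the fact.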
40% of jj's workflow in git, along with git's incumbent status, ensures git remains the default for a long time to come.
As another commenter points out, Jujutsu's popularity compared to git-branchless is suggestive.
Part of this is marketing: a new VCS feels exciting compared to yet another Git extension. But Jujutsu is usability-wise and technically better than Git, and I think it's good to market better tools.
There's a cost to marketing even technically better tools, of course. A VCS is kind of a viral thing–you kinda need other people to be using it as well. And jj still doesn't have a couple of crucial things that set git apart as production-ready, like tags and pre-commit hooks.
There is such a project, git-branchless, yet it is less popular than jj, and the author of that project is working on jj as well. jj just works better in that regard IMHO (I have used both extensively).
won't the features just get pulled into Git directly?
Git's cycle time for even obvious quality of life improvements is huge. This seems very unlikely to happen on any time scale that would matter.
In the case of Rust, or K8s, there was nothing to pull the successful patterns into
I disagree. C++ could have pulled in the successful patterns but they chose not to.
My conundrum with jj right now is that while I like the underpinnings, the tool ecosystem still lacks a lot of pieces I use daily with git. I'm talking things like IDE support (there's an early version of a jj plugin for IntelliJ but it doesn't do much yet) and support in GitHub-adjacent tools like Graphite.
For my personal projects, I can just decide not to use some of those tools, but for work stuff, I have to interoperate with what the rest of my team uses.
If I were to use jj for work, any time and headaches its workflow saved me would be outweighed by the additional time and headaches of losing all the integrations and automation in my current setup and having to figure out how to make jj interact well with my team's tools.
But I think jj has enough momentum that it's only a matter of time before the ecosystem expands enough to make it attractive to switch.
It's pre-1.0, so I think that tooling being lacking in places makes total sense. We'll get there! Or maybe not; time will tell... Pointing out the stuff that's missing is helpful, for sure, and talking about its shortcomings for you is important, so thanks :)
I agree, but I disagree that the right kind of tooling has been made for git yet: not for the stack-based workflow that jj facilitates. Does any IDE let you drag a hunk from one commit to another while your code is compiling? I do that all day long, using my own fork of git-revise, but I would pay a lot for a GUI with such powers.
git rebase --interactive is such an Acheulean handaxe for commit stack sculpting, and no IDE has improved on it, because they just wrap it.
So yes, I'm also reluctant to switch away from the git tooling, but that's more because I have invested so much into the field.
With a colocated repo, a lot of IDE git things do work (e.g., the indication of changes in VS Code). I've become a fan of jjui and just leave it up all the time in a terminal to do jj stuff. I don't miss the IDE git stuff at all now.
What's missing is a good, dedicated GUI for splitting commits. You can use any diff tool but it's just a little weird (you use the diff tool to eliminate the changes you don't want in one of the two output commits).
Big congratulations on taking this leap ahead Steve! I talked to the ERSC guys and I think they are great peers to be working with.
I, too, have my own ideas about a future VCS system. Not jj-specific, but definitely jj-compatible. Though I don't have the courage to take such a big leap like you. Best of luck! I hope we run into each other someday to have a coffee and exchange ideas.
Every time I start getting into something in tech, I find a wild ~steveklabnik there advocating for the things I'm excited about, furthering the community, and improving the vibes.
Are you my ~sunshowers?
Seriously though, I've been using jj full-time for 7-8 months and got a few others trying it too. It's one of the best general tooling/workflow improvements I've made as a developer in a decade or more.
In an age where AI agent coding is becoming increasingly popular, jj is undeniably a smoother choice than git (e.g. hooks). I also foresee jj having limitless potential; looking forward to the new world you’re shaping.
This is hopeful and joyful for me… I'm one of those for whom your tutorial unlocked a lot, and I'm still using it and enjoying it. More jj is good.
Nit: add a link to a jj primer for noobs like me who aren't familiar with jj and would like to know more.
Thanks Mike! I did link to jj's website on what was the first real mention of it for this purpose, but then I made an edit based on some other feedback and accidentally introduced a previous mention without the link. Oops.
https://steveklabnik.github.io/jujutsu-tutorial/ is my tutorial, and I'll go back and fix up that reference.
The tutorial says it's for version v0.23.0. Is it best to go with that version, or would you suggest using the newest version, which is v0.34.0? Maybe someone on this forum has tried it, and there weren't any relevant breaking changes.
I definitely recommend using the latest version of jj. I think the tutorial should be up to date, people file bugs when stuff breaks and I merge it, so I can’t guarantee it, but in theory it should be good.
If you try it out and something doesn’t work, please let me know!
it was linked subtly in the article with an "I did": https://github.com/steveklabnik/jujutsu-tutorial/commit/fc8c588dfc58ce2ff54246174fc59aca0fc868b5
I see jj and think oh god am I going to have to learn the quirks of yet another thing and how much is it going to slow down my real work.
It took me less than a day to figure out Jujutsu. And I feel like I've recovered all of that, and more, by not having to deal with constant "git rebase -i" operations to get a clean stack of commits out for review.
I thrashed around for thirty minutes trying to figure out all the fancy new terminology just to get a commit in. TBH git works just fine for me.
I'm sorry, but any expertise you have in git is now useless as everyone will switch to the new hotness that is jj.  How anyone becomes an expert in this industry when everything changes every few years is beyond me.
And in a few years, jj will seem horrible and something else will come along to replace it.
Git was released 20 years ago, and there's no requirement to learn jj now - learning a new tool every 20 years doesn't seem like a breakneck pace to me.
Why do you think git expertise is useless for understanding jj? Jujutsu uses Merkle DAGs just like Git, and in fact it has even more Merkle DAGs than Git does.
As an end user, I don’t actually care. What matters to me is the user interface and how much it gets out of my way while I do something else.
Well yes, but the commenter was talking about expertise which I surmised went beyond the UX.
If I think about it, “expertise in a version control system” is a symptom of a failure. It’s like requiring a developer to have expertise in saving a file.
Once we have version control that is just like saving a file we’ll know we have solved the problem ergonomically.
There is a level at which you are exactly right, and jj gets closer to that than anything that came before. What do you mean you have to run git add? You certainly don't have to run dbx add when adding files to Dropbox.
But there is another level at which understanding a system built for professionals can be greatly rewarding, and jj is also better at that level than anything that came before it. Understanding revsets, the operation log and the evolution log continues to pay dividends over time.
It’s like requiring a developer to have expertise in saving a file.
But we do need some amount of expertise in that, at least some of the time, because even "saving a file" is not always as simple as it sounds. CRLF or LF line breaks? UTF-8 encoding? Is the filesystem case-sensitive and does it matter for the task at hand? Is it a network share that can fail differently than a local disk? And so on. Sometimes those details don't matter, but sometimes they do.
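As a small illustration of the line-break point, Python's open() makes the translation explicit: the same logical text produces different bytes on disk depending on the newline setting, which is exactly the kind of detail cross-platform tooling still has to care about (the helper below is just for demonstration).

```python
# Demonstrate that "saving a file" depends on newline handling:
# the same string yields different bytes with LF vs CRLF settings.
import os
import tempfile

TEXT = "line one\nline two\n"

def save_and_read_bytes(newline):
    """Write TEXT with the given newline translation, return raw bytes."""
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        with open(path, "w", newline=newline) as f:
            f.write(TEXT)
        with open(path, "rb") as f:
            return f.read()
    finally:
        os.remove(path)
```

With newline="\n" the bytes on disk match the string; with newline="\r\n" every "\n" is translated to "\r\n" on write, so the file is two bytes longer.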
I think version control is always going to be similar because there's a certain amount of irreducible complexity in the problem it solves. If two people edit the same part of the same file, there has to be a mechanism to reconcile the changes, and that mechanism will almost certainly fail sometimes. The user (developer) will have to know enough about what's going on to resolve the failure.
Can all that be made much cleaner and easier and more reliable than what we have today? I think so. But I doubt it'll ever become a trivial matter that nobody needs to know anything about unless they're working on the VCS implementation.
Ooh, Unix vs Windows line breaks: another example of atrocious design. But don’t modern editors handle this transparently?
File system case sensitivity lol. How many developer years has that wasted eh. But that’s just a message from the editor saying “Overwrite this file?” And it’s confusing but one learns if one is inflicted and one is only inflicted rarely.
Network share? Local disk? USB disk? Doesn't matter: it's an error message. Couldn't save because this thing isn't there.
Manual file merging isn’t ever the problem: people intuitively know that combining edits on the same file can be messy and learn to use the tools at hand.
It’s the inflicted complexity of exposing the underlying automatic merge algorithms where things get gratuitous.
This is the call to idiocracy: everything must be so simple that there is no concept of expertise, at which point there will no longer be experts. I understand the anti-intellectualism which underlies this opinion but it has been unmasked over the past two decades with the rise of Wikipedia and is now clearly understood as a reaction rather than a statement of understanding and progress.
Not too many quirks — it is the most thoughtfully designed developer tool I have ever seen in my career.