Stability by Design
16 points by Jackevansevo
Security fixes should basically ~never trigger a breaking change
This is definitely not true; security fixes can break things. For example, the fix for the log4j vulnerability meant disabling a feature, because that feature was fundamentally flawed. The difference is that unlike for an enhancement, your users would rather have the breakage than the vulnerability.
Other than that it’s a decent article if you ignore the twitter links.
What do you mean? What was being said was relevant to the point of the article (I believe it started as a friendly debate)
Twitter is very user hostile and essentially unbrowsable without an account. Not to mention the politics.
Quoting the text and linking the source is best IMO, but a screenshot can do too if you really care about the tweet’s formatting.
That’s true but I was talking more about how being reminded of the fact that twitter exists is unpleasant.
I agree with the author that stability is a good thing, but I think a lot of this article could be flipped the other way round. As far as I can tell, the idea is basically that dynamic typing isn’t a problem in Clojure, because it makes it easy to write stable code (and has fostered an ecosystem that encourages stable code).
But I think you could also argue the inverse is true: if you want to have dynamic typing in your language, you’ll need to go out of your way to ensure stability as a primary philosophy of your language, otherwise you’ll end up in trouble. In the charts at the top of the article, for example, what extent of the older layers of code could be removed, if the language had more static guarantees to help developers find cases that can never be triggered? And what extent of those older layers could be changed to a significantly simpler implementation at the cost of a breaking change that would be visible and well-telegraphed with a static type system?
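To make “well-telegraphed” concrete, here’s a minimal Haskell sketch (all names are mine and purely hypothetical): a library tightens a function’s type in v2, and every caller written against v1 stops compiling at the exact use site instead of misbehaving at runtime.

```haskell
module Main where

data Config = Config { port :: Int } deriving Show
data ParseError = ParseError String deriving Show

-- v1 was `parseConfig :: String -> Config` and threw on bad input.
-- v2 makes failure explicit in the type; a visible breaking change.
parseConfig :: String -> Either ParseError Config
parseConfig s = case reads s of
  [(p, "")] -> Right (Config p)
  _         -> Left (ParseError ("not a valid port: " ++ s))

main :: IO ()
main =
  -- Code written against v1 (using the result directly as a Config)
  -- no longer type-checks; the compiler flags every call site that
  -- now has to handle the Left case.
  case parseConfig "8080" of
    Right cfg -> print cfg
    Left err  -> print err
```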
The author suggests that there’s a significant cultural difference, and I suspect that’s probably true. But I think strong cultures can also indicate deficits in tooling. As an analogy: I have a really bad memory, and for years I really struggled making sure I had basic things like keys or money when leaving the house. Eventually, I came up with a strategy that has mostly solved this, which is that I have one bag, and everything that I might need for the day goes in that bag — keys, wallet, water bottle, documents, notebook, etc. I’ve trained myself to always keep everything in that bag, and to always take that bag with me no matter what, which means I now very rarely forget my keys anywhere. My wife, on the other hand, has a much better memory than me, and instead just remembers to take her stuff with her, regardless of which bag (or no bag) she’s using that day. She still forgets things every so often, and doesn’t have the bag strategy to fall back on, but 90% of the time it’s not a problem for her.
I think there’s an argument that Clojure’s stability culture is like my bag strategy: once it’s embedded, it’s a great thing to have (I forget stuff less often than my wife now). But it’s only necessary because of an existing deficit — the lack of static guarantees in Clojure’s case, a bad memory in my own. If you could find some way of removing the deficit entirely, you wouldn’t really need the strategy. This matches with my own intuitions: a lot of the time I have to deal with breaking changes, it’s a change that either improves my code, or reduces complexity in the library I’m using. Both of these are good things, and as long as the breaking change is well-telegraphed (i.e. statically detectable) and easy to fix (and as long as it’s not happening too often), then I’m fine with it. This is especially apparent within a single codebase, where refactoring an internal module/API is easy, making it possible to break down code into smaller chunks without having to worry about whether I’m writing the perfect abstraction up front, or whether that’s going to come back to bite me later on.
The analogy isn’t perfect: there are no upsides to having a bad memory, but there are plenty of upsides to dynamic type systems that might make the tradeoff worth it. But it was the first thing that came to mind when reading the article — in particular, this sense of adding in extra manual safeguards to protect against dangers that could be avoided.
I think you’re making good points. While discussing this with Tim, I mostly settled on the following claims:
In my opinion (with no data backing it up), a language like Clojure has limited ways of making breaking changes safely without adding a ton of special casing in the library code, so people value and enforce stability at the ecosystem level. In typed languages (e.g. Rust) there are more breaking changes because they are cheaper for library authors to make while still having some assurance that their clients’ code won’t silently start doing the wrong thing.
At the ecosystem level, the former culture generates much less churn than the latter.
I think the distinction between modules in my own code, and modules that I import from other people is a good one here. In my own code, churn can often be a good thing — I’m reminded of the blog post Skin-Shedding Code which explains why it’s so convenient sometimes to be able to rip code out completely, often causing breaking changes, but with confidence that the refactoring will still work as expected by the end. And that works best if all the code is in one codebase or even repository, and you can easily make all the necessary API changes. But as soon as those modules become distributed (i.e. as soon as I’m installing modules maintained by other people), then churn is generally a bad thing, and avoiding it becomes more important.
In forty-five years of programming, I’ve worked in many languages with static types, and I’ve never experienced the promised productivity boost. Quite to the contrary. On the other hand, the productivity boost of having a REPL in a LISP or Scheme, where I can try every bit of code live as I write it, fix errors as they happen, change code in my running program without restarting it, and work on one part of my program while the rest is not yet finished, is something I’ve experienced over and over again. I will choose a short OODA Loop over other language features any day.
I think there are two kinds of productivity boost: being able to write code faster, and being able to avoid writing code at all by reusing what already exists.
I’d definitely agree that a REPL and introspection are the biggest wins for the first. Smalltalk and some Lisp environments are the gold standard here. Being able to stop execution at any point, inspect all data, and then modify the code is great. Smalltalk’s #become: is fantastic because it lets you in-place replace an object with an entirely different one and update all pointers to it. This makes iteration ludicrously fast.
For the second, late binding or generics are key. Type systems (static or dynamic) that let you create modules that are reusable in quite different use cases are important. Smalltalk and Haskell take diametrically opposite approaches to this problem but end up with the same benefits. When I started writing Objective-C, I rapidly found that a large amount of the code I wrote for one project was trivial to factor out into library code and reuse across projects.
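As a small Haskell illustration of that kind of reuse (names are hypothetical): a function written once against a typeclass constraint works unchanged across very different element types, which is what makes factoring it out into a library essentially free.

```haskell
module Main where

import Data.List (sortBy)
import Data.Ord (comparing, Down(..))

-- Written once, against a typeclass constraint rather than a
-- concrete type...
rankBy :: Ord b => (a -> b) -> [a] -> [a]
rankBy score = sortBy (comparing score)

main :: IO ()
main = do
  -- ...and reused, unmodified, for unrelated use cases.
  print (rankBy Down [3, 1, 2 :: Int])      -- descending numbers
  print (rankBy length ["ccc", "a", "bb"])  -- shortest strings first
```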
I generally find the second outweighs the first. Writing code quickly can never be as fast as not writing code at all.
The second is extremely common in Lisp/Clojure: generally your specific solution is already a generic (i.e. library-esque) solution.
I find that a more static language typically makes refactoring easier, while a more dynamic language (like you say) makes experimentation and exploration easier as well as writing code from scratch. For large projects with a long lifespan involving many people, I think static languages are probably better, but I definitely prefer somewhat more dynamic languages personally. Wrangling types often just feels like a chore.
fwiw there’s a REPL in Haskell too, GHCi which ships with the compiler itself. I always keep a session open to try out stuff (mainly, inspect the types of things under various combinations of parameters)
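For example, with nothing but the Prelude loaded:

```
ghci> :t foldr
foldr :: Foldable t => (a -> b -> b) -> b -> t a -> b
ghci> :t foldr (+)
foldr (+) :: (Foldable t, Num b) => b -> t b -> b
ghci> :t foldr (+) 0 [1, 2, 3]
foldr (+) 0 [1, 2, 3] :: Num b => b
```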
If part of your program doesn’t yet type-check, can you run any other part?
Yes, with a compiler flag: -fdefer-type-errors.
Cool. Just to be clear: the compiler flag affects the behavior of the REPL?
Yes, as @ocramz also mentioned, the REPL is called GHCi so this later section in the documentation is about that.
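Roughly what that looks like in a session (messages abridged, so don’t hold GHC to the exact wording):

```
ghci> :set -fdefer-type-errors
ghci> let bad = 'a' + 1; good = 2 + 2
<interactive>: warning: [-Wdeferred-type-errors]
    • No instance for (Num Char) arising from a use of ‘+’ ...
ghci> good    -- the well-typed part runs fine
4
ghci> bad     -- the error only fires when the broken part is evaluated
*** Exception: <interactive>: No instance for (Num Char) ...
```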
Not directly answering your question, but somewhat related: I’ve found great use of typed holes (_), which cause the compiler to error out with the inferred type I have to use at a given point.
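For example (a made-up snippet; message abridged):

```
ghci> sumTo = foldr _ (0 :: Int) [1, 2, 3 :: Int]
<interactive>: error:
    • Found hole: _ :: Int -> Int -> Int
```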
I work in C# when I can, and TypeScript when I must.
In general, I find types to be a very handy way to know how things are supposed to be. I do believe that it prevents some errors from ever occurring but I have no actual data to prove that.
I would never want to work in a language for a large project without some level of typing. I appreciate the concept of a live language that you can change things, but to me not restricting the domain and range of functions just isn’t my cup of tea.
I also work in enterprise environments, and any productivity gains are because the whole system design has been rendered so complicated that it’s terrible.
Consider a typical Javascript program. What is it comprised of? Objects, objects, and more objects. Members of those objects must be either introspected or divined. Worse, it’s normal to monkeypatch those objects, so the object members may (or may not) change over time.
[…]
Now, consider a typical Clojure program. What is it comprised of? Namespaces.
I would counter with: a typical Clojure program is comprised of hashmaps, hashmaps and more hashmaps. Combined with the fact that dereferencing a nonexistent key returns nil, and that this tends to be silently carried through by a lot of operations, this problem is worse than in JavaScript. You’ll often be staring at a stack trace many layers removed from where the problem was introduced.
There have been several occasions where I’ve passed some config option to a library only for it to get silently ignored because it got renamed. The most egregious of such cases I ran into was with the Sentry SDK, which renamed a key and then renamed it back to the original name, with the docs still describing the intermediate name.
In almost all such “config maps”, the keys aren’t namespaced either (although that wouldn’t have helped anyway).
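For contrast, the ML-style handling of the same hazard (a minimal Haskell sketch with a made-up config, not the actual Sentry case): a missing or renamed key surfaces as a Maybe you’re forced to deal with at the call site, rather than a nil that rides along silently.

```haskell
module Main where

import qualified Data.Map.Strict as Map

main :: IO ()
main = do
  let config = Map.fromList [("dsn", "https://example.invalid/42")]
  -- A lookup on a renamed or missing key can't be silently ignored:
  -- the Nothing case has to be handled (or deliberately discarded)
  -- right here, where the mistake is made.
  case Map.lookup "traces-sample-rate" config of
    Nothing   -> putStrLn "unknown option: traces-sample-rate"
    Just rate -> putStrLn ("sample rate: " ++ rate)
```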
Interesting because Rich has a whole talk about this exact issue: https://www.youtube.com/watch?v=YR5WdGrpoug
yeah, I’ve seen that one. Let’s just say that the lived practice doesn’t always align with the ideal theory ;)
While I feel most comfortable with an ML-like static type system, I probably exercise less restraint than is desirable in renaming/refactoring such a codebase. This gives me something to think about.