What's the Point of Learning Functional Programming?
29 points by abhin4v
There’s one other big benefit that I think this article touches on but I’d highlight a bit more: type-level reasoning.
Idiomatic Haskell tends to express constraints on the behaviour of the program in terms of types. Learning to think that way will make you write much better C++ or Rust. You will write code that has a much higher chance of working if it compiles (and, conversely, fails to compile if it is buggy).
This is one of the most important skills in programming and one that you cannot easily teach in Python.
Type-level reasoning is great, but it's not inherently a quality of functional programming. It's just that functional programming languages are often strongly statically typed with some advanced type-level features. But for example Ada and Rust have some pretty strong type-level features too, and they're not FP languages.
In Norvig's "Teach yourself programming in 10 years":
Learn at least a half dozen programming languages. Include one language that emphasizes class abstractions (like Java or C++), one that emphasizes functional abstraction (like Lisp or ML or Haskell), one that supports syntactic abstraction (like Lisp), one that supports declarative specifications (like Prolog or C++ templates), and one that emphasizes parallelism (like Clojure or Go).
Type-level reasoning reminds me a lot of dimensional analysis in math. Similar to how reasoning about types in programs helps to ensure you're computing the value you intend, dimensional analysis ensures that the value you've calculated matches the units of the value you're trying to get. I imagine that's a valuable comparison to highlight for students making the transition from, say, engineering into programming.
F# combines both into the type system: https://learn.microsoft.com/en-us/dotnet/fsharp/language-reference/units-of-measure
Nothing, it’s point-free.
The lack of upvotes on this post is sad. Either no sense of humour or not enough Haskell programmers in the readership.
Can you explain monads without using Haskell?
C++, with its operator overloading and lambdas, could do the trick. But it would look like Haskell anyway, just with more line noise…
Personally, I wouldn’t start with monads. Instead I would work my way up the Typeclassopedia, starting with monoids, functors… and ease my way up to monads. It takes more time, but in my opinion it is more approachable.
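To answer the "without Haskell" part concretely, here is a hedged sketch of a Maybe-style monad in plain Python (all names hypothetical, no library involved). The essential point is `bind`: it chains computations that may fail, short-circuiting as soon as one of them produces nothing, which is what Haskell's Maybe monad does.

```python
# Hypothetical Maybe type: a value plus an "is this present?" flag.
class Maybe:
    def __init__(self, value, ok=True):
        self.value, self.ok = value, ok

    @staticmethod
    def just(value):        # `return` / `pure` in Haskell terms
        return Maybe(value)

    @staticmethod
    def nothing():
        return Maybe(None, ok=False)

    def fmap(self, f):      # Functor: apply a plain function inside the context
        return Maybe.just(f(self.value)) if self.ok else self

    def bind(self, f):      # Monad: f itself returns a Maybe
        return f(self.value) if self.ok else self

def safe_div(x, y):
    return Maybe.nothing() if y == 0 else Maybe.just(x / y)

# Any failure in the chain short-circuits everything after it.
good = Maybe.just(10).bind(lambda x: safe_div(x, 2)).fmap(lambda x: x + 1)
bad = Maybe.just(10).bind(lambda x: safe_div(x, 0)).fmap(lambda x: x + 1)
# good holds 6.0; bad is a "nothing" and the fmap was never applied
```

Note the progression the parent comment suggests is visible here: `fmap` alone makes it a functor, and adding `bind` is the extra step that makes it a monad.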
The resulting code is seemingly elegant, but arguably much more difficult to analyze for time complexity and space complexity compared to an iterative solution which doesn't use recursion. More specifically, one needs to think about why an infinite list will never be materialized, and analyzing that requires one to simultaneously look at several tiny functions, instead of one loop (so these functions are still coupled, not completely independent). https://github.com/ncreep/point-of-learning-fp-blog/blob/master/KnightsTourWholemeal.hs#L48-L84
Even the blog post itself briefly acknowledges the complexity ("This is a bit mind-bending, take a moment to let the tricky recursion sink in properly.") and the performance risk ("For such boards actually computing the whole list would likely crash the program").
In production, if I see someone shipping code that is "mind-bending", that's generally (but not always) a negative thing.
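The laziness at issue can be sketched in Python with generators (an illustrative analogy, not the linked Haskell code): the "infinite list" is a recipe, and only the elements a consumer actually demands are ever computed. The catch the parent comment points at is also visible: the cost of the program now depends on producer and consumer together, not on either one alone.

```python
from itertools import count, islice

# Conceptually infinite sequence of squares; nothing is computed
# until a consumer pulls elements from it.
def squares():
    for n in count(1):
        yield n * n

# Taking finitely many elements forces only finitely much work,
# so the "infinite list" is never materialized.
first_five = list(islice(squares(), 5))   # [1, 4, 9, 16, 25]
```

Replacing `islice(..., 5)` with `list(squares())` would loop forever, which is the generator-world version of "actually computing the whole list would likely crash the program."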
The short/elegant/idiomatic Haskell may not only be hard to analyze, but it can also run worse. Two examples:
I think this relates pretty strongly to a recent comment of mine:
https://lobste.rs/s/t8fc8a/unexplanations_relational_algebra_is#c_gsdg75
And Haskell's expression- and recursion-based style is almost literally math -- it's geared towards reasoning about sets, without time. But this can be bad, because in programming, you often care about time.
BUT, I'd still argue that Haskell is good for teaching people about math. It's RELATED to programming, but not the same thing. A lot of code suffers from basic reasoning errors, and doing math can help with that.
I do think that the idiomatic expression of quicksort in Haskell is very helpful in understanding the algorithm, but is essentially useless as an implementation. Any implementation that is not in-place is practically pointless; you might as well just use mergesort.
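For readers who haven't seen it, the famous idiomatic quicksort transliterates into Python like this (a teaching sketch, not a serious implementation): it makes the divide-and-conquer structure obvious, but it allocates two fresh lists at every call, which is exactly the "not in-place" objection above.

```python
# Transliteration of the two-line Haskell quicksort. Clear for teaching,
# but each call copies the input into two new lists, forfeiting the
# in-place partitioning that makes quicksort attractive in practice.
def quicksort(xs):
    if not xs:
        return []
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

quicksort([3, 1, 4, 1, 5, 9, 2, 6])   # [1, 1, 2, 3, 4, 5, 6, 9]
```

As the parent says: great for understanding the algorithm, but at that point a mergesort would serve equally well as an actual implementation.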
Your comment makes me super curious about what resources are available for learning math with Haskell. I’ll of course do my own queries but do you have any that you recommend?
Hm I'm not sure I have great recommendations, but what I mean is that a lot of code is too long and suffers from reasoning errors
It's not "obviously correct" in the way that math can be ... you have to read too much code
The thing that turned me onto that was Lisp and the SICP book in college, not Haskell. (Much later, I went through Real World OCaml by Minsky et al. and wrote a bit of OCaml -- I actually haven't done much Haskell)
But I'd say "short code" applies to all functional languages to a degree - Lisp, OCaml, Haskell
I think type-level programming is useful up to a point, but there are often more downsides than upsides.
Hope that helps a bit
Let's go full elitist here and say what we're really getting at.
The Python solution is a machine oriented view of programming whereas the Haskell solution is more math oriented. With the Haskell one you're modeling the solution much more directly and proceeding through it naturally, whereas the Python one is more rigid and burdened with bookkeeping and minutia.
This obviously doesn't mean you can't solve this in Python. Nor does it mean you can't create a more natural model of the solution in Python. But all of it is more work. You're essentially creating explicitly, tediously, what Haskell gives you as first class language constructs.
When more powerful expressions are available as first class language constructs, better solutions are pursued more easily.
It's not an accident that these solutions are easier to reason about and understand. It's not an accident that they frequently work properly the first time they're compiled. It's not an accident that the compiler can still render binaries with excellent performance. It's just math.
Over time I've come to realize it's not a competition between paradigms, but each of them have different abstractions and mechanisms that have advantages in different situations. Learning more paradigms means being able to choose the best one for each situation.
Most real world languages (C++, C#, Java, Ruby, Python, Rust, Common Lisp, etc.) allow using different techniques together because real programs have areas that should be more functional or more OO or more procedural, and using the right paradigm in the right place makes the codebase easier to understand, have fewer bugs, and perform better.
The languages that are made to demonstrate a paradigm (Haskell, Smalltalk, Prolog, etc.) are great for showing off the strengths of the technique, but they tend to be clunky outside of their niche.
Agreed 100%, just avoid going too deep into laziness, as it's a non-transferable knowledge baggage.