Why Algebraic Effects?
39 points by osa1
I’m not super sure what the intended target audience of the blog post was, but assuming that it includes (1) programmers coming from both FP and non-FP languages (given the example of allocation) and (2) regular day-to-day programmers in addition to language implementers (given the example of DB queries), I find this falls into the typical trap of arguing for one side of a feature without deeply examining the other side.
In particular, there is little examination of how people accomplish the same things today in other languages which lack algebraic effects (i.e. most of them), and what that experience is like, and how this improves upon that.
I hope this doesn’t come across as mean; I don’t want to discourage the author, but I feel this frustration on reading basically every other post on algebraic effects, where people keep repeating similar points without closely examining them.
If you look at the concrete examples:
InterfaceName or any ProtocolName) which makes this appealing.
observation.Context and pass that down. It works OK. If you want effect safety, you need to annotate your functions anyways. So the ‘plumbing code’ is ~roughly the same, modulo minor syntax differences.
can Use Strings rather than an extra mylib::Context parameter?
context parameter, which gets implicitly passed to all functions unless one explicitly opts out.
rr, and there are lower overhead options for specific languages such as Replay.io for JS, without requiring effect handlers.
In particular, there is little examination of how people accomplish the same things today in other languages which lack algebraic effects (i.e. most of them), and what that experience is like, and how this improves upon that.
Thanks for writing this. I think it’s a very important point, and it will be useful for people to see that these algebraic effects have simple alternatives in other languages. In fact I didn’t see a single example of what makes algebraic effects different from what you can do in other languages: multiple calls to resume. If you call resume zero times, that’s equivalent to an exception. If you call resume one time, that’s equivalent to a function call. The only way you get behaviour different from what is easily available in other languages is if you call resume multiple times (i.e. use it as a “multi-shot” continuation).
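To make the counting concrete, here is a rough Python sketch, driving a generator as a stand-in for a one-shot handler (all names are invented for illustration):

def computation():
    x = yield "ask"        # perform an "effect", suspending here
    return x + 1

def handle(gen, resume_once):
    request = next(gen)    # run the computation up to its first "perform"
    if not resume_once:
        gen.close()        # zero resumes: unwind, like an exception
        return None
    try:
        gen.send(42)       # one resume: like returning from a function call
    except StopIteration as stop:
        return stop.value  # the computation finished normally

print(handle(computation(), resume_once=True))   # 43
print(handle(computation(), resume_once=False))  # None

A Python generator cannot be resumed from the same point twice, which is exactly why the multi-shot case has no such easy translation.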
Now, the only examples of such a feature that I have heard of are “backtracking” use cases, such as in parsers or searches. (In Haskell this is the same feature you get in the list monad or LogicT.) But I haven’t seen such a use case that couldn’t be rewritten using loops and exceptions, so I’m still not sure what multi-shot continuations buy you.
This is why my Bluefin (my Haskell “effect” system/“capability” system) doesn’t implement multi-shot continuations (and so I don’t describe it as an “algebraic” effect system).
The only way you get behaviour different from what is easily available in other languages is if you call resume multiple times (i.e. use it as a “multi-shot” continuation).
Even if you call resume one time, you still get an ability that you don’t have with a function call. You get the ability to suspend the computation and resume it later. This is also something you can do with coroutines or generators or async functions, but effects further let you compose multiple “kinds” of suspend+resume. This is also something you can do with multi-prompt delimited continuations, but effects further track this in the type system.
doesn’t implement multi-shot continuations (and so I don’t describe it as an “algebraic” effect system).
Multi-shot continuations are not what makes effects “algebraic.” The algebra comes from how effectful computations compose: https://arxiv.org/abs/1807.05923
Even if you call resume one time, you still get an ability that you don’t have with a function call. You get the ability to suspend the computation and resume it later.
That’s exactly what a function call does! Pause what was running, push to stack, run something else, pop stack, resume what was running. But you have a point: it’s also important to be able to swap between stacks, which languages without threading don’t necessarily have. Bluefin allows two coroutines to interact with connectCoroutines, which launches a thread for each coroutine. So yes, you’re right, languages with only function calls don’t get to make those kinds of constructs interact (though the continuations are still one-shot).
doesn’t implement multi-shot continuations (and so I don’t describe it as an “algebraic” effect system).
Multi-shot continuations are not what makes effects “algebraic.” The algebra comes from how effectful computations compose: https://arxiv.org/abs/1807.05923
But at least some algebraic effects require multi-shot continuations, which is enough reason for Bluefin not to be algebraic. (It could be some sort of “subalgebra” of algebraic effects, but that doesn’t seem like a good reason to describe it as such.)
But at least some algebraic effects require multi-shot continuations, which is enough reason for Bluefin not to be algebraic.
The thing that requires multi-shot is the handler, and the handler is not part of the algebraic signature. It is rather the interpreter of that signature. Removing multi-shot does not make this into a “subalgebra.”
Well, I don’t know what “algebraic” means so I can’t make a coherent argument about that specifically. But what I originally said was
Bluefin (my Haskell “effect” system/“capability” system) doesn’t implement multi-shot continuations (and so I don’t describe it as an “algebraic” effect system).
You objected to it. So are you saying that it is possible to be an “algebraic” effect system whilst not supporting multi-shot continuations? Should I, in fact, describe Bluefin as an implementation of “algebraic effects”?
Multi-shot continuations are not what makes effects “algebraic.” The algebra comes from how effectful computations compose: https://arxiv.org/abs/1807.05923
Hmm, having now read this I feel that the vague understanding I had of what an algebraic effect was is wrong, and now I’m even more confused. It seems that the definition of “algebraic effect” doesn’t actually have anything to say about handlers, and therefore nothing about delimited continuations. So I guess Bluefin is an “algebraic effects” library, just one where the handlers are less general than in some other algebraic effects libraries.
This is one place I got the impression that delimited continuations were coextensive with algebraic effects. If that is not so then perhaps an issue should be filed on that repo.
This library provides the following features simultaneously, which existing libraries could not support together:
Delimited continuations (algebraic effects)
https://github.com/sayo-hs/heftia?tab=readme-ov-file#key-features
Here’s more:
There is a deep, well-known connection between delimited continuations and algebraic effects.
https://youtu.be/0jI-AlWEwYI?t=3287
(Alexis King - “Effects for Less” @ ZuriHac 2020 54:47)
Is multiple resume not about the same as a generator or an asynchronous function?
No, a generator or asynchronous function only returns to its caller once. (I suppose in theory you could have ones that return multiple times, but then it’s just an issue of naming: those would probably end up being called “multi-shot” generators or “multi-shot” asynchronous functions).
So it’s unclear what the value add of effect handlers is here over plain parameter passing.
In Go I’m free to make any side effect I want regardless of parameters or function type.
As I wrote in the reply to ‘Capability-based security’ (point 7 in my original reply), that relies on having access to global state, and is independent of whether the language has effect handlers or not. One can create a language without support for global variables without adding effect handlers.
I think I got lost on the way through the examples. Some of it was syntax; it has been a while since I used functional languages directly. The handle kind of reminded me of Swift’s try.
It kind of felt that things were jumping all over the place in some of the more complicated examples.
I haven’t had a chance to use effects yet, but I see them as being two things, one a superset of the other:
At a basic level they seem like “ambient callbacks”: a caller registers a handler, and within that call chain a function can invoke that handler. As the article says, this is a more ergonomic form of passing a Context object around, which doesn’t require all the boilerplate of adding it as a parameter and an argument everywhere. This is super useful and I look forward to them showing up in Kotlin soon (they’re called “context parameters.”)
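A rough Python analogue of this ambient pattern is contextvars, which gives you dynamically scoped values without threading a parameter through every call (the names below are invented for illustration):

from contextvars import ContextVar

log = ContextVar("log")                  # the "effect" declaration

def deep_in_the_call_chain():
    log.get()("something happened")      # perform: find the ambient handler

def caller():
    token = log.set(print)               # install a handler for this extent
    try:
        deep_in_the_call_chain()         # no extra parameter in sight
    finally:
        log.reset(token)                 # uninstall on the way out

caller()                                 # prints "something happened"

What it doesn’t give you is any tracking in the type system: nothing stops deep_in_the_call_chain from being called with no handler installed.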
The fancier type of effect, where the callback doesn’t necessarily return back to its caller, is based on first-class continuations, which are pretty uncommon. Of all languages I’ve used, I think only Smalltalk-80 and Ruby support them, and they’re documented with big scary “Here be dragons!” signs. They’re difficult to implement performantly because they require very flexible call stacks that can be reified as first-class values — in ST80 every stack frame is a heap-allocated object. (This can be optimized, but it’s difficult.)
This comment is missing the really important thing about effects: they appear in the type signature of the function that depends on them. If you think “What color is your function?” made a compelling argument against async/await, you’ll hate effects: now you have a different “color” for every kind of thing a function can be awaiting.
On the other hand, if you thought distinguishing a function over integers from a function over floats was a good idea, distinguishing functions which yield to wait for a program-local lock from functions which yield to make a network call might seem like a good idea to you as well.
If you think “What color is your function?” made a compelling argument against async/await, you’ll hate effects: now you have a different “color” for every kind of thing a function can be awaiting.
In most languages with effect handlers, functions can be generic over the effects (generic over the color, a multicolor function?), as demonstrated in the article.
I think “What color is your function” made a very compelling argument, and I still think that effects are a cool direction.
Async/await may be questionable depending on their exact runtime semantics - I would argue that a Java-like (or Go or Haskell) green/virtual threading model fits most managed languages much better, and if the runtime can do the correct thing by itself then we don’t need to make it part of the function signature, forking the ecosystem in two (though another, possibly better point we could make is that effect systems solve this very issue, making it possible for functions to be generic in asyncness as well, as mentioned by @linkdd). It is a bit like giving managed languages malloc/free. In the same vein, it is slightly painful to use async in Rust, but it is in line with the language’s goal so I think it’s a good fit.
You’re right, I forgot to mention that part. As another reply says, this is a powerful way to do capabilities. And as the article under discussion says, if you can make a function generic over effects, you can do away with most of the “colors” problem.
if you can make a function generic over effects, you can do away with most of the “colors” problem.
This doesn’t strike me as true. The problem identified by Nystrom in the “colors” post is that it’s infectious - until you resolve the effect, any caller is now also parameterized by the effect and needs to be modified. Being generic over effects doesn’t change anything about this.
The one thing genericity over effect parameters does allow is higher order functions to express that they carry whatever effects of the functions they take as arguments. This avoids the need to duplicate those higher order functions, which is fine when there’s a small number of built-in effects, but totally intractable when users can define their own effects, hence the need for genericity over effects. This is a narrower, different problem from the one people are usually talking about when they talk about colors, though.
I think another important feature is capability tracking. If effects are typed, this essentially lets you encode what permissions a particular function has. Combined with linear typing you can control whether a library can make network calls, read your disk, etc.
The Eio effect library for multi-core OCaml includes its own capability system, which looks pretty neat. (https://github.com/ocaml-multicore/eio?tab=readme-ov-file#design-note-capabilities) As noted in the link, someone could still write code that does direct access to bypass the restrictions under current systems, but the presence of those direct access calls would also be a strong code smell for any auditing.
My Haskell library Bluefin is basically a capability-tracking system. It allows you to control whether you do network calls, disk reads, etc., but it doesn’t use linear types. I wondered why you mentioned those.
https://hackage-content.haskell.org/package/bluefin-0.0.15.0/docs/Bluefin.html
Linear types can ensure a capability is used, which can be used to encode e.g. a handle that must be closed after writing. That’s not the only way to do this, of course (compare with bracket), but AFAIK it is required to realise the rule at the type level. I’d be happy to hear I’m wrong though!
It’s worth noting recent versions of the Glasgow Haskell Compiler have delimited continuations.
Delimited continuations in Haskell have been beautifully explained (and implemented) by Alexis King in https://www.youtube.com/watch?v=DRFsodbxHQo
The fancier type of effect, where the callback doesn’t necessarily return back to its caller, is based on first-class continuations, which are pretty uncommon.
What distinguishes a proper “first-class continuation” from just a coroutine implementation like in Python? What useful things do they uniquely enable you to do?
They’re difficult to implement performantly because they require very flexible call stacks that can be reified as first-class values — in ST80 every stack frame is a heap-allocated object. (This can be optimized, but it’s difficult.)
In CPython, PyFrameObjects are heap-allocated: Objects/frameobject.c:2108:_PyFrame_New_NoTrack. This makes the implementation of “generator coroutines”/“generators with .send” (as well as debugging functionality like the inspect module) much easier. I suppose the performance penalty is offset by the heterogeneous nature of high-performance Python code: hopefully, you only really need mechanisms like generator coroutines on the programme structuring side, and all the demanding work is encoded somehow into a “restricted computation domain.”
The downside of this heterogeneous approach is that there is a big discontinuity in generality between the two domains and that computation domains tend to be designed overly specific to their use-case (e.g., you can’t have a pandas.Series of dtype sympy.core.expr.Expr without a lot of M×N implementation work; work which is so obscure and difficult that few are willing to try!)
Of all languages I’ve used, I think only Smalltalk-80 and Ruby support them, they’re documented with big scary “Here be dragons!” signs.
In fact, even simple uses of generator coroutines in Python are considered to be too esoteric or obscure, even when considering the problems they solve very well. I think it’s the same in Javascript, which also has generator coroutines (though Javascript programmers as a whole seem to use very little of the language…) This is probably why the only “coroutines” anyone talks about are the async def and async () => {} structures, which are only one, very narrow, beneficial-but-not-particularly-exciting use-case.
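For the unfamiliar, a generator coroutine in this sense is just the following (a minimal made-up example):

def accumulate():
    total = 0
    while True:
        value = yield total   # suspend; the caller resumes us with .send
        total += value

acc = accumulate()
next(acc)             # prime the coroutine up to its first yield
print(acc.send(5))    # 5
print(acc.send(3))    # 8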
In fact, these approaches are so underrepresented (outside of tools like Bluesky) that it’s a struggle to even get an LLM to write you a generator coroutine!
In contrast, in Janet “fibers” are the same mechanism, except extended beyond the def c(): _ = yield vs async def t(): pass vs async def ac(): _ = yield trichotomy. Apparently, as implemented currently, there are nine user-definable “signals” (:user0 ~ :user9) in addition to yield, debug, error, await, &c., allowing you to implement the 2ⁿ powerset of functionality (e.g., asynchronous×coroutine×exception×:user0 state machines!)
What distinguishes a proper “first-class continuation” from just a coroutine implementation like in Python? What useful things do they uniquely enable you to do?
First-class continuations can be resumed multiple times.
So, as I understand it, both Javascript and Python coroutines model first-class continuations…?
And, upon reading through the article again, as well as the comments here, Janet Fibers model an effect system without the—crucial?—detail of constraining and validating their use through the type system…
So, as I understand it, both Javascript and Python coroutines model first-class continuations…?
For Python, not as far as I know. If you have a Python generator object you can call .next() on it (to run its continuation once), or not call .next() at all (to run its continuation zero times), but I don’t think that you can call .next() twice to run the same continuation twice. (Of course you can call .next() a subsequent time to run the subsequent continuation.)
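Concretely (in Python 3 spelling):

def g():
    yield "first"
    yield "second"

gen = g()
next(gen)   # "first": runs to the first suspension point
next(gen)   # "second": advances to the *next* suspension point;
            # the first continuation is gone and cannot be re-entered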
(I’m far from a Python expert these days though.)
Thank you. This is very helpful.
but I don’t think that you can call .next() twice to run the same continuation twice
You are right that calling .__next__ on a Python generator resumes evaluation from the last instruction ((_PyInterpreterFrame*)->instr_ptr, or what we see from the Python side as g.gi_frame.f_lasti): Objects/genobject.c:gen_iternext → Objects/genobject.c:gen_send_ex2 → … → Python/generated_cases.c.h:_PyEval_EvalFrameDefault
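You can watch the frame advance from the Python side (a small illustration):

def g():
    yield 1
    yield 2

gen = g()
next(gen)
print(gen.gi_frame.f_lasti)   # offset of the first yield's instruction
next(gen)
print(gen.gi_frame.f_lasti)   # a strictly later offset: forward only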
Conceptually, since Python has no mechanism for measuring or ensuring purity, each yield-limited sequence of instructions could contain stateful operations that could not be safely or meaningfully rerun. (Python is also an imperative language with mutable state, so it occurs to me that rerunning a sequence of instructions would require snapshotting local state. This is possible at the extremes: there was a gimmick demo years back of a patched interpreter that forked on every bytecode, allowing a very limited form of rewinding.) As you state, in Python we can only ever advance the computation forward or, using mechanisms like itertools.tee (source), buffer the yielded results of previous computations.
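For example, tee replays buffered values without re-running the effects that produced them:

from itertools import tee

def squares():
    for i in range(3):
        print("computing", i)   # a visible "effect"
        yield i * i

a, b = tee(squares())
print(list(a))   # prints "computing 0/1/2", then [0, 1, 4]
print(list(b))   # [0, 1, 4] again, with no "computing" lines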
I have read of continuation use-cases where an error arises, the error percolates to some higher point in the call graph, then a remediation is performed that allows the original code to resume evaluation. I had assumed that this was just resumption of standard operation†, but I see that you’re suggesting that it is about truly rerunning the instructions.
† e.g.,
def g():
    while True:
        yield f() # could `raise`
If this is the case, is it possible to implement continuations in non-pure, imperative languages?
I think that according to this paper, yes! In any language that has exceptions, sort of, as long as you don’t mind rerunning a lot of the computation every time you suspend and resume.
Thank you for sharing the paper. I tried to read through and understand it.
Here is an excerpt from the beginning of the paper that I believe outlines the authors’ thinking:
We show that, in any language with exceptions and state, you can implement any monadic effect in direct style. … Filinski showed how to do this in any language that has an effect called delimited continuations or delimited control. … We first show… a construction we call thermometer continuations, named for a thermometer-like visualization. Filinski’s result does the rest. … Continuations are rare, but exceptions are common. With thermometer continuations, you can get any effect in direct style in 9 of the TIOBE top 10 languages (all but C). …

Here’s what delimited continuations look like in cooking: Imagine a recipe for making chocolate nut bars. Soak the almonds in cold water. Rinse, and grind them with a mortar and pestle. Delimited continuations are like a step that references a sub-recipe. Repeat the last two steps, but this time with pistachios. Delimited continuations can perform arbitrary logic with these subprograms (“Do the next three steps once for each pan you’ll be using”), and they can abort the present computation (“If using store-bought chocolate, ignore the previous four steps”). They are “delimited” in that they capture only a part of the program, unlike traditional continuations, where you could not capture the next three steps as a procedure without also capturing everything after them, including the part where you serve the treats to friends and then watch Game of Thrones. …

Implementing delimited continuations requires capturing the current state of the program, along with the rest of the computation up to a “delimited” point. It’s like being able to rip out sections of the recipe and copy them, along with clones of whatever ingredients have been prepared prior to that section. This is a form of “time travel” that typically requires runtime support — if the nuts had not yet been crushed at step N, and you captured a continuation at step N, when it’s invoked, the nuts will suddenly be uncrushed again. …

The insight of thermometer continuations is that every subcomputation is contained within the entire computation, and so there is an alternative to time travel: just repeat the entire recipe from the start! But this time, use the large pan for step 7. Because the computation contains delimited control (which can simulate any effect), it’s not guaranteed to do the same thing when replayed. Thermometer continuations hence record the result of all effectful function calls so that they may be replayed in the next execution: the past of one invocation becomes the future of the next. Additionally, like a recipe step that overrides previous steps, or that asks you to let it bake for an hour, delimited continuations can abort or suspend the rest of the computation. This is implemented using exceptions. …

This approach poses an obvious limitation: the replayed computation can’t have any side effects, except for thermometer continuations. And replays are inefficient. Luckily, thermometer continuations can implement all other effects, and there are optimization techniques that make it less inefficient.
As I understand it, their approach is based on tracing execution of a limited sequence of operations, then replaying that sequence from the start with cached values when rerunning. The authors propose some optimisations to this approach to handle what might otherwise be geometric growth in the case of deeply branching code.
That “the replayed computation can’t have any side effects” yet “thermometer continuations can implement all other effects” doesn’t make sense to me.
I can understand how we could use this approach to allow code like:
#!/usr/bin/env python4
from random import choice, randint
def c():
    while True:
        if choice([True, False]):
            yield randint(-10, +10)
        else:
            yield randint(-100, +100)
This, of course, would require wrapping or reïmplementing all effectful mechanisms in the core language and libraries. (However, the introduction of asyncio seemingly requires touching about the same volume of code…)
I cannot understand how we could use this approach to allow code like:
#!/usr/bin/env python4
from random import choice
def c(data_dir):
    while True:
        files = [x for x in data_dir.iterdir() if x.is_file()]
        if choice([True, False]):
            for x in files: x.unlink()
            yield 'trash', files
        else:
            data = [x.read_text() for x in files]
            yield 'treasure', data
I suppose we could play this forward and try to capture the effects in both branches, but it occurs to me that we would need information on how these effects interact in order to do that properly.
Of course, if we had effect-free or limited-effect code, I can see how this would work, though I’m still not certain whether we would be able to accomplish anything that would be worth all of this effort.
I should say that I am very much a spectator in this particular sport. But I think the answer to what you are asking is that you don’t explore every single branch. Each run through pushes one more branch into the “past”, where it is now frozen and the next run through just reuses the result of the effectful computation that was frozen onto the “past”.
Someone has written up a python version of this here.
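In miniature, the replay trick looks something like this (my own hedged sketch of the nondeterminism effect only, ignoring the exception machinery the paper uses for aborting and suspending):

past = []        # choices fixed for the current run
pending = [[]]   # choice-prefixes still to explore
pos = 0

def choose(options):
    # Nondeterministic choice: replayed from `past` if already recorded,
    # otherwise fix the first option now and queue the alternatives.
    global pos
    if pos < len(past):
        c = past[pos]
    else:
        c = options[0]
        for other in options[1:]:
            pending.append(past + [other])
        past.append(c)
    pos += 1
    return c

def run_all(computation):
    # Enumerate every result by re-running the *whole* computation,
    # once per recorded prefix: the past of one run becomes the
    # future of the next.
    global past, pos
    results = []
    while pending:
        past, pos = pending.pop(), 0
        results.append(computation())
    return results

def example():
    x = choose([1, 2])
    y = choose([10, 20])
    return x + y

print(run_all(example))   # [11, 21, 12, 22] (exploration order)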
I would personally not suggest doing this in real code, it’s interesting but seems like it would result in horrible footguns everywhere.
If this is the case, is it possible to implement continuations in non-pure, imperative languages?
Yes, absolutely. At the most basic level you need to make a copy of the call stack in order to resume it twice. This will not be pleasant to use with lots of mutation beyond what’s in the frame, but the semantics is still straightforward. Plenty of non-pure functional languages work this way.
But then how do we handle rerunning the below without a copy of the entire local state of the frame (in Python 3 terms, frame.f_locals)?
#!/usr/bin/env python4
from itertools import count
def coro():
    for idx in count():
        yield f(idx) # could `raise`
Similarly, since the local state could contain mutable entities, how do we handle reverse mutating them?
#!/usr/bin/env python4
def coro(filename):
    with open(filename) as f:
        for line in f:
            yield process(line) # could `raise`; `process` is a placeholder
In the above, to rerun the line, would we not have to seek back f? We may need to wrap f in something that supported continuations (but doesn’t this mean we have to propagate this design across the entire extent of our standard library and runtime…?)
Additionally, this is an example of something where the state is actually visible and manipulable. What about…?
#!/usr/bin/env python4
from postgres import connect
def coro(dsn):
    with connect(dsn) as con, con.cursor() as cur:
        cur.execute(...)
        for row in cur:
            yield f(row) # could `raise`
By my understanding, this is exactly why Python generators cannot be rewound. Since the language cannot control for or limit the presence of side-effects, it is impossible to move in any direction except forward.
This is exactly what I meant by it not being pleasant to use. Having a semantics which is not very useful for these kinds of programs doesn’t make it impossible to implement, just limits its usefulness.
There are certainly things you could do to make it more useful, on top of this. For example, make mutable objects copy-on-write, or replace some mutable objects with more immutable-style objects. More extremely, you could make some things work with transactional memory.
But at the same time, it’s not completely useless to resume an effectful computation multiple times. It really just depends on what the side effects are and why you want to resume multiple times.
I’m not knowledgeable about continuations, but I believe “resumed multiple times” is different from coroutines. It means being able to resume a single captured stack multiple times from the same point; sort of like ‘fork’ but for stacks.
(I wasn’t aware that Python had heap-allocated stack frames!)
(I wasn’t aware that Python had heap-allocated stack frames!)
I try to be fairly thorough and precise in my comments and link to primary sources as much as possible. Rather than taking my word for something, I’ll show you the code, and you can go read through it yourself!
(I don’t actually assume that people click on any of these links. As I saw just yesterday when digging through >30 year old C codes, it’s just not the case that you can drop someone into even a very short file and expect them to be able to read through it, though CPython has—historically, prior to 3.10 or so—been fairly straightforward to skim.)
But I do hope that colleagues find value in all of this. (You never know when a small detail or “fun fact” may later prove useful!)
Of all languages I’ve used, I think only Smalltalk-80 and Ruby support them
What ruby feature is this? Fibers perhaps?
https://docs.ruby-lang.org/en/3.4/Continuation.html
I didn’t realize Ruby had this!
It’s had them forever; it’s just that AFAIK no one ever uses them because they’re slow (other than to show off nifty examples of flow control - e.g. before fibres came in you could theoretically implement them using callcc, and you could turn an internal iterator into an external one).
TBH I haven’t used Ruby in a long time, but I recall a feature related to exceptions, back in the 1.x days, that let the handler resume the caller.
No, that’s just syntactic sugar to re-run a loop.
I searched for “ruby continuation” and found the actual Continuation class: https://docs.ruby-lang.org/en/2.3.0/Continuation.html
No. I said “retry”, not “redo”; the article covers both. You didn’t read far enough.
and found the actual Continuation class
Steve posted that same link as the top level reply to my comment 23 hours ago.
I’ve never used a language with algebraic effects, but I find the concept of “effects” helpful and clarifying even in languages like C++ and Rust. I mentally put parameters into two categories: effects and data. The effects are where most of the OOP style is confined.
It would probably get way too deep into the weeds of Haskell specific stuff, but I would have appreciated a comparison with monad transformers (or free monads). It’s also interesting how far the current Haskell implementations of algebraic effects (eff, bluefin, etc) are from native language support.
I’d be curious about this as well; it feels like quite a number of the benefits that the blog post talks about similarly apply to monads, which have been around for a while. In fact, on the point of treating effects as capabilities I feel Haskell-style do notation scores even better. Given the explicit use of <- binders to thread state, a pure function introducing side effects should always throw a compile error, even if the call site was side-effecting already.
Now the blog post does briefly mention composability, which IIRC is the main benefit of algebraic effects over monads, but it would be cool if the blog post highlighted that a bit more.
Hello, Bluefin author here. I didn’t understand your comment about Bluefin. Firstly, I don’t describe it as an “algebraic” effect system because it doesn’t support multi-shot continuations. I just call it an “effect” system (or a “capability” system). Secondly, I’m not sure what you mean by “native language support” and whether you’re saying Bluefin is far from it or close to it. If you can clarify then maybe I can shed some light?
Sorry about the first point, that’s my mistake. I misremembered Bluefin. As for the second point, I’m asking how far libraries are (in usability, expressiveness and safety) to a language like the one in the OP. I hope that clarifies what I meant.
Well, Bluefin supports everything in the OP (except multi-shot continuations, but I didn’t actually see any examples of those in the article). Bluefin is perfectly safe (based on Haskell’s ST-trick) (I don’t know how safe the language in the OP is) and perfectly expressive (basically everything safe you’d want to express in IO you can express in Bluefin). I hope it’s usable! I find it so. That’s why I wrote it in the way that I did. But to be fair I think that should be judged by third parties :)
Thanks! I’ll certainly try bluefin out again soon. I didn’t mean to imply that you (or any other maintainer) must write the comparison. I think whoever writes it should be familiar with both languages being compared.
You’re welcome. That’s OK, I just wanted to clarify. I think it would be a worthwhile comparison, in any case. Let me know if you have any questions about Bluefin. You can always ask me any questions by starting a discussion on GitHub.
This is a rough explanation because it’s innovating both on the type system and on the language, creating a mix that is quite inaccessible.
It looks like a Haskell-like language, and that these languages can do this kind of stuff is a given (or, more correctly, that there are individuals who can do this kind of stuff themselves but fail to get any kind of traction for it).
I mean I want to like effects but can’t really.