Access Control Syntax
18 points by MatheusRich
I think modifier sections wouldn’t be too bad if they were explicit blocks. Something like:
local
  rec Vec
    // …
  end

  def sayHi()
    // …
  end
end
My color for the bike shed: nothing, pub, and export/pub* for “visible in the file”, “visible in the current unit of compilation/distribution (crate)”, and “visible across CUs”, respectively. The mid one is really important, and it’s a shame that the Rust version (pub(crate)) requires two keywords and two sigils. “Can I change all call-sites?” is the physically meaningful aspect of access control, and CU-private captures that.
If I were going for scripty, public by default, I’d go for nothing for public and my for private: my def foo.
Another option is to have only pub, plus a pub import that re-exports a file/module, and a top-level file for the crate that pub imports the files you want to expose. This gives you a recursive and nestable system instead of a two-level one, but without the full hassle of ML-style manifests (.mli files).
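A rough OCaml analogue of that top-level manifest, with made-up module names: the library’s root module re-exports only the submodules it wants visible, and anything it doesn’t mention stays internal (by convention, and in practice too once the build system wraps the library).

(* mylib.ml: the root module of the library, acting as the manifest *)
module Vec = Vec          (* re-exported, visible as Mylib.Vec *)
module Parser = Parser    (* re-exported, visible as Mylib.Parser *)
(* Internal_helpers is deliberately not mentioned here, so it isn't re-exported *)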
That’s what Rust used to have, and I find that to be pretty horrible, as it makes the answer to the key question, “is this visible outside the CU?”, non-local.
It’s interesting that the ML module approach, where you have interface files, does not satisfy this criterion either; but if you change the type signature of one of the publicly exported functions, you will get a type error telling you there is a mismatch between your new type and the one listed in the interface file. I do wonder if there is an editor feature that could help here, highlighting the types that, if you were to change them, would change the public interface of the unit of distribution.
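For example (file and function names here are hypothetical), if counter.mli keeps the old type while counter.ml drifts, the compiler reports the mismatch when it checks the implementation against its interface:

(* counter.mli: the interface file, i.e. the export manifest *)
val incr_by : int -> int -> int

(* counter.ml: the implementation, whose signature has drifted *)
let incr_by step n label =
  print_endline label;      (* now also takes a label *)
  n + step
(* compiling counter.ml now fails: its inferred type for incr_by,
   int -> int -> string -> int, does not match the one in counter.mli *)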
Was thinking along the same lines and talked to him on Mastodon, though I used the words “module” and “package”. I have the opinion that Rust’s pub(crate) should be the default. He replied that his language won’t consider the mid option, as it currently doesn’t have this distinction between file and unit of compilation/distribution (crate).
I like how CommonJS’s module.exports lets you choose between (or even mix?!) the “export manifest” style, the “modifier sections” style, and even the “modifier keyword” style. Specifically (hopefully I’m not misremembering …):

module.exports = { foo }                 (and define exports & non-exports below)
module.exports = { foo: (...) => ... }   (and define non-exports below)
module.exports.foo = (...) => ...        (just for the exports)

Maybe that’s too much flexibility, but I appreciated it. I can get used to just about anything.
The “export manifest” approach can be extended to also include a list of signatures that the module depends on, i.e. parameters to the module (this is different from imports, as you can instantiate the same module with different parameters).
I’m not sure if this is always the best way to program against an interface, but I miss it sometimes.
Yeah, I like the idea of parameterized modules as a basis for capability-secure libraries, i.e. you can’t just import IO, it has to be given to you, and you can be given a tamed IO that does only what you need. Probably has curious effects on testing too.
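A minimal OCaml sketch of that, with made-up names: the library is a functor over an IO signature, so callers hand it only the capabilities they want it to have.

(* the capability the library is allowed to ask for *)
module type IO = sig
  val read_line : unit -> string
  val print : string -> unit
end

(* Greeter cannot reach the real stdin/stdout on its own; it only
   gets whatever IO the caller passes in *)
module Greeter (Io : IO) = struct
  let greet () =
    Io.print "name? ";
    Io.print ("hi, " ^ Io.read_line () ^ "\n")
end

(* a "tamed" IO for tests: canned input, captured output *)
module Fake_io = struct
  let read_line () = "world"
  let output = Buffer.create 16
  let print s = Buffer.add_string output s
end

module Test_greeter = Greeter (Fake_io)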
It’s about as repetitive as trait vs impl in Rust, which is tolerable, but Rust avoids much of the repetition that would occur when separating modules from interfaces.
Lua has local for private definitions, but it’s a bit verbose for my taste.
Elixir does def for public and defp for private, which is terse but cryptic and ad hoc. Also, it has to duplicate any def-like construct this way (defmacro/defmacrop, and so on), which is meh.
Another risky option would be to blend Go and Oberon and have Rec for public definitions and rec for locals.
The ML way is one of the best tradeoffs in my experience and I think the points raised in the article are just not really an issue in practice.
it’s quite verbose. You end up saying the name of every exported declaration twice. Maybe even the full type signatures too.
There’s tooling for that; it’s quite easy, to be honest. You don’t even need to do it if you don’t want to: the compiler will infer it. In C++/Java you need to run tooling to document these things anyway.
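For instance, with a hypothetical counter.ml, ocamlc -i prints the inferred signature, which can be pasted into a .mli and then trimmed:

(* counter.ml *)
let count = ref 0
let incr_by step = count := !count + step; !count

(* $ ocamlc -i counter.ml
     val count : int ref
     val incr_by : int -> int
   paste that into counter.mli, then delete the "val count" line
   to keep the mutable state private *)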
Plus, it’s an intentional step, so you’re putting thought into how your code will be used from the outside, which is about the same amount of work as adding private/public to functions after you have a better understanding of a module/class.
Since the export manifest is separate from the declarations, they have to be manually kept in sync. Rename an exported function and you have to remember to rename it in the manifest too.
Yes, but that’s a good thing. It’s a statically defined interface, so if you rename an exported function it’ll break all over the codebase at compile time. Or, if you want explicit compatibility, you just alias it.
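A tiny sketch of the aliasing, with hypothetical names: the .mli keeps exporting the old name, and the .ml defines it as an alias for the renamed function.

(* foo.mli *)
val scale : int -> int
val old_scale : int -> int   (* kept around so existing callers don't break *)

(* foo.ml *)
let scale x = x * 2
let old_scale = scale        (* plain alias for compatibility *)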
Another thing to consider is that ML-style functors share syntax with module definitions, which saves the language from more specific and perhaps confusing syntax such as Rust’s.
One can use the tooling to avoid saying the same thing twice by hand, but there are also reasons why the type in the module and its signature wouldn’t be the same. Abstract types, for example — one of my favorite parts of ML module systems.
module type FOO = sig
  type t (* abstract *)
  val make : int -> t
  val get_content : t -> int
end

module Foo : FOO = struct
  type t = {content : int}
  let get_content v = v.content
  let make c = {content = c}
end
With the real type of t not exposed in the signature, outside users cannot construct Foo.t other than with Foo.make, and thus cannot break the invariant. I find it arguably a pretty good solution, although that feature of the module system isn’t very obvious until one learns it from someone else…
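A quick usage sketch against the Foo above:

let x = Foo.make 42
let () = print_int (Foo.get_content x)   (* prints 42 *)
(* let bad : Foo.t = {content = 0} *)
(* the commented line is rejected outside the module: the field content
   isn't visible through FOO, so the record literal doesn't type-check *)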
Precisely. Getting to define your own interface is a better system; even if it seems like boilerplate at first, it’s really not. Tooling here just gets the boilerplate-y stuff out of the way so you can refine your interface and make those choices.