Progressive Web Components
13 points by pushcx
Elena doesn’t force JavaScript for everything. You can load HTML and CSS first, then use JavaScript to progressively add interactivity.
Can anyone explain this more? If you have some data that represents a list of something, say a list of tags, and you have a Tag web component, how would you render the list without JavaScript?
I don't think it is possible to use web components w/o JavaScript. What they mean is that the web component renders immediately w/o waiting for the JS code to run.
A Progressive Web Component is [...] designed in two layers: a base layer of HTML and CSS that renders immediately, without JavaScript, and an enhancement layer of JavaScript [...].
My guess is that this helps with issues like a "flash of unstyled content", which the article mentions at the beginning.
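As I read the quoted definition, the two layers look roughly like this (the element name and behavior here are my own illustration, not Elena's actual API):

```html
<!-- Base layer: real HTML in the custom element's light DOM.
     Tags render as plain links immediately, even if JS never loads. -->
<tag-list>
  <a href="/tags/webdev">webdev</a>
  <a href="/tags/javascript">javascript</a>
</tag-list>

<script type="module">
  // Enhancement layer: upgrade the element once JS is available.
  customElements.define('tag-list', class extends HTMLElement {
    connectedCallback() {
      this.addEventListener('click', (e) => {
        const link = e.target.closest('a');
        if (!link) return;
        // Enhanced behavior: filter in place instead of navigating.
        e.preventDefault();
        // ...fetch and render the filtered list client-side...
      });
    }
  });
</script>
```

So for a list of tags, the answer to the question above would be: the list is server-rendered as ordinary child HTML, and the component's JS only layers behavior on top of it.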
The page super makes it sound like JavaScript is part of the progressively added interactivity and not in the critical rendering path. If that’s not the case then it’s either very misleading or I’m dumb. Or both.
I thought it was misleading as well, but it turns out I was mistaken. As I understood it, progressive enhancement originally meant that things are indeed functional w/o JS and work better as more features are supported. But the original coinage of Progressive Enhancement (see slide 10) doesn't mention JS explicitly. MDN's definition also mentions that the goal is to have the web usable on older devices.
The word progressive in progressive enhancement means creating a design that achieves a simpler-but-still-usable experience for users of older browsers and devices with limited capabilities,
But as time moves forward what constitutes an "older device" moves forward as well.
It seems like the author is trying to hide their LLM usage. CLAUDE.md is in the .gitignore, and there has been a lot of LLM tool config.
I don't think the author is trying to hide their LLM usage. Elena comes with a package for mcp servers. https://github.com/getelena/elena/tree/main/packages/mcp
Also, from the commits it's clear that LLMs were involved. Maybe they gave Claude and Copilot a go, didn't find them useful, and removed the files?
Oh interesting! Another possibility is they're still using LLM tools, but there's something private about the context docs, e.g., maybe they say stuff like, "Run ~/bin/mylint.sh before each commit." This example is somewhat contrived, because I think there are better-hygiene ways to handle user-specific stuff than putting it in your context docs? I'm not an expert on what the best practices are.
Another thought that's crossed my mind is whether organizations might someday keep context docs under wraps as a competitive advantage. SQLite already does something analogous: the library code is public domain, but if you want to make a change, you have to pay up for their unit tests and/or engage their consulting services.
A CLAUDE.md in a gitignore doesn’t entitle you to be an accusatory asshole (here or in your git issue).
They ship an mcp package and the terminal logo is an homage to Claude code so I see no evidence of trying to hide llm usage.
Lobsters is really becoming hard to engage with because of this vocal minority (not people who dislike LLM usage but the minority who feels like they’re on god’s crusade to smite all LLM users)
A CLAUDE.md in a gitignore doesn’t entitle you to be an accusatory asshole (here or in your git issue).
I wouldn't call it an accusation. The git history contains LLM tool files, which shows these tools were used. It's also a fact that the maintainer tried to hide their LLM agent usage, as these agents add themselves as co-author, to my knowledge. You have to actively change their configuration to stop them from doing that.
I also think it's important to point out when software is written by LLMs, because that can have drastic legal implications. We are still waiting on rulings regarding copyleft and copyright. In the worst case (for the LLM industry and the LLM-using industry), every piece of software that uses LLM-coded components could turn into AGPL code overnight. You certainly don't want that.
You also certainly don't want to lose your copyright because you failed to document your human contribution to the LLM output. (And that's probably what will happen in most jurisdictions, because only human creations can have copyright.)
All that doesn't even consider the ethical implications of LLM training (illegal & DDoS-like crawling, moderation by click workers), operation (climate impact, water usage, circular investments) and more.
We are playing with fire next to a gas tank, and there are billions of dollars being spent on advertising flamethrowers. I know this hype is hard to ignore, but (in your own interest) be careful.
It's also a fact that the maintainer tried to hide their LLM agent usage, as these agents add themselves as co-author, to my knowledge. You have to actively change their configuration to stop them from doing that.
It depends a lot on how you use these tools. If you don't let the tool make commits independently, then it'll never add itself as co-author. So if you're only using the LLM to generate code, or offer advice, or research a codebase, and then reviewing whatever it suggests before committing, then the default is no co-authorship.
So this isn't really a smoking gun of someone trying to hide their usage of LLMs, rather it's a fairly common side-effect of using LLMs in a relatively conservative, cautious way.
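For what it's worth, the trailer isn't hard to turn off either. If I remember Claude Code's settings schema correctly (worth double-checking against the current docs), a single flag in `~/.claude/settings.json` disables the co-author line entirely:

```json
{
  "includeCoAuthoredBy": false
}
```

So the absence of co-author trailers doesn't by itself distinguish "hiding usage" from a one-line preference.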
If I had to guess, I think folks are reacting to your tone, not to your stance on LLMs.
The conclusion of intent ("the author is trying to hide…") goes a bit beyond what the evidence supports, as others have already pointed out. Your conclusion could still be correct! But it's still stating a definite motive, when there are multiple plausible explanations for what we're seeing. That might be why the comment came across as more accusatory than you meant.
You said the author’s intentionally deceiving people. That’s an accusation whether you want to call it that or not.
Your tone, the incitement to brigading, and the fact that your accusation is based on something obviously wrong make it pretty rude.
Keep it off this site, and stop litigating LLM usage in inappropriate places.
I'm supportive of your right to do forensics and announce your findings. And deceit in general is a bad thing.
But also, this feels really weird to file a bug and link it here? Both on a "What do you honestly expect a deceitful person to do about it?" level, as well as a "Doesn't Lobsters have rules against behavior that could be interpreted as brigading" level.
I agree with your opinion on linking the issue and replaced it with a direct link to the deleted files.
A Progressive Web Component is a native Custom Element designed in two layers: a base layer of HTML and CSS that renders immediately, without JavaScript, and an enhancement layer of JavaScript [...]
I found myself needing exactly that recently for my Atom feeds. Web Components won't render in a feed reader, since those don't execute JavaScript.
Most of my writing is on sites that offer feeds. I've started thinking about how to add custom features with Web Components that degrade nearly to HTML.
Unlike most web component libraries, Elena doesn’t force JavaScript for everything.
Indeed, I'm surprised that most web component libraries try to mimic React's API, e.g. frameworks like Lit and Stencil. The only exception I know of is Catalyst, the framework GitHub used before their UI went downhill.
I've been playing with web components to evaluate them. I'm interested in using them to reach that 20% extra complexity for specific parts of a page without having to embrace a full SPA. After trying to write vanilla web components, my main gripe is the template story. Templates and slots result in more code than I care for, and string interpolation using template literals is asking for XSS injections.
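The XSS risk with template literals is easy to demonstrate; here's a minimal sketch of the kind of escaping you end up hand-rolling (the `escapeHtml` helper is my own illustration, not part of any library):

```javascript
// Naive interpolation: attacker-controlled data lands in markup verbatim.
const userInput = '<img src=x onerror=alert(1)>';
const unsafe = `<li class="tag">${userInput}</li>`; // XSS vector

// Minimal hand-rolled escaping applied before interpolation.
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, (c) => ({
    '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;',
  }[c]));
}

const safe = `<li class="tag">${escapeHtml(userInput)}</li>`;
console.log(safe); // <li class="tag">&lt;img src=x onerror=alert(1)&gt;</li>
```

Tagged-template libraries do this escaping for you, which is part of why so many component libraries end up shipping their own templating layer.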
I was thinking of looking into ClojureScript to see if something like Spinneret's deftag could be implemented. It looks like a sweet spot. When using spinneret+parenscript, deftag is compiled down to createDocumentFragment calls.
@elenajs/cli: CLI for scaffolding Elena web components.
I'd rather your framework/library didn't require so much boilerplate that scaffolding tools are needed. Even in Rails, I only use rails g migration, and that's mostly because of the timestamp in the name.