trifold is a tool to quickly & cheaply host static websites using a CDN
33 points by jptsh
I built this to replace my Netlify* workflow for dozens of small static sites & thought others might find it useful. With a single config file you can trifold publish to your heart's content. Unlike free options**, it requires a bunny.net CDN account, but you can host as many of your static sites as you want: $1/month for 100GB of traffic.
*Why not Netlify? Their new billing terms, & the fact that they're adding more and more complexity when all these sites need is a place to host some HTML/CSS/JS. **Why not GitHub Pages/Cloudflare? I'd really like to help people move away from the same 2-3 companies controlling 95% of what people do online.
Oh cool! My "solution" so far has been basically find * -type f -exec curl --fail --request PUT --header "AccessKey: $BUNNY_KEY" --upload-file {} https://storage.bunnycdn.com/$BUNNY_BUCKET/{} \; but having the upload tool actually delete files that shouldn't exist anymore is definitely nicer.
Love it! I host a tiny static site hosting platform because I share similar frustrations with the other tools you mentioned.
For me, local tool install for something as simple as copying static files feels overkill, and some of these tools are not so ergonomic.
https://pgs.sh for anyone interested. I do like the idea of leveraging a CDN like bunny. We decided to build our own tiny CDN to keep costs down for everyone. It's not really at the same scale as bunny since we only have 2 locations, but it has served us well.
Did you compare the performance against anything?
I tried using Bunny for my website, and it actually got worse.
I couldn't find a way to serve pre-compressed .br files, for example, and Bunny's compression was not as effective.
The PageSpeed score dropped from perfect to mediocre.
Maybe it got better globally somehow, I wouldn't know how to tell, but for a small website with a limited audience, I think I'm better off without it.
I get a 98 on that score, FWIW. Stored and served by bunny.
I get a 100 for a page served custom by a Clojure program on a small digitalocean droplet.
Nothing scientific, really: I ran a PageSpeed test on my two largest sites and saw no noticeable difference. I haven't optimized for speed given the nature of the sites; this is "fast enough".
I think the idea here is that it's cheaper than other options (except free options) not that it's faster.
The idea is that it's a smaller European provider with a global CDN, with no Bezos in sight. Netlify still has a free tier, can't get cheaper than free, but it's hosted on AWS.
Thanks for sharing this!
If I update a site that has 1000 pages, is the update atomic? Or is there a time window during an upgrade from A to B where users might see files on my site from version A alongside version B?
I've considered switching away from Netlify, but I can't figure out a way to replicate atomic replacements on a CDN the way Netlify and Firebase hosting do.
Out of curiosity, why does trifold implement its own diffing on top of the Bunny API rather than using existing diffing in something like rsync over Bunny's FTP interface?
Not atomic, but that's an interesting idea. I think it could be done with the API by switching the storage zone behind the pull zone. I might give that a shot later.
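Roughly: deploy version B to a second storage zone, then flip the pull zone over to it in one API call, so readers see all of A or all of B, never a mix. A hypothetical Python sketch, not trifold's actual code; it assumes Bunny's pull zone update endpoint (POST /pullzone/{id}) accepts a StorageZoneId field, which I haven't verified, and build_swap_request is a made-up helper:

```python
import json

API_BASE = "https://api.bunny.net"

def build_swap_request(pull_zone_id: int, new_storage_zone_id: int, api_key: str) -> dict:
    """Build the HTTP request that repoints a pull zone at a new storage zone.

    Deploying to a fresh storage zone and swapping it in with one call makes
    the cutover (close to) atomic from the reader's point of view.
    """
    return {
        "method": "POST",
        "url": f"{API_BASE}/pullzone/{pull_zone_id}",
        "headers": {
            "AccessKey": api_key,  # account API key, not the storage zone password
            "Content-Type": "application/json",
        },
        # Assumption: StorageZoneId is an updatable pull zone field.
        "body": json.dumps({"StorageZoneId": new_storage_zone_id}),
    }

req = build_swap_request(12345, 67890, "example-key")
print(req["url"])
```

You'd still want to keep the old storage zone around briefly for in-flight requests, then delete it.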
As for why diffing via the API: this was originally written for people who wouldn't have rsync installed, so it didn't cross my mind. It'd probably be a good improvement though, at least as an option; that's the slowest part right now (though it's also not yet fully async, as deployment speed hasn't been a major priority).
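For anyone curious, an API-side diff boils down to comparing checksum maps. A minimal sketch, not trifold's actual implementation: it assumes you can build a {path: checksum} map for both sides (I believe Bunny's storage List endpoint returns a Checksum per object), and diff_site is a made-up name:

```python
def diff_site(local: dict[str, str], remote: dict[str, str]) -> tuple[set[str], set[str]]:
    """Compare local and remote {path: checksum} maps.

    Returns (to_upload, to_delete): paths that are new or changed locally,
    and remote paths that no longer exist locally.
    """
    to_upload = {p for p, c in local.items() if remote.get(p) != c}
    to_delete = set(remote) - set(local)
    return to_upload, to_delete

local = {"index.html": "aaa", "style.css": "bbb"}
remote = {"index.html": "aaa", "old.html": "ccc"}
print(diff_site(local, remote))
```

The delete set is what makes this nicer than a plain upload loop: stale files actually disappear from the zone.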
FWIW, you could also just get an account with a decent shared host and manage it using standard unixy tools. There are providers even cheaper than bunny's $1/mo minimum.
Very true, but that's a server to worry about, and this is a one-time setup. For students and certain clients the static hosting setup is ideal.
(Also SSL without any additional fuss)
And you get a standard unixy SPOF instead of a globally distributed CDN you could've been using for one dollar a month.
You might be thinking of a VPS rather than managed hosting. Just because they expose a "traditional" interface over ssh doesn't mean that they aren't using redundant, properly load-balanced servers with a distributed fs.
Besides, I believe there are several shared hosting providers that have had better uptime this year than e.g. Cloudflare ;p
Unrelated to the point above - do you actually want a globally distributed CDN for e.g. a personal site? I'd be absolutely fine with a hosting company that only has servers in one country. This also makes it easier for small players to compete, which I think I care about much more.
What’s the state of the art in cheap shared hosting, these days? I’ve been with NearlyFreeSpeech for years, and I should probably know who else is out there.
I haven't tried it yet, but several people have mentioned https://opalstack.com/, which is supposed to be founded/staffed by refugees from WebFaction after they were acquired by GoDaddy. I can't speak to Opalstack specifically, but WebFaction was by far the best shared-host experience I ever had, and I ran a number of personal and work sites on them for many years.
That said, shared hosting lives and dies by controlling quotas and resources... and I'd be worried about what that looks like in the brave new world of model-training scrapers.
keep shipping via MCP, Git, SFTP, and SSH
Wow. One of these is definitely not like the others...
This is nice, at one point I thought about building something similar on top of Bunny.
Back then I hosted some static sites using Bunny's DNS, storage zones, and CDN. I experimented with the setup for a while, and ended up doing some additional work to make the experience closer to what you get out of the box with GitHub/Cloudflare Pages, Netlify, Vercel, etc. This included serving only canonical URLs and caching HTML indefinitely at the CDN's points of presence, with automated cache purging whenever a page changed.
I don't know how much has changed since then because I moved back to hosting on a server, but it worked well enough. There were definitely pain points around edge rules: the configuration was terse and there were gaps in the documentation for some of the rules syntax. Similarly, S3-compatibility support for storage zones was supposedly always just around the corner.
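The purge-on-change part was roughly this shape. A hypothetical sketch, not my original code: it assumes Bunny's per-URL purge endpoint (POST https://api.bunny.net/purge?url=...), and purge_requests plus the example domain are made up:

```python
from urllib.parse import urlencode

API_BASE = "https://api.bunny.net"

def purge_requests(site_base: str, changed_paths: list[str], api_key: str) -> list[dict]:
    """One purge request per changed page, so stale cached HTML never outlives a deploy."""
    reqs = []
    for path in changed_paths:
        full_url = site_base.rstrip("/") + "/" + path.lstrip("/")
        reqs.append({
            "method": "POST",
            "url": f"{API_BASE}/purge?" + urlencode({"url": full_url}),
            "headers": {"AccessKey": api_key},
        })
    return reqs

for r in purge_requests("https://example.com", ["index.html", "blog/post.html"], "example-key"):
    print(r["url"])
```

With indefinite HTML caching at the edge, the changed-files list from a deploy diff is exactly the set of URLs you need to purge.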