What are some interesting projects that you've done this year?
52 points by runxiyu
Perhaps similar in spirit to the "what are you doing this week" posts --- what are some interesting projects you've done this year?
Not that impressive, but here is a simple regex engine (with a web visualizer)
I'd say that's very impressive: nice code and a very well done visualisation. I love the state diagram.
In general a lot to do with parsers, tooling and programming languages:
A collection of libraries I needed and wanted for other projects:
And of course a bucket load of blog articles:
Writing two articles for Paged Out! #7 ranks as the most memorable thing I did. It was fun and somewhat challenging to do the research and cram everything into a single page of text, without leaving anything important out.
I also wrote two articles, for Paged Out! #6. It was really fun! It was a bit hard to cram things into one page, but not too hard: my first drafts came in at 1.5-2 pages, so messing around with the layout in Typst and such was enough.
nixfmt, so I didn’t have to have a11y issues with 2-space indentation of Nix code, which finally merged early in 2025
flake-utils
I also toyed around with ATS2 & Factor, but mostly a lot of Nix. I would like to transition from front-end to Nix (maybe Ops?)… if you know anyone hiring remote, contact me.
I also decided to try baking breads & have gotten a bit better at that too. The goal is to make my own version of Detroit-style pizza, my favorite.
I was diagnosed with bipolar 2 this year after a couple years of symptom escalation.
I've been disappointed with the tracking available in most of the recommended apps, so I've been building my own (which could in itself be a bipolar project). I've been experiencing rapid/mixed cycling, which is very difficult and doesn't track well in the common apps that just want a daily number between "sad" and "happy". It's not easy to visualise "my thoughts were racing and I was full of energy, but I was really, really sad".
It's been really interesting learning about the current state of "web on mobile" technologies. Capacitor can go surprisingly far, even interfacing with the health API via a plugin to grab sleep data. The whole thing is written in Clojurescript, and it's been fun flexing my frontend skills again which have gotten rusty.
Konsta UI and Ionic are really interesting mobile UI projects. It's great building something that does feel fairly native and calls out to native UI such as date pickers where it makes sense.
I did have a play with ClojureDart to build it with Flutter, as the current Bipolar UK app is, but I found myself quite lost compared to the web knowledge I already had. I'm also really unhappy with the results of the Flutter apps I've used, but that could be because they were bad. For example, Flutter has its own date picker, which doesn't feel as smooth as the native Android one.
And I'm racing the clock with my latest project, but it's not done yet
Could you elaborate on the PostgreSQL-as-a-library one? That sounds very interesting. I've played with similar things before, like the binaries that some people publish that you can download and use on the fly.
(There's also PostgreSQL Single-User Mode, which I've always found fascinating, but didn't seem really usable for embedding purposes.)
The PoC is basically single user postgres as a library. All the work went into setting up some global state. I wrote about it here
I also did some work to make sure extensions work, it's pretty nice.
It's not yet able to be stopped/restarted, and the "initdb" command is not implemented, so you need to create the db manually once.
Ohhh, nice, that's supercool!
Quite often I wish I could pull in PostgreSQL easily instead of SQLite (stronger and static-er typing, tons of features, etc.)
I looked at single-user mode, but when using it as a subprocess I didn't find a convenient way to do query parameters; I only saw how to send queries as text. But with your approach you can!
Kudos!
That was the year of ditching Rust in favour of the Odin language after a seven-year journey. I accidentally became a compiler bug hunter through the use of niche features while working on a full rewrite of Submerge, our internal SCM inspired by Fossil and Subversion, and on an LSP-related package collection that bundles a URI parser, JSON-RPC, and related supplementaries. I eventually wanted to make both widely available but ran out of time; we'll see next year.
Wow! What made you decide to go for Odin and ditch Rust?
I tried the language for the first time 6 years ago, but it was immature and unstable. With the lack of stable alternatives (for systems programming), I had to stay with Rust, which I liked more than C and C++ in terms of syntax and features (I'm not a fan of async and RAII). The feature-rich core package collection, the context system, manual memory management, and now also a proper Objective-C interface are the features I like most. Directory-based packages allowed us to independently track the history of changes for each package, and also to handle file/package locks in non-breaking ways.
Have you had the chance to compare Odin vs. Zig? I have only a casual interest in both (playing around), but both seemed to fit the “C, but more convenient” idea.
Between 0.4 and 0.6, Zig suffered a lot from missing docs, frequent glibc updates breaking the compiler binary, and it required too many resources to build. That, combined with bugs and huge breaking changes on every release, quickly pushed me off. For what we currently do, the Objective-C interface was a great way to implement rendering logic in Odin and then glue it into SwiftUI views to combine with a native frontend.
This was already posted on lobsters but the most interesting project for me personally was zmx: https://github.com/neurosnap/zmx
It was a big learning project for me. I live in the terminal and I’ve been dying to have a reason to dig deeper into its internals. I also wanted to learn zig so session persistence seemed like the perfect fit.
I rewrote it 3 times until I found something I really liked. I learned a lot about terminal emulators, ghostty, PTYs, ansi escape codes, zig, and Linux syscalls. It was a huge success for me personally.
Well, it's only been this year for an hour and 40 minutes, but I did work on a difficult jigsaw puzzle with my family for about 30 minutes, and went for a little driving tour of the city I grew up in.
For me it's starting the Lindenii Project with a lot of subprojects I've wanted throughout the years, although initially I took a breadth-first approach. I hope to focus on getting more of them to a further stage of development next year.
If you give me some leeway (technically forked in Sep. 2024) then neatocal (demo), a JavaScript port of Neatnik's single page calendar with extra features and options.
Forked and updated a lunar calendar to include moon images and some other options (forked from Codebox's lunar calendar).
Helped with a Unix Magic poster annotations project (src).
Won "3rd place" for the Anna's Archive visualization project (submission).
I built Optique, a CLI argument parser for TypeScript inspired by Haskell's optparse-applicative (if you haven't seen it, worth a look). The motivation came from working on Fedify, an ActivityPub server framework I've been developing—I needed a CLI tool but found existing libraries couldn't express complex constraints like mutually exclusive option groups or dependent arguments in a type-safe way. They're all configuration-based, so you end up scattering validation logic throughout your code.
Optique uses parser combinators instead. You compose small parsers together, and TypeScript automatically infers discriminated unions from the structure. For example, or(serverMode, clientMode) where each mode has a constant() discriminator gives you a proper tagged union without writing any type annotations.
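As a toy sketch of the combinator idea (`or` and `constant` match the names mentioned above, but everything else here, including all signatures, is my own illustration, not Optique's actual implementation):

```typescript
// A "parser" is just a function from argv to a typed result, or null on no-match.
type Parser<T> = (argv: string[]) => T | null;

// Matches `--name value` anywhere in argv (hypothetical helper).
const option = (name: string): Parser<string> => (argv) => {
  const i = argv.indexOf(`--${name}`);
  return i >= 0 && i + 1 < argv.length ? argv[i + 1] : null;
};

// Attaches a literal discriminator to a parser's result.
const constant = <K extends string, T>(kind: K, p: Parser<T>): Parser<{ kind: K; value: T }> =>
  (argv) => {
    const r = p(argv);
    return r === null ? null : { kind, value: r };
  };

// Tries each branch; TypeScript infers the union of the branch result types.
const or = <A, B>(a: Parser<A>, b: Parser<B>): Parser<A | B> =>
  (argv) => a(argv) ?? b(argv);

const serverMode = constant("server", option("port"));
const clientMode = constant("client", option("host"));
const cli = or(serverMode, clientMode);
// cli's result is inferred as
// { kind: "server"; value: string } | { kind: "client"; value: string } | null
```

The point is that the discriminated union falls out of the composition: no manual type annotations, and `switch (result.kind)` narrows each branch.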
This was my first real dive into advanced TypeScript—conditional types, mapped types, complex type inference. Learned a lot of tricks along the way.
Now I'm facing a harder challenge: supporting both sync and async modes. A user wanted async suggestions for shell completion (think: fetching Git branches). The tricky part is that when you combine parsers, if any parser is async, the combined result must be async too. In Haskell you'd parameterize over Identity vs IO, but TypeScript doesn't have higher-kinded types. I've been experimenting with a mode type parameter (Parser<"sync" | "async", TValue, TState>) and conditional types to make the async-ness propagate through combinators automatically. Still working through the ergonomics.
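One way the mode-parameter idea can look, as a rough sketch (this reflects the experiment described above, not a finished design): a conditional type makes the result `Promise<T>` only in async mode, and a `Combine` helper makes async-ness infectious.

```typescript
type Mode = "sync" | "async";

// The result type depends on the mode: Promise<T> only when async.
type Result<M extends Mode, T> = M extends "async" ? Promise<T> : T;

// If either operand is async, the combination must be async.
type Combine<A extends Mode, B extends Mode> = "async" extends A | B ? "async" : "sync";

interface ModedParser<M extends Mode, T> {
  mode: M;
  parse(input: string): Result<M, T>;
}

const lengthSync: ModedParser<"sync", number> = {
  mode: "sync",
  parse: (s) => s.length,           // plain number
};

const lengthAsync: ModedParser<"async", number> = {
  mode: "async",
  parse: async (s) => s.length,     // Promise<number>
};

// Type-level check: combining sync with async is forced to async.
type Combined = Combine<"sync", "async">;  // resolves to "async"
```

The runtime combinators still need care (you cannot branch on the conditional type at the value level without some casting), which is exactly where the ergonomics get tricky.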
I thought there might be a niche for a typed, updatable HTML template language that generates a tiny amount of code: https://vegen.dev
traceboot: a bootchart-like collector using ftrace, and therefore very light, so that you can map all processes being spawned, starting with init; posted on lobste.rs.
tts-pages: a very early/personal framework to extract text from webpages and generate speech. I use it to "read" the general news which typically lasts for 1 to 2 hours a day while doing various chores or walking outside (which is my exercising of choice). My typical flow is to collect articles either throughout the day, or before doing chores or going outside, then I throw everything at my program and get audio generated within a minute or two; the generation rate is roughly 1 minute per hour of audio (and then I can speed up the playback if I want).
And apart from that, spent a fair amount of time on my new home server setup (which hosts my public-facing services), my website+blog using soupault, experiments to discover formulas for size/quality relations in svt-av1, and photography.
I had traceboot on my list to test out for a while. Can it map out earlyboot? I was trying to measure the relative timings of booting until pid1
It maps every process, even kernel ones. That's one of the very interesting aspects of using ftrace on the kernel command line to do the collection.
BTW, reworking the visualization part is on my radar, because Perfetto can be too unreliable (it's developed for Chrome and Android, not for me, so it can break for my uses).
In March I used Claude Code to help create a calorie/macro, weight and sleep tracker tailored to my needs, including scraped data on the food I eat, and an easy way to add new foods (paste an url or fill a form). Took about two weekends and most evenings in between, and I haven't had to change things since. Absolutely no security (100% production ready my ass), so it stays behind Tailscale with all my other private stuff. During the first 7 months of using it I lost about 13 kilos, so counting works just as well for me as intermittent fasting did. That said, I find IF a lot easier to stick to, so I stopped counting in September.
In April I made a searchable movie database that trawls my NAS. Used it while creating it, then never again, don't seem to have time for movies anymore :/
At work I've scripted 95%+ of all things I normally do in Azure and Intune, plus stuff I do so rarely that I don't even remember having done before, to the point where I really have to find a way to make my modules and functions discoverable.
Edit: oh, and a lot of small Go utils - 20+ - for all kinds of things (next subway home, weather forecast, exchange rates, Todoist stuff, etc.).
DREI, a variant of Emacs for WYSIWYG editing of HTML:
https://speechcode.com/blog/drei
Schmeep, an Android app that wraps Chibi Scheme in a Bluetooth-accessible REPL. The Bluetooth REPL's client side is a Linux command-line program. The app's UI is a WebView that you program in Scheme using a simple framework inspired by HTMX, but where the "server side" is actually Scheme code running on the phone.
https://speechcode.com/blog/schmeep
Both projects work and are already useful to me, but are in early stages.
I've created what initially was a mood tracker/journal, but it grew to manage my notes, bookmarks, to-dos, with categories and a search for everything.
Here are a few screenshots from my phone. I'm still figuring out how to sell this (I'm considering making the source available), but if you want to give it a try, please DM me and I can set up a free account for you.
I did many things this year (like finally getting my maths bachelor's degree!). Here are some of the most fun projects I worked on:
I finally did something in C + CMake (I also learnt Spack along the way, which is actually pretty nice for dealing with C deps once you get the gist of it) for a scientific/parallel computing course
While working on my thesis I wanted to try out this (very) old software called Knotscape so I updated some stuff to revive it. Now I can (sadly) say I read C code that uses the register keyword...
During this year I continued my streak of not using React and Tailwind directly. Still, I wanted to try the atomic / CSS-in-JS coding style, so I wrote a small library called preact-css-extract: a Vite plugin + Preact option hooks that provide CSS template literal strings with compile-time extraction. This was actually very nice to use for some projects.
I also did some experiments with multi-pass shaders for rendering nice mathematical knots for a future project.
Oh and I finally updated my website with some nice procedural art animations (before these were a bit too cpu intensive but now they should be mostly fine).
I also started to do some small projects in "vibe engineering" style. To keep them simple I opted for the single-html-page format with no build systems. This will make it easy to move these projects elsewhere if I need to, and gives a nice balance of readability (just press Ctrl+U for the source) and fast project prototyping.
Here are some of them: MIDI Visualizer; Markov chain editor/simulator; Dehn twist visualizer.
I think two:
I designed and implemented an algorithm for mesh formation in a local P2P network, i.e. connecting peers found via DNS-SD using a reasonable (if not minimal) number of point-to-point connections. The tricky part is that the logic is distributed, so without a leader, all the peers have to make their own decisions concurrently. (This included working through a bunch of Imposter Syndrome, including justifying to myself why none of the existing algorithms I found in papers fit our use case.)
I implemented a SQL++ (dialect of SQL adding operations on unstructured JSON data) query engine in TypeScript on top of JS's IndexedDB, which is a lower level key-value db with indexes. (More imposter syndrome!) It’s quite lacking in optimizations at this point, but I’m rather proud of it.
(Both of these are now shipping in Couchbase Lite.)
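The SQL++-on-IndexedDB engine above involves a classic planning decision; here is a hedged TypeScript sketch of that decision in isolation (my own simplification, not Couchbase Lite's actual code):

```typescript
// A single comparison predicate from a parsed WHERE clause (illustrative shape).
type Predicate = { field: string; op: "=" | ">" | "<"; value: unknown };

// Given a predicate and the names of available indexes, decide whether the
// engine can use an IDBKeyRange lookup on an index or must fall back to a
// full cursor scan over the object store.
function planQuery(
  pred: Predicate,
  indexes: string[],
): { kind: "index" | "scan"; detail: string } {
  if (indexes.includes(pred.field)) {
    return { kind: "index", detail: `IDBKeyRange on index "${pred.field}"` };
  }
  return { kind: "scan", detail: "openCursor over the whole store" };
}
```

In the real engine the interesting work is everything around this: combining predicates, handling JSON paths into unstructured documents, and costing the alternatives.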
In personal projects, I learned and came to love Kotlin, and implemented Prolly Trees and a syncable persistent data store based on them. Oh, and almost forgot, I spent the early part of the year implementing a Smalltalk-like language in TypeScript, just because I felt an itch to.
I started working on Glide, a super extensible Firefox-based browser, and released it a couple months ago. It's been super fun diving deep into the internals of Firefox, and seeing all the interesting configs people have written :)
I recently finished a small library for working with discrete probability distributions in TypeScript. It was quite fun—the implementation is based on an Oleg paper, so the bulk of the work was understanding the algorithms described there and deciding on the best way to represent things in TypeScript. I decided to represent the main algebraic data type at the core of the paper in a fairly object-oriented fashion, as an object hierarchy with methods for matching on the contents of values, and was happy to find that it was both aesthetically pleasing and that it significantly simplified the rest of the implementation.
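The object-hierarchy encoding can be sketched like this (illustrative names, not the library's actual API): the ADT's constructors become subclasses, and a `match` method plays the role of pattern matching.

```typescript
// A finite distribution: either a certain value, or a weighted mix of
// sub-distributions.
abstract class Dist<T> {
  abstract match<R>(cases: {
    certain: (value: T) => R;
    weighted: (branches: Array<[number, Dist<T>]>) => R;
  }): R;

  // Flatten to a list of (probability, value) pairs, defined once on the
  // base class purely in terms of match.
  support(): Array<[number, T]> {
    return this.match<Array<[number, T]>>({
      certain: (v) => [[1, v]],
      weighted: (bs) =>
        bs.flatMap(([p, d]) => d.support().map(([q, v]): [number, T] => [p * q, v])),
    });
  }
}

class Certain<T> extends Dist<T> {
  constructor(private readonly value: T) { super(); }
  match<R>(cases: { certain: (v: T) => R; weighted: (b: Array<[number, Dist<T>]>) => R }): R {
    return cases.certain(this.value);
  }
}

class Weighted<T> extends Dist<T> {
  constructor(private readonly branches: Array<[number, Dist<T>]>) { super(); }
  match<R>(cases: { certain: (v: T) => R; weighted: (b: Array<[number, Dist<T>]>) => R }): R {
    return cases.weighted(this.branches);
  }
}

// A fair coin: 50% heads, 50% tails.
const coin = new Weighted<string>([[0.5, new Certain("H")], [0.5, new Certain("T")]]);
// coin.support() yields [[0.5, "H"], [0.5, "T"]]
```

The appeal of this style is that operations like `support` are written once against `match`, so adding algorithms doesn't require touching the constructor classes.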
I was originally planning to write a blog post on the implementation, partially to show how the core of the library can fit in a few hundred lines and partially to show how nice the encoding I chose turned out, but I think I’ve gotten nerd-sniped by another project so I don’t know if I’ll get to it any time soon.
For years I wanted to make an Advent of Code-like website, so I finally spent the first few months of 2025 creating easters.dev. And then I gradually moved my infrastructure from OpenBSD and FreeBSD to NixOS.
Working on a custom datamesh using Kafka, a schema registry, Airflow, and pola.rs, and building a custom upper ontology loosely based on Netflix's, in order to build up better data governance, self-service, ownership concepts, and more, using open-source and vendor-neutral tech, to try to build a scalable data platform for a reasonable cost. Still working on it, but it's pretty promising at this point.
Why developing your own datamesh? Real use cases or self-improvement & education?
I mean I’m treating datamesh as a loose definition of streaming and batch processes centered around a UDA here. Real use case in that our data isn’t particularly simple and we have a large insights and analytics team that needs stronger tooling and access with appropriate guardrails in place. We have an established batch ETL system and rather than a wholesale refactor we need a valid path to grandfather things into the datamesh. And because we want to do it as vendor neutral and as cost efficiently as possible.
It started out with a comment on the orange site and a feeling that these AI coding SaaS providers constantly had downtime. It's been a fun project over the last couple of months: a simple SQLite backend in Go, and it's really nice to have AI assist with all the different scrapers.
Interesting - how do you test for availability? Just a GET of the main page for each supported service or something more sophisticated (since you mentioned scraping)?
I scrape the status pages of the different providers; most of them use Statuspage from Atlassian, like https://status.claude.com/, which is an open JSON feed, easy mode. Others use RSS feeds that I parse events from (AWS, Lovable, Bolt.new); for some I parse the custom HTML and filter severity based on the color of the <div> (fragile, but OK until they change something).
Everything is self-reported by the providers, so I have a feeling the ones with the highest downtime are also the most honest. For example, for AWS I get the RSS from all regions with Bedrock and take the worst event from each, but there's just no downtime. A bit suspicious, but it's AWS, so maybe it's just that rock solid? Hard to say.
Getting a free account with each provider and probing with a low token count prompt at regular intervals would be cool to do at some point.
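For the Statuspage "easy mode" case, the check really is small; here's a sketch in TypeScript for illustration (the actual backend is Go, and the field names below follow the public Statuspage v2 summary API, so treat any provider's exact shape as an assumption):

```typescript
// Shape of the Statuspage v2 status summary (e.g. /api/v2/status.json).
interface StatuspageSummary {
  status: {
    indicator: "none" | "minor" | "major" | "critical";
    description: string;
  };
}

// Given the raw JSON body of a status endpoint, report whether the
// provider is self-reporting any ongoing incident.
function hasIncident(rawJson: string): boolean {
  const summary = JSON.parse(rawJson) as StatuspageSummary;
  return summary.status.indicator !== "none";
}

// In a real scraper you'd fetch the provider's status URL on a schedule
// and feed the response body to hasIncident.
```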
I got fed up with buggy I2C hardware, so I wrote a portable bit-banging implementation in Ada. Then I got a bit carried away and added SPI, UART, MMC… Modern MCUs are fast enough that you can do quite a lot without resorting to counting instruction cycles.
I did some interesting things with tmux config files. This year I worked out how to play video and then use that method to implement Snake entirely in tmux.
I released a small rat-themed Half-Life mod for a mapping competition: Rodent Solutions.
The pain of working with the GoldSrc engine motivated some more work on
goldutil.
I started another project to track my indoor climbing plateau progress. There are a few that exist already, but they are either non-functional or only work if the gym pays for it. I wanted something a little less user-hostile, more accessible (a simple PWA that also works on desktop), without selling or giving away user data through tracking, and with no JS if I can help it.
I hope to release it before spring.
I built Lobsters for Science; a git-based automatic scientific log; and am currently working on a domain-agnostic scientific data archive (and could use some help).
The goal of the last one is to make it easy for any lab, institute, or organisation to set up a scientific data archive that meets the standards needed for training AI-for-science ML models.
It's a hosted 'uFaaS platform' for Lua scripts.
I spent most of the year building this after leaving a job. It's something I've been thinking about making for myself for almost a decade now, ever since webscript.io (not linked because it's SEO spam now) went down. I've got ADHD, so I'm super proud of myself for actually sticking with it and releasing an MVP. Unfortunately, the response was overwhelming in its silence. So now I'm debating what I want to do with it. It does what I want it to do, so I don't need to do anything, but I feel like I might as well open-source it; I'm just not sure if I want to simplify it and make it possibly actually useful to others first.
I wrote a provably terminating postfix calculator, and expanded that into a mini Forth-like virtual machine with a lot of proved properties and behavior in Ada/SPARK.
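The postfix-evaluation core itself is small even without the proofs; here's a plain TypeScript sketch for flavour (none of the Ada/SPARK termination or overflow guarantees apply to this version):

```typescript
// Evaluates a postfix (RPN) expression over integers with +, -, *.
// The loop trivially terminates because it consumes one token per step;
// the SPARK version proves this (and stack safety) mechanically.
function evalPostfix(tokens: string[]): number {
  const stack: number[] = [];
  for (const t of tokens) {
    if (t === "+" || t === "-" || t === "*") {
      const b = stack.pop();
      const a = stack.pop();
      if (a === undefined || b === undefined) throw new Error("stack underflow");
      stack.push(t === "+" ? a + b : t === "-" ? a - b : a * b);
    } else {
      stack.push(Number(t));
    }
  }
  if (stack.length !== 1) throw new Error("malformed expression");
  return stack[0];
}

// evalPostfix(["2", "3", "+", "4", "*"]) evaluates (2 + 3) * 4 = 20
```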
Most time was spent on nitro, a tiny but flexible init system and process supervisor, which now works on actual hardware. :)
Mostly I've been struggling to wrangle my ADD with medication for the first time. I spent my 20s managing it with a lot of cardio, but anyway.
I've made decent progress on tomo, which is some kind of OS project. This year it grew a nice Plasma desktop and adopted Nix (Lix) as the primary build system.
I haven't done anything particularly exciting with readable yet, but I have some aspirations to use it as a base for building a friendly sexp-based language with ideas taken from Erlang/Elixir and APL/BQN.
I spent a little time sketching some ideas for building Secure Scuttlebutt-like social PKI on BitTorrent, using some ideas borrowed from GitTorrent. I didn't get far, but maybe I'll get somewhere with it next year. My main question after PKI is: what are the scaling obstacles? We know BitTorrent scales beautifully, but why do so many P2P git forges choke on large repos? Let's find out!
Looking at commits this year, I did lots of maintenance on my "usable" projects, but the only things I've really "created" this year are a tool that converts a Raspberry Pi Zero 2W into a virtual USB drive for ISOs and one that converts a Windows ISO into a bootable USB image... which are a bit clunky, but work for me better than Ventoy... and help me install Windows with no prior Windows to create media.
p.s.: I very much prefer this format of "what have you done" over "what will you do" :)
I had ad-hoc scripts for sandboxing my dev setup. Over the last few weeks, I formalized it as https://github.com/ashishb/amazing-sandbox/
code input started as an online merge conflict editor and is now slowly pivoting to code owners and an alternative to GitHub Actions on Cloudflare Workers/Containers. The back end is fully Rust/WASM, with the plan to have everything executing as a "serverless task".
The amount of work to run a business turned out to be much more than I originally anticipated. I was planning to launch in a "few months", but getting the product functional while handling other stuff (i.e. SEO, authentication, the GitHub app, etc.) has turned this into a one-year-plus endeavor, and I am not quite close to launching yet.
The source for the beestuff is here: https://github.com/rumpelsepp/bienensteff.de
I don't exactly have anything to show (except this HTML snippet you can try, a gist for a two-axis CSS grid scroller), but lately I've been really caught up in implementing a CSS grid / subgrid based virtualised scrolling table. The basic idea is that at the top level our CSS grid defines the columns but not the rows, and CSS subgrids are used to inherit the column definitions all the way down to the actual row-rendering virtual content window. That virtual content window is a CSS grid that gets moved inside the "full content list" using translate, and it defines the grid rows and contains a static set of cell elements.
Each cell element uses grid-column to define which column it occupies (this is static and can be set using a class), and uses order to define which row it occupies relative to other elements occupying the same column. This way there is never any churn of elements in the virtual content window: whenever it is moved using translate to adjust for scrolling of the "full content list", some cell elements' contents get rewritten and their CSS order property gets changed to match their actual "data source index". The elements form a sort of rendering ring buffer, with the order properties determining in what order the browser should actually lay the elements out in their respective columns.
The result is truly a beautiful beast, combining really snappy scrolling (at least on my dev machine) with close to zero garbage generated and minimal re-rendering effort. Also, filtering and sorting becomes pretty trivial as you just add an "index mapper" between the "shown index" and "actual data row index". When the index mapper changes, do a full re-render of the virtual content window, and just always access data of each row not based on its "shown index" / CSS order property but the mapped index.
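The bookkeeping described above boils down to two small pure functions; here's a sketch with my own naming (not the author's code):

```typescript
// Each data row maps to a fixed pool slot (the "ring buffer"), and the
// element's CSS `order` is just the data row index, so the browser lays
// the reused elements out in the right sequence within each column.
function slotFor(dataRow: number, poolRows: number): { slot: number; order: number } {
  return { slot: dataRow % poolRows, order: dataRow };
}

// On scroll, only the rows that newly entered the visible window need
// their contents rewritten; every other element stays untouched.
function rowsToRewrite(prevFirst: number, newFirst: number, windowRows: number): number[] {
  const entered: number[] = [];
  for (let r = newFirst; r < newFirst + windowRows; r++) {
    if (r < prevFirst || r >= prevFirst + windowRows) entered.push(r);
  }
  return entered;
}

// Scrolling a 5-row window down by 2 rows only rewrites rows 5 and 6.
// rowsToRewrite(0, 2, 5) yields [5, 6]
```

Filtering and sorting then become the described "index mapper": compute the data row index from the shown index before calling `slotFor`.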
One interesting project was writing a programmatically-useful LLM-based agent library (foobara-agent-backed-command) using a framework I'd made that has nothing to do with LLMs.
Since the framework has no dependencies, in effect I made an agent using nothing but the Ruby standard library. Was certainly interesting and fun!
One interesting epiphany I had was that maybe in the future agents will be used to fill in holes in domain implementation early in a project's life and then increasingly be replaced with actual domain logic as the domain is learned and as accuracy/cost/performance become more important. Interesting to me because it was backwards from my intuition, which is that we'd automate instead of deautomate as a project progresses.
The EndBOX! https://www.endbasic.dev/endbox.html — an embedded disk image that boots directly into EndBASIC, with an accompanying case design for the Raspberry Pi.
I've been working on an electronics project centering around the MC14500, a 1-bit CPU from the 1970s. I've presented the thing at 39c3: https://media.ccc.de/v/39c3-when-8-bits-is-overkill-making-blinkenlights-with-a-1-bit-cpu
I started to build an interpreted Datalog environment (datatoad, but don't use it yet), mostly from scratch, forcing myself to do things I've not done before (lexing, parsing) and to think about other ways to do things I have done (planning, executing). Along the way, it changed shape and has become an interesting playground for exploring columnar, interpreted execution and some "modern" multi-way join strategies.
Eating one's own dogfood is a great way to get a sense for what is rough about what you've built, and .. learning from that as well. :D
I ended up 'finishing' my game electrobillion, which is an educational simulator for our electricity/power infrastructure. It's agent-based, has a live power balancing market, and a growing city. I started working on it to learn more about how the largest machine in the world works!
Soon after I wrapped it up, it helped me get a job in the industry, and I'm really excited about working with all the systems and layers I was emulating!
I still have big plans for it, such as lower level simulations of fuel markets, more interesting weather, more reports to help players make better decisions, competing power businesses, and ability to participate in forward markets. However, it already is able to demonstrate various emergent behaviours such as black/brown-outs or the duck curve.
The most interesting technical project I’ve been working on most of this year is for a client, and it’s an RDP (MS Remote Desktop) library in Swift for cross-device Apple. Aside from some early, light parsing of DER/BER using a different library, it’s largely been implemented using swift-parsing which has been a joy, though not without challenges. I’ve done a fair amount of parsing, mostly programming language tinkering stuff in C and Janet PEG, but not a ton of binary stream parsing. It’s been me, the 450-page PDF spec, and an Xcode window, developing against xrdp running in Debian on a UTM/QEMU VM. It has allowed me to work offline in a very iterative way. It’s nearing beta and it’s really stretched my skills on the parsing and FP side. The graphics stuff is pretty straightforward (I’ve done a lot of that, either earlier for this app or for the early Mapbox SDK). It sounds monotonous but I find this level of work interesting. The protocol is vast but well-written and you can get away with subsets of it for certain use cases.
I've almost finished the first draft of my alternate-history Pacific rim punk novel. Only a few chapters left, but it's going to bleed into 2026. Looks like it's going to weigh in at just under 200k words in the end.
I spent this month learning the Nix language by building a little Lisp: https://codeberg.org/sstephenson/consflake
It's driven by a set of shell scripts that manage the interpreter state using content-addressed files on disk.
I started building LADSPA and LV2 adapters for the ER-301 Sound Computer. There are a bunch of existing plugins out there, and I hope to cross-compile some interesting ones to run on the ER-301.
We had some major changes in containerd 2.1 including a time-based release cadence. But I think my cash register flight tracker was the most fun to build this year.
This year I heavily focused on the sanctum protocol.
I built a library that implements the protocol, which led to a myriad of applications that my hacker friends and I built on top of it, tools we use daily now.
In no particular order:
I also built some services around it:
I talked about sanctum at SEC-T 2025, slides here.
It's been a productive, fun year, and I am looking to cap it off with a 1.0 release right after the new year celebrations end; that time is reserved for family first.
Continued with a hardware/software system overview covering debugging techniques (strategies, tactics) and a problem-class overview for system design: https://matu3ba.github.io/articles/optimal_debugging/. Currently looking through hardware + simulation options for computing. Next year: maybe verification, cloud, compiler, and (unlikely) coverage stuff, unless I can get my hands on safety-critical standard requirements (overviews).
Gave up on Marionette and started a small personal NativeMessaging FF extension instead (for externally grabbing current page HTML / do formfilling / extract chat WebUI unread counts / etc.)
Still working on https://ashet.computer, a small but practical computer system with a custom operating system.
Right now I'm trying out vibe-coding with Codex to rework my own hypertext document format (HyperDoc) that I'm using in my OS for the built-in wiki. The new version is going great, but it's not live yet.
Apart from that, I actually did "not much" this year; I primarily focused on the home computer.