What are you doing this week?

8 points by caius


What are you doing this week? Feel free to share!

Keep in mind it’s OK to do nothing at all, too.

jjude

We have been homeschooling our kids. Homeschooling in India is not that widespread, so when a national newspaper covered our experiment, I got a lot of questions about what we were doing. For a while I wrote blog posts answering them.

Now that I've written quite a few posts (and given talks), I thought of writing a book. I've just written two chapters. The draft lives here: https://www.jjude.com/books/hs/

runxiyu

Continuing implementing furgit, my experiment at writing an architecturally clean and performant Git library in Go. Considering applying for NLnet funding!

csomar

Helping a friend figure out an alternative route and temporary accommodation. He was supposed to fly through Qatar at the end of this week, but that's looking unlikely for now. Hopefully the chaos stops soon.

datarama

I have the week off - but unfortunately at the moment my depression is beating my spite, so my spite-driven development experiment went off track.

doriancodes

I am developing a book tracker app with a recommender system, to self-host on my home server.

southclaws

continuing to tweak a solver which, after working on it for 6 months, I only discovered the name of last week: the "capacitated lot-sizing problem"! it's interesting, but the feedback loop is so slow (my solver takes 2 hours!). still, it beats 6 hours of tweaking a spreadsheet by hand...
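
Their solver is surely far more serious, but for anyone curious what the capacitated lot-sizing problem actually is, here's a toy brute-force sketch. All numbers and names below are invented for illustration; real instances need MIP solvers or heuristics, since the problem is NP-hard.

```python
from itertools import product

def clsp_bruteforce(demand, cap, setup_cost, hold_cost):
    """Toy capacitated lot-sizing: pick which periods to set up production,
    then serve each period's demand from the latest open period with spare
    capacity (producing as late as possible minimises holding cost).
    Exhaustive over all 2^T setup patterns, so only usable for tiny T."""
    T = len(demand)
    best = None
    for setups in product([0, 1], repeat=T):
        spare = [cap * s for s in setups]  # capacity only in open periods
        hold = 0
        feasible = True
        for t in range(T - 1, -1, -1):      # serve later demands first
            need = demand[t]
            for p in range(t, -1, -1):      # latest eligible period first
                take = min(need, spare[p])
                spare[p] -= take
                need -= take
                hold += take * (t - p) * hold_cost
                if need == 0:
                    break
            if need:                        # demand can't be met
                feasible = False
                break
        if feasible:
            cost = setup_cost * sum(setups) + hold
            if best is None or cost < best[0]:
                best = (cost, setups)
    return best
```

For demand [2, 0, 3] with per-period capacity 3, setup cost 5 and unit holding cost 1, the cheapest plan opens periods 1 and 3 and carries nothing, which matches the intuition that setups dominate when holding is cheap.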

outside of that, lots of work on a managed Flink system in Kubernetes, which often involves doing more memory management than I ever had to do with something like C++... Java is still quite a black box to me!

and away from work: I recently merged a huge branch implementing a plugin system for my forum software, which is exciting; it feels like the thing that could make it a worthwhile alternative to the current state of the art.

spudlyo

I'm writing an essay about how I use an ancient text editor, GNU Emacs, along with gptel and some other tools, to help me study an ancient language, Latin.

I want to show how I liberate poorly aligned, pixelated PDF image scans of century-old Latin textbooks from the Internet Archive and transform them into glorious Org mode documents while preserving important typographic details, nicely formatted tables, and some semantic document metadata. I also want to demonstrate how I integrate a local lemmatizer and dictionary to quickly perform Latin-to-English lookups, and how I send whole sentences to a frontier model for a detailed morphological and grammatical breakdown.

If I don't run out of steam, I intend to dig into how to integrate Emacs with tools such as yt-dlp and patreon-dl to grab Latin-language audio content from the Internet, transcode the audio with ffmpeg, load it into the LLM's context window, and send it off for transcription. I've also experimented with gathering forced-alignment data using whisperX with local models such as wav2vec2-latin, so I can play audio snippets of Latin texts directly from a transcription buffer in Emacs. Lastly, if I can make it work the way I want to, I'd like to explore how to leverage tools to automatically create flash cards with audio cues in Org mode using the anki-editor Emacs minor mode for sentence mining.
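
The first two steps of that pipeline (fetch audio, downmix for a Whisper-family model) are easy to drive from anywhere that can shell out. A minimal sketch of the command construction, using standard yt-dlp and ffmpeg flags but with made-up filenames, might look like:

```python
def download_cmd(url, out_tmpl="%(title)s.%(ext)s"):
    """Build a yt-dlp invocation that extracts the best audio stream
    (-x) and names the file from the video title."""
    return ["yt-dlp", "-x", "-o", out_tmpl, url]

def transcode_cmd(src, dst, rate=16000):
    """Build an ffmpeg invocation that downmixes to mono (-ac 1) at
    16 kHz (-ar), the sample rate Whisper-family models expect."""
    return ["ffmpeg", "-y", "-i", src, "-ar", str(rate), "-ac", "1", dst]
```

Each list can then be handed to `subprocess.run(cmd, check=True)`; building argument lists rather than shell strings also sidesteps quoting problems with titles containing spaces.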

steinuil

$WORK: trying to understand how we should manage user-managed scripts for data transformations, balancing ease of use against implementation complexity. I set up a basic system with very basic number-based versioning so that one of our clients can test it, but now I need to spend some time figuring out how to manage this system at a "medium" scale.

$HOME: just playing a lot of Deep Rock Galactic: Survivor. I've been bitten by the Vampire Survivors bug before, and so far this seems like a very solid version of that gameplay, with DRG's intricate progression system on top. Saturday I'll be taking a train to Milano for the Nix Milano meetup, but so far I don't have anything planned for it; lately my interactions with Nix only go as far as updating my system flake inputs every once in a while, but hopefully I'll get some inspiration at the meetup.

cesarandreu

Last week I started vibe coding a toy world sim where each agent is controlled by a local LLM, for fun and to explore the tools. So far the most interesting action has been the ability to pray to the world creator (i.e. me), along with my ability to respond in future cycles. This project has gotten me thinking a lot about our world from a spiritual / religious framework and about the importance of adversity as a catalyst for growth.

Unfortunately, I got sidetracked for multiple days trying to compile the llama.cpp master branch with support for my older AMD GPU, because I kept reading that it contains a lot of performance improvements, but I ultimately failed and opted to just use llama-cpp-vulkan from NixOS unstable. I don't know what it is with all these GPU libraries being cursed in some way.

This week I'm going to keep playing around with the toy world sim, adding new actions and seeing how Claude Code handles evolving the codebase. It feels like a slightly more advanced tamagotchi, and I'm continuously impressed by how little is required before emergent storytelling starts to become compelling. If anyone reading this has fun or interesting suggestions for actions or capabilities I would love to hear them. The next big feature I'm interested in exploring is the ability to gossip about other agents, which will require each agent to keep track of a basic model of every other agent.
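
One minimal shape for that per-agent model of other agents could be a dict of beliefs that gossip gets folded into. Every name and field here is my invention, not anything from their sim:

```python
from dataclasses import dataclass, field

@dataclass
class Belief:
    """One agent's running impression of another (hypothetical fields)."""
    trust: float = 0.0
    notes: list = field(default_factory=list)

@dataclass
class Agent:
    name: str
    beliefs: dict = field(default_factory=dict)  # other agent's name -> Belief

    def hear_gossip(self, about, remark, weight=0.0):
        """Fold a piece of gossip into this agent's model of `about`:
        remember the remark verbatim and nudge trust by `weight`."""
        b = self.beliefs.setdefault(about, Belief())
        b.notes.append(remark)
        b.trust += weight
        return b
```

The verbatim notes could then be summarised into each agent's prompt each cycle, so that what an agent "knows" about a neighbour is shaped by who gossiped to it and when.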

Unfortunately I don't have the resources for this, but I think it would be really interesting to run a wide-scale experiment where different groups of LLMs have their own world sim, and after each cycle all the LLMs are asked what changes or suggestions they would like to see made to the game. Then those changes could be vibe coded and the next cycle would run. What would the different LLM worlds look like after a few dozen cycles? And if you used a mix of LLMs, which ones would end up as the dominant force? Maybe someday. I expect that it would lead to some fun stories at least.

brtkdotse

First week of funemployment: looking for new consulting clients and dusting off my vinyl cutter for a potential fun side business.

marginalia

The NSFW-filter work continues.

I've ditched the idea of using fasttext and implemented my own neural network instead, as fasttext was picking up weird features that introduced a lot of nonsensical false positives, and the vocabulary is idiosyncratic enough that manually selecting features works better.

marcecoll

I've been experimenting with type-constrained inference for LLMs. I have a small LLM-targeted programming language that has a content-addressed function store and uses algebraic effects. I've been constraining inference with an automaton that figures out which types could appear at a given point in the generation, then querying the codebase to check what values of those types are available there, and limiting token generation accordingly by masking the logits. I've been exploring both autoregressive inference (most known models) and diffusion-based LLMs for this. Diffusion is fun because you can build expressions with holes, those holes are typed, and then you can kind of "inpaint" expressions.
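
The logit-masking step can be sketched in a few lines. This is a standalone toy, not their implementation; the vocabulary and the allowed set (which in their setup would come from the type automaton intersected with the codebase query) are invented:

```python
import math

def mask_logits(logits, vocab, allowed_tokens):
    """Keep only logits for tokens the constraint permits at this
    position; everything else gets -inf so softmax assigns it exactly
    zero probability."""
    allowed = {vocab[t] for t in allowed_tokens if t in vocab}
    return [x if i in allowed else -math.inf for i, x in enumerate(logits)]

def softmax(xs):
    """Numerically stable softmax that tolerates -inf entries."""
    m = max(x for x in xs if x != -math.inf)
    exps = [math.exp(x - m) if x != -math.inf else 0.0 for x in xs]
    s = sum(exps)
    return [e / s for e in exps]
```

Sampling from the masked distribution then can never emit a token the automaton forbids, which is what makes the generated program well-typed by construction rather than by rejection sampling.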

The language has a Quint (from the apalache model checker people) compilation target as well, so I can run simulations and model checking over a codebase.

It's been an interesting project. I did vibecode the original version to test the idea and explore the design space, but I'm now hand-coding all the core language (typechecker, parser, interpreter and compiler). I wouldn't trust an LLM for such foundational work.

mtset

Getting an old cassette player up and running. It's a real shame that you legally can't make good ones anymore, but at least there are plenty on the market.