Scaling long-running autonomous coding
7 points by mitsuhiko
The most fascinating part is that they built a browser with 3 million lines of code: https://github.com/wilsonzlin/fastrender
I wonder what the state-of-the-art plagiarism detection tools are; it'd be really interesting to run them on this. It's an enormous codebase, so there are plenty of opportunities for the LLM to accidentally be too blatant. On the other hand, there aren't that many browsers out there, so this might be one of the rare cases where tracing the original sources is actually tractable.
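The classic approach here is MOSS-style winnowing: fingerprint each file by hashing overlapping k-grams and keeping the minimum hash in each sliding window, then compare fingerprint sets between codebases. A minimal sketch (the k and w parameters and the whitespace-stripping normalization are illustrative choices, not tuned for real source code):

```python
import hashlib

def fingerprints(text: str, k: int = 8, w: int = 4) -> set[int]:
    """Winnowing: hash every k-character gram, then keep the minimum
    hash in each sliding window of w consecutive hashes."""
    s = "".join(text.split()).lower()  # crude normalization
    hashes = [
        int(hashlib.md5(s[i:i + k].encode()).hexdigest(), 16)
        for i in range(len(s) - k + 1)
    ]
    return {min(hashes[i:i + w]) for i in range(len(hashes) - w + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of two files' fingerprint sets."""
    fa, fb = fingerprints(a), fingerprints(b)
    return len(fa & fb) / len(fa | fb) if fa | fb else 0.0
```

Real tools additionally normalize away identifier renames and reordering, which is exactly what you'd need to catch an LLM lightly paraphrasing, say, Servo or WebKit internals.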
Wow, this is insanely impressive. It's interesting how much philosophy and pattern nudging they had to prompt it with.
I find this part of the philosophy interesting:
FastRender is building a production-quality browser rendering engine. Not a demo. Not a research project. A real engine that renders real pages correctly.
Looking at the code, it appears to be a huge mountain of slop and it's hard to imagine this ever being "production-quality" unless the models get massively smarter and better at cleaning up slop.
However, if the prompt had told the truth and said "this is a research project to see how far AI agents can get before collapsing in a pile of slop", presumably the result would have been much worse.