Rails on the BEAM
9 points by soulcutter
I'm the author of the post, and to be clear, I did this to see if it could be done rather than with a clear purpose in mind. My original interest was in developing offline-first capabilities for an existing Rails application, and it has since expanded into exploring deeper integration with JavaScript. https://intertwingly.net/blog/2026/01/28/Twilight-Zone.html shows a full-stack Rails application running in your browser, and then follows up with Visual Studio Code, Vite, and Rails running in your browser with HMR using web containers. https://intertwingly.net/blog/2026/03/16/Watch-Mode.html takes this further and runs system tests, all within your browser and served statically from GitHub Pages.
I'm no expert on anything here, but my understanding is:
These two seem not to be compatible. How does the project reconcile Rails's requirement of backing state with a database with the BEAM's requirement* that processes communicate only through messages? I see "For production, swap SQLite for PostgreSQL," but if this is used the way it is in Ruby on Rails, then I don't understand the benefit of bringing in the BEAM.
*Yes, yes, excepting ETS.
Rails discourages sharing state beyond the request/response cycle or between parallel requests. It tries to push shared state into the SQL database or the caches. It's possible to share state, but it's culturally strongly discouraged because of the likelihood of bugs.
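To make concrete the kind of bug that convention guards against, here's a minimal plain-Ruby sketch (no Rails involved, and not from the post): a counter shared between threads needs explicit locking, which is exactly the bookkeeping Rails culture avoids by pushing shared state into the database or cache.

```ruby
# Minimal sketch: shared in-process state needs explicit synchronization.
# Rails convention sidesteps this by keeping shared state in the DB or cache.

counter = 0
lock = Mutex.new

threads = 10.times.map do
  Thread.new do
    1_000.times do
      # Without the mutex, the read-modify-write of += can interleave
      # across threads and silently lose increments.
      lock.synchronize { counter += 1 }
    end
  end
end

threads.each(&:join)
puts counter # => 10000 with the mutex; possibly less without it
```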
Rails only supports a tiny amount of parallelism: background jobs that run in an external process and have no way of signaling back to the process that enqueued them, and SQL queries that run in a background thread. Each such query must be individually annotated, and the main thread blocks if it tries to use the result before it's available. It's not a general-purpose multithreading/async tool. (Ruby has threads, of course, but I've never heard of someone multithreading from inside the response cycle of a Rails app.)
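The background-query shape described above is essentially a future. A plain-Ruby sketch of the pattern, not actual Rails code (Rails exposes it as `load_async` on relations), with a hypothetical slow query standing in for real SQL:

```ruby
# Sketch of the background-query pattern: start the work on another
# thread, continue doing other things, and block only at the point
# where the result is actually needed.

def run_async(&query)
  Thread.new(&query) # Thread#value below joins and returns the result
end

# Hypothetical slow query standing in for a real SQL call.
future = run_async do
  sleep 0.1
  [{ id: 1, title: "Hello" }]
end

# ... the request thread does other work here ...

rows = future.value # blocks here if the "query" hasn't finished yet
puts rows.first[:title] # => Hello
```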
As for the benefits, I think those are covered in the "What the BEAM Adds" section about BEAM features and the "A Path to Phoenix" section about a strangler fig migration from Rails to Phoenix. I have a small disagreement with the post about the latter:
No big-bang rewrite. No other migration path offers this. Going from Rails to Phoenix today means starting over.
There's another way to implement this pattern. Rails apps are deployed with a reverse proxy in front, typically nginx or caddy. The proxy handles TLS and frees the heavy Rails app threads from trickling data out to slow clients. Because the BEAM is also much lighter than Rails, one could insert it in the middle of this: have the reverse proxy pass requests to Phoenix (or similar) on the BEAM, which handles the requests it can and proxies the rest to Rails. Then you have a clear process for incrementally porting from Rails to the BEAM.
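The routing half of that idea can be sketched in a few lines of plain Ruby (the prefixes are made up; a real setup would express this in nginx/caddy config or in a Phoenix router with a catch-all proxy):

```ruby
# Hypothetical incremental-migration router: paths that the BEAM app has
# already taken over are served by Phoenix; everything else falls through
# to the legacy Rails app. As routes get ported, the allowlist grows.

PHOENIX_PREFIXES = ["/health", "/api/v2"].freeze

def backend_for(path)
  if PHOENIX_PREFIXES.any? { |prefix| path.start_with?(prefix) }
    :phoenix
  else
    :rails
  end
end

puts backend_for("/api/v2/widgets") # => phoenix
puts backend_for("/admin/users")    # => rails
```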
An added benefit is that the BEAM server could proxy a request to Rails while also generating a response itself. If the BEAM response differs from Rails, you have a bug in your reimplementation, or perhaps a data race where the fast BEAM app pulled data from the db that another thread wrote to before the slow Rails app got to it. If the responses are identical, you have higher confidence in your reimplementation. This really only works for read requests but it could be particularly useful for a Rails app that is overwhelmingly read heavy with light test coverage.
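That shadow-traffic check could look something like the following plain-Ruby sketch, with the two backends stubbed as lambdas (a real version would replay the request against Rails over HTTP, and only for idempotent reads):

```ruby
# Sketch of shadowing read requests: serve the new backend's response,
# replay the same request against the legacy backend, and flag any
# mismatch. A mismatch suggests a porting bug -- or a data race between
# the two reads.

def shadow_compare(request, new_backend:, legacy_backend:)
  new_response = new_backend.call(request)
  legacy_response = legacy_backend.call(request)
  if new_response == legacy_response
    { response: new_response, mismatch: false }
  else
    { response: new_response, mismatch: true, legacy: legacy_response }
  end
end

# Stubbed backends standing in for Phoenix and Rails.
phoenix = ->(req) { "hello #{req[:name]}" }
rails   = ->(req) { "hello #{req[:name]}" }

result = shadow_compare({ name: "world" },
                        new_backend: phoenix, legacy_backend: rails)
puts result[:mismatch] # => false
```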
It makes little sense to run external databases like SQLite, Postgres, or Redis alongside a BEAM-based application. If people want something like "Rails on the BEAM," then Phoenix already exists, and it was specifically designed to take full advantage of the BEAM's features.
You can achieve a proper database along with Redis-style caching using Mnesia, combined with atomic counters or ETS tables managed through supervisors. The BEAM automatically spreads work evenly across all CPU cores by default. It also treats multiple connected VM instances (nodes) as a single unified pool of resources, even across a network.
The only system that comes close to this level of seamless integration is Apache Mesos, which makes it technically possible to run external databases like FoundationDB in a similar fashion. However, I wouldn't recommend that setup. We tried it in the past and it wasn't ideal.