The Silicon Valley Stack Doesn’t Work Here: Why Africa Will Lead the Post-Bloat Web
57 points by yawaramin
This one is actually pretty near and dear to me in terms of topics. I worked across Africa, the Pacific and some sporadic other parts of the world (like Syria during the conflict) building ways to report and alert on notifiable diseases: everything from sentinel clinic reporting up to full-scale national roll-outs of tooling and training to help track disease and prevent outbreaks.
The whole system consisted of Mobile Apps, a web application, and the last iteration of it had a desktop application, and a remote relay server that could be run off something as small as a raspberry pi. On top of that we were playing with LoRa and Mesh style networking to connect disparate locations. This was around 7 years ago now.
The article doesn't quite capture the important bits of this that people are missing.
I don't think HTMX / SSE is likely the solution (though I love HTMX for my own stuff and I haven't had a node_modules folder in my projects for years). But the way things are going right now is absolutely detrimental to LMICs and other places where connectivity is expensive or hard to come by. Windows now all but requiring an always-on connection, and being heavily integrated with things like OneDrive and O365, makes it prohibitively hard for people to do their work in remote environments regardless of socioeconomic status.
I lived on a 3G internet connection at a remote farm for years, and struggled with things like sizeable Docker image downloads timing out and complete drops in connectivity. What I experienced is nowhere near as frustrating as what people in remoter environments experience on these kinds of connections.
For a lot of people, local-first, offline-first, installable media is the best, most affordable and most sensible option. What's missing is a shift in focus: away from creating super fancy applications and tools that rely on heavy connectivity to function, and towards software engineering with stronger guarantees about stability and the ability to use those applications without any connection at all, while still being able to connect and share when it matters. Western software engineers have benefited in recent years from being able to rapidly update their applications on people's desktops, phones and other devices with very little effort, because the vast majority of their users are in Western or highly connected countries. But the move towards always-on, constantly syncing, cloud-based solutions in those regions leaves other places in the dust a bit. As frustrating as a broken app is in the modern world, an update usually comes in a reasonable amount of time; in the deep of Africa or on a remote island in the Pacific, that missing update may mean weeks of lost productivity and access.
Things like Meshtastic get me really excited about possibilities; back then we were playing with things like WiFi guns, LoRa and more to see if we could overcome some of the hurdles the people we were supporting were hitting. Issue a bunch of phones that can't connect and laptops that require an internet connection to use Excel, and people end up going back to paper. We'd have to fall back to sending chunks of reports as SMS messages to a central SMS gateway (incurring high costs) in order to get data out of some places. And in some instances, someone had to drive around to different sites with a USB key to get data from users to a central office.
I'm not saying it's this bad now, but the rate of progress is slower in these places than it is in Western countries, and I can't imagine it has got that much better to be honest.
This is just the top of the pile, I could go on and on and on.
What tools/tech were you working with that let you build a mobile app, a web app and a desktop app that were all offline-first? Seems like a ton of work across different stacks, which would be pretty expensive to develop?
Shockingly, not. All the infrastructure costs and any licenses came out of my pocket; I had zero budget for any of that, and zero budget for additional engineers. So it was just me, a Canuck in the back room of a farmhouse in rural England, hammering away on a lot of code. I took a lot of what I had learned from a previous project (health information systems for LMICs and PoCs at the UNHCR) into this project.
At its peak, while I was still working on it, it was actively serving around 20 countries with another 10 in the process of onboarding. I managed to get my monthly AWS bill down to about USD 600/mo (if I recall) with some very aggressive optimizations, and at one point AWS (after some begging on my part) gave me 20k in credits to offset the costs, which kept it all running for quite a long time.
In terms of stack, the initial web application was built using the Tornado framework in Python. Backend storage was Postgres; the system was multi-tenant, so I could scale to a read replica if I needed to, but that only happened rarely. The frontend was a bastardised React application. There were a lot of different "sections" to the app, and not all were pertinent to every person using the system, so I modularized the whole thing into "app" bundles that would only download when viewed via a registered route in the Tornado application, and we cached extremely aggressively. I had a "common" package that was always downloaded, which let me cache things that didn't change often in people's browsers and made the individual smaller PWAs land in the 100s-of-kBs range. I wrote a custom Python script to compile it all. To save on bandwidth when passing large amounts of data back and forth, I set up the frontend to communicate via RPC over JSON instead of lots of individual REST endpoints, which let me do something similar to protobuf: I could massively compress the payloads because the frontend knew the structures beforehand.
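The RPC-over-JSON compression trick described above can be sketched roughly like this (a minimal illustration, not the actual EWARS code; the field names are hypothetical). Because both ends agree on field order at build time, payloads can be positional arrays rather than keyed objects, much like protobuf's numbered fields:

```python
import json

# Hypothetical field order, agreed between client and server at build time,
# so the wire format can drop the keys entirely (protobuf-style positions).
REPORT_FIELDS = ["id", "clinic", "disease", "cases", "week"]

def encode_reports(reports):
    """Encode a list of report dicts as positional arrays, no whitespace."""
    rows = [[r[f] for f in REPORT_FIELDS] for r in reports]
    return json.dumps(rows, separators=(",", ":"))

def decode_reports(payload):
    """Rebuild dicts on the other end from the shared field order."""
    return [dict(zip(REPORT_FIELDS, row)) for row in json.loads(payload)]

reports = [{"id": 1, "clinic": "A", "disease": "measles", "cases": 3, "week": 12}]
wire = encode_reports(reports)
# The positional form is noticeably smaller than the keyed form,
# and the saving grows with the number of rows.
```

On top of this you would typically still apply gzip, but stripping repeated keys helps most on long lists of uniform records, which is exactly what surveillance reporting produces.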
The mobile application started out as React Native just to get moving quickly, but there were so many problems that towards the end I was rebuilding it as native apps on both Android and iOS (though iOS users were pretty rare). I also needed to bring UDP/TCP multicast and peer discovery to the mobile app, and doing that in React Native wasn't really possible without a lot of extra effort.
The desktop application was a Rust application: it used actix to run a local web server and to spawn UDP/TCP sockets, broadcasting locally via UDP multicast to find other local instances of the desktop or mobile application so they could sync with each other and carry each other's payloads. That also let me re-use a lot of the web application's frontend components in the desktop application, but the Electron app was heavy, and near the end I was trying to use the Rust webview crates to cut Electron out of the deal.
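The UDP-multicast peer discovery described above (done in Rust/actix in the actual project) can be sketched in Python. The group address and port below are arbitrary choices for illustration; the pattern is the standard one of joining a multicast group to listen for beacons and announcing yourself with a TTL of 1 so packets stay on the LAN:

```python
import socket
import struct

MCAST_GRP = "239.255.42.42"   # assumed ad-hoc multicast group for the sketch
MCAST_PORT = 50000

def make_listener():
    """Socket that receives discovery beacons from peers on the LAN."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    # Join the multicast group on the default interface.
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def announce(instance_id):
    """Broadcast a short 'I am here' beacon; peers that hear it can then
    open a direct TCP connection to sync payloads."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # LAN only
    sock.sendto(instance_id.encode(), (MCAST_GRP, MCAST_PORT))
    sock.close()
```

In practice the beacon would carry a version and a sync port, and each node would periodically re-announce so peers that appear later can still find it.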
The relay node was basically the same actix backend as the desktop application, but with a CLI instead of the GUI frontend. You could spin it up on a Pi or NUC and let it run as a syncing agent between instances, acting as a local centralized server in remote offices for people to collaborate.
We used SMS Eagles for the SMS relays, and there was a beta in progress of using the Android mobile app as an SMS gateway, but that was getting more and more difficult as Android permission management got more aggressive.
We piloted the web app and mobile app in South Sudan after about 4-5 months of heavy development, it included an analysis engine, report generation, alert management, form building, location systems, and mapping, and lots more.
When I walked away from the project, the web application was very close to having been completely rewritten in Rust as well.
https://www.who.int/emergencies/surveillance/early-warning-alert-and-response-system-ewars/
That's incredible, thanks for the story!
No prob, at some point I should write all this up...
I’d be interested in reading that if you do. How did you get into this in the first place? As it happens, I live in rural England and would love to do something like this… alas, I don’t have the means to pay for it all out of my own pocket. Did you have another source of income at the time?
I was hired through a contract company to work for UNHCR initially because I had experience with a couple of frameworks they were interested in using, then when that project wound down, one of the people running it hired me to do the EWARS work for the WHO which they had moved over to.
When I say I paid for it out of pocket, I mean they were paying me (albeit inconsistently; there were long stretches of time I had no income but was still working), and I had to use part of my pay to cover the infrastructure costs. The end came when, after about 5 years on the project, they decided to tender the work and left me hanging with no pay for 4 months while I bid against other companies on a project I had built from scratch.
Ah, I see. That is both better than I had assumed - you were at least being paid - and much worse! What a terrible way to treat a solo contractor.
I don’t live in Africa, but I have quite a bit of telco experience from working in Pacific island nations. And I live in Australia, which is the ass-end of the internet from a latency perspective. I also used to live at the end of a 700ms+ RTT satellite dish. So I have some experience with both latency and resource constraints.
And I can’t for the life of me understand why this person is advocating server-side rendering in a high latency, low reliability environment. If anything, well written client-side code, running locally, is likely to massively improve the user experience relative to server side rendering.
SSR is the exact opposite of the correct solution… which is why it’s so confusing when he then advocates for “local first” deployments and SQLite on the device? But we’re going to access it using SSR rendered pages? It makes no sense to me at all!
Server-side rendering isn’t the answer to a 100kB React download to toggle a menu; good code, running locally, written by smart engineers and tested in unreliable environments is the solution. I don’t know React at all, but Vue is relatively tiny, and then it gets cached. PWAs are explicitly designed to work like this. Native apps are too.
Don’t get me wrong, I love efficiency and I certainly think the stack can be slimmer. I just have no idea what this guy is arguing for.
I'm in no way an expert on this kind of optimization, but I think what's going on here is that the author is pointing out multiple competing constraints. Network latency and reliability are substantial constraints, but because the devices are usually so underpowered, the solution can't be to offload all processing to the device. Instead the idea seems to be to maximize the amount of work the server does on each request, but since this obviously creates traffic load that runs up against the network constraints, also use local features to reduce the number of requests that need to be made in the first place. When frequent requests are necessary, then use smaller frameworks that allow this approach to be applied to incremental page updates (I am absolutely unqualified to debate the merits of e.g. HTMX vs React, though).
I too was very confused by that. Local only is the logical extreme of client rendering. If you don't want client rendering, I'm shocked that someone would want to go all the way to local only!!
That said, I'm on board to try to make things better. I just think we need a new concept like "progressively localized" to handle such a stringent, tricky set of requirements.
This is a fantastic goal, and I badly want the web to look more like this. It would be better for the entire world.
But I don’t like the phrase “next billion users”, and I never have. Africans and Asians and South Americans are just people, with normal connectivity in cities, and connectivity equivalent to what South Dakota had 15-20 years ago in rural places. They are the same as the first billion users, give or take. Treating them like they have special requirements just gives us an excuse to return to bloat once everyone has 5G, and that’s not good for anyone.
Maybe once the web applications and techniques developed for low-connectivity populations become more popular, they will organically lead to adoption in high-connectivity parts of the world?
A fascinating read. I love seeing this kind of momentum behind smaller, slimmer, and simpler technology stacks.
If the data centres are far away, how do we fix the speed? We bring the data centre to the phone.
How does this work from a security standpoint? Is a shallow copy of only the data relevant to the user propagated to the device? How would that function with privacy regulations, e.g. GDPR? Could a copy of deleted data persist in a user's device cache theoretically forever? I am not a web developer, so my ignorance begets curiosity in the technical details here.
Phones have facilities for secure storage. Eg https://developer.android.com/privacy-and-security/keystore
This is aspiration ungrounded in reality. The title asserts something that the body considers an opportunity.
I find the latency arguments fall flat:
For US-based servers, this problem is not novel to Africa. Places like Australia have suffered it just as badly, India worse (due, I think, to bad peering arrangements on top of the latency) and Europe not much better, and nothing much has been achieved. (I have lived in Australia and India. On the couple of occasions I visited the USA, oh! the web was so ridiculously fast, simply because of latency.)
The fact is that society hasn’t cared enough to fix the problems (though the proliferation and popularity of private CDNs like Cloudflare or Bunny have definitely helped), and I see no reason to expect Africa will break out of that mould when none of the rest of the world (notably including India) has.
For within-Africa servers: if you’ve optimised to avoid gratuitous serial round-tripping (which the article is assuming will be done), then for the significant majority of applications there’s little benefit in cutting 100ms latency to 20ms. One server handling all of Africa is normally good enough. To my mind this demolishes the rest of the Africa-is-special argument: other countries have done their own not-in-US digital infrastructure and it’s gone just fine, Africa’s situation isn’t that much worse.
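A back-of-the-envelope sketch of the round-tripping point: total load time is dominated by the number of dependent (serial) round trips, so once those are minimized, the absolute gain from a lower RTT shrinks. The trip counts below are illustrative, not measured:

```python
def page_load_ms(rtt_ms, serial_round_trips):
    """Rough lower bound on load time: each dependent round trip
    (DNS, TCP, TLS, HTML, then each blocking resource) costs one full RTT."""
    return serial_round_trips * rtt_ms

# A chatty page with 7 dependent trips: the RTT cut matters a lot.
chatty_far  = page_load_ms(100, 7)   # 700 ms
chatty_near = page_load_ms(20, 7)    # 140 ms

# A well-pipelined page with 2 dependent trips: the same RTT cut
# saves far less in absolute terms.
lean_far  = page_load_ms(100, 2)     # 200 ms
lean_near = page_load_ms(20, 2)      # 40 ms
```

With the chatty page the nearer server saves 560ms per load; with the lean page it saves 160ms, which for most applications is below the threshold users notice. That is the sense in which one well-optimised server location can cover a whole continent.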
As for the “ultimate fix” of local-first, that’s basically the opposite of what the rest of the post is arguing for.
Finally, the article is posted on Medium, with megabytes of JavaScript and other nuisanceware. That says a lot.
One server handling all of Africa is normally good enough.
Why would traffic be routed according to geography/tectonics and not according to network proximity? It seems to me that it is likely faster to connect from Ceuta or Melilla to the AWS data centre eu-south-2 in Aragón than to af-south-1 in Cape Town. Likewise, wouldn't it be faster for users in Egypt to connect to il-central-1 in Israel instead of going all the way to Cape Town?
I’m assuming we’re dealing with a business serving all of Africa but nowhere else, and I’m saying that even a single server location is normally good enough—if you’ve avoided serial round-tripping, you don’t actually get much from adding more nearby locations. But if you want to have multiple locations, knock yourself out; if you do it properly, the result will be better for users. Just not by as much as people may initially think.
First off, I found myself agreeing with most if not all of the positions the author took. I fully agree that other areas of the world can lead the way to a better internet: slimmer, simpler and more efficient! In fact, I sure hope that they do! I'm tired of the bloated web that Silicon Valley has created, and we need a better way going forward!
Now, my second point is fully a nitpick, and should NOT detract from the author's very valid points! But, um, that photo at the masthead/header of the blog post is not of an African person. It appears to be a photo of a Colombian woman, likely someone who would be referred to as a "Palenquera" (or "Palenquera Colombiana", or Afro-Colombian woman, assuming they identify as a woman). Furthermore, the photo looks to have been taken in Cartagena, Colombia. How do I know this? Because my family is from Colombia (though not from Cartagena), and I took photos from that very same vantage point while visiting Cartagena some time ago! And then, of course, I clicked the Unsplash credit link on the photo, and it confirmed my initial assertion. Now, I'm going to assume the author did not mean anything offensive, and I certainly do NOT believe anything shady is happening here. It's merely something I noticed, and likely a simple error by the author (or an editor, if one was involved). I think I only noticed it because nowadays I feel like I'm constantly on the lookout for AI-generated content, which I have no reason to believe this blog post is. So I guess I tend to be extra nitpicky nowadays (again, just looking for AI slop content). But yeah, that top photo caught my attention, and I felt I needed to at least clarify. Again, NOT to detract from the author's very valid points!
It’s not clear that the author has any particular connection with Africa or its tech scene (assuming that talking about one African tech scene is meaningful)
The medium about page for the author states: " I head up a small software development company called Nanosoft located in Cape Town [South Africa]." The company website says "We have been developing software solutions since our inception in 2001. Our main focus is building user-friendly software applications catering to the finance, manufacturing and service divisions of small to medium size businesses. Our solutions includes CRM, ERP, Procurement and Project Management software."
My bad. I was misled by the article full of generalities, the header photo from a completely different region, and the profile photo from New York City; I didn't see there was a tab to click through.
Yeah, from my point of view it doesn't matter so much whether the author is from Africa or not. I was merely pointing out the oddity of the photo, because it might look like a woman from Africa, but it's 100% not a person from Africa. Which, again, does NOT take away from the great points the author made. It's sort of the kind of thing an AI blog post might manifest, but it might also be simple human error, though the obvious attribution link literally stating the scene is in Colombia seems even odder. Just, you know, overall odd to me. ;-)
Call me when you’ve tested this stack somewhere with poor internet connectivity, or when someone in rural Nigeria or Senegal says something about web stacks.
IMO if you want to do local-first then it's better to do client-side rendering. It's quite hard to do local-first with server-side rendering.