On The Insecurity of Telecom Stacks in the Wake of Salt Typhoon
51 points by ibotty
Nice work by Soatok again.
At this point I think we’re more surprised when folks actually do the right thing than by the behaviour seen from the software vendor here. As long as some baseline of security standards and practices isn’t enforced by regulation, organisations primarily incentivised by money are going to keep doing things like this with little to no repercussion. I suppose that’s nothing new though; it’ll probably take something catastrophic for regulators to get around to it, and even then there’s no guarantee.
What’s crazy is that security isn’t even incentivized by money, even in extreme cases. Breaches happen to companies you’d think would be massively impacted, but nope. Okta was breached in 2023, and their customers were breached because of it, yet their stock is up since then. That is… insane. It indicates that no one cares about security, to the extent that companies get breached due to a vendor getting breached and that vendor sees no financial impact.
I feel like users are fatigued to the point that caring feels pointless. At least part of it is that security plays almost no role for software engineers beyond being perceived as a useless pursuit that adds friction.
Our engineering minds are often blind to the non-technical “fixes” that stabilize these systems.
Consider credit card fraud. The absurdly low entropy of the standardized payment card Primary Account Number has led to a massive private bureaucracy that issues data handling regulations and regularly audits all organizations that handle these numbers. The expense is considerable. And yet, fraud is a regular occurrence, written off as a cost of doing business. If you as a consumer experience a fraudulent charge, you just contact your card issuer: they reverse the charge and issue you a new number. We don’t even perceive the friction because we have little basis for comparison: it’s always been this way.
AFAIK, the fraud liability arrangement was historically different in Europe, which is why your Paris waiter has been coming to your cafe table with a wireless card reader for a couple of decades now while it’s still a pretty new thing in the US. I remember in the 00s being told by a French waiter that I needed to get a chip+PIN because my plain card was very insecure! (I said there was no such thing in the US, but I didn’t try to explain that US card issuers don’t care because they just pass the cost to the merchants.) Now the US has chip + no PIN, which seems largely pointless to me.
Yeah, when I first moved to the US and discovered I could not add a PIN to my CC even if I wanted to, I found it mind-blowing.
That and payments being made by giving people your bank account number and then they … take the money out of your bank account? Or “if someone gets your SSN they can get a loan in your name” .. wat?
If someone gets my NZ IRD number they can: pay my taxes, or, back then, pay my student loans. Technically they could get my tax refund, but NZ’s taxation system is not insane, so you generally don’t have one (you basically only get a significant refund if your income dropped dramatically late enough in the tax year, which, countering the “sane tax” comment, inexplicably runs until March 31st).
I think we can conclude that more regulation does not, automatically and all by itself, solve technical problems or network effects.
On the other hand, computers aren’t secure. They are incomprehensible mountains of complexity, and society at large doesn’t want to walk away from all the perceived benefits of that complexity.
So the only reasonable way to maintain a measure of security is to evaluate risk and prioritize fixing the things that produce the most risk (for some definition of “risk”). That means you will always have some amount of security breaches at some level of severity. It’s not a question of “if” but rather “when.”
Which also means that recovery after a breach is often more important than preventing all possible breaches in the first place.
So in Okta’s case… yeah, vendors are going to have security issues. Everyone with a little experience knows that at purchase-time. If mishaps inside Okta become a regular pattern, then I’ll ditch Okta. But one-off security incidents are not going to make reasonable companies switch vendors, which means Okta’s probably going to be ok as long as they learn from their mistakes and prioritize transparency.
(Counter-example: LastPass. Anyone still trusting that company after all their mishaps is insane.)
As long as humanity continues to demand these mountains of complexity, this is the way things are going to be. The FreeSWITCH situation is pretty crazy, but the Okta case at least doesn’t seem too insane to me.
I’m not advocating for extreme levels of security, just meeting a bar that’s less embarrassing. I would probably reject the “not if but when” idea too, fwiw, but I’m on board with accepting some risk; that’s the point of threat modeling: knowing what risks you do and don’t accept. I think we can do a lot better than the status quo, and I think the gap stems almost entirely from security being something that no one cares about.
(Counter-example: LastPass. Anyone still trusting that company after all their mishaps is insane.)
A big difference between LastPass and Okta is that Okta has way more money to burn on PR.
If it “helps” at all, physical safety isn’t much better. Regulations only get written when a lot of people die in a high profile incident.
When we have security standards enforced by legislation, we end up with FIPS.
What needs to be enforced is liability.
Yeah the ease with which ToS/Contracts can just offload liability for basic due care or fitness of purpose is astounding. Not just for fraud/data loss but for simply badly designed/built products (take all “self driving” cars where the driver is somehow responsible, the blanket “no guarantee it provides the marketed or stated features” on all tech products by all companies, things like Tesla’s “warranty void if pothole, carwash, …” terms, and similar).
I’ve worked with FreeSWITCH in the past and can confirm it’s a bit of a shit show. We kept running into a problem where its sqlite database kept getting corrupted, presumably because threads kept stomping on each other’s file descriptors. Our solution: simply delete the sqlite database in a cron job. The database wasn’t important apparently, or maybe it was used as a cache or something? I don’t recall.
The reason we used FreeSWITCH: legend had it that Asterisk was a total shit show. So it must be even worse… Eldritch horrors, alright!
I worked for a company that provided services to the telcos. We had to pay a ridiculous amount of money to license an SS7 stack. It was horrible; just looking at it the wrong way would cause it to crash. (I once looked at the source code: written in K&R C, because that was still a thing back in 1987! It would do a linear scan of all known file descriptors on every call, even to its own private routines, for Lord knows what reason. Horrible stuff!) But I was told that of the two or three stacks available, this was the best of them.
Oh, and by 2015, it was end of life and maintenance was taken over by some random company somewhere …
Ye Gods!
When I was learning about this stuff back in 2012, I learned that the entire SS7 network stack was statically routed—by hand. There was no automation. And every network and node was trusted, because adding security just wasn’t in the cards for a protocol developed back in the mid-80s and still in use. It took us five years to get a small change we wanted in one of the telephony network stacks we used that would lighten our own load. Five Years! The Phone Company does not move fast. It doesn’t have to. It’s the Phone Company.
I’m certain that the shitshow the big telco equipment providers ship isn’t any better. Some of the stuff I’m under NDA about would make anyone sane just close their laptop and head for the nearest pub.
Honestly surprised the author didn’t receive a cease and desist from some inane telecom operator or someone similar. My recent encounters, even through bug bounty programs, have been disappointing to say the least. Reporting these things is annoyingly unsafe from a legal perspective if you stumble upon a special type of… person.
Somehow those bug bounty programs have also broken my nearly endless patience‡: the next things I find will either be sold or immediately published anonymously. There’s unfortunately simply nothing one can do against irresponsible vendors besides public disclosure (and shaming). Until some lawmaker decides to change that.
‡ After all, I’ve been living with Linux on Nvidia, on and off, for more than a decade now. I’m only half joking.
Use snprintf() instead.
This is kind of “defensive C programming practices 101” level.
I guess, but the code was written in the year 2000. That’s not an excuse for never having reviewed it and switched it to snprintf, but the snark about a >25-year-old line of code in a dependency’s dependency sucks.
Never mind, I guess? 4.4BSD in 1992 had snprintf… reading the xmlrpc-c source and shaking my head the whole time so people know I disagree with it…
If you’re producing software that handles network traffic, a basic lint over the code base every year should be a bare minimum. But also, sprintf and friends were known security hazards in the 90s, so they shouldn’t have been added to code 30 years ago, let alone 25.
I wanted to give xmlrpc-c credit since, as far as I knew, snprintf was standardised in C99, but I hadn’t appreciated how widely available snprintf was before C99. At first it felt like recency bias to assume snprintf would be novel for software that old, but on investigation there’s no good reason to use sprintf in this case at all. It actually shocks me that I was taught to use sprintf in college in the 2010s.