Why is GitHub UI getting so much slower?
212 points by FedericoSchonborn
I worked at GitHub for 5 years and resigned in February. I have no idea what I can say and I’m too lazy to really figure that out, but I will say this: every engineer I knew there recognized that there had been a decline in the quality of the site and service, had a strong desire to work on these issues, and was vocal about it. At least near the end of my time there, these complaints fell on deaf ears unless AI was mentioned, and I don’t think that’s going to change unless leadership changes. I watched a lot of very talented people leave, and while I don’t know every case, it seemed like this was often the cause.
The thing that irks me about posts like this is that they put the blame on the engineers, who, in this case, are powerless to fix these issues.
Vote with your feet and/or dollars.
If you aren’t super locked in to gh and are just looking for a git forge, codeberg is pretty nice =]. I am intimately familiar with what goes on at gh and don’t expect it to improve meaningfully any time soon.
This matches my assumptions (and a few conversations I had with former hubbers).
Curious how people ask “why do good engineers hate PMs so much” - well, this is why. The signature under “we do not fix this but instead <introduce more UI regressions for more React rewriting to get a team/person promoted>” is… the PM’s.
It almost feels like teams at GH have an obligation to show “here we have taken a (perfectly working) piece of UI and redone it, and here is why our team should still exist” type of justification.
And while articles like this are unnecessarily harsh towards engineers, who would be ultimately responsible for delivery, I am still waiting for articles targeted in the more appropriate direction. Who silences engineers who signal enshittification and regressions? Who drags engineers who complain about incompetence (UI performance regressions) of their colleagues to HR? Who deprioritises regression fixes, and who creates an environment where “we have reworked this page” actually means “we made this page worse than it used to be”, and supports that environment?
If you want to point a finger in the right direction, it’s not the PMs calling the shots: the blame ultimately lies with executive leadership. The PMs are being told “AI AI AI”, because their managers are being told that by execs who’ve never had to use the product and never will.
Of course, ultimately the root problem is ‘capitalism’ so no one is completely to blame for anything.
I think we need to move past the code forge model and make it easier to collaborate on code. Most of the time, people are only interested in web viewing the main branch, so a purely static git site generator is more than sufficient (eg https://pgit.pico.sh).
Further there are ways to collaborate that don’t involve reviewing PRs inside a browser (eg https://pr.pico.sh)
I’m not convinced that moving to another code forge is the move to make.
Most of the time, people are only interested in web viewing the main branch
Kinda disagree here - as someone who regularly posts issues on software (even after just trying it out for an hour), the threshold for “contribution” needs to be low or I won’t bother. Same for drive-by commits: if I need 2h to fix it and I care, then I will sign up wherever; if it’s just a typo or small error, I will not sign up just to submit a ticket.
if I need 2h to fix it then I will sign up wherever if I care, if it’s just a typo or small error I will not
What makes you think that pr.pico.sh requires signing up?
You should make it clear on its homepage that it doesn’t because that’s a very valuable feature that we don’t expect these days.
However, be ready to deal with spammers (or worse) because you can’t leave a freely accessible hosting service online without it being abused as soon as bad people notice its existence.
I think that was a mix of misunderstanding and my not being completely clear about what I was replying to.
I had clicked the https://pgit.pico.sh link and thought this looked just like cgit etc and had not clicked the https://pr.pico.sh link, which I saw as an aside.
Also I guess if it shows the tree and enables an “easy for whatever definition” collab workflow, where’s the difference to a “forge”? Except that if the bug tracker is somewhere else where I need a login half of my point still stands.
This is the direction that I’ve taken too, at least for my own public repos. I still use Forgejo for my private repos, but I may end up ditching that as well as I’m not using it to collaborate.
I’m just running a fork of stagit and Caddy, and it is all static. The repos themselves are just bare repos on my tiny VPS. Files on the main branch can be viewed, you can clone using HTTPS, you can download .zip for HEAD and .tar.gz for tags, and there are RSS feeds to follow tags and commits.
I agree with the overall goal: we want people to host their repos on many servers, anywhere and everywhere. But contributing code between instances hosted in different places is so hard that this will never happen unless we make contributions between different servers easier.
I like that https://sr.ht/ is trying to popularize the “patches over email” approach, but I also am not an email enthusiast, so we’re experimenting with patches over Nostr: https://fiatjaf.com/18ff5416.html (see also https://nips.nostr.com/34 and https://ngit.dev/). One advantage that has over the https://pr.pico.sh approach is that no one needs to deal with exposing SSH to the external world.
Another promising possibility is https://ngit.dev/grasp/, which makes it easy to publish repositories to multiple generic hosts without having to set up SSH (the repository state is pre-authorized via a Nostr event and the push then happens over HTTP). That enables publishing repositories without logging in or doing any browser click-ops setup. It is also an alternative to the git format-patch approach when that is too annoying or the changes are too big: it becomes easy to send the full branch to a temporary host without hassle until it gets merged into some other tree (via whatever GUI client the receiver of the merge request chooses), after which it gets deleted.
One advantage that has over the https://pr.pico.sh approach is that no one needs to deal with exposing SSH to the external world.
The beauty of using SSH here is we already use it for cloning repos so it’s a very familiar and congruent tool. Further, it’s just a Go service running on port 22, it’s not a full-blown SSH daemon, which limits the blast radius for exploitation.
I’ve been casually looking at the alternatives popping up, there’s definitely something in the air since I’m noticing quite a lot of activity in this space. I think many SWEs can agree: GH is a gnarly bottleneck for most of our daily ops. This is why having a lighter weight alternative like patch requests (tagline: a pastebin for git collaboration) can be used out-of-band when GH goes down – at least for the collaboration component.
The other component that pgit and git-pr need in order to compete with a code forge is CI/CD. We don’t have an RFC yet, but we have some ideas around how we could reduce the complexity of CI/CD to what is essentially a thin wrapper on top of a pubsub system (https://pipe.pico.sh).
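To make the pubsub idea concrete, here is a minimal sketch under stated assumptions (the `PubSub` class, topic names, and handlers are all hypothetical, not pico’s actual design): a push event is published to a topic, and each CI step is just a subscriber.

```javascript
// Minimal sketch of CI as a thin wrapper over pubsub (hypothetical API).
class PubSub {
  constructor() {
    this.topics = new Map();
  }
  subscribe(topic, handler) {
    if (!this.topics.has(topic)) this.topics.set(topic, []);
    this.topics.get(topic).push(handler);
  }
  publish(topic, event) {
    // Publishing a push event runs every subscribed CI step in order.
    return (this.topics.get(topic) || []).map((handler) => handler(event));
  }
}

const bus = new PubSub();
bus.subscribe("push:main", (e) => `lint ${e.commit}`);
bus.subscribe("push:main", (e) => `test ${e.commit}`);
const results = bus.publish("push:main", { commit: "abc123" });
// results: ["lint abc123", "test abc123"]
```

The appeal is that the “CI server” keeps no pipeline model at all; retries, fan-out, and new steps are just more subscribers on the same topic.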
There is a reason the Hippocratic Oath is taken by physicians and not hospital owners.
I think it is fair to blame the engineers. Management are not expert users and are working based on different pressures. They are wholly dependent on the engineers to push them into making better choices. Engineers should use the power they have and either quit or refuse to make changes that they feel are destructive.
And in my experience, it is extremely rare for management to tell developers, “we need to rewrite this thing in <framework>”, and push through a rewrite despite protests from developers. It’s typically the other way around; developers feel (rightly or wrongly) that a rewrite is necessary and convince management.
I voted with my time and money by building a better version of their product.
I’m hoping beyond hope that they’re simply too locked in and bogged down to do anything meaningful to adapt before it’s already too late
TLS cert on https://bablr.org/ needs attending to.
What’s the better version of their product you built? I only saw bablr when I followed your profile.
What’s up with the cert? We’re not experts in everything so we need help where we don’t know what we don’t know.
BABLR is the thing. I badly need to replace those very misleading FAQs that were placed on our homepage. The better basic writeup for now is over at https://github.com/bablr-lang/
You should think of it as a headless IDE, which is why it doesn’t look like much. Its principal business is enabling the creation of arbitrary, extensible streaming parsers (which it does) and defining a standard format for a syntax tree node (serialized and in-memory). It’s an IDE’s state layer and plugin system in a box, and since the data lives in immutable b-trees, we can readily do structural reuse of subtrees by their hash.
The saddest part is that, back in around 2018 GitHub was my favourite example of a major server side rendered application that had near instant response times. I always hated using any project management tool alongside GitHub, because GitHub made them all feel like a slog. Browsing your PRs and source files on GitHub felt as fast as alt-tab switching to an already open window. Look at what they did to it now…
Running my own forgejo reminds me of how it used to feel.
Not sure which settings page I need? I can quickly check them all on Forgejo in about 2s. On GitHub I haven’t fully rendered one page in that time.
Moving some daily-use Git repos to my own virtual server had a similar effect for me. All of a sudden, git push became nearly instant again.
And then Microsoft bought it.
Meanwhile Sourcehut continues to be pretty nice for small projects.
Same. Also a great example of a “web app” with tons of complexity and data-dense pages that worked without JavaScript.
Maybe if the CEO keeps posting about how important their AI features are they’ll reassign the team whose job it is to add more React to everything and make them do some AI shit instead that you can hide with ublock origin, so at least it’ll stop getting worse?
Tangentially: I maintain a filter list which also has modules for just AI/Copilot stuff for those with uBlock interested in blocking nuisances.
Very nice. The well organized repo and familiar target site makes this a good demo of uBo features.
Github’s decline as a platform has been felt for some time already. A bit more than a year ago the article “GitHub” Is Starting to Feel Like Legacy Software came out exposing some issues that the SPA refactor may have contributed to.
Also it’s worth noting that it IS legacy software. Everything they do is built on top of SHA1.
SHA1 is very far down the list of problems on GitHub. Not only are there no known examples of collisions for Git’s salted SHA1, but Git (and therefore probably GitHub) actually replaced SHA1 with SHA1DC, which has no known weaknesses.
I suspect that part of the reason it’s so far down the list of problems is that even their would-be competitors are equally stuck on it. The Wikipedia page on the history of SHA1 attacks shows them getting more realistic and cheaper all the time. It describes the cost of an attack starting at around $3mil in 2012, falling to ~$100k in ’15/’19, and possibly as low as $45k in 2020.
Make what you will of that, but given that I’m looking to build for the next 20-50 years I’m going to assume that one band-aid by the defenders is still not enough to hold off the attackers for that length of time.
The Wikipedia page on this history of SHA1 attacks shows them getting more realistic and cheaper all the time. It describes the cost of an attack starting at around $3mil in 2012 to ~$100k in ’15/’19 and possibly as low as $45k in 2020.
And the same Wikipedia page explains that it’s not relevant for the SHA1 variant used by Git and GitHub:
In the wake of SHAttered, Marc Stevens and Dan Shumow published “sha1collisiondetection” (SHA-1CD), a variant of SHA-1 that detects collision attacks and changes the hash output when one is detected. The false positive rate is 2^-90. SHA-1CD is used by GitHub since March 2017 and git since version 2.13.0 of May 2017.
Doing some reading, looks like this is the paper that gets into detail: https://marc-stevens.nl/research/papers/C13-S.pdf
This is for realistically not a problem anyone worries about given that you can sign git tags with PKI. If you are serious about what you’re building you’re going to do that anyway, so SHA-1 is a moot point.
The thing you sign is the commit object, which points to the data by SHA1 hash. If you can generate a collision, you can replace the data pointed to.
The commit message is protected, but the commit contents aren’t.
Typically the thing you sign is the tag object. The tag object points to the commit object. The commit object points to a tree object. The tree object points to other tree objects and blobs. If you want to inject a malicious blob into the tagged release you have to be able to generate a chained series of collisions for all of these, not just one. At this point we are talking about timelines like ‘heat death of the universe’.
You don’t need a chain of collisions, just one. If you have two blobs with the same hash, then a tree that refers to one also refers to the other, because trees only refer to blobs by their hash so the two blobs are indistinguishable from each other in the tree object. And commits to trees, and tags to commits.
You only need to generate one malicious blob: the root of the tree. That can point at any other blobs you want.
I still don’t think that’s true. If you somehow (still difficult; probably the hardest part) compromise a leaf of the tree, everything above it is just as compromised because every node higher in the tree is just trusting-trusting-trust as to the correctness of the contents of the leaf. The attacker’s advantage is that once they inject altered data with an identical hash into the system, it will be completely invisible to the system, like a rootkit. While payload development might take up a bit of a hacker’s time, it wouldn’t be anything close to heat death of the universe. This is a universe where we have rowhammer attacks!
I’m not at all certain I’m correct, but from what I can learn about how this feature is implemented it’s just signing the commit or tag object with a verifiable identity for the author. That would mean that SHA-1 is still the only part of the mechanism protecting the relationship between the commit object and the contained code. Am I understanding that correctly?
It used to be fast but then they rewrote it all in React. It’s not because there’s a million DOM nodes - the old server rendered stuff did the same thing. It sometimes caused the browser to chug on page load on especially large views, but the browser handled it fine once rendered. Static layout.
The new React implementation dynamically updates the DOM as you scroll doing its own visibility culling, presumably for performance reasons, but the browser is already quite good at doing this on static DOMs, so the server rendered implementation never suffered this problem. So the end result is a lot more JS code is running to render the UI now than ever ran in the past and that’s why the UI is slow now.
Yes, they also broke my extension which used to work pretty well and took me a relatively long time to make: https://module-linker.fiatjaf.com/
Oh, this would be pretty nice to have on Forgejo if you’re interested in working on that!
GitHub’s pull request page is abysmal and it makes me sad to see it getting worse because I have to use it.
At work we use Gerrit (among other tools) and it’s so much better in every aspect. Loads fast, can render massive diffs with no issues. You can navigate in the page with just the keyboard (it has its own bindings for moving to next/prev file, marking file as reviewed, expanding/collapsing files etc.).
Gerrit workflow is much better too: commit messages are a part of the diff (allowing reviewing it just like code), it allows dependent patches etc.
I just hate GitHub at this point. Unfortunately the open source community is there so I have to use it.
Unfortunately the open source community is there so I have to use it.
The speed at which people are leaving is accelerating. I run a biannual game jam, and the last one had almost as many entries hosted on Codeberg as Github. Larger projects will take longer because they have more inertia, but at this point it’s just a matter of time.
Giving up because moving off Github feels impossible is just letting them win. Remember that Sourceforge once felt unavoidable.
We can do this.
Yup, the pull request page has two main jobs:
Their UI is so awfully slow that they “fixed” it by hiding any non-trivial amount of comments or diff. I’m not even talking about extreme numbers like thousands of “me too” comments on public PRs or tens of thousands of lines of diff. At small tens of comments or mid hundreds of lines of diff, it starts hiding them. Now you can’t search for comments or functions because they are collapsed. At least diffs can be expanded in one click per file, but comments you need to load in batches of about 10 at a time, and the button moves around. And incredibly often I get to the bottom of the “changes” tab thinking “that looks good”, then a second later realize that I didn’t actually see the change I was expecting from the description. Sure enough, it was hidden because it is too big for slow little GitHub to render.
My browser is really good at rendering. It can render my team’s entire conversation history in one page without breaking a sweat. Maybe every single diff from our Git repo with syntax highlighting would be a little slow on one page, but it could probably handle a few months worth of them with no problem. The browser can handle it as long as you don’t do anything dumb.
I really need to set up Gerrit. But the docs are really bad, I want to figure out how to get a basic CI integration set up and it really isn’t obvious. I’m pretty sure it is possible but can’t see what pieces I need to stick together.
one of my biggest annoyances is when all conversations have to be resolved, and then resolving them collapses them, so if you look at a PR, literally all of the meaningful discussion is hidden by default. A truly awful set of changes
Now you can’t search for comments or functions because they are collapsed.
And also it looks like searching comments via issue search sometimes misses some of the matches in larger repositories.
That was a big reason why I left GH behind. I’ve been enjoying the AGit PR workflow available in Codeberg and other Forgejo instances.
not just slower, but also less functional: sometime ago, i tried to ctrl-F some code, no hits, even though i was certain the identifier was present. turns out gh uses js to render only portions of the page.
if my laptop from ~8 years ago is able to open the text file with syntax highlighting, why can’t a website load up the same text file on the browser? does it absolutely need partial rendering?
it’s not just a website to browse your git remote and collaborate anymore. it’s turned into an “AI coding app”. no thank you.
It’s also getting a lot flakier. Around October or November of last year I realized that every week I see another bit of SPA jankiness. I lose my scroll position because some component has appeared or disappeared above view. The issues list shows hours-old state and does not refresh. A button doesn’t work. A page loads but everything below the header is unstyled. I expand the build details but it collapses closed every time a stage finishes. A filename header appears twice. New comments don’t appear. On and on.
It’s very frustrating to have a tool that has spent 15 years fading into the background of reliable infrastructure become an intrusive, distracting mess.
After a couple months with jujutsu it’s almost completely replaced my use of git. It’s a lot to hope for, but just as jujutsu surveyed a couple decades of VCS to syncretize a huge improvement, I do hope someone will do the same for collaborating with jujutsu. GitHub PRs feel very unfortunately frozen in amber because their popularity makes it very hard to fix the core UI design problems it has with multiple tabs, incoherent timeline, edited commits, missing “8 more comments”, and now a steady drip of SPA jank.
The big remaining feature of GitHub is the network effect of coworkers and potential contributors already being logged in, and there could be a race between a competitor neutralizing that with bidirectional sync (see git-bug) and GitHub getting their usability problems sorted. Microsoft’s legendary resistance to breaking changes means there’s a very big window available.
(I posted this comment on HN when I first saw the article. One more step in something I’ve been publicly thinking through here.)
It’s a lot to hope for, but just as jujutsu surveyed a couple decades of VCS to syncretize a huge improvement, I do hope someone will do the same for collaborating with jujutsu. GitHub PRs feel very unfortunately frozen in amber because their popularity makes it very hard to fix the core UI design problems it has with multiple tabs, incoherent timeline, edited commits, missing “8 more comments”, and now a steady drip of SPA jank.
Can you explain further? If Jujutsu is an abstraction over an arbitrary VCS back-end, how would a new code forge solve GitHub’s design problems (which I take to be rooted in Git itself) unless an alternative back-end were to be used? How much of what you see as GitHub’s problems are rooted in the features it adds on top of Git versus those that are in Git itself?
I don’t think the design of the storage backend affects much of what I’m highlighting in UI design.
IMO one of the big losses was when they rewrote the source browser in 2021 to try to make it more semantic. Not only does it not produce good results for the languages I use (Python, C, C++, JS), but it’s also way slower.
https://github.blog/open-source/introducing-stack-graphs/
It claims to use fancy algorithms, but I’ve never seen evidence that they work well. They even claimed “precise navigation in Python” … and I want to say “I do not think that word means what you think it means”
It seems like a lose-lose to me – it’s both inaccurate and slow.
FWIW this actually works consistently well for me in languages like TypeScript and Rust.
Yeah, on trying it a bit more and getting past some UI snags I didn’t like, it IS useful in some cases … but it’s not “precise”. It falls back to text search.
e.g. I tried on a project that’s not my own, clicking on EVERYONE, and the references are based on text search, not precise semantic understanding: https://github.com/zulip/zulip/blob/main/zerver/models/realms.py#L1280
Same in Go: clicking on entries seems to do text search: https://github.com/evanw/esbuild/blob/main/internal/cache/cache.go#L45
It’s not a bad feature; I just wish it was faster, and wasn’t advertised as “precise” … My feeling is that they started off trying to make it precise, and then had to gradually relax it due to problems encountered in real data
Just for completeness, I tested on Rust, and clicking on fmt gives me text search results:
https://github.com/BurntSushi/ripgrep/blob/master/crates/globset/src/glob.rs#L97
And TypeScript, clicking on “server” gives me text search results:
https://github.com/microsoft/vscode/blob/main/src/server-main.ts#L210
It’s not a BAD feature, but it’s not fast, and not as advertised IMO. I think what ended up being deployed is not the algorithms they initially described.
IMO they should have used what I would call “coarse indexing”, which is basically what Sublime Text uses: https://lobste.rs/s/fvxd9q/my_text_editor_is_not_open_source
It would be fast and scalable, and the results wouldn’t be any worse than they are now. Probably better.
Same, I’m able to jump to definition/usages within the PR view (mostly using Go), and that’s a really good feature.
That’s very VERY good for me as someone who put five years into building a web code rendering experience that is as fast as greased lightning, precise, and fully semantic.
Been wondering for ages why the team at GH don’t just remove Turbo. Hot garbage it is. This bug has been driving me mad for I don’t know how long.
The Rails / Basecamp crew are world class at backend stuff. Love ActiveRecord and all that good stuff. Not sure if I’m convinced anything of equivalent greatness for the front end has come from there though.
The back button updating the URL but not the displayed HTML forces me to reload the page. It happens a couple of times per day to me. Just walking up and down the file tree is enough to trigger it, and it’s so in-your-face that I imagine everyone just accepts it instead of doing something about it.
We used pjax before Turbo existed; it was a proto version of Turbo developed by defunkt. It was everywhere on GH in the early 2010s. I’m sure it’s still there on some pages, slowly getting built over with layers of React mud.
source: I worked there for 5 years pre MSFT.
it barely works on anything that isn’t bleeding edge chromium, mobile safari will just render one part of the page and nothing else loads
it was pretty bad a year or two ago (occasional stops and sputtering), but now it’s so consistently bad that I find myself clicking “open link in new tab” out of habit
Tangential to my other comment: you know how there are alternative frontends for things like YouTube etc. I was thinking about writing one for GitHub honestly, maybe this is the push to start hacking on that.
Yep, Turbo sounded like a good idea to hop onto the AJAX train for us ruby devs back in the days, but it has always been a sort of anti-web pattern. Which brings us back to the discussion of how SPAs broke the web.
Please note: SPAs, not JavaScript, ‘cause JS, together with CSS, fixed the web at first, and only then did something go seriously out of hand 😅 My take on why: V8’s amazing speed made us think the web was dead, let’s rewrite it from scratch just because we can.
SPAs don’t have to be slow. (and GitHub is not an SPA, by the way). Some implementations of client-side rendering are worse than others.
React is default these days, but it has some very real bottlenecks. But everyone uses it because everyone else is using it :-)
Modern libraries (SolidJS is my favorite due to awesome DX, but Svelte and Vue are valid choices too) are vastly more performant.
SPAs don’t have to be slow
Navigation in an SPA is always going to be much slower than native browser navigation because an SPA does a bunch of work in Javascript that simply doesn’t happen when the server delivers a complete HTML web page. Delivering HTML is no harder than the equivalent JSON. Manipulating the DOM in the browser requires far more computation work than templating HTML on the server.
The vast majority of SPAs try to behave like a collection of web pages. The architecture of SPAs is inherently much slower than necessary for these kinds of web sites.
and GitHub is not an SPA
Sure it is, it uses React and it avoids native browser navigation.
Navigation in an SPA is always going to be much slower than native browser navigation
that’s kinda true for simple pages and heavy diff-algorithms, but for complex apps initialization step might be heavy even if everything is in cache. On the other hand, partial changes are trivial in modern reactive systems (key word is “reactivity”, not “react”).
We usually use the name “SPA” for apps which do not have server-rendered part. GitHub is an “isomorphic” (or “universal”) app which has both server-rendered base and hydration.
Currently, for PR pages, the server-rendered part is only the first message. The entire conversation is embedded as JSON, though, so the fastest way to look at a PR conversation is to load the HTML and just read the markdown embedded in the JSON (well, I do have a text-mode browsing flow where this works naturally).
I don’t know if anyone else encounters this, but GitHub’s search functionality always changes the branch I’m looking at, usually sending me to a seemingly random commit with a matching line. It didn’t always do this but like a year ago I started noticing it happening and it’s lost me so much time before looking at commits lacking the code I was trying to investigate.
News today: Github is now part of Microsoft’s CoreAI team: https://github.blog/news-insights/company-news/goodbye-github/
I’ve managed to avoid GH until recently and the UI and performance is an absolute trainwreck.
Small patches take seconds to load and fail to scroll; creating comments takes a noticeable amount of time; the UI hides parts of the PR you’re reviewing for “performance” and fails to handle rebases and force pushes.
The UI itself encourages incorrect actions: there is no significant UI distinction between “you’ve marked this for review, merge it now?” and “this has been approved, merge it?”
The UI is subtly modal: if you start a “review” more or less all comments and replies get tied to that review rather than being immediate.
The UI also fails at the basic tasks a developer requires, like tracking review comments that need to be addressed: coupled with GH’s complete inability to handle rebases and the like, feedback disappears constantly.
This is all before you get to the completely brain dead choice to track PRs separately from issues so that the PR comments need to reference the issue being fixed.
I used a Bugzilla instance that was better than GH. That every project seems to have adopted GH as their repository service of choice, despite it being literally worse than Bugzilla and Subversion, is incredibly frustrating and mind-blowing.
GH is the worst platform I have ever had the misfortune to have to deal with: it fails basics of bug tracking, commit tracking, and reviewing. There is nothing it does at a level that approaches “average”. And now it also keeps trying to push AI bs on everything - as if I want an incompetent and typically wrong coworker that can spew pseudo justification for erroneous and poorly designed code that can’t be fired.
For over five years I’ve had issues, particularly on Mobile Safari and Firefox on Android, where the history is just totally borked. I think it has to do with overuse of their “turbolinks” (unsure if they are based on the project named Turbolinks) vs regular full links, and I have no idea how to debug or fix it. Perhaps monitor uses of the History API along with a window.location logging hook?
I want to fix this via an extension or user script since GH seems to not care. Maybe the Refined GitHub folks could help?
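As a starting point for such a userscript, here’s a hedged sketch of the logging hook suggested above (`installHistoryLogger` is a made-up name; in a page you would call it with `window.history` and `console.log`):

```javascript
// Hypothetical debugging hook: wrap a History-like object so every
// pushState/replaceState call is logged with the URL it navigates to.
function installHistoryLogger(hist, log) {
  for (const method of ["pushState", "replaceState"]) {
    const original = hist[method].bind(hist);
    hist[method] = (state, title, url) => {
      log(`history.${method} -> ${url}`);
      return original(state, title, url);
    };
  }
}

// In a userscript:
//   installHistoryLogger(window.history, console.log);
//   window.addEventListener("popstate", () => console.log(location.href));
```

Comparing the logged URLs against what actually renders would at least show whether the history entries or the page updates are the part going wrong.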
This made me curious as to whether Turbo could be disabled with a uBlock Origin rule. I came up with the following:
||github.githubassets.com/assets/vendors-node_modules_github_turbo_dist_turbo_es2017-esm_js-*.js$script,domain=github.com
All the links that were Turbo enabled now fall back to being normal page navigations.
One problem with this is that it could break buttons and links that rely on data-turbo-method to send POST, PATCH, or DELETE requests instead of a GET request.
GitHub has an extremely slow UI, but a very good and comprehensive GraphQL API. If anyone is interested, replacing GitHub’s UI is a perfect fit for Isograph. See, for example, this conference talk, where I use Cursor to make non-trivial changes to an Isograph app built on the GitHub GraphQL API. TLDR: it should be really easy to recreate 80% of GitHub’s functionality with immensely better performance.
Anyway, if anyone is up for it, I non-ironically think that this would be a great way to get your name out there.
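For anyone curious what building on that API looks like, a minimal sketch against GitHub’s GraphQL endpoint (the endpoint and query fields are real; the owner/name values, token handling, and the `fetchOpenPRs` name are illustrative placeholders):

```javascript
// Sketch: fetching a repo's open PRs via GitHub's GraphQL API.
const query = `
  query ($owner: String!, $name: String!) {
    repository(owner: $owner, name: $name) {
      pullRequests(last: 5, states: OPEN) {
        nodes { number title author { login } }
      }
    }
  }`;

async function fetchOpenPRs(token, owner, name) {
  const res = await fetch("https://api.github.com/graphql", {
    method: "POST",
    headers: {
      Authorization: `bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query, variables: { owner, name } }),
  });
  const { data } = await res.json();
  return data.repository.pullRequests.nodes;
}
```

One query like this returns everything a PR list page needs in a single round trip, which is a big part of why an alternative frontend could plausibly be fast.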
They just rewrote the entire PR interface to be react based, no?
But also this is true to form for Microsoft. This is the company that shoves MS Teams down everybody’s throat and is proud of it.
Also Github Actions is down so often that it’s a great poster child for Azure Cloud.
In situations like these, people come up with alternative front-ends; in this specific case, there’s gothub*. Or in general, LibRedirect. Among these, the most memorable names amongst frontends, at least for me, are: Genius → Dumb, Intellectual.
* Not to be confused with this gothub, which is a forge for Game of Trees, also named well.
I absolutely dread any interaction with GitHub and its slow, AI-trash, ad-riddled interface. Same goes for Slack.
Ah, I am not the only one who feels this. Also, GitHub uses a tonne of memory for a website compared to the server side rendered days. It really has been enshittified by now which is a damn shame.
Apologies for related question: I got a charge for $684 from Github out of the blue this morning. I do absolutely nothing on Github that would cost remotely this much. Anyone else get spurious charges from them recently?
I got a $144 charge last month that turned out to be sponsorships (yearly billing). You wouldn’t have known it from the email, which only lists stats about Actions and Copilot usage (minimal for the former, 0 for the latter in my case).
For a while I was on the other side of this, because I primarily work on a quite large repository where (before any of the React-style integrations) page loads would regularly take 1-3 seconds each, which sucked. I actually went as far as running all my GitHub traffic through an on-machine Squid proxy with more aggressive cache policies to try to work around it. From that perspective, the relatively snappy React transitions felt like a breath of fresh air.
But now the slowness has returned regardless, and because of the migration it has started to spread to smaller repositories too. I’m not sure there’s a way out for GitHub other than prioritizing it.