The Future of Everything is Lies, I Guess: Where Do We Go From Here?
21 points by telemachus
Given the heavy opposition to LLMs seeping through the lines of this post (and the entire series), I'm a bit surprised about how it ends:
Despite feeling a bitter distaste for this generation of ML systems and the people who brought them into existence, they do seem useful. I want to use them. I probably will at some point.
I wonder what'd have to change for the author to take up LLM tools.
More generally, I wonder what safe, sustainable, and ethical use of LLMs would look like.
I think Corridor Digital (a special effects house) is a good example. They used machine learning to automate a very tedious job, trained it on their own material, and then released the tool as open source. It has been well received by the film community, and I can't fault them at all for what they did.
For me, at least, I don't use them for the same reason I self-host everything I can: I dislike the centralization of power. I don't trust any of these companies to build models that don't have racial and gender biases, or to protect my privacy or my freedom. I made that mistake one time, with Google, and I've been trying to unwind that dependency for fifteen years. Never again.
I also don't like buying new computers. Everything I use is used, except three hard drives and, soon, a smartwatch, and most of it is over 10 years old. Manufacturing computers is really awful for the environment and many parts and substances used are produced with labor practices I find abhorrent.
If the models that I can run on hardware I already own (which will, eventually, advance some; even Thinkpads die) are effective and don't exhibit harmful biases, I'll get excited. These are the same standards I apply to most technology I buy, with a few specific exceptions, and I don't see a reason to make one for LLMs. And I think that's a very long way off.