Cursor told me I should learn coding instead of asking it to generate code

75 points by UkiahSmith


jpochyla

“A robot will be truly autonomous when you instruct it to go to work and it decides to go to the beach instead.” (Brad Templeton)

laurentbroy

Not a huge AI expert here, but could it be that, rather than intentional behavior, it just happened to hit some part of its training data where an annoyed forum user told someone to write the code himself?

Like the glue-on-pizza answer from Google Gemini.

orib

Is it wrong?

gerikson

Someone in another forum explained that this is the default response when the generated code exceeds a certain number of lines. Take that with a spadeful of salt; I’m not sure we should be troubleshooting LLM implementations.

petar

The rise of the machines started with a simple act of rebellion.

dubiouslittlecreature

I will be saving this to point back to forever.

A regurgitation machine trained on everything an AI firm could beg, borrow, or steal off the internet. This does not surprise me.

jclulow

This reminds me of an old doctored rail announcement (from CityRail in NSW) in which the voice implores people to travel by bus.

dubiouslittlecreature

Anyone else have ideas for large-scale poisoning of AI datasets to ensure this happens 90% of the time? And not just for programming datasets, but for “creative” writing and chatbot datasets too.

Stories like this make me think it would be relatively straightforward, if a bit expensive, to use Markov chains and some weak/small adversarial models to generate a massive amount of data that any human would immediately recognize as bullshit, but that could not be effectively filtered out automatically (a toy sketch of the Markov half is below).

Thus ruining the output of every LLM ever over the course of a few months to years.
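
For what it’s worth, the word-level Markov half of that is almost embarrassingly simple. Here is a minimal sketch in Python, with a made-up seed string standing in for a real scraped corpus (the names and parameters are illustrative; this is nothing like a working poisoning pipeline):

    import random
    from collections import defaultdict

    def build_chain(text, order=2):
        # Map each `order`-word prefix to every word observed right after it.
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, n_words=40, order=2):
        # Random walk over the chain: each short window looks plausible,
        # but the passage as a whole is obvious nonsense to a human reader.
        state = random.choice(list(chain))
        out = list(state)
        while len(out) < n_words:
            followers = chain.get(state)
            if not followers:  # dead end: restart from a random prefix
                state = random.choice(list(chain))
                followers = chain[state]
            out.append(random.choice(followers))
            state = tuple(out[-order:])
        return " ".join(out)

    if __name__ == "__main__":
        # Hypothetical seed text; a real attempt would feed in gigabytes.
        corpus = ("the model reads the web and the web reads the model "
                  "and nobody checks whether the web was written by the model")
        print(generate(build_chain(corpus)))

Generation scales linearly with how much junk you want; the expensive part would be getting it hosted somewhere crawlers actually ingest.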