Unauthorized Experiment on r/changemyview Involving AI-generated Comments

37 points by carsyoursken


insanitybit

Seems straightforwardly unethical and the claims made for justification are pretty weak. Even if you can justify the claim that there is such a large gap in our understanding of how to convince people of things (there obviously is not, and LLMs do not change anything about what we know about convincing people) that it merits experimentation of this nature, what is the justification for this experiment to fill that gap? The university’s response of a “formal warning” seems insufficient.

The experiment’s value seems extremely weak and the ethics are obviously questionable (tbh the question is perhaps only one of degree — it’s clearly unethical, and the university seems to grant as much), so any sort of cost/benefit analysis here seems unjustifiable.

Taking this experiment in the best possible way, what would it show? That an LLM can convince people of things? That is obvious. That an LLM that takes on a human persona is effective at convincing people of things? Again, obvious. That using information about a person to target a response increases the chances of convincing them? This is well researched and obvious. The introduction of an LLM changes nothing here and there is already a body of research on the topic - justifications for this based on “we don’t know and this is the only way to find out” are just false.

carter

Reading the examples that the LLM generated, I can only presume that the researchers are terrible humans. Like really nasty stuff.

ocramz

“not strictly a bot”, “not strictly spam”, “impossible to ask user consent”, etc.: the Overton window of AI-human interaction shifts under steady, deliberate pressure. What will it be next time?

jrandomhacker

Makes me think of the time that the University of Minnesota tried to introduce security bugs into the Linux kernel as an experiment.

dubiouslittlecreature

I haven’t seen someone do something ethical with an LLM since they stopped being simple toys.