How I got Claude to teach me dbt
12 points by rmoff
I thought this article had some useful tips and perspective on using coding agents for learning. It seemed like the author was confident they had learned about dbt because Claude had carved out time for them to do some hands-on work, which was really cool.
But I was confused at the end:
Just as you should recognise that typing 6+7 into a calculator should yield 13 and not 42, the same goes for the use of AI. As I noted above, for example: AI hallucinates. That doesn’t mean you shouldn’t use it, but rather that you shouldn’t trust blindly what it’s saying. Just like a calculator.
We DO generally trust calculators right? Needing to review the output of LLM-driven processes is one of the reasons we can use them as learning tools (assuming we can spot the problems): they don't completely abstract away the need to understand the details. Maybe this paragraph was written with Claude? :-)
Self-reported learning is definitely interesting and useful. But I am looking forward to reading more of what social scientists find when studying what/how learning happens with LLMs, e.g. https://www.changetechnically.fyi/2396236/episodes/18692591-you-can-learn-with-ai
Pointers to studies like that would be welcome.
We DO generally trust calculators right?
I suspect the principle being referred to here is that you should smoke test the output of a calculator to detect mistakes in the input process, e.g. if you're solving a word problem and end up with "Bob is running the marathon at -278385 mph", you probably fucked up somewhere along the line.
(But this is of course very different from the reason that you should check LLM output, which is that it may generate the wrong result even given a correct input.)
If you're working quickly and type 6*7 instead of 6+7, you'll get 42 -- perhaps the author is pointing out that human error is common when using even 'infallible' tools: being imprecise, getting the wrong answer, and not checking it is very human.
We DO generally trust calculators right?
Solar-powered calculators can hit some fun failure modes in low-light conditions.
Without an explanation of the acronym, I initially expanded dbt to "dialectical behavioral therapy" and thought, well I guess it's better than using an LLM as a therapist directly...
I had a similar first reaction, and I assumed it was going to be indirect lessons about mindfulness when working with an unthinking code-bot.
Admittedly, pleasantly surprised to see it used to successfully learn a new thing.
Ignoring all the hype and credulous claims from AI boosters, I personally believe that having a textbook or documentation you can actually interact with and ask questions to is the most compelling aspect of this technology. It's good to see other people having similar experiences.
Last week I sat down to bang out a small C/SDL project for fun and to unwind. I've been using Linux for almost 30 years, but my history as a developer has mostly been with high-level interpreted languages; I've never really gotten familiar with C and the GCC toolchain, and always bounced off the obtuse and scattered documentation when I was younger.
I explicitly told Claude not to write any code for me, but instead outlined what I was doing and my ideas on how to do it, and just had it answer pointed "Why?", "What?", and "How does this work?" questions for me. It's legitimately the only time I've come out the other end of manipulating the magical bag of weights feeling smarter than when I started.
Claude Code has a built-in /output-style learning mode that implements a similar idea. It makes the agent emit more explanatory "why" and inserts TODO(human) comments rather than finishing the code.