How I got Claude to teach me dbt

12 points by rmoff


edsu

I thought this article had some useful tips and perspective on using coding agents for learning. It seemed like the author was confident they had learned about dbt because Claude had carved out time for them to do some hands-on work, which was really cool.

But I was confused at the end:

Just as you should recognise that typing 6+7 into a calculator should yield 13 and not 42, the same goes for the use of AI. As I noted above, for example: AI hallucinates. That doesn’t mean you shouldn’t use it, but rather that you shouldn’t trust blindly what it’s saying. Just like a calculator.

We DO generally trust calculators, right? Needing to review the output of LLM-driven processes is one of the reasons we can use them as learning tools: they don't completely abstract away the need to understand the details, assuming we can spot the problems. Maybe this paragraph was written with Claude? :-)

Self-reported learning is definitely interesting and useful. But I am looking forward to reading more of what social scientists find when studying what/how learning happens with LLMs, e.g. https://www.changetechnically.fyi/2396236/episodes/18692591-you-can-learn-with-ai

Pointers to studies like that would be welcome.

gcupc

Without an explanation of the acronym, I initially expanded dbt to "dialectical behavioral therapy" and thought, well I guess it's better than using an LLM as a therapist directly...

jorsk

Ignoring all the hype and credulous claims from AI boosters, I personally believe that having a textbook or documentation you can actually interact with and ask questions to is the most compelling aspect of this technology. It's good to see other people having similar experiences.

Last week I sat down to bang out a small C/SDL project for fun and to unwind. I've been using Linux for almost 30 years, but my history as a developer has mostly been with high-level interpreted languages; I've never really gotten familiar with C and the GCC toolchain, and always bounced off the obtuse and scattered documentation when I was younger.

I explicitly told Claude not to write any code for me. Instead I outlined what I was doing and my ideas on how to do it, and just had it answer pointed "Why?" and "What?" and "How does this work?" questions. It's legitimately the only time I've come out the other end of manipulating the magical bag of weights feeling smarter than when I started.

wrs

Claude Code has a built-in /output-style learning mode that implements a similar idea. It makes the agent emit more explanatory "why" and inserts TODO(human) comments rather than finishing the code.