Like so many LLM-based workflows before it, vibe coding has attracted strong opposition and sharp criticism, not because it offers no value but because of unrealistic, hype-driven expectations.
The idea of leveraging powerful AI tools to experiment with app-building, generate quick-and-dirty prototypes, and iterate rapidly seems uncontroversial. The problems usually begin when practitioners take whatever output the model produces and assume it's robust and error-free.
To help us sort through the good, bad, and ambiguous aspects of vibe coding, we turn to our experts. The lineup we've prepared for you this week offers nuanced and pragmatic takes on how AI code assistants work, and on when and how to use them.
The Unbearable Lightness of Coding
“The amount of technical doubt weighs heavily on my shoulders, much more than I’m used to.” In her powerful, brutally honest “confessions of a vibe coder,” Elena Jolkver takes an unflinching look at what it means to be a developer in the age of Cursor, Claude Code, et al. She also argues that the path forward entails acknowledging both vibe coding’s speed and productivity benefits and its (many) potential pitfalls.
How to Run Claude Code for Free with Local and Cloud Models from Ollama
If you’re already sold on the promise of AI-assisted coding but are concerned about its nontrivial costs, you shouldn’t miss Thomas Reid’s new tutorial.
How Cursor Actually Indexes Your Codebase
Curious about the inner workings of one of the most popular vibe-coding tools? Kenneth Leung takes a detailed look at the RAG pipeline that lets Cursor's coding agents index your codebase and retrieve relevant context efficiently.
This Week’s Most-Read Stories
In case you missed them, here are three articles that resonated with a wide audience in the past week.
Going Beyond the Context Window: Recursive Language Models in Action, by Mariya Mansurova
Explore a practical approach to analyzing massive datasets with LLMs.
Causal ML for the Aspiring Data Scientist, by Ross Lauterbach
An accessible introduction to causal inference and ML.
Optimizing Vector Search: Why You Should Flatten Structured Data, by Oleg Tereshin
An analysis of how flattening structured data can boost precision and recall by up to 20%.
Other Recommended Reads
Python skills, MLOps, and LLM evaluation are just a few of the topics we’re highlighting with this week’s selection of top-notch stories.
- Why SaaS Product Management Is the Best Domain for Data-Driven Professionals in 2026, by Yassin Zehar
- Creating an Etch A Sketch App Using Python and Turtle, by Mahnoor Javed
- Machine Learning in Production? What This Really Means, by Sabrine Bendimerad
- Evaluating Multi-Step LLM-Generated Content: Why Customer Journeys Require Structural Metrics, by Diana Schneider
- Google Trends Is Misleading You: How to Do Machine Learning with Google Trends Data, by Leigh Collier
Meet Our New Authors
We hope you take the time to explore excellent work from TDS contributors who recently joined our community:
- Luke Stuckey looked at how neural networks approach the question of musical similarity in the context of recommendation apps.
- Aneesh Patil walked us through a geospatial-data project aimed at estimating neighborhood-level pedestrian risk.
- Tom Narock argued that the best way to tackle data science's "identity crisis" is by reframing it as an engineering practice.
We love publishing articles from new authors, so if you’ve recently written an interesting project walkthrough, tutorial, or theoretical reflection on any of our core topics, why not share it with us?