The Evolving Role of the ML Engineer

In the Author Spotlight series, TDS Editors chat with members of our community about their career path in data science and AI, their writing, and their sources of inspiration. Today, we’re thrilled to share our conversation with Stephanie Kirmer.

Stephanie is a Staff Machine Learning Engineer, with almost 10 years of experience in data science and ML. Previously, she was a higher education administrator and taught sociology and health sciences to undergraduate students. She writes a monthly post on TDS about social themes and AI/ML, and gives talks around the country on ML-related subjects. She’ll be speaking on strategies for customizing LLM evaluation at ODSC East in Boston in April 2026.

You studied sociology and the social and cultural foundations of education. How has your background shaped your perspective on the social impacts of AI?

I think my academic background has shaped my perspective on everything, including AI. I learned to think sociologically through my academic career, and that means I look at events and phenomena and ask myself things like “what are the social inequalities at play here?”, “how do different kinds of people experience this thing differently?”, and “how do institutions and groups of people influence how this thing is happening?”. Those are the kinds of things a sociologist wants to know, and we use the answers to develop an understanding of what’s going on around us. Essentially, I’m building a hypothesis about what’s going on and why, and then earnestly seeking evidence to prove or disprove it; that’s the sociological method.

You have been working as an ML Engineer at DataGrail for more than two years. How has your day-to-day work changed with the rise of LLMs?

I’m actually in the process of writing a new piece about this. I think the progress of code assistants using LLMs is really fascinating and is changing how a lot of people work in ML and in software engineering. I use these tools to bounce ideas off, to get critiques of or alternatives to my approach to a problem, and for scut work (writing unit tests or boilerplate code, for example). I think there’s still a lot for people in ML to do, though, especially applying the skills we’ve acquired from experience to unusual or unique problems. And all this is not to minimize the downsides and dangers of LLMs in our society, of which there are many.

You’ve asked if we can “save the AI economy.” Do you believe AI hype has created a bubble similar to the dot-com era, or is the underlying utility of the tech strong enough to sustain it?

I think it’s a bubble, but that the underlying tech is really not to blame. People have created the bubble, and as I described in that article, an unimaginable amount of money has been invested under the assumption that LLM technology is going to produce results that command commensurate profits. I think this is silly, not because LLM technology isn’t useful in some key ways, but because it isn’t $200 billion+ useful. If Silicon Valley and the VC world were willing to accept good returns on a moderate investment, instead of demanding immense returns on a gigantic investment, I think this could be a sustainable space. But that’s not how it has turned out, and I just don’t see a way out of this that doesn’t involve a bubble bursting eventually.

A year ago, you wrote about the “Cultural Backlash Against Generative AI.” What can AI companies do to rebuild trust with a skeptical public?

This is tough, because I think the hype has set the tone for the blowback. AI companies are making outlandish promises because the next quarter’s numbers always need to show something spectacular to keep the wheel turning. People who look at that and sense they’re being lied to naturally have a sour taste about the whole endeavor. It won’t happen, but if AI companies backed off the unrealistic promises and instead focused hard on finding reasonable, effective ways to apply their technology to people’s actual problems, that would help a lot. It would also help if we had a broad campaign of public education about what LLMs and “AI” really are, demystifying the technology as much as we can. But the more people learn about the tech, the more realistic they will be about what it can and can’t do, so I expect the big players in the space won’t be inclined to do that either.

You’ve covered many different topics in the past few years. How do you decide what to write about next? 

I tend to spend the month in between articles thinking about how LLMs and AI are showing up in my life, the lives of people around me, and the news, and I talk to people about what they’re seeing and experiencing with it. Sometimes I have a specific angle that comes from sociology (power, race, class, gender, institutions, etc.) that I want to use as framing for a look at the space, or sometimes a specific event or phenomenon gives me an idea to work with. I jot down notes throughout the month, and when I land on something I feel really interested in and want to research or think about, I’ll pick that for the next month and do a deep dive.

Are there any topics you haven’t written about yet, and that you are excited to tackle in 2026? 

I honestly don’t plan that far ahead! When I started writing a few years ago I wrote down a big list of ideas and topics, and I’ve completely exhausted it, so these days I’m at most one or two months ahead of the page. I’d love to get ideas from readers about social issues or themes that collide with AI that they’d like me to dig into further.

To learn more about Stephanie’s work and stay up-to-date with her latest articles, you can follow her on TDS or LinkedIn.
