The Evolving Role of the ML Engineer


You studied sociology and the social and cultural foundations of education. How has your background shaped your perspective on the social impacts of AI?

I think my academic background has shaped my perspective on everything, including AI. I learned to think sociologically through my academic training, meaning I look at events and phenomena and ask myself things like "what are the social inequalities at play here?", "how do different kinds of people experience this thing differently?", and "how do institutions and groups of people influence how this thing is happening?". Those are the sorts of things a sociologist wants to know, and we use the answers to develop an understanding of what's happening around us. I construct a hypothesis about what's happening and why, and then earnestly search for evidence to prove or disprove that hypothesis, and that's the sociological method, essentially.

You've been working as an ML Engineer at DataGrail for more than two years. How has your day-to-day work changed with the rise of LLMs?

I'm actually in the process of writing a new piece about this. I think the progress of code assistants built on LLMs is really fascinating and is changing how a lot of people work in ML and in software engineering. I use these tools to bounce ideas off, to get critiques of my approaches to problems or to get alternatives to my approach, and for scut work (writing unit tests or boilerplate code, for instance). I think there's still a lot for people in ML to do, though, especially applying the skills we've acquired from experience to unusual or unique problems. And all this isn't to minimize the downsides and dangers of LLMs in our society, of which there are many.

You've asked if we can "save the AI economy." Do you believe AI hype has created a bubble similar to the dot-com era, or is the underlying utility of the tech strong enough to sustain it?

I think it's a bubble, but that the underlying tech is really not responsible. People have created the bubble, and as I described in that article, an incredible amount of money has been invested under the assumption that LLM technology is going to produce some sort of results that can command commensurate profits. I think that is silly, not because LLM technology isn't useful in some key ways, but because it isn't $200 billion+ useful. If Silicon Valley and the VC world were willing to accept good returns on a moderate investment, instead of demanding immense returns on a huge investment, I think this could be a sustainable space. But that's not how it has turned out, and I just don't see a way out of this that doesn't involve a bubble bursting eventually.

A year ago, you wrote about the "Cultural Backlash Against Generative AI." What can AI companies do to rebuild trust with a skeptical public?

This is hard, because I think the hype has set the tone for the blowback. AI companies are making outlandish promises because the next quarter's numbers always need to show something spectacular to keep the wheel turning. People who look at that and sense they're being lied to naturally have a sour taste about the whole endeavor. It won't happen, but if AI companies backed off the unrealistic promises and instead focused hard on finding reasonable, effective ways to apply their technology to people's actual problems, that would help a lot. It would also help if we had a broad campaign of public education about what LLMs and "AI" really are, demystifying the technology as much as we can. But the more people learn about the tech, the more realistic they will be about what it can and can't do, so I expect the big players in the space will not be inclined to do that.

You've covered many different topics in the past few years. How do you choose what to write about next?

I tend to spend the month in between articles thinking about how LLMs and AI are showing up in my life, the lives of people around me, and the news, and I talk with people about what they're seeing and experiencing with it. Sometimes I have a particular angle that comes from sociology (power, race, class, gender, institutions, etc.) that I want to use as framing to examine the space, or sometimes a particular event or phenomenon gives me an idea to work with. I jot down notes throughout the month, and when I land on something I feel really interested in, and want to research or think about, I'll pick that for the next month and do a deep dive.

Are there any topics you haven't written about yet that you are excited to tackle in 2026?

I honestly don't plan that far ahead! When I started writing a couple of years ago I wrote down a huge list of ideas and topics, and I've completely exhausted it, so these days I'm at most one or two months ahead of the page. I'd love to get ideas from readers about social issues or themes that collide with AI that they'd like me to dig into further.

To learn more about Stephanie's work and stay up-to-date with her latest articles, you can follow her on TDS or LinkedIn.
