An impulsive rant on AI

February 21, 2026 10:48 AM ET work, society, management

Almost twenty years ago, I was in a company leadership meeting, struggling to see eye to eye with others on priorities for the business. We seemed to be pursuing more work than we could staff, and not for any clear reason. “We have to grow the business,” I was told. I asked, “Why? Who says we have to?” I think books were cited. Or maybe just “any business textbook.”

This should’ve been my first clue. Though that company eventually became a certified B Corp, “business” was still their mindset; it wasn’t mine.

Years later, after having children, I saw that mentality laid out in a book about business by Dr. Seuss. “Business is business, and business must grow,” it read. The story is often viewed as a morality tale about protecting the environment, but I read it as being about the emptiness and futility of greed.

This should’ve been my second clue. I was seeing a broader message in a straightforward children’s book.

Earlier this month (and in truth, for the past year or so), my employer has been looking for ethical and reasonable ways to use AI, and they have pushed that desire from senior leadership down to front-line individual contributors. We are now encouraged to share things we’ve made with AI, or tips and tricks for using it. But I have the same question I had twenty years ago: why?

Doesn’t everyone want to be more productive? Shouldn’t a company want to do more with less? More output with less expense. Why would a company say no to that? Business must grow!

I don’t need clues anymore: I know I’m a class warrior.

It’s an awkward position as an engineering manager: your duties are split between being a supportive agent for employees and being an advocating agent for the company. The way I’ve been able to navigate this dichotomy is to find the place where these two perspectives overlap. But this one’s harder, because I disagree with it personally.

Will I use AI at work? If I must, sure. Will I use AI at work to make myself more productive? Doubtful.

“Doing more” doesn’t make me more productive. My job doesn’t consist of producing widgets or writing lines of code. My value isn’t measured in units of work produced. And this is an easy argument to make when you’re a people manager. There are only a few places I even could use “AI,” and I refuse to use it in the most obvious ones (writing performance reviews and quarterly reports) because a) I am ultimately more knowledgeable if I actually gain the knowledge to write the report myself, and b) using my own voice in writing is a valuable life skill, and a major part of building and maintaining relationships at work.

This is supposedly a harder argument to make if your job is to produce widgets or lines of code. After all, if you can create more stuff, you get more done, and “getting more done” equals more production. But more production doesn’t mean we ship more widgets. (At least in software, more production doesn’t mean more users.) And I’m still stuck on the “why.” If I’m a software engineer, I’m not paid by the amount of things I produce, so producing more doesn’t earn me more. AI being used to “increase worker productivity” is clearly a benefit for the employer.

“But wait,” one might say, “isn’t it also a benefit for the employee?” One could argue that a more productive employee is going to be compensated by the employer: a better raise, bonuses, promotions, and so on, so it’s also in the employee’s best interests to use AI to “do more.” But using AI doesn’t make you better at what you do, unless what you do is measured only by volume. “Good” use of AI is only possible if you know enough to assess what it gives you, instead of blindly accepting whatever comes back. Over time, sure, reading lots of other examples will probably improve your knowledge, because spending time reading what someone else has done to solve a problem and understanding why they did it that way is, you know, literally learning. Learning makes you better. Spitting out copies of what someone else has done to solve a problem does not make you better. That’s not gaining knowledge.

This is without even considering – without even mentioning – the cost of using AI. We’ve let the Overton window be shifted on us: now that AI is commonly available for chat or code generation or whatever, we’re no longer talking about the price of that availability. Is AI cheaper than it used to be? Sure. Is it more fuel efficient? No. Even if you shift away from SaaS use to a local LLM, you’re just moving the energy expenditure from a data center to your room.

My brain frames this all a bit differently: the bourgeoisie have a tool that is being promoted as ubiquitous, and they are forcing the proletariat to use it, because it benefits the bourgeoisie. It’s being sold as also benefitting the proletariat, but given that the bourgeoisie determines the compensation, the game is rigged. The bourgeoisie says that using AI is now the standard because it means the proletariat can make more; and because it’s the new standard, the bourgeoisie compensates the proletariat the same as before, despite getting more production out of them. Adding injury to this insult, the tool actively makes surviving more expensive (for both groups), but that’s OK because revenue.

But I’m biased. None of what I consider self improvement is about the rate of production. My music writing is not better if I write more; it’s better if the quality of it improves or if it evolves my aesthetic. It’s not better music if it’s more like the aggregate canon of Western music from the past 200 years. My writing isn’t better because there’s more of it or because its tone is more like the aggregate tone over the past 50 years; it’s better if it’s efficient and successful and represents my voice. My parenting isn’t improved by AI. My understanding of individuals and sense of team goals isn’t improved by using AI. AI does not help me think and reason and act. And if AI use isn’t helping me improve, but it is benefitting my employer, then it’s pretty clear to me why companies want to make AI part of the new standard for how to work.

Can AI be useful? In certain situations, sure. Have I used it to quickly explain a complex Excel formula I wrote years ago that is annoying to parse? Yes. But until AI sheds its negative side effects as an environmental spoiler and a tool of exploitation, I’m investing in my brain first.

I promise the next blog post will be about music.