How should we evaluate AI’s impact on TILM?

I’d like to offer a few lenses to look at this with more clarity and less anxiety:

1. Different tasks, different impacts.
Interpreting, translation, and localization involve distinct skills, workflows, and cognitive demands. The effect of AI on one does not automatically predict the impact on the others.

2. Impact varies by domain.
Legal contracts, literary essays, game dialogue, and medical forms all require different styles, terminologies, and approaches to managing linguistic assets. AI's impact therefore needs to be evaluated domain by domain, not in general.

3. Impact varies by language pair and direction.
Some language pairs are “high-resource”—AI models have been trained on large, clean corpora. Others are “low-resource,” with much weaker performance. 

Even within a pair, direction matters: EN→ZH may look fluent, but ZH→EN can produce awkward translationese.

4. If AI performs well in one area, what does that tell us?
When a task is easily handled by AI, it’s worth asking why.
What is it about that task—its structure, its predictability, its data richness—that makes it automatable?

This is a clue. If machines are outperforming humans in some areas, then our next question should be: Where do humans still outperform machines, and why?

We’re already seeing the emergence of new roles—ones that require broader knowledge structures, deeper social insight, stronger reasoning, and sounder ethical judgment.

So the real question becomes:
How do existing roles evolve to absorb new functions? What new skill structures can we build into our current work to increase resilience?

Even before generative AI, every profession had to adapt over time. Now, part of that adaptation includes engaging with technology and thinking carefully about how we evolve with it.

This isn’t about optimism or pessimism. It’s about staying grounded and making space for honest assessment.
And hopefully, helping more people in our field find where they stand.
