Before using AI in class, make sure to understand its pros and cons, align it with your teaching goals, and discuss your plan openly with students.

We’re seeing a new kind of fatigue: AI fatigue. Students are unsure what’s allowed, what’s ethical, or how AI will impact their learning. Many don’t trust it.
And to be honest, many educators don't either. That's why, before introducing AI into a course, we must present AI as a specific tool with specific use cases, not just an abstract concept. Deeper than that, we must start with understanding.
This article is a guide for translation and interpretation faculty, and for any educator who wants to use AI with professionalism, clarity, and care.

1. Understand the Attitudes Around AI
Before you use AI in your teaching, take a step back and ask: how do my students feel about it?
Not everyone welcomes technology in the same way. Many worry about privacy, surveillance, and fairness. Some may feel that AI use gives an unfair advantage to those who are tech-savvy. Others may simply find the whole topic overwhelming.
Acknowledge this emotional and cultural reality. Don’t assume everyone is on the same page, or even the same book. Understanding student attitudes is the first step toward responsible adoption.

2. Apply an Ethical Lens Before Any Deployment
AI is not neutral; it reflects the values and intentions behind how it's used. To grasp AI ethics in a practical and structured way, it's helpful to consider two key aspects:
- Timeline (Before and after deployment)
- Dimensions (Fairness, responsibility, transparency, privacy)
Interpreter's Memory wrote this article about an AI ethics framework, which might be helpful: → Thinking Ethically About AI in Language Work
Ethical integration means thinking beyond convenience. Consider how consent is handled, whether the tool’s data practices are transparent, and how you will maintain a human-at-the-heart approach.
Especially in assessment or grading contexts, even casual AI use can have serious consequences. Find deeper discussions in this article: → Use LLMs' Feedback with Caution
Ethics in AI isn’t only about how you teach — it’s also part of what you teach.
In any professional field, ethical use of AI is quickly becoming a core competency. As a faculty member, your students will look to you not only for subject-matter guidance but also for how to think ethically and critically about AI.
This means that responsible AI use is both a teaching practice and educational content. Every professional interpreter, translator, or educator has a responsibility to model the kind of reflective, profession-specific AI ethics that students can carry into their future work.

3. Build Technological Understanding to Make Informed Decisions
You don’t need to become a developer, but you do need a working knowledge of what AI is and isn’t.
◼ Know the difference between a neural network and a large language model (LLM)
◼ Understand that LLMs are not search engines: while some can perform web-browsing tasks, they still invent examples, miss key options, and are not accountable for misinformation
◼ Distinguish between cloud-based and local models, and know what happens to the data you input
◼ Recognize bias in training data, including the underrepresentation of many languages and perspectives
◼ Be aware that LLMs are generally strong in generative tasks but weaker in factual accuracy
↓
With this knowledge, you can evaluate tools more critically, and make better choices about what to bring into your classroom. Think about:
○ Does this tool respect student data?
○ Is it reliable for the specific task I want (e.g., transcription, glossary building)?
○ Does it align with my course objectives and improve the learning experience?
↓
A good rule: don’t introduce a tool unless you can explain exactly what problem it solves for your students.
Keep your toolset simple, well-tested, and directly tied to your pedagogy. :)

4. Communicate with Students Before You Deploy
It’s important to tell your students what you’re doing and why. You can:
○ Add an AI usage statement in your syllabus: clarify how AI will be used, what is optional vs. required, and how it supports learning
○ Discuss it openly in class, and invite student perspectives
○ Offer alternatives if students are uncomfortable
○ Emphasize that your use of AI is meant to support them, not replace your own engagement, or theirs
Creating a classroom culture of transparency and consent goes a long way toward building trust. It isn’t just part of good communication; it’s a key part of ethical and professional practice.
When you make this a norm in your classroom culture, you’re helping students build habits for the future. In a workplace where AI is becoming increasingly common, professionals who can clearly explain and ethically justify their use of technology will stand out.
You're modeling for your students how to engage with AI critically and transparently.

5. Stay Open — To Feedback, Change, and Uncertainty
Even among developers and engineers, attitudes toward AI vary. It’s okay if your students, or your colleagues, disagree with your approach. The important thing is to remain open-minded. Ask for feedback. Revise your methods. Be willing to unlearn and adapt.
Technology evolves. So do policies around AI. So should we.