Yenting Lin
I will be joining Google DeepMind as a Research Scientist.
At Meta, I co-developed a scalable RL pipeline that improves long-form reasoning capabilities. I completed my Ph.D. in 2025 at National Taiwan University, advised by Professor Yun-Nung (Vivian) Chen.
Internship Experience
- Meta GenAI: Enhanced long-form reasoning with stepwise feedback
- NVIDIA Research: Developed error correction methods for multimodal language models
- Amazon Alexa AI: Built a factuality evaluation agent and developed synthetic data techniques
Selected Publications
- Step-KTO: Optimizing Mathematical Reasoning through Stepwise Binary Feedback (arXiv 2025)
- Measuring Taiwanese Mandarin Language Understanding (COLM 2024)
- LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models (NLP4ConvAI 2023)
- Selective In-Context Data Augmentation for Intent Detection using Pointwise V-Information (EACL 2023)
- Knowledge-Grounded Conversational Data Augmentation with Generative Conversational Networks (SIGDIAL 2022)
- SalesBot: Transitioning from Chit-Chat to Task-Oriented Dialogues (ACL 2022)
Featured Projects
- Taiwan-LLM: Traditional Chinese LLMs for Taiwan
- Mistral-Small-Reasoning: Open-weight reasoning model
Fun Facts
- Proud owner of two cats
- Completed a full marathon
- Part-time (trance) DJ during college