Internship: Person-Centric Multimodal Large Language Model F/M
- Contract type: Internship
- Work time: Full time
- Location: Meylan
About NAVER LABS Europe
NAVER LABS Europe is part of the R&D division of NAVER, Korea’s leading Internet portal and a global tech company with a range of services that include search, commerce, content, fintech, robotics and cloud.
The position
- Propose an approach for efficiently adapting person-oriented representations from visual and audio modalities to an MLLM
- Create a benchmark based on existing datasets, focusing on challenging person-centric instances
- Propose an evaluation methodology outlining the benefits of the added features in terms of explainability
Details:
- Duration: six months
- Start date: as soon as possible
About the research team
In the Interactive Systems group, we develop AI capabilities that enable robots to interact safely with humans, other robots, and systems. For a robot to be truly useful, it must represent its knowledge of the world, share what it learns, and interact with other agents, particularly humans. Our research integrates expertise in human-robot interaction, natural language processing, speech, information retrieval, data management, and low-code/no-code programming to create AI components that empower next-generation robots to perform complex real-world tasks.
What we're looking for
- Master’s student with an excellent profile or already enrolled in a PhD program
- Strong background in computer vision and deep learning
- Good understanding of multimodal transformers, VLMs/LLMs, and 3D human representations
- Experience with frameworks such as PyTorch, torchvision, and Hugging Face Transformers
- Familiarity with LoRA fine-tuning, feature fusion, and multimodal training strategies (see the sketch after this list)
- Curiosity for human-centered AI and an interest in modeling social interactions in videos
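As a rough illustration of the kind of parameter-efficient adaptation referred to above, the sketch below wraps an open vision-language model with LoRA adapters using PyTorch and Hugging Face PEFT. The model checkpoint, target modules, and hyperparameters are illustrative assumptions only, not a prescribed setup for the internship.

```python
# Minimal sketch: attaching LoRA adapters to an open VLM so that only a small
# fraction of parameters is trained when adapting person-centric features.
# Checkpoint, target modules, and hyperparameters are assumptions for illustration.
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration
from peft import LoraConfig, get_peft_model

model_id = "llava-hf/llava-1.5-7b-hf"  # example open VLM checkpoint (assumption)
processor = AutoProcessor.from_pretrained(model_id)  # tokenizer + image preprocessing
model = LlavaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.float16)

# LoRA injects low-rank adapters into selected attention projections of the
# language backbone; the base weights stay frozen during fine-tuning.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # which projections to adapt (assumption)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of the full model
```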
What we offer
- We foster a collaborative environment dedicated to ambitious, multidisciplinary projects that translate advanced research into impactful, real-world solutions, supported by 30+ years of experience in AI and related fields.
- Flexible work/life balance.
- We are an equal opportunity employer that hires based on skills, experience, and merit. We foster an inclusive and diverse workplace where all qualified candidates are considered fairly, regardless of background.
- We’re based in Meylan, close to Grenoble, a city that offers the perfect balance of urban life, cutting-edge research and technology, and spectacular mountain landscapes that provide countless opportunities to relax, recharge, and enjoy the outdoors.
All applications will be carefully considered, even if not all required skills are met. We value diverse backgrounds and the potential of each candidate, and we offer training to support the development of necessary skills.
NAVER LABS, co-located in Korea and France, is the organization dedicated to preparing NAVER’s future. Scientists at NAVER LABS Europe are empowered to pursue long-term research problems that, if successful, can have significant impact and transform NAVER. We take our ideas as far as research can to create the best technology of its kind. Active participation in the academic community and collaborations with world-class public research groups are, among others, important ways to achieve these goals. Teamwork, focus and persistence are important values for us.
When applying for this position online, please don't forget to upload your CV and cover letter. Incomplete applications will not be considered.
NAVER LABS Europe is subject to French law, which requires organisations to state that every job/internship is open to both women and men. None of our jobs/internships are gender specific.

References
- LLaVA: Large Language and Vision Assistant, Liu et al., NeurIPS'23
- Qwen2.5-VL, Bai et al., arXiv
- NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions, Xiao et al., CVPR'21
- Social-IQ: A Question Answering Benchmark for Artificial Social Intelligence, Zadeh et al., CVPR'19
- Social-IQ 2.0 Challenge, ICCV'23
- STAR: A Benchmark for Situated Reasoning in Real-World Videos, Wu et al., NeurIPS'21
- MoReVQA: Exploring Modular Reasoning Models for Video Question Answering, Min et al., CVPR'24
Ref: e86391dc-39c7-4536-9196-4b8aaaa3d5aa
This position has been filled.