
Research Scientist Intern, Multimodal Contextual AI (PhD)

Meta • 📍 Sunnyvale, CA; Redmond, WA
📅 2025-10-31 💼 Internship

About the Role

At Reality Labs, our team brings novel experiences to life on Meta's AR devices. We are seeking emerging scientists and researchers with a strong interest in real-time embedded software development and hardware acceleration, applied to enabling on-device contextual AI within the performance, power, and form-factor constraints of AR glasses. Within the broad domain of contextual AI, we currently focus on three key areas: computer vision, audio interaction, and large language models. As a Research Scientist intern, you will help us advance the state of the art in one or more of these areas. Our internships are twelve (12) to twenty-four (24) weeks long, with various start dates throughout the year.

Qualifications

- Currently has, or is in the process of obtaining, a PhD in Computer Science, Electrical Engineering, or a related field
- Programming and simulation experience with languages such as C/C++ and Python
- Experience with computer architecture and HW/SW co-design and co-optimization
- Must obtain work authorization in the country of employment at the time of hire, and maintain ongoing work authorization during employment
- Experience in HW+SW system prototyping using embedded device prototypes and FPGAs
- Experience in operating systems, drivers, and embedded firmware development
- Experience working in a machine learning framework, e.g., PyTorch
- Experience working and communicating cross-functionally in a team environment
- Demonstrated embedded software experience via an internship, work experience, coding competitions, or widely used contributions to open-source repositories (e.g., GitHub)
- Intent to return to the degree program after completion of the internship/co-op
- Proven track record of achieving significant results, as demonstrated by grants, fellowships, patents, and first-authored publications at leading workshops or conferences such as CVPR, ECCV/ICCV, SIGGRAPH, or similar
Apply Now 🚀
