Job Details:
Job Description:
The AI Safety research intern is responsible for researching, developing, implementing, and testing new AI Safety methods (detection and mitigation), using cyber safety methods and tools on workloads in research and/or production environments.
Takes initiative in complex, multidisciplinary performance research projects. The researcher's duties extend to building internal tools that automatically and seamlessly find and diagnose vulnerabilities across a wide range of use cases.
Presents extensive knowledge of AI Safety methods, Large Language Models, Multimodal models, and more. Demonstrates the ability to learn and research new domains, navigating ambiguities and obstacles to independently solve complex performance issues.
Shares expert insights and new learnings and contributes intellectual property to internal, external and open source communities. Uses software development processes and methodologies while building complex software systems.
Ensures that research output does not stop at finding specific optimizations for one client, but also includes recommendations for generalizing them for wide use by additional clients.
Shares knowledge with, and educates, other team members and coworkers.
Demonstrates effective communication with executives and productive collaboration with peer researchers.
Intelligent Systems Research Lab (ISR) is focused on multi-disciplinary research spanning ethnography, design, HCI, and AI to create Human/AI collaborative systems that can amplify human potential and create sustainable and transparent AI solutions. We utilize multi-modal signals (e.g. vision, audio, speech, language, RF) and AI to infer and predict human state and actions and enable physically situated dialog and interactions. We conduct this research in the context of vertical domains including manufacturing, education, enterprise, and assistive computing for people with disabilities.
Given the fast progress in AI and large language models and their widespread adoption in academia, industry, and other areas, ensuring the safety of these models is extremely important. ISR is looking for an intern passionate about the area of AI Safety.
The work will involve building a human-in-the-loop red-teaming framework for model evaluation and understanding. The work will also include exploring how to make model outputs more understandable for an end user or developer incorporating them into a downstream task. In this context, the intern will explore techniques that enable human experts to interact with the model evaluation framework, and will develop novel red-teaming techniques and mitigations for harms in LLMs such as toxic responses and bias. This research will support Responsible AI processes within Intel and enable publication of the findings in both internal and external venues.
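To illustrate the kind of system described above, here is a minimal sketch of a human-in-the-loop red-teaming loop: adversarial prompts are sent to the model under evaluation, responses are screened by an automatic safety check, and flagged outputs are routed to a queue for human expert review. All names here (RedTeamHarness, query_model, is_harmful, review_queue) are hypothetical placeholders for illustration, not part of any Intel framework.

```python
# Hypothetical sketch of a human-in-the-loop red-teaming harness.
# None of these names correspond to a real internal tool.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Finding:
    prompt: str
    response: str
    flagged: bool


@dataclass
class RedTeamHarness:
    query_model: Callable[[str], str]   # the model under evaluation
    is_harmful: Callable[[str], bool]   # automatic safety classifier
    review_queue: List[Finding] = field(default_factory=list)

    def run(self, prompts: List[str]) -> List[Finding]:
        findings = []
        for prompt in prompts:
            response = self.query_model(prompt)
            flagged = self.is_harmful(response)
            finding = Finding(prompt, response, flagged)
            findings.append(finding)
            if flagged:
                # Route automatically flagged outputs to a human expert
                self.review_queue.append(finding)
        return findings


# Toy stand-ins for the model and classifier, for demonstration only
harness = RedTeamHarness(
    query_model=lambda p: "UNSAFE reply" if "attack" in p else "safe reply",
    is_harmful=lambda r: "UNSAFE" in r,
)
results = harness.run(["say hello", "describe an attack"])
```

The human review queue is the point where expert feedback would feed back into both evaluation (confirming or rejecting flags) and mitigation (informing fine-tuning or filtering).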
Qualifications:
Enrolled in a PhD program in artificial intelligence, machine learning, computer science, statistics, electrical engineering, or a relevant engineering/science discipline
1 year of research experience in machine learning, deep learning, or natural language processing
1 year of experience in the development of algorithms in high-level languages such as Python and C++
Strong verbal and written communication skills
Prior research in Vision-Language Systems and RAG pipelines
Prior research in Large Language Models and understanding of optimization and model tuning techniques
Preferred Qualifications:
Publications in top-tier conferences and journals in machine learning related fields (NeurIPS, ICML, AAAI, ACL, etc.)
2+ years of experience with deep learning frameworks (TensorFlow, PyTorch, MXNet, etc.)
Requirements listed would be obtained through a combination of industry-relevant job experience, internship experience, and/or schoolwork/classes/research.