In this high-impact role, you will operate at the forefront of responsible AI development, working closely with research scientists and engineers in AI Research, as well as cross-functional partners in Responsible AI, Agentforce, and other teams across Salesforce.
- You will lead the design and implementation of Trust Layer models, as well as RAI tools and frameworks that ensure our AI systems are fair, accountable, and transparent
- Using advanced machine learning techniques, you’ll generate actionable insights, drive research excellence, and support responsible AI practices across the full development lifecycle—from experimentation to production deployment
- Build state-of-the-art LLM safeguards for enterprise applications
- Analyze data and models to identify potential trust and safety issues; define testing protocols for different data types and model architectures; recommend mitigation strategies, tooling investments, and safe thresholds for deployment
- Define technical goals and guide research/engineering teams on responsible AI best practices
- Offer development support and thought leadership on critical ethical tradeoffs in algorithmic design
- Contribute to the development and adoption of libraries and tools that support evaluation, testing, and mitigation of risks
- Build features that enhance explainability and user trust in model outputs
- Collaborate with counterparts at peer organizations to advance the state of responsible AI development
- Build trusted relationships across all levels, both internally and externally