Freelance Coding QA Analyst

IT and Technology · Remote · Testing · Other

Job Description

At Toloka AI we create data that powers leading GenAI models and innovations. We work with frontier labs, big tech, renowned AI startups, enterprises and non-profit research organizations worldwide. We use a combination of Experts + Crowd + Tech Platform to teach AI models to reason and evaluate their efficacy and safety. We have experts in more than 50 different domains—from doctors and lawyers to physicists and engineers—and boast one of the most diverse global crowds, representing over 100 countries and speaking 40+ languages. We are a well-funded startup with an enviable portfolio of clients including Anthropic, Amazon, Microsoft, poolside, Recraft, and Shopify.

Recently, we secured a strategic investment led by Bezos Expeditions, with participation from Mikhail Parakhin, CTO of Shopify and board advisor to leading GenAI companies, who now serves as our Chairman of the Board. Our remote-first team is distributed around the world: the USA, the UK, the Netherlands, the Czech Republic, Serbia, and more. We are headquartered in Amsterdam.

About the Role

We are seeking a detail-oriented Freelance Coding QA Analyst to own the testing and onboarding of new and existing partners via coding pilots, and to manage the quality of developer outputs and their adherence to project standards in technical data annotation projects. You will be the critical link between our vendor onboarding team, in-house coding experts, and external developers, ensuring pilot and production projects run smoothly and meet our quality standards. You will provide actionable insights for optimizing annotation pipelines, improving data outputs, and informing partnership decisions. This role focuses on process ownership and quality coordination, not people management.

Key Responsibilities

  • Coordinate quality standards, guideline updates, and feedback loops across QAs, experts, and vendors to ensure consistent interpretation and execution.
  • Coordinate coding pilot projects for new potential partners.
  • Design unique pilot pipelines to test potential partners.
  • Assign coding tasks, ensuring alignment with the agreed languages (primarily Python, Java, or JavaScript).
  • Create and manage collaboration platform spaces for partners during pilots and handle task-related queries.
  • Ensure task designs allow for comparison of model-generated answers and prompt engineering assessment (when needed).
  • Prepare reports and recommendations to inform vendor and partnership decisions.
  • Serve as the primary point of contact for partner and expert queries during pilot and production phases.

Required Skills & Experience

  • Experience in Software Quality Assurance or technical vendor assessments.
  • Experience working with developer or documentation teams, high-volume review queues, and structured quality monitoring systems.
  • Ability to deliver precise, objective feedback on code quality and logic to help developers align with project requirements.
  • Strong understanding of core programming languages (Python, Java, JavaScript) and familiarity with additional or less common languages.
  • Ability to create and manage structured testing projects and pipelines.
  • Excellent communication skills for vendor interaction and cross-functional coordination.
  • Comfortable using collaboration platforms and organizing technical documentation.
  • Analytical mindset for assessing developer outputs and making recommendations.

Preferred Qualifications

  • Prior experience working with outsourcing teams in a technical capacity.
  • Familiarity with prompt engineering and evaluating AI-generated code outputs.

What We Offer

  • Opportunity to work with cutting-edge AI technology and build your portfolio
  • Flexible working arrangements with ability to determine your own schedule
  • Long-term collaboration potential with a growing global tech company
  • Exposure to international markets and diverse audience segments

Published: 12.01.2026