About The Role
What if your curiosity and creative thinking could make AI safer for everyone? We're looking for AI Red Team Testers to challenge cutting-edge AI models — probing for weaknesses, exposing blind spots, and helping research teams build systems that are more reliable, fair, and safe.
This is a fully remote, flexible contract role open to sharp, imaginative thinkers from any background. No cybersecurity or technical experience is required — just a knack for thinking differently and asking uncomfortable questions.
- Organization: Alignerr
- Type: Hourly Contract
- Location: Remote
- Commitment: 10–40 hours/week
What You'll Do
- Design creative prompts, scenarios, and conversational strategies to probe AI model weaknesses
- Attempt to elicit incorrect, unsafe, biased, or otherwise problematic outputs from AI systems
- Document failure modes with clear, reproducible steps and structured observations
- Rate the severity and potential impact of discovered issues
- Collaborate asynchronously with AI safety and research teams to improve model quality
- Explore edge cases across a wide range of topics, tones, and contexts
Who You Are
- Naturally curious — you enjoy puzzles, lateral thinking, and finding the unexpected
- Comfortable thinking adversarially and approaching problems from unconventional angles
- Strong written communicator who can explain findings clearly and precisely
- Systematic and detail-oriented in documenting your work
- Self-motivated and reliable when working independently
- No hacking, security, or AI background required — curiosity and rigor matter more
Nice to Have
- Experience in creative writing, philosophy, journalism, or critical analysis
- Familiarity with AI tools, chatbots, or language models as an end user
- Background in ethics, psychology, or social science
- Prior experience in quality assurance or structured testing
Why Join Us
- Work on real AI safety projects alongside leading research labs
- Fully remote and flexible — work on your own schedule, from anywhere
- Freelance autonomy with meaningful, intellectually stimulating task-based work
- Contribute to AI development that has a genuine impact on how safely and responsibly these systems operate
- Potential for ongoing work and contract extension as new projects launch