Job Description
Overview: We are looking for evaluators proficient in Gujarati to review and compare AI-generated responses. The role focuses on identifying toxic or harmful content in both native script and transliterated text, and on assessing model performance across multiple datasets.
Rate per hour: INR 400
Minimum daily commitment: 5-6 hours
Key Responsibilities:
- Evaluate AI model outputs in Gujarati
- Identify and flag toxic, harmful, or hate-based content, including subtle or context-dependent cases
- Compare model responses and provide performance assessments based on predefined criteria
- Classify the type and severity of toxicity, e.g. hate speech, harassment, abusive language
- Provide brief explanations for flagged items where required
- Ensure consistency, accuracy, and adherence to project guidelines
<...