# roberta-large-synqa-ext

**Repository Path**: modelee/roberta-large-synqa-ext

## Basic Information

- **Project Name**: roberta-large-synqa-ext
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2023-05-23
- **Last Updated**: 2024-04-02

## Categories & Tags

**Categories**: llm
**Tags**: None

## README

---
language:
- en
tags:
- question-answering
license: apache-2.0
datasets:
- adversarial_qa
- mbartolo/synQA
- squad
metrics:
- exact_match
- f1
model-index:
- name: mbartolo/roberta-large-synqa-ext
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: adversarial_qa
      type: adversarial_qa
      config: adversarialQA
      split: validation
    metrics:
    - name: Exact Match
      type: exact_match
      value: 53.2
      verified: true
    - name: F1
      type: f1
      value: 64.6266
      verified: true
---

# Model Overview

This is a RoBERTa-Large QA model trained from https://huggingface.co/roberta-large in two stages. First, it is trained on synthetic adversarial data generated with a BART-Large question generator over Wikipedia passages drawn from SQuAD as well as Wikipedia passages external to SQuAD. Second, it is fine-tuned on SQuAD and AdversarialQA (https://arxiv.org/abs/2002.00293).

# Data

Training data: SQuAD + AdversarialQA
Evaluation data: SQuAD + AdversarialQA

# Training Process

Approximately one training epoch on the synthetic data, followed by two training epochs on the manually-curated data.

# Additional Information

Please refer to https://arxiv.org/abs/2104.08678 for full details.
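
# Usage

Since this is a standard extractive QA checkpoint, it can be loaded with the `transformers` question-answering pipeline. The sketch below assumes the model is available on the Hugging Face Hub under the identifier `mbartolo/roberta-large-synqa-ext` (the name given in the model-index above); the context and question are illustrative only.

```python
# Minimal sketch: extractive question answering with the transformers pipeline.
# Assumes the checkpoint is published as "mbartolo/roberta-large-synqa-ext";
# adjust the identifier if you are loading from a local path or a mirror.
from transformers import pipeline

qa = pipeline("question-answering", model="mbartolo/roberta-large-synqa-ext")

context = (
    "The Normans were the people who in the 10th and 11th centuries gave their "
    "name to Normandy, a region in France."
)
question = "In which country is Normandy located?"

result = qa(question=question, context=context)
print(result["answer"], result["score"])  # predicted answer span and confidence
```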