Research Practicum in Artificial Intelligence
Jinho D. Choi
Experiments

5.4. Homework

Individual Writing

  • Write the Experiments section in your individual Overleaf project.

  • Recommended length: 100-200 lines (including tables and figures).

  • Submit the PDF version of your current draft, covering all sections up to and including Experiments.

Rubric

  • Data Section: are the sources and choices of the datasets reasonably explained? (1 point)

  • Data Split: are the statistics of the training/development/evaluation splits (or cross-validation folds) clearly presented? (2 points)

  • Model Descriptions: are the models designed to soundly demonstrate the differences among the methods? (2 points)

  • Evaluation Metrics: are the evaluation metrics clearly explained? (2 points)

  • Experimental Settings: are the settings described in a way that readers can replicate the experiments? (1 point)

  • Model Development: is the progress of model development clearly depicted? (1 point)

  • Result Tables: are the experimental results clearly summarized in tables? (2 points)

  • Result Interpretations: are the key findings from the results convincingly interpreted? (2 points)
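For the Data Split and Result Tables criteria, a booktabs-style table is a common way to present split statistics in an Overleaf project. The sketch below is illustrative only: the dataset names and counts are placeholders, not values from any real dataset.

```latex
% Requires \usepackage{booktabs} in the preamble.
\begin{table}[htbp]
  \centering
  \begin{tabular}{lrrr}
    \toprule
    Dataset & Train & Dev & Test \\
    \midrule
    Dataset A & 10{,}000 & 1{,}000 & 1{,}000 \\ % placeholder counts
    Dataset B &  8{,}500 &   950   &   950   \\ % placeholder counts
    \bottomrule
  \end{tabular}
  \caption{Number of instances in each data split (illustrative values).}
  \label{tab:data-splits}
\end{table}
```

Referencing the table in the text with \ref{tab:data-splits} keeps the numbering consistent, as described in the Labels and Tables pages of the LaTeX Guidelines.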

