Research Practicum in Artificial Intelligence
Jinho D. Choi

HW7: Experiments

Writing

  • Write the Experiments section in your team's Overleaf project.

  • Recommended length: 100–200 lines (including tables and figures).

  • Submit the PDF version of your current draft up to the Experiments section.

Rubric

  • Data Section (1 point)

    • Are the sources and choices of the datasets clearly explained and justified?

    • Is the data preprocessing pipeline well-documented (if any)?

    • Are the dataset characteristics adequately described?

  • Data Split (1 point)

    • Is the division of data into training/development/evaluation sets clearly specified?

    • Are the statistics for each data split comprehensively reported?

    • If using cross-validation, is the procedure properly explained?
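When the split procedure is reported, it helps to show that it is reproducible. Below is a minimal sketch of a seeded 80/10/10 random split; the function name and ratios are illustrative, not a required interface:

```python
import random

def split_data(examples, dev_ratio=0.1, test_ratio=0.1, seed=42):
    """Shuffle with a fixed seed and split into train/dev/test sets."""
    rng = random.Random(seed)      # fixed seed -> the split can be recreated
    examples = list(examples)
    rng.shuffle(examples)
    n = len(examples)
    n_test = int(n * test_ratio)
    n_dev = int(n * dev_ratio)
    test = examples[:n_test]
    dev = examples[n_test:n_test + n_dev]
    train = examples[n_test + n_dev:]
    return train, dev, test

train, dev, test = split_data(range(1000))
print(len(train), len(dev), len(test))  # 800 100 100
```

Reporting the seed and ratios alongside the split statistics lets readers regenerate exactly the same partitions.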

  • Model Descriptions (2 points)

    • Are the models described with sufficient references to the approach section?

    • Are the differences between methods clearly distinguished?

    • Is there a clear connection between model design choices and research objectives?

  • Evaluation Metrics (2 points)

    • Are all evaluation metrics clearly defined with proper mathematical notation?

    • Is the choice of each metric well-justified?

    • Are the limitations of the chosen metrics discussed (if any)?
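As an example of proper mathematical notation, classification metrics such as precision, recall, and F1 are commonly written as follows, where TP, FP, and FN denote true positives, false positives, and false negatives (adapt the symbols to your own task):

```latex
\[
  P = \frac{TP}{TP + FP}, \qquad
  R = \frac{TP}{TP + FN}, \qquad
  F_1 = \frac{2 \cdot P \cdot R}{P + R}
\]
```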

  • Experimental Settings (1 point)

    • Are the hyperparameters and implementation details fully specified?

    • Is the hardware/software environment clearly described?

    • Are the experiments documented in a way that enables replication?
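Replication starts with fixed random seeds and fully logged settings. A minimal sketch is shown below; the helper name is illustrative, and frameworks such as NumPy or PyTorch require their own additional seeding calls:

```python
import os
import random

def set_seed(seed: int = 42):
    """Fix random seeds so runs can be replicated.
    Extend with numpy/torch seeding if those libraries are used."""
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)

# Record the seed (and all hyperparameters) in the paper and in your logs.
set_seed(42)
```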

  • Model Development (1 point)

    • Is the model development process clearly documented?

    • Are the key decisions and modifications during development explained?

    • Are the challenges encountered and their solutions discussed?

  • Result Tables (2 points)

    • Are the experimental results presented in well-formatted, readable tables?

    • Are all tables properly labeled with units and descriptions?

    • Are comparisons to baselines and existing state-of-the-art methods included and clearly marked?
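A compact example of such a table in LaTeX is given below, using the booktabs package covered in the LaTeX Guidelines; all model names and numbers are placeholders, not real results:

```latex
\begin{table}[t]
  \centering
  \begin{tabular}{lcc}
    \toprule
    Model                     & Accuracy (\%) & F1 (\%)       \\
    \midrule
    Baseline (majority class) & 61.2          & 48.5          \\
    Prior state of the art    & 78.4          & 76.1          \\
    Ours                      & \textbf{80.9} & \textbf{79.3} \\
    \bottomrule
  \end{tabular}
  \caption{Placeholder results on the evaluation set; best scores are in bold.}
  \label{tab:results}
\end{table}
```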

  • Result Interpretations (2 points)

    • Are the key findings from the results clearly identified and explained?

    • Are the strengths and weaknesses of the results discussed?

    • Are the implications of the results connected to the research questions?
