The Best Effort System to Score Subjective Answers of Tests in a Large Group

  • Authors

    • Jae-Young Lee

  • DOI

    https://doi.org/10.14419/ijet.v7i3.33.21181
  • Keywords

    automatic essay scoring, automatic scoring system, content-based scoring, Internet-based scoring system, short answer scoring, subjective-type evaluation
  • Abstract

    Subjective tests can improve the quality of education by measuring cognitive abilities, but their biggest drawback is a lack of fairness, consistency, and accuracy. To address this drawback, we propose a best effort system that first scores the subjective answers matching a correct answer table compiled by committee members, and then classifies the remaining subjective answers into groups of similar answers so that the latest automatic scoring systems and human graders can assign a reasonable credit to each group of similar answers.

    In the scoring system, the groups of similar answers are evaluated by human raters and by the latest automatic scoring systems, such as syntax tree comparison grading and syntax and semantic tree-oriented grading. All the scores for each similar answer are summed, and the average for each similar answer is stored in a similar answer table. Finally, the system grades each applicant's answers using the correct answer table and the similar answer table. This paper proposes an algorithm for the best effort scoring system that incorporates the latest automatic scoring systems in order to be as fair, consistent, and accurate as possible.
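    The workflow the abstract outlines (exact-match scoring against a correct answer table, grouping the remaining answers by similarity, averaging rater and automatic scores per group, then grading from both tables) can be summarized in code. Below is a minimal sketch in Python, assuming a simple string-similarity measure (difflib.SequenceMatcher) as a stand-in for the syntax tree and semantic tree comparison graders; the table contents, threshold, and rater interface are illustrative assumptions, not the paper's implementation.

```python
from statistics import mean
from difflib import SequenceMatcher

# Correct answer table compiled by committee members (hypothetical contents).
CORRECT_ANSWERS = {
    "photosynthesis converts light energy into chemical energy": 10.0,
}

SIMILARITY_THRESHOLD = 0.8  # assumed cutoff for grouping similar answers


def similarity(a: str, b: str) -> float:
    """Crude stand-in for syntax/semantic tree comparison grading."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def group_similar(answers):
    """Classify answers absent from the correct answer table into groups of similar answers."""
    groups = []
    for ans in answers:
        for group in groups:
            if similarity(ans, group[0]) >= SIMILARITY_THRESHOLD:
                group.append(ans)
                break
        else:
            groups.append([ans])  # no similar group found: start a new one
    return groups


def build_similar_answer_table(groups, raters):
    """Collect the score each rater/automatic system assigns to a group and store the average."""
    table = {}
    for group in groups:
        avg = mean(rater(group[0]) for rater in raters)
        for ans in group:
            table[ans] = avg
    return table


def grade(answer, similar_table):
    """Grade an applicant's answer from the correct answer table first, then the similar answer table."""
    if answer in CORRECT_ANSWERS:
        return CORRECT_ANSWERS[answer]
    return similar_table.get(answer, 0.0)


# Example: one human-rater stand-in and one automatic-scorer stand-in grade the groups.
raters = [
    lambda ans: 6.0,
    lambda ans: 10.0 * similarity(ans, next(iter(CORRECT_ANSWERS))),
]
groups = group_similar([
    "plants turn light into chemical energy",
    "photosynthesis makes oxygen",
])
similar_table = build_similar_answer_table(groups, raters)
print(grade("plants turn light into chemical energy", similar_table))
```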

  • How to Cite

    Lee, J.-Y. (2018). The Best Effort System to Score Subjective Answers of Tests in a Large Group. International Journal of Engineering & Technology, 7(3.33), 263-268. https://doi.org/10.14419/ijet.v7i3.33.21181