In contemporary higher education, assessing students’ programming performance requires going beyond final results to the cognitive and algorithmic processes involved in problem solving. This study aims to develop a pedagogically grounded assessment model for evaluating students’ programming problem-solving performance through measurable indicators. Its objectives are to identify key problem-solving indicators in programming, link them to assessment criteria, and experimentally validate the effectiveness of the proposed model.
The research methodology is based on a pedagogical experiment and quantitative data analysis. The experiment involved 94 undergraduate students studying Python programming. Students were assigned programming tasks differentiated by difficulty level (easy, medium, and hard), and their solutions were evaluated using seven pedagogical indicators: problem understanding, algorithmic thinking, correctness, efficiency, code quality, handling of edge cases, and explanation ability. In addition, a difficulty coefficient was introduced to account for task complexity in the final assessment.
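The scoring scheme described above can be sketched as follows. The seven indicator names and the three difficulty levels come from the study; the 0–10 rating scale, equal indicator weights, and the specific coefficient values are illustrative assumptions, since the abstract does not report them.

```python
# The seven pedagogical indicators named in the study. The 0-10 scale,
# equal weighting, and coefficient values below are illustrative
# assumptions, not figures reported by the authors.
INDICATORS = [
    "problem_understanding",
    "algorithmic_thinking",
    "correctness",
    "efficiency",
    "code_quality",
    "edge_case_handling",
    "explanation_ability",
]

# Hypothetical difficulty coefficients scaling the final score by task level.
DIFFICULTY_COEFFICIENT = {"easy": 1.0, "medium": 1.25, "hard": 1.5}

def task_score(ratings: dict[str, float], difficulty: str) -> float:
    """Average the seven indicator ratings (0-10) and scale by difficulty."""
    missing = set(INDICATORS) - ratings.keys()
    if missing:
        raise ValueError(f"missing indicators: {missing}")
    mean = sum(ratings[i] for i in INDICATORS) / len(INDICATORS)
    return mean * DIFFICULTY_COEFFICIENT[difficulty]

# Example: a medium task rated 8.0 on every indicator.
ratings = {name: 8.0 for name in INDICATORS}
print(task_score(ratings, "medium"))  # → 10.0
```

Under this sketch a hard task contributes more to the final assessment than an easy one at the same indicator ratings, which is one plausible way to realize the difficulty coefficient the authors introduce.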
The results indicate that correct-solution rates decline as task difficulty increases, particularly on higher-level tasks requiring efficient algorithm design, edge-case handling, and solution explanation. The findings demonstrate that the proposed assessment model enables a comprehensive evaluation of students’ programming problem-solving performance by capturing both process-oriented and outcome-based aspects. Compared to traditional assessment approaches, the model provides deeper insight into students’ strengths and weaknesses and supports a more balanced and fair evaluation. The study contributes to the development of pedagogically informed assessment practices in programming education and offers a foundation for further research on adaptive and process-based evaluation models.
https://orcid.org/0000-0003-3258-7558