Using information and communication technologies for the assessment of a large number of students

  • Kasym Baryktabasov Computer Engineering Department, Kyrgyz-Turkish Manas University, Bishkek, Kyrgyz Republic
  • Chinara Jumabaeva Computer Engineering Department, Kyrgyz-Turkish Manas University, Bishkek, Kyrgyz Republic
  • Ulan Brimkulov Computer Engineering Department, Kyrgyz-Turkish Manas University, Bishkek, Kyrgyz Republic
Keywords: computer-assisted assessment, computer-based assessment, e-assessment, learning assessment, education, information and communication technologies


Many examinations with thousands of participating students are organized worldwide every year. Usually, these students sit the exams simultaneously and answer almost the same set of questions. This method of learning assessment requires tremendous effort and resources to prepare the venues, print question books and organize the whole process. Additional restrictions and obstacles may arise under conditions similar to those of the COVID-19 pandemic. One way to avoid requiring all students to take an exam during the same period of time is to use computer-assisted assessment with random item selection, so that every student receives an individual set of questions. The objective of this study is to investigate students’ perceptions of random item selection from item banks in order to apply this method in large-scale assessments. An analysis of the responses of more than 1000 surveyed students revealed that most of them agree or completely agree with the proposed method of assessment. Students from natural science departments were more receptive to this method of assessment than students from other groups. Based on these findings, the authors conclude that higher-education institutions could benefit from implementing the assessment method described above.
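The core idea of the abstract, random item selection from an item bank so that each student receives an individual question set, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the bank structure, the per-topic quota and the seeding scheme are all assumptions. Seeding a dedicated generator with the exam and student identifiers makes each student's paper reproducible for later review or regrading.

```python
import random

def select_items(item_bank, student_id, items_per_topic, exam_seed=2023):
    """Draw an individual question set for one student.

    item_bank: dict mapping topic -> list of question identifiers.
    items_per_topic: number of questions drawn from each topic, so every
    student answers a paper with the same topic coverage but (almost
    certainly) a different combination of items.
    """
    # A deterministic per-student seed makes the selection reproducible.
    rng = random.Random(f"{exam_seed}:{student_id}")
    exam = []
    # Sort topics so the draw order does not depend on dict insertion order.
    for topic in sorted(item_bank):
        exam.extend(rng.sample(item_bank[topic], items_per_topic))
    return exam

# Hypothetical two-topic bank with 20 items per topic.
bank = {
    "algebra": [f"A{i}" for i in range(20)],
    "geometry": [f"G{i}" for i in range(20)],
}
paper = select_items(bank, student_id=1042, items_per_topic=5)
```

With 20 items per topic and 5 drawn from each, two students share a full paper only with negligible probability, which is what removes the need for all students to sit the exam simultaneously.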

How to Cite
Baryktabasov K., Jumabaeva C., & Brimkulov U. (2023). Using information and communication technologies for the assessment of a large number of students. Research in Learning Technology, 31.