Browse by author "Ozbasi, Durmus"
Now showing 1 - 3 of 3
Item: Development of Computer Literacy Test as Computerized Adaptive Testing (Assoc Measurement & Evaluation Education & Psychology, 2015) Ozbasi, Durmus; Demirtasli, Nukhet

The purpose of this study is to investigate the applicability of computerized adaptive testing (CAT) for the Information and Communication Technology exemption exam given each year to first-year students at every faculty of Ankara University. The research was carried out in two basic stages. In the first stage, the researchers worked with 1366 university students to obtain data for the CAT application and to pilot the prepared items. In the second stage, a paper-and-pencil test was given to 142 first-year university students alongside a live CAT administration. A computer literacy test was used as the data collection instrument. The study also examined whether the collected data met the assumptions of Item Response Theory (IRT). Accordingly, a live CAT application was carried out with a 136-item pool found to fit the three-parameter logistic model. According to the findings, the highest reliability estimate in the simulated CAT application was obtained under the fixed-length test termination condition (30 items). In terms of the number of items administered, the fewest items were used under the standard error termination condition SE < 0.50. In the CAT administration, ability estimates based on Maximum Likelihood Estimation (MLE) were more reliable at extreme ability values than those obtained from the paper-and-pencil test, and CAT also yielded lower standard error estimates. The average reliability obtained from the live CAT application was higher than that of the paper-and-pencil test.
According to the findings of this study, MLE was more reliable when the SE < 0.30 termination rule was applied, whereas ability estimates based on the Expected A Posteriori (EAP) method were more reliable under the SE < 0.50 and fixed-length (30-item) termination rules. Furthermore, the amount of test information obtained in the live CAT application was significantly higher than that of the paper-and-pencil test.

Item: Effects of Content Balancing and Item Selection Method on Ability Estimation in Computerized Adaptive Tests (Ani Yayincilik, 2017) Sahin, Alper; Ozbasi, Durmus

Purpose: This study aims to reveal the effects of content balancing and item selection method on ability estimation in computerized adaptive tests by comparing Fisher's maximum information (FMI) and likelihood weighted information (LWI) methods. Research Methods: Four groups of examinees (250, 500, 750, 1000) and a bank of 500 items with 10 different content domains were generated through Monte Carlo simulations. Examinee ability was estimated by fixing all settings except for the item selection method. True and estimated ability (theta) values were compared by dividing examinees into six subgroups, and the average number of items used was also compared. Findings: When LWI was used, the correlations decreased steadily as examinee theta level increased in all examinee groups. FMI showed the same trend with 250 and 500 examinees. With 750 examinees, correlations also decreased as theta level increased, but they were somewhat more stable with FMI. With 1000 examinees, FMI was not successful in estimating examinee theta accurately beyond theta subgroup 4. Moreover, theta estimates had less error when FMI was used than with LWI. The figures for the average number of items used indicated that LWI used fewer items in subgroups 1, 2, and 3, while FMI used fewer items in subgroups 4, 5, and 6.
Implications for Research and Practice: The findings indicated that, when content balancing is in use, LWI is more suitable for estimating examinee theta between -3 and 0, while FMI is more stable when examinee theta is above 0. An item selection algorithm combining these two item selection methods is recommended. (C) 2017 Ani Publishing Ltd. All rights reserved.

Item: Using Rank-order Judgments Scaling to Determine Students' Evaluation Preferences (Ani Yayincilik, 2019) Ozbasi, Durmus

Purpose: This study sought to determine university students' evaluation preferences and then scale them based on their rank-order judgments. Research Methods: The survey model was used in this study. The study was conducted with a total of 376 university students of varying grade levels enrolled in different departments of the faculties of education of two separate state universities in Turkey during the 2017-2018 academic year. Data were collected using a 13-item survey, designed specifically for this study, that solicited students' evaluation preferences for measuring their academic performance. Students first ranked the evaluation types from most to least preferred and then assigned a single number to each stimulus. The data obtained were then scaled based on rank-order judgments. Findings: Students most preferred to be assessed using oral exams and least preferred tests composed of multiple-choice questions. Implications for Research and Practice: This study was restricted to university students enrolled in the faculties of education of two state universities in Turkey. By conducting a similar study with students enrolled in other faculties in the same or different higher education institutions, results and potential differences between faculties could be compared. (C) 2019 Ani Publishing Ltd. All rights reserved.
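The first two abstracts share a common CAT pipeline: a three-parameter logistic (3PL) item response model, item selection by Fisher's maximum information, MLE ability estimation, and a termination rule such as SE < 0.50 or a fixed length of 30 items. As an illustrative sketch only (the grid-search MLE, the item bank format, and the response simulation below are assumptions, not the authors' actual code or settings), that loop looks roughly like this:

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL IRT model."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b, c):
    """Fisher information of a 3PL item at ability level theta."""
    p = p_3pl(theta, a, b, c)
    return (a ** 2) * ((1.0 - p) / p) * ((p - c) / (1.0 - c)) ** 2

def cat_simulation(true_theta, bank, se_stop=0.50, max_items=30, rng=None):
    """Administer items by maximum Fisher information until SE < se_stop
    or max_items is reached; theta is re-estimated by a simple grid MLE.
    bank is a list of (a, b, c) item parameter tuples (hypothetical)."""
    rng = rng or np.random.default_rng(0)
    grid = np.linspace(-4, 4, 161)
    n_max = min(max_items, len(bank))
    used, responses = [], []
    theta_hat, se = 0.0, np.inf
    while len(used) < n_max and se >= se_stop:
        # pick the unused item with maximum information at the current theta
        infos = [(-np.inf if i in used else fisher_info(theta_hat, *bank[i]))
                 for i in range(len(bank))]
        item = int(np.argmax(infos))
        used.append(item)
        # simulate the examinee's response from the true theta
        responses.append(bool(rng.random() < p_3pl(true_theta, *bank[item])))
        # grid-search MLE of theta given all responses so far
        ll = np.zeros_like(grid)
        for i, u in zip(used, responses):
            p = p_3pl(grid, *bank[i])
            ll += np.log(p) if u else np.log(1.0 - p)
        theta_hat = float(grid[int(np.argmax(ll))])
        # SE is the inverse square root of the accumulated test information
        total_info = sum(fisher_info(theta_hat, *bank[i]) for i in used)
        se = 1.0 / np.sqrt(total_info)
    return theta_hat, se, len(used)
```

Under the SE < 0.50 rule the loop stops as soon as enough information has accumulated, which is why that condition tends to use the fewest items; the fixed-length (30-item) condition keeps administering items and so reaches higher reliability.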
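The third abstract scales complete rankings of evaluation types into interval-level preference values. One standard way to do this (an assumption here; the abstract does not specify the exact variant used) is Thurstone's Case V scaling applied to rank-order data: convert each judge's ranking into pairwise preference proportions, transform those to unit-normal deviates, and average:

```python
from statistics import NormalDist
import numpy as np

def scale_from_rank_orders(rankings):
    """Thurstone Case V scale values from rank-order judgments.

    rankings: list of rankings, each a list of stimulus indices ordered
    from most to least preferred. Returns one scale value per stimulus;
    higher values mean more preferred.
    """
    n = len(rankings[0])
    wins = np.zeros((n, n))  # wins[i, j]: times stimulus i ranked above j
    for ranking in rankings:
        for pos, i in enumerate(ranking):
            for j in ranking[pos + 1:]:
                wins[i, j] += 1
    # proportion of judges preferring i over j, clipped away from 0 and 1
    # so the normal inverse stays finite
    p = np.clip(wins / len(rankings), 0.01, 0.99)
    z = np.vectorize(NormalDist().inv_cdf)(p)  # unit-normal deviates
    np.fill_diagonal(z, 0.0)
    # Case V scale value: mean deviate of each stimulus against the rest
    return z.mean(axis=1)
```

For example, if most respondents rank oral exams first and multiple-choice tests last, oral exams receive the highest scale value and multiple-choice the lowest, matching the ordering the study reports.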