
Browsing by Author "Aydogdu, Tugba"

Now showing 1 - 3 of 3
  • Item
    Knowledge and attitudes toward HIV/AIDS among Turkish clinical medical and dental students
    (Wiley, 2025) Sezer, Berkant; Aydogdu, Tugba; Ata, Batuhan
    Objectives: Despite advances in HIV/AIDS treatment and prevention, persistent knowledge gaps and stigmatizing attitudes among healthcare trainees emphasize the need for early educational interventions to promote ethical and non-discriminatory care for people living with HIV/AIDS (PLWHA). This study aimed to assess and compare HIV-related knowledge and attitudes among clinical medical and dental students. Methods: A cross-sectional, questionnaire-based survey was conducted among clinical-level students at a public university in Türkiye. Participants included fourth- to sixth-year medical students and fourth- to fifth-year dental students. The questionnaire assessed general HIV/AIDS knowledge, transmission routes, post-exposure prophylaxis and attitudes toward PLWHA. Data were analysed using descriptive statistics, independent samples t-test, Mann-Whitney U test and chi-square tests. Results: Of 528 eligible students, 504 completed the survey (260 medical, 244 dental). Medical students scored significantly higher than dental students across all knowledge domains (p < 0.001) and demonstrated more positive attitudes (p < 0.001). However, both groups' overall knowledge levels were categorized as weak, and their attitudes remained negative. Common misconceptions included limited awareness of the Undetectable = Untransmittable principle, with only 11.5% of all students answering this item correctly, and false beliefs about transmission via casual contact, saliva, or shared utensils. Conclusions: While medical students performed better, widespread deficiencies and stigmatizing beliefs across both groups indicate a need for curriculum reform. HIV-related education should integrate biomedical content with ethical reasoning, stigma reduction, and patient-centred approaches. Early, experiential learning may help foster more informed and inclusive attitudes among future healthcare professionals.
  • Item
    Performance of Advanced Artificial Intelligence Models in Pulp Therapy for Immature Permanent Teeth: A Comparison of ChatGPT-4 Omni, DeepSeek, and Gemini Advanced in Accuracy, Completeness, Response Time, and Readability
    (Elsevier Science Inc, 2025) Sezer, Berkant; Aydogdu, Tugba
    Introduction: This study aims to evaluate and compare the performance of three advanced chatbots, ChatGPT-4 Omni (ChatGPT-4o), DeepSeek, and Gemini Advanced, on answering questions related to pulp therapies for immature permanent teeth. The primary outcomes assessed were accuracy, completeness, and readability, while secondary outcomes focused on response time and potential correlations between these parameters. Methods: A total of 21 questions were developed based on clinical resources provided by the American Association of Endodontists, including position statements, clinical considerations, and treatment options guides, and assessed by three experienced pediatric dentists and three endodontists. Accuracy and completeness scores, as well as response times, were recorded, and readability was evaluated using the Flesch-Kincaid Reading Ease Score, Flesch-Kincaid Grade Level, Gunning Fog Score, SMOG Index, and Coleman-Liau Index. Results: Analysis revealed significant differences in accuracy (P < .05) and completeness (P < .05) scores among the chatbots, with ChatGPT-4o and DeepSeek outperforming Gemini Advanced in both categories. Significant differences in response times were also observed, with Gemini Advanced providing the quickest responses (P < .001). Additionally, correlations were found between accuracy and completeness scores (rho: .719, P < .001), while response time showed a positive correlation with completeness (rho: .144, P < .05). No significant correlation was found between accuracy and readability (P > .05). Conclusions: ChatGPT-4o and DeepSeek demonstrated superior performance in terms of accuracy and completeness when compared to Gemini Advanced. Regarding readability, DeepSeek scored the highest, while ChatGPT-4o showed the lowest. These findings highlight the importance of considering both the quality and readability of artificial intelligence-driven responses, in addition to response time, in clinical applications.
  • Item
    Performance of Advanced Artificial Intelligence Models in Traumatic Dental Injuries in Primary Dentition: A Comparative Evaluation of ChatGPT-4 Omni, DeepSeek, Gemini Advanced, and Claude 3.7 in Terms of Accuracy, Completeness, Response Time, and Readability
    (Mdpi, 2025) Sezer, Berkant; Aydogdu, Tugba
    This study aimed to evaluate and compare the performance of four advanced artificial intelligence-powered chatbots, ChatGPT-4 Omni (ChatGPT-4o), DeepSeek, Gemini Advanced, and Claude 3.7 Sonnet, in responding to questions related to traumatic dental injuries (TDIs) in the primary dentition. The assessment focused on accuracy, completeness, readability, and response time, aligning with the 2020 International Association of Dental Traumatology guidelines. Twenty-five open-ended TDI questions were submitted to each model in two separate sessions. Responses were anonymized and evaluated by four pediatric dentists. Accuracy and completeness were rated using Likert scales; readability was assessed using five standard indices; and response times were recorded in seconds. ChatGPT-4o demonstrated significantly higher accuracy than Gemini Advanced (p = 0.005), while DeepSeek outperformed Gemini Advanced in completeness (p = 0.010). Response times differed significantly (p < 0.001), with DeepSeek being the slowest and ChatGPT-4o and Gemini Advanced being the fastest. DeepSeek produced the most readable outputs, though none of the models met public readability standards. Claude 3.7 generated the most complex texts (p < 0.001). A strong correlation existed between accuracy and completeness (rho = 0.701, p < 0.001). These findings emphasize the need for cautious integration of artificial intelligence chatbots into pediatric dental care, given the models' varied performance. Clinical accuracy, completeness, and readability are critical when offering information aligned with guidelines to support decisions in dental trauma management.
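Both chatbot studies above score model outputs with standard readability indices such as the Flesch-Kincaid measures. As a rough illustration of what those indices compute, here is a minimal sketch; the syllable counter is a naive vowel-group heuristic (an assumption of this sketch, not the validated counting used by real readability tools):

```python
import re

def count_syllables(word: str) -> int:
    # Naive approximation: one syllable per run of consecutive vowels.
    # Dedicated readability tools use dictionaries and finer rules.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_scores(text: str):
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for `text`."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    if not sentences or not words:
        return 0.0, 0.0
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level
```

Higher Reading Ease means easier text; the Grade Level maps the same two ratios onto US school grades, which is why dense clinical prose scores far above the level recommended for public-facing material.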

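The correlations reported in the abstracts above (e.g. rho = 0.701 between accuracy and completeness) are Spearman rank correlations. A minimal sketch of how Spearman's rho is computed, assuming average ranks for ties and no external libraries:

```python
def rank(values):
    # Assign 1-based ranks; tied values share the average of their positions.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average position of the tied run, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    # Spearman's rho is the Pearson correlation of the two rank vectors.
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because it operates on ranks rather than raw scores, rho captures any monotonic association, which suits ordinal Likert-scale ratings like the accuracy and completeness scores in these studies.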
| Çanakkale Onsekiz Mart Üniversitesi | Library | Open Access Policy | Guide | OAI-PMH |

This site is protected under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Çanakkale Onsekiz Mart Üniversitesi, Çanakkale, TÜRKİYE
If you notice any errors in the content, please let us know

DSpace 7.6.1, Powered by İdeal DSpace

DSpace software copyright © 2002-2026 LYRASIS

  • Cookie Settings
  • Privacy Policy
  • End User Agreement
  • Feedback