
Browsing by Author "Arisan, Arda"

Now showing 1 - 2 of 2
  • Item
    Camera-Based Monocular Depth Estimation in Orthodontics: Vision Transformer vs. CNN Model Performance
    (MDPI, 2025) Arisan, Arda; Duran, Gokhan Serhat
    Background: Monocular Depth Estimation (MDE) is a computer vision approach that predicts spatial depth information from a single two-dimensional image. In orthodontics, where facial soft-tissue evaluation is integral to diagnosis and treatment planning, such methods offer new possibilities for obtaining sagittal profile information from standard frontal photographs. This study aimed to determine whether MDE can extract clinically meaningful information for facial profile assessment. Methods: Standardized frontal photographs and lateral cephalometric radiographs from 82 adult patients (48 Class I, 28 Class II, 6 Class III) were retrospectively analyzed. Three clinically relevant soft-tissue landmarks, Upper Lip Anterior (ULA), Lower Lip Anterior (LLA), and Soft Tissue Pogonion (Pog′), were annotated on frontal photographs, while true vertical line (TVL) analysis from cephalograms served as the reference standard. For each case, anteroposterior (AP) relationships among the three landmarks were represented as ordinal rankings based on predicted depth values, and accuracy was defined as complete agreement between model-derived and reference rankings. Depth maps were generated using one vision transformer model (DPT-Large) and two CNN-based models (DepthAnything-v2 and ZoeDepth). Model performance was evaluated using accuracy, 95% confidence intervals, and effect size measures. Results: The transformer-based DPT-Large achieved clinically acceptable accuracy in 92.7% of cases (76/82; 95% CI: 84.8-97.3), significantly outperforming the CNN-based models DepthAnything-v2 (9.8%) and ZoeDepth (4.9%), both of which performed below the theoretical chance level (16.7%). Conclusions: Vision transformer-based Monocular Depth Estimation demonstrates the potential for clinically meaningful soft-tissue profiling from frontal photographs, suggesting that depth information derived from two-dimensional images may serve as a supportive tool for facial profile evaluation. These findings provide a foundation for future studies exploring the integration of depth-based analysis into digital orthodontic diagnostics.
  • Item
    Orthodontic Biomechanical Reasoning with Multimodal Language Models: Performance and Clinical Utility
    (MDPI, 2025) Arisan, Arda; Genc, Celal; Duran, Gokhan Serhat
    Background: Multimodal large language models (LLMs) are increasingly being explored as clinical support tools, yet their capacity for orthodontic biomechanical reasoning has not been systematically evaluated. This retrospective study assessed their ability to analyze treatment mechanics and explored their potential role in supporting orthodontic decision-making. Methods: Five publicly available models (GPT-o3, Claude 3.7 Sonnet, Gemini 2.5 Pro, GPT-4.0, and Grok) analyzed 56 standardized intraoral photographs illustrating a diverse range of active orthodontic force systems commonly encountered in clinical practice. Three experienced orthodontists independently scored the outputs across four domains (observation, interpretation, biomechanics, and confidence) using a 5-point scale. Inter-rater agreement and consistency were assessed, and statistical comparisons were made between models. Results: GPT-o3 achieved the highest composite score (3.34/5.00; 66.8%), significantly outperforming all other models. The performance ranking was followed by Claude (57.8%), Gemini (52.6%), GPT-4.0 (48.8%), and Grok (38.8%). Inter-rater reliability among the expert evaluators was excellent, with ICC values ranging from 0.786 (Confidence Evaluation) to 0.802 (Observation). Model self-reported confidence showed poor calibration against expert-rated output quality. Conclusions: Multimodal LLMs show emerging potential for assisting orthodontic biomechanical assessment. With expert-guided validation, these models may contribute meaningfully to clinical decision support across diverse biomechanical scenarios encountered in routine orthodontic care.
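The depth-estimation abstract scores a case as accurate only when the model-derived ordinal ranking of the three landmarks (ULA, LLA, Pog′) agrees completely with the TVL reference ranking, which puts chance agreement at 1/3! ≈ 16.7%. A minimal sketch of that agreement metric, using invented depth values that are not from the study:

```python
# Sketch of the ranking-agreement accuracy described in the MDE abstract.
# The depth values below are illustrative assumptions, not study data.

def ap_ranking(depths):
    """Order landmark names by predicted anteroposterior depth value.
    `depths` maps landmark name -> depth; returns a tuple of names."""
    return tuple(sorted(depths, key=depths.get))

def ranking_accuracy(predicted, reference):
    """Fraction of cases whose model-derived ordinal ranking of the three
    landmarks matches the reference ranking completely (chance: 1/6)."""
    hits = sum(ap_ranking(p) == ap_ranking(r)
               for p, r in zip(predicted, reference))
    return hits / len(reference)

# Two illustrative cases: one full agreement, one mismatch.
pred = [{"ULA": 1.0, "LLA": 1.2, "Pog'": 1.5},
        {"ULA": 1.1, "LLA": 1.0, "Pog'": 1.4}]
ref = [{"ULA": 2.0, "LLA": 2.3, "Pog'": 2.9},
       {"ULA": 2.0, "LLA": 2.3, "Pog'": 2.9}]
print(ranking_accuracy(pred, ref))  # 0.5
```

Because only complete agreement counts, partially correct orderings contribute nothing, which is why the CNN models could score below the 16.7% chance level.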
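The LLM-evaluation abstract summarizes agreement among the three orthodontist raters with intraclass correlation coefficients (0.786-0.802). The abstract does not state which ICC form was used, so the sketch below assumes a two-way random-effects, absolute-agreement, single-rater model, ICC(2,1), with invented ratings:

```python
# Sketch of ICC(2,1): two-way random-effects, absolute agreement, single rater.
# The ICC form and the sample ratings are assumptions, not from the study.

def icc2_1(scores):
    """scores: one row per rated output, one column per rater."""
    n, k = len(scores), len(scores[0])
    grand = sum(map(sum, scores)) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between-subject
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between-rater
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ms_r = ss_rows / (n - 1)                                 # subject mean square
    ms_c = ss_cols / (k - 1)                                 # rater mean square
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Three raters scoring five model outputs on a 5-point scale (invented data).
ratings = [[3, 3, 4], [4, 4, 4], [2, 3, 2], [5, 4, 5], [1, 1, 2]]
print(round(icc2_1(ratings), 3))
```

Values near 1 indicate that differences between rated outputs dominate rater disagreement; the study's 0.786-0.802 range falls in the conventionally "excellent" band.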


This site is protected under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Çanakkale Onsekiz Mart Üniversitesi, Çanakkale, TÜRKİYE
If you notice any errors in the content, please report them to us

DSpace 7.6.1, Powered by İdeal DSpace

DSpace software copyright © 2002-2026 LYRASIS

  • Cookie Settings
  • Privacy Policy
  • End User Agreement
  • Feedback