Browse by Author "Yenisari, Esma"
Now showing 1 - 2 of 2
Item
A Query Evaluation Approach using Opinions of Turkish Financial Market Professionals
(Assoc Information Communication Technology Education & Science, 2015) Ugurlu, Bora; Yenisari, Esma; Karasulu, Bahadir; Ayan, Ozcan Zafer
People who do not have expertise in the financial area may not see the relationship between numerical and linguistic data. In our study, a knowledge discovery approach using Turkish natural language processing is proposed in order to respond to meaningful queries and classify them with high accuracy. The query corpus consists of randomly selected unique keywords. A quantitative evaluation is performed to measure classification performance. Experimental results indicate that the proposed approach is sufficiently consistent and makes categorical classifications correctly. The approach highlights the relationship between numerical and linguistic data obtained from the Turkish financial market.

Item
Deep Learning-Based Sign Language Recognition Using Efficient Multi-Feature Attention Mechanism
(IEEE-Inst Electrical Electronics Engineers Inc, 2025) Yenisari, Esma; Yavuz, Sirma
Sign language is a communication system used by Deaf and hard of hearing people and serves as a bridge between Deaf and hearing communities. Since sign language uses numerous visuomotor elements that include both visual perception (hand shapes, facial expressions) and physical movements (hand and arm movements), it represents a multimodal input source for Sign Language Recognition (SLR) systems. In this study, a novel deep learning-based architecture using EfficientNet and a multi-feature attention mechanism is proposed to accurately recognize SL signs. Initially, general visual features are acquired through the EfficientNet model, leveraging the transfer learning paradigm. Subsequently, dataset-specific contextual features are extracted using distinct network types: spatial dependencies are modeled via Convolutional Neural Networks (CNNs), whereas temporal dynamics are learned through Recurrent Neural Networks (RNNs). These features are adaptively weighted by an attention mechanism that focuses on the information most critical for the classification task. This approach ensures that the most information-rich and useful components of both methods are emphasized, leading to a significant increase in final performance. Using RGB video images, the proposed model achieved accuracies of 99.01% and 96.84% on the BosphorusSign22k General dataset of Turkish Sign Language (TSL) signs, for 50 and 174 sign classes respectively. Furthermore, the generalization ability of the model was demonstrated by its high accuracy of 99.84% on the Argentinian Sign Language dataset (LSA64) and 98.41% on the Indian Sign Language dataset (INCLUDE50). Experimental results indicated that the proposed model architecture achieves competitive performance compared to existing SLR models reviewed in the literature.
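The adaptive weighting step the abstract describes, in which spatial (CNN) and temporal (RNN) feature branches are combined through an attention mechanism, can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's actual implementation: the feature dimensions, the random attention parameters `W` and `v`, and the branch vectors are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

# Stand-ins for the two branch outputs described in the abstract:
# a spatial feature vector (CNN branch) and a temporal one (RNN branch).
spatial_feat = rng.standard_normal(128)   # hypothetical CNN features
temporal_feat = rng.standard_normal(128)  # hypothetical RNN features

branches = np.stack([spatial_feat, temporal_feat])  # shape (2, 128)

# Hypothetical attention parameters; in a trained model these are learned.
W = rng.standard_normal((128, 64))
v = rng.standard_normal(64)

# Score each branch, normalize to weights, and fuse the branches.
scores = np.tanh(branches @ W) @ v   # one score per branch, shape (2,)
weights = softmax(scores)            # convex weights, sum to 1
fused = weights @ branches           # attention-weighted fusion, shape (128,)

print(fused.shape)
```

The fused vector would then feed the final classification layer; the softmax guarantees the branch weights form a convex combination, so neither branch can dominate unboundedly.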











