An Approach for Audio-Visual Content Understanding of Video using Multimodal Deep Learning Methodology

dc.contributor.author Boztepe, Emre Beray
dc.contributor.author Karakaya, Bedirhan
dc.contributor.author Karasulu, Bahadır
dc.contributor.author Ünlü, İsmet
dc.date.accessioned 2025-01-27T19:00:20Z
dc.date.available 2025-01-27T19:00:20Z
dc.date.issued 2022
dc.department Çanakkale Onsekiz Mart Üniversitesi
dc.description.abstract This study presents an approach for recognizing the sound environment class of a video and for understanding its spoken content, together with its sentiment, by processing the audio-visual content with a multimodal deep learning methodology. The approach begins by using deep learning to cut the parts of a given video in which the most action happens; these cut parts are concatenated into a new video clip. A deep learning network model previously trained for sound recognition then predicts the sound class. The model was trained on sound clips from ten different categories, selected as the environments in which action is most likely to occur. To strengthen the sound recognition result, any speech present in the new video is extracted. Using Natural Language Processing (NLP) and Named Entity Recognition (NER), this speech is categorized according to whether its words connote any of the ten categories. Sentiment analysis and the Apriori algorithm from Association Rule Mining (ARM) are then applied to identify the frequent categories in the concatenated video and to define the relationships between the categories found. According to the highest performance evaluation values from our experiments, the accuracy of sound environment recognition for a processed scene of a given video is 70%, and the average Bilingual Evaluation Understudy (BLEU) score for speech-to-text with the VOSK speech recognition toolkit is 90% for the English language model and 81% for the Turkish language model. Discussion and conclusions based on the scientific findings are included in our study. © 2022, Sakarya University. All rights reserved.
dc.identifier.doi 10.35377/saucis...1139765
dc.identifier.endpage 207
dc.identifier.issn 2636-8129
dc.identifier.issue 2
dc.identifier.scopus 2-s2.0-85180373578
dc.identifier.scopusquality N/A
dc.identifier.startpage 181
dc.identifier.trdizinid 1116728
dc.identifier.uri https://doi.org/10.35377/saucis...1139765
dc.identifier.uri https://search.trdizin.gov.tr/tr/yayin/detay/1116728
dc.identifier.uri https://hdl.handle.net/20.500.12428/13258
dc.identifier.volume 5
dc.indekslendigikaynak Scopus
dc.indekslendigikaynak TR-Dizin
dc.language.iso en
dc.publisher Sakarya University
dc.relation.ispartof Sakarya University Journal of Computer and Information Sciences
dc.relation.publicationcategory Article - International Peer-Reviewed Journal - Institutional Faculty Member
dc.rights info:eu-repo/semantics/openAccess
dc.snmz KA_Scopus_20250125
dc.subject Association Rule Mining; Multimodal Deep Learning; Named Entity Recognition; Natural Language Processing
dc.title An Approach for Audio-Visual Content Understanding of Video using Multimodal Deep Learning Methodology
dc.type Article
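
The abstract above reports that speech found in the assembled clip is transcribed with the VOSK speech recognition toolkit, using English and Turkish language models. The following is only a minimal sketch of such a transcription step with the vosk Python package, not the authors' implementation; it assumes a mono 16-bit PCM WAV file extracted from the clip, and the model directory name shown is just an example of a downloadable VOSK model.

import json
import wave

from vosk import KaldiRecognizer, Model

def transcribe(wav_path: str, model_path: str) -> str:
    """Transcribe a mono 16-bit PCM WAV file with a VOSK language model."""
    wf = wave.open(wav_path, "rb")
    model = Model(model_path)                      # e.g. an English or Turkish VOSK model directory
    rec = KaldiRecognizer(model, wf.getframerate())

    pieces = []
    while True:
        data = wf.readframes(4000)
        if len(data) == 0:
            break
        if rec.AcceptWaveform(data):               # a segment was finalized
            pieces.append(json.loads(rec.Result())["text"])
    pieces.append(json.loads(rec.FinalResult())["text"])
    wf.close()
    return " ".join(p for p in pieces if p)

# Hypothetical usage with an example English model directory:
# print(transcribe("clip_audio.wav", "vosk-model-small-en-us-0.15"))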

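The reported 90% (English) and 81% (Turkish) figures are average BLEU scores comparing reference transcripts with the VOSK output. One possible way to compute such scores with NLTK is sketched below; the paper does not specify the n-gram weights or smoothing used, so those choices here are assumptions, and the sample sentence pairs are purely illustrative.

from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

def bleu_percent(reference: str, hypothesis: str) -> float:
    """BLEU between one reference transcript and one ASR hypothesis, as a percentage."""
    ref_tokens = reference.lower().split()
    hyp_tokens = hypothesis.lower().split()
    # Smoothing avoids zero scores on short transcripts that miss higher-order n-grams.
    smooth = SmoothingFunction().method1
    return 100.0 * sentence_bleu([ref_tokens], hyp_tokens, smoothing_function=smooth)

# Average over several (reference, hypothesis) pairs from transcribed clips (toy data).
pairs = [("the quick brown fox", "the quick brown fox"),
         ("jumps over the lazy dog", "jumps over a lazy dog")]
scores = [bleu_percent(r, h) for r, h in pairs]
print(sum(scores) / len(scores))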
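The abstract also states that frequent categories and the relationships between them are identified with the Apriori algorithm from Association Rule Mining. Below is a minimal sketch using the mlxtend library, where each processed scene of the concatenated clip is treated as a transaction of predicted categories; the category names and the support/confidence thresholds are illustrative assumptions, not values from the paper.

import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules
from mlxtend.preprocessing import TransactionEncoder

# Each scene becomes one "transaction" of predicted categories
# (sound class, NER-derived category, ...). Category names are illustrative only.
scene_categories = [
    ["street", "car"],
    ["street", "crowd"],
    ["street", "car", "crowd"],
    ["forest", "river"],
]

# One-hot encode the transactions into a boolean DataFrame.
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(scene_categories).transform(scene_categories),
                      columns=te.columns_)

# Frequent category sets, then association rules between them; thresholds are assumptions.
frequent = apriori(onehot, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])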