Exploration of Syntactic Structure in Virtual Sign Language: A Study on AI-Based Social Media Platforms
Background. The virtualization of sign language through artificial intelligence on social media platforms raises linguistic challenges that remain largely unexplored, particularly regarding the accuracy of syntactic structures in digital contexts. These visual representations risk reproducing grammatical inaccuracies that distort meaning and reduce the effectiveness of communication.
Purpose. This study explores how AI systems used on social media platforms such as TikTok, Instagram, and YouTube represent the syntactic structure of sign language in virtual form, and identifies where those representations are accurate and where they are distorted.
Method. The research uses an exploratory qualitative approach with a cross-platform case study design. Data were obtained from 30 virtual sign language videos and analyzed using visual-spatial linguistic frameworks and open coding techniques. Findings were validated through triangulation of the thematic analysis and expert consultation.
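For illustration only, an open-coding pass of the kind described above can be tallied programmatically to compare code frequencies across platforms. The platform names come from the study, but the clip records, the code labels, and the `code_frequencies_by_platform` helper are hypothetical, a minimal sketch rather than the study's actual codebook or tooling:

```python
from collections import Counter

# Hypothetical coded records: each annotated video clip is tagged with
# its source platform and the syntactic features observed in it.
coded_clips = [
    {"platform": "YouTube",   "codes": ["topic-comment order", "non-manual markers"]},
    {"platform": "YouTube",   "codes": ["topic-comment order", "spatial agreement"]},
    {"platform": "TikTok",    "codes": ["topic-comment order"]},
    {"platform": "Instagram", "codes": ["non-manual markers"]},
]

def code_frequencies_by_platform(clips):
    """Tally how often each open code appears, grouped by platform."""
    tallies = {}
    for clip in clips:
        counter = tallies.setdefault(clip["platform"], Counter())
        counter.update(clip["codes"])
    return tallies

tallies = code_frequencies_by_platform(coded_clips)
print(tallies["YouTube"]["topic-comment order"])  # 2
```

Such frequency tables can then feed the thematic triangulation step, making it easier to see which syntactic features surface consistently on one platform but are absent on another.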
Results. The results show that the representation of syntactic structure varies greatly between platforms, with YouTube showing higher accuracy than TikTok. Factors such as video length, algorithmic sophistication, and the presence of non-manual elements strongly affect the completeness of sentence structure in virtual sign language.
Conclusion. Current AI representations of sign language do not fully capture its complex syntactic structure. A new approach to developing multimodal technologies is needed, one that treats linguistic elements holistically so that digital communication becomes more inclusive and accurate.
Copyright (c) 2025 Ratna Susanti, Rashid Rahman

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.