Decoding Lost Languages: A Philological Study of Ancient Texts
Background. This research focuses on the decipherment of ancient scripts and the complexity of the writing systems used by the Egyptian, Mesopotamian, and Indus Valley civilizations. It is motivated by the importance of language as a key tool for uncovering the social, spiritual, and administrative lives of past civilizations.
Purpose. The purpose of the study was to explain how these scripts can be interpreted by combining traditional philological methods with artificial intelligence technology.
Method. The methods combine manual linguistic analysis with modern algorithms that accelerate the decipherment process.
Results. The results show that the Egyptian script was easier to decipher thanks to additional documentation, while the Indus Valley script remains undeciphered. Mesopotamian signs exhibit complex dual meanings, especially in religious and astronomical contexts. The case studies show that ancient writing systems were multifunctional tools reflecting sophisticated social and spiritual structures.
Conclusion. The study confirms that an interdisciplinary approach is essential for uncovering more secrets of past civilizations. This research enriches the understanding of ancient languages and shows that technology can accelerate the decipherment process, although it is not yet fully adequate on its own. This contribution paves the way for further research involving global collaboration and the development of new technologies.
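The algorithmic support for manual analysis described in the Method section can be illustrated with a minimal sketch. Sign-frequency and sign-pair statistics are a standard first step when studying an undeciphered script: highly frequent signs are candidates for grammatical markers, while recurring adjacent pairs hint at fixed formulae. The corpus below is entirely hypothetical (the sign names and counts are illustrative, not from any real inscription).

```python
from collections import Counter


def sign_frequencies(inscriptions):
    """Count how often each sign appears across a corpus.

    Each inscription is a sequence of sign identifiers (strings).
    """
    counts = Counter()
    for inscription in inscriptions:
        counts.update(inscription)
    return counts


def sign_bigrams(inscriptions):
    """Count adjacent sign pairs; recurring pairs may indicate
    fixed formulae or compound signs."""
    pairs = Counter()
    for inscription in inscriptions:
        pairs.update(zip(inscription, inscription[1:]))
    return pairs


# Hypothetical corpus: each list is one inscription's sign sequence.
corpus = [
    ["fish", "jar", "fish", "arrow"],
    ["fish", "jar", "man"],
    ["arrow", "fish", "jar"],
]

print(sign_frequencies(corpus).most_common(2))  # [('fish', 4), ('jar', 3)]
print(sign_bigrams(corpus).most_common(1))      # [(('fish', 'jar'), 3)]
```

Such frequency profiles do not decipher anything by themselves; they narrow the hypothesis space that the philologist then tests against archaeological and comparative evidence, which is the division of labor the study's interdisciplinary approach presumes.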
Copyright (c) 2024 Sri Nur Rahmi, Vann Sok, Sokha Dara

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.