Abstract

This study presents the development of a Turkish Natural Language Processing (NLP) module for Socially Assistive Robots (SARs), aiming to enhance their integration into public spaces and smart environments. Rising labor costs and the limitations of human staffing in developed regions, including Turkey, underscore the need for localized language capabilities in SARs. Leveraging advanced machine learning techniques, the system builds on a BERT-based model fine-tuned specifically for Turkish language understanding and response generation. Fine-tuned on a dataset of 61,293 question–answer pairs, the system achieved 95% voice recognition accuracy, 82% response accuracy, and a 92% speech comprehensibility rating among native Turkish speakers. On the question-answering task, it reached a 72% F1 score and a 72% Exact Match (EM) score, indicating that it generates contextually appropriate responses. These results highlight the system’s effectiveness in enabling natural and intuitive interaction. The developed module shows significant potential for deployment in public settings such as universities, hospitals, airports, and shopping centers, where it can help users navigate their surroundings and access essential services. Furthermore, the research highlights the educational benefits of the Temi OS platform, which offers hands-on learning opportunities in robotics, AI, and natural language processing; competitions based on SAR development further encourage innovation among students and researchers. In conclusion, this work illustrates the critical role of Turkish-language SARs in improving service accessibility and advancing AI education. Future work will focus on refining linguistic capabilities, expanding interactive features, and conducting long-term user-engagement studies to maximize societal impact.
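
To make the fine-tuning step concrete, the sketch below shows one conventional way to adapt a Turkish BERT checkpoint for extractive question answering with the Hugging Face Transformers library. It is a minimal illustration under stated assumptions, not the paper’s actual pipeline: the public BERTurk checkpoint dbmdz/bert-base-turkish-cased, the file name train.json, and the SQuAD-style answer layout are all assumptions, since the abstract does not name the specific model or data format.

    # Minimal sketch (assumed, not the authors' released code): fine-tuning a
    # Turkish BERT checkpoint for extractive question answering with the
    # Hugging Face Transformers library.
    from datasets import load_dataset
    from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                              Trainer, TrainingArguments)

    model_name = "dbmdz/bert-base-turkish-cased"  # public BERTurk checkpoint (assumed)
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForQuestionAnswering.from_pretrained(model_name)

    # Assumed data layout: SQuAD-style JSON with "question", "context", and
    # "answers" ({"text": [...], "answer_start": [...]}) columns.
    train_raw = load_dataset("json", data_files={"train": "train.json"})["train"]

    def preprocess(examples):
        # Tokenize question/context pairs and convert character-level answer
        # spans into token-level start/end positions for span prediction.
        enc = tokenizer(examples["question"], examples["context"],
                        truncation="only_second", max_length=384,
                        padding="max_length", return_offsets_mapping=True)
        starts, ends = [], []
        for i, offsets in enumerate(enc["offset_mapping"]):
            answer = examples["answers"][i]
            s_char = answer["answer_start"][0]
            e_char = s_char + len(answer["text"][0])
            seq_ids = enc.sequence_ids(i)
            s_tok = e_tok = 0  # fall back to [CLS] if the span was truncated away
            for t, (o_s, o_e) in enumerate(offsets):
                if seq_ids[t] != 1:  # skip question and special tokens
                    continue
                if o_s <= s_char < o_e:
                    s_tok = t
                if o_s < e_char <= o_e:
                    e_tok = t
            starts.append(s_tok)
            ends.append(e_tok)
        enc["start_positions"] = starts
        enc["end_positions"] = ends
        enc.pop("offset_mapping")
        return enc

    train_ds = train_raw.map(preprocess, batched=True,
                             remove_columns=train_raw.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="berturk-qa", num_train_epochs=3,
                               per_device_train_batch_size=16,
                               learning_rate=3e-5),
        train_dataset=train_ds,
    )
    trainer.train()

Held-out EM and F1 can then be scored with a standard SQuAD-style metric (for example, evaluate.load("squad") from the Hugging Face evaluate package), which is the usual protocol behind figures like the 72% EM/F1 reported above.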

Keywords

  • Turkish Natural Language Processing
  • Socially Assistive Robots (SARs)
  • Human-Robot Interaction
  • BERT Fine-Tuning
  • Conversational AI
  • Voice Recognition
  • Smart Environments
  • Multilingual Robotics

  22. 22. “Full article: Advancing natural language processing (NLP) applications of morphologically rich languages with bidirectional encoder representations from transformers (BERT): an empirical case study for Turkish.” Accessed: Feb. 05, 2025. [Online]. Available: https://www.tandfonline.com/doi/full/10.1080/00051144.2021.1922150