Harmonizing Tradition, Algorithm, and Innovation: A Bibliometric Study on AI in Traditional Music

  • Wei Jie He, Faculty of Creative Arts, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
  • I Ta Wang, Faculty of Creative Arts, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
  • KuWing Cheong, School of Music, UCSI University, 94017 Kuala Lumpur, Malaysia
Keywords: Artificial intelligence, Traditional music, Bibliometric, BiblioMagika, Music technology

Abstract

This study explores the intersection of artificial intelligence (AI) and traditional music through a bibliometric lens, identifying key trends, contributors, and research hotspots within this emerging interdisciplinary field. Bibliometric data were extracted from the Scopus database, covering publications from 1974 to April 19, 2025, retrieved using keywords related to AI and traditional music. The study used BiblioMagika to analyze publication trends, prolific authors, contributing countries, source titles, keyword co-occurrence patterns, and collaboration networks. The results indicate a marked increase in research output at the intersection of artificial intelligence and traditional music since 2019. Over time, the thematic focus has evolved from early applications rooted in music information retrieval, such as signal processing, genre classification, and feature extraction, toward more sophisticated areas including generative AI composition, cultural modeling of indigenous musical forms, and multimodal human-AI interaction in performance and education contexts.
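The keyword co-occurrence analysis mentioned above can be illustrated with a minimal sketch: given each publication's author keywords (as exported from Scopus, for example), normalize them and count how often each pair appears in the same record. This is an illustrative example only, not BiblioMagika's implementation; the function name and toy data are the author's own.

```python
from itertools import combinations
from collections import Counter

def keyword_cooccurrence(records):
    """Count how often each pair of keywords appears together across
    publications. `records` is a list of keyword lists, one per paper."""
    pairs = Counter()
    for keywords in records:
        # Normalize case/whitespace and deduplicate within a record.
        normalized = sorted({k.strip().lower() for k in keywords})
        for a, b in combinations(normalized, 2):
            pairs[(a, b)] += 1
    return pairs

# Toy records standing in for Scopus author-keyword fields.
records = [
    ["Artificial Intelligence", "Traditional Music", "Genre Classification"],
    ["artificial intelligence", "music information retrieval"],
    ["Traditional Music", "Artificial Intelligence"],
]
counts = keyword_cooccurrence(records)
print(counts[("artificial intelligence", "traditional music")])  # 2
```

The resulting pair counts are exactly the edge weights of a keyword co-occurrence network, which bibliometric tools then visualize and cluster into thematic areas.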



Published: 2025-06-24
Section: Articles