This paper proposes a feature extraction method for waveform music files, whose signals are complex and variable. First, the music signal is denoised in the time domain, and the signal envelope is obtained by applying a Gaussian low-pass filter to the high-frequency components. Notes and their features are then extracted by peak detection on the FFT spectrum in the frequency domain. Based on the strength patterns of notes within bars, bars and bar eigenvectors are extracted with a window-moving matching algorithm; periods are then delimited according to the similarity between bars, yielding period features that carry emotional factors. These period features are fed into a RAN (resource-allocating network) neural-network emotion recognizer to recognize the emotion of each period and, from these, to determine the emotional feature of the whole piece. Experimental results show that the method is effective in extracting features from different types of music.
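The envelope extraction and FFT-based note detection summarized above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the rectify-and-smooth envelope approximation, the simple local-maximum peak picker, and all parameter values (`sigma_ms`, `threshold_ratio`) are assumptions.

```python
import numpy as np

def gaussian_envelope(signal, fs, sigma_ms=10.0):
    """Rectify the signal and smooth it with a Gaussian low-pass kernel
    to approximate the amplitude envelope (parameters are assumptions)."""
    sigma = int(fs * sigma_ms / 1000.0)          # kernel width in samples
    t = np.arange(-3 * sigma, 3 * sigma + 1)
    kernel = np.exp(-t**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()                       # unit-gain smoothing
    return np.convolve(np.abs(signal), kernel, mode="same")

def fft_peak_notes(frame, fs, threshold_ratio=0.3):
    """Detect spectral peaks in one frame and map them to MIDI note
    numbers; a toy stand-in for the paper's note extraction step."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    thresh = threshold_ratio * spectrum.max()
    # a peak is a local maximum above the relative threshold
    peaks = [i for i in range(1, len(spectrum) - 1)
             if spectrum[i] > thresh
             and spectrum[i] > spectrum[i - 1]
             and spectrum[i] > spectrum[i + 1]]
    # standard MIDI mapping: note = 69 + 12 * log2(f / 440 Hz)
    return sorted({int(round(69 + 12 * np.log2(freqs[i] / 440.0)))
                   for i in peaks if freqs[i] > 0})

# Example: a pure 440 Hz tone should map to MIDI note 69 (A4)
fs = 8000
t = np.arange(0, 0.5, 1.0 / fs)
tone = np.sin(2 * np.pi * 440 * t)
notes = fft_peak_notes(tone, fs)
env = gaussian_envelope(tone, fs)
```

A real system would apply the peak picker frame by frame over the enveloped signal; bar extraction and period segmentation would then operate on the resulting note sequence.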
SHI Xiang-bin, SUN Peng-yu. Research on a method of feature extraction for waveform music files[J]. Journal of Shenyang Aerospace University, 2013, 30(5): 60-66.
DOI: 10.3969/j.issn.2095-1248.2013.05.013