A Robust Music Auto-Tagging Technique Using Audio Fingerprinting and Deep Convolutional Neural Networks

Music tags are a set of descriptive keywords that convey high-level information about a music clip, such as emotions (sadness, happiness), genres (jazz, classical), and instruments (guitar, vocals). Since tags describe music from the listener's perspective, they can be used for music discovery and recommendation.

However, in music information retrieval (MIR), researchers have traditionally needed expertise in acoustics or engineering design to analyze and organize music information, classify it according to musical form, and then provide retrieval services.

In recent years, attention has shifted toward feature learning and deep architectures, which reduce the amount of engineering work and prior knowledge required. Deep convolutional neural networks have been applied successfully in the image, text, and speech fields. However, previous music auto-tagging methods cannot accurately discriminate the type of music when the audio is distorted or noisy, which leads to poor tagging results. We therefore propose a robust music auto-tagging method. First, the music clip is converted into a spectrogram, and the salient information in the spectrogram, namely the audio fingerprint, is extracted. The fingerprint is then used as the input to a convolutional neural network that learns the features; in this way, good music search results are obtained even for degraded audio. Experimental results demonstrate the robustness of the proposed method. A sketch of this pipeline is given below.
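
The following is a minimal sketch of the described spectrogram-to-fingerprint-to-CNN pipeline, assuming a librosa/SciPy/PyTorch toolchain. The fingerprinting step shown here is a simple spectral-peak (constellation-style) mask, which is one common way to realize an audio fingerprint; the tag list, file name, network depth, and hyperparameters are illustrative placeholders, not the exact configuration used in this work.

```python
# Sketch: spectrogram -> spectral-peak fingerprint -> CNN tagger.
# Library choices, layer sizes, and the tag set are illustrative assumptions.
import numpy as np
import librosa
import torch
import torch.nn as nn
from scipy.ndimage import maximum_filter

TAGS = ["jazz", "classical", "guitar", "vocal", "sad", "happy"]  # placeholder tag set


def fingerprint_spectrogram(path, sr=22050, n_mels=96, n_fft=1024, hop=512):
    """Log-mel spectrogram masked down to its local spectral peaks
    (a constellation-style fingerprint that suppresses noise and distortion)."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel, ref=np.max)            # (n_mels, frames)
    # Keep only time-frequency bins that are local maxima and reasonably strong.
    peaks = (log_mel == maximum_filter(log_mel, size=(5, 5))) & (log_mel > -40.0)
    return (log_mel * peaks).astype(np.float32)               # fingerprinted spectrogram


class TagCNN(nn.Module):
    """Small CNN mapping the fingerprinted spectrogram to multi-label tag scores."""
    def __init__(self, n_tags=len(TAGS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 4)),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 4)),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                           # global pooling -> fixed size
        )
        self.classifier = nn.Linear(64, n_tags)

    def forward(self, x):                                      # x: (batch, 1, n_mels, frames)
        return self.classifier(self.features(x).flatten(1))   # raw logits, one per tag


if __name__ == "__main__":
    fp = fingerprint_spectrogram("example_clip.wav")           # hypothetical input file
    x = torch.from_numpy(fp)[None, None]                       # add batch and channel dims
    model = TagCNN()
    probs = torch.sigmoid(model(x))[0]                         # independent tag probabilities
    print({t: round(float(p), 3) for t, p in zip(TAGS, probs)})
    # Training would use nn.BCEWithLogitsLoss against multi-hot tag vectors.
```

Because the peak mask discards low-energy bins, additive noise and mild distortion change the input to the network less than they would change a raw spectrogram, which is the intuition behind using the fingerprint as the CNN input.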


