EECS351 STEM SEPARATION
CITATIONS
Works Cited
- Z. Rafii, A. Liutkus, F.-R. Stöter, S. I. Mimilakis, and R. Bittner, "The MUSDB18 corpus for music separation," Dec. 2017.
- E. Vincent, R. Gribonval, and C. Févotte, "Performance measurement in blind audio source separation," IEEE Transactions on Audio, Speech, and Language Processing, vol. 14, no. 4, pp. 1462–1469, 2006.
- "Time-Frequency Masking for Harmonic-Percussive Source Separation," MATLAB & Simulink. [Online]. Available: https://www.mathworks.com/help/audio/ug/time-frequency-masking-for-harmonic-percussive-source-separation.html. [Accessed: 25-Apr-2023].
- "Cocktail Party Source Separation Using Deep Learning Networks," MATLAB & Simulink. [Online]. Available: https://www.mathworks.com/help/deeplearning/ug/cocktail-party-source-separation-using-deep-learning-networks.html. [Accessed: 25-Apr-2023].
- O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, 18th International Conference, Munich, Germany, October 5–9, 2015, Proceedings, Part III, 2015, pp. 234–241.
- A. Jansson, E. Humphrey, N. Montecchio, R. Bittner, A. Kumar, and T. Weyde, "Singing voice separation with deep U-Net convolutional networks," in Proc. 18th International Society for Music Information Retrieval Conference (ISMIR), 2017.
- E. Manilow, P. Seetharaman, and J. Salamon, Open Source Tools & Data for Music Source Separation, 2020. [Online]. Available: https://source-separation.github.io/tutorial
- P. Seetharaman, F. Pishdadian, and B. Pardo, "Music/voice separation using the 2D Fourier transform," in 2017 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2017, pp. 36–40.