Research

Statement

Exploring the creative possibilities of new technologies has been at the core of my scientific and artistic work throughout my academic and professional career. In my early graduate research, I combined human-computer interaction, physical computing, and music information retrieval techniques to develop novel gestural interfaces for controlling sound synthesis processes, which I then used expressively in my creative practice. More recently, I have been exploring the creative affordances of the machine learning paradigm in music composition and performance: I develop, adapt, compose, and perform with machine learning techniques and tools that learn models from music encoded in symbolic formats. My current research investigates the expressive possibilities of machine learning-assisted music-making directly in the audio domain.

The contemporary and rapidly changing landscape of the new music economy requires researchers, professionals, and makers who are not only aware of new tools and techniques for designing and implementing creative output, but who also think critically about the biases and implications of using data-driven technologies in creative practice in general, and in music-making in particular. As a researcher, scholar, computer musician, and performer, I treat these concerns as a focal point of my work. In my scientific research and creative output, I interrogate and explore the aesthetic possibilities of new technologies, but I also attend to their omissions, silences, and disconnections. I am therefore committed to an inclusive approach to developing new ways of producing, researching, and enjoying music.

Publications

2024

Tecks, A., T. Peschlow, and G. Vigliensoni. 2024. Explainability paths for sustained artistic practice. In Proceedings of the Second International Workshop on eXplainable AI for the Arts at the ACM Creativity and Cognition Conference (XAIxArts2024). doi.org/10.48550/arXiv.2407.15216

Bryan-Kinns, N., C. Ford, S. Zheng, H. Kennedy, A. Chamberlain, M. Lewis, D. Hemment, Z. Li, Q. Wu, L. Xiao, G. Xia, J. Rezwana, M. Clemens, and G. Vigliensoni. 2024. Explainable AI for the Arts 2 (XAIxArts2). In Proceedings of the ACM Creativity and Cognition Conference (C&C ’24). doi.org/10.1145/3635636.3660763

2023

Vigliensoni, G., and R. Fiebrink. 2023. Steering latent audio models through interactive machine learning. In Proceedings of the 14th International Conference on Computational Creativity (ICCC’23). doi.org/10.5281/zenodo.8087978

Vigliensoni, G., and R. Fiebrink. 2023. Interacting with neural audio synthesis models through interactive machine learning. In the First International Workshop on eXplainable AI for the Arts at the ACM Creativity and Cognition Conference (XAIxArts2023).

Vigliensoni, G., and R. Fiebrink. 2023. Re•col•lec•tions: Sharing sonic memories through interactive machine learning and neural audio synthesis models. In the Creative AI track of the 37th Conference on Neural Information Processing Systems (NeurIPS 2023).

Shimizu, J., I. Olowe, T. Broad, G. Vigliensoni, P. Thattai, and R. Fiebrink. 2023. Interactive machine learning for generative models. In Proceedings of the Machine Learning for Creativity and Design Workshop, 37th Conference on Neural Information Processing Systems (NeurIPS 2023).

Fujinaga, I., and G. Vigliensoni. 2023. Optical music recognition workflow for medieval music manuscripts. In Proceedings of the 5th International Workshop on Reading Music Systems (WoRMS 2023).

2022

Vigliensoni, G., L. McCallum, E. Maestre, and R. Fiebrink. 2022. R-VAE: Live latent space drum rhythm generation from minimal-size datasets. Journal of Creative Music Systems 1(1). doi.org/10.5920/jcms.902

Vigliensoni, G., P. Perry, and R. Fiebrink. 2022. A small-data mindset for generative AI creative work. In Proceedings of the Generative AI and HCI Workshop at the Conference on Human Factors in Computing Systems (CHI 2022). doi.org/10.5281/zenodo.7086327

2021

Vigliensoni, G., E. de Luca, and I. Fujinaga. 2021. Chapter 6: Repertoire: Neume Notation. In Music Encoding Initiative Guidelines, edited by J. Kepper et al.

2020

Vigliensoni, G., L. McCallum, E. Maestre, and R. Fiebrink. 2020. Generation and visualization of rhythmic latent spaces. In Proceedings of the 2020 Joint Conference on AI Music Creativity. doi.org/10.5281/zenodo.4285422

Vigliensoni, G., L. McCallum, and R. Fiebrink. 2020. Creating latent spaces for modern music genre rhythms using minimal training data. In Proceedings of the 11th International Conference on Computational Creativity (ICCC’20). doi.org/10.5281/zenodo.7415792

Vigliensoni, G., E. Maestre, and R. Fiebrink. 2020. Web-based dynamic visualization of rhythmic latent space. In Proceedings of the Sound, Image and Interaction Design Symposium (SIIDS2020). doi.org/10.5281/zenodo.7438305

Regimbal, J., G. Vigliensoni, C. Hutnik, and I. Fujinaga. 2020. IIIF-based lyric and neume editor for square-notation manuscripts. In Proceedings of the Music Encoding Conference.

2019

Fujinaga, I., and G. Vigliensoni. 2019. The art of teaching computers: The SIMSSA optical music recognition workflow system. In Proceedings of the 27th European Signal Processing Conference. doi.org/10.23919/eusipco.2019.8902658

Vigliensoni, G., A. Daigle, E. Liu, J. Calvo-Zaragoza, J. Regimbal, M. A. Nguyen, N. Baxter, Z. McLennan, and I. Fujinaga. 2019. Overcoming the challenges of optical music recognition of Early Music with machine learning. Digital Humanities Conference 2019.

Vigliensoni, G., A. Daigle, E. Liu, J. Calvo-Zaragoza, J. Regimbal, M. A. Nguyen, N. Baxter, Z. McLennan, and I. Fujinaga. 2019. From image to encoding: Full optical music recognition of Medieval and Renaissance music. Music Encoding Conference 2019.

2018

Vigliensoni, G., J. Calvo-Zaragoza, and I. Fujinaga. 2018. Developing an environment for teaching computers to read music. In Proceedings of the 1st International Workshop on Reading Music Systems.

Castellanos, F., J. Calvo-Zaragoza, G. Vigliensoni, and I. Fujinaga. 2018. Document analysis of music score images with selectional auto-encoders. In Proceedings of the 19th International Society for Music Information Retrieval Conference.

Nápoles, N., G. Vigliensoni, and I. Fujinaga. 2018. Encoding matters. In Proceedings of the 5th International Conference on Digital Libraries for Musicology. doi.org/10.1145/3273024.3273027

Calvo-Zaragoza, J., F. Castellanos, G. Vigliensoni, and I. Fujinaga. 2018. Deep neural networks for document processing of music score images. Applied Sciences 8(5): 654. doi.org/10.3390/app8050654

Vigliensoni, G., J. Calvo-Zaragoza, and I. Fujinaga. 2018. An environment for machine pedagogy: Learning how to teach computers to read music. In Proceedings of the Intelligent Music Interfaces for Listening and Creation workshop.

2017

Vigliensoni, G., D. Romblom, M. P. Verge, and C. Guastavino. 2017. Perceptual evaluation of a virtual acoustic room model. The Journal of the Acoustical Society of America 142(4): 2559.

Vigliensoni, G., and I. Fujinaga. 2017. The music listening histories dataset. In Proceedings of the 18th International Society for Music Information Retrieval Conference.

Calvo-Zaragoza, J., G. Vigliensoni, and I. Fujinaga. 2017. One-step detection of background, staff lines, and symbols in medieval music manuscripts with convolutional neural networks. In Proceedings of the 18th International Society for Music Information Retrieval Conference.

Barone, M., K. Dacosta, G. Vigliensoni, and M. Woolhouse. 2017. GRAIL: Database linking music metadata across artist, release, and track. In Proceedings of the 4th International Workshop on Digital Libraries for Musicology. doi.org/10.1145/3144749.3144760

Calvo-Zaragoza, J., G. Vigliensoni, and I. Fujinaga. 2017. Music document layout analysis through machine learning and human feedback. In Proceedings of the 12th IAPR International Workshop on Graphics Recognition. doi.org/10.1109/icdar.2017.259

Saleh, Z., K. Zhang, J. Calvo-Zaragoza, G. Vigliensoni, and I. Fujinaga. 2017. Pixel.js: Web-based pixel classification correction platform for ground truth creation. In Proceedings of the 12th IAPR International Workshop on Graphics Recognition. doi.org/10.1109/icdar.2017.267

Calvo-Zaragoza, J., G. Vigliensoni, and I. Fujinaga. 2017. Pixelwise classification for music document analysis. In Proceedings of the 2017 Seventh International Conference on Image Processing Theory, Tools, and Applications. doi.org/10.1109/ipta.2017.8310134

Calvo-Zaragoza, J., G. Vigliensoni, and I. Fujinaga. 2017. Pixel-wise binarization of musical documents with convolutional neural networks. In Proceedings of the 15th IAPR Conference on Machine Vision Applications. doi.org/10.23919/mva.2017.7986876

Calvo-Zaragoza, J., G. Vigliensoni, and I. Fujinaga. 2017. Staff-line detection on greyscale images with pixel classification. In Proceedings of the 8th Iberian Conference on Pattern Recognition and Image Analysis. doi.org/10.1007/978-3-319-58838-4_31

Barone, M., K. Dacosta, G. Vigliensoni, and M. Woolhouse. 2017. GRAIL: A general recorded audio identity linker. Late-breaking session of the 18th International Society for Music Information Retrieval Conference.

Calvo-Zaragoza, J., G. Vigliensoni, and I. Fujinaga. 2017. A unified approach towards automatic recognition of heterogeneous music documents. In Proceedings of the Music Encoding Conference.

Calvo-Zaragoza, J., G. Vigliensoni, and I. Fujinaga. 2017. A machine learning framework for the categorization of elements in images of musical documents. In Proceedings of the Third International Conference on Technologies for Music Notation and Representation.

2016

Vigliensoni, G., and I. Fujinaga. 2016. Automatic music recommendation systems: Do demographic, profiling, and contextual features improve their performance? In Proceedings of the 17th International Society for Music Information Retrieval Conference.

Calvo-Zaragoza, J., G. Vigliensoni, and I. Fujinaga. 2016. Staff-line detection on greyscale images with pixel classification. Late-breaking session of the 17th International Society for Music Information Retrieval Conference.

Calvo-Zaragoza, J., G. Vigliensoni, and I. Fujinaga. 2016. Document analysis for music scores via machine learning. In Proceedings of the 3rd International Workshop on Digital Libraries for Musicology. doi.org/10.1145/2970044.2970047

2015

Fujinaga, I., G. Vigliensoni, and H. Knox. 2015. The making of a computerized harpsichord for analysis and training. International Symposium on Performance Science.

Barone, M., K. Dacosta, G. Vigliensoni, and M. Woolhouse. 2015. GRAIL: A music identity space collection and API. Late-breaking session of the 16th International Society for Music Information Retrieval Conference.

2014

Vigliensoni, G., and I. Fujinaga. 2014. Time-shift normalization and listener profiling in a large dataset of music listening histories. Fourth annual Seminar on Cognitively Based Music Informatics Research.

Vigliensoni, G., and I. Fujinaga. 2014. Identifying time zones in a large dataset of music listening logs. In Proceedings of the International Workshop on Social Media Retrieval and Analysis. doi.org/10.1145/2632188.2632203

2013

Vigliensoni, G., J. A. Burgoyne, and I. Fujinaga. 2013. MusicBrainz for the world: The Chilean experience. In Proceedings of the 14th International Society for Music Information Retrieval Conference.

Vigliensoni, G., G. Burlet, and I. Fujinaga. 2013. Optical measure recognition in common music notation. In Proceedings of the 14th International Society for Music Information Retrieval Conference.

2012

Vigliensoni, G., and M. Wanderley. 2012. A quantitative comparison of position trackers for the development of a touch-less musical interface. In Proceedings of the New Interfaces for Musical Expression Conference.

Hankinson, A., J. A. Burgoyne, G. Vigliensoni, A. Porter, J. Thompson, W. Liu, R. Chiu, and I. Fujinaga. 2012. Digital document image retrieval using optical music recognition. In Proceedings of the 13th International Society for Music Information Retrieval Conference.

Hankinson, A., J. A. Burgoyne, G. Vigliensoni, and I. Fujinaga. 2012. Creating a large-scale searchable digital collection from printed music materials. In Proceedings of Advances in Music Information Research. doi.org/10.1145/2187980.2188221

2011

Vigliensoni, G. 2011. Touch-less gestural control of concatenative sound synthesis. Master’s thesis, McGill University.

Vigliensoni, G., J. A. Burgoyne, A. Hankinson, and I. Fujinaga. 2011. Automatic pitch detection in printed square notation. In Proceedings of the 12th International Society for Music Information Retrieval Conference.

Hankinson, A., G. Vigliensoni, J. A. Burgoyne, and I. Fujinaga. 2011. New tools for optical chant recognition. International Association of Music Libraries Conference.

Burgoyne, J. A., R. Chiu, G. Vigliensoni, A. Hankinson, J. Cumming, and I. Fujinaga. 2011. Creating a fully searchable edition of the Liber Usualis. Medieval and Renaissance Music Conference.

2010

Vigliensoni, G., and M. Wanderley. 2010. Soundcatcher: Explorations in audio-looping and time-freezing using an open-air gestural controller. In Proceedings of the International Computer Music Conference.

McKay, C., J. A. Burgoyne, J. Hockman, J. B. L. Smith, G. Vigliensoni, and I. Fujinaga. 2010. Evaluating the performance of lyrical features relative to and in combination with audio, symbolic and cultural features. In Proceedings of the 11th International Society for Music Information Retrieval Conference.

Vigliensoni, G., C. McKay, and I. Fujinaga. 2010. Using jWebMiner 2.0 to improve music classification performance by combining different types of features mined from the web. In Proceedings of the 11th International Society for Music Information Retrieval Conference.

Datasets

Vigliensoni, G., and I. Fujinaga. 2017. The Music Listening Histories Dataset (MLHD).

Oramas, S., V. C. Ostuni, and G. Vigliensoni. 2016. Music Recommendation Dataset (KGRec-music). Licensed under Creative Commons CC BY-NC 3.0, except third-party data.

Oramas, S., V. C. Ostuni, and G. Vigliensoni. 2016. Sound Recommendation Dataset (KGRec-sound). Licensed under Creative Commons CC BY-NC 3.0, except third-party data.