Francesca Ronchini, Luca Comanducci, Simone Marcucci and Fabio Antonacci
Dipartimento di Elettronica, Informazione e Bioingegneria - Politecnico di Milano
Paper accepted @ 17th International Symposium on
Computer Music Multidisciplinary Research (CMMR25)
Text-to-music models have revolutionized the creative landscape, offering new possibilities for music creation. Yet their integration into musicians’ workflows remains underexplored. This paper presents a case study on how TTM models impact music production, based on a user study of their effect on producers' creative workflows. Participants produce tracks using a custom tool combining TTM and source separation models. Semi-structured interviews and thematic analysis reveal key challenges, opportunities, and ethical considerations. The findings offer insights into the transformative potential of TTMs in music production, as well as challenges in their real-world integration.
This README contains brief notes on the supplementary material for the paper AI-Assisted Music Production: A User Study on Text-to-Music Models.
interface_code_ai_music_production.py contains the Python code for running the interface.
Before running the code, make sure to install the required environment by running the following command:
conda env create -f env_demo.yaml
Once the environment is correctly installed, the demo can be run by simply typing
python interface_code_ai_music_production.py
in a terminal and then connecting via a browser to the port reported by the interface. Running the interface requires the audiocraft library, since it generates music using MusicGen, and demucs for source separation.
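For orientation, a minimal sketch of the generate-then-separate pipeline that such an interface wraps is shown below. The specific model checkpoints (facebook/musicgen-small, htdemucs) and the helper name generate_and_separate are illustrative assumptions, not necessarily the configuration used in the interface code.

```python
# Sketch of a MusicGen + Demucs pipeline (illustrative; checkpoint
# choices are assumptions, not the paper's exact configuration).
PROMPT = "an upbeat lo-fi hip hop loop with warm electric piano chords"
DURATION_S = 8  # seconds of audio to generate


def generate_and_separate(prompt: str = PROMPT, duration: int = DURATION_S):
    """Generate a clip with MusicGen, then split it into stems with Demucs."""
    import torchaudio
    from audiocraft.models import MusicGen
    from demucs.apply import apply_model
    from demucs.pretrained import get_model

    musicgen = MusicGen.get_pretrained("facebook/musicgen-small")
    musicgen.set_generation_params(duration=duration)
    wav = musicgen.generate([prompt])  # shape: (batch, channels, samples)

    demucs = get_model("htdemucs")
    # Demucs expects stereo audio at its own sample rate.
    audio = torchaudio.functional.resample(
        wav[0], musicgen.sample_rate, demucs.samplerate
    )
    if audio.shape[0] == 1:
        audio = audio.repeat(2, 1)
    stems = apply_model(demucs, audio.unsqueeze(0))[0]  # (sources, ch, samples)
    return dict(zip(demucs.sources, stems))


if __name__ == "__main__":
    stems = generate_and_separate()
    print(sorted(stems))
```

Note that the heavy model calls happen only when the script is executed directly; both models download pretrained weights on first use.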
Additional material and audio samples are available on the companion website.
If you use code or material from this work, please cite our paper:
@inproceedings{ronchini2025aiassisted,
title={AI-Assisted Music Production: A User Study on Text-to-Music Models},
author={Ronchini, Francesca and Comanducci, Luca and Marcucci, Simone and Antonacci, Fabio},
booktitle={17th International Symposium on Computer Music Multidisciplinary Research},
year={2025}
}
