# Sound Field Reconstruction using Diffusion Models

## Brief

The code in this repository is mainly inspired by Palette: Image-to-Image Diffusion Models. After preparing your own data, modify the corresponding configuration file to point to it. The data generation script is taken from Lluis

## Training/Resume Training

1. Set `resume_state` in the configuration file to the directory of the previous checkpoint. In the following example, the directory contains the training states and the saved model:

   ```json
   "path": { // set the file path for every part
       "resume_state": "experiments/training_sfr/checkpoint/100"
   },
   ```
2. Run the script:

   ```shell
   cd src
   python run.py -p train -c config/sfr.json
   ```
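Note that the configuration snippet above uses `//`-comments, which strict JSON parsers reject. A minimal sketch of loading such a file (the `load_config` helper is hypothetical, and the comment stripping is naive: it assumes no `//` occurs inside string values):

```python
import json
import re

def load_config(text: str) -> dict:
    """Parse a Palette-style JSON config, stripping //-comments first.

    Naive sketch: assumes '//' never appears inside a string value.
    """
    return json.loads(re.sub(r"//.*", "", text))

# Example mirroring the snippet above (outer braces added to make it valid JSON).
cfg = load_config("""
{
    "path": { // set the file path for every part
        "resume_state": "experiments/training_sfr/checkpoint/100"
    }
}
""")
print(cfg["path"]["resume_state"])
```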

## Test

1. Modify the configuration file to point to your data, following the steps in the Data Prepare part.
2. Set your model path, following the steps in the Resume Training part.
3. Run the script:

   ```shell
   cd src
   python run.py -p test -c config/sfr.json
   ```

## Main changes with respect to the original Palette implementation

- `dataset.py` - creation of the sound field dataset, starting from the frequency responses of rooms (here you can set the number of microphones for each room)
- `mask.py` - random masking of the sound fields, based on the number of available microphones
- `metric.py` - added the NMSE metric
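As an illustration of the last two items, here is a minimal sketch of microphone-based random masking and an NMSE metric. The function names, array shapes, and the exact NMSE formula (squared error normalized by the reference energy) are assumptions for illustration, not the repository's actual API:

```python
import numpy as np

def random_mic_mask(shape, n_mics, seed=None):
    """Hypothetical counterpart of mask.py: return a binary mask that
    keeps n_mics randomly chosen positions (observed mics = 1, missing = 0)."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(shape)
    kept = rng.choice(mask.size, size=n_mics, replace=False)
    mask.flat[kept] = 1.0
    return mask

def nmse(estimate, reference):
    """Normalized mean squared error (assumed form):
    ||estimate - reference||^2 / ||reference||^2."""
    return np.sum((estimate - reference) ** 2) / np.sum(reference ** 2)

# Usage: mask a 32x32 sound field down to 10 observed microphone positions.
field = np.random.default_rng(0).standard_normal((32, 32))
mask = random_mic_mask(field.shape, n_mics=10, seed=0)
observed = field * mask
print(nmse(observed, field))
```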