Some functionality for synthetic data generation already exists in the prior localization repository, in `synth.py` and `glm_predict.py`.
These should be reorganized in a sensible way to:
1. Generate predicted spike counts in bins from a new design matrix with the same covariates, or from a subset of the existing design matrix, using an `nglm.predict()` method.
2. Decompose PETHs into the subcomponents contributed by each kernel.
3. Optionally generate fully synthetic spike times. This is harder, especially if you want to randomize when each spike occurs within a bin, but could be useful for testing other data analysis methods or pipelines.
Items 1 and 2 are high-priority and low-hanging fruit: the primitives for item 1 exist in `sklearn`, and the code for item 2 has already been written. Both just need to be adapted.
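As a rough illustration of items 1 and 2, here is a minimal sketch using `sklearn`'s `PoissonRegressor` as a stand-in for whatever estimator `nglm` wraps. The kernel names and column layout are hypothetical; in the real code they would come from the design matrix builder. The key point is that under a log link the linear predictor is additive, so each kernel's columns contribute a separable term to the log-rate, which is what makes the PETH decomposition cheap:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)

# Toy design matrix: two "kernels" occupying disjoint column blocks
# (hypothetical names; the real layout lives in the design matrix builder).
n_bins = 500
kernel_cols = {"stimulus": slice(0, 3), "movement": slice(3, 5)}
X = rng.normal(size=(n_bins, 5))

# Simulate spike counts from a known Poisson GLM so fitting has a target.
true_w = np.array([0.5, -0.3, 0.2, 0.8, -0.1])
y = rng.poisson(np.exp(X @ true_w - 1.0))

glm = PoissonRegressor(alpha=1e-4).fit(X, y)

# Item 1: predicted spike counts per bin for any design matrix with the
# same columns (or a zeroed-out / subset version of it).
X_new = rng.normal(size=(100, 5))
pred_counts = glm.predict(X_new)  # exp(intercept_ + X_new @ coef_)

# Item 2: per-kernel contributions. Under the log link, the linear
# predictor is additive across column blocks.
contribs = {name: X_new[:, cols] @ glm.coef_[cols]
            for name, cols in kernel_cols.items()}
log_rate = glm.intercept_ + sum(contribs.values())
assert np.allclose(np.exp(log_rate), pred_counts)
```

Averaging `contribs["stimulus"]` (or its exponentiated rate with the other kernels held at baseline) across trials would then give that kernel's component of the PETH.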
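For item 3, one simple approach assumes the rate is piecewise-constant within each bin: draw a Poisson count per bin from the predicted rates, then place each spike uniformly at random inside its bin. This is only a sketch under that assumption (the function name and bin size are made up for illustration), not the repository's implementation:

```python
import numpy as np


def synthetic_spike_times(rates, bin_size, rng):
    """Generate synthetic spike times from expected counts per bin.

    Assumes the rate is constant within each bin: draws a Poisson count
    per bin, then jitters each spike uniformly within the bin so spike
    times are not quantized to bin edges.
    """
    counts = rng.poisson(rates)
    times = [
        # bin i spans [i * bin_size, (i + 1) * bin_size)
        i * bin_size + rng.uniform(0.0, bin_size, size=n)
        for i, n in enumerate(counts)
    ]
    return np.sort(np.concatenate(times))


rng = np.random.default_rng(1)
rates = np.full(100, 0.5)  # e.g. 0.5 expected spikes per 20 ms bin
spikes = synthetic_spike_times(rates, bin_size=0.02, rng=rng)
```

The resulting spike trains are inhomogeneous-Poisson only at bin resolution; if finer temporal structure matters, thinning against an interpolated rate would be the next step up.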