A central goal of systems neuroscience is to understand how high-dimensional neural activity relates to complex stimuli and behaviours. Recent advances in neural and behavioural recording techniques have enabled routine collection of large datasets to answer these questions. With access to such rich data, we are now able to describe activity at the level of neural populations rather than individual neurons, and we can look at the neural underpinnings of increasingly complex and naturalistic behaviours. Unfortunately, it can be challenging to extract interpretable structure from such high-dimensional, often noisy, data. Generative modelling is a powerful approach from probabilistic machine learning that can reveal this structure by learning the statistical properties of the recorded data, often under the assumption that the high-dimensional observations arise from some lower-dimensional ‘latent’ process. Moreover, constructing such generative models makes it possible to build prior knowledge about the data directly into the analysis pipeline, such as multi-region structure or temporal continuity. This makes it possible both to make more efficient use of the available data by building in appropriate inductive biases, and to make the models more interpretable by shaping them according to the known structure of the data. Given the wealth of advances in generative modelling for systems neuroscience in recent years, we think the time is ripe to review this progress and discuss both challenges and opportunities for the future.
## Example notebooks
We provide three example notebooks that implement and discuss a range of generative models commonly used in neuroscience.
*1. Regression*\
This notebook considers methods used for regression - the case where we have both some observations and a set of regressors that we think can predict those observations.
We start from the simple case of linear regression and reformulate it as a Bayesian method which can be generalized to the more complicated but powerful *Gaussian process* regression (see https://www.youtube.com/watch?v=cQAPIlMeL_g for a more thorough overview of the use of Gaussian processes in systems neuroscience).
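
As a flavour of where this ends up, here is a minimal sketch of the closed-form Gaussian process regression posterior using only `numpy`. The toy data, squared-exponential kernel, and hyperparameters below are invented for illustration and are not the notebook's actual code.

```python
import numpy as np

# Toy 1D regression problem: noisy observations y of a smooth function of the
# regressor x (purely illustrative data; the notebook uses its own examples).
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3, 3, 40))
y = np.sin(x) + 0.2 * rng.standard_normal(40)

def rbf(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1D inputs."""
    sqdist = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale ** 2)

noise_var = 0.2 ** 2
x_test = np.linspace(-3, 3, 100)

# Standard closed-form GP posterior at the test inputs.
K = rbf(x, x) + noise_var * np.eye(len(x))
K_s = rbf(x, x_test)
K_ss = rbf(x_test, x_test)
post_mean = K_s.T @ np.linalg.solve(K, y)
post_cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
post_std = np.sqrt(np.diag(post_cov))
```
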
*2. Latent variable models (LVMs)*\
Having treated the case of regression, we then move on to latent variable models. This *unsupervised learning* setting generalizes regression to the case where we do not know the regressors but instead have to *infer* them from the data.
This inference process is often complicated, which calls for simplifying assumptions such as Gaussianity or linearity.
In this notebook, we consider both linear and non-linear methods and look at the importance of such modelling choices for analysing high-dimensional data.
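
As a minimal illustration of the linear case, here is a sketch of fitting factor analysis (a linear-Gaussian latent variable model) to simulated population activity with scikit-learn; the simulation and its dimensions are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulate population activity: a 2D latent trajectory mapped linearly onto 50 neurons
# (all sizes and noise levels here are arbitrary, illustrative choices).
rng = np.random.default_rng(1)
T, d_latent, n_neurons = 500, 2, 50
t = np.linspace(0, 4 * np.pi, T)
z = np.stack([np.sin(t), np.cos(0.5 * t)], axis=1)     # true latents (T x 2)
C = rng.standard_normal((d_latent, n_neurons))          # loading matrix
y = z @ C + 0.5 * rng.standard_normal((T, n_neurons))   # observed activity (T x 50)

# Fit the linear-Gaussian LVM and infer the latents; the recovered trajectories
# match the true ones only up to an invertible linear transformation.
fa = FactorAnalysis(n_components=d_latent)
z_hat = fa.fit_transform(y)
```
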
*3. Discrete state spaces*\
In both of the above cases, we worked with *continuous* state spaces.
However, it is becoming increasingly common in systems neuroscience to also work with discrete state spaces, such as motivational states or different regimes of neural dynamics.
In this notebook, we start from the simple Hidden Markov Model for inferring discrete states and transitions from data and then generalize to the increasingly popular approach of Switching Linear Dynamical Systems for modelling neural data.
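
As a rough sketch of the starting point, here is how fitting an HMM might look with the `ssm` package (installation instructions are further below); the simulated data and all hyperparameters are purely illustrative, and the exact API may differ between `ssm` versions.

```python
import ssm  # lindermanlab/ssm; see installation instructions below

# Simulate 1000 time steps from a 3-state HMM with 10-dimensional Gaussian observations
# (all sizes here are arbitrary, illustrative choices).
T, K, D = 1000, 3, 10
true_hmm = ssm.HMM(K, D, observations="gaussian")
true_states, obs = true_hmm.sample(T)

# Fit a fresh HMM by EM and decode the most likely discrete state sequence.
hmm = ssm.HMM(K, D, observations="gaussian")
hmm.fit(obs, method="em")
inferred_states = hmm.most_likely_states(obs)

# The same package also provides switching linear dynamical systems (ssm.SLDS),
# which this notebook builds up to.
```
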
## Generative models
- **Universal count model** ([Liu and Lengyel, 2021](https://proceedings.neurips.cc/paper/2021/hash/6f5216f8d89b086c18298e043bfe48ed-Abstract.html))
- **Manifold GPLVM** ([Jensen et al., 2020](https://proceedings.neurips.cc/paper/2020/hash/fedc604da8b0f9af74b6cfc0fab2163c-Abstract.html))
- Other models [colab](https://colab.research.google.com/drive/15YK-TyWjCCfcVDPxiTOUYbMimBZN1EL7?usp=sharing)

More detailed instructions are provided further below.
### Detailed instructions
Click on a section to expand it.
<details>
<summary>1. <b>Install Python 3</b></summary>
- On Windows: ...
- On MacOS, Ubuntu, etc., go to 'Terminal' and run `chmod +x` on the downloaded `.sh` file, then run it with...
</details>
Installing `ssm`:

    git clone https://github.com/lindermanlab/ssm
    cd ssm
    pip install numpy cython
    pip install -e .

## Acknowledgements
We would like to thank the COSYNE Workshops organizing committee for giving us the opportunity to run our Workshop in Montreal.
We are also grateful to Ta-Chu Kao, Guillaume Hennequin, Máté Lengyel, and many others in CBL and beyond for various discussions and assistance with implementations of some of the methods.