# # [Concept relevance propagation](@id docs-crp)
# In [*From attribution maps to human-understandable explanations through Concept Relevance Propagation*](https://www.nature.com/articles/s42256-023-00711-8) (CRP),
# Achtibat et al. propose conditioning LRP relevances on individual features of a model.
#
# This example builds on the basics shown in the [*Getting started*](@ref docs-getting-started) section.
# We start out by loading the same pre-trained LeNet5 model and MNIST input data:
using ExplainableAI
using Flux

using BSON # hide
model = BSON.load("../../model.bson", @__MODULE__)[:model] # hide
model
#-
using MLDatasets
using ImageCore, ImageIO, ImageShow

index = 10
x, y = MNIST(Float32, :test)[index]
input = reshape(x, 28, 28, 1, :)

convert2image(MNIST, x)

# ## Step 1: Create LRP analyzer
# To create a CRP analyzer, first define an LRP analyzer with your desired rules:
composite = EpsilonPlusFlat()
lrp_analyzer = LRP(model, composite)

# ## Step 2: Define concepts
# Next, specify the index of the layer whose outputs the explanation should be conditioned on.
# In this example, we are interested in the outputs of the last convolutional layer, layer 3:
concept_layer = 3 # index of relevant layer in model
model[concept_layer] # show layer
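#
# As a quick sanity check (this step is not required by CRP, just a Flux sketch,
# assuming the model is a `Chain` that supports slicing),
# we can run the model up to the concept layer and look at the shape of its output.
# The number of output channels tells us how many concepts this layer provides:
feature_map = model[1:concept_layer](input)
size(feature_map) # (width, height, channels, batchsize); channels = number of concepts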

# Then, specify the concepts you are interested in.
# To automatically select the $n$ most relevant concepts, use [`TopNConcepts`](@ref).
#
# Note that for convolutional layers,
# a feature corresponds to an entire output channel of the layer.
concepts = TopNConcepts(5)

# To manually select concepts by their channel index, use [`IndexedConcepts`](@ref):
concepts = IndexedConcepts(1, 2, 10)
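#
# How you pick these indices is up to you.
# As a purely hypothetical heuristic (not part of the CRP API),
# one could for example look for the channels with the largest total activation
# on the current input, reusing the feature map computed above:
channel_activations = vec(sum(feature_map; dims=(1, 2, 4)))
sortperm(channel_activations; rev=true)[1:3] # indices of the three most active channels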

# ## Step 3: Use CRP analyzer
# We can now create a [`CRP`](@ref) analyzer
# and use it like any other analyzer from ExplainableAI.jl:
analyzer = CRP(lrp_analyzer, concept_layer, concepts)
heatmap(input, analyzer)
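#
# If you want the numeric conditional relevances rather than rendered heatmaps,
# the generic `analyze` function should work here as it does for other analyzers
# (a sketch, assuming `analyze` returns an `Explanation` with a `val` field):
expl = analyze(input, analyzer)
size(expl.val) # the batch dimension holds one explanation per concept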

# ## Using CRP on input batches
# Note that `CRP` stacks the explanations for all concepts along the batch dimension of its output.
# When using CRP on input batches, the explanations are sorted by concept first, then by input,
# e.g. `[c1_i1, c1_i2, c2_i1, c2_i2, c3_i1, c3_i2]` in the following example:
x, y = MNIST(Float32, :test)[10:11]
batch = reshape(x, 28, 28, 1, :)

heatmap(batch, analyzer)
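#
# Since the heatmaps are sorted by concept first, they can be regrouped
# with a single reshape. This is a hypothetical post-processing step,
# assuming `heatmap` returns a plain vector with one entry per concept and input:
heatmaps = heatmap(batch, analyzer)
batchsize = size(batch, 4)
reshape(heatmaps, batchsize, :) # one row per input, one column per concept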