# ## Preparing the model
# ExplainabilityMethods.jl can be used on any classifier.
# In this tutorial we will be using a pretrained VGG-19 model from
# [Metalhead.jl](https://github.com/FluxML/Metalhead.jl).
using ExplainabilityMethods
using Flux
using Metalhead
using Metalhead: weights
vgg = VGG19()
Flux.loadparams!(vgg, Metalhead.weights("vgg19"))

#md # !!! warning "Pretrained weights"
#md #     This doc page was generated using Metalhead `v0.6.0`.
#md #     At the time you read this, Metalhead might already have implemented weight loading
#md #     via `VGG19(; pretrain=true)`, in which case `loadparams!` is not necessary.
img_raw = testimage("chelsea")

# which we preprocess for VGG-19:
include("../utils/preprocessing.jl")
img = preprocess(img_raw); # array of size (224, 224, 3, 1)
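# The definition of `preprocess` lives in `preprocessing.jl` and is not shown here.
# As a rough sketch only (a hypothetical helper, assuming Images.jl is available),
# VGG-style preprocessing typically resizes the image to 224×224, normalizes it
# with the ImageNet channel statistics and adds a batch dimension:
# ```julia
# using Images
#
# function preprocess_sketch(img) # hypothetical, not the tutorial's `preprocess`
#     img = imresize(img, (224, 224))            # VGG-19 expects 224×224 inputs
#     μ = reshape(Float32[0.485, 0.456, 0.406], 1, 1, 3) # ImageNet channel means
#     σ = reshape(Float32[0.229, 0.224, 0.225], 1, 1, 3) # ImageNet channel stds
#     x = Float32.(permutedims(channelview(img), (3, 2, 1))) # CHW → WHC layout
#     return reshape((x .- μ) ./ σ, 224, 224, 3, 1)      # add batch dim (WHCN)
# end
# ```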

# ## Calling the analyzer
# We can now select an analyzer of our choice
# and call [`analyze`](@ref) to get an explanation `expl`:
analyzer = LRPZero(model)
expl, out = analyze(img, analyzer);

# Finally, we can visualize the explanation through heatmapping:
heatmap(expl)

#md # !!! tip "Neuron selection"
#md #     To get an explanation with respect to a specific output neuron (e.g. class 42), call
#md #     ```julia
#md #     expl, out = analyze(img, analyzer, 42)
#md #     ```
#
# Currently, the following analyzers are implemented:
#
# ```
# └── LRPGamma
# ```

# ## Custom composites
# If our model is a "flat" chain of Flux layers, we can assign LRP rules
# to each layer individually. For this purpose,
# ExplainabilityMethods exports the method [`flatten_chain`](@ref):
model = flatten_chain(model)

#md # !!! warning "Flattening models"
#md #     Not all models can be flattened, e.g. those using
#md #     `Parallel` and `SkipConnection` layers.
#
# Now we set a rule for each layer
rules = [
    ZBoxRule(),
    repeat([GammaRule()], 15)...,
    repeat([ZeroRule()], length(model) - 16)...
]
# and define a custom LRP analyzer:
analyzer = LRP(model, rules)
expl, out = analyze(img, analyzer)
heatmap(expl)
# The rule has to be of type `AbstractLRPRule`.
struct MyCustomLRPRule <: AbstractLRPRule end

# It is then possible to dispatch on the utility functions [`modify_layer`](@ref),
# [`modify_params`](@ref) and [`modify_denominator`](@ref) with our rule type
# `MyCustomLRPRule` to define custom rules without writing boilerplate code.
function modify_params(::MyCustomLRPRule, W, b)
    ρW = W + 0.1 * relu.(W)
    return ρW, b
end

analyzer = LRP(model, MyCustomLRPRule())
expl, out = analyze(img, analyzer)
heatmap(expl)

#md # !!! tip "Pull requests welcome"
#md #     If you implement a rule that's not included in ExplainabilityMethods, please make a PR to
#md #     [`src/lrp_rules.jl`](https://github.com/adrhill/ExplainabilityMethods.jl/blob/master/src/lrp_rules.jl)!