Commit bef83fc: Small fixes to docs
Parent: e2e6249
5 files changed (29 additions, 24 deletions)

docs/literate/example.jl (19 additions, 18 deletions)
@@ -2,7 +2,7 @@
 # ## Preparing the model
 # ExplainabilityMethods.jl can be used on any classifier.
 # In this tutorial we will be using a pretrained VGG-19 model from
-# [Metalhead.jl](https://github.com/FluxML/Metalhead.jl)
+# [Metalhead.jl](https://github.com/FluxML/Metalhead.jl).
 using ExplainabilityMethods
 using Flux
 using Metalhead
@@ -11,7 +11,7 @@ using Metalhead: weights
 vgg = VGG19()
 Flux.loadparams!(vgg, Metalhead.weights("vgg19"))
 
-#md # !!! note "Pretrained weights"
+#md # !!! warning "Pretrained weights"
 #md #     This doc page was generated using Metalhead `v0.6.0`.
 #md #     At the time you read this, Metalhead might already have implemented weight loading
 #md #     via `VGG19(; pretrain=true)`, in which case `loadparams!` is not necessary.
@@ -27,24 +27,23 @@ img_raw = testimage("chelsea")
 
 # which we preprocess for VGG-19
 include("../utils/preprocessing.jl")
-img = preprocess(img_raw)
-size(img)
+img = preprocess(img_raw); # array of size (224, 224, 3, 1)
 
 # ## Calling the analyzer
 # We can now select an analyzer of our choice
-# and call `analyze` to get an explanation `expl`:
+# and call [`analyze`](@ref) to get an explanation `expl`:
 analyzer = LRPZero(model)
 expl, out = analyze(img, analyzer);
 
-#md # !!! note "Neuron selection"
+# Finally, we can visualize the explanation through heatmapping:
+heatmap(expl)
+
+#md # !!! tip "Neuron selection"
 #md #     To get an explanation with respect to a specific output neuron (e.g. class 42) call
 #md #     ```julia
 #md #     expl, out = analyze(img, analyzer, 42)
 #md #     ```
 #
-# Finally, we can visualize the explanation through heatmapping:
-heatmap(expl)
-
 # Currently, the following analyzers are implemented:
 #
 # ```
@@ -56,21 +55,23 @@ heatmap(expl)
 # └── LRPGamma
 # ```
 
-# ## Custom rules composites
+# ## Custom composites
 # If our model is a "flat" chain of Flux layers, we can assign LRP rules
 # to each layer individually. For this purpose,
-# ExplainabilityMethods exports the method `flatten_chain`:
+# ExplainabilityMethods exports the method [`flatten_chain`](@ref):
 model = flatten_chain(model)
 
-#md # !!! note "Flattening models"
+#md # !!! warning "Flattening models"
 #md #     Not all models can be flattened, e.g. those using
 #md #     `Parallel` and `SkipConnection` layers.
 #
 # Now we set a rule for each layer
 rules = [
-    ZBoxRule(), repeat([GammaRule()], 15)..., repeat([ZeroRule()], length(model) - 16)...
+    ZBoxRule(),
+    repeat([GammaRule()], 15)...,
+    repeat([ZeroRule()], length(model) - 16)...
 ]
-# to define a custom LRP analyzer:
+# and define a custom LRP analyzer:
 analyzer = LRP(model, rules)
 expl, out = analyze(img, analyzer)
 heatmap(expl)
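
Read positionally, this composite assigns `ZBoxRule` to the first layer, `GammaRule` to layers 2 through 16, and `ZeroRule` to every remaining layer of the flattened chain. A minimal sketch of that reading; the `@assert` is illustrative and not part of the original code:

```julia
# One rule per layer of the flattened chain, in order.
rules = [
    ZBoxRule(),                                  # layer 1
    repeat([GammaRule()], 15)...,                # layers 2-16
    repeat([ZeroRule()], length(model) - 16)...  # remaining layers
]
@assert length(rules) == length(model)  # illustrative check: one rule per layer
```
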
@@ -80,9 +81,9 @@ heatmap(expl)
 # The rule has to be of type `AbstractLRPRule`.
 struct MyCustomLRPRule <: AbstractLRPRule end
 
-# It is then possible to dispatch on the utility functions `modify_layer`, `modify_params`
-# and `modify_denominator` with our rule type `MyCustomLRPRule`
-# to define custom rules without writing boilerplate code.
+# It is then possible to dispatch on the utility functions [`modify_layer`](@ref),
+# [`modify_params`](@ref) and [`modify_denominator`](@ref) with our rule type
+# `MyCustomLRPRule` to define custom rules without writing boilerplate code.
 function modify_params(::MyCustomLRPRule, W, b)
     ρW = W + 0.1 * relu.(W)
     return ρW, b
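
Since the hunk above cuts off mid-function, here is the custom rule assembled in one piece; a sketch that only adds the closing `end` implied by Julia syntax:

```julia
# Custom LRP rule from this section. modify_params boosts positive weights,
# the same ρW = W + γ·max(0, W) shape as a gamma-style rule with γ = 0.1.
struct MyCustomLRPRule <: AbstractLRPRule end

function modify_params(::MyCustomLRPRule, W, b)
    ρW = W + 0.1 * relu.(W)
    return ρW, b
end
```
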
@@ -93,6 +94,6 @@ analyzer = LRP(model, MyCustomLRPRule())
 expl, out = analyze(img, analyzer)
 heatmap(expl)
 
-#md # !!! note "PRs welcome"
+#md # !!! tip "Pull requests welcome"
 #md #     If you implement a rule that's not included in ExplainabilityMethods, please make a PR to
 #md #     [`src/lrp_rules.jl`](https://github.com/adrhill/ExplainabilityMethods.jl/blob/master/src/lrp_rules.jl)!
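
Pieced together, the tutorial in this file runs roughly as follows. This is a sketch, not a verbatim excerpt: the diff elides lines of the original file, so the `strip_softmax` step deriving `model` from `vgg` and the TestImages.jl import are assumptions here:

```julia
using ExplainabilityMethods
using Flux
using Metalhead
using TestImages  # assumed source of testimage("chelsea")

# Load pretrained VGG-19 (Metalhead v0.6.0-era weight loading, per the note above).
vgg = VGG19()
Flux.loadparams!(vgg, Metalhead.weights("vgg19"))
model = strip_softmax(vgg.layers)  # assumption: the elided lines derive `model` like this

# Preprocess the test image into a (224, 224, 3, 1) array.
include("../utils/preprocessing.jl")
img = preprocess(testimage("chelsea"));

# Analyze and visualize; pass a neuron index, e.g. 42, to target a specific class.
analyzer = LRPZero(model)
expl, out = analyze(img, analyzer)
heatmap(expl)
```
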

docs/make.jl (0 additions, 3 deletions)

@@ -14,9 +14,6 @@ for example in readdir(EXAMPLE_DIR)
     Literate.script(EXAMPLE, OUT_DIR) # .jl script
 end
 
-DocMeta.setdocmeta!(
-    ExplainabilityMethods, :DocTestSetup, :(using ExplainabilityMethods); recursive=true
-)
 makedocs(;
     modules=[ExplainabilityMethods],
     authors="Adrian Hill",
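
For context, the deleted call is the standard Documenter.jl doctest setup; if doctests return later, it goes back right before `makedocs`, exactly as removed:

```julia
# Doctest setup removed by this commit: makes `using ExplainabilityMethods`
# implicit in every doctest, recursively across submodules.
DocMeta.setdocmeta!(
    ExplainabilityMethods, :DocTestSetup, :(using ExplainabilityMethods); recursive=true
)
```
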

docs/src/api.md (4 additions, 0 deletions)

@@ -35,3 +35,7 @@ modify_denominator
 strip_softmax
 flatten_chain
 ```
+
+# Index
+```@index
+```

docs/src/index.md (5 additions, 2 deletions)

@@ -3,7 +3,10 @@ CurrentModule = ExplainabilityMethods
 ```
 ![ExplainabilityMethods.jl](https://raw.githubusercontent.com/adrhill/ExplainabilityMethods.jl/gh-pages/assets/banner.png)
 
-Explainable AI in Julia using Flux.
+Explainable AI in Julia using [Flux.jl](https://fluxml.ai).
 
-```@index
+## Installation
+To install this package and its dependencies, open the Julia REPL and run
+```julia-repl
+julia> ]add https://github.com/adrhill/ExplainabilityMethods.jl
 ```
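
The `]add` command above runs in the REPL's Pkg mode; the same install can be scripted with the Pkg API (assuming a Julia version where `Pkg.add(url=...)` is available, i.e. 1.4 or newer):

```julia
# Script equivalent of `]add https://github.com/adrhill/ExplainabilityMethods.jl`.
using Pkg
Pkg.add(url="https://github.com/adrhill/ExplainabilityMethods.jl")
```
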

src/gradient.jl (1 addition, 1 deletion)

@@ -1,5 +1,5 @@
 """
-    InputTimesGradient(model)
+    Gradient(model)
 
 Analyze model by calculating the gradient of a neuron activation with respect to the input.
 """