Commit 438f006

Merge pull request #2 from DLFMetadataAssessment/add-draft

Final changes

2 parents: b8dcd2e + 00b3aa6

3 files changed: 27 additions, 11 deletions

doc/benchmarks-full.rst
15 additions, 4 deletions
@@ -11,10 +11,21 @@ In this model, the benchmark and metrics set the standard (i.e., the criteria th
 
 - General benchmarks usage:
 
-- Each criterion is intended to be “system agnostic” but some may not apply to every situation (e.g., local field requirements)
-- Criteria are binary -- i.e., the set being evaluated must meet all points or it does not meet the benchmarking standard for that level
-- These benchmarks focus solely on the quality of metadata entry, not the quality of information (i.e., available information is all entered correctly, although we might wish that additional information is known about an item to improve the record)
-- This framework is intended to be scalable (it is written in the context of 1 record, but could apply across a collection, resource type, or an entire system)
+- Each criterion is intended to be “system agnostic” but some may not apply to
+  every situation (e.g., local field requirements)
+- Criteria are binary -- i.e., the set being evaluated must meet all points or
+  it does not meet the benchmarking standard
+- Benchmarks are cumulative -- i.e., records must meet all the criteria at the chosen
+  level and the lower levels, if relevant
+- These benchmarks focus solely on the quality of metadata entry, not the quality
+  of information -- i.e., available information is all entered correctly, although
+  we might wish that additional information is known about an item to improve the record
+- This framework is intended to be scalable (it is written in the context of 1 record,
+  but could apply across a collection, resource type, or an entire system)
+- Minimal criteria apply in all cases; suggested criteria do not rise to the level
+  of “absolute minimum” but are suggested as priorities for "better-than-minimal"
+  based on our research and experience; ideal criteria tend to be more subjective and may not apply in every situation
+
 
 
 
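The usage rules in the diff above describe an evaluation procedure: criteria are binary (every point must pass) and benchmark levels are cumulative (a record must satisfy its chosen level plus all lower levels). A minimal sketch of that model, with illustrative level names and record fields that are assumptions rather than part of the actual benchmarks:

```python
# Hypothetical sketch of the evaluation model in the diff above: binary
# criteria (all must pass) and cumulative levels (a record at a given
# level must also satisfy every lower level). LEVELS, ORDER, and the
# record fields are illustrative, not the real benchmark criteria.

LEVELS = {
    "minimal": [
        lambda rec: bool(rec.get("title")),
        lambda rec: bool(rec.get("rights")),
    ],
    "suggested": [
        lambda rec: bool(rec.get("date")),
    ],
    "ideal": [
        lambda rec: bool(rec.get("description")),
    ],
}
ORDER = ["minimal", "suggested", "ideal"]

def meets_level(record, level):
    """Binary + cumulative: every criterion at this level and all lower levels must pass."""
    required = ORDER[: ORDER.index(level) + 1]
    return all(crit(record) for lvl in required for crit in LEVELS[lvl])

record = {"title": "Map of Denton", "rights": "Public Domain", "date": "1901"}
print(meets_level(record, "suggested"))  # True: minimal + suggested criteria all pass
print(meets_level(record, "ideal"))      # False: no description field
```

Because `all()` short-circuits and the levels are checked together, a record that fails any single criterion at any required level fails the whole benchmark, matching the "binary" rule in the text.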

doc/benchmarks-summary.rst
5 additions, 3 deletions
@@ -10,10 +10,12 @@ Usage:
 - Each criterion is intended to be “system agnostic” but some may not apply to
   every situation (e.g., local field requirements)
 - Criteria are binary -- i.e., the set being evaluated must meet all points or
-  it does not meet the benchmarking standard for that level
+  it does not meet the benchmarking standard
+- Benchmarks are cumulative -- i.e., records must meet all the criteria at the chosen
+  level and the lower levels, if relevant
 - These benchmarks focus solely on the quality of metadata entry, not the quality
-  of information (i.e., available information is all entered correctly, although
-  we might wish that additional information is known about an item to improve the record)
+  of information -- i.e., available information is all entered correctly, although
+  we might wish that additional information is known about an item to improve the record
 - This framework is intended to be scalable (it is written in the context of 1 record,
   but could apply across a collection, resource type, or an entire system)
 - Minimal criteria apply in all cases; suggested criteria do not rise to the level

doc/citations.rst
7 additions, 4 deletions
@@ -1,6 +1,11 @@
-=========
+=======
+Sources
+=======
+This (non-comprehensive) list of references includes a wide array of literature and other resources that may be helpful for organizations that are thinking about benchmarking projects, such as papers and articles related to metadata quality work and benchmarking processes within and outside the library sphere. We have also tried to include links to resources that may support specific goals that organizations may have for metadata quality or user interactions more generally.
+
+---------
 Citations
-=========
+---------
 These sources were referenced directly to compile benchmarks and supplemental information about metadata quality frameworks.
 
 - Bruce & Hillmann (2004). The Continuum of Metadata Quality: Defining, Expressing, Exploiting. https://www.ecommons.cornell.edu/handle/1813/7895
@@ -15,8 +20,6 @@ These sources were referenced directly to compile benchmarks and supplemental in
 ***************
 Other Resources
 ***************
-This (non-comprehensive) list of references includes a wide array of literature and other resources that may be helpful for organizations that are thinking about benchmarking projects, such as papers and articles related to metadata quality work and benchmarking processes within and outside the library sphere. We have also tried to include links to resources that could support specific goals that organizations may have for metadata quality or user interactions more generally.
-
 
 Sources Related to Benchmarking
 ===============================
