Merged
19 changes: 15 additions & 4 deletions doc/benchmarks-full.rst
@@ -11,10 +11,21 @@ In this model, the benchmark and metrics set the standard (i.e., the criteria th

 - General benchmarks usage:
 
-  - Each criterion is intended to be “system agnostic” but some may not apply to every situation (e.g., local field requirements)
-  - Criteria are binary -- i.e., the set being evaluated must meet all points or it does not meet the benchmarking standard for that level
-  - These benchmarks focus solely on the quality of metadata entry, not the quality of information (i.e., available information is all entered correctly, although we might wish that additional information is known about an item to improve the record)
-  - This framework is intended to be scalable (it is written in the context of 1 record, but could apply across a collection, resource type, or an entire system)
+  - Each criterion is intended to be “system agnostic” but some may not apply to
+    every situation (e.g., local field requirements)
+  - Criteria are binary -- i.e., the set being evaluated must meet all points or
+    it does not meet the benchmarking standard
+  - Benchmarks are cumulative -- i.e., records must meet all the criteria at the chosen
+    level and the lower levels, if relevant
+  - These benchmarks focus solely on the quality of metadata entry, not the quality
+    of information -- i.e., available information is all entered correctly, although
+    we might wish that additional information is known about an item to improve the record
+  - This framework is intended to be scalable (it is written in the context of 1 record,
+    but could apply across a collection, resource type, or an entire system)
+  - Minimal criteria apply in all cases; suggested criteria do not rise to the level
+    of “absolute minimum” but are suggested as priorities for "better-than-minimal"
+    based on our research and experience; ideal criteria tend to be more subjective and may not apply in every situation




8 changes: 5 additions & 3 deletions doc/benchmarks-summary.rst
@@ -10,10 +10,12 @@ Usage:
 - Each criterion is intended to be “system agnostic” but some may not apply to
   every situation (e.g., local field requirements)
 - Criteria are binary -- i.e., the set being evaluated must meet all points or
-  it does not meet the benchmarking standard for that level
+  it does not meet the benchmarking standard
+- Benchmarks are cumulative -- i.e., records must meet all the criteria at the chosen
+  level and the lower levels, if relevant
 - These benchmarks focus solely on the quality of metadata entry, not the quality
-  of information (i.e., available information is all entered correctly, although
-  we might wish that additional information is known about an item to improve the record)
+  of information -- i.e., available information is all entered correctly, although
+  we might wish that additional information is known about an item to improve the record
 - This framework is intended to be scalable (it is written in the context of 1 record,
   but could apply across a collection, resource type, or an entire system)
 - Minimal criteria apply in all cases; suggested criteria do not rise to the level
11 changes: 7 additions & 4 deletions doc/citations.rst
@@ -1,6 +1,11 @@
-=========
+=======
+Sources
+=======
+This (non-comprehensive) list of references includes a wide array of literature and other resources that may be helpful for organizations that are thinking about benchmarking projects, such as papers and articles related to metadata quality work and benchmarking processes within and outside the library sphere. We have also tried to include links to resources that may support specific goals that organizations may have for metadata quality or user interactions more generally.
+
+---------
 Citations
-=========
+---------
 These sources were referenced directly to compile benchmarks and supplemental information about metadata quality frameworks.
 
 - Bruce & Hillmann (2004). The Continuum of Metadata Quality: Defining, Expressing, Exploiting. https://www.ecommons.cornell.edu/handle/1813/7895
@@ -15,8 +20,6 @@ These sources were referenced directly to compile benchmarks and supplemental in
 ***************
 Other Resources
 ***************
-This (non-comprehensive) list of references includes a wide array of literature and other resources that may be helpful for organizations that are thinking about benchmarking projects, such as papers and articles related to metadata quality work and benchmarking processes within and outside the library sphere. We have also tried to include links to resources that could support specific goals that organizations may have for metadata quality or user interactions more generally.
-
 
 Sources Related to Benchmarking
 ===============================