06. Understanding Usage Metrics
Key metrics explanation and interpretation
The statistics presented in the widget and LA Referencia platform are built from events that occur at three different levels: institutional repositories, national aggregators, and the regional aggregator.
Metrics Across Different Levels
These three levels are:
- Institutional repositories - Where the original content is hosted
- National aggregators - Which harvest metadata from repositories within a country
- Regional aggregator - LA Referencia, which integrates records from different Latin American countries and Spain
Understanding Each Metric
Below we explain each metric, noting how its meaning differs depending on the level at which the events are generated:
Visit
Refers to access to an object's detail page (for example, a publication record). This event can be recorded both in the repository and in the aggregators.
Download
Corresponds to the action of accessing the attached file (such as a PDF). This event occurs exclusively in the repository.
Outlink
This metric is recorded only in aggregators (national or regional) and measures how many times a user clicks on the link that leads from the object record in the aggregator to the item page in the original repository. It reflects interest in accessing the primary source from a broader search environment.
Conversion
- In repositories: A conversion occurs when a person who views an object's record performs a file download. That is, when a view turns into a download action.
- In aggregators: Conversion occurs when, after viewing an object, the person clicks on the link that redirects to the repository. In this case, the conversion is realized through the "Outlink" event.
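The two conversion definitions above can be sketched as a small calculation. This is a minimal illustration, not the LA Referencia implementation; the event counts and field names are hypothetical.

```python
# Hypothetical event counts; field names are illustrative only,
# not the LA Referencia data schema.
repo_events = {"visits": 1200, "downloads": 300}
aggregator_events = {"visits": 800, "outlinks": 120}

def conversion_rate(views: int, conversions: int) -> float:
    """Share of views that turned into a conversion event."""
    return conversions / views if views else 0.0

# In a repository, a conversion is a view that leads to a download.
repo_conv = conversion_rate(repo_events["visits"], repo_events["downloads"])

# In an aggregator, a conversion is a view that leads to an outlink click.
agg_conv = conversion_rate(aggregator_events["visits"], aggregator_events["outlinks"])

print(f"repository conversion: {repo_conv:.0%}")   # prints "repository conversion: 25%"
print(f"aggregator conversion: {agg_conv:.0%}")    # prints "aggregator conversion: 15%"
```

Note that the numerator event differs by level (downloads in repositories, outlinks in aggregators), while the denominator is the visit count in both cases.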
Robot Traffic Filtering
Automated traffic (robots) is filtered out using both standard COUNTER robot lists and behavior-based criteria, such as the number of actions per unit of time. Events suspected of being generated by bots are discarded, which may make these metrics lower than those observed in the repository itself.
However, since we apply the same method to all repositories, this cleaning provides a layer of normalization that improves comparability between institutions and platforms.
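The combination of list-based and behavior-based filtering described above can be sketched as follows. This is a simplified illustration under assumed thresholds: the user-agent fragments and the 30-actions-per-minute limit are hypothetical, not the actual COUNTER lists or LA Referencia's rules.

```python
from datetime import datetime, timedelta

# Illustrative values only; the real pipeline relies on the COUNTER
# robot lists and its own behavioral thresholds.
ROBOT_UA_FRAGMENTS = ("bot", "crawler", "spider")
MAX_ACTIONS_PER_MINUTE = 30

def is_robot(user_agent: str, timestamps: list[datetime]) -> bool:
    """Flag a session as automated via user-agent list or event rate."""
    # List-based check: known robot signatures in the user agent.
    ua = user_agent.lower()
    if any(frag in ua for frag in ROBOT_UA_FRAGMENTS):
        return True
    # Behavior-based check: too many actions inside any one-minute window.
    times = sorted(timestamps)
    window = timedelta(minutes=1)
    start = 0
    for end, t in enumerate(times):
        while t - times[start] > window:
            start += 1
        if end - start + 1 > MAX_ACTIONS_PER_MINUTE:
            return True
    return False
```

Sessions flagged by either check would have their events excluded before the visit, download, outlink, and conversion counts are computed, which is what makes the resulting figures comparable across institutions.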