The task of gathering and compiling the data for the vendor packages was assigned to KARM (Technical Services) at WVUL. The datasets include COUNTER JR1 (successful full-text article requests by month and journal, i.e., usage) and JR5 (successful full-text article requests by year of publication (YOP) and journal) usage reports, ILL statistics, and an overlap analysis. The process for gathering and compiling the data is described below.
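The compilation step can be sketched in a few lines. This is a minimal illustration, not the KARM workflow itself: the column layout and sample values below are hypothetical, and real COUNTER JR1 files carry additional columns (Publisher, ISSN, Reporting Period Total) that are omitted here.

```python
import csv
import io

# Hypothetical JR1 extract: journal title plus monthly full-text request counts.
# Real COUNTER JR1 exports include more columns; this keeps only what the
# compilation step needs.
JR1_SAMPLE = """\
Journal,Jan,Feb,Mar
Journal of Examples,10,12,8
Annals of Placeholders,5,0,7
"""

def compile_jr1(csv_text):
    """Return {journal: total full-text requests} summed across the month columns."""
    reader = csv.DictReader(io.StringIO(csv_text))
    totals = {}
    for row in reader:
        totals[row["Journal"]] = sum(int(v) for k, v in row.items() if k != "Journal")
    return totals

print(compile_jr1(JR1_SAMPLE))  # {'Journal of Examples': 30, 'Annals of Placeholders': 12}
```

In practice each vendor's JR1 file would be read from disk and the per-journal totals merged with cost, ILL, and overlap data in the shared spreadsheet.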
Follow link for example spreadsheet, https://drive.google.com/open?id=0B5sPtZ_YKSWGSlNtS25YeWZYV2c
Evaluating usage trends can help identify anomalies in the collected data that skew the usage results. During our analysis of the JR1 data, a bar chart of the usage was created, see below. A tutorial for creating a bar chart can be found here. Adding a trendline to this chart was illuminating: it revealed a marked increase in Vendor #3 usage beginning in 2015. Institutionally, there was no functional change that would account for that increase. We surmise that compromised user accounts skewed the data, a reminder to continually evaluate all data, including vendor-provided data.
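The same trendline check can be automated. The sketch below fits a least-squares line to pre-2015 yearly totals and flags later years that exceed the projection by a chosen tolerance (here 50%), the kind of jump compromised accounts can produce. The yearly figures are hypothetical, not WVU's actual Vendor #3 numbers.

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical annual JR1 totals for one vendor.
usage = {2012: 9800, 2013: 10100, 2014: 10500, 2015: 24800, 2016: 26200}

# Fit the trend on the years before the suspected anomaly.
baseline = [(y, u) for y, u in usage.items() if y < 2015]
slope, intercept = fit_line([y for y, _ in baseline], [u for _, u in baseline])

# Flag years more than 50% above the projected trend.
flagged = [y for y, u in usage.items()
           if y >= 2015 and u > 1.5 * (slope * y + intercept)]
print(flagged)  # [2015, 2016]
```

A flagged year is a prompt for investigation (e.g., checking authentication logs), not proof of misuse on its own.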
The data gathered can be reviewed in a variety of ways. Each institution will need to consider which method, or combination of criteria, is best suited to its needs. Remember that effective analysis should be replicable, affordable (in both cost and time), and ideally simple. Here are four methods and the criteria each could use:
A. Usage and Cost Driven
This is the method WVU used. After compiling the data as described above, apply the usage-driven selection criteria below.
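One common usage- and cost-driven criterion is cost per use compared against the cost of filling requests through ILL. The sketch below is an illustration only: the subscription costs, usage totals, and ILL cost threshold are all hypothetical, not WVU's selection criteria.

```python
# Hypothetical subscription costs and compiled JR1 usage totals per title.
costs = {"Journal of Examples": 1200.0, "Annals of Placeholders": 900.0}
uses = {"Journal of Examples": 300, "Annals of Placeholders": 15}

# Assumed average cost of borrowing one article via ILL.
ILL_COST = 25.0

# Titles whose cost per use exceeds the ILL threshold become review candidates.
candidates = [title for title in costs
              if costs[title] / max(uses[title], 1) > ILL_COST]
print(candidates)  # ['Annals of Placeholders']
```

Here the first title costs $4.00 per use and is retained; the second costs $60.00 per use, well above the assumed ILL cost, so it is flagged for cancellation review.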
B. Weighted Ranking (weighted arithmetic mean):
An analysis method in which some factors contribute more than others to the average rank.
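A weighted arithmetic mean of per-factor ranks can be computed as below. The factors and weights shown are hypothetical; each institution would choose its own and ensure the weights sum to 1.

```python
# Hypothetical factors and locally chosen weights (must sum to 1).
weights = {"usage": 0.5, "cost_per_use": 0.3, "faculty_requests": 0.2}

def weighted_rank(ranks):
    """Weighted arithmetic mean of per-factor ranks (1 = best)."""
    return sum(weights[f] * r for f, r in ranks.items())

# Hypothetical ranks for one title on each factor.
ranks = {"usage": 2, "cost_per_use": 1, "faculty_requests": 4}
print(round(weighted_rank(ranks), 2))  # 2.1
```

Titles are then sorted by this weighted rank, so a factor with weight 0.5 moves a title twice as far as one with weight 0.25.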
C. Journal Ranking Metrics:
Criteria widely used to evaluate an academic journal's impact and quality.
D. Institutional Publication and Citation Practices
Criteria used to correlate faculty publications and citations with institutional usage.
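One simple way to examine that correlation is Pearson's r between per-journal faculty citation counts and the compiled usage totals. The data below are hypothetical, supplied only to make the sketch runnable.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical counts for four journals: faculty citations vs. JR1 usage totals.
citations = [40, 12, 3, 25]
usage = [900, 310, 80, 600]

r = pearson(citations, usage)
print(round(r, 2))  # a value near 1 indicates strong positive correlation
```

A strong positive correlation suggests local usage tracks faculty research behavior; a weak one may indicate usage driven by other populations (e.g., students) that citation data alone would miss.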