Total Results and Analyses

Total Results and Analyses of Seven Evaluations

This section discusses the total results and analyses of the seven evaluations, beginning with a summary of each. To combine the seven evaluations, the average (mean) is chosen as the most effective way of summarizing the center of the data.

Summary of Seven Evaluations

The Content, Usability, and Performance Evaluation (CUPE) Criteria were proposed to identify well-designed digital libraries (WDDLs) among existing digital libraries in fifteen subject areas. The well-designed digital libraries in these subject areas will make up the proposed International Open Public Digital Library (IOPDL).

With the CUPE criteria, three main evaluations were performed using seven sub-evaluation criteria. First, to find candidates for well-designed digital libraries, Professor McDonough and Boaz Sunyoung Jin evaluated the content quality of existing digital libraries in fifteen subject areas. We investigated whether each digital library's content is accurate in both wide and deep scope, whether it has authority, and whether it provides subjective satisfaction. Through the content evaluation, sixty-three digital libraries were recommended as candidates for well-designed digital libraries. One appeared in two subject domains, so sixty-two distinct digital libraries became candidates for WDDLs.

Next, the sixty-two candidate digital libraries were evaluated again with four usability evaluations: an accessibility evaluation using seven accessibility evaluation tools, and interface usability evaluations using the Convenience/Ease of Use, Interface Consistency, and Visible Design and Aesthetic Appeal evaluation criteria. The interface usability evaluations were conducted with the heuristic evaluation method.

Lastly, to evaluate the performance of the sixty-two candidate digital libraries, two computer programs were executed to measure link and search response times and the relevance of the results obtained.

With the results of the seven evaluations in hand, this section explains how a total average is calculated for each candidate digital library from its evaluation scores. The final results are then described and analyzed using the total averages of the seven evaluations by the CUPE criteria for the sixty-two candidate digital libraries.

Methodology to Combine Seven Evaluations

Each evaluation is scored on a 5-point scale according to its own scoring method. The scores of all evaluations except the content evaluation must be combined into a single meaningful central value for each candidate, and the combining method should be able to distinguish well-designed digital libraries (WDDLs).

There are several methods for summarizing data, such as the mean, median, and mode (Weisberg, 1992). Generally, the most effective way of summarizing the center of data is to average the values of the variable. Thus, to obtain a unique value for each digital library, the mean is chosen; it fits our purpose of finding WDDLs. The weakness of the mean is its sensitivity to extreme values, but on a 5-point scale there are no extreme values that could distort the results.

Final Average

To combine the scores of the six evaluations under the CUPE criteria, an average is calculated for each digital library; this is called the 'final average.' The content evaluation score is excluded here, because that evaluation was used to recommend the sixty-two candidate digital libraries. Thus, the scores of six evaluations are combined into a final average.

Final Average of each digital library = (∑Ai / 7 + ∑Ui + ∑Pi) / n

where Ai = a score from one accessibility evaluation tool, Ui = a score from the Convenience, Consistency, or Visual Design evaluation, Pi = a score from the Response Time or Relevance evaluation, and n = the number of evaluations, that is, 6.

Equation 7: Final Average for Each Digital Library
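Equation 7 can be sketched in a few lines of Python. The function below is a minimal illustration, not the authors' actual scoring program, and the scores passed to it are hypothetical values on the 5-point scale.

```python
def final_average(accessibility, usability, performance):
    """Equation 7: combine one library's scores into a final average.

    accessibility: 7 scores, one per accessibility evaluation tool (Ai)
    usability:     3 scores - convenience, consistency, visual design (Ui)
    performance:   2 scores - response time, relevance (Pi)
    """
    a = sum(accessibility) / 7          # the 7 tool scores count as one evaluation
    total = a + sum(usability) + sum(performance)
    return total / 6                    # n = 6 evaluations

# Hypothetical scores for one candidate library:
print(final_average([4, 3, 5, 4, 4, 3, 5], [4.0, 3.5, 4.5], [3.0, 4.0]))  # ≈ 3.83
```

Averaging the seven accessibility-tool scores first keeps accessibility from outweighing the other five evaluations, which matches the ∑Ai / 7 term in the equation.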

Total Average

With the final averages of the candidate digital libraries, a total average is calculated from all sixty-two final averages, as Equation 8 shows. This is called the 'total average.'

Total Average of all candidate digital libraries = ∑FAi / n

where FAi = the final average of a candidate digital library, and n = the total number of candidates, that is, 62.

Equation 8: Total Average of All Candidate Digital Libraries

Final Averages of Well-Designed Digital Libraries

Then, the digital libraries whose final averages are higher than the total average of all sixty-two candidate digital libraries are defined as well-designed digital libraries, as Equation 9 shows.

Final Average of a Well-Designed Digital Library ≥ Total Average of all candidate DLs

Equation 9: Final Averages of Well-Designed Digital Libraries
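The selection rule of Equation 9 can be sketched as a simple filter: compute the total average as a threshold, then keep the candidates at or above it. The library names and scores below are hypothetical.

```python
def select_wddls(scores):
    """Equation 9: keep candidates whose final average is at least
    the total average of all candidates.

    scores: dict mapping library name -> final average
    """
    threshold = sum(scores.values()) / len(scores)   # Equation 8
    return sorted(name for name, fa in scores.items() if fa >= threshold)

# Hypothetical candidates; threshold = (4.45 + 3.10 + 2.80) / 3 = 3.45
candidates = {"Library A": 4.45, "Library B": 3.10, "Library C": 2.80}
print(select_wddls(candidates))  # ['Library A']
```

Note that each candidate's own score contributes to the threshold, so roughly half the candidates typically land above it, which matches the thirty-four-of-sixty-two outcome reported below.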

Total Combined Results of Seven Evaluations

As a result, the total average of all candidate digital libraries is 3.2003 on the 5-point scale, higher than half of 5 points. This suggests that the candidate digital libraries, already recommended by the content evaluation in their subject domains, generally provide adequate usability and performance quality. Ultimately, thirty-four of the sixty-two digital libraries show final averages higher than the total average of 3.2003. That is, about half of the candidate digital libraries turn out to be well-designed digital libraries in their subject domains.

  1. NASA’s Visible Earth (4.452381) in the Geography subject area shows the highest final average across the six evaluations with the CUPE criteria.
  2. Census Atlas of the United States (4.263889) in the Geography subject area.
  3. U.S. Department of Health & Human Services (4.261905) in the Medicine subject domain.
  4. GPO Access (4.142857) in Political Science and Law.
  5. National Science Foundation (4.138889) in Science.
Scores of All Evaluations

 [gview file="https://www.iopdl.org/files/AllScoresofEvaluations.pdf" save="1"]

Total Analyses of Seven Evaluations

Analysis Based on Percentages of the Final Averages

Figure 1 shows the percentages of the final averages of the candidate digital libraries. In the results, 53.968% of the digital libraries earn final averages above 3.2003; these turn out to be well-designed digital libraries, because their final averages exceed the total average of 3.2003. Another 15.87% of them (10 digital libraries) earn final averages between 3.0 and 3.2003, which is not enough to qualify as well-designed digital libraries.


Figure 1. Percentages of the final average of all candidate digital libraries

Analysis Based on Averages of Subject Areas

According to the results based on the averages of the subject areas,

  1. the average of the Geography subject area is the highest (4.1316), followed by
  2. Medicine (3.5571),
  3. Military Science (3.496),
  4. Political Science and Law (3.4524),
  5. Science (3.4454),
  6. Agriculture (3.3849), and
  7. Art (3.3577).

Ten subject areas, 66.67% of the fifteen, have higher averages than the total average. On top of that, the averages of all fifteen subject areas are higher than 2 points out of 5. The Social Science subject area is the lowest, with an average of 2.2341. The results show that the qualities of the sixty-two candidate digital libraries in the fifteen subject areas are generally good in content, usability, and performance.

In particular, in the Geography subject domain, all three candidates turn out to be WDDLs: NASA’s Visible Earth, Census Atlas of the United States, and David Rumsey Map Collection. Moreover, of the seven candidate digital libraries in the History of the Americas subject area, six turn out to be well-designed digital libraries; the exception is Digital Past, which has problems with access to its website.


Figure 2. Comparing the Average based on Subject Domains

Total Analyses

Overall, the U.S. has strongly specialized digital libraries in the Geography (4.1316), Medicine (3.5571), Military Science (3.496), Political Science and Law (3.4524), and Science (3.4454) subject areas.

However, the prototype evaluations were performed mainly on U.S. digital libraries, with only a few from other countries, such as The National Archives, Education Resources UK, the British Library Online Gallery, the International Children’s Digital Library, and the Chinese Philosophical Etext Archive. For the International Open Public Digital Library (IOPDL), we should include and evaluate more national digital libraries from many countries.

*More details are in the paper, Chapter VII, Total Results of Seven Evaluations. This website and the paper were developed by the same person.
