There are three traditional ways in which information gets filtered. First, if it is written and/or issued by an authoritative source such as the federal government or a reliable organization, it is generally accepted at face value as having validity. Second, if it is authenticated as part of an editorial or peer review process by a publisher, it is generally accepted as reliable. Third, if it is evaluated by experts, reviewers, or subject specialists/librarians as part of collection development, it is generally accepted as authoritative. Traditional bibliographic instruction by librarians emphasizes evaluating information. Some may see a certain redundancy in this, given that libraries have already provided filtering for information available in the library. As part of collection development, librarians, subject specialists, and others (e.g., faculty in academia) expend a lot of energy verifying the validity and authenticity of materials through reviews and in-hand examination. However, this kind of filtering works better on certain forms of information -- namely, books -- than on others. In the case of journals, for example, end-user evaluation becomes more important, since libraries provide access to indexes of journals that they might not collect.
Evaluative quality control has been applied to the print-on-paper world for hundreds of years, and it is increasingly recognized as relevant for the electronic world of the Internet. The federal government puts up information through several access points (e.g., THOMAS, at http://thomas.loc.gov/; GPO Access, at http://thorplus.lib.purdue.edu/gpo/). That electronic information is now regarded as on a par with its print counterpart. Large, established publishers tend to put up only catalog information on the Internet (e.g., Penguin Books Ltd., at http://www.bookshop.co.uk/PENGUIN/). Once stronger, more economical pay-per-use models take hold, such publishers' online information will no doubt also be accepted as just as reliable as its print counterpart.
In our Information Strategies course at the Purdue Libraries, we seek to adapt traditional information evaluation techniques to the Internet environment. We do so by emphasizing the overall nature of information on the Internet and how that information makes it online. We compare a traditional process of finding and evaluating information in an off-line environment to the same process in an online environment, in order to help shape new mental models.
For instance, we might first describe a traditional print publication process: Research starts out as a lab report, evolves into a conference paper, and eventually becomes an article in a peer-reviewed journal, which is subsequently cited in an index for public access. We could then contrast this with the process by which information makes it online directly through Usenet groups, listservs, and Web pages. This comparison points out the lack of evaluative input for much of the information on the Internet. In this sense, information on the Internet is comparable to that found on grocery-store bulletin boards or in hobbyist association meetings. It may be important, but it may not be reliable or authoritative. By both contrasting and comparing, we reinforce the concept of unfiltered information and the need for evaluation.
To understand why searching is not necessarily evaluating, here's some information to consider about search engines. (Don't worry, the information is authoritative. I'll vouch for it!) A search engine comprises three separate components. A selecting function is responsible for identifying and gathering Web pages or links. A compiling function is responsible for storing this information and making it accessible. A searching function is responsible for determining access points for retrieving the Web pages and links. For more on this, check out the Web site at http://www.hamline.edu/library/links/comparisons.html, which discusses and compares various search engines. Or look at the Computers in Libraries 1996 conference presentation by Hope Tillman of Babson College entitled Evaluating Quality on the Net (see http://www.tiac.net/users/hope/findqual.html).
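To make those three functions concrete, here is a minimal sketch in Python. It is my own illustration, not the internals of any actual search engine: the class name, the submitted pages, and the simple word index are all assumptions made for the example.

# A minimal, hypothetical sketch of the three search-engine functions
# described above. All names are illustrative, not taken from any real
# search engine's implementation.

class ToySearchEngine:
    def __init__(self):
        self.index = {}  # compiled store: word -> set of URLs

    def select(self, pages):
        """Selecting function: identify and gather Web pages or links.
        Here 'gathering' is simply accepting submitted (url, text) pairs."""
        return [(url, text) for url, text in pages if text.strip()]

    def compile(self, selected):
        """Compiling function: store the gathered pages so they are
        accessible, using a simple inverted index from word to URLs."""
        for url, text in selected:
            for word in text.lower().split():
                self.index.setdefault(word, set()).add(url)

    def search(self, query):
        """Searching function: determine access points -- here, the URLs
        of pages containing every word of the query."""
        words = query.lower().split()
        if not words:
            return set()
        results = self.index.get(words[0], set()).copy()
        for word in words[1:]:
            results &= self.index.get(word, set())
        return results

engine = ToySearchEngine()
pages = [("http://example.edu/a", "evaluating internet information"),
         ("http://example.edu/b", "internet search engines compared")]
engine.compile(engine.select(pages))
print(engine.search("internet"))             # both URLs
print(engine.search("evaluating internet"))  # only the first URL

Notice that nothing in the sketch judges the quality of a page; the engine simply stores and retrieves whatever the selecting function hands it.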
The selecting function is achieved through one of two basic methods. One is to have people register or submit Web pages or links, and/or to have people search them out for inclusion. This method tends to be highly selective and thus somewhat evaluative. Yahoo! (http://www.yahoo.com/) is an example of such a database. The other is to use software that automatically sends requests to servers to find any or all Web pages and links maintained there. This method is used by Lycos (http://lycos11.lycos.cs.cmu.edu/). It tends to be comprehensive but is virtually nonevaluative. The difference is comparable to that between a library collection and a systematic survey of bookstores, newsstands, information counters, and file cabinets of public institutions. Obviously, evaluation techniques are especially needed for the latter. Likewise, the compiling function for a search engine's database is achieved through one of two basic methods. Again, one uses human intervention and the other simply automates the function. Search engines such as Magellan (http://www.mckinley.com/) employ evaluators to rate various pages against criteria for reliability and usefulness. Lycos, on the other hand, strives for comprehensiveness in its attempt to index the entire Internet, and it seeks to add any and all information it comes across.
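As a rough illustration of the two selecting methods, consider the following Python sketch. It is a simplification under stated assumptions: the reviewer function stands in for human judgment, and the link graph is a simulated dictionary rather than live requests to servers.

# A hedged sketch of the two selecting methods described above: human
# submission (selective, somewhat evaluative) versus automated crawling
# (comprehensive, virtually nonevaluative).

from collections import deque

def select_by_submission(submissions, reviewer):
    """Directory-style selection: people submit pages, and each one
    passes through a human review step before inclusion."""
    return [url for url in submissions if reviewer(url)]

def select_by_crawling(link_graph, start_url):
    """Crawler-style selection: follow every link reachable from a
    starting point, including anything found, with no evaluation."""
    seen, queue = set(), deque([start_url])
    while queue:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        queue.extend(link_graph.get(url, []))
    return seen

# Simulated data -- illustrative URLs only.
graph = {"http://a.example/": ["http://b.example/", "http://c.example/"],
         "http://b.example/": ["http://c.example/"]}
print(select_by_submission(["http://a.example/", "http://spam.example/"],
                           reviewer=lambda url: "spam" not in url))
print(sorted(select_by_crawling(graph, "http://a.example/")))

The first call admits only what the reviewer approves; the second sweeps up everything reachable, which is precisely why its results demand end-user evaluation.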
The need for evaluative techniques is evident in several areas. If you use a search engine that does not assess information included in its database, it is entirely up to you to do so. Likewise, if you find information simply by browsing or following subject categories, you'll need to evaluate it. And in many cases, you may still need to apply evaluation techniques to ranked or rated items found in databases such as Yahoo! or Magellan.
The primary approach to objectively verifying Internet information is similar to that used to review print materials. An evaluation checklist derived from The Savvy Student's Guide to Library Research mentioned above is available at http://thorplus.lib.purdue.edu/~techman/eval.html. For the most part, it emphasizes checking: check reliability and credibility by verifying the author, his or her affiliation, the date, and the source of publication; check perspective by assessing biases presented in the information or its source; and check purpose by determining the scope, coverage, and level of the information.
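One way to picture the checklist is as a structured worksheet. The Python sketch below is my own illustration, not the checklist document itself; the field names simply mirror the three checks described above.

# A minimal sketch of the evaluation checklist as a worksheet for a
# single source. Field names mirror the checks; they are assumptions
# made for this example, not the published checklist's wording.

from dataclasses import dataclass, field

@dataclass
class EvaluationChecklist:
    url: str
    # Reliability and credibility: author, affiliation, date, source.
    author: str = ""
    affiliation: str = ""
    date: str = ""
    publication_source: str = ""
    # Perspective: biases in the information or its source.
    noted_biases: list = field(default_factory=list)
    # Purpose: scope, coverage, and level.
    scope: str = ""
    coverage: str = ""
    level: str = ""

    def open_questions(self):
        """List the checks that are still unanswered for this source."""
        fields = {"author": self.author, "affiliation": self.affiliation,
                  "date": self.date, "source": self.publication_source,
                  "scope": self.scope, "coverage": self.coverage,
                  "level": self.level}
        return [name for name, value in fields.items() if not value]

check = EvaluationChecklist(url="http://example.edu/report.html",
                            author="J. Smith", date="1996-04-01")
print(check.open_questions())  # the checks still to be verified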
Even more specific to the Internet, there are certain places to look and points to touch on in evaluating Web information. Ann Scholz, in the guideline she put together for Purdue Libraries as part of the Information Strategies course, emphasizes first checking a Web page for its critical elements -- the header, body, and footer -- to determine the author and source. In addition, consider the following:
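As one concrete way of pulling out those critical elements, here is a minimal Python sketch using only the standard library. It is my own illustration, not Scholz's guideline: it assumes a page whose header carries a title element and whose footer carries an address element naming the author, which many pages will not have.

# Hypothetical sketch: collect the <title> (header) and <address> (a
# common footer element for author contact information) from a page so
# the author and source can be identified.

from html.parser import HTMLParser

class CriticalElementsParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.title = ""
        self.address = ""
        self._current = None

    def handle_starttag(self, tag, attrs):
        if tag in ("title", "address"):
            self._current = tag

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

    def handle_data(self, data):
        if self._current == "title":
            self.title += data
        elif self._current == "address":
            self.address += data

# Illustrative page text; a real check would fetch the actual page.
page = """<html><head><title>Evaluating Sources</title></head>
<body><p>Body text...</p>
<address>Maintained by J. Smith, Purdue Libraries. Revised 1996.</address>
</body></html>"""

parser = CriticalElementsParser()
parser.feed(page)
print(parser.title)    # header element: the page title
print(parser.address)  # footer element: author and revision date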
In adapting traditional bibliographic instruction to the Internet, we at the Purdue Libraries still rely on a basic model emphasizing and integrating topic definition, information seeking, and evaluation. This model applies to any information-gathering activity, but it becomes crucial when applied to a complex and constantly changing environment like the Internet.