Meeting Notes & Documents

Workshop on Quality of the DLESE Broad Collection

Lamont-Doherty Earth Observatory, Palisades, NY
June 30 – July 1, 2003.

Briefing Materials:

Workshop Part A: Information Gathering


Michael Mayhew, NSF: Charge to the Workshop.

Mike expressed the importance of the DLESE Broad Collection, but felt that quality is not being addressed sufficiently. DLESE needs to add value, and at the moment value added is limited. He said that true quality control is absent and that the filters being used to control acceptance into the broad collection were coarse. Mike referred to the results of the focus group investigation conducted by Tamara Sumner et al. He said the consensus of the teachers was that the quality of a digital collection was indicated by its scientific accuracy.

He suggested that one way to add value to broad collection resources would be to use annotations. In referring to Dave Mogk's letter to the workshop, Mike suggested that Dave promoted quality but was reluctant to have anyone else tell him what quality was. Mike suggested that one way to address the issue of how to accommodate a wide spectrum of views on quality would be to divide the broad collection into various sections.

In conclusion, Mike's view is that DLESE should contain only well-defined, high quality resources, and that “less is more.” DLESE has put too much emphasis on "more is better," i.e. increasing number of resources.


Kim Kastens, Workshop Chair, Framing of the Question and History of the Discussion.

Kim indicated two basic issues to be dealt with during the workshop:

  1. Criteria for accession of resources to the broad collection.
  2. Procedures needed to implement those criteria.

The workshop will be structured in three parts: (1) Information gathering, (2) Discussion and recommendations about Criteria, and (3) Discussion and Recommendations about Procedures.

Concerning criteria, Kim first quoted the two existing criteria for a resource to be admitted to the Broad Collection: (1) The resource is relevant to Earth System Education and (2) The resource works, i.e. it has no conspicuous bugs. She then presented a list of other criteria which have been suggested for the DLESE Broad Collection at various meetings over the years. The workshop must decide which of these criteria, or others, should be applied to accessioning decisions for the DLESE Broad Collection.

Concerning procedures, Kim posed the following issues for the workshop: What processes should be used to identify "problematic" resources? What should be done with problematic resources when they are found? Should identification of problematic resources be carried out by the community or by paid staff? When problematic resources are found, should they be excluded or annotated? Summarizing these latter two questions into a 2x2 matrix, she emphasized that these options are not mutually exclusive, and that different criteria may be best implemented via different procedures.

Kim reviewed the history of the discussion of quality or selectivity of the Broad Collection. This has been a central issue of both the vision and the operations of DLESE from its founding at the Coolfont Conference in August 1999 up until the present day. The Coolfont conferees and the follow-up DLESE Community Plan laid out a vision of a two-sectioned collection, with a "Reviewed" Collection of highest-quality resources which have passed seven stringent evaluation criteria, and an "Unreviewed" Collection, which would provide a wide range of resources and would serve as a forum for improvement of resources through feedback from resource users to creators. Five subsequent meetings of the Steering Committee or Collections Standing Committee have grappled with the question of quality control or selectivity of the Broad Collection. Consensus has been elusive. These debates resulted in the adoption of the current two selection criteria.


John Saylor, NSDL Core Services: NSDL's Collection Philosophy & Policy.

(Editor's Note: NSDL (www.nsdl.org) is the "National Science Digital Library," formerly the National SMETE Digital Library, where SMETE stood for Science, Math, Engineering and Technology Education).

John described the relationship between NSDL and DLESE. NSDL includes all science resources, while DLESE is one of numerous specialized collections partnering with NSDL. While NSDL would be akin to the DLESE Broad Collection with respect to accession, it would employ a "usefulness" criterion for assessing the suitability of a resource.

Kim asked how NSDL intended to deal with “bad science.” John replied that NSDL had not dealt with that issue. Karon suggested that because NSDL is accessioning only whole collections, some of the issues are not the same as with DLESE. John replied that NSDL is not a gatekeeper, but rather it seeks tools to identify the usefulness of a resource.

John suggested that the DLESE Reviewed Collection is similar to a scientific journal, because each requires rigorous review of its entries. On the other hand, NSDL is more like DLESE's Broad Collection.

NSDL plans to move towards annotating rather than excluding resources.

In discussion, participants noted that an annotation process could be problematic in at least two ways:

  1. Annotating someone's resource with a label that could be considered derogatory (e.g. "pseudoscience," "controversial") could lead to controversy, even lawsuits.
  2. Tammy pointed out that annotations do not migrate with the metadata; if a library such as NSDL harvests DLESE's metadata, they do not have to harvest or display annotations (Editor's note: this is using the word "annotation" in the technical sense of an annotation service as has been discussed for NSDL, not merely as a synonym for "label.")

John pointed out a parallel between Dave Mogk's position of minimal entrance criteria for the DLESE broad collection and the American Library Association's Library Bill of Rights.


Suzanne Larsen, Collections Committee Chair: Existing DLESE Policies.

Suzanne Larsen reviewed the terms and limitations of DLESE's existing policies: the Collection Scope and Policy, the Deaccessioning Policy, and the Interim Collections Accession Policy.

Responding to previous discussion of how to assess the quality of a resource seeking admission to the DLESE broad collection, Suzanne suggested that quality seems to be implicit, between the lines, in the policies, rather than articulated directly.

Because of the dynamic nature of a digital library, there needs to be a deaccessioning policy. As yet, DLESE has no mechanism for the community to request the removal of a resource from the collection. The DPC does not seek out resources for deaccessioning. If the DPC is not certain how to deal with a questionable resource, it will refer that resource to the Collections Committee.

The Interim Collections Accession Policy leaves room for questions about who will do the reviewing for quality and how that review will be done.


John Snow, Steering Committee (by phone), "Holding-tank" procedure in an on-line history group.

(Editor's note: at the fall 2000 Steering Committee meeting, when the discussion seemed deadlocked over how to apply what were then being referred to as quality "filters" at the gateway to the DLESE Broad Collection, SC member John Snow recounted a system used to vet resources for an on-line history group to which he belongs, and suggested that this system might work for the DLESE Broad Collection. The SC liked the idea and directed DPC and the Collections Committee to flesh out a plan to apply this system to DLESE. A plan was developed but not implemented because of perceived cost and logistics obstacles.)

John described an on-line group called the Great War Society, a group of about 1000 members, which web-posts materials pertaining to World War I. When a new item is submitted for inclusion in the Great War Society collection, it goes into a provisional status for 30 days. Members are invited to comment on the item, and if there are no objections within the provisional period, the item is added to the collection.

In John's opinion, this system works well. John stressed two aspects that may be important for DLESE to consider. First, this system builds a sense of involvement within the community, which fits with what DLESE is trying to do. Second, when you join the organization you have to indicate your interests on a list of topics, and you get an automatic email notifying you when new materials on your topic of interest have been submitted into the provisional status area. DLESE doesn't have such a system (although it is planned). John thinks these auto-emails are essential for ensuring that people actually look at the newly-submitted items; if it were necessary to proactively go to a website and look for what's new, John thinks he and others wouldn't do it.
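The vetting scheme John described is simple enough to sketch as code. The sketch below is purely illustrative (class and field names are invented, and neither the Great War Society nor DLESE runs this software); it captures the 30-day objection-free window and the interest-based auto-notification:

```python
from datetime import date, timedelta

PROVISIONAL_PERIOD = timedelta(days=30)  # the comment window John described

class Submission:
    """A newly submitted item in provisional ("holding tank") status."""

    def __init__(self, title, topics, submitted_on):
        self.title = title
        self.topics = topics            # declared topics, for auto-notification
        self.submitted_on = submitted_on
        self.objections = []            # comments objecting to accession

    def interested_members(self, members):
        """Members whose declared interests overlap this item's topics;
        these are the people who would get the automatic email."""
        return [m for m in members if set(m["interests"]) & set(self.topics)]

    def status(self, today):
        """An item is accessioned after an objection-free provisional period."""
        if self.objections:
            return "held for review"
        if today - self.submitted_on >= PROVISIONAL_PERIOD:
            return "accessioned"
        return "provisional"

# An objection-free item clears the holding tank after 30 days.
item = Submission("Verdun diaries", ["Western Front"], date(2003, 6, 1))
print(item.status(date(2003, 6, 15)))  # provisional
print(item.status(date(2003, 7, 1)))   # accessioned
```

The design point John stressed is the notification push: `interested_members` models routing new submissions to people who opted in, rather than waiting for them to visit the site and look for what's new.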

Q: How many comments do you get? A: between zero and ten per item.

Tammy said that she is an editor of the Journal of Interactive Computing, which also has a 30-day comment period. They have never, to her knowledge, rejected an article on the basis of something that was said during the comment period.


Kim Kastens: Results from Survey by Academic Career Recognition Task Force.

(Editor's Note: John Butler, now deceased, was a founding member of the DLESE Collections Committee. Based on his experience as a college Dean, he often stressed how important (and uncommon) it was that the creators of excellent educational resources be given career recognition for the effort and creativity that go into creating high-quality digital learning materials. He urged DLESE to play a pioneering role in establishing reward and recognition mechanisms for creators of excellent resources.)

Very early in the history of DLESE, before there even was a searchable collection, John Butler convened an Academic/Career Recognition Task Force, which conducted a web survey to find out what it would take for digital educational resources to be recognized as being of sufficient quality and importance to merit academic career recognition.

Surveys were received from potential resource users, potential resource producers/creators, and department chairs/supervisors. The survey was aimed at college/university personnel, not K-12.

Users in all categories agreed that the selection criterion valued most was scientific accuracy. They also valued ease of use and pedagogical effectiveness.


Tammy Sumner, K-12 educators' perceptions of quality, based on focus groups.

Tammy reviewed the results from focus group discussions about the perceptions of potential DLESE users concerning the nature of the broad collection. The focus groups consisted of a cross section of potential users, both K-12 and college faculty, most of whom were new to DLESE.

The participants were asked for advice on "policies," "priorities" and "best practices" that they thought ought to apply to the Broad Collection. Tammy gave a working definition of "priorities" as things you should spend time and money on. They were to do this by looking at specific web sites, selected so as to raise issues of quality, such as the Kentucky Coal Organization and Global Warming.org.

Tammy reported that:

  1. Focus group participants tended to agree about what's good, but not as much about what's bad.
  2. There was fairly close agreement within those representing grades 6-16, with emphasis on scientific accuracy. But K-5ers expressed different concerns.
  3. Focus group participants got very emotional on the topic of biased resources, especially infuriated about resources that were seen as advocacy masquerading as science.
  4. Underlying rationale for wanting DLESE to mark or exclude biased resources had to do with credibility of DLESE: "We need to know that you know that this biased resource is not OK."
  5. Tolerance for advertising ranged from none to low. Problem with advertising was expressed as distracting students from task.
  6. Metadata should not just repeat what the resource developer says, because resource developers are often wrong regarding grade level and standards alignment. Information about a resource provided by DLESE should be better than that provided by the resource creator/developer because of DLESE's association with NSF.
  7. Even highly rated sites had usability issues; students would get lost or bored.
  8. Grade designations provided by developers are often too broad or just inaccurate.

Recommendations resulting from the focus group discussions:

  1. Shift from growing the collection quickly to growing it more selectively.
  2. Tightly couple collections development and library use. To do this, DLESE needs to broker a relationship between a developer and partners who would evaluate use of developed resource and thus provide feedback to developer.
  3. Identifying problematic resources is as important as identifying the good resources.

Tammy's presentation generated lively, multi-faceted discussion intertwined with and following the presentation. Points and questions raised by workshop participants include:

  1. Not sure we can extend the results of these focus groups to the entire community of potential users.
  2. Resources labeled as K-2 in the DLESE collection are sometimes not actually age-appropriate for that group.
  3. Shouldn't read too much into K-5ers not stressing scientific accuracy; they probably thought that was a given.
  4. DLESE's reputation could be jeopardized by an influential individual's reaction to the presence of a particular resource in the Broad Collection.
  5. It's important to avoid distractions such as advertising or links to related sites, which can take students off task.
  6. A resource has 5 minutes or less to engage people; then they leave and don't come back.
  7. Want to see labeling of "biased" resources.
  8. Don't need replication of traditional material.
  9. Web resource design affects usability.
  10. Remember that not all users are students or teachers who need finished products; don't lose the idea of using the broad collection as a "test bed" for resources in development.
  11. How do you not reject, but rather improve, imperfect resources with potential?
  12. What is peer review about: an evaluative judgement or a process for involving the community? Both.
  13. Concerning a bias flag or bias meter, "those people are taxpayers too."
  14. DWEL's experience was that they got more and more picky as time went on. But remember that DWEL is, and always was conceived as, part of the Reviewed Collection, not the Broad Collection.
  15. DWEL's initial expectation was that the hard part would be the cataloging, but after 6-8 months the cataloging was under control and it seemed that gathering/selecting was the hard part. There was a similar trajectory in the NSDL-funded DLESE gathering effort [DiLeonardo, Tahirkheli, Kastens, DeFelice]: the effort involved in gathering/selecting was initially underestimated.


Chris DiLeonardo, DLESE Steering Committee Chair, "Resources I Didn't Collect and One that I did Collect".

(Editor's note: beginning in fall 2000, and scheduled to continue through fall 2004, there has been an NSDL-funded collections-building effort within DLESE. The PIs are Kim Kastens, Chris DiLeonardo, Sharon Tahirkheli (American Geological Institute), and Barbara DeFelice. Within this 4-PI collaboration, Chris is in charge of gathering/selecting appropriate resources, which are then cataloged by Sharon's group. In addition, in 2001-2002, Dave Mogk led a GEO-funded gathering/cataloging effort for DLESE. Both of these funded efforts fed/feed resources directly into the Broad Collection, as contrasted with the builders of themed or aggregated collections, who gather resources for a branded subcollection within DLESE.)

What has been brought into the DLESE Broad Collection so far is "the cream."

Chris used several examples to describe the types of resources that presented problems when considering their inclusion in the DLESE broad collection. In many cases, he by-passed resources that he thought were sub-standard, simply not passing them along for the labor-intensive process of cataloging. They weren't formally rejected in any official way, and the resource creator never even knew they had been considered. They just didn't go into the library.

Chris' examples included instances of bias; poor science; bias plus poor science; blatant commercialism; clearly commercial, but having educational value; non-science or very inappropriate, pseudoscience; reliable author but information within site not reliable; political bias with misused science.

It was clear from the discussion of these cases that DLESE's formal policies do not specify how to deal with such cases. Chris has been using his own judgement, as did Dave Mogk previously, and quietly leaving aside what he considers flawed resources.


Holly Devaul, DPC (by phone), Quality Assurance of the DLESE Community Collection.

Holly described the Quality Assurance (QA) procedures utilized in reviewing resources submitted to the DLESE Community Collection.

(Editor's note: "The DLESE Community Collection" refers to resources that have been contributed and cataloged as individual items via the DLESE Cataloger, regardless of whether they came from funded gathering/cataloging efforts (DiLeonardo/Tahirkheli or Mogk) or from members of the wide DLESE community, or from resource creators. The Community Collection is to be contrasted with "Themed Collections" aka "Aggregated Collections" aka "Branded Collections," terms which are used to refer to collections of resources that have been accessioned as a group into DLESE, under the terms of the Interim Collection Accession Policy. Katy Ginger spoke about Themed Collections later in the meeting.)

Every resource coming into the DLESE Community Collection is checked by a staff member at the DPC. This QA step addresses whether the resources have the required metadata, whether they are within the scope of the library, and whether the site is functional.

There is a DLESE Collection Systems holding area where submitted resources are held before being accessioned. Some of the reasons resources are held there:

  1. Doesn't meet criteria.
  2. Is a mirror site.
  3. Is off-line, temporarily or permanently.
  4. Has content that is different than when initially reviewed.

Currently there are 324 records in this holding area, most commonly because the link is not working (found by automatic link checking).
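Automated link checking of this sort can be sketched in a few lines. The following is a hypothetical illustration, not DLESE's actual QA tooling (the function names and hold-reason strings are invented): it checks each cataloged URL and reports those that should move to the holding area.

```python
import urllib.request
import urllib.error

def check_link(url, opener=urllib.request.urlopen, timeout=10):
    """Return 'ok', or a reason to hold the record ('HTTP 404', 'unreachable: ...')."""
    try:
        with opener(url, timeout=timeout) as resp:
            status = getattr(resp, "status", 200)
            return "ok" if status < 400 else f"HTTP {status}"
    except urllib.error.URLError as exc:
        return f"unreachable: {exc.reason}"

def sweep(catalog, opener=urllib.request.urlopen):
    """Check every resource in a {record_id: url} catalog; records whose
    result is not 'ok' are candidates for the holding area."""
    return {rid: check_link(url, opener) for rid, url in catalog.items()}
```

The `opener` parameter exists so the checker can be exercised without live network access; a production sweep would also want retries and a grace period before holding a record, since links often fail transiently.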

The DPC QA staff checks whether resources meet the Broad Collection accession criteria. Sites not meeting criteria are not accessioned; the cataloger or creator is notified by email. Reasons why resources have been turned down at this step:

  1. Not relevant to ESS.
    1. Out of scope science.
    2. Not science.
  2. Not a technically functioning electronic resource.
    1. broken link, fatal loop, under construction.
    2. No direct access to material.

There is a "grey area" of resources in the library that are of questionable quality (because of distracting advertising, biased science, or lack of learning materials) but not clearly excludible under current policy.

(Note: links to examples of all of the above-mentioned problematic resource types are included in Holly's Powerpoint.)

Most problems arise with resources not coming from one of the funded gathering/cataloging efforts. These "Other Community" resources constitute 12% of the collection (before upcoming addition of the newly accessioned themed collections).

Katy Ginger, DPC (by phone), DLESE Themed Collection Accessioning.

Katy described the processes for accessioning themed collections (Refer also to the Interim Collection Accession Policy). At this time, there are about 20 collections that may be accessioned as part of the DLESE v.2 roll-out this summer (see list). This is the first time that themed collections have been accepted for accessioning into DLESE, and therefore procedures are still being refined.

Katy provided a handout containing a 12-point checklist for collections accessioning and a collections accessioning flowchart that shows the process in detail.

Some themed collections subcontracted with DPC for QA. For these collections, the quality checks on individual items are similar to those for individual resources submitted into the Community Collection (see Holly's talk). One specific difference is that if there is a problem, DLESE QA personnel contact the collection builder, not the resource creator.

(Editor's comment: For collections which do not contract with DPC for metadata QA, it is less clear how the checks for resource quality and metadata quality are going to happen. The Interim Collections Accession Policy calls for the Collections Committee to review collections which are applying for accessioning. At least for this year, this task has been delegated to a smaller working group chaired by the Collections Standing Committee. The first meeting of this group was scheduled during the week following the Quality Workshop.)


Susan Buhr, Evaluation Core Services Group, highlights of Pitschmann's article.

To make sure we haven't overlooked any important potential selection criteria, Kim asked Susan to summarize anything important mentioned in Pitschmann's "Selection Criteria" chapter which hadn't yet emerged in our discussion.

Susan pointed out that Pitschmann's article clarified the difference between scope and selection criteria: a resource may fall within a collection's scope and yet not merit inclusion, so selection criteria need to be defined to ensure that the collection contains only suitable resources.

Some selection criteria listed by Pitschmann but not yet featured in the DLESE discussion are:

  1. Provenance – the origin of a resource and credibility of that source.
  2. Uniqueness of the information – how much primary content is present.
  3. Currency – how up-to-date the content is.


Suzanne Larsen, Collections Committee Chair: Learning from Traditional Libraries.

Suzanne handed out an example of a Collection Development Policy from a university-affiliated high school library in Colorado, calling attention in particular to the sections on "Reconsideration of Materials," "Procedure for handling Challenged or Questioned Books and Materials," and the form for "Citizen's Request for Reconsideration of Library Materials." DLESE doesn't have such a mechanism for community members to challenge resources.

The several librarians among the workshop group spoke about similarities and differences between a digital library and a traditional library, and what DLESE can learn from traditional libraries. One big difference is that a traditional library has a predefined constituency, whereas the constituency of a digital library is in part self-selected because it is attracted to the Collection.

Traditional libraries have "bibliographers" or "subject specialists" who make decisions about what shall be acquired.

Another big difference is that most resources in traditional libraries cost money to acquire. But, said another participant, resources in DLESE cost money also, not to acquire, but for cataloging QA, ongoing link checking, etc.

It may now be important for DLESE to articulate the answer to the question "What does it mean to be a DLESE resource?" If a resource is in the DLESE Collection, does that carry with it a promise of quality? Or not? There was mention of a recent article about DLESE in Science magazine headlined "Best of Earth Science Ed."

(Editor's note: The workshop's review of DLESE Policies neglected to touch on the Terms of Use, which turns out to bear on this question. The following text from the Terms of Use was circulated by email after the meeting: "This website contains links to other websites that do not belong to DLESE. ... No sponsorship, approval or endorsement of any product, service or information provided by the third-party site, the content of the site, or the site itself is intended or implied by DLESE.")

Workshop Part B: Discussion of Criteria

The remaining workshop time for Monday was spent in a Think-Pair-Share exercise about Criteria. First, each participant thought individually about what criteria should be at the gateway of the Broad Collection. Then pairs or small groups met to compare notes and opinions. Discussion was lively. Then each group summarized their discussion and conclusions on a large sticky post-it.

The post-its were stuck on the wall and the whole group reconvened. A spokesperson from each group briefly reported out on the highlights of their group's discussion.

Then the participants viewed other groups' posters. While examining the posted ideas, each individual then indicated by a * whether he/she supported the idea or indicated by a NO if he/she did not agree with the idea. Again, discussion was lively.

Overnight, Kim typed up the material on the posters (text only, not drawings), including the number of *'s and "NO's" given to each item by other participants. In red are items which had more than two *'s.

 

Morning Session, Tuesday July 1, 2003.

The posters were re-posted on the walls. Kim handed out copies of the typed-up version of the Criteria posters. Kim noted that we seem to have very few bad ideas; there are few "NO's" on the posters. There are a lot of ideas on the posters which have many *'s, offering hope that we may be able to reach consensus on some items.

Kim then put up on the computer projector a Word file with all of the ideas from the posters that had more than two stars. From this starting position, she led a discussion of what Criteria and Priorities shall be applied at the gateway to the Broad Collection. A distinction was drawn between "Criteria" and "Priorities." "Criteria" (aka "accession threshold criteria") means a resource without this attribute should be excluded from the library. "Priority" means that time and effort should be focused on gathering and cataloging resources that have this desirable attribute--but that if a community member contributes (and catalogs) a resource without this attribute, that lack wouldn't be grounds to exclude the resource.

During this discussion, Kim drafted real-time "Proto-recommendations" on the computer projector, aiming for group consensus on a few key words or phrases on each topic. These consensus words or phrases are in red in the "Proto-recommendations" document, and are followed in black by notes about things that should be taken into account in implementing the recommended criterion or priority.

Workshop Part C: Discussion of Procedures

Rajul Pandya, DPC, A framework for thinking about mechanisms.

Kim asked Raj to review the outcome of DPC's investigation of the feasibility of the provisional status or "holding tank" idea, following the Fall 2000 and Summer 2001 Steering Committee directives to investigate this idea. Raj provided a framework of factors that should be taken into account in evaluating the feasibility of a mechanism or procedure for applying criteria/priorities to the DLESE Broad Collection, using the "holding tank" idea as an example. The factors identified by Raj are: technical implementation, social implementation, cost at cataloging and QA steps, maintenance costs, scalability, preserving of information during harvesting by other libraries, whether the mechanism supports collection development, and whether the mechanism helps to create a collection that follows the selection criteria. On the question of community participation, Raj drew a distinction between mechanisms that rely on community participation in order to be successful, versus mechanisms which afford opportunities for the community to become involved, but which will still succeed even if no one from the community chooses to become involved in a specific case.


Kim then began a discussion of procedural implications of the previously-discussed criteria. Casting so many of our ideas as "priorities" or as "DLESE favors" implies that DLESE will continue to require funded gatherers/catalogers, who are committed to finding and cataloging resources which have the favored attributes. We can't just rely on contributions from the community (with flawed resources screened out) to build a well-balanced library characterized by the attributes we favor.

Holly estimated that under the new set of criteria above, about 100 items in the Broad Collection (out of approximately 3600 total) would probably not have been admitted. The group felt that this was not an alarmingly high number. Holly and Chris both emphasized that, had these criteria been in place earlier, many of these items would not have been gathered/cataloged. If they had been cataloged from the community, current QA procedures would have detected them and they wouldn't have been ingested. We agreed that, if more stringent criteria are adopted at the gateway to the Broad Collection, these 100 or so problematic resources should be reexamined and deaccessioned if appropriate, as called for under the Deaccession Policy.

Mary pointed out that the number of resources is one factor people use in deciding whether to use DLESE. If employing new criteria reduces the number of new accessions, then growth of library usage may be slower than we hoped. Usage goals may need to be revised downward.

The discussion then turned to procedures for enforcing the accession threshold criteria and encouraging the priorities. As during the Criteria discussion, Kim composed real-time Proto-recommendations on the computer screen, with consensus wording in red.

Mike Mayhew proposed that a Review Board should review resources before they are ingested into the Broad Collection. Questions arising concerning the review board idea: Should the board review all resources, or just ones identified as questionable during the current QA process? Should the board consist of paid staff, or volunteers like journal Associate Editors? Should the review board review resources coming in as part of themed collections, or only the Community Collection (resources coming individually)? Should the review board pass judgement on all criteria or only the more controversial or subjective criteria?

Katy Ginger pointed out that we hadn't dealt with criteria for annotation collections, which have different characteristics than resource collections. (Editor's note: an annotation collection is a collection in which the individual items are annotations about educational resources, rather than educational resources themselves.)

We seemed to have bitten off more than we could chew, and were struggling to try to develop procedures that would cover too many diverse types of circumstances. So Kim asked the group to consider a narrower suite of resources: only resources submitted as individual items, in other words, the Community Collection part of the Broad Collection. Kim suggested that, even after having benefited from Katy's briefing, we the workshop participants, and DLESE as an organization, didn't know enough about Themed Collections to develop meaningful recommendations for review procedures. The first meeting of the Collections Accession working group was scheduled the week after our workshop, and we encouraged that group to think about procedures for enforcing/encouraging the quality and relevance of resources submitted within Themed Collections. On the topic of Themed Collections, we agreed that the resources in Themed Collections should be held to the same threshold accession criteria as resources in the Community Collection.

Focusing in on this narrower set of resources--the Community Collection resources which enter the library item by item--the group drafted a strawman set of procedures. The funded gatherers/catalogers (Foothill/AGI group and successors) would gather and catalog only resources meeting the new accession criteria, and would focus on resources in priority areas. The current QA system would keep an eye out for problematic resources, especially those coming from community catalogers. The DPC QA staff would refer problematic resources to a Review Board (this would be post-cataloging, pre-ingestion). The Review Board would make a recommendation to the Collections Committee about whether or not to accession that item.

There wasn't universal agreement that these procedures would be effective, so the workshop participants stressed the importance of trying these procedures provisionally and carefully evaluating their effectiveness. The Collections Committee should seek to learn from consideration of specific problem cases to refine criteria, the outreach message, etc., so as to decrease the abundance of problematic resources in the future. The Review Board should sample a subset of the accessioned resources to test whether problematic resources are slipping through.

The final topic discussed by the workshop concerned whether and how DLESE should implement a system of identifying or flagging resources that could have a pedagogical value if used carefully, but which aren't, taken by themselves, what most geoscience educators would classify as good educational resources. Examples would be websites that have a strong point-of-view or advocacy position on a controversial environmental topic, or resources which contain out-of-date ideas or interpretations. There was general agreement that DLESE should further explore developing such a capability.

There was an inconclusive discussion of how, exactly, the "flagging" or identification should be accomplished. There was agreement that the technique, whatever it was, should be searchable, so that a user could elect not to see the flagged resources. Whatever it is, it should be obvious on the first page (short description) in the Discovery System, or users wouldn't see it. There was some concern that if an annotation service were used, the annotations wouldn't be harvested along with the metadata by NSDL or other libraries, or wouldn't be displayed by those libraries. There was a concern that such value judgments or "reviews" shouldn't be in the metadata; that metadata was supposed to be reserved for a straight description of the resource as it stands. But John Saylor pointed out that as far as NSDL is concerned, DLESE owns the metadata and can put whatever we want into the metadata. Some favored putting the "flag" as specific wording in the first few lines of the description; others favored an icon in the Discovery System return. Another suggestion was that such resources shouldn't be labeled as DLESE Collection resources at all, but should be made available through special pages or portals (analogous to the "Bad Science" site), or through a "Developmental Collection" of resources that don't meet the criteria for the DLESE Collection.
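The "searchable flag" idea amounts to carrying the flag in each catalog record and letting the search interface filter on it. A minimal sketch, with invented field names and made-up example records (this is not the DLESE metadata schema or Discovery System):

```python
# Hypothetical catalog records; "flags" is an invented field carrying
# the kinds of labels discussed above (advocacy, out-of-date science).
records = [
    {"title": "Plate Tectonics Primer", "flags": []},
    {"title": "Industry View of Climate", "flags": ["advocacy"]},
    {"title": "Pre-Plate-Tectonics Geology", "flags": ["out-of-date science"]},
]

def search(records, show_flagged=True):
    """Return records, optionally suppressing any that carry a flag,
    so that a user can elect not to see flagged resources."""
    if show_flagged:
        return records
    return [r for r in records if not r["flags"]]

# A user who opts out of flagged resources sees only the unflagged record.
print([r["title"] for r in search(records, show_flagged=False)])
```

Because the flag lives in the metadata record itself, it would travel with the record when the metadata is harvested by NSDL or other libraries, which was the concern raised about keeping such labels in a separate annotation service.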