Jane Greenberg's article "Understanding Metadata and Metadata Schemes" presents an approach for the study of metadata schemes. Her MODAL (Metadata Objectives and principles, Domains and Architectural Layout) framework proposes examining the features of various metadata schemes, such as EAD, RSS, Dublin Core, and FRBR, to provide a way to study and interpret schemes and to aid in their design. Although the subject matter is far beyond the scope of my studies thus far, Greenberg provides ample background information and definitions to aid in understanding the MODAL approach. Metadata, or data about data, "addresses attributes that describe, provide context, indicate the quality, or document other object (or data) characteristics" (20). Metadata supports many different functions, including the discovery, management, usage, audience(s), authentication, linking, and hardware/software needs of particular resources. Elements supporting these functions might include an author, a title, a subject, the price of a particular resource, its rights and reproduction restrictions, and so on. Different organizations use many different metadata schemes. One thing all metadata schemes have in common, however, is that they incorporate objectives and principles that govern how the scheme will use metadata to describe the organization's collection(s). Greenberg's MODAL approach also looks at the domain of an organization's collection(s) to further understand its metadata scheme. Domain includes "the discipline or the community that the scheme serves" (29), as well as object types and formats. Architectural layout refers to a scheme's structure: how deep the metadata elements go and how they branch off in different directions to describe the collection(s). Greenberg states that "although metadata schemes vary tremendously, they are shown to be similar when examining their objectives and principles, domain foci, and architectural layout" (33).
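To make the idea of "data about data" concrete for myself, here is a little sketch of my own (not from Greenberg's article) of what a single metadata record might look like. The element names follow the Dublin Core style, but the values are invented, and the toy search function is just my way of picturing the discovery function:

    # A toy metadata record using a few Dublin Core-style element names.
    # The element names follow Dublin Core; the values are invented.
    record = {
        "title": "A History of Hypothetical Houses",
        "creator": "Jane Doe",
        "subject": "Architecture -- History",
        "date": "2004",
        "format": "text/html",
        "rights": "Reproduction restrictions may apply",
    }

    # One function metadata supports is discovery: even a crude keyword
    # search over the record's values can surface the resource.
    def matches(record, query):
        q = query.lower()
        return any(q in value.lower() for value in record.values())

    print(matches(record, "architecture"))  # True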
Dawson and Hamilton's "Optimising metadata to make high-value content more accessible to Google users" presents, in my opinion, a very balanced view of the Google vs. academia debate that has caused so much controversy among information professionals in recent years. I agree with the authors' position that Google has the capability to reach millions of information seekers, so information providers should do everything in their power to make their collections available to the casual Internet surfer as well as to the more serious scholar, both of whom may be using Google simply because of its speed and ease of use. The authors point to several success stories of private and public institutions that have used metadata implementation to increase their rankings in Google searches. If it is our job as information professionals to make information easy to find and access, why then is there so much skepticism regarding Google as a reliable source for information? Of course, the cost involved in creating metadata for collections as extensive as library catalogs is quite high, but in many cases I would think that the potential benefits to the institutions would eventually outweigh the costs. Dawson and Hamilton introduce the term "data shoogling," which means "rejigging, or republishing, existing digital collections, and their associated metadata, for the specific purpose of making them more easily retrievable via Google" (313). The authors offer relatively simple solutions for "shoogling" data that one need not be a cataloging expert to carry out successfully. The Glasgow Digital Library (GDL) serves as a striking example of what data shoogling can do for a relatively small library. The GDL published an electronic book about old country houses in Glasgow. Because of its optimized metadata, the book ranked number one when "old country houses" (without quotation marks) was searched in Google in 2004. In fact, I just searched those terms myself in Google, and the same holds true today! The authors realize that Google may not be on top forever, but they offer ways to prepare for that as well. For example, they suggest that information providers "remain flexible and...establish procedures that will allow output and optimisation for different applications in future" (324). Finally, the authors urge institutions to reconsider the Google question since, after all, that's where many of their potential users already are.
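To picture what "shoogling" might actually involve, here is a rough sketch of my own (not the authors' code): it takes an existing metadata record and republishes it as the kind of HTML tags a Google crawler actually reads. The record values are my own illustration; the <meta name="DC.*"> pattern follows the common convention for embedding Dublin Core elements in a web page.

    # A sketch of "data shoogling": republishing an existing metadata
    # record as HTML <title> and <meta> tags that a search engine can
    # index. The record values here are illustrative.
    import html

    def shoogle(record):
        tags = [f"<title>{html.escape(record['title'])}</title>"]
        for element, value in record.items():
            tags.append(f'<meta name="DC.{element}" content="{html.escape(value)}">')
        return "\n".join(tags)

    record = {
        "title": "The Old Country Houses of the Old Glasgow Gentry",
        "creator": "Glasgow Digital Library",
        "subject": "Country houses -- Glasgow (Scotland)",
    }
    print(shoogle(record))

The point, as I understand it, is that the descriptive metadata ends up in the page itself, where the crawler sees it, rather than being locked away in a catalog database.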
Greenberg's article made me think back to my archives class last semester, for which I wrote a research paper on Encoded Archival Description (EAD). It was very interesting to learn about the levels of classification and the history of this metadata scheme, which originated at UC Berkeley in the 1990s. Here is the web site: http://sunsite.berkeley.edu/FindingAids/uc-ead/. I had a lot of fun using the metadata tags to find photos in the Online Archive of California (http://www.oac.cdlib.org/). At the time I saw metadata tags as something similar to Library of Congress Subject Headings, which I suppose they are, but they go much deeper, since they can easily be slipped into the code of a web page to make the content more findable to information seekers. EAD and other metadata schemes just make so much sense to me. Why not make the information-rich collections of public institutions available to Google and other search engine users? Isn't the whole point of having these free resources that people can, and will want to, access them?
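Thinking back to that paper, the nested, hierarchical structure is what makes EAD click for me. Here is a bare-bones sketch of how a finding aid's levels branch off (the element names are from the EAD tag set as I remember it; the collection itself is invented):

    # A minimal sketch of EAD's nested structure, built with Python's
    # standard library. Element names (ead, archdesc, did, unittitle,
    # dsc, c01) come from the EAD tag set; the content is invented.
    import xml.etree.ElementTree as ET

    ead = ET.Element("ead")
    archdesc = ET.SubElement(ead, "archdesc", level="collection")
    did = ET.SubElement(archdesc, "did")
    ET.SubElement(did, "unittitle").text = "Example Family Papers"
    ET.SubElement(did, "unitdate").text = "1890-1950"

    # The <dsc> section is where the levels of classification branch:
    # series, subseries, folders, items.
    dsc = ET.SubElement(archdesc, "dsc")
    series = ET.SubElement(dsc, "c01", level="series")
    series_did = ET.SubElement(series, "did")
    ET.SubElement(series_did, "unittitle").text = "Correspondence"

    print(ET.tostring(ead, encoding="unicode"))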
Monday, October 23, 2006