Monday, December 04, 2006

Dec. 4: Notess, Ru and Horowitz

The last blog of the semester is upon us. I can hardly believe it's over; it went by so quickly. This week's articles were about L2, or Library 2.0, and the "invisible web," the behind-the-scenes, non-indexed (and, therefore, unsearchable) web. In his article, "The Terrible Twos: Web 2.0, Library 2.0, and More," Greg Notess discusses the ambiguous and sometimes controversial concepts of 2.0 applications and their implications for the library of the future. The terms are ambiguous because there is still debate as to what, exactly, they mean. They're controversial because some people don't particularly like the terms and argue that the definitions are too broad. Notess cautions, however, that before we completely dismiss Library 2.0, we should "visit some of the example sites, experiment with their capabilities, and imagine the possibilities for products and processes" (42). Some of the L2 concepts he discusses, such as instant messaging, RSS, wikis, social networking, and blogs, are already proving to be effective tools in many libraries. As the Internet grows more interactive, Library 2.0 will, in my opinion, become more and more appropriate and important.

Yangbo Ru and Ellis Horowitz's article, "Indexing the invisible web: A survey," discusses the two sides of the web: the visible, searchable side that we see every day by entering terms into a search engine, and the hidden side, or the invisible web. The invisible web "refers to the vast collection of information that is accessible via the worldwide web, but is not indexed by conventional search engines" (249). This can include databases, audio and video clips, and intentionally excluded pages, such as pornography. Some search engines have applications in place that automatically index the invisible side of web sites; other sites are indexed by human beings, which, of course, means that the indexing is limited by the preferences and experience of the indexer. Another complication is that invisible web sites present many different interfaces, making it difficult to design a one-size-fits-all solution to accommodate all of the invisible content out there. The authors suggest "a technique that can more comprehensively index the data in an invisible web site...that will not get swamped by the size of the data" (262).
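To make the indexing problem concrete, here's a toy sketch (my own illustration, not Ru and Horowitz's technique; the records and seed terms are invented) of why this content is "invisible": the records sit behind a query form, so a crawler can only surface them by probing the form with seed terms and hoping its probes reach everything.

```python
# Toy model of a "surfacing" crawler for invisible-web content.
# The hidden records are reachable only through a search interface,
# never through hyperlinks a conventional crawler could follow.

HIDDEN_DB = {  # hypothetical records behind a search form
    "r1": "jazz recordings archive",
    "r2": "classical recordings archive",
    "r3": "oral history transcripts",
}

def search_form(term):
    """Simulates the site's query interface: the only way in."""
    return [rid for rid, text in HIDDEN_DB.items() if term in text]

def probe_index(seed_terms):
    """Builds a surrogate index by issuing probe queries,
    as a surfacing crawler might."""
    index = {}
    for term in seed_terms:
        for rid in search_form(term):
            index.setdefault(rid, set()).add(term)
    return index

index = probe_index(["recordings", "transcripts", "archive"])
```

With a poor choice of seed terms, some records are simply never found, which hints at the authors' concern about indexing comprehensively without being swamped by the size of the data.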

I guess I had never given much thought to the "invisible web" before. It seems like there is so much content out there already and, as Ru and Horowitz note, the hidden, unsearchable side of the web contains much more information than the Publicly Indexable Web (249). It seems like a monumental task but, as time goes on and the web evolves and gets even larger, such a task will mean more opportunity for information professionals. As I come to my last semester in graduate school, my thoughts turn to jobs, interviews, and resumes - scary stuff... Library 2.0 and the invisible web, though, respectively, controversial and unfathomable in terms of magnitude, give me hope for the future of librarianship and my role as an information professional.

Happy Holidays everyone! Here's a recipe
I'm going to try this Xmas. It's from our friends at Forward Foods and it looks delicious. Actually, I picked up some of the Smokey Blue cheese today and, well, just trust me - try it.

Monday, November 20, 2006

Nov. 20: Nicholson, Rose

Scott Nicholson's article, "Digital Library Archaeology: A Conceptual Framework for Understanding Library Use Through Artifact-Based Evaluation," compares the science of archaeology through the ages to bibliomining and other research into digital library use. Nicholson suggests that library science has not grown as much as the pure sciences because of a lack of hypothesis-based research and traditional scientific process. The author stresses looking at patterns of use but also maintaining user privacy in the process. He states that "the focus of the present work is to understand the interaction between a user and electronic resources through a digital library service" (500). The main point of the article, as I see it, is that there remains much research to be done and the archaeological method is quite applicable in the digital realm. I agree with Nicholson that user privacy is of utmost importance in user-centered research. User-centered research seems to be the current trend in library science. The benefit of being able to converse with the actual living users of a system, unlike trying to piece together information from artifacts left behind by earlier civilizations, gives library science an advantage that archaeology does not have. By using this information carefully, library science can gain a better understanding of digital library users and create a better system based on their behavior.

Daniel E. Rose's article, "Reconciling Information-Seeking Behavior with Search User Interfaces for the Web," discusses user search behavior in a variety of contexts. The author notes that today's information seekers use more simplified queries rather than the complex Boolean logic of former information retrieval systems. Today most of the more complicated search mechanisms "happen behind the scenes, while relevance ranking determines how results are presented to the user" (797). The contexts in which Rose looks at information-seeking behavior are the goal of the user, the cultural and situational context at the time of the search, and the repetitive nature of the search task (797). Rose identifies three types of information needs: 1) navigational - for example, finding a particular web site without prior knowledge of its URL; 2) informational - simply finding information related to search terms; and 3) transactional - finding a service, such as a database, that will allow the user to investigate her/his query further. Rose states that despite these different types of user searches, "nearly every Web search engine offers users the identical search experience" (798). Cultural and situational contexts, as well as the iterative nature of the search task, further complicate user-search engine interaction. Rose suggests that Web search interfaces should take these points into consideration and that these insights might change the face of the search engine as we know it today.
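Rose's point that every engine offers "the identical search experience" despite differing needs raises the question of how a need type could even be inferred. Here is a deliberately crude heuristic classifier (my own illustration, not Rose's method; the keyword lists are invented) that guesses a query's type from surface features alone:

```python
# Toy classifier for Rose's three information-need types.
# Real engines infer intent from far richer signals; this only
# shows the shape of the problem.

def classify_query(q):
    ql = q.lower()
    # Looks like a site name: user wants a specific destination.
    if ql.startswith(("www.", "http")) or ql.endswith((".com", ".org", ".gov")):
        return "navigational"
    # Looks like a request for a service or resource to use further.
    if any(w in ql for w in ("buy", "download", "database")):
        return "transactional"
    # Default: find information related to the terms.
    return "informational"
```

Even this toy version shows why one-size-fits-all result pages are a compromise: the same ranked list serves all three guesses.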

Monday, November 13, 2006

Nov. 13: Audunson, Liu

Ragnar Audunson's article, "The public library as a meeting-place in a multicultural and digital context: The necessity of low-intensive meeting-places," discusses the role of the public library in today's multicultural world. Audunson compares modern multicultural society with "the so-called information or knowledge society" (429). He states at the beginning of the article that the two are 'seemingly unrelated' but that both have a great impact on public librarianship. I couldn't agree more, although I can easily see the correlation between the diversification of society and the information explosion we are now experiencing. It seems to me that it might now be less isolating than it once was for ethnic and cultural minorities to move to a completely foreign society (the article mentions Lapps and Romany as examples), where they may not even speak the language. If information is more widely available, especially digitally, I would think this would help ease the transition into the adopted society much more so than before the advent of the Internet. Audunson traces the public library's roots from the Enlightenment to its modern role as the seat of community democracy, promoting cultural tolerance and community involvement. The public library, the author notes, should be a place "where people belonging to different cultural groups can meet and communicate" (433). He highlights this further by explaining how the wide availability of digital information can foster communication with greater numbers of people. Audunson calls the public library a "low-intensive arena" in which this communication can take place. These arenas make it possible for people from different cultural groups and with different values to come together and discuss social and political issues. He explains that "high-intensive arenas" (such as the workplace, church, families, etc.), although vital to an individual's sense of self, can "create social and cultural boundaries and demarcations" (437).
The public library, therefore, must be upheld as a necessity by its community. It remains the one societal institution where all forms of information, and people from all walks of life, are not only welcome but encouraged.

Ziming Liu's article, "Reading behavior in the digital environment: Changes in reading behavior over the past ten years," looks at how the explosion of digital information has caused the emergence of screen-based reading, as opposed to traditional hand-held print reading. One goal of Liu's study is to understand these changes in order to design better digital library resources. Some recent studies "argue that the arrival of digital media, together with the fragmentary nature of hypertext, is threatening sustained reading" (701). However, Liu's study contradicts this theory. Although the author doesn't deny the emergence of screen-based reading as a new reading behavior, his study finds that most readers still prefer the printed word to its digital counterpart for a variety of reasons. The ease of annotating and highlighting is the main reason people prefer printed documents for in-depth reading. As Liu notes, anyone can use a pencil or highlighter on a piece of paper, but annotating and highlighting electronic documents requires certain knowledge that not everyone has. Liu points out that screen-based reading has its advantages, such as browsing, scanning, and keyword spotting, but that paper will likely remain the preference in the future for in-depth reading.

Liu's article struck me as quite appropriate to my life. I think the availability of information in electronic form is great but I, both as a student and as just a regular person, much prefer to hold what I'm reading in my hand. I sometimes even print out lengthier emails just so I don't have to stare at the computer screen too long. Also, a little secret about me: I hate ebooks. Well, hate is a strong word but I don't really like them much. If I were assigned a book to read and it was only available to me online, I would be very unhappy. I would probably go instead to the public library and get a physical copy of the book. I, like many working folks, spend way too much time every day with my eyes glued to a computer monitor (well, in my case, two computer monitors). It's nice to do some old-fashioned print-based reading every once in a while. I think Liu's study is interesting and it will be even more interesting to see how reading behavior evolves in the future.

Monday, November 06, 2006

Nov. 6: Shaker, Terdiman

Lee Shaker's article, "In Google we trust: Information integrity in the digital age," discusses the safety and reliability of Google and how the public's view of it is shaped partly by its financial success. Shaker takes a close look at Google through two years of New York Times news stories about the company. He finds that most (over half) of the stories within this two-year period solely discussed Google's "corporate interests" (9). This, the author maintains, colors the public's view of Google and erroneously allows its trust in Google to grow. Shaker discusses the framing of news stories: quoting prominent business people who attest to Google's prosperity, for example, instead of gathering unbiased views from many sides, only exacerbates the growing problem of a false sense of trust based on nothing more than fiscal profits. The author also considers Google's questionable privacy practices with regard to its users. He states that Google's "privacy policy makes it clear that it is a document to protect the company's interests first, to reassure users second, and protect users lastly" (5). Shaker stresses that users of Google (as well as, I assume, other information providers) should be somewhat wary and realize that media coverage of its financial success is not necessarily reason to believe that no information risks exist in using the product.

Daniel Terdiman's article, "Folksonomies Tap People Power," discusses the different tagging styles used in sites like del.icio.us and Flickr. The author explains that del.icio.us uses a broad folksonomy, meaning that many users tag the same item (in this case, a URL), while Flickr is a narrow folksonomy - small numbers of users tagging many different individual items (photographs). Terdiman notes that tag use is on the rise in blogs and other social sites and stresses that the more people that get involved, the greater the value tags will have.
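The broad/narrow distinction Terdiman draws can be made concrete by counting distinct taggers per item. A minimal sketch (my own illustration; the users, items, and tags are invented):

```python
# Broad folksonomy (del.icio.us-style): many users tag the same item.
# Narrow folksonomy (Flickr-style): each item is tagged by only a few users.

from collections import defaultdict

def taggers_per_item(tag_events):
    """tag_events: (user, item, tag) triples -> {item: # of distinct taggers}."""
    taggers = defaultdict(set)
    for user, item, tag in tag_events:
        taggers[item].add(user)
    return {item: len(users) for item, users in taggers.items()}

events = [("ann", "url1", "library"), ("bob", "url1", "web2.0"),
          ("cyd", "url1", "library"), ("ann", "photo1", "vacation")]
counts = taggers_per_item(events)
```

In a broad folksonomy the counts climb with popularity (here "url1" has three taggers), which is exactly what gives the aggregated tags their value; in a narrow one most counts stay near one.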

I finally checked out del.icio.us after reading about it last week and it seems like a pretty useful tool for someone who a) has a huge number of web sites s/he likes to keep track of, or b) likes to learn what other people consider valuable information about specific sites. To me, it's somewhat reminiscent of Wikipedia, where many people have the opportunity to create and edit information and, together, the information for the most part remains reliable because it is under constant scrutiny. On the other hand, it doesn't look like users can edit others' tags, so I would think that "bad" tags would only be phased out gradually as "better" ones accumulate, if that makes sense.

Monday, October 30, 2006

Wow, political activism still exists!!

Dear Norman Residents,

Have you noticed the Jim Lemons political signs around town? Well, I have and I've been making fun of his slogan for days now, which is "America, Families, Etc." Lame. WELL, turns out that it is a spoof! I LOVE IT!!! I've been thinking to myself, "OK, 'America, Families' is pretty damn vague but the 'Etc.' is the part that really gets me." 'Etc.' could mean practically ANYTHING - prostitution, drugs, gay marriage, take your pick! Anyway, this is DEL.ICIO.US. Here's the complete story.

Students spoof political process with fictitious candidate

By M. Scott Carter
THE NORMAN TRANSCRIPT (NORMAN, Okla.)
NORMAN, Okla. — The Jim Lemons political campaign is pretty low key.

There’s no television.

There’s no radio.

No newspaper ads.

Public appearances are scarce.

Heck, if you attended Lemons’ last press conference, consider yourself among the chosen few. Finding the Lemons headquarters isn’t easy, either. Granted, there are yard signs and stickers scattered throughout Norman, but Lemons and his staff are very difficult to locate.

There is a campaign spokesman.

And yes, there’s also a Web site.

But a vote for Jim Lemons is, well, pretty much impossible.

Because Jim Lemons doesn’t exist.

At least not in the flesh.

The brainchild of Norman residents Tres Savage and Josh McBee, the Jim Lemons campaign is a statement; a protest, Savage says, “about the deplorable electoral process Oklahomans have gotten themselves into.”

And that protest has become a local legend.

With curious phone calls to The Transcript and questions from seasoned political observers and other candidates, the Lemons campaign has grown far beyond a few yard signs and a Web site.

It has quickly become part of the local political landscape.

And that landscape, Savage says, needs to be shaken up.

As the editor of the University of Oklahoma’s student newspaper, The Oklahoma Daily, Savage has covered his share of politicians; and it’s their behavior, along with the process of getting elected, which bothers him.

“This is personal,” Savage said.

“It has nothing to do with my job. It’s a protest about ugly political process and how people are being misled.” A former intern and Transcript reporter, Savage is no stranger to politics or protests.

And while some protesters choose to rally, march and sing or boycott whatever entity they disagree with, that’s not Savage’s or McBee’s style.

Both are journalists — college journalists.

And, unlike many players in the state political arena, they actually have something to say. But to be effective the pair knew their protest had to be unique; it had to have some panache, if you will. For them, their political statement had to be something the public would remember, and hopefully, take to heart.

And yeah, it also had to be fun.

“OK, the truth is Josh (McBee) and I were sitting around this summer when we came up with the idea. We were talking about politics when I realized how much I hated the process,” Savage admitted to The Transcript.

And thus, Jim Lemons—and his 2006 campaign—was born.

Labor was induced.

“Yes, we were induced by something,” Savage said. “But what, I don’t want to say.”

Taking advantage of the quickest way possible to get the message out — the Internet’s myspace Web site — the Lemons campaign took its first steps. Following the site’s launch, the first bright red yard sign was placed in Savage’s yard.

Touting Lemons’ name and the slogan, “Make lemonade 06” the sign caught the attention of locals and more signs — and supporters — followed.

And, before long, Savage had produced 500 yard signs — at a cost of almost $800 — urging voters to support the unseen candidate. “We looked at it like this: No matter who you’re gonna vote for you’re gonna get a lemon. So that became our slogan.”

And even though Lemons claimed no party affiliation, nor did he seek any particular office, his campaign continued to expand — due in part, Savage said, to the public’s frustration with mainstream candidates, political parties and the media. “Plus the fact there really isn’t anyone trying to do anything different.”

But to keep their momentum their candidate had to seem real.

Back to the Internet.

Complete with photographs and “news” stories, the Jim Lemons’ site includes some personal information, but little political insight:

• Lemons says he’s 51 years old.

• He says he’s married.

• He says he has grandchildren.

• He says he’s straight.

• He says he’s a Capricorn.

• He claims to be a resident of Norman.

• He also says he’s a Christian, a proud parent and a college grad.

• And his chief political rival is a man named David Dibble.

Lemons, according to his Internet site, has also been busy: The candidate has hosted at least one impromptu campaign rally and a fall press conference. In a September press release, Lemons even responds to questions about his campaign literature being found at the scene of several area drug busts.

“Because hundreds of supporters have been spreading my message and sticking my stickers all across Norman, it’s unavoidable that some of the thousands of marijuana users in this city might happen to venture past my campaign postings,” Lemons’ release said. “I think, if anything, that these so-called seedy sightings of my election paraphernalia only prove the strength of my campaign.”

In another posting, Lemons, like many local candidates, addresses his problems with yard signs being destroyed.

“There have been reports that Mr. Dibble and his associates have been involved in the disappearance of my signs,” the site says, “but I do understand that David is a documented kleptomaniac and has been seeking treatment at various facilities for multiple years. Thus, I do not want to turn his struggle with a crippling psychological syndrome into a campaign issue.”

The tongue-in-cheek volley comments on the recent spate of television stories covering controversies about thefts and defacing of candidates’ yard signs.

As Lemons’ stealth campaign continued, his strategy evolved and, consequently, a new theme was adopted. “I was working as an intern for the Oklahoma City Gazette and was covering the 5th Congressional District race, and I was amazed by the rhetoric — faith, family and all that stuff,” Savage said. “I wanted to take a shot at that.”

The result, Savage said, was a new campaign theme: “Jim Lemons — America, families, etc.”

“That pretty much summed up our feelings,” he said. “We’re trying to throw Oklahoma politics a curve ball. It needs a curve ball.”

So far, Savage and McBee have thrown strikes.

From the huge increase in requests for yard signs to the unscheduled campaign rally, Jim Lemons and his cadre of supporters are injecting a bit of fun and political theater into an otherwise drab campaign season filled with sleaze, mud-slinging and ever-increasing claims of negativity.

“It’s not just the politicians,” Savage said. “I’m also frustrated with the media; they are part of a politician’s plan to get elected. The politicians want to get publicity. They mostly court television, more than print, for sure, but the trick is to get attention. And yet, at the same time, no one in the media is holding any candidate’s feet to the fire.”

As an example, Savage cites the Senate District 16 race.

“None of the candidates separated themselves from one another,” he said. “There was hardly anything about how the candidates stood and what they believed in. Plus, early in the primary they were all heavy into yard signs. I was talking with Josh (McBee) about it and we agreed: If you were just going on yard signs early on, Ott would have been elected.”

That race, Savage said, and the fact that the state’s voter turnout has been incredibly low for the election cycle, gives Lemons’ campaign more standing.

“It’s something different,” Savage said. “It’s far from the norm. Lemons appeals to people who don’t tune into what’s going on right now between Thad and Wallace and Sparks and Davis. Lemons is for those people who are so disgusted they don’t care about the other.”

The campaign’s focus, Savage says, is on those who are frustrated.

“If only 30 percent of the registered voters in this state vote, then Jim Lemons is for the other 70 percent,” he said.

With just days left before the Nov. 7 election, Savage said the Lemons campaign isn’t worried. “We’re telling people to write Lemons’ name in,” Savage said. “Even if it does invalidate their ballot.”

“In Oklahoma, write-in candidates are not counted,” says Cleveland County election board secretary, Paula Roberts. “Our machines are not set up to read write-in candidate names. And writing in a name could invalidate the ballot.”

That fact doesn’t bother Savage, he says, because not allowing write-in candidates is wrong. “It’s ridiculous and it needs to be changed,” he said. “I know I will be writing Lemons’ name in and I highly encourage anyone who doesn’t know who they are voting for to write Lemons’ name in.”

People, he said, should not be discouraged from voting.

“That’s why we’ve put the date on our sign,” he said. “To let people know when they could vote.”

So what happens to Jim Lemons after the election?

“I think Jim will stick around,” Savage said. “I was thinking, ‘From now on anytime I want to be philanthropic to help further society, Jim Lemons will help me do it.’ Plus he may write the occasional opinion piece or letter to the editor.”

A fictional way to solve some very real problems, he says.

And Lemons today?

“Oh he’s everywhere,” Savage said. “That’s what the signs say.”



M. Scott Carter writes for The Norman (Okla.) Transcript.

Oct. 30: Ding, Golder and Huberman

Ying Ding's article, "A review of ontologies with the Semantic Web in view," discusses several important ontologies in relation to human-computer interaction. Ontologies "can be seen as metadata that explicitly represent the semantics of data in a machine-processable way" (377). A widely cited definition of an ontology comes from Gruber: "an ontology is a formal, explicit specification of a shared conceptualization" (378). What is important here for the information science community is an ontology's relationship to the Semantic Web, which allows for machine-readable information exchange. Ding lists several important ontologies, ontology languages, and ontology tools. Each community may have its own specialized ontology. For example, the business community uses Enterprise Ontology, which highlights terms related to processes and planning, the structure of organizations, high level planning, and marketing and selling goods and services (379). Ontology languages "are either logic-based (frame logic), or web-based (RDF, XML, HTML)" (379). Continuing with our business ontology example, Enterprise toolsets "are implemented using an agent-based architecture to integrate off-the-shelf tools in a plug-and-play style" (380). Enterprise toolsets support the Enterprise Ontology discussed earlier. Ding also lists several ontology projects, including Enterprise, which is "aimed at providing a method and computer toolset which will help capture aspects of a business and analyse these to identify and compare options for meeting the business requirements" (381). Although much of Ding's article was very abstract to me, I understand the importance of ontologies with regard to the Semantic Web. Ontologies provide a set of standards which can support the interoperability of common tools and aid in their design.
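Ding's point that ontologies make semantics "machine-processable" can be seen in miniature with RDF-style subject-predicate-object triples. This tiny in-memory triple store is my own sketch (the statements about the Enterprise Ontology paraphrase the article; the store itself is invented); real Semantic Web tools work over the same shape of data, just at scale.

```python
# A minimal RDF-flavored triple store: facts as (subject, predicate, object).

triples = {
    ("EnterpriseOntology", "rdf:type", "Ontology"),
    ("EnterpriseOntology", "describes", "organisations"),
    ("EnterpriseOntology", "describes", "marketing"),
    ("RDF", "rdf:type", "OntologyLanguage"),
}

def query(s=None, p=None, o=None):
    """Match triples, with None acting as a wildcard."""
    return {(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)}

described = query(s="EnterpriseOntology", p="describes")
```

Because the predicates come from a shared, agreed-upon vocabulary, a machine can answer questions like "what does this ontology describe?" without understanding any natural language, which is the interoperability payoff Ding emphasizes.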

On a lighter note, Scott Golder and Bernardo Huberman look at Del.icio.us, a popular site for bookmarking and tagging URLs. The authors discuss the difference between collaborative tagging, such as is the practice in Del.icio.us, and taxonomies, which are more hierarchical and exclusive. With collaborative tagging, individuals make the distinction as to what tag to apply to a certain bookmarked URL, which is influenced by the individual's level of expertise as well as social factors such as language and culture. Although collaborative tagging does present some problems, it also provides the "opportunity to learn from one another through sharing and organizing information" (201). Golder and Huberman looked at data from Del.icio.us to reveal patterns of use. They found that users initially prefer more general tags and that successive tags were more specific and/or personal in nature. Another important finding is that users often imitate other users and share knowledge in the network, meaning that they often choose tags that have been created by other users because they perceive them as being 'correct' when they may not know how to tag a particular URL. The authors assert that this factor may be a cause for the stabilization of tags to describe URLs. Interestingly, Del.icio.us in this way can be seen as a URL recommendation service "even without explicitly providing recommendations" (207).
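The imitation effect Golder and Huberman describe is enough, on its own, to produce stable tag proportions. This toy urn-style simulation is my own illustration (not the paper's model, and the tags are invented): each new user simply copies a tag in proportion to its current popularity.

```python
# Urn-style imitation: new taggers copy an existing tag at random,
# so popular tags tend to stay popular and proportions settle down.

import random

def simulate_tagging(initial_tags, n_users, seed=0):
    rng = random.Random(seed)
    tags = list(initial_tags)
    for _ in range(n_users):
        tags.append(rng.choice(tags))  # imitate a tag already applied
    return tags

tags = simulate_tagging(["reference", "library", "search"], 500)
share = tags.count("reference") / len(tags)
```

Run it with different seeds and each tag's share wanders early on, then flattens out as the pile of prior tags grows, a rough picture of the stabilization the authors observed on del.icio.us.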

I've never used Del.icio.us myself but, after reading Golder and Huberman's article, I am interested to see how it all works. As I was reading I was reminded a lot of Flickr, a photo storage service that allows its users to tag photos to be searched by other users. Flickr is, in my opinion, much more personal, or at least it can be, as it allows users to tag their own photos with personal names of friends, family, and even complete strangers. Of course, other more general tags can be and are used in Flickr. Searching through the millions of photos can provide hours of time-wasting fun!

Monday, October 23, 2006

Oct. 23: Dawson, Greenberg

Jane Greenberg's article "Understanding Metadata and Metadata Schemes" presents an approach for the study of metadata schemes. Her MODAL (Metadata Objectives and principles, Domains and Architectural Layout) framework proposes the examination of the features of various metadata schemes, such as EAD, RSS, Dublin Core, FRBR, etc., to provide a way to study and interpret schemes and to aid in their design. Although the subject matter is far beyond the scope of my studies thus far, Greenberg provides ample background information and definitions to aid in the understanding of the MODAL approach. Metadata, or data about data, "addresses attributes that describe, provide context, indicate the quality, or document other object (or data) characteristics" (20). There are many different functions that metadata supports, including the discovery, management, usage, audience(s), authentication, linking and hardware/software needs of particular resources. Examples of such functions might include author, title, subject, the price of a particular resource, its rights and reproduction restrictions, and so on. There are many different metadata schemes for different organizations. One thing all metadata schemes have in common, however, is that they incorporate objectives and principles that govern how the scheme will use metadata to describe the organization's collection(s). Greenberg's MODAL approach also looks at the domain of an organization's collection(s) to further understand its metadata scheme. Domain includes "the discipline or the community that the scheme serves" (29), as well as object types and formats. Architectural layout refers to a scheme's structure - how deep the metadata elements go and how they branch off into different directions to describe the collection(s). Greenberg states that "although metadata schemes vary tremendously, they are shown to be similar when examining their objectives and principles, domain foci, and architectural layout" (33).

Dawson and Hamilton's "Optimising metadata to make high-value content more accessible to Google users" presents, in my opinion, a very balanced view of the Google vs. academia debate that has caused so much controversy among information professionals in recent years. I agree with the authors' position that Google has the capability to reach millions of information seekers, so information providers should do everything in their power to make their collections available to the casual Internet surfer as well as to the more serious scholar, both of whom may just be using Google because of its speed and ease of use. The authors point to several success stories of private and public institutions that have used metadata implementation to increase their rankings in Google searches. If it is our job as information professionals to make information easy to find and access, why then is there so much skepticism regarding Google as a reliable source for information? Of course, the cost involved in creating metadata for such extensive collections as library catalogs is quite high, but in many cases I would think that the potential benefits to the institutions would eventually outweigh the costs. Dawson and Hamilton introduce the term "data shoogling," which means "rejigging, or republishing, existing digital collections, and their associated metadata, for the specific purpose of making them more easily retrievable via Google" (313). The authors offer relatively simple solutions for "shoogling" data that one need not be a cataloging expert to carry out successfully. The Glasgow Digital Library (GDL) serves as a telling example of what data shoogling can do for a relatively small library. The GDL published an electronic book about old country houses in Glasgow. Because of optimized metadata, the book ranked number one when "old country houses" (without quotation marks) was searched in Google in 2004.
In fact, I just searched those terms myself in Google, and the same holds true today! The authors realize that Google may not be on top forever but offer ways to get around that. For example, they suggest that information providers "remain flexible and...establish procedures that will allow output and optimisation for different applications in future" (324). Finally, the authors urge institutions to reconsider the Google question since, after all, that's where many of their potential users already are.
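The basic move behind "shoogling" is to take descriptive metadata that already exists in a catalog record and republish it where a search engine's crawler will actually see it. A minimal sketch (my own illustration; the record is invented, though the element names are standard Dublin Core, and whether a given engine weights meta tags is another matter):

```python
# Render a Dublin Core-style catalog record as HTML <meta> elements,
# one simple way to expose existing metadata to web crawlers.

record = {
    "DC.title": "Views of Old Glasgow Houses",
    "DC.creator": "Glasgow Digital Library",
    "DC.subject": "country houses; Glasgow",
    "DC.type": "Text",
}

def to_meta_tags(rec):
    return "\n".join(
        '<meta name="%s" content="%s">' % (name, value)
        for name, value in sorted(rec.items())
    )

html = to_meta_tags(record)
```

The point of the authors' "relatively simple solutions" is visible here: no new cataloging happens, the existing record is just rejigged into a crawler-friendly form.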

Greenberg's article made me think back to my archives class last semester, for which I wrote a research paper on Encoded Archival Description (EAD). It was very interesting learning about the levels of classification and the history of this metadata scheme, which originated at UC Berkeley in the 1990s. Here is the web site: http://sunsite.berkeley.edu/FindingAids/uc-ead/ (sorry, my equal key doesn't work because I spilled limeade on my laptop - true story - I'll fix the link tomorrow!). I had a lot of fun using the metadata tags to find photos in the Online Archive of California (http://www.oac.cdlib.org/ - again, I'll fix it tomorrow...) At the time I saw metadata tags as something similar to Library of Congress Subject Headings, which I guess they are, but they go much deeper as they can easily be slipped into the code of a web page to make the content more findable to information seekers. EAD and other metadata schemes just make so much sense to me. Why not make the information-rich collections of public institutions available to Google and other search engine users? Isn't the point of having these free resources so that people can and will want to access them?

Sunday, October 15, 2006

Are You Blogging This?

My friend Dave published this video on his blog (see link under my favorites) and I thought I'd share it with the class, in case you haven't seen it already. It's from David Lee King, Digital Branch & Services Manager at the Topeka & Shawnee County Public Library. Looks like a fun coworker, eh? Think I'll go add him to my myspace friends.

Oct. 16: Ferreria/Pithan, Jeng

This week's articles were all about the usability of digital libraries. They look at things like effectiveness, efficiency, satisfaction, learnability, and error recovery. These are all important characteristics for designers of digital libraries to keep in mind. Ferreira and Pithan's study looked at the issue from a human-computer interaction (HCI) and information science (IS) point of view, and integrated that with Carol Kuhlthau's and Jakob Nielsen's work on information seeking and usability. I think this study is a good place to start for digital library designers, as it encompasses many different ideas that, when considered together, allow for a great deal of information gathering on how users perceive the usability of digital libraries. Since the explosion of digital information in the 1990s, it seems that not much work has been done in this area and that users have had to somehow figure out how to use digital libraries for themselves. Studies have been conducted in the area of IS, I assume, since its inception, so it makes sense (a tribute to Dervin there!) to study the newest method of information retrieval in the context of usability. Keeping the human being in mind is of vital importance, since there is always a person on one end of a search for information... I remember reading Kuhlthau's article and noticing myself going through the six phases (uncertainty, optimism, confusion/doubt, confidence/clarity, sense of direction, and satisfaction/disappointment) when searching for articles in certain databases (which shall remain nameless). The study also looked at Nielsen's five variables of usability (learnability, efficiency, memorability, errors, and satisfaction) in the context of human-digital library interaction. Much research remains to be done so that digital libraries can be pleasant, efficient, and satisfying resources for information seekers to use.

Jeng's article looks more at measuring usability. She states, "Indeed, digital library development involves interplay between people, organization, and technology. The usability issues should look at the system as a whole" (48). Here, she hit the nail on the head for me. The same theme emerged for me as in Ferreira and Pithan's study - consider the human aspect. Jeng looks at the definition and dimensions of usability and at how other studies have evaluated it, and proposes a model for assessing the usability of academic digital libraries using ISO 9241-11, which defines usability as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use" (50). This same definition was used for the HCI portion of Ferreira and Pithan's study. Jeng concludes that "there is a need of usability testing benchmarks for comparison" (52). I couldn't agree more. It would be nice, at least in an academic digital library context, to have some sort of standard to which the information search and retrieval process could adhere. That way, a user would more likely know a "good" digital library from a "bad" one.
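To see why benchmarks would help, it's worth noticing that the ISO definition's three dimensions can be turned into plain numbers from test sessions. The sketch below is entirely my own illustration of how one might operationalize them, not Jeng's actual instrument; the metric definitions and sample data are invented.

```python
# Rough illustration of ISO 9241-11's three usability dimensions as
# simple per-task metrics: effectiveness (completion rate), efficiency
# (mean task time), satisfaction (mean rating). My own toy metrics,
# not Jeng's proposed model.

def usability_summary(tasks):
    """tasks: list of dicts with 'completed' (bool),
    'seconds' (float), 'satisfaction' (1-7 rating)."""
    n = len(tasks)
    return {
        "effectiveness": sum(t["completed"] for t in tasks) / n,
        "efficiency_sec": sum(t["seconds"] for t in tasks) / n,
        "satisfaction": sum(t["satisfaction"] for t in tasks) / n,
    }

sessions = [
    {"completed": True,  "seconds": 40.0,  "satisfaction": 6},
    {"completed": True,  "seconds": 65.0,  "satisfaction": 5},
    {"completed": False, "seconds": 120.0, "satisfaction": 2},
]
print(usability_summary(sessions))
```

With numbers like these, comparing two digital libraries on the same tasks becomes a matter of comparing three figures - which is exactly the kind of benchmark Jeng says is missing.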

OK, here is where I insert my related experience/reading/whatever. I admit, I am a myspace junkie. I really like the social aspect of it all. In fact, I was recently (yesterday) reunited with my first and second cousins, who now live in Florida! I remember when all my second cousins were born but, sadly, they moved away in the 90s and we kind of lost touch. It was surprising and wonderful for them to find me on myspace. I also have friends in California, New York, Georgia, Washington, and even overseas, with whom I can easily keep in touch via myspace. Plus, it's just fun to act dumb and keep up one's site, at least I think so. What I want to discuss, though, is the fact that myspace needs to employ a librarian to catalog and classify its music section to make it more efficient, effective, and satisfying to use. In short, they need to look into usability. It's fun to think of a song you might want to hear and look up the band and, lo and behold, there it is! Someone out there has taken the time to create a page for a band and make their song(s) available for download and/or posting on one's page. The problem is, however, that anyone can create a music page. Well, that's not really the problem. The problem is in the cataloging. Users can name their band anything they want. For example, try searching for a band with an ampersand in its name and see what results you get. I'm not saying that users shouldn't be allowed to add whichever band they want, just that the music should be more easily searchable. Another frustration is that one can only search by three categories - band name, genre, country. It would be nice if myspace allowed users to search by song title, year, etc. If its collection were well organized and correctly cataloged, this could happen. The myspace music database is a wiki of sorts but with no oversight for errors. Maybe someday I'll get a life and not have to worry about it!
Until then, though, they could hire me to clean it up and make it more usable. What a fun job that would be!
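For fun, here's what the multi-field search I'm wishing for might look like under the hood: index each record by every field (song title, year, and so on), not just band name, genre, and country. The data model and the records are entirely hypothetical - this is just a sketch of the idea, not anything MySpace actually does.

```python
# Toy multi-field music catalog: every field of a record is indexed,
# so searches can combine band, song, year, etc. Handles ampersands in
# band names because values are matched exactly, not parsed as queries.

from collections import defaultdict

class MusicCatalog:
    def __init__(self):
        self.index = defaultdict(set)   # (field, value) -> record ids
        self.records = {}

    def add(self, rec_id, record):
        self.records[rec_id] = record
        for field, value in record.items():
            self.index[(field, str(value).lower())].add(rec_id)

    def search(self, **criteria):
        """Return records matching every given field=value pair."""
        hits = None
        for field, value in criteria.items():
            ids = self.index.get((field, str(value).lower()), set())
            hits = ids if hits is None else hits & ids
        return [self.records[i] for i in sorted(hits or [])]

catalog = MusicCatalog()
catalog.add(1, {"band": "Sam & Dave", "song": "Soul Man", "year": 1967})
catalog.add(2, {"band": "Sam & Dave", "song": "Hold On, I'm Comin'", "year": 1966})
print(catalog.search(band="Sam & Dave", year=1967))
```

Nothing fancy - but it shows that searching by song title or year is just a matter of indexing those fields when the record is created, which is exactly the cataloging work I'm arguing someone should be doing.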

Sunday, October 08, 2006

Oct. 9: Lanier, Schiff

Stacy Schiff's article "Can Wikipedia conquer expertise?" calls the online encyclopedia "a lumpy work in progress" ([8]). Schiff compares Wikipedia to the Encyclopedia Britannica and other encyclopedias of the past. She recalls the story of Johann Heinrich Zedler, who compiled an encyclopedia in Germany in the 18th century. Book dealers in the area feared that they would be put out of business because Zedler's Universal-Lexicon would "[render] all other books obsolete" ([2]). It seems the information world has the same fears today - Google will annihilate the library and Wikipedia will cause dust to collect on reference shelves. Again I am reminded of the readings we did in the first semester about the emergence of the printing press and how people feared the dissemination of information to the "common folk" would devalue hand-copied manuscripts and the information contained within them. In fact, the exact opposite happened and, as we learned, the more people that have access to information, the better off society is as a whole. Information - the great equalizer. Wikipedia is definitely an interesting experiment but I see no reason to fear it. Schiff notes that since its inception, Wikipedia has instituted policies and procedures to cut down on the amount of hacking and bias. Schiff mentions provenance as one of Wikipedia's main shortcomings. She states that "[t]he bulk of Wikipedia's content originates not in the stacks but on the Web, which offers up everything from breaking news, spin, and gossip to proof that the moon landings never took place" ([8]). This, in my opinion, pretty much proves that Wikipedia will never replace the library or any part of it. As the joke goes, "It's on the Internet, so it must be true." Wikipedia can be a valuable resource for gathering basic information about a subject before actual research takes place but it will never be a substitute for real, reliable resources.

Jaron Lanier's article, "Digital Maoism: The hazards of the new online collectivism", presents a more chilling look at Wikipedia and its possible repercussions. This article was a very fun read and it actually got my blood pumping a few times. I can completely understand his alarm with regard to the effects that this new online collectivism might have on society. In the same breath, however, I feel that his is a pretty radical point of view but, hey, I love radical! I think there is real reason to fear the way people tend to use Wikipedia as a reliable source of factual information. As we information professionals know, it is not to be regarded in this way but to be taken with the proverbial grain of salt. But I think of the generation of people that are growing up having never known life without the Internet and what they might believe about Wikipedia. It's hard to imagine, having had to research topics in libraries throughout my own life, but I imagine that the younger generation might well be fooled into thinking that Wikipedia is the same as any other encyclopedia. Lanier states quite eloquently, "In the last year or two the trend has been to remove the scent of people, so as to come as close as possible to simulating the appearance of content emerging out of the Web as if it were speaking to us as a supernatural oracle. This is where the use of the Internet crosses the line into delusion" ([5]). Again, though, the printing press and even the emergence of radio and television come to mind as I think about Wikipedia's possible side effects. This is just yet another avenue for information and entertainment, and to treat it as Satan incarnate is going just a little too far. People just need to be educated about the good and the bad of resources such as Wikipedia. It will be our job as librarians and information professionals to do this.

I found some reactions to Lanier's article on boingboing, which were very insightful. I think Cory Doctorow summed it up best: "Wikipedia isn't great because it's like the Britannica. The Britannica is great at being authoritative, edited, expensive, and monolithic. Wikipedia is great at being free, brawling, universal, and instantaneous". Regardless, I absolutely loved Lanier's article and his take on online collectivism and the hive mentality. I think it takes all kinds of opinions and everyone is entitled to her/his own.

Monday, September 25, 2006

Sept. 25: Atkinson, Lindgaard

Ross Atkinson's article "Transversality and the Role of the Library as Fair Witness" brings up several extremely important points to consider for the future of libraries and librarianship. Atkinson explains four attributes that make a library a library. First and foremost, a library must have a collection of information from which to choose, and this collection must be made up of authoritative sources. Atkinson calls these first-order attributes. Secondly, a library must exhibit metafunctionality and preeminence in its collection, or what Atkinson calls second-order attributes. Metafunctionality refers to the contextuality of other objects in the collection, while preeminence signifies the library's place in the community. Atkinson discusses these aspects of modern libraries in conjunction with the digital information explosion that is currently changing the face of libraries, especially academic libraries. The library as "fair witness" (a term taken from Heinlein's Stranger in a Strange Land) comes into play with the accuracy and believability of the library as cultural institution. Atkinson proposes (and I wholeheartedly agree with him) that libraries take on the role of "trusted agent behind the technology" - or fair witness (178). This is librarianship at its core. Ranganathan's Five Laws of Library Science came to mind as I was reading this article. Number four - save the time of the reader - seems especially appropriate here. Atkinson goes on to explain the concept of transversality in regard to the library as fair witness. Transversality takes first-order and second-order attributes, along with the idea of fair witness, and gives the user the capacity to make value judgements and draw conclusions from the information and services provided by the library. Transversality is, simply, interoperability (182). Atkinson also discusses the trend of treating libraries as commercial information services, which goes against every librarian grain of my being.
Information for sale? No. Wrong. Free information for all. The author cautions against going in this direction and says that, although it is important for libraries to find new sources for funding, "[i]t is...absolutely essential that the library take the greatest care not to emulate too stringently the values of commercial information intermediaries" (185). As we all know, information brokers do not embody the "fair witness" ideal of the library. The article is wrapped up by the (at least in my mind) positive note that libraries must sometimes act in opposition to their own political or even financial interests in order to serve as fair witnesses in society. This reminded me of the many times I've heard about some group or other wanting a particular book removed from library shelves. For example, a group might want a homosexual-themed children's book taken off the shelf, even when the book is specifically targeted to a juvenile audience (such as a book about having gay parents). I think in cases like these, libraries should take a stand and keep the "controversial" book on the shelf. Parenting should be the responsibility of parents, not librarians. As a fair witness, the library has the responsibility to present the true record of society, regardless of whether it ruffles someone's feathers!

Gitte Lindgaard's article, "Attention Web Designers: You have 50 milliseconds to make a good first impression!" presents the results of three studies on how users perceive web site design. The participants viewed a set of 100 preselected web sites in random order for 500 milliseconds and again for 50 milliseconds and rated each on its visual appeal. Interestingly, results showed that participants made a decision as to whether a site was visually appealing in a matter of 50 milliseconds. The authors indicate that preliminary favorable or unfavorable attitudes toward the aesthetics of a particular site would most likely color the users' subsequent impressions of the site, regardless of whether it was well-designed (from a usability standpoint) or not. The article delves somewhat into what constitutes the aesthetics of "good design" but states that further research is still needed to arrive at a conclusion. I know that the use of color and white space, as well as the choice of font, can make or break a web site for me. If I see Comic Sans, I run the other way in terror!

This is an interesting site I found while searching the web for "bad web sites," which felt just a little surreal... I think I'll bookmark this one for my final project. It has some great tips for avoiding "trainwrecks on the Information Superhighway". Most of the ones I viewed featured really bad color combinations that almost hurt my eyes. Check out the Association of International Glaucoma Societies page if you want to be really confused and, dare I say, disgusted.

Sunday, September 17, 2006

Sept. 18: Gradmann, Mohamed

This week's readings were about implementing metadata and Semantic Web technology to make searching the web easier and more fruitful for the user. Khaled Mohamed's "The Impact of Metadata in Web Resources Discovering" was interesting in that it explored the supposed effect of metadata tags on the findability of web sites using search engines. It also looked at how sites with metadata ranked in the results set compared to those without metadata. I found it hilarious that almost half of the sites that were about metadata standards (for example, a Dublin Core tutorial web site) did not employ metadata elements on their sites to improve findability and ranking. The search engines Mohamed used for the study (Go, AltaVista, and HotBot) "claimed that using metadata influenced page rank order" (165), but the author found otherwise: there was only a slight effect on page rank order when meta tags were used. Mohamed also notes at the end of his article how he had previously researched Arabic web sites and that most (98%) of them used no metadata whatsoever. He stresses the importance of looking at Arabic web sites in comparison to other international sites in terms of government information services.

To me, this just seems like everything else in the world - everyone does things a little bit differently, even when they're doing similar things, and somehow the world hasn't ended yet. Credit card machines are an example of this. Go to a grocery store, a convenience store, a department store - they all have credit card machines that customers use to pay for purchases, yet all of them are probably a little different. The customer will have to slide her/his card in a slightly different way or the buttons will be in different locations or s/he may have to sign a piece of paper, a screen on the machine or not at all. I find this mildly frustrating but it's really not that important. If everything were the same everywhere, the world would be a pretty boring place.

Stefan Gradmann's article discusses the "hidden Web" and Semantic Web technology that could be used to free bibliographic information from the isolated world of the librarian and use it to interact with information on the Web to, I assume, make information easier to find and use. I had a pretty hard time understanding this article as I have no practical experience with FRBR (and I had to Google it to find out that it is, in fact, Functional Requirements for Bibliographic Records) or Semantic Web technology. I found a good introduction to FRBR here. What I understood of the article is that Gradmann proposes to sort of mesh the two information representation systems together for the benefit of users who are searching for information on the Web. He lists several benefits to his proposed system, one of which is that an FRBR system would make the Web more transparent. Now, I don't really know exactly what that means to the world of librarianship, or at all to be quite honest. I've been reading the blogs of other classmates and I see that I'm not alone, which makes me feel a little better. I really have a hard time grasping concepts, especially having to do with computers, if I haven't had any experience using them. I'm sure if I did know what Gradmann was talking about, I'd think it was a really good idea. Wow, lame attempt at discussing the article, huh?
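Since I had to look up FRBR anyway, writing out its Group 1 entities helped me make sense of what Gradmann is meshing with the Web. Here is a toy rendering of the work/expression/manifestation/item hierarchy; the example record (titles, edition, call number) is purely illustrative and not from the article.

```python
# Toy model of FRBR's Group 1 entities: a Work is realized by
# Expressions, embodied in Manifestations, exemplified by Items.
# The sample record below is invented for illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:              # a single physical or digital copy
    location: str

@dataclass
class Manifestation:     # a published embodiment (an edition)
    edition: str
    items: List[Item] = field(default_factory=list)

@dataclass
class Expression:        # a realization (a text, translation, performance)
    form: str
    manifestations: List[Manifestation] = field(default_factory=list)

@dataclass
class Work:              # the abstract intellectual creation
    title: str
    expressions: List[Expression] = field(default_factory=list)

hamlet = Work("Hamlet", [
    Expression("original English text", [
        Manifestation("paperback edition, 1992",
                      [Item("Main library, 2nd floor stacks")]),
    ]),
])
print(hamlet.expressions[0].manifestations[0].items[0].location)
```

If I understand the "transparency" benefit at all, it may be that links on the Web could point at any of these levels (the work itself, a particular translation, a particular edition) instead of only at one flat catalog record - but that's my reading, not a claim about Gradmann's system.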

And now for something completely different... I'm using my Monty Python skit freebie this week! I recently read this article about searching the Internet and privacy. Imagine if everything, or every person, you ever Googled could somehow be linked to your identity. Scary thought, eh? Well, the Internet is still relatively new and the privacy issues that surround it are still up in the air. This article gives lots of tips for keeping your identity and other personal information safe and secure when using search engines and other information services on the Web. Be careful - Big Brother might be watching!

Monday, September 11, 2006

Sept. 11: Vaidhyanathan, Veith

While reading chapter 8 of Siva Vaidhyanathan's The Anarchist in the Library, "The Perfect Library," it hit me. The reason I got into librarianship in the first place. Of course it wasn't the money (ha ha) but I suppose I have just kind of lost sight of why this career path appealed to me. Vaidhyanathan ties everything up so well - the Enlightenment, terrorism, the library in Alexandria, the USA PATRIOT Act, and the commercialization of information that threatens to further widen the digital divide. I could not agree with the author more that "[p]ublic libraries are functional expressions of Enlightenment principles. We are about to let commercial interests shut them down" (124). I see it happening already at Bizzell Library. More and more, students are getting used to having access to information online. We've read in other classes about Google and Wikipedia being used as reliable sources by university students! Sometimes they even expect to be able to borrow something from another library because they can't get access to it electronically through our catalog. What happens often, though, is that our library does, in fact, hold a hard copy and only the newest issues are available online. Students, and faculty as well, would sometimes rather pay for our staff to locate and scan an article rather than come to the library and do it themselves. If things keep going in this direction, with more and more information available online and more and more libraries fulfilling this expectation by purchasing electronic journals and books, the library as cultural institution may well be, as Vaidhyanathan warns, on its way out. I think as future librarians and information professionals, it will be up to us to get the word out about where we want libraries to be in the future. Vaidhyanathan states that he "fear[s] we may be too late" (129); that the commercialization of information is imminent. I certainly hope he's wrong.

I will have to agree with Richard Veith that Vannevar Bush's Memex is not the Internet and is more like the desktop computer, wherein one searches only her/his own files (memories) and not the vast amount of material available to search on the Internet. The MyLifeBits Project interests me very much but I almost have to laugh at the sheer scale of it! I can't even seem to organize my underwear drawer, much less even think about attempting to organize everything I have ever heard or seen. I can see Veith's comparison of the Memex and the iPod/Tablet combination. Maybe something like the Treo or Blackberry with music capabilities comes closest to that today. I can see the eventual integration of all of the technologies we have now, including cell phone, tablet PC with Internet, iPod, and camera. It seems we're almost there.

Tuesday, August 22, 2006

August 28, 2006: Gee, Hinton

Let me begin by saying that these two articles are the most interesting I have read to date in library school. I agree with Gee's thoughts on using video games as models for learning in our schools. He states that "something about how games are designed to trigger learning...makes them so deeply motivating" (Gee 2001, [2]). Motivation seems to be a gross oversight of our schools, even at the university level. Rote memorization is something I remember from my elementary and secondary (and even post-secondary) school days. The only motivator with such a practice is the final grade handed down by the instructor - not much of a tool for instilling in someone the actual desire to learn and understand a concept. Gee lists 13 principles taken from video games and explains how they could be applied in education. For example, one that I thought was quite important was Principle 2: Customize, which states that "[d]ifferent styles of learning work better for different people" (Gee 2001, [7]). The author explains how certain video games allow players to customize their game play to fit their needs. He explains how this principle would let students find their own style of learning as well as try new styles without being afraid. I also wholeheartedly agree with Gee's take on professional practice. He states, "Professional practice is crucial here, because, remember, real learning in science, for example, is constituted by being a type of scientist doing a type of science not reciting a fact you don't understand" (Gee 2001, [11]). Gee uses the example of Galileo experimenting with pendulums and applying to it the principles of geometry. In our schools, however, students are expected to understand things like why the pendulum swings as it does before they have had any practical experience with geometry. This is no doubt quite frustrating to less mathematically inclined students. I think Gee is definitely on to something with his video game learning theory.
Games use subtle techniques to motivate and teach without the user being aware that s/he is learning. Imagine the possibilities if school made learning - all learning - fun...

Hinton's article explores online game environments and other online communities as "third places": virtual spaces where people spend large amounts of time outside of home life and work life. He discusses Quake, "the first three-dimensional computer game to use a set of open standards that allowed anyone to host a game server and create content and modifications for the game engine" (Hinton 2006, [1]). Users of online games and communities create meaning by being able to oversee certain aspects of their creation. With regard to libraries, users of catalogs and databases might be granted the opportunity to create keyword searches from which other users may benefit. Hinton mentions Wikipedia and Craigslist as two examples of "emergent spaces where user activity and interaction create meaning and relevance" (Hinton 2006, [8]). Some might even say that OCLC's WorldCat is a sort of "third place," although it doesn't exactly fit Hinton's definition. Catalogers in libraries around the world have the power to enter books, videos, journals, and many other kinds of resources into WorldCat's interface. Something I've noticed in working with WorldCat on a daily basis is that items are often duplicate entries cataloged slightly differently from one another, or, strangely, someone will catalog an article title, and the person searching for its location will have to be aware that it is an article and not the journal title in which the article can be found. However, WorldCat as a "third place" might not be exactly appropriate, as catalogers are hopefully at work (the "second place") when they are "living" in WorldCat.

Hinton also discusses social networking sites such as MySpace, Facebook, and Digg as "third places." I can say from experience that many people do indeed "live" on MySpace. I see it as a sort of 21st century hybrid of the telephone, note-passing, and school yearbooks. It is even more than that, however. Now one can play videos of all kinds, view, listen to (and sometimes download songs), and "befriend" unsigned and signed bands, and keep current with new movie releases and other media. So, I might say that MySpace is a sort of telephone/note-passing/yearbook/television/newspaper hybrid, although I think the term "social network" is easier to remember... Hinton's memory of a Quake player saying, "we live here" is quite apropos of online social communities. Like the saloons, pubs, and nightclubs of the past and present, they are yet another "place" where people can share ideas, make friends, and get information.

Hinton's article came to mind when I was reading a post on boing boing, a website of weird things, tech news, and other sundries (you'll just have to look at it if you've never seen it). I check this site pretty regularly, as I like offbeat news, and I came across this article, which originally appeared in Wired, about a Canadian record label giving away ProTools files of popular music on MySpace. This is something that would allow DJs and others familiar with ProTools to tweak the music files and use them however they see fit. But the idea, which some think is outrageous because of intellectual property issues and other concerns pertinent to the music industry, might revolutionize the way musicians and record labels make their money. By using social networking sites like MySpace, the record label seeks to attract attention to certain bands at certain times in hopes of, I assume, making money in the end. I found the site on MySpace where one can supposedly download these ProTools files but I was not able to locate them. Maybe the promotion is over. I did, however, see where the songs are for sale, or maybe they are the files, I don't know... I don't particularly care for this band, so I didn't pursue it further. Regardless, the Wired article explains how this record label is experimenting with music promotion with this particular band. The CEO of the label calls the record industry "antiquated" (as do many others in the industry and on the outside) and speaks of "collapsed copyright" in terms of intellectual property of music. By allowing its fans to download and edit or recreate their songs, the band and the record label might just revolutionize the industry. I personally think it's a brilliant idea that will just serve to sell more CDs, mp3s and concert tickets. People like new and different ways of doing things. Just like iTunes has transformed the sale of songs and albums, so might giving away a band's musical secrets do the same for its individual sales.

Gee, James Paul. 2001. Learning by design: Good video games as learning machines. E-Learning 2(1):5-16.

Hinton, Andrew. 2006. We live here: Games, third places and the information architecture of the future. Bulletin of the American Society for Information Science & Technology (July-August). Available at http://www.asis.org/Bulletin/Aug-06/hinton.html