Dear Norman Residents,
Have you noticed the Jim Lemons political signs around town? Well, I have and I've been making fun of his slogan for days now, which is "America, Families, Etc." Lame. WELL, turns out that it is a spoof! I LOVE IT!!! I've been thinking to myself, "OK, 'America, Families' is pretty damn vague but the 'Etc.' is the part that really gets me." 'Etc.' could mean practically ANYTHING - prostitution, drugs, gay marriage, take your pick! Anyway, this is DEL.ICIO.US. Here's the complete story.
Students spoof political process with fictitious candidate
By M. Scott Carter
THE NORMAN TRANSCRIPT (NORMAN, Okla.)
NORMAN, Okla. — The Jim Lemons political campaign is pretty low key.
There’s no television.
There’s no radio.
No newspaper ads.
Public appearances are scarce.
Heck, if you attended Lemons’ last press conference, consider yourself among the chosen few. Finding the Lemons headquarters isn’t easy, either. Granted, there are yard signs and stickers scattered throughout Norman, but Lemons and his staff are very difficult to locate.
There is a campaign spokesman.
And yes, there’s also a Web site.
But a vote for Jim Lemons is, well, pretty much impossible.
Because Jim Lemons doesn’t exist.
At least not in the flesh.
The brainchild of Norman residents Tres Savage and Josh McBee, the Jim Lemons campaign is a statement; a protest, Savage says, “about the deplorable electoral process Oklahomans have gotten themselves into.”
And that protest has become a local legend.
With curious phone calls to The Transcript and questions from seasoned political observers and other candidates, the Lemons campaign has grown far beyond a few yard signs and a Web site.
It has quickly become part of the local political landscape.
And that landscape, Savage says, needs to be shaken up.
As the editor of the University of Oklahoma’s student newspaper, The Oklahoma Daily, Savage has covered his share of politicians; and it’s their behavior, along with the process of getting elected, which bothers him.
“This is personal,” Savage said.
“It has nothing to do with my job. It’s a protest about ugly political process and how people are being misled.” A former intern and Transcript reporter, Savage is no stranger to politics or protests.
And while some protesters choose to rally, march and sing or boycott whatever entity they disagree with, that’s not Savage’s or McBee’s style.
Both are journalists — college journalists.
And, unlike many players in the state political arena, they actually have something to say. But to be effective the pair knew their protest had to be unique; it had to have some panache, if you will. For them, their political statement had to be something the public would remember, and hopefully, take to heart.
And yeah, it also had to be fun.
“OK, the truth is Josh (McBee) and I were sitting around this summer when we came up with the idea. We were talking about politics when I realized how much I hated the process,” Savage admitted to The Transcript.
And thus, Jim Lemons—and his 2006 campaign—was born.
Labor was induced.
“Yes, we were induced by something,” Savage said. “But what, I don’t want to say.”
Taking advantage of the quickest way possible to get the message out — the Internet’s myspace Web site — the Lemons campaign took its first steps. Following the site’s launch, the first bright red yard sign was placed in Savage’s yard.
Touting Lemons’ name and the slogan, “Make lemonade 06,” the sign caught the attention of locals and more signs — and supporters — followed.
And, before long, Savage had produced 500 yard signs — at a cost of almost $800 — urging voters to support the unseen candidate. “We looked at it like this: No matter who you’re gonna vote for you’re gonna get a lemon. So that became our slogan.”
And even though Lemons claimed no party affiliation, nor did he seek any particular office, his campaign continued to expand — due in part, Savage said, to the public’s frustration with mainstream candidates, political parties and the media. “Plus the fact there really isn’t anyone trying to do anything different.”
But to keep their momentum their candidate had to seem real.
Back to the Internet.
Complete with photographs and “news” stories, the Jim Lemons’ site includes some personal information, but little political insight:
• Lemons says he’s 51 years old.
• He says he’s married.
• He says he has grandchildren.
• He says he’s straight.
• He says he’s a Capricorn.
• He claims to be a resident of Norman.
• He also says he’s a Christian, a proud parent and a college grad.
• And his chief political rival is a man named David Dibble.
Lemons, according to his Internet site, has also been busy: The candidate has hosted at least one impromptu campaign rally and a fall press conference. In a September press release, Lemons even responds to questions about his campaign literature being found at the scene of several area drug busts.
“Because hundreds of supporters have been spreading my message and sticking my stickers all across Norman, it’s unavoidable that some of the thousands of marijuana users in this city might happen to venture past my campaign postings,” Lemons’ release said. “I think, if anything, that these so-called seedy sightings of my election paraphernalia only prove the strength of my campaign.”
In another posting, Lemons, like many local candidates, addresses his problems with yard signs being destroyed.
“There have been reports that Mr. Dibble and his associates have been involved in the disappearance of my signs,” the site says, “but I do understand that David is a documented kleptomaniac and has been seeking treatment at various facilities for multiple years. Thus, I do not want to turn his struggle with a crippling psychological syndrome into a campaign issue.”
The tongue-in-cheek volley comments on the recent spate of television stories covering controversies about thefts and defacing of candidates’ yard signs.
As Lemons’ stealth campaign continued, his strategy evolved and, consequently, a new theme was adopted. “I was working as an intern for the Oklahoma City Gazette and was covering the 5th Congressional District race, and I was amazed by the rhetoric — faith, family and all that stuff,” Savage said. “I wanted to take a shot at that.”
The result, Savage said, was a new campaign theme: “Jim Lemons — America, families, etc.”
“That pretty much summed up our feelings,” he said. “We’re trying to throw Oklahoma politics a curve ball. It needs a curve ball.”
So far, Savage and McBee have thrown strikes.
From the huge increase in requests for yard signs to the unscheduled campaign rally, Jim Lemons and his cadre of supporters are injecting a bit of fun and political theater into an otherwise drab campaign season filled with sleaze, mud-slinging and ever-increasing claims of negativity.
“It’s not just the politicians,” Savage said. “I’m also frustrated with the media; they are part of a politician’s plan to get elected. The politicians want to get publicity. They mostly court television, more than print, for sure, but the trick is to get attention. And yet, at the same time, no one in the media is holding any candidate’s feet to the fire.”
As an example, Savage cites the Senate District 16 race.
“None of the candidates separated themselves from one another,” he said. “There was hardly anything about how the candidates stood and what they believed in. Plus, early in the primary they were all heavy into yard signs. I was talking with Josh (McBee) about it and we agreed: If you were just going on yard signs early on, Ott would have been elected.”
That race, Savage said, and the fact that the state’s voter turnout has been incredibly low for the election cycle, give Lemons’ campaign more standing.
“It’s something different,” Savage said. “It’s far from the norm. Lemons appeals to people who don’t tune into what’s going on right now between Thad and Wallace and Sparks and Davis. Lemons is for those people who are so disgusted they don’t care about the other.”
The campaign’s focus, Savage says, is on those who are frustrated.
“If only 30 percent of the registered voters in this state vote, then Jim Lemons is for the other 70 percent,” he said.
With just days left before the Nov. 7 election, Savage said the Lemons campaign isn’t worried. “We’re telling people to write Lemons’ name in,” Savage said. “Even if it does invalidate their ballot.”
“In Oklahoma, write-in candidates are not counted,” says Cleveland County election board secretary, Paula Roberts. “Our machines are not set up to read write-in candidate names. And writing in a name could invalidate the ballot.”
That fact doesn’t bother Savage, he says, because not allowing write-in candidates is wrong. “It’s ridiculous and it needs to be changed,” he said. “I know I will be writing Lemons’ name in and I highly encourage anyone who doesn’t know who they are voting for to write Lemons’ name in.”
People, he said, should not be discouraged from voting.
“That’s why we’ve put the date on our sign,” he said. “To let people know when they could vote.”
So what happens to Jim Lemons after the election?
“I think Jim will stick around,” Savage said. “I was thinking, ‘From now on anytime I want to be philanthropic to help further society, Jim Lemons will help me do it.’ Plus he may write the occasional opinion piece or letter to the editor.”
A fictional way to solve some very real problems, he says.
And Lemons today?
“Oh he’s everywhere,” Savage said. “That’s what the signs say.”
M. Scott Carter writes for The Norman (Okla.) Transcript.
Monday, October 30, 2006
Oct. 30: Ding, Golder and Huberman
Ying Ding's article, "A review of ontologies with the Semantic Web in view," discusses several important ontologies in relation to human-computer interaction. Ontologies "can be seen as metadata that explicitly represent the semantics of data in a machine-processable way" (377). A widely cited definition of an ontology comes from Gruber: "an ontology is a formal, explicit specification of a shared conceptualization" (378). What is important here for the information science community is an ontology's relationship to the Semantic Web, which allows for machine-readable information exchange. Ding lists several important ontologies, ontology languages, and ontology tools. Each community may have its own specialized ontology. For example, the business community uses Enterprise Ontology, which highlights terms related to processes and planning, the structure of organizations, high level planning, and marketing and selling goods and services (379). Ontology languages "are either logic-based (frame logic), or web-based (RDF, XML, HTML)" (379). Continuing with our business ontology example, Enterprise toolsets "are implemented using an agent-based architecture to integrate off-the-shelf tools in a plug-and-play style" (380). Enterprise toolsets support the Enterprise Ontology discussed earlier. Ding also lists several ontology projects, including Enterprise, which is "aimed at providing a method and computer toolset which will help capture aspects of a business and analyse these to identify and compare options for meeting the business requirements" (381). Although much of Ding's article was very abstract to me, I understand the importance of ontologies with regard to the Semantic Web. Ontologies provide a set of standards which can support the interoperability of common tools and aid in their design.
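Ding's point that an ontology represents semantics "in a machine-processable way" can be made concrete with a toy sketch: once facts are stored as subject-predicate-object triples (the model behind RDF), a program can answer questions about them mechanically. Everything below, the facts and predicate names alike, is invented for illustration and is not taken from any real ontology.

```python
# A tiny triple store: facts as (subject, predicate, object), RDF-style.
# All names here are invented for illustration.
triples = {
    ("Enterprise Ontology", "describes", "organizations"),
    ("Enterprise Ontology", "describes", "marketing"),
    ("RDF", "is_a", "web-based ontology language"),
    ("frame logic", "is_a", "logic-based ontology language"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given (possibly partial) pattern."""
    return {
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    }

# What does the Enterprise Ontology describe? No human reading required.
answers = {o for (_, _, o) in query("Enterprise Ontology", "describes")}
print(sorted(answers))  # ['marketing', 'organizations']
```

The point of the sketch is only that the semantics live in the data itself: any agent that agrees on the predicate names can run the same query, which is what makes machine-to-machine exchange on the Semantic Web possible.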
On a lighter note, Scott Golder and Bernardo Huberman look at Del.icio.us, a popular site for bookmarking and tagging URLs. The authors discuss the difference between collaborative tagging, such as is the practice in Del.icio.us, and taxonomies, which are more hierarchical and exclusive. With collaborative tagging, individuals make the distinction as to what tag to apply to a certain bookmarked URL, which is influenced by the individual's level of expertise as well as social factors such as language and culture. Although collaborative tagging does present some problems, it also provides the "opportunity to learn from one another through sharing and organizing information" (201). Golder and Huberman looked at data from Del.icio.us to reveal patterns of use. They found that users initially prefer more general tags and that successive tags were more specific and/or personal in nature. Another important finding is that users often imitate other users and share knowledge in the network, meaning that they often choose tags that have been created by other users because they perceive them as being 'correct' when they may not know how to tag a particular URL. The authors assert that this factor may be a cause for the stabilization of tags to describe URLs. Interestingly, Del.icio.us in this way can be seen as a URL recommendation service "even without explicitly providing recommendations" (207).
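Golder and Huberman's imitation finding can be sketched as a tiny simulation: if each new tagger mostly copies a tag in proportion to how often earlier users applied it, with only an occasional independent choice, the tag mix for a URL tends to settle rather than churn. The tags and probabilities below are invented for illustration; this is a toy model of the mechanism, not their actual analysis.

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so the sketch is reproducible

# Three tags already applied to a hypothetical URL (invented for illustration).
tags = ["webdev", "css", "design"]
history = list(tags)

for _ in range(500):
    if random.random() < 0.05:
        # Rarely, a user picks independently of what others have done.
        history.append(random.choice(tags))
    else:
        # Usually, a user imitates: choosing uniformly from the history
        # means each tag is copied in proportion to its past frequency.
        history.append(random.choice(history))

counts = Counter(history)
print(counts.most_common())
```

Because imitation reinforces whatever is already common, early tags get "locked in" and relative proportions stabilize, which is the authors' explanation for why the set of tags describing a URL converges over time.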
I've never used Del.icio.us myself but, after reading Golder and Huberman's article, I am interested to see how it all works. As I was reading I was reminded a lot of Flickr, a photo storage service that allows its users to tag photos to be searched by other users. Flickr is, in my opinion, much more personal, or at least it can be, as it allows users to tag their own photos with personal names of friends, family, and even complete strangers. Of course, other more general tags can be and are used in Flickr. Searching through the millions of photos can provide hours of time-wasting fun!
Monday, October 23, 2006
Oct. 23: Dawson, Greenberg
Jane Greenberg's article "Understanding Metadata and Metadata Schemes" presents an approach for the study of metadata schemes. Her MODAL (Metadata Objectives and principles, Domains and Architectural Layout) framework proposes the examination of the features of various metadata schemes, such as EAD, RSS, Dublin Core, FRBR, etc., to provide a way to study and interpret schemes and to aid in their design. Although the subject matter is far beyond the scope of my studies thus far, Greenberg provides ample background information and definitions to aid in the understanding of the MODAL approach. Metadata, or data about data, "addresses attributes that describe, provide context, indicate the quality, or document other object (or data) characteristics" (20). There are many different functions that metadata supports, including the discovery, management, usage, audience(s), authentication, linking and hardware/software needs of particular resources. Examples of such functions might include author, title, subject, the price of a particular resource, its rights and reproduction restrictions, and so on. There are many different metadata schemes for different organizations. One thing all metadata schemes have in common, however, is that they incorporate objectives and principles that govern how the scheme will use metadata to describe the organization's collection(s). Greenberg's MODAL approach also looks at the domain of an organization's collection(s) to further understand its metadata scheme. Domain includes "the discipline or the community that the scheme serves" (29), as well as object types and formats. Architectural layout refers to a scheme's structure - how deep the metadata elements go and how they branch off into different directions to describe the collection(s). Greenberg states that "although metadata schemes vary tremendously, they are shown to be similar when examining their objectives and principles, domain foci, and architectural layout" (33).
Dawson and Hamilton's "Optimising metadata to make high-value content more accessible to Google users" presents, in my opinion, a very balanced view of the Google vs. academia debate that has caused so much controversy among information professionals in recent years. I agree with the authors' position that Google has the capability to reach millions of information seekers, so information providers should do everything in their power to make their collections available to the casual Internet surfer as well as to the more serious scholar, both of whom may just be using Google because of its speed and ease of use. The authors point to several success stories of private and public institutions that have used metadata implementation to increase their rankings in Google searches. If it is our job as information professionals to make information easy to find and access, why then is there so much skepticism regarding Google as a reliable source for information? Of course, the cost involved in creating metadata for such extensive collections as library catalogs is quite high, but in many cases I would think that the potential benefits to the institutions would eventually outweigh the bottom line. Dawson and Hamilton introduce the term "data shoogling," which means "rejigging, or republishing, existing digital collections, and their associated metadata, for the specific purpose of making them more easily retrievable via Google" (313). The authors offer relatively simple solutions for "shoogling" data that one need not be a cataloging expert to carry out successfully. The Glasgow Digital Library (GDL) serves as a poignant example of what data shoogling can do for a relatively small library. The GDL published an electronic book about old country houses in Glasgow. Because of optimized metadata, the book ranked number one when "old country houses" (without quotation marks) was searched in Google in 2004.
In fact, I just searched those terms myself in Google, and the same holds true today! The authors realize that Google may not be on top forever but offer ways to get around that. For example, they suggest that information providers "remain flexible and...establish procedures that will allow output and optimisation for different applications in future" (324). Finally, the authors urge institutions to reconsider the Google question since, after all, that's where many of their potential users already are.
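The kind of "shoogling" the authors describe can be sketched in a few lines: take an existing catalogue record and republish it as embedded HTML meta elements that a search engine can crawl and index. The element names below follow Dublin Core conventions, but the record values are invented for illustration and are not the GDL's actual metadata.

```python
# A flat catalogue record; field values invented for illustration.
record = {
    "DC.title": "Old Country Houses of Glasgow",
    "DC.creator": "Glasgow Digital Library",
    "DC.publisher": "Glasgow Digital Library",
    "DC.type": "Text",
}

def to_meta_tags(record):
    """Render a flat metadata record as HTML <meta> elements,
    ready to be dropped into a page's <head>."""
    return "\n".join(
        f'<meta name="{name}" content="{value}">'
        for name, value in record.items()
    )

print(to_meta_tags(record))
```

The appeal of this approach is exactly what the authors argue: the metadata already exists in the catalogue, so republishing it in a crawler-friendly form is cheap relative to the visibility gained.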
Greenberg's article made me think back to my archives class last semester, for which I wrote a research paper on Encoded Archival Description (EAD). It was very interesting learning about the levels of classification and the history of this metadata scheme, which originated at UC Berkeley in the 1990s. Here is the web site: http://sunsite.berkeley.edu/FindingAids/uc-ead/ (sorry, my equal key doesn't work because I spilled limeade on my laptop - true story - I'll fix the link tomorrow!). I had a lot of fun using the metadata tags to find photos in the Online Archive of California (http://www.oac.cdlib.org/ - again, I'll fix it tomorrow...) At the time I saw metadata tags as something similar to Library of Congress Subject Headings, which I guess they are, but they go much deeper as they can easily be slipped into the code of a web page to make the content more findable to information seekers. EAD and other metadata schemes just make so much sense to me. Why not make the information-rich collections of public institutions available to Google and other search engine users? Isn't the point of having these free resources so that people can and will want to access them?
Sunday, October 15, 2006
Are You Blogging This?
My friend Dave published this video on his blog (see link under my favorites) and I thought I'd share it with the class, in case you haven't seen it already. It's from David Lee King, Digital Branch & Services Manager at the Topeka & Shawnee County Public Library. Looks like a fun coworker, eh? Think I'll go add him to my myspace friends.
Oct. 16: Ferreira/Pithan, Jeng
This week's articles were all about the usability of digital libraries. They look at things like effectiveness, efficiency, satisfaction, learnability, and error recovery. These are all important characteristics for designers of digital libraries to keep in mind. Ferreira and Pithan's study looked at the issue from a human-computer interaction (HCI) and information science (IS) point of view, and integrated that with Carol Kuhlthau's and Jakob Nielsen's work on information seeking and usability. I think this study is a good place to start for digital library designers, as it encompasses many different ideas that, when considered together, allow for a great deal of information gathering on how users perceive the usability of digital libraries. Since the explosion of digital information in the 1990s, it seems that not much work has been done in this area and that users have had to somehow figure out how to use digital libraries for themselves. Studies have been conducted in the area of IS, I assume, since its inception, so it makes sense (a tribute to Dervin there!) to study the newest method of information retrieval in the context of usability. Keeping the human being in mind is of vital importance, since there is always a person on one end of a search for information... I remember reading Kuhlthau's article and noticing myself going through the six phases (uncertainty, optimism, confusion/doubt, confidence/clarity, sense of direction, and satisfaction/disappointment) when searching for articles in certain databases (which shall remain nameless). The study also looked at Nielsen's five variables of usability (learnability, efficiency, memorability, errors, and satisfaction) in the context of human-digital library interaction. Much research remains to be done so that digital libraries can be pleasant, efficient, and satisfying resources for information seekers to use.
Jeng's article looks more at measuring usability. She states, "Indeed, digital library development involves interplay between people, organization, and technology. The usability issues should look at the system as a whole" (48). Here, she hit the nail on the head for me. The same theme uncovered itself to me as in Ferreira and Pithan's study - consider the human aspect. Jeng looked at the definition and dimensions of usability, how other studies have evaluated it, and proposes a model for assessing the usability of academic digital libraries using ISO 9241-11, which defines usability as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use" (50). This same definition was used for the HCI portion of Ferreira and Pithan's study. Jeng concludes that "there is a need of usability testing benchmarks for comparison" (52). I couldn't agree more. It would be nice, at least in an academic digital library context, to have some sort of standard to which the information search and retrieval process could adhere. That way, a user would more likely know a "good" digital library from a "bad" one.
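The ISO 9241-11 definition Jeng applies is concrete enough to compute: given a set of usability test sessions, effectiveness, efficiency, and satisfaction each reduce to a simple summary statistic. The sessions below are invented for illustration; real studies would of course use richer instruments than this sketch.

```python
# Hypothetical usability test sessions, invented for illustration:
# (task completed?, seconds taken, satisfaction rating out of 5)
sessions = [
    (True, 40, 4),
    (True, 55, 5),
    (False, 120, 2),
    (True, 35, 4),
]

completed = [s for s in sessions if s[0]]

# Effectiveness: the proportion of users who achieved the specified goal.
effectiveness = len(completed) / len(sessions)

# Efficiency: mean time spent by users who succeeded at the task.
efficiency = sum(t for _, t, _ in completed) / len(completed)

# Satisfaction: mean subjective rating across all sessions.
satisfaction = sum(r for _, _, r in sessions) / len(sessions)

print(f"effectiveness={effectiveness:.2f}, "
      f"efficiency={efficiency:.1f}s, satisfaction={satisfaction:.2f}")
```

Numbers like these are exactly what the benchmarks Jeng calls for would standardize: without agreed targets for each dimension, one library's "0.75 completion rate" can't be meaningfully compared to another's.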
OK, here is where I insert my related experience/reading/whatever. I admit, I am a myspace junkie. I really like the social aspect of it all. In fact, I was recently (yesterday) reunited with my first and second cousins, who now live in Florida! I remember when all my second cousins were born but, sadly, they moved away in the 90s and we kind of lost touch. It was surprising and wonderful for them to find me on myspace. I also have friends in California, New York, Georgia, Washington, and even overseas, with whom I can easily keep in touch via myspace. Plus, it's just fun to act dumb and keep up one's site, at least I think so. What I want to discuss, though, is the fact that myspace needs to employ a librarian to catalog and classify its music section to make it more efficient, effective, and satisfying to use. In a word, they need to look into usability. It's fun to think of a song you might want to hear and look up the band; lo and behold, there it is! Someone out there has taken the time to create a page for a band and make their song(s) available for download and/or posting on one's page. The problem is, however, that anyone can create a music page. Well, that's not really the problem. The problem is in the cataloging. Users can name their band anything they want. For example, try searching for a band with an ampersand in its name and see what results you get. I'm not saying that users shouldn't be allowed to add whichever band they want, just that the music should be more easily searchable. Another example of how frustrating it is: one can only search by three categories - band name, genre, country. It would be nice if myspace allowed users to search by song title, year, etc. If its collection were well organized and correctly cataloged, this could happen. The myspace music database is a wiki of sorts but with no oversight for errors. Maybe someday I'll get a life and not have to worry about it!
Until then, though, they could hire me to clean it up and make it more usable. What a fun job that would be!
Jeng's article looks more at measuring usability. She states, "Indeed, digital library development involves interplay between people, organization, and technology. The usability issues should look at the system as a whole" (48). Here, she hit the nail on the head for me. The same theme uncovered itself to me as in Ferreira and Pithan's study - consider the human aspect. Jeng looked at the definition and dimensions of usability, how other studies have evaluated it, and proposes a model for assessing the usability of academic digital libraries using ISO 9241-11, which defines usability as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use" (50). This same definition was used for the HCI portion of Ferreira and Pithan's study. Jeng concludes that "there is a need of usability testing benchmarks for comparison" (52). I couldn't agree more. It would be nice, at least in an academic digital library context, to have some sort of standard to which the information search and retrieval process could adhere. That way, a user would more likely know a "good" digital library from a "bad" one.
OK, here is where I insert my related experience/reading/whatever. I admit, I am a myspace junkie. I really like the social aspect of it all. In fact, I was recently (yesterday) reunited with my first and second cousins, who now live in Florida! I remember when all my second cousins were born but, sadly, they moved away in the 90s and we kind of lost touch. It was surprising and wonderful for them to find me on myspace. I also have friends in California, New York, Georgia, Washington, and even overseas, with whom I can easily keep in touch via myspace. Plus, it's just fun to act dumb and keep up one's site, at least I think so. What I want to discuss, though, is the fact that myspace needs to employ a librarian to catalog and classify its music section to make it more efficient, effective, and satisfying to use. In a word, the need to look into usability. It's fun to think of a song you might want to hear and look up the band, lo and behold, there it is! Someone out there has taken the time to create a page for a band and make their song(s) available for download and/or posting on one's page. The problem is, however, that anyone can create a music page. Well, that's not really the problem. The problem is in the cataloging. Users can name their band anything they want. For example, try searching for a band with an ampersand in its name and see what results you get. I'm not saying that users shouldn't be allowed to add whichever band they want, just that the music should be more easily searchable. Another example of how frustrating it is is that one can only search by three categories - band name, genre, country. It would be nice if myspace allowed users to search by song title, year, etc. If its collection were well organized and correctly cataloged, this could happen. The myspace music database is a wiki of sorts but with no oversight for errors. Maybe someday I'll get a life and not have to worry about it! 
Until then, though, they could hire me to clean it up and make it more usable. What a fun job that would be!
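To make the point concrete, here's a minimal sketch of the kind of multi-field search a properly cataloged music collection could support. The records and field names are hypothetical illustrations, not anything from myspace itself - the idea is just that once each entry carries consistent metadata, searching by song title or year (not only band name, genre, and country) becomes trivial:

```python
# Hypothetical catalog entries with consistent metadata fields.
records = [
    {"band": "The Examples", "song": "First Try",
     "genre": "rock", "country": "US", "year": 2004},
    {"band": "Sample & Co.", "song": "Ampersand Blues",
     "genre": "blues", "country": "UK", "year": 2005},
]

def search(catalog, **criteria):
    """Return entries whose fields exactly match every given criterion."""
    return [r for r in catalog
            if all(r.get(field) == value for field, value in criteria.items())]

# Search by year -- a field the three-category search
# (band name, genre, country) doesn't offer.
hits = search(records, year=2005)
```

Note that exact matching also sidesteps the ampersand problem: "Sample & Co." is just a stored value, not something the search syntax has to parse.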
Sunday, October 08, 2006
Oct. 9: Lanier, Schiff
Stacy Schiff's article "Can Wikipedia conquer expertise?" calls the online encyclopedia "a lumpy work in progress" ([8]). Schiff compares Wikipedia to the Encyclopaedia Britannica and other encyclopedias of the past. She recalls the story of Johann Heinrich Zedler, who compiled an encyclopedia in Germany in the 18th century. Book dealers in the area feared that they would be put out of business because Zedler's Universal-Lexicon would "[render] all other books obsolete" ([2]). It seems the information world has the same fears today - Google will annihilate the library and Wikipedia will cause dust to collect on reference shelves. Again I am reminded of the readings we did in the first semester about the emergence of the printing press and how people feared the dissemination of information to the "common folk" would devalue hand-copied manuscripts and the information contained within them. In fact, the exact opposite happened and, as we learned, the more people who have access to information, the better off society is as a whole. Information - the great equalizer. Wikipedia is definitely an interesting experiment but I see no reason to fear it. Schiff notes that since its inception, Wikipedia has instituted policies and procedures to cut down on the amount of hacking and bias. Schiff mentions provenance as one of Wikipedia's main shortcomings. She states that "[t]he bulk of Wikipedia's content originates not in the stacks but on the Web, which offers up everything from breaking news, spin, and gossip to proof that the moon landings never took place" ([8]). This, in my opinion, pretty much proves that Wikipedia will never replace the library or any part of it. As the joke goes, "It's on the Internet, so it must be true." Wikipedia can be a valuable resource for gathering basic information about a subject before actual research takes place but it will never be a substitute for real, reliable resources.
Jaron Lanier's article, "Digital Maoism: The hazards of the new online collectivism", presents a more chilling look at Wikipedia and its possible repercussions. This article was a very fun read and it actually got my blood pumping a few times. I can completely understand his alarm with regard to the effects that this new online collectivism might have on society. In the same breath, however, I feel that his is a pretty radical point of view but, hey, I love radical! I think there is real reason to fear the way people tend to use Wikipedia as a reliable source of factual information. As we information professionals know, it is not to be regarded in this way but to be taken with the proverbial grain of salt. But I think of the generation of people that are growing up having never known life without the Internet and what they might believe about Wikipedia. It's hard to imagine, having had to research topics in libraries throughout my own life, but I imagine that the younger generation might well be fooled into thinking that Wikipedia is the same as any other encyclopedia. Lanier states quite eloquently, "In the last year or two the trend has been to remove the scent of people, so as to come as close as possible to simulating the appearance of content emerging out of the Web as if it were speaking to us as a supernatural oracle. This is where the use of the Internet crosses the line into delusion" ([5]). Again, though, the printing press and even the emergence of radio and television come to mind as I think about Wikipedia's possible side effects. This is just yet another avenue for information and entertainment, and to treat it as Satan incarnate is going just a little too far. People just need to be educated about the good and the bad of resources such as Wikipedia. It will be our job as librarians and information professionals to do this.
I found some reactions to Lanier's article on boingboing, which were very insightful. I think Cory Doctorow summed it up best: "Wikipedia isn't great because it's like the Britannica. The Britannica is great at being authoritative, edited, expensive, and monolithic. Wikipedia is great at being free, brawling, universal, and instantaneous". Regardless, I absolutely loved Lanier's article and his take on online collectivism and the hive mentality. I think it takes all kinds of opinions and everyone is entitled to her/his own.