Tuesday, December 9, 2008

Week 14 Comments and Muddiest Point

https://www.blogger.com/comment.g?blogID=4181925387762663697&postID=9182485950601912427&page=1

https://www.blogger.com/comment.g?blogID=7203734298375207400&postID=805300911015851371&page=1


Muddiest Point:

I understand that cloud computing is supposed to save costs for users, but in a service such as Google Docs, for example, what costs are there for the provider, and how are they defrayed?

Wednesday, December 3, 2008

Week 13 Comments

https://www.blogger.com/comment.g?blogID=4181925387762663697&postID=4051051144997857665&page=1

http://lma32.blogspot.com/2008/11/reading-assignment-week-thirteen.html

Week 13 Readings

No Place to Hide / "Terrorism" Information Awareness

Wow. I've read about and been aware of what's going on in our 'surveillance state' (at least what I am allowed to know), but it never ceases to give me a shiver. Unfortunately, what Justice Louis Brandeis called our 'right to be let alone' will turn out to be indefensible when it comes down to it. In the name of security, anything seems to be fair game.

I appreciated the variety of relevant links in the TIA article - from what happens with the phone numbers that cashiers ask for, to how private credit reports actually are, to the details of Poindexter's ideal world. What is most shocking about this whole business is not reading alarmist, paranoid, conspiracy-theorist articles about these propositions and actions, but reading the various pieces of legislation themselves. They need no exaggeration or hype to be frightening. More surprising yet is how quickly we as a society have (generally) accepted these practices.

Thursday, November 20, 2008

Muddiest Point 12

After reading all about institutional repositories, I am confused about what they actually are. Is it a tool like our courseweb that we use every day, or a combination of all the information on a school's intranet, or something we don't have yet at Pitt? The readings describe it as something very revolutionary, but it sounds commonplace to me.

Week 12 Readings

Using a Wiki to Manage a Library Instruction Program

The library this article focuses on lists a few specific uses of their library instruction wiki: "...the specifics of the classes' needs, unforeseen directions of the assignment, preferences of the professor that were not necessarily communicated previous to class time, and housekeeping issues..." None of these communication needs are new or extraordinary, but using the wiki to communicate does seem to offer several advantages. The simple ability to report technical issues for a website, for example, would save time and much hassle for the maintainer of the site, who might otherwise be inundated with emails reporting the same broken link or typo. Having a place to share feedback for a class gives the instructors a great means to improve their teaching skills. The wiki would also serve as a wonderful reference for new instructors of the class: in future classes, problems can be predicted or avoided altogether if there is a record of past problems and how they were resolved.

Creating the Academic Library Folksonomy

This sounds like it could be a big part of the solution to the problem of having to mine through endless pages of digital garbage when researching or just surfing online. It would be very helpful to know what others have found to be helpful or relevant to a topic, especially in the example from the article of classmates researching a common topic. The article also mentions how this tool could be used to bring some of the gems hidden in the deep web to the surface; if specialized scholars access and tag these sites, then they could be discovered more easily by students.
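As a rough illustration of the mechanism (my own toy sketch, not drawn from the article - all names and data here are invented), a folksonomy is essentially a shared index from free-form user tags to resources:

```python
from collections import defaultdict

class Folksonomy:
    """Toy social-tagging index: any user can attach free-form tags
    to a resource, and anyone can then look resources up by tag."""

    def __init__(self):
        self.by_tag = defaultdict(set)   # tag -> set of resource ids

    def tag(self, user, resource, *tags):
        # The user name isn't used here, but a real system would keep
        # it to show who tagged what and to weight popular tags.
        for t in tags:
            self.by_tag[t.lower()].add(resource)

    def find(self, tag):
        return sorted(self.by_tag.get(tag.lower(), set()))

f = Folksonomy()
f.tag("alice", "deep-web-article", "deep web", "search")
f.tag("bob", "crawler-paper", "search", "crawling")
print(f.find("search"))   # every resource anyone tagged 'search'
```

The point of the structure is that classmates tagging independently still converge on a shared, searchable vocabulary.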

We were already introduced to this concept with the citeulike assignment, but at the time I hadn't really considered the possibilities of a tool like this. It seems to me that social bookmarking sites need to cater to a specific population, like a university, or to a specific research area; tons of generic sites with limited popularity and content would only diminish their usefulness.

How a Ragtag Band Created Wikipedia

Given the concept alone, I am skeptical about the accuracy of Wikipedia, but in reality, I haven't been able to find too much fault with it so far. Wales said that "it isn't perfect, but it's much better than you would expect."

I was particularly interested in the portion about controversial issues. Wales said that it wasn't actually a big problem, because he believes that most people see the need for neutrality. So "vandalism" is very quickly corrected. I didn't realize that there are so many measures in place to get rid of bogus or poor contributions, using seemingly fair, democratic methods.

The vast majority of content that is added and edited is done by a relatively very small group. It makes sense that people who would dedicate so many hours to writing an encyclopedia just for fun tend to be reliable.

It has been said that, basically, academics hate Wikipedia. My experience, even in my first semester of studies at Pitt, has been very mixed. I wonder exactly how many teachers and professors at the various levels of education are 'for' and 'against' Wikipedia, and why. Wales also touched briefly on what may be the next big thing in education: open-licensed textbooks. Would these really be less biased?

Wednesday, November 12, 2008

Week 11 Readings

Digital Libraries: Challenges and Influential Work

This article mentions what seems to me to be an overarching theme in my classes and discussions: for a digital library, the ability to provide access to digital collections is vastly different from the ability to provide access to this information in a way that is useful and productive for users. Ideally, digital libraries seek to achieve continuity across the various information resources. These resources - databases of journals, books, etc. - are all self-governing, so it is no easy task to provide the "seamless federation" of them that is sought after. The article discusses several projects in this field and the tremendous progress that has been made toward this still-distant goal.

Dewey Meets Turing

This is a discussion about how the relationship between computer scientists and librarians has developed since the National Science Foundation's Digital Libraries Initiative (1994). While we have extensively studied the impact and benefits of computerization and digitization of libraries, the article interestingly begins by mentioning some of the DL Initiative's benefits on the computer science field. Specifically, it bridged a sort of gap between the scientists' responsibility as researchers and the funders' pressure on them to impact society.

The combining of libraries and computer technology alone would surely have been a huge task, but nothing compared to what a problem it is when we throw the World Wide Web into the equation.

I greatly enjoyed the metaphors in this article :)

Institutional Repositories

Another run-in with our friend Clifford. This time he talks about the development of institutional repositories, which immediately became a very important means of scholarly communication. He views these repositories as a "strategy for supporting the use of networked information to advance scholarship."

Basically, institutional repositories provide a university (usually) with means to manage and disseminate digital materials produced by the university and its community, for use by the same. Today, these capabilities are a few years old, but their impact has already been significant. I'm not sure exactly where we are in the development today, but Lynch predicted 5 years ago that mature repositories "will contain the intellectual works of faculty and students - both research and teaching materials - and also documentation of the activities of the institution itself in the form of records of events and performance and of the ongoing intellectual life of the institution. It will also house experimental and observational data captured by members of the institution that support their scholarly activities."

Monday, November 10, 2008

My Website Link

Well, here it is. I actually got it online without issues, but I tried everything I could think of and everything that others helpfully suggested on the discussion boards to get my pictures to work - to no avail. They work fine in KompoZer, and I even tried deleting them all and relinking them directly from Flickr, but I have had no success. So this is the site minus my pictures and screen captures - I am looking forward to seeing what I have been missing.

www.pitt.edu/~msh31

Friday, November 7, 2008

Week 10 Muddiest Point

Considering the searching, mapping, and indexing capabilities of all the various search engines, is any progress being made? Or is content being added at a much faster rate than the crawlers can keep up with?

Week 10 Readings

Web Search Engines

At this point, we've encountered dozens of descriptions of the magnitude of content on the Web, as well as the magnitude of the task that search engines face - but it still amazes me. Tens of thousands of computers running thousands of parallel threads of query at once, unceasingly.
I had never known what types of techniques spammers use, even very basic ones, so this was interesting as well. I didn't realize what great lengths spammers go to, even creating entire landscapes of servers, links, and pages to try to gain artificial credibility. I don't quite understand what makes it worth all this effort.
This read piqued my interest for several topics, without going into too much technical detail.
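Out of curiosity, here is a drastically simplified sketch of the inverted index that sits at the heart of these engines (toy documents invented for illustration - real engines add crawling, ranking, compression, and massive distribution on top):

```python
# Map each word to the set of documents containing it, then answer a
# multi-word query by intersecting the posting lists.
docs = {
    1: "web search engines crawl the web",
    2: "spammers build link farms",
    3: "search engines fight spam",
}

index = {}
for doc_id, text in docs.items():
    for word in set(text.split()):
        index.setdefault(word, set()).add(doc_id)

def search(query):
    postings = [index.get(w, set()) for w in query.split()]
    return sorted(set.intersection(*postings)) if postings else []

print(search("search engines"))  # documents containing both words
```

Those "thousands of parallel query threads" are, in essence, doing this lookup against an index sharded across many machines.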

The Deep Web

It didn't surprise me too much that search engines only scratch the surface. But I assumed that most of what we can't easily access is secure, has restricted access, or something. This is very frustrating when you think about how much relevant, rich content is hiding in the 'deep web', especially considering the worthless results I've gotten from so many searches.

Friday, October 17, 2008

Week 8 Muddiest Point

I am not looking to do the minimum here, but I am curious about how much time I should dedicate to becoming fluent at this stuff. Beyond the scope of this class, is this a skill that would be an advantage or a necessity to have as a future librarian?

Week 8 Readings

After the initial intimidation, I really did enjoy the HTML tutorial. I saved the Cheatsheet to my desktop, although I don't think I'll be comfortable enough with it to wean myself from the tutorial anytime soon.

The CSS tutorial indicated that I should begin with a basic knowledge of HTML- which I don't feel completely familiar with yet, so I didn't get as far with this one.

Wednesday, October 1, 2008

Week 7 Readings

How Internet Infrastructure Works

Acronym Party!!

sooo...Let me get this straight:

ISP - internet service provider
POP - point of presence
NAP - network access point
IP - internet protocol (address)
DNS - domain name system
URL - uniform resource locator
FTP - file transfer protocol
HTTP - hypertext transfer protocol

Google Video

What amazes me is how Google is constantly putting out amazing products. From what I've read about the perks of working at Google, it's a miracle any work gets done there.

Muddiest Point

I'm getting a better understanding about how computer networks are set up but I'm having trouble understanding how routers work. I know their basic function, but I just don't get how they filter the information that comes through and know what to send where.
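From what I've gathered since (a sketch under my own assumptions, not from the readings - the addresses and interface names are invented), the router's trick is a forwarding table of address prefixes: for each incoming packet, the most specific prefix that matches the destination address decides which interface the packet leaves on:

```python
import ipaddress

# Toy forwarding table: (destination prefix, outgoing interface).
table = [
    (ipaddress.ip_network("10.0.0.0/8"), "eth0"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth1"),
    (ipaddress.ip_network("0.0.0.0/0"), "eth2"),   # default route
]

def forward(dest):
    """Return the interface for the longest matching prefix."""
    addr = ipaddress.ip_address(dest)
    matches = [(net.prefixlen, port) for net, port in table if addr in net]
    return max(matches)[1]   # longest (most specific) prefix wins

print(forward("10.1.2.3"))  # the /16 route beats the /8
print(forward("8.8.8.8"))   # only the default route matches
```

Real routers do this lookup in specialized hardware at line speed, but the logic is the same "filtering" the readings allude to.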

Tuesday, September 30, 2008

Zotero/CiteULike Assignment

http://www.citeulike.org/user/hohmanms1

Hmmm...I fiddled and experimented, but still I ended up with only 28 articles in my library, even though I had 34 in my Zotero library, plus 9 from CiteULike. I don't know whether my problem was on the pre- or post-exporting side.

Wednesday, September 24, 2008

Week 6 Readings

Wikipedia: Local Area Network and Computer Network

I. Ways to classify networks
A. Scale
1. PAN - personal workspace (computer + printers, scanners, fax, etc.)
2. LAN - small area like a home, office, or building; usually multiple computers connected to shared external devices
3. CAN - connects LANs but in a contiguous area (college campus, military base)
4. MAN - connects LANs or CANs and covers a town/city area
5. GAN - still in the defining stages, goal of supporting mobile coverage between any number of networks

6. Internetwork - describes various types of interconnection between networks
a. Intranet - set of networks controlled by a single administrator, with only authorized participants
b. Extranet - single organization, but with limited external connections to trusted entities
c. Internet
- backbone of the WWW, worldwide interconnection of networks
B. Connection Method
1. Physical Wiring connections: include Ethernet, optical fiber, etc.
2. Wireless technology: uses radio waves
C. Functional Relationship: e.g. active networking, client-server, peer-to-peer
D. Network Topology: describes logical relations of network devices (as opposed to physical network layout)
II. Basic Hardware Components
A. Network Card/Adapter/Interface Card
B. Repeaters: retransmit signals to allow a signal to cover a longer physical distance in a network
C. Hubs: copies a packet for retransmission through several ports
D. Bridges: copies traffic to several ports, but selectively (unlike hubs)
E. Switches: include routers and bridges, or other devices that distribute traffic through all or selected ports
F. Routers: use headers and forwarding tables to use the best path to forward packets between networks

Management of RFID in Libraries

It was interesting to learn about the wide variety of common technologies based on RFID. For inventory identification purposes, it is easy to see how useful this technology could be for libraries and retail stores. But it is an even more worthwhile investment in libraries, since libraries are unique in that inventory recirculates many times. An added function is security. Though, as the article discusses, it is not a rock-solid theft prevention method, it is no worse than other library security measures, and it saves money since the tags do double duty.

Yet another advantage is the time and money saved, especially in inventory. Not only does this technology allow several items to be scanned at once, but since no direct line of 'sight' is required, books can remain on the shelves to be scanned.

Even though several disadvantages are discussed, none of them seem devastating, and I would imagine that issues such as bulky tag size will be resolved with further development of the technology.

Thursday, September 18, 2008

Week 5 Muddiest Point

I'm sure this must have been addressed in the readings somewhere, but I either missed it or the explanation went over my head. I don't quite understand where compression and decompression occur. Are the programs in charge of this part of the individual OS? Are they in the computer's hardware or in software? Are instructions for "rehydrating" files tacked on to the files themselves somehow, or are they recognized and assumed by the computer? Sorry to use such remedial language...

Week 5 Ctd...

Imaging Pittsburgh

It was nice to see a real-life application of many of the things we've been learning about. The article gives an overview of the various challenges the team faced in communication and in how they viewed the items in the collection. In the end, that variety was probably an advantage, because the situation will be the same on the users' end - people will be searching the databases with different criteria and different interests.

YouTube and Libraries

Sounds like a decent idea in theory - why not take advantage of a service that's so simple to use... and free. But would people actually use it for purposes such as these? I have never known anyone that used YouTube for anything other than purely recreational and largely mindless reasons.

Wednesday, September 17, 2008

Week 5 Readings

Wikipedia: Data Compression

I really enjoyed this read, partly because I wasn't utterly lost reading it!

When I read about how "lossless compression algorithms usually exploit statistical redundancy", and later: "any compression algorithm will necessarily fail to compress any data containing no discernible patterns", I first generalized that these types of algorithms could be used to compress language (words) but not number data. But in actuality, numerical data is by no means random. It should even be predictable to some extent, depending on what it is describing. The article mentioned the ability to find patterns in numbers themselves, such as repeating digits. But are there algorithms that recognize the probability of certain patterns occurring for data that describes weather vs. test scores vs. race times, for example?
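To make the "statistical redundancy" idea concrete, here is the simplest lossless scheme I know of, run-length encoding (my own toy example, not from the article): it wins exactly when the data contains repeated runs, and gains nothing on patternless input:

```python
def rle_encode(s):
    """Collapse each run of identical characters to (char, count)."""
    out = []
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append((s[i], j - i))
        i = j
    return out

def rle_decode(pairs):
    """Expand (char, count) pairs back to the original string."""
    return "".join(ch * n for ch, n in pairs)

data = "aaaabbbcc"
encoded = rle_encode(data)
print(encoded)                    # [('a', 4), ('b', 3), ('c', 2)]
assert rle_decode(encoded) == data
```

On a string like "abc" with no runs, the encoding is no shorter than the input - exactly the "no discernible patterns" limit the article describes.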

It also caught my attention to find that the study of rate-distortion theory was so old - dating to the very earliest days of the computer era - and the linked articles about Claude Shannon were especially interesting.

Data Compression Basics

I had so many "aha" moments reading this article! I am pretty remedial when it comes to computer matters, but readings like this help establish so many new connections in my brain. For instance, I knew nothing about what file extensions meant, except for the frustration they could cause me when trying to open them with incompatible programs. Now I understand that the extension basically indicates how the file is to be decompressed, which explains the ghastly strings of unreadable code that can appear when I open a file with the wrong application (if the file opens at all).
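One refinement I've since picked up (my own aside, not from the article): the extension is really just a hint, and many formats also begin with a "magic number" that identifies the format no matter what the file is named. Every gzip stream, for example, starts with the bytes 0x1f 0x8b:

```python
import gzip

# A program can identify gzip data by its magic bytes even if the
# file carries the wrong extension (or none at all).
payload = gzip.compress(b"hello, world")

assert payload[:2] == b"\x1f\x8b"            # gzip's magic number
assert gzip.decompress(payload) == b"hello, world"
```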

This also helped to really break down what audio, stills, and video data looks like at a fundamental level, and therefore why so many different compression methods are needed.

By far the most informative and enjoyable read so far - I even liked the writing style and subtle British humor.


Other articles and Muddiest Point to follow...

Monday, September 15, 2008

Comments: Week 4

https://www.blogger.com/comment.g?blogID=854093220520038877&postID=174628254612469280&page=1


https://www.blogger.com/comment.g?blogID=2369947867373070193&postID=7977490293481423923&page=1

Flickr Link

http://www.flickr.com/photos/29973599@N03/

Friday, September 12, 2008

Week 4 Readings and Muddiest Point

Wikipedia: Database

This article was very helpful to me in that it set forth basic principles that I sort of grasped instinctively about databases without having the knowledge to articulate it. For instance, the DBMS ideals of atomicity, consistency, isolation, and durability make perfect sense, but I wouldn't have been able to put my finger on them just through observation. Of the three basic database models - hierarchical, network, and relational - I am still having trouble making a solid mental image of a relational model.
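A concrete toy helped me picture the relational model (all table names and data here are invented): each table is a flat set of rows, relationships live in shared key values rather than in any physical hierarchy, and a join reassembles them at query time:

```python
import sqlite3

# Two tables linked by a foreign key; the join reconstructs the
# relationship between them.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT,
                        author_id INTEGER REFERENCES authors(id));
    INSERT INTO authors VALUES (1, 'Borges'), (2, 'Eco');
    INSERT INTO books VALUES (1, 'Ficciones', 1),
                             (2, 'The Name of the Rose', 2);
""")

rows = db.execute("""
    SELECT books.title, authors.name
    FROM books JOIN authors ON books.author_id = authors.id
    ORDER BY books.title
""").fetchall()
print(rows)
```

The contrast with the hierarchical model is that neither table "contains" the other; the link exists only in the matching `author_id` values.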

Metadata

The concept of "data about data" blew my mind until I read on to some of the examples. It is still a little tricky that the term has quite different connotations in different times and fields.
Soo...is metadata what is contained within a database? Or is 'metadata' a term interchangeable with 'database'?

Dublin Core Data Model

Ambitious. I may have missed the target, but is the goal here a universally standardized world database? Is this even possible in theory? I guess I never realized how far we are from being able to collaborate worldwide on any topic.

Muddiest Point:

I think I understand a lot more about databases after these readings, but I am still having trouble visualizing the models, or coming up with concrete examples. Do all models fall into one of the three given categories, or are there major variations?

Monday, September 8, 2008

Week 3 Comments

http://oliverlis2600.blogspot.com/2008/09/week-3-readings-and-murkiest-point.html

Friday, September 5, 2008

Week 3 Readings

Linux: completely new to me, so I found this article particularly intriguing. If Linux was the result of the first attempt at a standardized operating system, I wonder how the development of other OSs paralleled it, and when and why they began to surpass Linux in popularity. What I got from the readings was that the other OSs put their focus on user-friendliness and customer appeal.
I also found it interesting that anyone with Linux has the ability to further develop/alter the programming (if I understand correctly). I wonder how far these manipulations can be taken, and whether the system can become unrecognizable from the original "kernel".

[kernelthread] Wow...I just couldn't get much beyond the introduction, history, and conclusion. As soon as I got into the "meat" of the article, I was tempted to just skip this reading altogether. I moved on to the Wikipedia article, which felt like a walk in the park - almost too un-technical, as it seemed to dwell mainly on basic features and appearance. But this helped a lot when I revisited the kernelthread reading; I was able to digest more than I thought I could.

What I do know is...I love my Mac!

Thursday, September 4, 2008

Week 2 Readings

These readings complemented each other very well; I tabbed back and forth between them. The subject matter this week was well beyond my area of expertise, so my notes are basic outlines to help me get the material straight.

Computer Hardware (the Guts)

I. Motherboard
-CPU (brain)
-internal buses (connect internal components)
-external bus controllers (connect to external devices)
II. Power Supply
-converts AC to low-voltage DC power, includes cooling fan
III. Removable media devices
-include CDs, DVDs, Blu-ray discs, and others
IV. Internal storage
-hard drive
V. Sound and Graphics cards
VI. Networking
-modem, other internet connections
VII. Input and output devices include:
-keyboard, mouse, game controllers, scanner, microphone, webcam
-printer, monitor, speakers


Moore's Law


Exploring some of the linked articles (esp. transistors) was well worth the extra time - things made much more sense to me with the aid of some background info.

I. History
-a few similar predictions prior to Moore's Law
-1965: Moore's original observation of transistor count doubling every year
-1970: term "Moore's Law" coined
-1975: projection altered to doubling every two years
II. Areas of similar growth
-number of transistors per integrated circuit
-rate of increase of transistor density at minimized cost
-cost per transistor
-manufacturing costs have gone up, however (not sure I understand this)
-transistor speed per cost unit
-power consumption (closer to Moore's original rate)
III. Moore's law evolved from an observation of an existing trend into a goal for the pace of these technological advances
IV. The future and limits of the law
-remember, originally only referred to semiconductor circuits. many have broadened the usage to describe many other technologies
-expected to continue for anywhere from a couple decades to several centuries (?!) by a few speculators.
-ultimately, transistors built on the atomic scale (necessary stopping point)
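The arithmetic of the two-year doubling is easy to check (the 2,300-transistor starting point is roughly the Intel 4004 of 1971; the projection itself is just my own back-of-the-envelope, not from the article):

```python
def moore(initial, years, period=2):
    """Project a transistor count forward, doubling every `period` years."""
    return initial * 2 ** (years / period)

# 30 years of two-year doublings is 2**15 = 32,768x growth:
print(round(moore(2300, 30)))   # about 75 million transistors
```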

Computer History Museum

The Computer History Timeline was a good (non-technical) supplement to the Moore's Law article. I found the Internet History Exhibit quite interesting as well.

Monday, September 1, 2008

Information Literacy and Information Technology Literacy

Beginning on the second page of the article, Lynch elaborates on the two basic views of what information technology literacy means (oversimplified: skills vs. theory). This is not exactly the focus of the article, but it did make me wonder:

We are gaining access to rapidly advancing technologies and can learn how to use them skillfully and efficiently without having any literacy in the technology itself.

Certainly, this is not true exclusively for the present; this statement could be applied to people and advancements in the past. Early humans did not understand scientifically what fire was and how it worked, yet they learned to create and manipulate it for their use. I do not have full understanding of how a car works, yet I can use one proficiently.

Does the average consumer/user of today's technologies have a lower level of literacy than in previous times? How much of a problem (if any) is this illiteracy?

Week I Readings

Information Format Trends: Content, not Containers

Although the purpose of this article seems to be to set forth statistics and point out trends in information accessibility in a very straightforward manner, it raises questions and issues along the way. It is not hard to observe that information is becoming more portable and instantly accessible to the individual. It is not impossible to imagine libraries, at least in their current roles, becoming less popular, relevant, or useful. The question that came to me as I read was about the role of the librarian and how this role might ever be filled virtually/digitally. The article finally hints at this issue on p. 14. As a topic for personal research, I would like to find out:
- what types of more sophisticated programs already exist that help to map out available information in a more useful way than just searching for some keywords
- what is the predicted trend for these programs: Is there a virtual replacement for the librarian?