If You Build It They Will Not Come: Usage Data from a Recommender System Based on Web Annotations and Their Implications
Andy Walker, Utah State University and Jennifer Brill, Virginia Tech
Thursday, September 29, 2005, 3:15-4:00 pm
Infrastructure improvements continue to extend Internet access to teachers and their students. As early as 2000, almost 98% of full-time public school teachers in the United States had Internet access in one form or another (Cattagni & Farris, 2001). While availability within individual classrooms was lower, at 77% in the same study, access remains high and continues to grow. In addition, once on the Internet, there are more resources to find. Although the plan certainly has its detractors (Tennant, 2005), Google has proposed digitizing huge portions of the print collections of several major libraries and placing them online for free. Despite this increase in access, and perhaps because of the steady increase in the volume of information, the Internet is not always used in educationally relevant ways. One study of sixth-grade science students noted that Internet usage was often unstructured and spurious and, as a result, unproductive (Wallace, Kupperman, Krajcik, & Soloway, 2000).
There are several possible reasons for this lack of meaningful use. One primary concern is the abundance of poor web-based resources. Misinformation has been documented in several educational domains (Robertson, 1999), making the task of identifying valuable resources more onerous. This creates a very real need for important educational stakeholders like teachers, students, and instructional designers to be able to evaluate online educational resources. Whereas selection tasks for print-based materials tended to focus on relevance, information consumers must now also weigh issues like quality, accuracy, and authoritativeness (Reeves & Harmon, 1994; Schrock, 2002).
In addition to the poor quality of content, the Internet itself was never designed with education in mind. According to researchers in situated cognition (Brown, Collins, & Duguid, 1989; Brown & Duguid, 2000), it is artificial to separate knowledge from the people or communities that use it—yet this is precisely what the Internet encourages. By relegating information exchange to isolated clients requesting knowledge from individual servers, a great deal of valuable material is lost. The users of web sites need to be able to define the juxtaposition of and relationships between sets of web-based resources, in addition to elaborating on their value to their own communities of practice (CoP). In short, the evaluation of educational resources should not occur in a vacuum but should instead be part of critically reflective practice among an active group pursuing common goals.
In partial fulfillment of this need, students at Lehigh University from three different programs (counseling psychology, instructional design, and teacher education) taking a shared core class are asked to annotate web-based educational resources. While the structure for these annotations is based partly on the work of Reeves and Harmon (1994) as well as Schrock (2002), students are encouraged to formulate annotations in line with their own interests and needs. In the past, these annotations were completed and submitted to the instructor in relative isolation. Students might be asked to share one of their annotations with the class, but this generally resulted in three problems. First, since the core class is shared across three different programs, students often found the shared information irrelevant. This fits with the concept that boundary members between CoPs are a minority group (Lave & Wenger, 1991) and that forcing that role can be counterproductive. While all of these students are interested in learning in general, they are not necessarily interested in annotating, reviewing, and hearing about such a diverse set of web-based resources to inform their practice. The second problem is temporal: students in subsequent semesters of the class rarely got to benefit from the work of their preceding peers. The third, related problem is that instructors were acting as censors of information that was of far more interest to the population that produced it. In short, by restricting students to sharing a single annotation, or by selecting a set of annotations believed to be of interest to the class as a whole, instructors were operating as an artificial mediator.
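The abstract does not specify the database schema, but a minimal sketch of what a single annotation record might contain, assuming hypothetical fields loosely derived from the evaluation dimensions discussed by Reeves and Harmon (1994) and Schrock (2002), could look like the following. All field names and the example values are illustrative, not the actual Lehigh system's design.

```python
# Hypothetical sketch of one annotation record; field names are
# illustrative assumptions, not the system's actual schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Annotation:
    url: str
    student: str
    program: str            # e.g., "counseling psychology"
    summary: str            # free-text description of the resource
    quality: int            # 1-5 rating (assumed scale)
    accuracy: int           # 1-5 rating (assumed scale)
    authority: int          # 1-5 rating (assumed scale)
    submitted: date = field(default_factory=date.today)

# Example record using a URL cited in this abstract's references
example = Annotation(
    url="http://school.discovery.com/schrockguide/",
    student="jdoe",
    program="teacher education",
    summary="Checklist-style guide to evaluating web resources.",
    quality=4, accuracy=5, authority=4,
)
```

Structured numeric fields like these, alongside free text, are what make the ratings machine-usable for the recommender described next.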
To address some of these challenges, students in a web-database integration class teamed up with an instructor of the core class to develop an online database that allows students to submit, search, and read web site annotations from fellow students. Further, ratings provided as part of those annotations can be used to drive a recommender system (Resnick & Varian, 1997), which provides website referrals based on the underlying preferences of system users. Thus, annotations are no longer submitted in quasi-isolation but are housed in a repository to which members of this diverse group can add and from which they can benefit. Additionally, annotations are no longer mediated artificially by an instructor but are mediated by users’ own personalized preferences.
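The abstract does not detail the recommendation algorithm, but user-based collaborative filtering is a common way to turn such ratings into referrals (Resnick & Varian, 1997). The following is a minimal sketch of that general technique, assuming a toy in-memory ratings table and cosine similarity; all names and data are hypothetical, and the actual system's method may differ.

```python
# Minimal sketch of user-based collaborative filtering over site ratings.
# The ratings table, user names, and site names are all hypothetical.
from math import sqrt

# ratings[user][site] = rating on an assumed 1-5 scale
ratings = {
    "alice": {"siteA": 5, "siteB": 3, "siteC": 4},
    "bob":   {"siteA": 4, "siteB": 1, "siteD": 5},
    "carol": {"siteB": 2, "siteC": 5, "siteD": 4},
}

def cosine_similarity(a, b):
    """Similarity between two users over the sites both have rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[s] * b[s] for s in shared)
    norm_a = sqrt(sum(a[s] ** 2 for s in shared))
    norm_b = sqrt(sum(b[s] ** 2 for s in shared))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(user, ratings, top_n=3):
    """Score sites the target user has not rated, weighting each
    neighbor's rating by that neighbor's similarity to the target."""
    scores, weights = {}, {}
    for other, other_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine_similarity(ratings[user], other_ratings)
        if sim <= 0:
            continue
        for site, r in other_ratings.items():
            if site in ratings[user]:
                continue  # only recommend unseen sites
            scores[site] = scores.get(site, 0.0) + sim * r
            weights[site] = weights.get(site, 0.0) + sim
    ranked = sorted(((scores[s] / weights[s], s) for s in scores),
                    reverse=True)
    return [(site, round(score, 2)) for score, site in ranked[:top_n]]

print(recommend("alice", ratings))
```

Running this for "alice" predicts a rating near 4.5 for the one site she has not yet annotated, illustrating how a student's referrals can be driven by the ratings of peers with similar preferences rather than by an instructor's selection.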
The presentation will include a description of the system, its ties to situated cognition, and the process and lessons learned in using it, along with data from four semesters of usage. It will conclude with a description of the corresponding recommender system, some major limitations in this particular setting, and directions for future work.
References
Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18, 32-42.
Brown, J. S., & Duguid, P. (2000). The social life of information. Cambridge, MA: Harvard Business School Press.
Cattagni, A., & Farris, E. (2001). Internet access in U.S. public schools and classrooms: 1994-2000 (NCES No. 2001-071). Washington, DC: National Center for Education Statistics, U.S. Department of Education.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge, England: Cambridge University Press.
Reeves, T., & Harmon, S. (1994). Systematic evaluation procedures for interactive multimedia for education and training. In S. Reisman (Ed.), Multimedia computing: Preparing for the 21st century (pp. 472-505). Harrisburg, PA: Idea Group Publishing.
Resnick, P., & Varian, H. R. (1997). Recommender systems. Communications of the ACM, 40(3), 56-58.
Robertson, J. S. (1999). The curse of plenty: Mathematics and the Internet. Journal of Computers in Mathematics and Science Teaching, 18(1), 3-5.
Schrock, K. (2002). The ABCs of Web site evaluation. Retrieved June 19, 2005, from http://school.discovery.com/schrockguide/pdf/weval_02.pdf
Tennant, R. (2005). Google Out of Print. Library Journal, 130(3), 27.
Wallace, R., Kupperman, J., Krajcik, J., & Soloway, E. (2000). Science on the Web: Students online in a sixth-grade classroom. Journal of the Learning Sciences, 9(1), 75-104.