
Personalized Digital Television: Targeting Programs to Individual Viewers


DOCUMENT INFORMATION

Pages: 334
File size: 9.81 MB

Contents

[...] state of the art in the development of personalized EPGs that customize program recommendations to TV viewers. The described work addresses the identification of the TV viewer's preferences and the personalized recommendation of items to individual users and to groups of users, as is typical of household environments. This section also includes an analysis of TV viewers aimed at defining stereotypical [...] In order to take this fact into account, the confidence of each prediction is evaluated. The UMC employs this parameter to weight the predictions provided by the Experts into a Main User Model, whose contents are exploited to personalize the suggestion of TV programs.

3.1 THE EXPLICIT USER MODEL

This user model stores the user's personal data (e.g., occupation and age), her declared attitudes towards topics [...] PTV Listings Service (Cotter and Smyth, 2000) systems to generate personalized TV listings, and in the TiVo (2002) system to select programs for VCR recording. Collaborative filtering requires that the user positively or negatively rate the programs she has watched; the ranking profiles are collected [...]

L. Ardissono et al. (eds.), Personalized Digital Television, 3–26. © 2004 Kluwer Academic Publishers.

[...] accepts explicit feedback about programs that may be rated by clicking on the 'thumb up/down' buttons located in the bottom-right area of the User Interface. By default, the system works in personalized mode (Personalization ON) and ranks the TV programs by taking the user model into account. The less suitable programs are filtered out and the most promising ones are shown at the top of the list. The recommendation [...]
[...] of personalized services
– Program Processing: the automated identification, indexing, segmentation (e.g. into components, stories, commercials), summarization, and visualization of television programs, such as interactive documentaries
– Program Representation and Reasoning: representing the general characteristics and specific content of programs and shows, including the possible segmentation of programs [...]

[...] Università di Torino, Corso Svizzera 185, 10149 Torino, Italy, email: {liliana,cgena,torasso}@di.unito.it; Telecom Italia Lab, Multimedia Division, Via G. Reiss Romoli 274, 10148 Torino, Italy, email: {bellifemine,difino,negro}@tilab.com

Abstract. This chapter presents the recommendation techniques applied in the Personal Program Guide (PPG). This is a system generating personalized Electronic Program Guides for Digital [...] EPG embedded in the set-top box may continuously track the user's viewing behavior, unobtrusively acquiring precise information about her preferences. Moreover, the guide can be extended to become a personal assistant helping the user to browse and manage her own digital archive. To prove our ideas, we developed the Personal Program Guide (PPG). This is a personalized EPG that customizes the TV program [...] the set-top box and is deeply integrated with the TV playing and the video recording services offered by that type of device.

1. Introduction

With the expansion of TV content, digital networks and broadband, hundreds of TV programs are broadcast at any time of day. This huge amount of content has the potential to optimally satisfy individual interests, but it makes the selection of the programs to watch [...] lengthy task. Therefore, TV viewers end up watching a limited number of channels and ignoring the other ones; see Smyth and Cotter (in this volume) for a discussion about this issue. In order to face the information overload and facilitate the selection of the most interesting programs to watch, personalized TV guides are needed that take individual interests and preferences into account. As recommender systems have been successfully applied to customize the suggestion of items in various application domains, such as e-commerce, tourism and digital libraries (Resnick and Varian, 1997; Riecken, 2000; Mostafa, 2002), several efforts have been recently made to apply this technology to the Digital TV world. For instance, collaborative filtering has been applied in the [...]

[...] end of this volume. Personalized Digital Television: Targeting Programs to Individual Viewers. Edited by Liliana Ardissono, Dipartimento di Informatica, Università di Torino, Italy; Alfred [...]

[...] selection of content on an individual basis, and to provide easy-to-use interfaces that satisfy viewers' interaction requirements. Given the heterogeneity of TV viewers, who differ e.g. in interests [...]
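The excerpt above describes how the UMC weights the predictions provided by the Experts by their confidence when building the Main User Model. A minimal sketch of such confidence-weighted merging, under assumed conventions (scores and confidences in [0, 1]); the function name and fallback value are hypothetical, and the PPG's actual formula is not given in this excerpt:

```python
# Hypothetical sketch of confidence-weighted merging of expert predictions.
# Each expert emits (prediction, confidence); the merged score is the
# confidence-weighted average of the predictions.

def merge_predictions(expert_outputs):
    """expert_outputs: list of (prediction, confidence) pairs in [0, 1]."""
    total_conf = sum(conf for _, conf in expert_outputs)
    if total_conf == 0:
        return 0.5  # no evidence: fall back to a neutral score
    return sum(pred * conf for pred, conf in expert_outputs) / total_conf

# Example: three experts estimate a viewer's interest in a sports program.
score = merge_predictions([(0.9, 0.8), (0.4, 0.2), (0.7, 0.5)])
```

A confident expert thus pulls the merged score toward its own prediction, while a low-confidence expert contributes little.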

Posted: 06/07/2014, 15:23

References
5. Heuristic Evaluation of the First Prototype

A formal heuristic evaluation involves having a small set of evaluators (often usability experts) examine and judge the user interface with recognized usability principles (or heuristics). With heuristic evaluation it is possible to identify many usability problems early in the design phase (Nielsen, 1993; Lif, 1998). Although the focus of our research was placed on the three main aspects of recommendations, the heuristic evaluation also gave us insight into usability issues that affect the entire user interface of the TV recommender.

5.1. HEURISTICS

The heuristics that were used in evaluating the prototype are (Harst & Maijers, 1999; Shneiderman, 1998; Nielsen, 1993): provide task suitability, employ consistency [...]
6.4. FUTURE EVALUATIONS AND RESEARCH

The usability tests described in this section were the last tests we performed on the prototype interface. However, before a TV recommender system and its user interface as described in this chapter can be marketed commercially, more extensive usability tests should be performed involving usage in real-life household settings with a substantially larger number of users and over a longer period of time. The usage on multiple devices, individual user characteristics (such as color blindness), and integration with devices such as digital video recorders should also be taken into account. Additional usability problems could then be uncovered that remain unnoticed in a more laboratory-like environment.

7. Conclusions

This chapter has addressed the issue of the design of a usable interface for a TV recommender system, with a focus on three aspects of a recommender system that are reflected in the user interface, namely the presentation of predictions, the presentation of explanations, and the provision of feedback to the recommender. In order to develop an intuitive, easy-to-use interface and to develop guidelines for TV recommender system interfaces, we conducted an iterative design process. The chapter focused on both the design process itself and the results of the various design steps and evaluations, resulting in a number of guidelines for designing user interfaces of TV recommender systems.
[...] 2004, User Modeling and Recommendation Techniques for Personalized Electronic Program Guides. In this volume.
Baudisch, P. and Brueckner, L.: 2002, TV Scout: Lowering the Entry Barrier to Personalized TV Program Recommendations. In: P. De Bra, P. Brusilovsky and R. Conejo (eds.): Adaptive Hypermedia and Adaptive Web-Based Systems: Proceedings of the Second International Conference. May 29–31, Malaga, Spain, Springer, Heidelberg, pp. 58–68.
Buczak, A. L., Zimmerman, J. and Kurapati, K.: 2002, Personalization: Improving Ease-of-use, Trust and Accuracy of a TV Show Recommender. In: L. Ardissono and A. L. Buczak (eds.): Proceedings of the Second Workshop on Personalization in Future TV. Malaga, Spain, May 28, pp. 3–12.
Cayzer, S. and Aickelin, U.: 2002, A Recommender System Based on the Immune Network. In: Proceedings of the 2002 Congress on Evolutionary Computation, Honolulu, USA, May 12–17, pp. 807–813. Online: www.hpl.hp.com/techreports/2002/HPL-2002-1.pdf
Dix, A. J., Finlay, J. E., Abowd, G. D. and Beale, R.: 1998, Human-Computer Interaction. 2nd ed. London: Prentice Hall, Europe.
Dumas, J. S. and Redish, J. C.: 1993, A Practical Guide to Usability Testing. New Jersey, USA: Ablex Publishing Corporation.
Ehn, P.: 1992, Scandinavian Design: On Participation and Skill. In: P. S. Adler and T. A. Winograd (eds.): Usability: Turning Technologies into Tools. New York: Oxford University Press, pp. 96–132.
Faulkner, X.: 2000, Usability Engineering. Hampshire, UK: Palgrave.
Floyd, C., Mehl, W. M., Reisin, F. M., Schmidt, G. and Wolf, G.: 1989, Out of Scandinavia: Alternative Approaches to Software Design and System Development. Human-Computer Interaction 4(4), 253–350.
Gutta, S., Kurapati, K., Lee, K. P., Martino, J., Milanski, J., Schaffer, D. and Zimmerman, J. [...]
[...] 2000, TV Content Recommender System. In: Proceedings of the 17th National Conference on AI, Austin, July 2000, pp. 1121–1122.
Harst, G. and Maijers, R.: 1999, Effectief GUI-ontwerp. Schoonhoven, The Netherlands: Academic Service.
Herlocker, J.: 2000, Understanding and Improving Automated Collaborative Filtering Systems. PhD thesis, University of Minnesota.
Herlocker, J., Konstan, J. A. and Riedl, J.: 2000, Explaining Collaborative Filtering Recommendations. In: W. Kellogg and S. Whittaker (eds.): Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, Philadelphia, Pennsylvania: ACM Press, New York, pp. 241–250.
Houseman, E. M. and Kaskela, D. E.: 1970, State of the art of selective dissemination of information. IEEE Trans. Eng. Writing Speech III, 78–83.
Jackson, P.: 1990, Introduction to Expert Systems. Reading: Addison-Wesley.
Lieberman, H.: 1995, Letizia: An Agent that Assists Web Browsing. In: Proceedings of the Fourteenth International Conference on AI. Montreal, Canada, August, pp. 924–929.
Lif, M.: 1998, Adding Usability: Methods for Modelling, User Interface Design and Evaluation. Technical Report 359, Comprehensive Summary of Dissertation, Faculty of Science and Technology, University of Uppsala, Uppsala, Sweden.
Lindgaard, G.: 1994, Usability Testing and System Evaluation. London: Chapman & Hall.
Mandel, T. W.: 1997, Elements of User Interface Design. New York: John Wiley & Sons, Inc.
Masthoff, J.: 2004, Group Modeling: Selecting a Sequence of Television Items to Suit a Group of Viewers. In this volume.
Muller, M. and Kuhn, S. (eds.): 1993, Special issue on participatory design. Communications of the ACM 36(4).
[...] 2002, Getting to Know You: Learning New User Preferences in Recommender Systems. In: Proceedings of ACM Intelligent User Interfaces 2002. January 13–16, San Francisco: ACM Press, New York, pp. 127–134.
Rocchio, J. J.: 1965, Relevance Feedback in Information Retrieval. In: G. Salton (ed.): Scientific Report ISR-9, Information Storage and Retrieval, National Science Foundation, pp. XXIII-1–XXIII-11.
Shardanand, U. and Maes, P.: 1995, Social Information Filtering: Algorithms for Automated Word of Mouth. In: I. R. Katz, R. Mack, L. Marks, M. B. Rosson and J. Nielsen (eds.): Proceedings of Human Factors in Computing Systems (CHI'1995). May 7–11, Denver, New York: ACM Press, pp. 210–217.
Shneiderman, B.: 1998, Designing the User Interface: Strategies for Effective Human-Computer Interaction, 3rd ed. Longman, USA: Addison Wesley.
Sinha, R. and Swearingen, K.: 2002, The Role of Transparency in Recommender Systems. In: Extended Abstracts Proceedings of the Conference on Human Factors in Computing Systems (CHI'2002). April 20–25, Minneapolis, Minnesota: ACM Press, New York, pp. 830–831.
Smyth, B. and Cotter, P.: 2000, A Personalised TV Listings Service for the Digital TV Age. Knowledge-Based Systems 13, 53–59.
Smyth, B. and Cotter, P.: 2004, The Evolution of the Personalized Electronic Programme Guide. In this volume.
Spolsky, J.: 2001, User Interface Design for Programmers. Berkeley, USA: Apress.
Tognazzini, B.: 2000, If They Don't Test, Don't Hire Them. Online: http://www.asktog.com/columns/037TestOrElse.html
van Setten, M.: 2002, Experiments with a Recommendation Technique that Learns Category Interests. In: P. Isaías (ed.): Proceedings of the IADIS International Conference WWW/Internet 2002. Lisbon, Portugal, November 13–15, pp. 722–725.
6.1. SETUP OF THE USABILITY TEST

Our first usability test was conducted with three male and two female participants in individual sessions. One participant was in the age group of 15–20, two were 21–30, one was 31–45 and one was older than 45. All participants were familiar with the usage of TVs and had used a PC before: some had limited PC experience, the others average. They were provided with a tablet PC containing the TV recommender. Before starting the session, they were allowed to practice the use of the tablet PC with a stylus as an input device by playing a few games.

All actions performed by the participants were recorded on a VCR by capturing the image of the tablet PC. The participants were asked to go through several assignments on their own, without any help from or communication with the observer, and to think aloud. To ensure that the participants had real goals when using the personalized EPG, the assignments included questions they had to answer, e.g. 'How well do you think the program "Newsradio" suits your interests, according to the system? (in your own words)'. Participants were clearly instructed that we were evaluating the user interface and not them, so that if they were unable to carry out an assignment it was not their fault, but a fault of the interface. In order to assess the perceived quality of the user interface, participants were asked to fill out a small questionnaire (16 questions on a 5-point Likert scale). After finishing all assignments, they had a brief discussion with the observer.

Before our usability test, we defined the following quantitative usability goals:
– All participants must be able to perform all assignments on their own, without intervention by the observer.
– Each assignment must be completed within a specified time, which was determined by measuring our own use of the system (adding a safety margin because we were well acquainted with the interface) and based on a few small tests with different people. The participants were not aware of this predefined maximum time; they could continue until the assignment was completed, or abort the current assignment if they felt the system was not responding properly.

The qualitative usability goals were:
– The user interface must be easy to use.
– The interface should be intuitive.
6.2. RESULTS OF THE FIRST USABILITY TEST

All participants performed all assignments without help from the observer. However, not all participants accomplished all assignments within our predefined maximum time (all reported times are true 'interaction times' and do not include time spent reading the question). In particular, we identified the following problems:
– In the used prototype, the stars of a listed program turned white to indicate that this was not a prediction but feedback provided previously by the user for that same program. This appeared to be unclear: it took three participants more than one minute each to figure out how the interface displayed this information.
– Users could drag programs to their watch lists by clicking on a handle pane next to each listing (see Figure 10.8) and then dragging the listing(s) to their watch lists. Based on the heuristic evaluation, we had already changed the mouse cursor symbol to indicate that the user could initiate a drag operation when hovering over this area. Participants nevertheless assumed that a program could be dragged by clicking at any point in its display area. It took two participants more than 1.5 minutes each to complete the assignment.
– Finally, knowing how to find out which programs are in a certain genre was not intuitive (once again it took two participants more than 1.5 minutes each to complete this assignment the first time). However, when asked a second time, all participants completed this assignment well within the maximum time allotted.

The measured times also indicate that participants quickly learned how to use the interface. For instance, it took our five participants an average of 49 seconds to highlight genres the first time they had to do this. On a second occasion, it took them only 19 seconds. All participants were able to work out how to deal with this particular aspect of the interface, and easily remembered and applied this knowledge later. Decreasing execution times for similar tasks were also seen in assignments in which participants had to drag programs to their watch lists. The first time it took them an average of 120 seconds, and the second time only 12 seconds. Because the average time for completing this assignment the first time greatly exceeded the maximum allowable time limit, we changed the way programs could be dragged to the watch list: dragging could now be initiated by clicking anywhere in the display area of a program, rather than a dedicated handle only.
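The quantitative analysis described above reduces to two simple statistics per assignment: the mean interaction time across participants and the number of participants who exceeded the predefined maximum. A small sketch of that bookkeeping; the per-participant times below are illustrative only (the text reports averages, not raw values), and the function name is hypothetical:

```python
# Hypothetical sketch of the per-assignment timing analysis used in the
# usability test: mean interaction time plus a count of participants who
# exceeded the predefined maximum time for the assignment.

def evaluate_task(times, max_time):
    """times: interaction times in seconds, one per participant."""
    mean = sum(times) / len(times)
    over_limit = sum(1 for t in times if t > max_time)
    return mean, over_limit

# Illustrative values chosen so the mean matches the reported 49 seconds
# for the first genre-highlighting attempt.
first_attempt = [60, 45, 55, 40, 45]
mean_time, over = evaluate_task(first_attempt, max_time=50)
```

Comparing the same statistic across a first and a second attempt at the same task is what reveals the learning effect reported in the text.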
6.2.1. Presentation of Recommendations

All participants instantly understood the meaning of the stars that indicated their predicted interest in a particular program. Also, when looking for more information on a certain program, they intuitively clicked on the program in question. Participants agreed that the interface clearly indicated whether or not a program would meet their interests (score 4.2 out of 5). The use of colors (green, yellow and red stars) was seen as explanatory and clarifying (score 4.6 out of 5). This calmed the concern that arose in the heuristic evaluation; users do appreciate the use of colors for presenting predictions.

In our design, the difference between a recommendation and a program for which the user had already provided feedback was expressed by replacing the prediction with the feedback of the user, and visually changing the color of the stars to white. This only appeared to be clear to two of the participants. One of the other three noticed it later in the test. We made this clearer in the next version of our prototype, by adding a small icon of a person beside the stars if the rating was based on feedback given by the user (the color still changed to white) and by making it clearer when providing feedback (see next section).
6.2.2. Providing Feedback on Recommendations

All participants were able to quickly access the part of the interface with which they could give feedback on a program recommendation. The way to do this with the feedback widget was purposely kept redundant: users could use the slider or directly click on the stars. Three participants used the slider only, one participant clicked on the stars only, and one participant used both options.

After rating a program in a pop-up window, four out of five participants were insecure about how to close the window. One participant pressed the 'Reset' button, while others eventually used the 'X' button in the top-right corner of the pop-up. One of the participants reopened the pop-up window in order to make sure that his feedback was saved properly. During the discussion, four participants indicated that they expected some explicit feature to save their feedback, such as a save button. The lack of specific feedback from the system on their actions resulted in insecurity. This finding is in contradiction with the opinion of one of the usability experts in the heuristic evaluation. It appears that although it takes an extra action, users prefer to be certain that their feedback is saved. We changed this in the user interface by reintroducing the save button. The save button is only enabled when the user has given or changed a rating. Pressing the button changes two visual states: the stars of the feedback widget turn white (the same color that is used for the stars in a program listing the user has already given feedback on) and the save button becomes disabled.

According to the final questionnaire, four participants agreed that giving feedback on a recommendation takes little effort (score 4.75 out of 5) while one participant was indecisive about this matter.
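The revised save-button behavior described above is a small state machine: rating enables the button, and saving commits the rating, turns the stars white, and disables the button again. A sketch of that logic, with all class, method, and attribute names hypothetical (the chapter describes the behavior, not its implementation):

```python
# Hypothetical sketch of the revised feedback widget's state logic.
# The save button is enabled only after the user gives or changes a rating;
# saving commits the rating, turns the stars white, and disables the button.

class FeedbackWidget:
    def __init__(self, saved_rating=None):
        self.saved_rating = saved_rating
        self.pending_rating = None
        self.save_enabled = False
        self.stars_white = saved_rating is not None

    def rate(self, stars):
        """User clicks the stars or moves the slider."""
        if stars != self.saved_rating:
            self.pending_rating = stars
            self.save_enabled = True  # an unsaved change now exists

    def save(self):
        """User presses the save button (only has effect when enabled)."""
        if self.save_enabled:
            self.saved_rating = self.pending_rating
            self.save_enabled = False   # visual cue: nothing left to save
            self.stars_white = True     # same color as rated listings

widget = FeedbackWidget()
widget.rate(4)
widget.save()
```

The two visual state changes on save are exactly what gives users the certainty that their feedback was stored, which the test showed they wanted.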
6.2.3. Explanations of Recommendations

All participants were able to quickly access the part of the interface with which they could find explanations about a prediction. This was also confirmed by the final questionnaire, in which all participants agreed that the explanations were easy to find (score 4.8 out of 5). Participants also indicated that explanations were visualized in a good way (score 5 out of 5) and that the explanations serve their purpose well, because they clarify the recommendation (score 4.6 out of 5). Participants also indicated that they found the explanations to be relatively credible (score 3.8 out of 5). However, some participants indicated that they would like more detailed explanations. The survey also indicated that some people prefer minimal explanations, while others prefer more details. Therefore, in our next prototype we allowed users to ask for more details when desired.
6.2.4. Interaction with Various Interface Components

In general, participants found that the interface was easy to use (score 4.2 out of 5) and that they were in control of it (score 4.6 out of 5). This conclusion is also supported by the measured times it took participants to complete the assignments. Separating the interface into different main functions on tabbed 'file cards' (see Figure 10.9) also appeared to be a good design decision. All participants managed to find information on these file cards quickly, and knew intuitively how to use the tabs. Finally, the pop-up window with extended program information, feedback and explanations appeared in a fixed position relative to the program on which the user clicked. Some participants mentioned that this obstructed the programs listed below the selected program, and suggested making the window draggable. This was changed in the follow-up prototype.

6.3. ITERATION

Because some of the changes to the original prototype were not trivial (e.g. how user ratings are saved and how they are visually presented), iterative design theory requires another evaluation test, which could focus on the revised parts of the interface. Another usability test was therefore performed that was similar to the one described in the previous section. Five different participants were asked to participate (one in the age group of 15–20, two in the group 21–30, one in the group 31–40 and one well above 41). The usability goals of this test corresponded with the usability goals of the previous test but focused on the changed aspects of the interface.

This second evaluation attested significant improvements in the usability of the prototype. All participants were able to perform all assignments within the estimated time limit without help from the observer. Measured times for completing the assignments show that the changes made to the prototype greatly simplify the tasks that proved to be too difficult in the first usability test. Dragging four programs of their own choice to the watch list took participants an average of 79 seconds (compared [...]

