Rating-Based Collaborative Filtering

Rating-Based Collaborative Filtering is a branch of collaborative filtering (CF) that uses only numeric measures of a user's preference for an item. Collaborative filtering is a technique for recommender systems that focuses on presenting each user only those items they are most likely to be interested in.

History

In the early 1990s, the idea of helping users focus only on the items they will like set the stage for collaborative filtering recommendation systems. The insight behind collaborative filtering recommender systems is that people have relatively stable tastes. Therefore, if two people have agreed in the past, they will likely continue to agree in the future. A key part of this insight (and the major difference between this and content-based recommendation strategies) is that it does not matter why two users agree. They could share tastes in books because they both like the same style of book binding, or they could share taste in movies due to a nuance of directing; a collaborative filtering recommender system does not care. So long as the two users continue agreeing, the stated preferences of one user can be used to predict the preferences of another. Since very little supplementary information is needed, collaborative filtering algorithms are applicable in a wide range of circumstances.

Since the mid-1990s, collaborative filtering recommender systems have become very popular in industry. Amazon, Netflix, Google, Facebook, and many others have deployed collaborative filtering algorithms to help their users find things they would enjoy. The popularity of these deployments has pushed the field of recommender systems forward, leading to faster, more accurate recommenders. These improvements have been coupled with changes in how collaborative filtering systems are deployed to support users. Originally, collaborative filtering systems were used as a simple filter that helped users isolate interesting news articles from a broader stream of information. As recommender systems have grown larger, it has become evident that users may not want a filter, since they do not need to see every good item. Instead, it is now common to think of these algorithms as recommending a short list of the best items for a target user, even if that means some good items will be missed.

Concepts and Notation

The two most central objects in a recommendation system are the users the system recommends to and the items the system might recommend. One user represents one independently tracked account for recommendation. Typically this corresponds to one system account, and is assumed to represent one person's tastes. The set of all users is denoted U, with u and v denoting individual users from the set. One item represents one independently tracked thing that can be recommended. Most systems only track one kind of thing, which maps naturally to items for recommendation. Some systems track things at multiple levels of detail, some more applicable for recommendation than others. The set of all items is denoted I, with i and j denoting individual items from the set.

In a rating-based collaborative filtering system, ratings are collected from users on a given rating scale, such as the 1-to-5 star scale used by MovieLens, Amazon, and Netflix, or the ten-star scale used by the Internet Movie Database. Often the choice of rating scale is not relevant to a recommendation algorithm; some interfaces, however, deserve special consideration. Small scales such as a binary (thumbs up, thumbs down) scale or a unary "like" scale may require special adaptations when applying algorithms designed with a larger scale in mind.

Recommender systems often represent the set of all ratings as a sparse rating matrix R, which is a |U| × |I| matrix. R only has a value r_ui in the cell (u, i) where user u has rated item i; for all other pairs of u and i, the cell is blank. The set of all items rated by a user u is I_u, and the collection of all ratings by one user can also be expressed as a sparse vector r_u. Similarly, the set of all users who have rated an item i is U_i, and the collection of these ratings can be expressed as a sparse vector r_i.
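
For illustration, one simple way to store such a sparse rating matrix in code is a dictionary keyed by (user, item) pairs, from which I_u, U_i, and the sparse vectors can be derived on demand. This is only a minimal sketch; the toy users, items, and rating values below are invented, and production systems typically use sparse matrix libraries instead.

    # A sparse rating matrix R stored as a dict keyed by (user, item) pairs.
    # The toy data is purely illustrative.
    ratings = {
        ("alice", "toy_story"): 4.0,
        ("alice", "heat"): 2.0,
        ("bob", "toy_story"): 5.0,
        ("bob", "memento"): 3.5,
    }

    def items_rated_by(user):
        # I_u: the set of items rated by one user.
        return {i for (u, i) in ratings if u == user}

    def users_who_rated(item):
        # U_i: the set of users who have rated one item.
        return {u for (u, i) in ratings if i == item}

    def user_vector(user):
        # r_u: one user's ratings as a sparse vector (item -> rating).
        return {i: r for (u, i), r in ratings.items() if u == user}

    print(items_rated_by("alice"))       # {'toy_story', 'heat'} (order may vary)
    print(users_who_rated("toy_story"))  # {'alice', 'bob'}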

Algorithm

In collaborative filtering, there are two different tasks: 1) Prediction task: predict the rating a target user u would give to an item i. Rating predictions can be used by users to quickly evaluate an unknown item: if the item is predicted highly, it might be worth further consideration. 2) Ranking task: generate a personalized ranking of the item set for each user.

Any prediction algorithm can be used to generate rankings, but not all ranking algorithms produce scores that can be interpreted as a prediction. The notation s(u, i) will be used to represent the output of both types of algorithm.

Baseline Predictors

The global baseline is b_ui = μ, which simply predicts the average rating value μ for all users and items. It can be trivially improved by using a different constant for every item or user, leading to the item baseline and the user baseline respectively. However, most rating scales are not well anchored: two users might use different rating values to express the same preference for an item. This leads to a generic form of the baseline algorithm, the user-item bias model b_ui = μ + b_i + b_u, which includes μ, the average rating; b_i, the item bias, representing whether an item is, on average, rated better or worse than average; and b_u, the user bias, representing whether the user tends to rate high or low on average. Most of the time, the bias estimates include a damping term β to avoid poor performance when a user or item has very few ratings. Damping parameter values of 5 to 25 are typical, but for best results β should be re-tuned for any given system.
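
The following is a minimal sketch of such a damped user-item bias model in Python, assuming the dictionary-of-ratings representation shown earlier; the damping value and the convention of estimating item biases before user biases are common choices rather than fixed parts of the method.

    from collections import defaultdict

    def baseline_model(ratings, beta=5.0):
        # Fit a damped user-item bias model: prediction = mu + b_i + b_u.
        mu = sum(ratings.values()) / len(ratings)

        # Item biases: damped mean of (rating - mu) for each item.
        item_sum, item_count = defaultdict(float), defaultdict(int)
        for (u, i), r in ratings.items():
            item_sum[i] += r - mu
            item_count[i] += 1
        b_i = {i: item_sum[i] / (item_count[i] + beta) for i in item_sum}

        # User biases: damped mean of (rating - mu - b_i) for each user.
        user_sum, user_count = defaultdict(float), defaultdict(int)
        for (u, i), r in ratings.items():
            user_sum[u] += r - mu - b_i[i]
            user_count[u] += 1
        b_u = {u: user_sum[u] / (user_count[u] + beta) for u in user_sum}

        def predict(u, i):
            # Unknown users or items fall back to a zero bias.
            return mu + b_i.get(i, 0.0) + b_u.get(u, 0.0)
        return predict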

Prediction Algorithm

Based on machine learning taxonomies, prediction algorithms can be separated into model-based algorithms and memory-based algorithms. The most important rating-based collaborative filtering algorithms in these two groups are matrix factorization algorithms and nearest neighbor algorithms, respectively.

Nearest Neighbor Algorithms (The first collaborative filtering algorithms)  

This is the most direct implementation of the idea behind collaborative filtering: simply find users who have agreed with the current user in the past and use their ratings to make predictions for the current user. The first step, for a given user u and item i, is to generate a set of users who are similar to u and have rated item i. The set of similar users is normally referred to as the user's neighborhood N_u, with the subset who have rated item i being N_ui. Once the set of neighbors is found, a weighted average of their ratings on the item serves as the prediction (see the sketch after the list below). Usually, Pearson's r [1][2] is used as the similarity function sim(u, v).

However, Pearson correlation has two significant weaknesses.

  1. It is only defined over the items rated by both users, and a decision needs to be made on how to handle ratings for other items. The standard statistical way of handling this is to ignore those ratings completely when computing the correlation.
  2. The correlation between any two users who have rated exactly two of the same items is 1, and in general correlations based on small numbers of common ratings tend towards extreme values supported by little data. When looking for similar neighbors to produce high-quality predictions, however, it is better to prefer neighbors whose similarity is well-substantiated by the data.
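
The following is a minimal sketch of user-user prediction with mean-centered weighted averaging, using Pearson correlation over co-rated items shrunk by the number of co-ratings, one common response to the second weakness; the shrinkage constant and neighborhood size are illustrative assumptions, and user_vectors is assumed to map each user to a dict of item ratings.

    import math

    def pearson(ru, rv, shrink=10):
        # Pearson correlation over co-rated items, damped by the number of co-ratings.
        common = set(ru) & set(rv)
        n = len(common)
        if n < 2:
            return 0.0
        mu_u = sum(ru[i] for i in common) / n
        mu_v = sum(rv[i] for i in common) / n
        num = sum((ru[i] - mu_u) * (rv[i] - mu_v) for i in common)
        den = math.sqrt(sum((ru[i] - mu_u) ** 2 for i in common)) * \
              math.sqrt(sum((rv[i] - mu_v) ** 2 for i in common))
        if den == 0.0:
            return 0.0
        return (num / den) * (n / (n + shrink))  # significance damping

    def predict_user_user(user_vectors, u, i, k=20):
        # Weighted average of the k most similar neighbors' mean-centered ratings on item i.
        ru = user_vectors[u]
        mu_u = sum(ru.values()) / len(ru)
        neighbors = sorted(
            ((pearson(ru, rv), v) for v, rv in user_vectors.items() if v != u and i in rv),
            reverse=True)[:k]
        num = den = 0.0
        for s, v in neighbors:
            rv = user_vectors[v]
            mu_v = sum(rv.values()) / len(rv)
            num += s * (rv[i] - mu_v)
            den += abs(s)
        return mu_u if den == 0.0 else mu_u + num / den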

Item-based nearest neighbor is more popular than user-user nearest neighbor. One advantage of the item-item algorithm is that item similarities and neighborhoods can be shared between users; furthermore, in systems where the set of users is much larger than the set of items, the average item has very many ratings, which makes the similarities more reliable. In addition, item-based nearest neighbor is efficient, since the precomputed similarity model can be stored in memory.
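
A minimal sketch of precomputing such an item-item model (plain cosine similarity between the sparse item vectors r_i) is shown below; real deployments usually mean-center the ratings and truncate each item's neighbor list, so this is an illustration rather than a full algorithm.

    import math
    from itertools import combinations

    def build_item_model(item_vectors):
        # item_vectors: dict item -> {user: rating}, i.e. the sparse vectors r_i.
        # Returns dict item -> list of (cosine similarity, other item), sorted descending.
        norms = {i: math.sqrt(sum(r * r for r in v.values()))
                 for i, v in item_vectors.items()}
        model = {i: [] for i in item_vectors}
        for i, j in combinations(item_vectors, 2):
            common = set(item_vectors[i]) & set(item_vectors[j])
            dot = sum(item_vectors[i][u] * item_vectors[j][u] for u in common)
            if dot and norms[i] and norms[j]:
                s = dot / (norms[i] * norms[j])
                model[i].append((s, j))
                model[j].append((s, i))
        for i in model:
            model[i].sort(reverse=True)
        return model

This precomputed model can be held in memory and shared across all users, which is the efficiency advantage described above.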

One interesting variation on nearest neighbor methods is the k-furthest neighbors algorithm, which researchers can use to increase the diversity of a recommendation system.

Matrix Factorization Algorithm

The matrix factorization algorithm is a kind of latent feature model. Its goal is to find a user-feature preference matrix P and an item-feature relevance matrix Q which satisfy R ≈ PQ^T. Usually, there are two ways to solve this problem.

There are two important and related difficulties with the singular value decomposition in its natural form. First, it is only defined over complete matrices, but most of R is unknown. In a rating-based system, this problem can be addressed by imputation, that is, assuming a default value (e.g. the item's mean rating) for unknown values.[3] If the ratings matrix is normalized by subtracting a baseline prediction before being decomposed, then the unknown values can be left as 0's and the normalized matrix processed using standard sparse matrix routines.
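
A minimal sketch of the imputation approach on a small dense matrix is shown below, assuming a NumPy array with NaN marking unknown ratings and at least one rating per item; the rank k is an illustrative parameter.

    import numpy as np

    def svd_predict(R, k=2):
        # R: users x items array with np.nan for unknown ratings.
        item_means = np.nanmean(R, axis=0)
        filled = np.where(np.isnan(R), item_means, R)  # impute unknowns with item means
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        # Rank-k reconstruction used as the matrix of predicted ratings.
        return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]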

The target of training a matrix decomposition model is to learn matrices P and Q such that predicting the known ratings in R with the multiplication PQ^T has minimal squared error. To account for systematic user and item effects, the formula can be rewritten as a biased matrix factorization model:[4]

    r̂_ui = μ + b_u + b_i + p_u^T q_i

where μ, b_u, b_i, p_u, and q_i can be learned by a stochastic gradient descent algorithm.
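
A minimal sketch of biased matrix factorization trained with stochastic gradient descent is shown below, in the spirit of the model above; the learning rate, regularization strength, factor count, and initialization are illustrative choices only.

    import random
    from collections import defaultdict

    def train_biased_mf(ratings, n_factors=20, n_epochs=20, lr=0.005, reg=0.02):
        # ratings: dict (user, item) -> rating. Returns a predict(u, i) function.
        mu = sum(ratings.values()) / len(ratings)
        b_u, b_i = defaultdict(float), defaultdict(float)
        p = defaultdict(lambda: [random.gauss(0, 0.1) for _ in range(n_factors)])
        q = defaultdict(lambda: [random.gauss(0, 0.1) for _ in range(n_factors)])

        def score(u, i):
            return mu + b_u[u] + b_i[i] + sum(pf * qf for pf, qf in zip(p[u], q[i]))

        pairs = list(ratings.items())
        for _ in range(n_epochs):
            random.shuffle(pairs)
            for (u, i), r in pairs:
                err = r - score(u, i)
                b_u[u] += lr * (err - reg * b_u[u])
                b_i[i] += lr * (err - reg * b_i[i])
                for f in range(n_factors):
                    pf, qf = p[u][f], q[i][f]
                    p[u][f] += lr * (err * qf - reg * pf)
                    q[i][f] += lr * (err * pf - reg * qf)
        return score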

Learn-to-rank Algorithm

Learning-to-rank algorithms are a recently popular class of algorithms from the broader field of machine learning. As their name suggests, these algorithms directly learn how to produce good rankings of items, instead of the indirect approach taken by the top-N recommendation strategy. While there are several approaches, most learning-to-rank algorithms produce rankings by learning a ranking function. Like a prediction, a ranking function scores each (user, item) pair individually. Unlike predictions, however, the ranking score has no deliberate relationship with rating values, and is only interesting for its ranked order. Based on the relationship considered by the algorithm, learning-to-rank methods can be divided into three categories:

  • Pointwise algorithms:[5] This category covers traditional prediction algorithms as well as some algorithms that allow for a broader treatment of rating data, such as the OrdRec algorithm.
  • Pairwise algorithms:[6][7][8][9] Pairwise algorithms focus on pairs of items, usually representing that a user prefers one item over another. Example: the Bayesian Personalized Ranking (BPR)[10] algorithm (see the sketch after this list).
  • Listwise algorithms:[11][12] Listwise algorithms focus on properties of the entire set of items for a given user, such as which item should be ranked first.
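
As an illustration of the pairwise approach, the following is a minimal sketch of one BPR-style gradient epoch on a matrix factorization scorer: for each sampled triple (u, i, j), where user u consumed item i but not item j, the update increases the score margin x_ui - x_uj. The sampling scheme, learning rate, and regularization here are simplified assumptions, not the exact procedure of the cited papers.

    import math
    import random

    def bpr_epoch(user_items, all_items, P, Q, lr=0.05, reg=0.01):
        # user_items: dict user -> set of items the user has interacted with.
        # P, Q: dicts mapping users / items to latent-factor lists of equal length.
        for u, pos_items in user_items.items():
            negatives = list(all_items - pos_items)
            if not negatives:
                continue
            for i in pos_items:
                j = random.choice(negatives)  # sample an item the user has not consumed
                x_uij = sum(pu * (qi - qj) for pu, qi, qj in zip(P[u], Q[i], Q[j]))
                g = 1.0 / (1.0 + math.exp(x_uij))  # sigma(-x_uij), derivative of ln sigma(x_uij)
                for f in range(len(P[u])):
                    pu, qi, qj = P[u][f], Q[i][f], Q[j][f]
                    P[u][f] += lr * (g * (qi - qj) - reg * pu)
                    Q[i][f] += lr * (g * pu - reg * qi)
                    Q[j][f] += lr * (-g * pu - reg * qj)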

Other Algorithms

  • Probabilistic models: Users are grouped into clusters or profiles; each profile has a distinct probability distribution over movies describing the movies that cluster tends to watch and, for each movie, a probability distribution over ratings for that cluster. Examples: PLSI (probabilistic latent semantic indexing)[13] and LDA (latent Dirichlet allocation).[14]
  • Linear regression approaches: In Slope One, an average offset is computed between all pairs of items. An item i is then predicted for a user by applying the offset to every other rating by that user and performing a weighted average (see the sketch after this list). Example: the Slope One recommendation algorithm.[15]
  • Graph-based recommender algorithms: These algorithms build a graph by connecting users to the items they have rated. Examples: graph-based algorithms that combine content and collaborative filtering approaches.[16][17]
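
As a concrete illustration of the Slope One idea mentioned in the list above, a minimal sketch of the weighted variant is shown below; user_vectors is again assumed to map each user to a dict of item ratings.

    from collections import defaultdict

    def slope_one(user_vectors, u, j):
        # Weighted Slope One prediction of user u's rating for item j.
        dev_sum = defaultdict(float)  # sum of (r_vj - r_vi) over users rating both j and i
        count = defaultdict(int)      # number of users rating both j and i
        for rv in user_vectors.values():
            if j in rv:
                for i, r_vi in rv.items():
                    if i != j:
                        dev_sum[i] += rv[j] - r_vi
                        count[i] += 1
        num = den = 0.0
        for i, r_ui in user_vectors[u].items():
            if i != j and count[i] > 0:
                num += (dev_sum[i] / count[i] + r_ui) * count[i]
                den += count[i]
        return None if den == 0.0 else num / den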

Ensemble Recommendation

[edit]

To improve the performance of a recommendation system, different algorithms can be combined into an ensemble, for example by blending their scores.
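
One simple form of ensemble is a linear blend of two scoring functions, with the blend weight tuned on held-out data; the sketch below assumes predictor functions like those defined in the earlier examples (any functions taking a (user, item) pair work).

    def blend(predict_a, predict_b, alpha=0.5):
        # Return a predictor that linearly blends two scoring functions.
        def predict(u, i):
            return alpha * predict_a(u, i) + (1 - alpha) * predict_b(u, i)
        return predict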

Metrics and Evaluation

Every recommendation system is different, and no single algorithm has been found that works best for all systems. Different systems tend to have subtly different data properties, usage patterns, and recommendation needs. In general, the best way of knowing which recommender is best for a system is to test the system with multiple algorithms and real users, but such testing is expensive. To address this problem, recommender systems researchers have developed ways to evaluate a recommender algorithm offline, without real users. Offline evaluations are a common evaluation strategy in machine learning and information retrieval, but they make it difficult to evaluate how well the system supports discovery of new preferences and favorites. The best practice is therefore a combination of offline and online tests: offline evaluation is used to narrow down the possible algorithms to a small set of best choices, and the final selection of the best algorithm is made based on an online evaluation (lab study, virtual lab study, A/B test).
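
In an offline evaluation, a portion of the known ratings is hidden from the algorithm and its output is scored against them. The following is a minimal sketch of two common offline measures, RMSE for the prediction task and precision-at-k for the ranking task; the data structures are illustrative assumptions.

    import math

    def rmse(predict, test_ratings):
        # Root mean squared error over held-out (user, item) -> rating pairs.
        errors = [(r - predict(u, i)) ** 2 for (u, i), r in test_ratings.items()]
        return math.sqrt(sum(errors) / len(errors))

    def precision_at_k(recommended, relevant, k=10):
        # Fraction of the top-k recommended items that appear in the relevant set.
        return sum(1 for i in recommended[:k] if i in relevant) / k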

Offline Evaluation

Prediction Metrics

Ranking Quality 

Decision Support Metrics

Other Metrics

References

  1. ^ Breese, John S.; Heckerman, David; Kadie, Carl (1998-05-01). "Empirical Analysis of Predictive Algorithms for Collaborative Filtering". Microsoft Research.
  2. ^ Herlocker, Jon; Konstan, Joseph A.; Riedl, John (2002-10-01). "An Empirical Analysis of Design Choices in Neighborhood-Based Collaborative Filtering Algorithms". Inf. Retr. 5 (4): 287–310. doi:10.1023/A:1020443909834. ISSN 1386-4564.
  3. ^ Sarwar, Badrul M.; Karypis, George; Konstan, Joseph A.; Riedl, John T. (2000-01-01). "Application of Dimensionality Reduction in Recommender System – A Case Study". In ACM Webkdd Workshop.
  4. ^ Koren, Yehuda (2008-01-01). "Factorization Meets the Neighborhood: A Multifaceted Collaborative Filtering Model". Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. KDD '08. New York, NY, USA: ACM: 426–434. doi:10.1145/1401890.1401944. ISBN 9781605581934.
  5. ^ RecSys '11: Proceedings of the Fifth ACM Conference on Recommender Systems. New York, NY, USA: ACM. 2011-01-01. ISBN 9781450306836.
  6. ^ Qiu, Shuang; Cheng, Jian; Yuan, Ting; Leng, Cong; Lu, Hanqing (2014-01-01). "Item Group Based Pairwise Preference Learning for Personalized Ranking". Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval. SIGIR '14. New York, NY, USA: ACM: 1219–1222. doi:10.1145/2600428.2609549. ISBN 9781450322577.
  7. ^ Rendle, Steffen; Freudenthaler, Christoph (2014-01-01). "Improving Pairwise Learning for Item Recommendation from Implicit Feedback". Proceedings of the 7th ACM International Conference on Web Search and Data Mining. WSDM '14. New York, NY, USA: ACM: 273–282. doi:10.1145/2556195.2556248. ISBN 9781450323512.
  8. ^ Takács, Gábor; Tikk, Domonkos (2012-01-01). "Alternating Least Squares for Personalized Ranking". Proceedings of the Sixth ACM Conference on Recommender Systems. RecSys '12. New York, NY, USA: ACM: 83–90. doi:10.1145/2365952.2365972. ISBN 9781450312707.
  9. ^ Zhong, Hao; Pan, Weike; Xu, Congfu; Yin, Zhi; Ming, Zhong (2014-01-01). "Adaptive Pairwise Preference Learning for Collaborative Recommendation with Implicit Feedbacks". Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management. CIKM '14. New York, NY, USA: ACM: 1999–2002. doi:10.1145/2661829.2661986. ISBN 9781450325981.
  10. ^ Rendle, Steffen; Freudenthaler, Christoph; Gantner, Zeno; Schmidt-Thieme, Lars (2009-01-01). "BPR: Bayesian Personalized Ranking from Implicit Feedback". Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence. UAI '09. Arlington, Virginia, United States: AUAI Press: 452–461. ISBN 9780974903958.
  11. ^ Shi, Yue; Karatzoglou, Alexandros; Baltrunas, Linas; Larson, Martha; Oliver, Nuria; Hanjalic, Alan (2012-01-01). "CLiMF: Learning to Maximize Reciprocal Rank with Collaborative Less-is-more Filtering". Proceedings of the Sixth ACM Conference on Recommender Systems. RecSys '12. New York, NY, USA: ACM: 139–146. doi:10.1145/2365952.2365981. ISBN 9781450312707.
  12. ^ Shi, Yue; Larson, Martha; Hanjalic, Alan. "List-wise learning to rank with matrix factorization for collaborative filtering". Proceedings of the fourth ACM conference on Recommender systems - RecSys '10. doi:10.1145/1864708.1864764.
  13. ^ Hofmann, Thomas (1999-01-01). "Probabilistic Latent Semantic Indexing". Proceedings of the 22Nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. SIGIR '99. New York, NY, USA: ACM: 50–57. doi:10.1145/312624.312649. ISBN 1581130961.
  14. ^ Blei, David M.; Ng, Andrew Y.; Jordan, Michael I. (2003-01-01). "Latent Dirichlet Allocation". Journal of Machine Learning Research. 3 (Jan). ISSN 1533-7928.
  15. ^ Brusilovsky, Peter; Oh, Jung Sun; López, Claudia; Parra, Denis; Jeng, Wei (2016-06-20). "Linking information and people in a social system for academic conferences". New Review of Hypermedia and Multimedia. 0 (0): 1–31. doi:10.1080/13614568.2016.1179796. ISSN 1361-4568.
  16. ^ Huang, Zan; Chung, Wingyan; Chen, Hsinchun (2004-02-01). "A graph model for E-commerce recommender systems". Journal of the American Society for Information Science and Technology. 55 (3): 259–274. doi:10.1002/asi.10372. ISSN 1532-2890.
  17. ^ Phuong, Nguyen Duy; Thang, Le Quang; Phuong, Tu Minh (2008-12-15). "A Graph-Based Method for Combining Collaborative and Content-Based Filtering". PRICAI 2008: Trends in Artificial Intelligence. Springer, Berlin, Heidelberg: 859–869. doi:10.1007/978-3-540-89197-0_80.