Summary of "[ИАД, весна 2025] Рекомендательные системы, 2" (Intelligent Data Analysis, Spring 2025: Recommender Systems, Part 2)

Summary — main ideas and lessons

1) Topic and context

2) Historical motivation and intuition

3) Types of collaborative filtering

4) User-based vs. item-based — pros and cons

5) Formal setup and goals

Memory-based (neighborhood) methods

Methodology / steps

Implementation tips

Matrix-based item-item learning (SLIM-like)

Idea and objective

Solution approaches and practicalities

Similarity computation practicalities

Data, splits and evaluation (seminar practicals)

Data and preprocessing

Train/test split approaches

Candidates and cold items/users

Metrics covered

Baselines

Practical coding notes from the seminar

Final course plan pointers

Detailed recipes — building and evaluating a memory-based item-item recommender

  1. Preprocessing

    • Map original user/item IDs to compact indices and keep mappings.
    • Sort interactions by timestamp (if available).
    • Build sparse user-item rating matrix R.
  2. Create train/test split

    • Time-based: train = interactions before T, test = after T; or per-user holdout (keep recent interactions per user for test).
    • Remove test users with no train history (unless evaluating cold-start).
  3. Compute item-item similarities

    • For each item pair (i, j) compute cosine or Pearson similarity using only co-rated users.
    • Apply shrinkage:
      • sim_shrunk = (n_co_ratings / (n_co_ratings + β)) * sim
    • Optionally apply IDF-like weighting to reduce popularity bias.
  4. Make predictions (KNN item-based)

    • For target (u, i): select K most similar items to i among items that u has rated.
    • Compute weighted aggregate of u’s ratings on those neighbors; apply normalization (subtract user mean if using centered ratings).
    • Exclude items already seen by u when producing top-N recommendations.
  5. Evaluate

    • For each test user, produce top-N recommendations from candidate pool.
    • Compute hit rate, coverage, precision@N, recall@N, etc.
    • Compare to simple baselines (popularity).
  6. (Alternative) Learn item-item weight matrix W

    • Solve the optimization: minimize ‖R − R W‖_F² + λ ‖W‖_F² subject to diag(W) = 0 (optionally W ≥ 0, plus an L1 penalty for sparsity).
    • Use closed-form when available (no L1, relaxed constraints) or iterative solvers otherwise.
    • Compute predictions as R̂ = R W.
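The pipeline in steps 1–5 can be sketched as follows. This is a minimal illustration, not the seminar's actual code: the toy rating matrix, the β value, and the function names are all hypothetical.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy user-item rating matrix R (rows = users, columns = items); illustrative data.
R = csr_matrix(np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float))

def item_similarities(R, beta=1.0):
    """Cosine item-item similarity, shrunk toward 0 for low-support pairs."""
    X = R.T.tocsr()                                    # items x users
    norms = np.asarray(np.sqrt(X.multiply(X).sum(axis=1))).ravel()
    norms[norms == 0] = 1.0
    sim = (X @ X.T).toarray() / np.outer(norms, norms)
    B = (X != 0).astype(float)
    n_co = (B @ B.T).toarray()                         # co-rating counts per item pair
    sim *= n_co / (n_co + beta)                        # shrinkage: sim * n / (n + beta)
    np.fill_diagonal(sim, 0.0)                         # an item is not its own neighbor
    return sim

def recommend(R, sim, user, k=2, top_n=2):
    """Top-N unseen items for `user`, scored over the K most similar rated items."""
    r_u = R[user].toarray().ravel()
    rated = np.flatnonzero(r_u)
    scores = np.full(R.shape[1], -np.inf)
    for i in range(R.shape[1]):
        if r_u[i] != 0:                                # exclude items already seen
            continue
        neigh = rated[np.argsort(-sim[i, rated])][:k]  # K nearest rated neighbors
        w = sim[i, neigh]
        if w.sum() > 0:
            scores[i] = w @ r_u[neigh] / w.sum()       # weighted aggregate of u's ratings
    order = np.argsort(-scores)
    return [i for i in order if np.isfinite(scores[i])][:top_n]

sim = item_similarities(R, beta=1.0)
print(recommend(R, sim, user=0))   # item 2 is the only item user 0 has not rated
```

Mean-centering (subtracting each user's mean before computing similarities, as mentioned in step 4) is omitted here for brevity; with explicit ratings it usually improves Pearson-style similarity.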
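For step 6 without the L1 term and without the nonnegativity constraint, a closed form is known (the EASE^R solution): with P = (RᵀR + λI)⁻¹, the minimizer under diag(W) = 0 is W_ij = −P_ij / P_jj for i ≠ j. A sketch under those relaxed assumptions, with toy data and an arbitrary λ:

```python
import numpy as np

def item_weights_closed_form(R, lam=0.5):
    """Closed-form solve of min ||R - R W||_F^2 + lam ||W||_F^2, diag(W) = 0."""
    G = R.T @ R + lam * np.eye(R.shape[1])  # Gram matrix with ridge term
    P = np.linalg.inv(G)
    W = -P / np.diag(P)                     # scale column j by 1/P_jj (Lagrange correction)
    np.fill_diagonal(W, 0.0)                # enforce diag(W) = 0
    return W

R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [0, 1, 5, 4],
              [1, 0, 4, 5]], dtype=float)
W = item_weights_closed_form(R, lam=0.5)
R_hat = R @ W                               # predictions: R̂ = R W
print(np.round(R_hat, 2))
```

If the L1 penalty or W ≥ 0 is kept (the full SLIM setting), no closed form exists and an iterative solver (e.g. coordinate descent per column) is needed, as the recipe notes.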

Parameters and typical values

Speakers / sources referenced
