
I've been trying to implement stochastic gradient descent as part of a recommendation system following these equations:

[Image not preserved: the matrix-factorisation SGD update equations for q_i and p_x]

I have:

for step in range(max_iter):
    e = 0
    for x in range(len(R)):
        for i in range(len(R[x])):
            if R[x][i] > 0:
                exi = 2 * (R[x][i] - np.dot(Q[:,i], P[x,:]))
                qi, px = Q[:,i], P[x,:]

                qi += _mu_2 * (exi * px - (2 * _lambda_1 * qi))
                px += _mu_1 * (exi * qi - (2 * _lambda_2 * px))

                Q[:,i], P[x,:] = qi, px

The output isn't quite what I expect, but I can't put my finger on why. Please help me identify the problem in my code.

I'd much appreciate your support.

Thang Do
  • Did you ever figure this out? I am looking for a solution too. – nad May 06 '18 at 23:13
  • Unfortunately I never did, but I reckon I should ask my fellow classmates who scored 100 on this one for their solution. – Thang Do May 24 '18 at 01:47

1 Answer


When you update qi and px, you should exchange _mu_2 and _mu_1: the qi update should use _mu_1 (paired with _lambda_1), and the px update should use _mu_2 (paired with _lambda_2).
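For illustration, here is a minimal sketch of the corrected loop wrapped in a function. It assumes, as in the question, that R is the rating matrix, P is the user-factor matrix (users x k), Q is the item-factor matrix (k x items), and that _mu_1/_lambda_1 belong to the qi update while _mu_2/_lambda_2 belong to the px update; the function name and default hyperparameters are my own choices, not from the question. It also copies qi and px before updating, so that the px update uses the value of qi from before the step rather than the freshly modified view into Q.

```python
import numpy as np

def sgd_factorize(R, P, Q, mu_1=0.002, mu_2=0.002,
                  lambda_1=0.02, lambda_2=0.02, max_iter=2000):
    """SGD matrix factorisation: learn P, Q so that P @ Q approximates
    the observed (nonzero) entries of R. Updates P and Q in place."""
    R = np.asarray(R, dtype=float)
    for step in range(max_iter):
        for x in range(len(R)):
            for i in range(len(R[x])):
                if R[x][i] > 0:
                    # Twice the prediction error for entry (x, i)
                    exi = 2 * (R[x][i] - np.dot(Q[:, i], P[x, :]))
                    # Copy so both updates use the pre-step values,
                    # not views that mutate Q and P as we go.
                    qi, px = Q[:, i].copy(), P[x, :].copy()
                    # mu_1 pairs with the qi update, mu_2 with px
                    Q[:, i] = qi + mu_1 * (exi * px - 2 * lambda_1 * qi)
                    P[x, :] = px + mu_2 * (exi * qi - 2 * lambda_2 * px)
    return P, Q
```

After training, P @ Q should be close to R on the observed entries, while the zero entries of R are filled in with the model's predictions.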

expectedAn
  • Took me 4 years to realise mu1 and mu2 are supposed to be swapped around, just like you suggested :) – Thang Do Dec 13 '21 at 00:48