As far as I know you can't really get access to the data itself (i.e. get a pointer to it). The reasoning is that the code stays data-agnostic, so TensorFlow can pass the data around between CPUs and GPUs without you worrying about that part (you can pin ops to a specific device, but that gets cumbersome).
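If you do want to control placement, tf.device is the usual tool; a minimal sketch (the device string depends on your machine):

import tensorflow as tf

# pin these ops to a specific device; '/gpu:0' would work the same way
with tf.device('/cpu:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.matmul(a, a)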
So tf.slice would be the correct function to use.
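In case the begin/size arguments are unfamiliar, here's a tiny sketch of what tf.slice does (the tensor is made up for illustration):

import tensorflow as tf

t = tf.constant([[1, 2, 3],
                 [4, 5, 6]])
# begin=[0, 1], size=[2, 1]: start at row 0, column 1,
# take 2 rows and 1 column -> the second column of t
col = tf.slice(t, [0, 1], [2, 1])

sess = tf.Session()
print(sess.run(col))  # [[2], [5]]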
You could do:
for i in range(n_sample):
    # column i of C: begin at [0, i], take n_sample rows and 1 column
    # (a begin of [i, 0] would run past the end of the tensor for i > 0)
    curr_slice = tf.slice(C, [0, i], [n_sample, 1])
    do_something(curr_slice)
This isn't the most efficient version, but it's what you asked for in the comments:
y = tf.constant(0.0)
for i in range(n_sample):
    curr_slice = tf.slice(C, [0, i], [n_sample, 1])  # column i of C
    x_slice = tf.slice(X, [0, i], [n_sample, 1])     # matching column of X
    y = y + tf.nn.l2_loss(tf.sub(x_slice, tf.matmul(X, curr_slice))) \
          + lamb * tf.nn.l2_loss(curr_slice)
loss = y
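The loop builds the same objective as the vectorized version below, since the squared Frobenius norm decomposes column by column. A quick NumPy sanity check of that identity (the square shapes and the lamb value are just assumptions for the check):

import numpy as np

n, lamb = 4, 0.5
X = np.random.randn(n, n)
C = np.random.randn(n, n)

# tf.nn.l2_loss(t) computes sum(t ** 2) / 2
whole = np.sum((X - X.dot(C)) ** 2) / 2 + lamb * np.sum(C ** 2) / 2
cols = sum(np.sum((X[:, i] - X.dot(C[:, i])) ** 2) / 2
           + lamb * np.sum(C[:, i] ** 2) / 2
           for i in range(n))
assert np.isclose(whole, cols)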
The vectorized approach is much cleaner:
loss = tf.nn.l2_loss(X - tf.matmul(X,C)) + lamb * tf.nn.l2_loss(C)
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
sess.run(train_step)
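A single sess.run(train_step) does one gradient update; in practice you'd run it in a loop and watch the loss, something like this (the step count is arbitrary):

for step in range(1000):
    _, loss_val = sess.run([train_step, loss])
    if step % 100 == 0:
        print(step, loss_val)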
You might need to supply some of the values by creating placeholders.
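For example (a sketch, assuming X is the data you feed in and C is the variable being learned; the square shape is an assumption):

X = tf.placeholder(tf.float32, shape=[n_sample, n_sample])
C = tf.Variable(tf.zeros([n_sample, n_sample]))
# build loss and train_step as above, then feed the data at run time:
sess.run(train_step, feed_dict={X: X_data})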
Alternatively, I couldn't find it in skflow yet, but in scikit-learn it's a simple three-liner:
from sklearn.linear_model import Ridge
clf = Ridge(alpha=1.0)
clf.fit(X, W)
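For completeness, a usage sketch (the shapes and the alpha value here are made up; alpha plays the role of lamb above):

import numpy as np
from sklearn.linear_model import Ridge

X = np.random.randn(100, 10)  # hypothetical: 100 samples, 10 features
W = np.random.randn(100)      # hypothetical regression targets

clf = Ridge(alpha=1.0)        # alpha is the L2 penalty weight
clf.fit(X, W)
print(clf.coef_)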