As many machine learning algorithms rely on matrix multiplication (or can at least be implemented using matrix multiplication), my plan for testing my GPU is to create two matrices a and b, multiply them, and record the time the computation takes to complete.
Here is code that generates two matrices of dimensions 300000 x 20000 and multiplies them:
import time

import tensorflow as tf
import numpy as np

sess = tf.Session()

# Two random matrices of shape (300000, 20000). Note: as float64 each
# array is roughly 48 GB, so these sizes may need to be scaled down to
# fit in host and GPU memory.
a = np.random.rand(300000, 20000)
b = np.random.rand(300000, 20000)
print("Init complete")

# Matrix multiplication: transpose a so the inner dimensions match,
# giving a (20000, 20000) result. (tf.mul was element-wise and has been
# renamed tf.multiply; tf.matmul is the matrix product.)
result = tf.matmul(a, b, transpose_a=True)

start = time.time()
v = sess.run(result)
print("Elapsed: %.3f s" % (time.time() - start))
print(v)
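A possible variation (just a sketch, with shapes scaled down as an assumption so everything fits in GPU memory) is to generate the random matrices on the device with tf.random_uniform, so the timing reflects the GPU's matmul throughput rather than the cost of building and transferring huge NumPy arrays from host memory:

import time
import tensorflow as tf

# Generate the operands on the device so the benchmark measures GPU compute,
# not host-to-device transfer. Shapes here are illustrative assumptions.
a = tf.random_uniform((8000, 8000))
b = tf.random_uniform((8000, 8000))
product = tf.matmul(a, b)

with tf.Session() as sess:
    # Warm-up run to exclude graph setup and kernel initialization from the timing.
    sess.run(product)
    start = time.time()
    sess.run(product)
    print("matmul took %.3f s" % (time.time() - start))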
Is this a sufficient test to compare the performance of GPUs? What other factors should I consider?