That's not the way a Markov chain works. You need a starting state (in this case, either [1,0,0], [0,1,0], or [0,0,1]); you then left-multiply the transition matrix by that state vector, then left-multiply the transition matrix by the newly obtained state vector, and so on. You don't multiply the transition matrix by itself. If you need to figure out what happens after a specific number of transitions, you can just loop X times and perform X matrix-vector multiplies. If you want the steady state, you need to find the dominant eigenvector, which you can do with numpy.linalg.eig. Note also that none of this will work with the transition matrix you have, because those rows are not probability distributions (they don't sum to 1).
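Here's a minimal sketch of both approaches (the step count and variable names are just illustrative), using the corrected Wikipedia matrix from the edit below:

import numpy as np

T = np.array([[0.9, 0.075, 0.025],
              [0.15, 0.8, 0.05],
              [0.25, 0.25, 0.5]])
state = np.array([0, 1, 0])

# State after a fixed number of transitions: one matrix-vector multiply per step.
for _ in range(3):
    state = np.dot(state, T)

# Steady state: the left eigenvector of T for eigenvalue 1. numpy.linalg.eig
# returns right eigenvectors, so pass it the transpose of T.
vals, vecs = np.linalg.eig(T.T)
steady = np.real(vecs[:, np.argmax(np.real(vals))])
steady /= steady.sum()  # rescale the eigenvector into a probability distribution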
Edit: Okay, I think I see what you're trying to do. Because matrix-vector multiplication is associative, you can also raise the transition matrix to the n-th power, multiply the starting state vector by the result, and get the same answer as if you had multiplied through each intermediate state iteratively. You can use numpy.linalg.matrix_power to do that. And I see you got that matrix from Wikipedia; you just miscopied some of the numbers, e.g. 0.25 should be 0.025. It's critical that every row sums to 1.
This code reproduces the example from Wikipedia:
import numpy as np

T = np.array([[0.9, 0.075, 0.025],
              [0.15, 0.8, 0.05],
              [0.25, 0.25, 0.5]])
start = np.array([0, 1, 0])

def find_state_after_n(start, T, n):
    # Raise T to the n-th power, then apply it to the starting state vector.
    Tmult = np.linalg.matrix_power(T, n)
    state = np.dot(start, Tmult)
    return state

find_state_after_n(start, T, 3)
# array([ 0.3575 , 0.56825, 0.07425])
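As a sanity check on the two claims above (every row must sum to 1, and exponentiating the matrix matches step-by-step multiplication), you can continue from the snippet above with something like this:

# Every row of a valid transition matrix must sum to 1.
print(np.allclose(T.sum(axis=1), 1.0))  # True

# Three matrix-vector multiplies give the same result as matrix_power with n=3.
state = start
for _ in range(3):
    state = np.dot(state, T)
print(np.allclose(state, find_state_after_n(start, T, 3)))  # True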