
I am getting the above error (a `ModuleNotFoundError`) in the following line of code:
from tensorflow.initializers import random_uniform
When I run the same code file on Python 2.7.17 with TensorFlow 1.15.0, I don't get the above error, but I get the following one:
SyntaxError: invalid syntax in the line: self.state_memory=np.zeros((self.mem_size,*input_shape))
Somehow, it doesn't recognise the * before the input_shape variable.
GitHub link to the code: https://github.com/philtabor/Youtube-Code-Repository/blob/master/ReinforcementLearning/PolicyGradient/DDPG/pendulum/tensorflow/ddpg_orig_tf.py

I'm new to TensorFlow and Python. Is there something very basic that I am missing?

varun
  • The `ModuleNotFoundError` is answered here https://stackoverflow.com/questions/43531434/cannot-import-keras-initializers. Regarding the `SyntaxError`, the line you've included is valid syntax, so we'll need more information to help on that one. – Chris Apr 07 '20 at 07:22
  • I solved the `ModuleNotFoundError` with the help of the Stack Overflow link you sent. Thanks for that. However, my code still doesn't work owing to other compatibility issues of running TF 1.x code in TF 2.0, like tf.Session(), etc. Hence, I need to sort out the SyntaxError I stated in my initial question above to get this code running in TF 1.15. Please specify what other info I should attach to help you identify the problem. – varun Apr 07 '20 at 16:49
  • Glad the `ModuleNotFoundError` is sorted. To help with the syntax error, you'll need to include a [minimal, complete and verifiable example](https://stackoverflow.com/help/mcve) which replicates the error and allows people to fully understand the problem. – Chris Apr 07 '20 at 17:00
  • Also, your title says Python 3.3.7 but the question says 2.7.17 – can you clarify which python version you are using please? – Chris Apr 07 '20 at 17:02
  • I have 2 conda envs. One has Python 3.7.7 and TF 2.1.0, where the ModuleNotFoundError was occurring; that has been sorted out, but there are other errors in the code, like tf.Session(), etc. The other env has Python 2.7.17 and TF 1.15.0, where this SyntaxError shows up. Going by the syntax used in the code (GitHub link mentioned in the initial question), I am assuming it was written for TF 1.x, so I am trying to execute it in that environment, but I am getting this syntax error. – varun Apr 07 '20 at 18:51
  • In fact, I tried running another piece of code and faced the same issue there as well. The link to that code is below; the error is in line 195 of that code. Any help would be appreciated. – varun Apr 07 '20 at 18:52
  • Ah, I see. In python 2.7 `*` does not work as an unpacking operator. I'd define the shape in `__init__` and then use it in the `tf.variable_scope` context manager. I.e. something along the lines of `self.input_shape = [None] + state_size` and `self.inputs_ = tf.placeholder(tf.float32, self.input_shape, name="inputs")` This will work for python 2 and 3. – Chris Apr 07 '20 at 19:09

2 Answers


`tensorflow.initializers` is not present in TensorFlow 2.x. In place of that, you can use https://www.tensorflow.org/api_docs/python/tf/random/uniform or https://www.tensorflow.org/api_docs/python/tf/random_uniform_initializer

For example:

import tensorflow as tf

# Sample a 2x3 tensor of uniform random values in [0, 1)
print(tf.random.uniform(shape=[2,3]))

output:

<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[0.26927817, 0.40026963, 0.28173876],
       [0.3990215 , 0.15438187, 0.8430346 ]], dtype=float32)>
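
If the code needs an initializer object rather than a random tensor (e.g. as a kernel initializer passed to a layer or variable), tf.random_uniform_initializer is the TF 2.x counterpart. A minimal sketch, assuming TF 2.x; the minval/maxval bounds here are arbitrary placeholders, not values from the original code:

import tensorflow as tf

# Build an initializer object; the bounds are placeholders for illustration
init = tf.random_uniform_initializer(minval=-0.003, maxval=0.003)

# Initializer instances are callable with a shape and return a tensor
print(init(shape=[2, 3]))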

To answer the syntax error: star-unpacking with the * operator inside a tuple is not supported in Python 2.7 (it requires Python 3.5+). Instead, you can define the input shape in __init__ and use that attribute in the call to tf.placeholder. For example:

import tensorflow as tf  # uses the TF 1.x API (tf.placeholder, tf.variable_scope)

class DQNetwork:
    def __init__(self, state_size, action_size, learning_rate, name='DQNetwork'):
        print(state_size)
        print(action_size)
        self.state_size = state_size
        self.action_size = action_size
        self.learning_rate = learning_rate
        # Define input shape (assumes state_size is a list)
        self.input_shape = [None] + state_size
        with tf.variable_scope(name):
            # We create the placeholders
            self.inputs_ = tf.placeholder(tf.float32, self.input_shape, name="inputs")
Chris
  • Done. Worked. However, when this unpacking operator is used in np.zeros, how should I go about it? ```class ReplayBuffer(object): def __init__(self,max_size,input_shape,n_actions): self.state_memory=np.zeros((self.mem_size,*input_shape))``` – varun Apr 08 '20 at 09:51
  • Another similar issue in the code I sent yesterday: ```def predict_action(explore_start, explore_stop, decay_rate, decay_step, state, actions): Qs = sess.run(DQNetwork.output, feed_dict = {DQNetwork.inputs_: state.reshape((1, *state.shape))}) ``` – varun Apr 08 '20 at 09:52
  • Hi @varun, you can use the same general approach for those subsequent issues (see the sketch after this thread). The problem isn't with specific libraries or modules, it's with Python 2. Also, if you are unable to solve them, you should open one or more new questions. Whilst this is perhaps slightly less convenient for you, it helps keep this site well organised and makes it easier for other people to find answers to their specific coding problems. – Chris Apr 08 '20 at 10:00
  • Thanks Chris. Really appreciate your suggestions. I'm a novice, hence this mistake. Will take special care going forward to open different issues as separate questions. – varun Apr 08 '20 at 10:13
  • Don't worry about it, it takes some time to get to know how the site works. Good luck with the coding! – Chris Apr 08 '20 at 10:31
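
For reference, here is the "same general approach" applied to the two lines quoted in the comments above: a minimal sketch that replaces the Python 3 star-unpacking with tuple concatenation, which runs on both Python 2.7 and Python 3. The mem_size, input_shape and state values below are placeholders for illustration, not the ones from the linked repository:

import numpy as np

# Placeholder values for illustration only
mem_size = 1000
input_shape = [8]
state = np.zeros(input_shape)

# Python 3.5+ only (SyntaxError on Python 2.7):
#   state_memory = np.zeros((mem_size, *input_shape))
#   Qs_input = state.reshape((1, *state.shape))

# Equivalent using tuple concatenation, valid on Python 2.7 and 3:
state_memory = np.zeros((mem_size,) + tuple(input_shape))
Qs_input = state.reshape((1,) + state.shape)

print(state_memory.shape)  # (1000, 8)
print(Qs_input.shape)      # (1, 8)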