I wrote this code in a Jupyter notebook, but I get an error message:
from nltk.tokenize import RegexpTokenizer

# Tokenize each response into lowercase word tokens
tokenizer = RegexpTokenizer(r'\w+')
career_df['How could the conversation have been more useful?'] = career_df['How could the conversation have been more useful?'].apply(lambda x: tokenizer.tokenize(x.lower()))
The error is
AttributeError: 'function' object has no attribute 'lower'
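
For reference, a quick way to see what is actually stored in that column (assuming career_df is already loaded as above) would be something like:

# Count the Python types present in the column, to check whether every cell is a string
col = 'How could the conversation have been more useful?'
print(career_df[col].map(type).value_counts())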