I am trying to lemmatize the column "tokenized" in a DataFrame. One cell of the column "tokenized" looks like this: " yeah simply zurich generic serving think media bland prepared curry kind paying well loves used parboiled oily place elaborate non tasteful stay underspiced institution vegetarian indian clueless away hiltl anyone served support veg long like normal strong worth insult not rice kitchen know wont food cuisine fantastic fan time term patrons ".
When I run my code it returns something like ",,e,n,d,e,d,,,p,a,y,i", which is not what I want. How can I lemmatize full words?
This is my code:
reviews_english['tokenized_lem'] = reviews_english['tokenized'].apply(
    lambda lst: [lmtzr.lemmatize(word) for word in lst])
reviews_english
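The character-by-character output suggests each cell of "tokenized" is a single whitespace-joined string rather than a list of tokens; iterating over a string yields its characters, so each character gets lemmatized individually. A minimal sketch of the fix is to call `.split()` on the string before lemmatizing. Note the `lemmatize` function below is a hypothetical stand-in (it just strips a trailing "s") so the example is self-contained; in the original code `lmtzr` would presumably be NLTK's `WordNetLemmatizer`, and the iteration issue is the same for any word-level lemmatizer.

```python
import pandas as pd

# Hypothetical stand-in for lmtzr.lemmatize (assumption: strips a plural
# "s"). The real code would use an actual lemmatizer such as NLTK's
# WordNetLemmatizer; the string-vs-list iteration issue is identical.
def lemmatize(word):
    return word[:-1] if word.endswith("s") else word

reviews_english = pd.DataFrame({"tokenized": ["loves served patrons rice"]})

# Bug reproduced: each cell is one string, so iterating over it
# lemmatizes individual characters, not words.
broken = [lemmatize(ch) for ch in reviews_english["tokenized"].iloc[0]]

# Fix: split the string into words first, then lemmatize each word.
reviews_english["tokenized_lem"] = reviews_english["tokenized"].apply(
    lambda s: [lemmatize(word) for word in s.split()]
)
print(reviews_english["tokenized_lem"].iloc[0])
# → ['love', 'served', 'patron', 'rice']
```

If the column was produced by a tokenizer and is supposed to hold lists already, the alternative is to fix the upstream step so the cells really are lists, in which case the original `apply` would work unchanged.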