I'm trying to migrate some code from pandas to PySpark, and pyspark.pandas looks like an easy, maintainable way to do it. I want the code to be as efficient as possible, so my question is:
Is there any difference between PySpark and pyspark.pandas? If so, what are the main differences? Or is pyspark.pandas just a wrapper around PySpark functions?