I installed databricks-connect in a conda environment, without installing pyspark (I read that having pyspark installed alongside databricks-connect causes conflicts). After finishing the configuration of databricks-connect with the cluster, port, and other connection info, I tried to run pyspark inside the conda environment, but it fails:
Traceback (most recent call last):
  File "C:\Users\Name\Anaconda3\envs\conda_env1\Scripts\find_spark_home.py", line 86, in <module>
    print(_find_spark_home())
  File "C:\Users\Name\Anaconda3\envs\conda_env1\Scripts\find_spark_home.py", line 52, in _find_spark_home
    module_home = os.path.dirname(find_spec("pyspark").origin)
AttributeError: 'NoneType' object has no attribute 'origin'
The system cannot find the path specified.
The system cannot find the file specified.
The system cannot find the file specified.
The system cannot find the path specified.
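
From the traceback, the launcher fails while trying to locate the pyspark package. As far as I can tell, the failing line boils down to the check below (my own minimal reproduction of the find_spec call in find_spark_home.py, not the actual script); given the AttributeError on NoneType, find_spec("pyspark") is presumably returning None in this environment:

from importlib.util import find_spec

# find_spark_home.py reads find_spec("pyspark").origin to locate the package;
# if pyspark is not importable in the active environment, find_spec returns
# None and accessing .origin raises the AttributeError shown above.
spec = find_spec("pyspark")
if spec is None:
    print("pyspark is not importable in this environment")
else:
    print("pyspark found at:", spec.origin)
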
Additional info: I'm on Windows 10 and run my commands in Windows PowerShell. Java 8, Hadoop 3.3.4, databricks-connect==9.1 LTS, Python 3.8.
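
In case it matters, this is a quick way to check which of the two packages is actually installed in the environment (using importlib.metadata, which is available on Python 3.8):

from importlib.metadata import version, PackageNotFoundError

# databricks-connect is supposed to provide the pyspark modules itself, which
# is why the install docs say not to have a standalone pyspark in the same
# environment; this just prints what pip actually installed.
for pkg in ("databricks-connect", "pyspark"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
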
Any ideas what the problem could be?