EDIT: I have raised the same question on Microsoft Learn Q&A, where I was told that, as of December 2022, there is no solution. They opened an internal ticket to address this directly. My current workaround is to write the outputs to a Lake database table and then query them later in the pipeline.
I am working in Azure Synapse Analytics, and I have a pipeline containing a Spark Job Definition activity that runs a Python script. I managed to pass input parameters into the Spark job by reading sys.argv (from the sys module) inside the Python script.
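For reference, a minimal sketch of how those arguments arrive in the script (the parameter values are hypothetical; in a Spark Job Definition activity, sys.argv[0] is the script path and the command-line arguments configured on the activity follow it):

```python
import sys

def parse_args(argv):
    """Return the job parameters that follow the script name (argv[0])."""
    return argv[1:]

if __name__ == "__main__":
    # In Synapse, sys.argv holds the activity's command-line arguments;
    # here we only echo whatever was passed in.
    print(parse_args(sys.argv))
```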
Is there a similar way to return output values from the Spark Job Definition activity back to the pipeline, for example so that later activities can consume them?
Thank you, Dario
I tried sys.exit(), but if the argument is not the integer 0, the Spark Job Definition activity terminates with an error, which is not what I want.
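To illustrate why the exit code cannot serve as an output channel: sys.exit(n) raises SystemExit carrying n, and (as observed above) the activity is marked failed for anything non-zero, so the only code that keeps the activity green is 0, which carries no information. A small self-contained sketch:

```python
import sys

def exit_code_of(n):
    """Capture the code that sys.exit(n) would hand to the runtime."""
    try:
        sys.exit(n)
    except SystemExit as exc:
        return exc.code

# 0 lets the activity succeed but conveys nothing;
# any other value (e.g. 7) makes the activity fail.
print(exit_code_of(0))
print(exit_code_of(7))
```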