
I understand that a Livy interactive session's statements endpoint takes code as a string, like the example below.

import json
import textwrap

import requests

data = {
    'code': textwrap.dedent("""
        import random
        NUM_SAMPLES = 100000

        def sample(p):
            x, y = random.random(), random.random()
            return 1 if x*x + y*y < 1 else 0

        # range() and print() keep the snippet valid on both Python 2 and 3.
        count = sc.parallelize(range(0, NUM_SAMPLES)).map(sample).reduce(lambda a, b: a + b)
        print("Pi is roughly %f" % (4.0 * count / NUM_SAMPLES))
    """)
}

# statements_url and headers point at an already-created interactive session,
# e.g. statements_url = host + '/sessions/0/statements' and
# headers = {'Content-Type': 'application/json'}.
r = requests.post(statements_url, data=json.dumps(data), headers=headers)

But is there a way I can provide PySpark files instead, maybe something like this:

data = {
    'pySparkFile': 'file_name.py'  # hypothetical key, not an actual Livy field
}

I understand that Livy's batch API provides this functionality, but I want an interactive session where users can submit multiple scripts one after another and reference variables defined in earlier scripts, just like in an interactive PySpark session.
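One way to approximate this with the existing statements API is to read each script on the client and submit its source as a statement; because every statement in a session runs in the same interpreter, later scripts can reference variables defined by earlier ones. A minimal sketch (the host, session id, and script names are assumptions, not Livy features):

import json

import requests

host = 'http://localhost:8998'                     # assumed Livy endpoint
statements_url = host + '/sessions/0/statements'   # assumed existing session with id 0
headers = {'Content-Type': 'application/json'}

def run_script(path):
    # Read a local PySpark script and submit its source as a single statement.
    with open(path) as f:
        code = f.read()
    return requests.post(statements_url,
                         data=json.dumps({'code': code}),
                         headers=headers)

# All statements share one interpreter, so variables defined in
# script_1.py remain visible to script_2.py.
run_script('script_1.py')
run_script('script_2.py')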

Shubzumt

1 Answer


I am not sure this answers your question, but I managed to create a Spark session on EMR using cURL like this:

$ curl -H "Content-Type: application/json" -X POST -d '{"kind":"pyspark", "conf": {"spark.yarn.dist.pyFiles": "s3://bucket-name/test.py"}}' http://ec2-3-87-28-125.compute-1.amazonaws.com:8998/sessions
{"id":0,"name":null,"appId":null,"owner":null,"proxyUser":null,"state":"starting","kind":"pyspark","appInfo":{"driverLogUrl":null,"sparkUiUrl":null},"log":["stdout: ","\nstderr: ","\nYARN Diagnostics: "]}

I inspected /mnt/var/log/livy/livy-livy-server.out and found this line, which indicates that the session was created successfully:

20/08/31 18:02:25 INFO InteractiveSession: Interactive session 0 created [appid: application_1598896609416_0002, owner: null, proxyUser: None, state: idle, kind: pyspark, info: {driverLogUrl=http://ip-172-31-85-247.ec2.internal:8042/node/containerlogs/container_1598896609416_0002_01_000001/livy, sparkUiUrl=http://ip-172-31-95-182.ec2.internal:20888/proxy/application_1598896609416_0002/}]
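
Once the session reaches the idle state, you can submit statements against it, including one that imports the file shipped via spark.yarn.dist.pyFiles. A minimal sketch (the module name test follows from test.py above; test.main() is a hypothetical entry point, so substitute whatever your file defines):

import json
import time

import requests

host = 'http://ec2-3-87-28-125.compute-1.amazonaws.com:8998'  # endpoint from the cURL call above
session_url = host + '/sessions/0'                            # id 0 from the JSON response
headers = {'Content-Type': 'application/json'}

# Poll until the session is ready to accept statements.
while requests.get(session_url, headers=headers).json()['state'] != 'idle':
    time.sleep(5)

# test.py was distributed with the session, so it is importable by module name.
data = {'code': 'import test\ntest.main()'}  # test.main() is hypothetical
r = requests.post(session_url + '/statements', data=json.dumps(data), headers=headers)
print(r.json())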
Hedi Bejaoui