While doing some coding exercises, I came across this problem:
"Write a function that takes in a list of dictionaries, each with a key and a list of integers, and returns a dictionary with the standard deviation of each list."
e.g.
input = [
    {
        'key': 'list1',
        'values': [4, 5, 2, 3, 4, 5, 2, 3]
    },
    {
        'key': 'list2',
        'values': [1, 1, 34, 12, 40, 3, 9, 7]
    }
]
Answer: {'list1': 1.12, 'list2': 14.19}
Note that 'values' is actually the key to the list of values, which is a little deceptive at first!
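Incidentally, the expected answers look like population standard deviations (dividing by n rather than n - 1). Assuming that is what the exercise wants, statistics.pstdev from the standard library reproduces the numbers:

from statistics import pstdev  # population standard deviation (divides by n)

print(round(pstdev([4, 5, 2, 3, 4, 5, 2, 3]), 2))     # 1.12
print(round(pstdev([1, 1, 34, 12, 40, 3, 9, 7]), 2))  # 14.19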
My attempt:
def stdv(x):
    for i in range(len(x)):
        for k, v in x[i].items():
            result = {}
            print(result)
            if k == 'values':
                mean = sum(v) / len(v)
                variance = sum([(j - mean)**2 for j in v]) / len(v)
                stdv = variance**0.5
                return stdv  # not sure here!!
            result = {k, v}  # this is where i get stuck
I was able to calculate the standard deviation, but I have no idea how to put the results back into a dictionary as shown in the answer. Can anyone shed some light on this? Much appreciated!
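For what it's worth, here is one minimal way to restructure the attempt (a sketch, not the only possible approach): create result once before the loop, pair each dictionary's 'key' with the standard deviation computed from its 'values', and return result only after every dictionary has been processed. Note that result = {k, v} builds a set, not a dictionary; result[key] = value is the dictionary assignment you want.

def stdv(x):
    result = {}  # created once, filled in as we go
    for d in x:
        values = d['values']
        mean = sum(values) / len(values)
        variance = sum((j - mean)**2 for j in values) / len(values)
        # map this list's name to its (population) standard deviation
        result[d['key']] = round(variance**0.5, 2)
    return result  # returned after the loop, so all lists are included

print(stdv(input))  # {'list1': 1.12, 'list2': 14.19}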