I have a report that I am mocking out. The report is returned as a dictionary: each metric in the report is a key, and its value is an array of length n (the number of days in the report). If no data was returned for a metric, an array of zeros is generated. Only a few of the metrics in the report are mocked with a return value, so I expect the returned report dictionary to have real values only for those specific metrics.
When I run the test and assert that the output equals the anticipated result, the MagicMock itself is plugged in as the value of each mocked metric. I have written a few tests like this and have not encountered this before.
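For context, the CSV data I expect back maps every metric key to a list with one entry per day, with metrics that have no recorded data falling back to a zero-filled list. A simplified sketch of that shape (the build_csv_data helper and raw_data argument here are my own illustration, not the real implementation):

def build_csv_data(raw_data, num_days=3):
    # Illustrative only: every metric maps to a list of num_days values;
    # metrics with no recorded data fall back to a zero-filled list.
    metrics = ('sent_msgs', 'clicks', 'opened_msgs', 'bounced_msgs')
    return {metric: raw_data.get(metric) or [0] * num_days for metric in metrics}

# e.g. build_csv_data({'sent_msgs': [123, 111, 321], 'clicks': [500, 345, 456]})
# -> {'sent_msgs': [123, 111, 321], 'clicks': [500, 345, 456],
#     'opened_msgs': [0, 0, 0], 'bounced_msgs': [0, 0, 0]}

Here is the test: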
from datetime import datetime

from django.test import TestCase
from mock import patch

# Report and GroupReport are imported from the application under test


class GroupReportTests(TestCase):  # class name illustrative; setUp defines self.networks and self.group

    @patch.object(Report, 'clicks_data')
    @patch.object(Report, 'sent_messages_data')
    @patch.object(Report, 'run_for_group')
    def test_group_report_return_csv_data(self, run_group, sent_msgs, clicks):
        networks = self.networks  # defined in setUp
        group_report = GroupReport(networks, datetime(2014, 10, 24, 0, 0, 0),
                                   datetime(2014, 10, 26, 0, 0, 0), self.group.id)
        group_report.run()  # this calls Report.run_for_group()

        # run_for_group builds a property containing the raw data of all metrics
        # recorded during those days; it never returns a value but calls out to the database
        run_group.return_value = None
        sent_msgs.return_value = [123, 111, 321]
        clicks.return_value = [500, 345, 456]

        result = {'sent_msgs': [123, 111, 321],
                  'clicks': [500, 345, 456],
                  'opened_msgs': [0, 0, 0],
                  'bounced_msgs': [0, 0, 0]}
        self.assertEqual(result, group_report.email_group_report.get_csv_data())
Error message:

AssertionError: ....
+ { 'sent_msgs': <MagicMock name='sent_msgs_data' id='4389821456'>,
+ 'clicks': <MagicMock name='clicks_data' id='4408776976'>,
- 'sent_msgs': [123, 111, 321],
- 'clicks': [500, 345, 456],
'opened_msgs': [0, 0, 0],
'bounced_msgs': [0, 0, 0] }
The values of the metrics I mocked out show the MagicMock object, not the return value I declared. What is wrong with this syntax?