
I'm using the actuator agent's get_multiple_points with VOLTTRON 8.1.3 to make about 30 BACnet read requests for sensors:

zone_setpoints_data = self.vip.rpc.call('platform.actuator', 'get_multiple_points', actuator_get_this_data).get(timeout=300)

And I notice this debug message:

2022-06-09 19:55:21,927 (loadshedagent-0.1 2930461) __main__ DEBUG: [Simple DR Agent INFO] - ACTUATOR SCHEDULE EVENT SUCESS {'result': 'FAILURE', 'data': {}, 'info': 'REQUEST_CONFLICTS_WITH_SELF'}

But I do get the data back, so it appears to be working just fine alongside the 1-minute interval scrape of all BACnet devices inside the building. Is this anything to worry about, or should I make some sort of adjustment?

EDIT: Code snippet for scheduling the actuator is below. Am I scheduling the actuator agent wrong with the _now, str_start, _end, str_end on 30 devices for get_multiple_points? Should I be adjusting the td(seconds=10) uniquely for each device to space out the calls?

# module-level imports for the timestamp helpers used below
from datetime import timedelta as td
from volttron.platform.agent.utils import format_timestamp, get_aware_utc_now

# create start and end timestamps for actuator agent scheduling
_now = get_aware_utc_now()
str_start = format_timestamp(_now)
_end = _now + td(seconds=10)
str_end = format_timestamp(_end)


actuator_schedule_request = []
for group in self.nested_group_map.values():
    for device_address in group.values():
        device = '/'.join([self.building_topic, str(device_address)])
        actuator_schedule_request.append([device, str_start, str_end])

# request an actuator agent schedule for all the zone devices before reading
# the zone temperature setpoint data
result = self.vip.rpc.call(
    'platform.actuator',
    'request_new_schedule',
    self.core.identity,
    'my_schedule',
    'HIGH',
    actuator_schedule_request
).get(timeout=90)
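
(For reference, the debug line above comes from me just logging whatever dict request_new_schedule hands back; the sketch below is simplified and not my exact logging code.)

# _log is the usual module-level logger; result is the dict returned above,
# e.g. {'result': 'FAILURE', 'data': {}, 'info': 'REQUEST_CONFLICTS_WITH_SELF'}
_log.debug(f"[Simple DR Agent INFO] - ACTUATOR SCHEDULE EVENT SUCESS {result}")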
bbartling

1 Answer


It seems to me that getting points won't cause that debug message. The message you are seeing comes from https://volttron.readthedocs.io/en/releases-8.x/driver-framework/actuator/actuator-agent.html?highlight=REQUEST_CONFLICTS_WITH_SELF#task-schedule-failures

So it appears that somewhere your schedules are overlapping. But the call to get_multiple_points should not have anything to do with this error.
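
One thing worth checking (just a guess, since the contents of nested_group_map aren't shown): if the same device address appears in more than one group, the schedule request will contain that device twice with overlapping time slots, and the actuator agent rejects that as conflicting with itself. A minimal sketch of de-duplicating the devices before building the request:

# Sketch only: collect unique device topics first so the same device never
# appears twice in one schedule request with overlapping time slots.
devices = set()
for group in self.nested_group_map.values():
    for device_address in group.values():
        devices.add('/'.join([self.building_topic, str(device_address)]))

actuator_schedule_request = [[device, str_start, str_end] for device in sorted(devices)]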

Craig