I am currently trying to search regressively from a goal state to find a list of actions that will achieve that goal for my GOAP planner. So far, the pseudocode I have looks like this:
closedList = emptyList;
s = emptyStack;
s.push(goal);
while (s.size() > 0)
{
    state = s.pop();                              // next unsatisfied state to achieve
    if (!closedList.contains(state))
    {
        closedList.add(state);                    // never expand the same state twice
        for (each action in actionList)
        {
            if (action.getEffect() == state)      // this action can achieve the state
            {
                if (!action.ConditionsFulfilled())
                {
                    conditionList = action.getConditions();
                    for (i = 0; i < conditionList.size(); i++)
                    {
                        s.push(conditionList[i]); // regress: achieve the preconditions first
                    }
                }
            }
        }
    }
}
I hear that GOAP is essentially A*, where the nodes are states and the edges are actions. But since plain A* has no preconditions a node must satisfy, I'm confused about how to adapt the algorithm to handle them. What I'm struggling to understand is how to store the actions and compare their costs in order to find the most efficient plan. Assuming the Action class has a getCost() function that returns the cost of the action, how would I go about this while taking preconditions into account?
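To make the question concrete, here is roughly what I imagine the A*-style bookkeeping might look like when searching backwards from the goal. Everything in this sketch is my own guess: the Action struct, the set-of-strings state representation, the gCost field on each search node, and the missing heuristic are placeholders, not taken from any real GOAP implementation.

#include <queue>
#include <set>
#include <string>
#include <vector>

// Sketch only: Action, the set-of-strings state, and the cost handling
// are all assumptions I made to keep the example self-contained.

struct Action {
    std::string name;
    std::set<std::string> conditions;  // preconditions that must hold first
    std::set<std::string> effects;     // facts this action makes true
    float cost;
    float getCost() const { return cost; }
};

struct PlanNode {
    std::set<std::string> unsatisfied;  // conditions still needing an action (regressive)
    std::vector<const Action*> plan;    // actions chosen so far, goal-to-start order
    float gCost;                        // accumulated cost of those actions
};

struct CompareByCost {
    bool operator()(const PlanNode& a, const PlanNode& b) const {
        return a.gCost > b.gCost;       // min-heap: cheapest node expanded first
    }
};

std::vector<const Action*> plan(const std::set<std::string>& goal,
                                const std::set<std::string>& world,
                                const std::vector<Action>& actions)
{
    std::priority_queue<PlanNode, std::vector<PlanNode>, CompareByCost> open;
    open.push({goal, {}, 0.0f});

    while (!open.empty()) {
        PlanNode node = open.top();
        open.pop();

        // Drop anything the current world state already satisfies.
        std::set<std::string> remaining;
        for (const std::string& c : node.unsatisfied)
            if (!world.count(c)) remaining.insert(c);
        if (remaining.empty()) return node.plan;  // nothing left to achieve

        for (const Action& a : actions) {
            // Only expand actions that achieve at least one remaining condition.
            bool useful = false;
            for (const std::string& e : a.effects)
                if (remaining.count(e)) { useful = true; break; }
            if (!useful) continue;

            PlanNode next;
            next.plan = node.plan;
            next.plan.push_back(&a);
            next.gCost = node.gCost + a.getCost();
            // Regress: remove what the action provides, add its preconditions.
            next.unsatisfied = remaining;
            for (const std::string& e : a.effects) next.unsatisfied.erase(e);
            for (const std::string& c : a.conditions) next.unsatisfied.insert(c);
            open.push(next);
        }
    }
    return {};  // no plan reaches the goal
}

Is keeping the accumulated cost on each search node like this, and always expanding the cheapest node first, the right way to use getCost() when expanding a node also adds the action's preconditions back into the set of things to satisfy? I left out the closed list from my pseudocode above, which I assume would still be needed to avoid revisiting the same set of conditions.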