For a large model, the function model() used through the Z3 Python API truncates the output (at some point the model is continued with "...").
Is there a way to avoid this?
I have the function below, which tries to save into fileName the answer given by str(self.solver.check()) and the model given by str(self.solver.model()):
def createSMT2LIBFileSolution(self, fileName):
    with open(fileName, 'w+') as foo:
        foo.write(str(self.solver.check()))
        foo.write(str(self.solver.model()))
The output of the problem is:
sat[C5_VM1 = 0,
... //these "..." are added by me
VM6Type = 6,
ProcProv11 = 18,
VM2Type = 5,
...]
The "...]" appears at line 130 in all the truncated files. I don't know whether this is a Python or a Z3 thing. If the model can be written in fewer than 130 lines, everything is fine.
If I remember correctly, this is a Python "feature". Instead of str(...), try using repr(...), which should produce a string that can be read back (if needed) by the interpreter. You can of course also iterate over the model elements separately, so that each string to be output is smaller. For instance, along these lines:
s = Solver()
# Add constraints...
print(s.check())
m = s.model()
for k in m:
    print('%s = %s' % (k, m[k]))