
I previously asked in How to bias Z3's (Python) SAT solving towards a criteria, such as 'preferring' to have more negated literals whether there is a way in Z3 (Python) to 'bias' the SAT search towards a criterion.

In that post, we learnt to express 'simple biases' such as: I would like Z3 to obtain a model, but not just any model: if possible, give me a model with a large number of negated literals.

This was done using a different solver (Optimize instead of Solver) and soft constraints (add_soft instead of add). Concretely, for each literal lit_n and an optimizing solver o_solver, this was added: o_solver.add_soft(Not(lit_n)); or, alternatively: o_solver.add_soft(Or(Not(lit1), Not(lit2), ...)).

However, I would now like to express a slightly more complicated bias. Concretely: if possible, I prefer models where half of the literals are True and half are False.

Is there any way I can express this and similar 'biases' using the Optimize tool?

Theo Deep

1 Answer


Here’s a simple idea that might help: count the number of positive literals and subtract the count of negative ones. Take the absolute value of the difference and minimize it using the optimizer.

That should find a solution where the counts of positive and negative literals are as close as possible, satisfying your “about half” criterion.

Here's a simple example. Let's say you have six literals and you want to satisfy their disjunction. The idiomatic solution would be:

from z3 import *

a, b, c, d, e, f = Bools("a b c d e f")

s = Solver()
s.add(Or(a, b, c, d, e, f))

print(s.check())
print(s.model())

If you run this, you'll get:

sat
[f = False,
 b = False,
 a = True,
 c = False,
 d = False,
 e = False]

So, z3 simply made a True and all others False. But you wanted to have roughly the same count of positive and negative literals. So, let's encode that:

from z3 import *

a, b, c, d, e, f = Bools("a b c d e f")

s = Optimize()
s.add(Or(a, b, c, d, e, f))

def count(ref, xs):
    s = 0
    for x in xs:
        s += If(x == ref, 1, 0)
    return s

def sabs(x):
    return If(x > 0, x, -x)

lits = [a, b, c, d, e, f]
posCount = count(True,  lits)
negCount = count(False, lits)
s.minimize(sabs(posCount - negCount))

print(s.check())
print(s.model())

Note how we "symbolically" count the negative and positive literals and ask z3 to minimize the absolute value of the difference. If you run this, you'll get:

sat
[a = True,
 b = False,
 c = False,
 d = True,
 e = False,
 f = True]

With 3 positive and 3 negative literals. If you had 7 literals to start with, it'd have found a 4-3 split. If you prefer at least as many positive literals as negative ones, you can additionally add a soft constraint of the form:

s.add_soft(posCount >= negCount)

to bias the solver that way. Hope this gets you started!

alias
  • 1) Hello, thanks for the response. I think I do not understand it yet: you mean count the number of positive (and negative) literals of what? If you mean a model, then how can I count the positive and negative literals of a model that has not been decided yet? I mean, if I have booleans `a`, `b`, `c` and `d` and I would like to get models like `[a=T, b=T, c=F, d=F]` or `[a=T, b=F, c=T, d=F]` (i.e. with 'about half'), how do I encode that? – Theo Deep Feb 08 '22 at 14:45
  • 2) I mean, I do guess you are giving me hints that I should implement a function that calculates that (the difference between positives and negatives) and give that function to the optimizer, in a kind of `o.optimize(f(x))`, where `f` calculates that difference. But I don't know what input parameters it would have (the `x`): it sounds to me like using the model as a parameter for that function, but the model has not been calculated yet. Probably I'm misunderstanding you, sorry :( – Theo Deep Feb 08 '22 at 14:45
  • I added a simple example. – alias Feb 08 '22 at 14:57
  • That completely solves my problem! However, it is making my head explode: I did not know about this kind of function. For instance, I tested `print(posCount)` and got a surprising response: `0 + If(a == True, 1, 0) + If(b == True, 1, 0) + If(c == True, 1, 0) + If(d == True, 1, 0) + If(e == True, 1, 0) + If(f == True, 1, 0)`. How can I learn more about these kinds of techniques? Is this some kind of polymorphism? For my problem it makes sense, but I definitely have a lack of knowledge. Still, it was what I thought you were suggesting. Thanks a lot again!! – Theo Deep Feb 08 '22 at 15:11
  • As for your last comment on preferring more positive literals than negative ones, doesn't `s.minimize(sabs(posCount - negCount))` conflict with `s.add_soft(posCount > negCount)`? Would the way to resolve it be to use different weights? – Theo Deep Feb 08 '22 at 15:14
  • Learning these techniques: it's like any other programming skill: after a while you pick up on the techniques. I've previously posted resources for learning; you might want to review that: https://stackoverflow.com/a/70795825/936310 – alias Feb 08 '22 at 15:20
  • Biasing towards positive/negative literals: yes, that can indeed interfere with the optimization goal itself. I'm not sure what the exact internal mechanisms would be; they're likely undocumented. You might want to review https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/nbjorner-nuz.pdf to see if they have talked about this; or ask in the z3 discussion forum: https://github.com/Z3Prover/z3/discussions – alias Feb 08 '22 at 15:23
  • I started a discussion at https://github.com/Z3Prover/z3/discussions/5824. Hopefully the authors will provide some insight into how soft assertions and optimization goals interact. – alias Feb 08 '22 at 16:07
  • Thanks a lot! I just read it; I did not know there was no way to attach a weight to a maximize/minimize, so my question was initially wrong when I asked directly how we could use that weight selection to solve the problem. Thanks again! – Theo Deep Feb 08 '22 at 16:31