
I want to check validity of a formula with the following quantifier alternation: ∃∀.φ

  1. Do I check validity of this formula if I solve Exists(exist_vars, ForAll(forall_vars, Phi)) in Z3Py? What is the difference between checking validity and satisfiability of this formula? How do I check both in Z3?

I mean, I know that, e.g., ∃.ψ being unsat implies that ∀.¬ψ is valid, but I still do not understand the difference between validity and satisfiability. For instance, what does it mean for ∃.ψ to be valid?

I want to really understand this because of the following situation, going back to ∃∀.φ: I am going to keep the variables in the universal scope unchanged (i.e., reuse the ∀.φ part over and over), and only modify the existentially quantified part. So, to save some computation, I decided to "simplify" ∃∀.φ by performing quantifier elimination on ∀.φ, obtaining φ'.

The problem is that we know ∃φ' is equi-satisfiable with ∃∀.φ... but what about validity? I guess that if I solve ∃φ' and get SAT, then I am not checking whether ∃∀.φ is valid (only that ∃∀.φ is satisfiable), am I right?

  2. How can I check validity of the original formula while still using quantifier elimination to save some computation? Do I have to negate the formula, i.e., something like ¬∃¬φ'? How should I think about transformations between satisfiability and validity involving first-order theories and quantifier elimination?
Theo Deep

1 Answer


Do I check validity of this formula if I solve Exists(exist_vars, ForAll(forall_vars, Phi)) in Z3Py? What is the difference between checking validity and satisfiability of this formula? How do I check both in Z3?

To be clear, z3 has a solve function. Are you talking about using that? Here's an example:

from z3 import *

x, y = BitVecs('x y', 16)
solve(Exists([x], ForAll([y], x <= y)))

This prints:

[]

The meaning of this is that the formula is satisfiable, though due to quantifiers there's no model to display. You can also use prove:

prove(Exists([x], ForAll([y], x <= y)))

This prints:

proved

So, to answer your question: To check validity use prove. To check satisfiability use solve.

Internally, solve creates a problem and asks if it's sat. (This is the basic function provided by z3.) On the other hand, prove asks if the negation is satisfiable. You can do the same yourself, of course:

solve(Not(Exists([x], ForAll([y], x <= y))))

This prints:

no solution

which means it is not satisfiable; proving that ∃x∀y. x <= y is a valid formula when x and y are interpreted as 16-bit bit vectors.

All these techniques about quantifier elimination etc. are more or less irrelevant here. If you have the formula ∃∀.φ and you performed quantifier elimination to get rid of the universal and obtained ∃φ', then you do the same thing: to prove, check the satisfiability of ¬∃φ'. If you get unsat, your original formula is valid.

As usual, thinking about "concrete" examples helps understanding. If the above wasn't clear, I suggest you write down concrete examples and ask questions about concrete instances.
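For instance, here is a minimal sketch of that QE-then-prove workflow. (It is shown over integers so that the qe tactic can eliminate the universal quantifier; the particular formula, and the claim that the eliminated form amounts to x <= 0, are only illustrative.)

from z3 import *

x, y = Ints('x y')

# Original formula: Exists x. ForAll y. (y >= 0) -> (x <= y)
inner = ForAll([y], Implies(y >= 0, x <= y))

# Eliminate the universal quantifier. phi' is quantifier-free and still
# mentions the free variable x; here it should amount to x <= 0.
phi_prime = Tactic('qe')(inner).as_expr()
print(phi_prime)

# To check validity of Exists x. phi', check that its negation is unsat.
s = Solver()
s.add(Not(Exists([x], phi_prime)))
print(s.check())   # unsat => the original Exists/ForAll formula is valid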

Skolemization and negation do not commute

It's important to note that if you want to check validity by checking if the negation is unsatisfiable (which is a valid proof method), then you have to be careful if you also want to skolemize. In particular, you have to first negate, and then skolemize. If you first skolemize and then negate, that would be unsound.

Here's a concrete example to demonstrate. Let's say we want to check the validity of the formula:

∃x∀y.y >= x

where x and y are interpreted over 8-bit bit-vectors. This is a valid formula. The value of 0 for x is a witness, as no (unsigned) bit vector is less than 0 by definition.

The typical way to prove this in z3 would be:

from z3 import *

s    = Solver()
x, y = BitVecs('x y', 8)

phi = Exists([x], ForAll([y], y >= x))

# Negate
phi1 = Not(phi)

# Check unsat to prove phi
s.add(phi1)
print(s.check())

And the above prints unsat, establishing validity.

Let's say we want to play the skolemization game. The correct way would be to NEGATE the formula first, and then skolemize, like this:

from z3 import *

s    = Solver()
x, y = BitVecs('x y', 8)

phi = Exists([x], ForAll([y], y >= x))

# Negate, then skolemize. This is OK.
F = Function('F', BitVecSort(8), BitVecSort(8))
phi1 = ForAll([x], F(x) < x)

# Check unsat to prove phi
s.add(phi1)
print(s.check())

(I manually negated and then skolemized: the negation ¬∃x∀y. y >= x is ∀x∃y. y < x, and skolemizing the inner ∃y with a fresh function F gives ∀x. F(x) < x.) This also prints unsat, so we're good.

If, however, we make the mistake of skolemizing first and then negating, look what we get:

from z3 import *

s    = Solver()
x, y = BitVecs('x y', 8)

phi = Exists([x], ForAll([y], y >= x))

# Skolemize, then negate. NOTE THAT THIS IS UNSOUND!
phi1 = Not(ForAll([y], y >= x))

# Check unsat to prove phi
s.add(phi1)
print(s.check())
print(s.model())

And this prints:

sat
[x = 1]

which might lead you to think that the original formula is NOT valid, with x = 1 as the counter-example. But that's obviously not a valid conclusion. We made a mistake: we first skolemized and then negated, which is an unsound thing to do when proving validity via checking the satisfiability of the negation.

To sum up

You can always check validity of some formula φ by showing that ¬φ is unsatisfiable. But be careful: if you want to do skolemization for whatever reason, you have to skolemize ¬φ! You can't first skolemize φ, and then negate the remnant. That would be an unsound thing to do.

alias
  • Thanks a lot for the answer. My concern was that, when obtaining `∃φ'` from `∃∀.φ`, `∃φ'` is equi-satisfiable (not equivalent) to `∃∀.φ`. I did not know that validity and satisfiability were so closely related. Now I see that obtaining `∃φ'` from `∃∀.φ` also preserves "equi-validity". Thanks again. – Theo Deep Apr 03 '23 at 19:30
  • One extra question: if we have a solver `s`, is `s.check()` like `solve` or like `prove`? – Theo Deep Apr 03 '23 at 20:03
  • Validity and satisfiability are two sides of the same coin. "If valid, then negation is unsatisfiable" is pretty much all you need to remember, though. – alias Apr 03 '23 at 21:20
  • `solve` is pretty much shorthand for `s.check()`; it just saves you from declaring a solver first. `prove` is similar, except it negates the goal first. See https://github.com/Z3Prover/z3/blob/479f8442009987726e3c03fe5618b250acca383a/src/api/python/z3/z3.py#L9085-L9112 and https://github.com/Z3Prover/z3/blob/479f8442009987726e3c03fe5618b250acca383a/src/api/python/z3/z3.py#L9146-L9171 – alias Apr 03 '23 at 21:22
  • It makes sense; it's just that I never received formal training on this. I understand it as "if valid, then there is no model for the negated formula (i.e., the negation is unsat)". Thanks again. – Theo Deep Apr 03 '23 at 22:16
  • I should caution you that if you're trying to prove things by asking if the negation is `unsat`, you should keep that in mind before you do the skolemization. That is, you should skolemize the *negated version* of the formula. It would definitely be unsound to skolemize your formula first, and then negate the remnant! I'm guessing you don't do this anyhow, but feel free to ask a separate question if this comment sounds cryptic. For satisfiability there's no difference since there's no negation, but for validity, skolemization has to be done after the negation is applied. – alias Apr 03 '23 at 23:16
  • I think I will find myself in this situation and will probably ask a question, since I am now more into Skolem/Herbrand synthesis. Thanks a lot! – Theo Deep Apr 04 '23 at 09:20
  • Another rule of thumb: Skolemization and negation don’t commute. – alias Apr 04 '23 at 14:48
  • I added a concrete example showing why skolemization should be done *after* negation. Hope that helps. – alias Apr 05 '23 at 17:10
  • I just read the example regarding Skolemization, which was a question I was about to ask, so thank you very much! I am a little confused, though. Let us see if I can explain myself. – Theo Deep Apr 10 '23 at 10:57
  • First, let me sum up. As you explained, when we have `∃∀.φ` and perform QE, we get `∃φ'` (equi-satisfiable with `∃∀.φ`), and it is enough to check validity of `∃φ'` to know validity of the original `∃∀.φ`. Another option provided in this post is using Skolemization: we negate `∃∀.φ`, getting `¬∃∀.φ`, and perform Skolemization on it, obtaining a new `∀.φ''`, where `φ''` does not contain the variables bound by `∀` in `∃∀.φ` (i.e., `y` in the example). Now, it suffices to check that `∀.φ''` is unsat to know that the original `∃∀.φ` is valid. – Theo Deep Apr 10 '23 at 10:57
  • My question is: what is the “semantic” difference between the two methods? I mean, why is Skolemizing not the same as eliminating `y` via quantifier elimination? We are eliminating `y` either way! I know Skolemizing means obtaining the form with “only universals”, but this confuses me: it looks like we are eliminating `y` in both methods, yet one leaves a formula with the variable `x` existentially quantified and the other leaves a formula with (the same variable `x` and) universal quantification. I do not know whether I should post a new question for this. – Theo Deep Apr 10 '23 at 10:58
  • Skolemization doesn't eliminate variables. It just replaces existentials with fresh (Skolem) function symbols. (Only quantifier elimination can eliminate variables.) When you skolemize, you do not eliminate the existentials at all: they just become top-level function symbols, i.e., nested existentials move to the outermost positions in the formula. And this is why it doesn't produce equivalent formulas, but only "equi-satisfiable" ones. – alias Apr 10 '23 at 17:41
  • Here's another way to think about it. Skolemization moves existentials to the top, but doesn't "decide" your formula. It structurally alters it so you can apply further techniques to it. Quantifier elimination completely removes the variable. If you remove all variables, what you are left with is essentially a constant, up to the interpretation of other constants and function symbols. https://mathoverflow.net/questions/114083/why-skolemization is a good article to read. (A small sketch contrasting the two methods follows after these comments.) – alias Apr 11 '23 at 01:42
  • Yes, for me the key difference is that quantifier elimination yields a decision procedure when all variables are bound by quantifiers. Thanks again! – Theo Deep Apr 11 '23 at 14:47
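To make the distinction discussed in the last few comments concrete, here is a rough sketch; the integer sort, the toy formula, and the Skolem function F are chosen purely for illustration. Quantifier elimination of the inner universal leaves a quantifier-free constraint on x alone, whereas skolemizing the negation removes nothing: it only trades the inner existential for a fresh function symbol.

from z3 import *

x, y = Ints('x y')

# Toy formula: phi = Exists x. ForAll y. (y >= 0) -> (x <= y)
inner = ForAll([y], Implies(y >= 0, x <= y))

# Quantifier elimination really removes y: the result is a quantifier-free
# constraint on x alone (morally, x <= 0).
print(Tactic('qe')(inner).as_expr())

# Skolemizing the negation ForAll x. Exists y. (y >= 0) & (x > y) keeps the
# universal and only replaces the inner existential y by a fresh F(x); the
# result is equi-satisfiable with the negation, not equivalent to phi.
F = Function('F', IntSort(), IntSort())
negated_skolemized = ForAll([x], And(F(x) >= 0, x > F(x)))
print(negated_skolemized)

s = Solver()
s.add(negated_skolemized)
print(s.check())   # unsat, so phi itself is valid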