I'm new to Prolog and logic programming in general. I'm writing a small theorem prover for fun, and as part of it I wrote a normalization procedure. I wanted this procedure to be deterministic and steadfast, so I wrote something like this:
% normal(Formula, NormalForm): rewrite Formula so that only and/2, false/1
% and atomic propositions remain; true/1, !/1, =>/2 and or/2 are compiled away.
normal(S, R) :- var(S), !, S = R.
normal(true(S), R) :- !, normal(S, R).
normal(!(S), R) :- !, normal(false(S), R).
normal(P => Q, R) :- !, normal(false(P and false(Q)), R).
normal(A or B, R) :- !, normal(false(false(A) and false(B)), R).
normal(false(S), R) :- !, normal(S, NS), normal_false(NS, R).
normal(A and B, R) :- !, normal(A, NA), normal(B, NB), normal_and(NA, NB, R).
normal(S, S) :- !.

% normal_false(S, R): R is the negation of the already-normalized formula S.
normal_false(S, R) :- var(S), !, S = false(R).
normal_false(false(S), S) :- !.
normal_false(true, false) :- !.
normal_false(false, true) :- !.
normal_false(S, false(S)) :- !.

% normal_and(A, B, R): R is the conjunction of the already-normalized
% formulas A and B, with the constants true and false folded away.
normal_and(A, B, R) :- var(A), var(B), !, R = A and B.
normal_and(A, true, A) :- !.
normal_and(true, B, B) :- !.
normal_and(_, false, false) :- !.
normal_and(false, _, false) :- !.
normal_and(A, B, A and B) :- !.
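
For reference, the connectives need operator declarations before the snippet will load; mine look roughly like this (the exact priorities shouldn't matter, except that they have to stay below 700 so that R = A and B reads as R = (A and B)):

:- op(630, xfy, =>).   % one workable choice of priorities/associativity
:- op(620, xfy, or).
:- op(610, xfy, and).

With that in place, these are the kinds of results I get, each succeeding deterministically as far as I can tell:

?- normal(p => q, R).
R = false(p and false(q)).

?- normal(!(!(p)), R).
R = p.

?- normal(a or false, R).
R = a.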
I'm now wondering whether this was the right way to do it. It currently seems to work, but I'm not sure whether it really has the properties I'm expecting in some edge cases, whether there are performance problems with the way I wrote it, or whether this is just bad coding style or practice in general.
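
One concrete edge case I'm unsure about is whether the cuts keep these predicates steadfast when the second argument is already instantiated. For instance, if I'm tracing my own code correctly:

?- normal(p and true, R).
R = p.

?- normal(p and true, p and true).
true.

The second call slips past the normal_and(A, true, A) clause (its head no longer unifies) and succeeds through the catch-all clause instead, which is the kind of discrepancy I'm worried about.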