
I am still a bit confused about how rule 4.7 of the SystemVerilog 2012 standard is implemented. The rule states that in a situation like this:

module test;
  logic a;
  integer cnt;

  initial begin
    cnt = 0;
    #100;
    a <= 0;
    a <= 1;
    a <= 0;
    a <= 1;
    a <= 0;
  end

  always @(posedge a)
    begin
      cnt <= cnt + 1;
    end
endmodule

all assignments would be scheduled in the nonblocking assignment (NBA) queue and must then be executed in order; the last value wins. Up to here, it's all clear.
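As a quick sanity check (this $strobe block is an assumed addition to the module above, not part of the original example), something like the following shows both effects: the settled value of 'a' and whether cnt incremented.

initial begin
  // $strobe samples in the Postponed region, after every NBA update of the
  // time step has been applied, so it reports the settled values.
  #100 $strobe("a=%b cnt=%0d at t=%0t", a, cnt, $time);
  // Expected: a=0 everywhere (last value wins); cnt=1 where the intermediate
  // NBA updates generate edges, cnt=0 where they are filtered, per the
  // simulator behaviour described below.
end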

What happens next, though, is not the same for all simulators. iverilog and ModelSim (at least the Vivado 2016.3 edition) create one event on 'a', which causes cnt to increment. This also seems to match the behaviour illustrated by Mr. Cummings at SNUG 2000.

VCS, however, filters out the intermediate values and applies only the last one, which, incidentally, is also how real flip-flops work.

In this case it is not a purely hypothetical discussion: the simulation results differ, and the iverilog/ModelSim behaviour could cause bugs that are very difficult to catch, because the flop toggles but no value change is visible in the waveforms.

The other point is this: if iverilog/ModelSim are correct, why do they create one event and not two?

EDIT: Additional note.

The example above is admittedly not very meaningful. A more realistic case would be:

always @(posedge clk)
  begin
    clk2 <= 1'b1;
    if (somecondition)
      clk2 <= 1'b0;
  end

always @(posedge clk2, negedge rst_n)
  begin
    if (!rst_n)
      q <= 1'b0;
    else
      q <= ~q;
  end

This is perfectly legal and in real hardware would never glitch. The first always block is logically identical to:

always @(posedge clk)
  begin
    if (somecondition)
      clk2 <= 1'b0;
    else
      clk2 <= 1'b1;
  end

However, if you simulate the first version with ModelSim, you'll see q happily toggling away while clk2 stays constant at 0. This would be a debugging nightmare.

Marco
  • You should not use NBAs to make assignments to a gated clock. It adds skew to the clock path: FFs that cross from clk to clk2 will be off by a cycle (see the sketch after these comments). When modeling RTL without timing, you need to be aware of the scheduling semantics, which can produce undesirable artifacts. – dave_59 Nov 12 '16 at 14:46
  • Thanks Dave. I agree with you. Your answer is absolutely correct from the point of view of IEEE 1800-2012. I am still scratching my head over why the standard would require modeling a behaviour that does not match the hardware. I'd love to understand that. – Marco Nov 14 '16 at 06:13
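To make the skew mentioned in the first comment concrete, here is a minimal, hypothetical sketch (the signal names are made up, and it assumes the common simulator behaviour dave_59 describes, where all NBA updates of a time step are applied before the triggered process resumes):

module nba_clock_skew;
  logic clk = 0, clk2 = 0;
  int   q1 = 0, q2 = 0;

  always #5 clk = ~clk;                  // free-running clock

  always @(posedge clk)  clk2 <= ~clk2;  // NBA on the clock path (the issue)
  always @(posedge clk)  q1   <= q1 + 1; // launch flop in the clk domain

  always @(posedge clk2) begin           // capture flop in the clk2 domain
    q2 <= q1;
    // clk2's edge arrives one delta after clk's, and q1 has already been
    // updated in the same NBA region, so the crossing flop samples the new
    // q1; real flip-flops with aligned edges would capture the old value.
    $display("t=%0t captured q1=%0d (a real flop would see %0d)",
             $time, q1, q1 - 1);
  end

  initial #60 $finish;
endmodule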

1 Answer


Your last question is easy to explain. It's not that simulators create only one event; they don't. Rather, only the first event schedules the @(posedge) process to resume, and the other events happen in the NBA region before the always block resumes execution in the next Active event region.
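As an illustration (an assumed instrumentation of the test module in the question, not part of the original post), adding a $display to the always block makes the single resumption visible:

always @(posedge a) begin
  cnt <= cnt + 1;
  // Under the scheduling described above, this prints exactly once at time
  // 100: all five NBA updates to 'a' are applied in the NBA region, only the
  // first posedge schedules this process, and by the time it resumes 'a'
  // already holds its final value (0) and cnt still holds its old value.
  $display("resumed at t=%0t a=%b cnt=%0d", $time, a, cnt);
end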

I can't justify the behavior of other simulators. You are not allowed to make multiple assignments to the same flip-flop in real hardware, so your analogy is not that simple. It's possible to have an un-timed description and get multiple @(posedge)s without time passing, so filtering would prevent that coding style.
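For instance (a hypothetical idiom, not something from the thread), an un-timed testbench could legally rely on an intermediate NBA value to create a zero-width trigger:

module zero_width_pulse;
  logic start;

  initial begin
    start <= 0;
    start <= 1;   // intermediate value: produces the posedge
    start <= 0;   // final value: start settles back to 0
  end

  // Under the scheduling described in this answer the edge is seen at time 0;
  // a simulator that keeps only the last scheduled value never fires this.
  always @(posedge start)
    $display("start pulse seen at t=%0t", $time);
endmodule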

dave_59
  • Thank you Dave, your explanation for the second question is very clear. However, I do not agree with your second statement. I have edited the original question to add a better example, showing how the simulation result does not match silicon behaviour at all. – Marco Nov 12 '16 at 08:01