My question is about the different CP type triggers on a NetApp filer. I have looked around a lot and found good descriptions for most of them, but some explanations are a bit general.
Here is the list of the CP types (as shown by the sysstat command) along with an explanation of the ones I already know. Please help me understand the rest (and correct me if I got anything wrong):
T - Time. A CP occurs 10 seconds after the last CP if no other trigger has fired in the meantime.
F - Full NVLog. The NVRAM is divided into two sections (four in an HA pair configuration - half is a mirror of the HA partner). When one section fills up, a CP occurs and its data is flushed to disk; in the meantime the other section takes the incoming writes.
B - Back to back. While a CP is being committed, the second half of the NVLog fills up and needs to flush before the first one has finished. This situation causes latency problems and means that the filer is having a hard time keeping up with the write load.
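To make sure I understand the double-buffered NVLog behavior I described above, here is a toy simulation I put together (this is obviously not ONTAP code - the sizes, rates, and function names are all made up for illustration). It just models two log halves: writes fill one half while the other drains, and if the active half fills while the previous CP is still draining, that is the back-to-back condition:

```python
# Toy model of the double-buffered NVLog described above.
# All sizes and rates are made-up illustration values, not ONTAP internals.

def simulate(write_rate, flush_rate, half_size=100, steps=50):
    """Return the list of CP triggers observed:
    'F' = a half filled normally, 'B' = back-to-back
    (the active half filled while the other was still flushing)."""
    active = 0      # data in the half currently taking writes
    flushing = 0    # data left to flush from the other half
    events = []
    for _ in range(steps):
        flushing = max(0, flushing - flush_rate)  # CP drains the old half
        active += write_rate                      # writes fill the new half
        if active >= half_size:                   # half is full -> CP
            if flushing > 0:
                events.append('B')  # previous CP not done: back-to-back
            else:
                events.append('F')  # normal full-NVLog CP
            flushing += active      # start flushing this half
            active = 0              # swap halves: the other takes writes
    return events

# A filer keeping up sees only F CPs; an overloaded one starts seeing B CPs.
print(simulate(write_rate=10, flush_rate=50))  # flush outpaces writes
print(simulate(write_rate=40, flush_rate=10))  # writes outpace flush
```

If this model matches reality, it shows why B implies latency: incoming writes have nowhere to go until the previous CP frees a half.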
b - I need help from you guys with this one. Every source I read only states that this is also a back-to-back CP, worse than B, but no one explains exactly what the difference is or when this one is shown instead of the other.
S - Snapshot. Right before the filer takes a snapshot, it commits a CP so the snapshot will be in a consistent state.
Z - I need your help with this one as well. Everything I found just says that this is a CP that happens in order to sync the machine and occurs before snapshots. So what is the need for this one if we have S? What is the difference between them?
H - High water mark. I AM NOT SURE I GOT THIS ONE CORRECT BUT - when there is a lot of changed data in the memory buffers (RAM, not NVRAM!), the filer commits a CP in order to flush it and get the buffers clean.
L - Low water mark. I AM NOT SURE I GOT THIS ONE CORRECT BUT - when there is little space left in the memory buffers (RAM, not NVRAM!), the filer commits a CP in order to flush and get the buffers clean. So the difference from H would be that H is about a changed-data threshold, while this is about the data in the buffers as a whole (if I got it right).
U - flUsh. When an application using asynchronous writes asks that its data be flushed down to persistent storage.
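For context on what I mean by U: on a POSIX client, the kind of request I have in mind is an explicit fsync() after buffered writes. A minimal illustration (ordinary Python on any POSIX-ish system, nothing NetApp-specific - the function name is mine):

```python
import os

def durable_write(path, data):
    """Write data and explicitly push it to stable storage.
    Over NFS to a filer, an explicit flush like this is the kind
    of request I assume triggers the 'U' CP type."""
    with open(path, "w") as f:
        f.write(data)
        f.flush()              # userspace buffer -> kernel page cache
        os.fsync(f.fileno())   # kernel -> stable storage
    with open(path) as f:      # read back to confirm the write landed
        return f.read()

print(durable_write("demo.txt", "must survive a crash\n"))
```

Without the fsync(), the client OS could hold the data in its cache indefinitely, so the storage side would never see a flush request at all.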
V - low Virtual buffers. I have no idea what this one means - help?
M - low Mbufs. I have no idea what this one means - help?
D - low Datavects. I have no idea what this one means - help?
N - max entries in the NVLog. What is the difference between this one and F?
So, in summary, I need help with:
- The difference between B and b (a real one - not just that b is worse)
- The difference between S and Z
- The difference between F and N
- Any information about the V, M & D types
- Validation that I got things right - specifically L, H, and U - would be appreciated
Thanks in advance.