
New servers have NVDIMM-N modules, which save their contents to onboard flash when the system loses power. I've read that the save takes about a minute, and that the built-in battery lasts only seconds longer than the time required to copy the RAM into its flash.

However, I've read nothing about how long it takes to restore the information from flash.
So, for DDR4 NVDIMM-N modules, what is the maximum amount of time it takes to restore the memory and resume the server?

Tmanok

1 Answer


This is entirely down to whatever code pulls the data off the NVDIMM, and it may choose never to do so. It's entirely implementation-specific, so there's no one-size-fits-all answer, I'm afraid.
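
To make "entirely software driven" concrete, here is a minimal sketch in C of what an application-level restore path might look like, assuming the OS exposes the NVDIMM-N as a Linux persistent-memory device at /dev/pmem0. The device path, region size, magic value, and header layout are all illustrative assumptions, not part of any real product or spec; the point is that the hardware only guarantees the bytes survived the power loss, while the decision to validate and reuse them belongs to software.

```c
/* Minimal sketch: software decides whether to reuse NVDIMM-N contents.
 * PMEM_PATH, REGION_SIZE, MAGIC, and struct pmem_header are hypothetical. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define PMEM_PATH   "/dev/pmem0"    /* hypothetical pmem device node */
#define REGION_SIZE (16UL << 20)    /* map 16 MiB for the example    */
#define MAGIC       0x4E56444DU     /* illustrative "valid image" marker */

struct pmem_header {                /* illustrative on-media layout  */
    uint32_t magic;
    uint32_t checksum;              /* simple byte sum over payload  */
};

static uint32_t sum32(const uint8_t *p, size_t n)
{
    uint32_t s = 0;
    while (n--)
        s += *p++;
    return s;
}

int main(void)
{
    int fd = open(PMEM_PATH, O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    void *base = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    struct pmem_header *hdr = base;
    const uint8_t *payload = (const uint8_t *)base + sizeof *hdr;
    size_t payload_len = REGION_SIZE - sizeof *hdr;

    /* The "restore" decision lives here, in software: only reuse the
     * surviving contents if the marker and checksum are consistent. */
    if (hdr->magic == MAGIC &&
        hdr->checksum == sum32(payload, payload_len)) {
        puts("consistent image found: reusing saved state");
        /* ... hand the payload back to the application here ... */
    } else {
        puts("no usable image: initialising from scratch");
        memset(base, 0, REGION_SIZE);
        hdr->magic = MAGIC;         /* checksum of all-zero payload is 0 */
        msync(base, REGION_SIZE, MS_SYNC);
    }

    munmap(base, REGION_SIZE);
    close(fd);
    return 0;
}
```

The consistency check is the part the answer is pointing at: an application could equally well run this at startup, defer it for hours, or skip it entirely, which is why no fixed "restore time" exists.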

Chopper3
  • That doesn't make a whole lot of sense. Why wouldn't you opt to restore the data you're already backing up to the flash? Why even buy NVDIMMs if you don't want the data back? – Tmanok Feb 16 '18 at 17:24
  • Because it's an option, not mandatory; it's entirely software-driven by the OS and/or application. The hardware doesn't go near it; it just sees it as more NVMe-compliant storage for the OS to deal with. In reality you may well find that most applications choose to bring the data back in once consistency checks are complete, but it really is entirely software-driven and thus optional. – Chopper3 Feb 16 '18 at 17:30
  • Huh, that's pretty funky, but I guess everything new has an odd beginning. I can still appreciate that they're using DIMM slots to make "NVRAM" run on the system; pretty genius if you ask me. Hopefully they implement the tech as full-force NVRAM and not NVMe soon, as you mentioned. Thanks for the answer, Chopper :) – Tmanok Feb 16 '18 at 17:33
  • For your information, the NVMe spec started out as a DIMM-slot-only spec; obviously DIMMs have the highest off-die bandwidth and the lowest latency. Only as they moved towards production did someone suggest adding a PCIe header in front of the flash, giving us the M.2/U.2/PCIe-adapter versions we have now. – Chopper3 Feb 16 '18 at 18:00
  • Oh, and future/NDA server CPUs will have two classes of DIMM slots: one for DRAM that deals with the whole ranking/power issue, and a second class of NVDIMM slots that will allow for more, but slightly slower, NVDIMM modules per socket. Think, instead of today's 6 channels of 2 DIMMs per socket, something more like 4 channels of 2 DIMMs plus 2 channels of 8 NVDIMMs per socket :) – Chopper3 Feb 16 '18 at 18:01