
We have a 600W APC UPS and a server with two power adapters (each 800W). Originally, both power adapters were connected to the UPS, and the UPS was connected to the power grid. Unfortunately, the room where our server is located has no air con, so on a hot day the server drew more power than the UPS could deliver and the system failed.

From the spec it's obvious that a 600W UPS is too weak to power a 2×800W server. But our IT supplier argues that under normal circumstances the server would consume well below 600W, probably around 300W, and that our room was simply too hot.

But as we are a school with limited resources, we can neither buy a new UPS with 1,600W capacity nor install an air con. So I am considering the following setup:

  • 1 power adapter connected to the UPS, which is connected to the power grid
  • 1 power adapter directly connected to the power grid, bypassing the UPS

Justification: From this question I learned that a server with two power adapters distributes its load more or less evenly between both power sources. Then there would be 4 scenarios:

  1. Cold day: 300W power load by the server, 150W via the UPS, 150W directly from the power grid
  2. Cold day + power cut: UPS has to bear 300W
  3. Hot day: >600W power load by the server, >300W via the UPS, >300W directly from the power grid
  4. Hot day + power cut: UPS has to bear >600W
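The four scenarios above can be sketched as a quick sanity check. This is a minimal sketch assuming an even 50/50 split between the two supplies (an assumption; real servers may split the load unevenly) and a hot-day draw of 650W picked for illustration:

```python
# Quick sanity check of the four scenarios, assuming an even 50/50
# load split between the two power supplies (an assumption; real
# servers may split unevenly). The 650 W hot-day figure is illustrative.

UPS_CAPACITY_W = 600  # rated UPS output

def ups_load(server_draw_w, power_cut):
    """Return the load in watts that the UPS-fed supply must carry."""
    if power_cut:
        # Grid side is dead: the UPS-fed supply carries everything.
        return server_draw_w
    # Normal operation: each supply carries roughly half.
    return server_draw_w / 2

scenarios = [
    ("cold day",             300, False),
    ("cold day + power cut", 300, True),
    ("hot day",              650, False),
    ("hot day + power cut",  650, True),
]

for name, draw, cut in scenarios:
    load = ups_load(draw, cut)
    verdict = "OK" if load <= UPS_CAPACITY_W else "OVERLOAD"
    print(f"{name}: UPS carries {load:.0f} W -> {verdict}")
```

Under these assumptions, only the fourth scenario pushes the UPS past its 600W rating, which matches the reasoning above.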

As only case 4 is bad, the setup reduces the risk of system failure. Or am I wrong? In other posts I learned about UPS bypasses for maintenance or for redundancy, but here it would be a production setup. I should add that the main purpose of having a UPS is to protect the server hardware from damage in case of a power cut, not to ensure 99.99% uptime; 99% uptime is also OK for us. Also, power cuts at our location are rare, maybe 1-2 per year, so case 4 should be even rarer.

kinnla

1 Answer


It is difficult to give a yes/no answer, as there are many variables, and the biggest variable is what you, and your organization, want. Everything in IT depends on what you want to achieve. No UPS at all could be equally fine if that's what you require (OK, that's probably a rare case, but it could be).

First thing is to make sure your assumptions are correct.

  1. Is your server actually behaving the way you think it is when using two power supplies? Depending on the type of server, it may work as you expect and split the load between both. Some servers are configurable with different power modes (low power, maximum performance, balanced). This can generally be configured at boot time in the "BIOS" or through other management tools (such as on HPE servers).
  2. Test it. Don't assume anything, even after reading the specs. Trust the specs, but verify that the server is actually doing what they say.
  3. What are you trying to protect? A power-off is one thing; it's an impediment to using the system, but that's not the only reason we use a UPS. We also don't want servers turning off abruptly, because that can lead to data corruption. For example, I had a server that ran fine for a few years; one day it powered off, none of the drives would spin afterwards, and all the data was lost. There are other scenarios...
  4. Maybe condition 4 (high power draw, supposedly due to high heat, plus a power outage at the same time) seems unlikely to you, and maybe it is. But let's say that the one time a year it happens, you are in the middle of doing something very important, with lots of data in flight, and due to the interrupted power your server goes off and the data gets corrupted. You lose all your research work, worth millions, or thousands, or maybe it's all the grades of your class for the year and now there is no record of how the students did. Will you say "oh well, there were only 2 in 365 chances of this happening today, so be it", or will you get fired? Will people paid $100/hr have to redo 4 hours of work? At that point, your "savings" from not buying another UPS have been for nothing.

Now there is also another factor: how warm is warm? If it's under 80°F and the air is flowing, then fine. If it's above 80°F, air flowing or not, that's not fine; you may lose your server for reasons other than power outages. If it's above 70°F and the air isn't flowing, that's also not fine. These are rough temperatures, not scientific measurements; I'm just throwing these numbers out based on my general experience.

If your hard drives boil, you're likely to see faster failures. Solder joints expand with heat, and if your server suddenly goes cold (as in a power outage), you may end up with cracked solder joints on components, resulting in what's known as a "flaky server" (random, inexplicable failures).

Another aspect: UPS batteries should be kept in a cool environment. At 80°F, your battery life is reduced; the battery will need to be replaced sooner and may not hold as much charge as you expect. Check the UPS's operating temperature spec.
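To put a rough number on the temperature effect: a common rule of thumb for the sealed lead-acid batteries used in small UPSes is that expected service life roughly halves for every 10°C of ambient temperature above the rated 25°C (77°F). The exact derating curve is in the battery datasheet, so treat this sketch as an estimate only; the 4-year rated life and the halving interval are assumptions, not figures from any particular APC unit:

```python
# Rough estimate of UPS battery life vs. ambient temperature, using the
# common rule of thumb that sealed lead-acid battery life halves for
# roughly every 10 degC above the 25 degC rating. The 4-year rated life
# is an assumed example; check your battery's datasheet for real numbers.

def battery_life_years(rated_years, ambient_c, rated_c=25.0, halving_c=10.0):
    """Estimate battery service life at a given ambient temperature."""
    return rated_years * 0.5 ** ((ambient_c - rated_c) / halving_c)

# 80 F is about 26.7 C: only mildly above the rating.
print(battery_life_years(4.0, 26.7))
# A 35 C (95 F) server closet cuts the assumed 4 years to about 2.
print(battery_life_years(4.0, 35.0))
```

In other words, a room that regularly sits above 80°F doesn't just risk the server; it also quietly shortens the life of the very battery you're relying on.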

ETL
  • Thx for the detailed answer! We'll go ahead with the setup for now. Server data is not too important. The real issue indeed is the room temperature that easily rises above 80F during the summer. Have to address this. We'll also check the server settings if there's a way to save power. – kinnla Aug 19 '18 at 15:47
  • Update: we measured the power consumption on both supplies in average (cool) room temperature. It was 50 W on the one that is connected to the power grid and 70 W on the one connected via the UPS. We will keep the setup over this summer and see how it works. – kinnla Apr 04 '19 at 08:07