The situation you describe is not a legitimate data integrity issue, so the solution is not a constraint, no matter how it is implemented. Data integrity is concerned primarily with the validity of the data, not its accuracy. Accuracy is not a concern of the database at all.
Data integrity falls into two categories: context-free integrity and contextual integrity. Context-free integrity means you can verify the validity of a datum without referring to any other data: trying to write an integer value into a date field, for example (domain checking), or setting an integer field to "3" instead of 3, or setting it to 3 when the range is defined as "between 100 and 2000".
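Context-free checks are exactly what column types and CHECK constraints are for. A minimal sketch using SQLite from Python (the table and column names here are made up purely for illustration):

```python
import sqlite3

# Hypothetical table: an integer column constrained to the range 100..2000,
# a context-free check that needs no other data to evaluate.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE reading (
        id    INTEGER PRIMARY KEY,
        value INTEGER NOT NULL CHECK (value BETWEEN 100 AND 2000)
    )
""")

conn.execute("INSERT INTO reading (value) VALUES (599)")    # valid: in range

try:
    conn.execute("INSERT INTO reading (value) VALUES (3)")  # out of range
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The out-of-range insert is rejected outright; the row never enters the table. That is the defining behavior of a constraint, which matters for the distinction drawn below.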
Contextual integrity is when validity must be judged as part of a group of data. Any foreign key, for example: the value 3 may be perfectly valid in and of itself, but it can fail validity checking if the matching row in a different table doesn't exist.
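The foreign key case can be sketched the same way (again with invented table names; note that SQLite only enforces foreign keys when the pragma is switched on):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE county (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE city (
        id        INTEGER PRIMARY KEY,
        county_id INTEGER NOT NULL REFERENCES county(id)
    )
""")
conn.execute("INSERT INTO county (id, name) VALUES (3, 'Example County')")

# The value 3 is valid here only because county 3 exists.
conn.execute("INSERT INTO city (county_id) VALUES (3)")

try:
    conn.execute("INSERT INTO city (county_id) VALUES (99)")  # no county 99
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```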
Accuracy, on the other hand, is completely different. Let's look again at the integer field constrained to a range of between 100 and 2000. Suppose the value 599 is written to that field. Is it valid? Yes, it meets all applicable constraints. Is it accurate? There is no way to tell. Accuracy of the data, as the data itself, originates from outside the database.
But doesn't the ability to add up the populations of all cities within a county and compare the total to the overall county population mean that we can check for accuracy here?
Not really, or not in any significant way. Upon inserting a new city or updating a city's population value, we can test whether the total of all city populations exceeds the county population. We can alert the user to a problem, but we can't tell the user where the problem is. Is the error in the insert or update? Is there a too-large population value in an existing city that was entered earlier? Are there several such too-large values across many cities? Are all the city population values correct but the county population too small?
From within the database, there is no way to tell. The best we can do is verify the incorrect total and warn the user. We can't say "The population of city XYZ is too large" because we can't tell whether that is the problem. All we can report is that the total of all cities within the county exceeds the population defined for the county as a whole. It is up to the data owners to determine where the problem actually lies.
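So what we can legitimately build is a warning query, not a constraint. A sketch of that idea (hypothetical schema and figures; the point is that the query flags the symptom without identifying any particular bad row):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE county (id INTEGER PRIMARY KEY, name TEXT, population INTEGER);
    CREATE TABLE city   (id INTEGER PRIMARY KEY, county_id INTEGER REFERENCES county(id),
                         name TEXT, population INTEGER);
    INSERT INTO county VALUES (1, 'Example County', 100000);
    INSERT INTO city   VALUES (1, 1, 'A', 60000), (2, 1, 'B', 55000);
""")

def check_county_totals(conn):
    """Return counties whose summed city populations exceed the county figure.

    Note this cannot say WHICH row is wrong, or whether the county figure
    itself is the problem; it can only report that the totals disagree."""
    return conn.execute("""
        SELECT co.name, co.population, SUM(ci.population) AS city_total
        FROM county co JOIN city ci ON ci.county_id = co.id
        GROUP BY co.id, co.name, co.population
        HAVING SUM(ci.population) > co.population
    """).fetchall()

for name, county_pop, city_total in check_county_totals(conn):
    print(f"warning: cities in {name} total {city_total}, "
          f"but the county population is {county_pop}")
```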
This may seem like a small difference but a constraint determines that the data is invalid and doesn't allow the operation to continue ("Data Integrity: preventing bogus data from entering the database in the first place").
In the case of a city population, the data is not invalid. We can't even be sure it is wrong; it could well be absolutely correct. There is no reason to prevent the operation from completing and the data from entering the database.
It is nice if there is some ability to verify accuracy, but notice that this is not really such a case. As city data is entered into the database, the population values for most of the cities could be wildly erroneous; you won't become aware of a problem until the county population is finally exceeded. Some check is better than none, I suppose, but it only fires when the inaccuracies produce a too-large total. Inaccuracies could just as well result in a total that is too small. So some sort of accuracy check must be implemented from the get-go that tests for inaccuracies in either direction -- too large or too small.
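If such a reconciliation process is wanted, one shape it might take is a signed discrepancy report that surfaces gaps in both directions and leaves interpretation to the data owners. A sketch, with invented schema and figures (and note the caveat in the comment: a low city total is not necessarily wrong, since counties routinely contain population outside any city):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE county (id INTEGER PRIMARY KEY, name TEXT, population INTEGER);
    CREATE TABLE city   (id INTEGER PRIMARY KEY, county_id INTEGER,
                         name TEXT, population INTEGER);
    INSERT INTO county VALUES (1, 'Over County', 100000), (2, 'Under County', 100000);
    INSERT INTO city   VALUES
        (1, 1, 'A', 60000), (2, 1, 'B', 55000),  -- sum 115000: exceeds the county
        (3, 2, 'C', 10000);                      -- sum 10000: far below the county
""")

def discrepancy_report(conn):
    """Signed gap between each county figure and its city total.

    Negative means the cities sum to more than the county (certainly a
    conflict somewhere); a large positive gap is merely a hint, since
    unincorporated population can legitimately account for it."""
    return conn.execute("""
        SELECT co.name, co.population - SUM(ci.population) AS gap
        FROM county co JOIN city ci ON ci.county_id = co.id
        GROUP BY co.id, co.name, co.population
        ORDER BY co.id
    """).fetchall()

for name, gap in discrepancy_report(conn):
    print(f"{name}: gap of {gap}")
```

Even this only points at totals that look suspicious; deciding which numbers are actually inaccurate still falls to the data owners, which is the point of the paragraph above.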
That is what will maintain the accuracy of the data, not some misplaced operation within the database.