It depends. The error of a HyperLogLog can be assumed to be normally distributed if the number of inserted elements is significantly larger than the number of registers, which is 2^14 = 16384 in the Redis implementation. If the elements are sharded evenly over multiple HyperLogLogs and the number of elements per HyperLogLog is still larger than the number of registers, then the total cardinality estimate obtained by summing up the individual cardinality estimates will have a smaller relative error.
The reason is that the sum of N independent, normally distributed estimates, each with mean M and standard deviation S, is again normally distributed with mean N x M and standard deviation S x SQRT(N). The relative error therefore changes from S / M to S x SQRT(N) / (N x M) = S / (M x SQRT(N)), an improvement by a factor of SQRT(N).
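A quick simulation illustrates the SQRT(N) improvement. The cardinality and shard counts below are arbitrary, and each shard's estimate is simply modeled as a normal variable whose standard deviation follows the standard HyperLogLog error formula (about 1.04 / SQRT(16384), roughly 0.81% for 2^14 registers), not as an actual HyperLogLog:

```python
import math
import random

random.seed(42)

# Relative standard error of a single HyperLogLog with 2^14 registers,
# per the standard HLL error formula: 1.04 / sqrt(m) with m = 16384.
REL_ERR = 1.04 / math.sqrt(2 ** 14)

def rms_relative_error(total_cardinality, num_shards, trials=2000):
    """Model each shard's estimate as a normal variable with mean M
    (elements per shard) and standard deviation REL_ERR * M, sum the
    shard estimates, and return the RMS relative error of the sum."""
    m = total_cardinality / num_shards
    sq_errs = 0.0
    for _ in range(trials):
        est = sum(random.gauss(m, REL_ERR * m) for _ in range(num_shards))
        sq_errs += ((est - total_cardinality) / total_cardinality) ** 2
    return math.sqrt(sq_errs / trials)

e1 = rms_relative_error(10_000_000, 1)
e16 = rms_relative_error(10_000_000, 16)
print(f" 1 shard : {e1:.4%}")
print(f"16 shards: {e16:.4%}")  # roughly e1 / SQRT(16) = e1 / 4
```

Note that this only models the summation step; it assumes every shard still holds far more elements than registers, so the normal-error assumption holds.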
However, this sharding approach does not work for arbitrary numbers of HyperLogLogs. Once the per-shard cardinalities drop below the number of registers, the assumption of normally distributed errors is violated and the improvement of the estimation error becomes smaller or even negligible.