Suppose that we are dealing with independent and identically distributed random variables X1,...,Xn. Then E[Xi] = mu and Var[Xi] = sigma^2 for each i = 1,...,n. So in your example, 4.037 would be a sample estimate of mu, and 1.727 would be a sample estimate of sigma.
Now what about this range (mu - sigma, mu + sigma)? The probability of Xi falling there is F(mu + sigma) - F(mu - sigma), where F is the cumulative distribution function of Xi. In the case of a normal distribution, that is indeed around 0.68. In other cases, it does not have to be anywhere close to 0.68. In fact, Chebyshev's inequality gives only F(mu + sigma) - F(mu - sigma) >= 0, a completely uninformative bound. For instance, in the Gamma(2,3) case the probability is around 0.74, while the t-distribution with 3 degrees of freedom gives around 0.82.
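These coverage probabilities can be checked exactly, since all three distributions have closed-form CDFs. A minimal stdlib-only sketch (my own illustration; the scale parameters cancel, which is why the Gamma rate 3 plays no role):

```python
from math import atan, erf, exp, pi, sqrt

def normal_coverage():
    # P(mu - sigma < X < mu + sigma) for any normal distribution
    return erf(1 / sqrt(2))

def gamma_shape2_coverage():
    # Gamma with shape 2: F(x) = 1 - exp(-x/theta) * (1 + x/theta),
    # mean = 2*theta, sd = sqrt(2)*theta; theta cancels in the coverage.
    F = lambda u: 1 - exp(-u) * (1 + u)   # CDF in units of theta
    return F(2 + sqrt(2)) - F(2 - sqrt(2))

def t3_coverage():
    # Student t with 3 df has sd = sqrt(3) and a closed-form CDF:
    # F(t) = 1/2 + (1/pi) * (x/(1 + x^2) + atan(x)), where x = t/sqrt(3)
    x = sqrt(3) / sqrt(3)   # evaluate at t = one standard deviation
    F = 0.5 + (x / (1 + x**2) + atan(x)) / pi
    return 2 * F - 1        # symmetric interval around 0

print(round(normal_coverage(), 4))        # ~0.6827
print(round(gamma_shape2_coverage(), 4))  # ~0.7376
print(round(t3_coverage(), 4))            # ~0.8183
```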
Now someone may suggest using the central limit theorem to argue that 0.68 holds for any probability distribution. That would not be right, however. What the central limit theorem says is where and how the sample mean concentrates, not every single observation.
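To see the distinction, here is a small simulation of my own (not from the question) using Exponential(1) data, for which mu = sigma = 1. Individual observations land in (mu - sigma, mu + sigma) about 86% of the time, while the mean of n = 50 observations lands in mu +/- sigma/sqrt(n) about 68% of the time, as the CLT predicts:

```python
import random
from math import sqrt

random.seed(0)
MU = SIGMA = 1.0   # Exponential(rate=1): mean 1, sd 1

# Coverage for a single observation: P(0 < X < 2) = 1 - e^-2, about 0.86
single = sum(MU - SIGMA < random.expovariate(1) < MU + SIGMA
             for _ in range(100_000)) / 100_000

# Coverage for the mean of n = 50 observations within mu +/- sigma/sqrt(n):
# this is the quantity the CLT drives toward 0.68
n, reps = 50, 20_000
hits = 0
for _ in range(reps):
    m = sum(random.expovariate(1) for _ in range(n)) / n
    hits += MU - SIGMA / sqrt(n) < m < MU + SIGMA / sqrt(n)

print(round(single, 3))       # around 0.86
print(round(hits / reps, 3))  # around 0.68
```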
So, without further assumptions, you can't really say much more with certainty. Even 0% of the data may fall within this interval: take Xi equal to mu - sigma or mu + sigma with probability 1/2 each, so that no observation lands strictly inside (mu - sigma, mu + sigma). If more sample statistics were available, it might be possible to get more precise bounds. Also, since I understand that these are time durations in seconds, you may look into distributions commonly used for such modelling, like the Gamma and Weibull. If you are willing to assume that your time durations follow one of those distributions, estimating the distributional parameters would let you estimate the range corresponding to any percentage, not just 68%.
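As a sketch of that last suggestion, under the assumption that the durations are Gamma distributed, the shape and scale can be estimated by the method of moments from your mean and standard deviation, and the central 68% range under the fitted model can then be read off (here via a stdlib Monte Carlo, since exact Gamma quantiles are not in the standard library):

```python
import random

# Sample statistics from the question
m, s = 4.037, 1.727

# Method-of-moments fit for Gamma(shape=k, scale=theta):
# mean = k*theta and var = k*theta^2, so k = m^2/s^2 and theta = s^2/m
k = m**2 / s**2
theta = s**2 / m

# Monte Carlo estimate of the central 68% interval of the fitted Gamma
random.seed(1)
draws = sorted(random.gammavariate(k, theta) for _ in range(100_000))
lo = draws[int(0.16 * len(draws))]
hi = draws[int(0.84 * len(draws))]
print(round(k, 2), round(theta, 2))
print(round(lo, 2), round(hi, 2))  # central 68% range under the fitted model
```

The same recipe gives any other central percentage by changing the 0.16/0.84 quantiles, which is exactly what the normal-distribution "68% rule" cannot do for skewed duration data.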