It has been a long time since I worked with latent variable models, so I might be a bit off, but your statements do not sound right to me. You state that your model is perfect, but I do not see why it would be. It is a null model at best, which means it is a good baseline to compare other models against in order to evaluate whether they are an improvement. Comparing it to Mplus (the program I have worked with most), this would mean:
Although the baseline model in most SEM computer programs represents what is commonly termed the independence (or null) model, the baseline model in Mplus is somewhat different. Common to both baseline models is the assumption of zero covariation among the observed indicator variables. However, whereas the only parameters in the independence model related to a CFA model are the observed variable variances (given that no estimated means are of interest), those in the Mplus baseline model comprise both the variances and means (i.e., intercepts) of the observed variables. [...] Because baseline models assume zero covariation among the observed variables, it is not surprising that the chi-square value of these models is typically substantially larger than that of the structured hypothesized model.
I think your code will not estimate the covariances between the observed variables, so all shared variation will be captured by the single factor you specified. At least, I checked this lavaan tutorial to be certain that I am not misremembering, and your code appears to be the very basic/minimal CFA. To me, that makes it a baseline model.
I do not know what you mean by a perfect model. The only thing I can imagine is a model that allows for all of the variation directly observed in the data.
A saturated model is one in which the number of estimated parameters equals the number of data points [...] In contrast to the baseline (or independence) model, which is the most restrictive SEM model, the saturated model is the least restricted SEM model. Conceptualizing within the framework of a continuum, the saturated model would represent one endpoint, whereas the independence model would represent the other; a hypothesized model will always represent a point somewhere between the two.
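To make that continuum concrete, here is a quick parameter count for a hypothetical one-factor CFA with six indicators (the numbers are illustrative only, not taken from your model):

```python
# Degrees-of-freedom bookkeeping for a hypothetical one-factor CFA with
# p = 6 observed indicators (covariance structure only, no means).
p = 6
data_points = p * (p + 1) // 2        # unique variances + covariances = 21

# One-factor model: p loadings (the first fixed to 1, so p - 1 free),
# p residual variances, and 1 factor variance.
free_params = (p - 1) + p + 1         # = 12
df = data_points - free_params        # = 9

# Saturated model: as many free parameters as data points -> df = 0.
# Independence (null) model: only the p variances free -> df = 21 - 6 = 15.
print(data_points, free_params, df)   # 21 12 9
```

The saturated model uses up every data point (df = 0), the independence model almost none, and a hypothesized model sits in between, exactly as the quote describes.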
However, your model is not saturated. You do not explicitly state in your code that the first loading should be estimated; by default, lavaan fixes it to 1 to set the scale of the factor, so you will get estimated coefficients for all indicators except the first. It would have helped if you had provided some data to actually test this, but checking your code against the one in the lavaan tutorial, I have no reason to expect otherwise.
Note also that a saturated model has zero degrees of freedom: it is just-identified, reproduces the sample covariance matrix exactly, and therefore provides no test of fit. It is mainly a reference model used to compute fit indices, and I do not really understand why you would want to fit it yourself. The general goal of modeling is to find the most parsimonious model; the "perfect model" that takes into account all variation in the data is not parsimonious and therefore seldom of interest.
However, I do not remember whether an RMSEA of 0 indicates a saturated model. I do know that an RMSEA of 0 will occur whenever your degrees of freedom exceed the chi-square value. Looking at the meaning of the RMSEA:
absolute fit indices do not rely on comparison with a reference model in determining the extent of model improvement; rather, they depend only on determining how well the hypothesized model fits the sample data. [...] The RMSEA takes into account the error of approximation in the population and asks the question "How well would the model, with unknown but optimally chosen parameter values, fit the population covariance matrix if it were available?". This discrepancy, as measured by the RMSEA, is expressed per degree of freedom, thus making it sensitive to the number of estimated parameters in the model.
So, in other words, expecting an RMSEA of 0 means you expect that the values implied by your model are exactly the same as those observed in the population. I do not think that is a very likely situation to occur.
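To sketch why an RMSEA of 0 follows from chi-square falling below the degrees of freedom: the RMSEA point estimate is computed as the square root of max(χ² − df, 0) / (df · (N − 1)), truncated at zero. A minimal Python illustration with made-up values:

```python
import math

def rmsea(chi2, df, n):
    """Point estimate of the RMSEA (chi2, df, and n are hypothetical inputs)."""
    if df == 0:
        raise ValueError("RMSEA is undefined for a saturated model (df = 0)")
    # Truncated at zero: chi-square below df yields RMSEA = 0 exactly.
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(rmsea(chi2=8.5, df=9, n=300))             # chi-square below df -> 0.0
print(round(rmsea(chi2=45.0, df=9, n=300), 3))  # chi-square well above df
```

So an RMSEA of exactly 0 does not require a literally perfect model, only a chi-square that does not exceed the degrees of freedom; in practice that is still rare for real data.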
To me, your RMSEA is an indication that your one-factor model does not fit the data; from a psychology perspective (my field), it would be incredibly surprising if it did. You can compare this model to better-fitting models with two, three, or some other number of factors, and from those select the best-fitting, most parsimonious model. But please, do not just start estimating X-factor models at random; do it in an informed way. I have seen many researchers run lots of CFA models with various variations only to cherry-pick the ones they like, and that is not a good way of doing statistics.
Please check out David Kenny's website for more general information.
Source of my quotes: Structural Equation Modeling with Mplus: Basic Concepts, Applications, and Programming, by Barbara M. Byrne (2012).