No, it is not a flaw; it is a feature. Performance should be evaluated on a test data set the algorithm has never seen.
If you want to cross-validate, even a simple study with Optuna might take you months to complete. It is not wrong to do that, but it is probably a waste of time, because Optuna's algorithm is a Bayesian optimizer, which cross-validation can only approximate.
That said, if you are doing machine learning and need a train/validate loop per epoch, I recommend using Jun Shao's proportion of n**(0.75) as your training set size, randomly chosen before training starts; not only is it faster, it is probably better.
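For concreteness, here is a minimal sketch of that setup, assuming scikit-learn and a synthetic dataset standing in for your own (the model, data, and the tuned hyperparameter are illustrative assumptions, not part of my original answer). The split of size n**(0.75) is drawn once, before the study starts, and every Optuna trial is scored against the same held-out validation set, with no cross-validation loop:

```python
import optuna
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical data standing in for your own problem.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Jun Shao's proportion: train on n**0.75 of the n samples,
# chosen randomly once before the study begins.
n = len(X)
n_train = int(n ** 0.75)  # e.g. n = 2000 -> n_train = 299
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, train_size=n_train, random_state=42
)

def objective(trial):
    # One tunable hyperparameter, purely as an illustration.
    C = trial.suggest_float("C", 1e-3, 1e3, log=True)
    model = LogisticRegression(C=C, max_iter=1000)
    model.fit(X_train, y_train)
    # Score on the single held-out validation set -- no CV loop.
    return accuracy_score(y_valid, model.predict(X_valid))

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```

Any data you never touched during tuning remains available as the unseen test set for the final evaluation described above.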
So while machine learning does require multiple training and validation passes, it is not necessary to cross-validate the model's performance if you are using Optuna. Please click the link above to see my answer on the Cross Validated Stack Exchange site; from there you can click through to the GitHub repo, but please comment first and/or see what others are saying.