I am working on a module where I need to parse a CSV file, pick a few fields from it, and process those fields before persisting them to my database.
What I am doing is: using OpenCSV, I read the file into a List<String[]>. I then loop over this list to map the CSV data and build a list of hash maps (List<Map<String, String>>), which serves as my final request.
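A minimal sketch of this flow (the file path is a placeholder, and I am assuming the first row of the CSV holds the column names):

    import com.opencsv.CSVReader;

    import java.io.FileReader;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public static List<Map<String, String>> readAsMaps(final String path) throws Exception {
        final List<Map<String, String>> request = new ArrayList<>();
        try (CSVReader reader = new CSVReader(new FileReader(path))) {
            List<String[]> rows = reader.readAll();
            String[] header = rows.get(0); // assumes the first row is the header
            for (int i = 1; i < rows.size(); i++) {
                String[] row = rows.get(i);
                Map<String, String> record = new HashMap<>();
                for (int c = 0; c < header.length; c++) {
                    record.put(header[c], row[c]);
                }
                request.add(record);
            }
        }
        return request;
    }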
A HashMap has some performance overhead, and creating a list of hash maps doesn't seem right to me from a performance perspective. I have read that using Java beans is much more efficient, so I want to know the best approach.
When I use a hash map in the request, I can validate my request data (null/blank checks) with the code below, which is generic: it works for all request data without any hardcoded getter calls. (StringUtils is from Apache Commons Lang.)
public static boolean validateDataValues(final Map<String, String> data, final List<String> keys) {
    for (String key : keys) {
        if (StringUtils.isBlank(data.get(key))) { // true for null, empty, and whitespace-only strings
            return false;
        }
    }
    return true;
}
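Calling it looks like this (the key names are made up for illustration):

    List<String> mandatoryKeys = Arrays.asList("name", "email", "department");
    for (Map<String, String> record : request) {
        if (!validateDataValues(record, mandatoryKeys)) {
            // reject or log the invalid record
        }
    }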
Also, using a hash map allows me to carry additional data in the request that might not be part of the CSV. If I use a Java bean in its place, I'll have to call the getter of each mandatory field for validation. So, while coding, should I weigh the performance factor in favor of Java beans, or the less-code factor in favor of the hash map? For reference, a sketch of the bean-based alternative follows.
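This is roughly what the bean-based version would look like; the class, column names, and the use of Bean Validation instead of per-getter checks are my assumptions, not code I have running:

    import com.opencsv.bean.CsvBindByName;
    import com.opencsv.bean.CsvToBeanBuilder;

    import javax.validation.ConstraintViolation;
    import javax.validation.Validation;
    import javax.validation.Validator;
    import javax.validation.constraints.NotBlank;

    import java.io.FileReader;
    import java.util.List;
    import java.util.Set;

    public class EmployeeRecord {
        @CsvBindByName(column = "name")
        @NotBlank
        private String name;

        @CsvBindByName(column = "email")
        @NotBlank
        private String email;

        // Extra data that is not part of the CSV can still live on the bean.
        private String source;

        public String getName() { return name; }
        public String getEmail() { return email; }
        public void setSource(final String source) { this.source = source; }
    }

    // OpenCSV binds columns to fields by header name; no manual mapping loop needed.
    List<EmployeeRecord> records = new CsvToBeanBuilder<EmployeeRecord>(new FileReader("employees.csv"))
            .withType(EmployeeRecord.class)
            .build()
            .parse();

    // Bean Validation checks every @NotBlank field generically,
    // so there are no hand-written getter calls per mandatory field.
    Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
    for (EmployeeRecord r : records) {
        Set<ConstraintViolation<EmployeeRecord>> violations = validator.validate(r);
        if (!violations.isEmpty()) {
            // reject or log the invalid record
        }
    }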
Please suggest which approach would be better here.