For some reason, despite configuring a FasterXML Jackson CSV mapper to create a POJO-based schema, it insists that no suitable configuration has been provided. I get the following exception:

com.fasterxml.jackson.databind.JsonMappingException: No value type configured for ObjectReader
com.fasterxml.jackson.databind.ObjectReader._findRootDeserializer(ObjectReader.java:1371)
com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1265)
com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:897)
aol.model.core.services.admin.CSVParserService.parseCSVFileStreamAsClass(CSVParserService.java:42)
aol.rest.controller.AdminController.importCsvData(AdminController.java:30)
aol.rest.controller.AdminController$$FastClassBySpringCGLIB$$b9304c43.invoke(<generated>)
org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
...

My POJO is very simple:

import com.fasterxml.jackson.annotation.JsonPropertyOrder;

@JsonPropertyOrder({"firstName", "lastName", "age"})
public class Person {
    String firstName;
    String lastName;
    Integer age;
    public Person() {} // no other use than to avoid the no-suitable-constructor-found issue
    // getters and setters omitted for brevity
}
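
As a quick sanity check (a sketch, assuming the same Person class), dumping the schema that schemaFor() generates should list the three columns in the @JsonPropertyOrder order:

CsvMapper mapper = new CsvMapper();
CsvSchema schema = mapper.schemaFor(Person.class).withHeader();
// CsvSchema is Iterable over its columns, so this prints e.g. "0: firstName"
for (CsvSchema.Column column : schema) {
    System.out.println(column.getIndex() + ": " + column.getName());
}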

My parsing code is:

public MappingIterator<Person> parseCSVFileStreamAsClass(MultipartFile file) throws IOException {
    StringBuilder lines = new StringBuilder();
    String lineSeparator = System.getProperty("line.separator");
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(file.getInputStream()))) {
        for (String r = reader.readLine(); r != null; r = reader.readLine()) {
            lines.append(r).append(lineSeparator);
        }
    }

    CsvMapper mapper = new CsvMapper();
    CsvSchema schema = mapper
            .schemaFor(Person.class)
            .withHeader()
            .withLineSeparator(lineSeparator);
    MappingIterator<Person> out = mapper.reader(schema).readValue(lines.toString());
    return out;
}

The reason I handled the MultipartFile this way instead of reading the stream directly is to eliminate issues with mismatched line separators between the file and the mapper (I work on Windows [don't down-vote me :(], and the mapper's default line separator is only \n).

The data file is this:

firstName,lastName,age
"Paul","Smith","22"
"Jane","Crass","98"

I tried adding and removing the quotation marks (the default String delimiters). I also tried without the quotes around the age numbers, then added them back out of desperation. No joy!
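
(For reference, the schema side has quoting knobs too; a sketch reusing the mapper from above, in case anyone wants to rule them out:)

// Default behaviour: double quote is the quote character, so quoted values are unwrapped.
CsvSchema quoted = mapper.schemaFor(Person.class)
        .withHeader()
        .withQuoteChar('"');

// Disables quote handling entirely, so quote characters are read as part of the value.
CsvSchema unquoted = mapper.schemaFor(Person.class)
        .withHeader()
        .withoutQuoteChar();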

I looked at the documentation, other SO questions and blog posts. No luck.

2 Answers


Okay, so my colleague figured it out.

MappingIterator<Person> out = mapper.reader(schema).readValue(lines.toString());

needs to be changed to

MappingIterator<Person> out = mapper.reader(Person.class).with(schema).readValue(lines.toString());

The rest is fine. (mapper.reader(schema) only supplies the CSV schema; passing Person.class as well is what gives the ObjectReader a value type to bind each row to.)


One correction to the above answer: it should call readValues() instead of readValue(). readValue() binds a single value, whereas readValues() is what returns the MappingIterator.

MappingIterator<Person> out = mapper.reader(Person.class).with(schema).readValues(lines.toString());
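
Putting both answers together, the whole method ends up looking roughly like this (a sketch of the original code with only the reader call changed; readerFor() is the non-deprecated spelling of reader() in recent Jackson versions):

public MappingIterator<Person> parseCSVFileStreamAsClass(MultipartFile file) throws IOException {
    StringBuilder lines = new StringBuilder();
    String lineSeparator = System.getProperty("line.separator");
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(file.getInputStream()))) {
        for (String r = reader.readLine(); r != null; r = reader.readLine()) {
            lines.append(r).append(lineSeparator);
        }
    }

    CsvMapper mapper = new CsvMapper();
    CsvSchema schema = mapper
            .schemaFor(Person.class)
            .withHeader()
            .withLineSeparator(lineSeparator);
    // readerFor(Person.class) gives the ObjectReader a value type to bind each row to,
    // and readValues(...) returns a MappingIterator over all rows rather than a single value.
    return mapper.readerFor(Person.class)
            .with(schema)
            .readValues(lines.toString());
}

The caller can then iterate the result row by row or drain everything at once with readAll().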