I have a docx file containing Chinese characters and text in other Asian languages. On my laptop the code below converts the docx to PDF perfectly, with the Chinese characters embedded correctly, but when the same code is run as a runnable jar on a Linux server, every Chinese character is replaced with a `#` symbol. Can someone please guide me with this problem? Thanks in advance for any help. The Java code is given below:
public static void main(String[] args) {
    try {
        // Silence docx4j logging
        Docx4jProperties.getProperties().setProperty("docx4j.Log4j.Configurator.disabled", "true");
        Log4jConfigurator.configure();
        org.docx4j.convert.out.pdf.viaXSLFO.Conversion.log.setLevel(Level.OFF);

        System.out.println("Loading input docx file");
        InputStream is = new FileInputStream(new File(
                "C:/Users/nithins/Documents/plugin docx to pdf/other documents/Contains Complex Fonts Verified.docx"));
        WordprocessingMLPackage wordMLPackage = WordprocessingMLPackage.load(is);
        wordMLPackage.setFontMapper(new IdentityPlusMapper());

        // Note: this line has no effect -- "file.encoding" is read once at JVM
        // startup, and Identity-H is a PDF font encoding, not a Java charset.
        System.setProperty("file.encoding", "Identity-H");

        System.out.println("Generating PDF file");
        org.docx4j.convert.out.pdf.PdfConversion c =
                new org.docx4j.convert.out.pdf.viaXSLFO.Conversion(wordMLPackage);
        File outFile = new File(
                "C:/Users/nithins/Documents/plugin docx to pdf/other documents/Contains Complex Fonts Verified.pdf");
        OutputStream os = new FileOutputStream(outFile);
        c.output(os, new PdfSettings());
        os.close();
        is.close();
        System.out.println("Output PDF file generated");
    } catch (Exception e) {
        e.printStackTrace();
    }
}
public static String changeExtensionToPdf(String path) {
    int markerIndex = path.lastIndexOf(".docx");
    if (markerIndex < 0) {
        // Guard: lastIndexOf returns -1 when the path has no ".docx" suffix,
        // which would otherwise throw StringIndexOutOfBoundsException below.
        return path + ".pdf";
    }
    return path.substring(0, markerIndex) + ".pdf";
}