
We're in the process of updating our COBOL programs running on an IBM mainframe (z/OS) to support non-Latin characters.

I'm struggling a bit with understanding how COBOL processes UTF-16 characters defined as PIC N.

I need to update a field currently defined as PIC X to PIC N. This field then gets written to a file.

Example:

01 RecordToWrite PIC X(20).

I understand PIC N needs twice as much space as PIC X. What I don't know is how to define the corresponding PIC N field.

My guess would be that COBOL takes care of the conversion itself:

01 RecordToWrite PIC N(20).

But I'm really not sure if it's that simple.

Can I simply redefine the old field as PIC N without worrying about whether my file still looks the same? What measures do I need to take?

Chris
  • Try it; see what happens – Bruce Martin Aug 06 '17 at 02:42
  • That's what I usually would do. Unfortunately our DEV environment is not set up with suitable test data yet and our processes are very strict when it comes to promotion to higher test levels. – Chris Aug 06 '17 at 05:03
  • In one of my programs, I just used a PIC X field that was twice as large as I needed to account for double byte characters – SaggingRufus Aug 09 '17 at 12:37

1 Answer


Your field:

01 RecordToWrite PIC N(20).

will consume 40 bytes for its 20 characters, so you will need to make corresponding adjustments (record lengths, file definitions) when going from X to N.
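As a sketch of what that means in the data division (the field name is from your question; everything else here is illustrative, not a definitive layout):

```cobol
      *    Old definition: 20 characters, 20 bytes of storage.
      *01  RecordToWrite          PIC X(20).

      *    New definition: 20 national (UTF-16) characters,
      *    occupying 40 bytes of storage.
       01  RecordToWrite          PIC N(20).
```

If this record is written through an FD, the record length there must grow from 20 to 40 bytes as well, and any downstream job (sort cards, file transfers, DB2 loads) that assumes the old LRECL needs the same adjustment.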

You might also need to use the NATIONAL-OF and DISPLAY-OF intrinsic functions to convert existing data into UTF-16 (CCSID 1200). Once you have converted your data, you won't need to deal with that any more, and your 40-byte/20-character PIC N field will behave just like your old PIC X fields did -- unless you have to talk to unconverted parts of the system.
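A minimal sketch of that conversion step, assuming your legacy data is in an EBCDIC CCSID such as 1140 (substitute whatever CCSID your files actually use -- that choice is an assumption here, not something from your question):

```cobol
       01  Legacy-Field           PIC X(20).
       01  National-Field         PIC N(20).

      *    Convert legacy EBCDIC bytes to UTF-16 national characters.
           MOVE FUNCTION NATIONAL-OF(Legacy-Field, 1140)
             TO National-Field

      *    Convert back, e.g. for an interface that still expects
      *    the old single-byte encoding.
           MOVE FUNCTION DISPLAY-OF(National-Field, 1140)
             TO Legacy-Field
```

Characters with no representation in the target CCSID are replaced by a substitution character on the DISPLAY-OF path, so round-tripping is only lossless for data the legacy code page can express.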

Joe Zitzelberger