I have never backed a subfile with an array, though I suppose it would be a useful technique if you were going to have more than 9,999 records. Instead, I have always just read into it from a cursor. In fact, mixing SQL and procedures makes populating a subfile extremely easy. You can even use a multi-row fetch if it makes sense to you. Here is a simple example (single-row fetch):
dcl-proc LoadSubfile;
  dcl-pi *n;
    keyFields Type;
  end-pi;

  dcl-ds sflin LikeRec(sfl: *Input) Inz;

  ClearSubfile();
  OpenCursor(keyFields);

  dow FetchCursor(record);
    eval-corr sflin = record;
    PopulateHidden(sflin);
    rrn += 1;
    write sfl sflin;
  enddo;

  CloseCursor();
  rrnMax = rrn;
end-proc;
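If you prefer the multi-row fetch mentioned earlier, the loop changes shape: you fetch a block of rows into a data structure array and write each element. This is only a sketch; the cursor C1 and names like result and rows are illustrative, not part of the original example, and C1 is assumed to be declared and opened over the same columns as the subfile record.

  // Sketch: block fetch into a DS array dimensioned like the subfile record
  dcl-ds result LikeRec(sfl: *Input) Dim(100) Inz;
  dcl-s rows int(10);
  dcl-s i int(10);

  dou rows < 100;                          // a short block means we hit the end
    exec sql fetch next from C1 for 100 rows into :result;
    exec sql get diagnostics :rows = ROW_COUNT;
    for i = 1 to rows;
      rrn += 1;
      write sfl result(i);                 // write one subfile record per row
    endfor;
  enddo;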
There are some things here that are not defined. For instance, FetchCursor() returns an indicator that is *On when a record was returned, and PopulateHidden() populates hidden fields in the subfile record. I use hidden fields in an editable subfile to hold the original values of fields that can be changed. I define the subfile fields the same as the record fields so that I can do an eval-corr to get them into the I/O data structure. I also check the subfile RRN for overflow if I believe it is possible for there to be more than 9,999 records in the database; in that case I throw a "subfile full" message with directions to filter the record set.
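That overflow check can sit inside the load loop, before the write. A minimal sketch, where MAXRRN and SendSflFullMsg() are hypothetical names standing in for whatever constant and message procedure you use:

  dcl-c MAXRRN 9999;     // hypothetical constant: largest valid subfile RRN

  if rrn >= MAXRRN;
    SendSflFullMsg();    // hypothetical: tell the user to filter the record set
    leave;               // stop loading; the subfile is full
  endif;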
Some other things you didn't ask about, but I will mention since you asked about best practices: unless there is a reason to avoid it, I use SFLCLR to clear the subfile, and I generally load the entire subfile in one shot unless I suspect there will be thousands of records. Many of the old optimizations, like SFLNXTCHG and loading a single page at a time, were put in place because communications were slow. Not so much the twinax communications, but the ASCII workstation controller, or remote workstations that were often on the far side of a communications line much slower than twinax. This is no longer true. I tend to avoid those old hacks meant to preserve bandwidth because they only complicate the code. Write the whole thing at once, and when processing, read through the whole subfile with a for loop.
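Reading the whole subfile back with a for loop just means chaining by RRN from 1 to the count you saved at load time. A sketch, assuming the rrnMax and sflin from the load example; the field names qty and the hidden original qtyOrig are hypothetical:

  dcl-s i uns(5);

  for i = 1 to rrnMax;
    chain i sfl sflin;                          // random read by relative record number
    if %found and sflin.qty <> sflin.qtyOrig;   // compare to the hidden original value
      UpdateRecord(sflin);                      // hypothetical update procedure
    endif;
  endfor;

Comparing against the hidden original is what lets you skip SFLNXTCHG entirely and still update only the rows the user actually changed.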
Also, in the future, these questions should go on Code Review if you have working code that you want best-practices feedback on, or perhaps Software Engineering if you are more interested in theoretical answers. Stack Overflow is more for answering specific, objective questions.