It is required to update a field of all rows in a table with spaces. The table has around 1 million records. Could you please suggest the best way to do it with the shortest processing time? Thanks
-
There may be the following 2 solutions. Solution 1) update table1 set col1 = spaces; which will update all rows but can take a lot of time (without the help of load utilities). Solution 2) (a) unload all rows of table1 to a file using the unload utility, (b) replace the col1 value with spaces, (c) load the table empty, (d) load the table with the modified file. Need to know which solution is better. – user2054151 Mar 05 '15 at 09:46
-
Which operating system? Which version of DB2? Which third-party utilities do you have available? What do your DBAs say? For example, IBM's utilities for DB2 on z/OS have come a long way in the last few versions and are now on par with CDB's. Whatever the answer is today, it may change with the next version. Is this a one-time task? – cschneid Mar 05 '15 at 12:39
-
A million rows isn't a lot. What you are doing is very simple. Test both methods. With different volumes and different complexity and different tables different solutions will perform better. – Bill Woodger Mar 05 '15 at 18:18
2 Answers
What version and platform? I see a COBOL tag, so I assume DB2 for z/OS or DB2 for IBM i.
I have a hard time believing an unload/reload as mentioned would be best on any platform, as it would involve at least twice the I/O of a simple update.
On DB2 for IBM i (aka AS/400, iSeries)...I know an unload/reload wouldn't be faster. The following apply to DB2 for i.
1) simple UPDATE MYTBL SET MYCOL = ' ' WITH NC;
...as long as the file is NOT journaled, or you can turn off journaling for that table long enough to run the update. On an old, little bitty 515 running 7.1, I can update 400K rows in just over 13 seconds. I'd expect to update 1M in less than 35 seconds.
2) If you absolutely can not stop journaling on the table, then your next fastest method would be to use COBOL or RPG record level access (RLA, aka native I/O). Make sure you read the table in arrival sequence and that you open it under commitment control. You'll also want to commit the changes after every X records, where X = 128KB / (size of row), rounded down.
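The commit-interval arithmetic above can be sketched as follows. This is a minimal illustration only: it uses Python's sqlite3 as a stand-in for DB2 record level access, and the 120-byte row size plus the table/column names (`mytbl`, `mycol`) are assumptions.

```python
import sqlite3

# Hypothetical row length -- substitute your table's actual record size.
ROW_SIZE = 120                              # bytes per row (assumption)
COMMIT_EVERY = (128 * 1024) // ROW_SIZE     # X = 128KB / row size, rounded down

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytbl (id INTEGER PRIMARY KEY, mycol TEXT)")
conn.executemany("INSERT INTO mytbl (mycol) VALUES (?)", [("data",)] * 5000)
conn.commit()

# Read the table in arrival (insert) sequence, blank the column,
# and commit every COMMIT_EVERY rows, as the answer suggests.
updated = 0
for (rowid,) in conn.execute("SELECT id FROM mytbl ORDER BY id").fetchall():
    conn.execute("UPDATE mytbl SET mycol = ' ' WHERE id = ?", (rowid,))
    updated += 1
    if updated % COMMIT_EVERY == 0:
        conn.commit()
conn.commit()   # commit any remainder
```

With a 120-byte row, COMMIT_EVERY works out to 1092 rows per commit; the point of batching is to keep journal/commit overhead low without holding one giant unit of work open across the whole million rows.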
Another consideration is whether or not the column you are updating is used in an index or keyed logical file. You don't want the system to update the access path for each record updated. If an index, it's probably easiest to drop the index and rebuild it after you update the file. If a logical file, you can remove the member using RMVM before the update and re-add it after the update with ADDLFM.
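The drop-index / mass-update / rebuild sequence can be sketched like this, again using sqlite3 purely as a stand-in (on DB2 for i the equivalents would be SQL DROP INDEX / CREATE INDEX, or RMVM/ADDLFM for a keyed logical file). The index name `mytbl_ix` is an assumption.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytbl (id INTEGER PRIMARY KEY, mycol TEXT)")
conn.execute("CREATE INDEX mytbl_ix ON mytbl (mycol)")
conn.executemany("INSERT INTO mytbl (mycol) VALUES (?)", [("x",)] * 1000)
conn.commit()

# Drop the index first so the access path is not maintained
# row by row during the mass update...
conn.execute("DROP INDEX mytbl_ix")
conn.execute("UPDATE mytbl SET mycol = ' '")

# ...then rebuild it once, after all rows are updated.
conn.execute("CREATE INDEX mytbl_ix ON mytbl (mycol)")
conn.commit()
```

One rebuild over the finished table is far cheaper than a million incremental access-path updates, which is the answer's point.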

-
Unload, reload NOLOG will use far less overhead, including I/O. Unless it is (unlikely) static/ephemeral data, you will always log activity on z/OS. Unload, fast unload should be the way. Modifying the datasets has always led to disaster sooner or later in the shops I have worked in. – mckenzm Mar 06 '15 at 22:11
-
If elapsed time is your only issue, test for it. It may be slower to unload if the index rebuilds are heavy. LOAD REPLACE with suppressed log/check will also naturally reorg the table. – mckenzm Mar 06 '15 at 22:27
Using the UPDATE command will take a lot of time and CPU cost. The best way is to unload all the data to a file, update the value of the field using a COBOL program, and load the data back with LOAD REPLACE.
Note --> Please take a backup of the table data before this.
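The middle step of this approach (blanking the column in the unloaded file) can be sketched as follows. The answer uses a COBOL program for this step (a commenter suggests SORT instead); the fixed-width record layout and the column offsets used here are assumptions for illustration and would have to match your actual unload layout.

```python
# Hypothetical layout of one unloaded record: COL1 occupies
# bytes 10..14 (0-based, 5 bytes). Adjust to your DCLGEN/unload map.
COL_START, COL_LEN = 10, 5

def blank_col(record: str) -> str:
    """Return the record with COL1 overwritten by spaces,
    leaving everything before and after it untouched."""
    return record[:COL_START] + " " * COL_LEN + record[COL_START + COL_LEN:]

# Example: key, then COL1 value "ABCDE", then the rest of the row.
rec = "KEY0000001ABCDEtrailing-data"
print(blank_col(rec))   # KEY0000001     trailing-data
```

In practice this transform would be applied line by line (or record by record for fixed-block datasets) between the unload and the LOAD REPLACE.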

-
Yes, I am thinking the same way, although a SORT JCL will be more efficient compared to COBOL for updating the value in the file. – user2054151 Mar 05 '15 at 11:21
-