How can a dump & load be done faster in Progress? I need to automate the dump & load process so that I can run it on a weekly basis.
If there is an actual reason why you need to dump & load weekly, please provide some details. Otherwise, as Tim said, you shouldn't worry about it. – Tom Bascom Feb 02 '12 at 16:58
I have two different servers, Linux and UNIX. I want to dump data from Linux, SCP it to UNIX, and load it there. I use the UNIX platform for training purposes, so I need the data current on both sides. That is why I need a faster dump & load process. I have heard that a simultaneous dump and load is possible? – sajid shaikh Feb 03 '12 at 19:14
2 Answers
Generally one shouldn't need to do a weekly D&L, as the database engine does a decent job of managing its data. A D&L should only be done when there's an evident performance concern, when changing versions, or when making a significant organizational change to the data extents.
Having said that, a binary D&L is usually the fastest approach, particularly if you can make it multi-threaded.

I have two different servers, Linux and UNIX. I want to dump data from Linux, SCP it to UNIX, and load it there. I use the UNIX platform for training purposes, so I need to find a faster dump & load process. – sajid shaikh Feb 03 '12 at 19:11
In that case, Tom's provided a way to get the data out faster. I'd also mention that you can export an NFS share from the target system, mount it on the source system, and dump the data directly to the target for loading, without having to scp it across as a separate step. – Tim Kuehn Feb 04 '12 at 13:33
Thanks, team, for your suggestions. I will set up the mount on the other system as well. – sajid shaikh Feb 07 '12 at 12:25
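Tim's NFS suggestion can be sketched roughly as below. The host name (`trainbox`), export path, and mount point are illustrative assumptions, not values from the thread; only the command-building function is real shell, so it can be inspected without any of the machines involved:

```shell
#!/bin/sh
# Sketch of the NFS approach: export a directory from the target (UNIX)
# box, mount it on the source (Linux) box, and dump straight into it so
# no separate scp step is needed. Hosts and paths are assumptions.

TARGET_HOST=${TARGET_HOST:-trainbox}
TARGET_DIR=${TARGET_DIR:-/dumps}
MOUNTPOINT=${MOUNTPOINT:-/mnt/dumps}

# Build (but do not run) the mount command for the source machine.
mount_cmd() {
    echo "mount -t nfs $TARGET_HOST:$TARGET_DIR $MOUNTPOINT"
}

# On the target, /etc/exports would contain a line such as:
#   /dumps  sourcebox(rw,sync)
# followed by `exportfs -a`. On the source, after running the command
# mount_cmd prints, the dump is pointed at the mount point:
#   proutil dbname -C dump tablename /mnt/dumps
# The load then runs on the target directly from $TARGET_DIR.
```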
Ok, dumping and loading across platforms to build a training system is probably a legitimate use-case. (If it were Linux to Linux you could just backup and restore -- and you may be able to do that from Linux to UNIX too, if the byte ordering is the same...)
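For the matching-byte-order case just mentioned, the backup/restore path can be sketched as follows. `probkup` and `prorest` are the standard Progress backup and restore utilities, but the database name and backup path here are assumptions, and the commands are factored into echo-style builders rather than executed:

```shell
#!/bin/sh
# Sketch of backup & restore between machines with the same byte order.
# DBNAME and BACKUP are illustrative assumptions.

DBNAME=${DBNAME:-sports}
BACKUP=${BACKUP:-/tmp/sports.bak}

# Online backup on the source machine (db stays up).
backup_cmd()  { echo "probkup online $DBNAME $BACKUP"; }

# Restore on the target machine (run against the training db there).
restore_cmd() { echo "prorest $DBNAME $BACKUP"; }

# On the source: run the command backup_cmd prints, then copy $BACKUP
# across (or write it to an NFS mount as Tim suggested).
# On the target: run the command restore_cmd prints.
```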
The binary format is portable across platforms and versions of Progress. You can binary dump a Progress version 8 HPUX database and load it into a Windows OpenEdge 11 db if you'd like.
To do a binary dump use:
proutil dbname -C dump tablename
That will create tablename.bd. You can then load that table with:
proutil dbname -C load tablename
Once all of the data has been loaded you need to remember to rebuild the indexes:
proutil dbname -C idxbuild all
You can run many simultaneous proutil commands. There is no need to go one table at a time. You just need to have the db up and running in multi-user mode. Take a look at this for a longer explanation: http://www.greenfieldtech.com/downloads/files/DB-20_Bascom%20D+L.ppt
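The steps above can be wrapped in a small script that dumps and loads several tables in parallel. This is a sketch only: the database name, table list, and dump directory are assumptions, and the `proutil` invocations are factored into echo-style builders so the generated commands can be inspected without a Progress installation:

```shell
#!/bin/sh
# Sketch: multi-threaded binary dump & load, then one index rebuild.
# DBNAME, DUMPDIR, and TABLES are illustrative assumptions.

DBNAME=${DBNAME:-sports}
DUMPDIR=${DUMPDIR:-/tmp/dump}
TABLES=${TABLES:-"customer order orderline"}

# Command builders for one table each, plus the final index rebuild.
dump_cmd() { echo "proutil $DBNAME -C dump $1 $DUMPDIR"; }
load_cmd() { echo "proutil $DBNAME -C load $DUMPDIR/$1.bd"; }
idx_cmd()  { echo "proutil $DBNAME -C idxbuild all"; }

# Run one builder's command per table, all as background processes.
parallel_run() {
    builder=$1
    for t in $TABLES; do
        $($builder "$t") &
    done
    wait    # block until every worker has finished
}

# Typical sequence (the db must be up in multi-user mode for the
# parallel dumps; idxbuild needs it offline):
#   parallel_run dump_cmd
#   parallel_run load_cmd
#   $(idx_cmd)
```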
It is helpful to split your database up into multiple storage areas (and they should be type 2 areas) for best results. Check out: http://dbappraise.com/ppt/sos.pptx for some ideas on that.
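Storage areas are defined in the database's structure (.st) file; an area whose blocks-per-cluster value (the number after the semicolon) is 8, 64, or 512 is a Type II area. A minimal sketch, with illustrative area names, paths, and sizes:

```
# sample structure file -- names, paths, and sizes are illustrative
b /db/sports.b1
d "Schema Area":6,32;1 /db/sports.d1
d "Data":7,64;512 /db/data/sports_7.d1 f 1024000
d "Data":7,64;512 /db/data/sports_7.d2
d "Index":8,32;8 /db/idx/sports_8.d1
```

A file like this is applied with `prostrct create dbname sports.st` for a new database, or `prostrct add` to extend an existing one.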
There are a lot of tuning options available for binary dump & load. The details depend on which version of Progress you are running. Many of them probably aren't really useful anyway, but you should look at the presentations above and the documentation, and ask questions.

Thanks so much, Tom. The presentations you provided are great. I will go through them and try to implement the same. Thanks for your quick response. – sajid shaikh Feb 07 '12 at 12:09