
My .MDF file size is increasing and I want to add multiple filegroups for Dynamics AX 2009. I want a primary filegroup for tables and clustered indexes and a secondary filegroup for nonclustered indexes.

Is this possible?

Has anyone implemented this - any recommendations?
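
To make the idea concrete, here is a minimal sketch of what I mean, assuming placeholder names throughout (database, file path, table, index and column names are all illustrative, not my real ones):

```sql
-- Minimal sketch: add a secondary filegroup for nonclustered indexes and move one
-- index onto it. Database, file, path, table, index and column names are placeholders.
ALTER DATABASE [AX2009_PROD] ADD FILEGROUP [NCI_FG];

ALTER DATABASE [AX2009_PROD] ADD FILE
(
    NAME = N'AX2009_PROD_NCI_01',
    FILENAME = N'E:\SQLData\AX2009_PROD_NCI_01.ndf',
    SIZE = 10GB,
    FILEGROWTH = 1GB
)
TO FILEGROUP [NCI_FG];

-- Rebuild an existing nonclustered index on the new filegroup; DROP_EXISTING keeps
-- the index name and definition and relocates it in one operation.
CREATE NONCLUSTERED INDEX [I_CUSTTRANS_ACCOUNTIDX]
    ON [dbo].[CUSTTRANS] ([ACCOUNTNUM], [DATAAREAID])
    WITH (DROP_EXISTING = ON, ONLINE = ON)  -- ONLINE = ON requires Enterprise edition
    ON [NCI_FG];
```

With this approach the clustered index (and therefore the table data) stays in the primary filegroup; only the nonclustered indexes move.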

My scenario is as follows:

• AX 2009 (and there are no upgrade plans whatsoever, neither for AX nor for the SQL Server underneath it)
• SQL Server 2008 R2 Enterprise, clustered, running on Windows 2008 R2 Enterprise (512 GB RAM, 80 cores)
• SAN-based storage, a combination of SSD and SATA disk pools
• Database size has reached 6 TB and it has only the PRIMARY FG
• Database is in the FULL recovery model, compatibility level 100
• FULL backup taken daily (business requirement; I would have done it weekly + daily DIFFs)

My problems:

• Storage usage is heavily unbalanced, and database file sizes are out of control and unmanageable if I ever have to relocate the database at the file level (the DB is 6 TB and has 2 files + transaction log)
• The FULL backup runs in close to 4 hrs and I want to shorten it. I am already using 5 stripes and a combination of BUFFERCOUNT and other BACKUP parameters as suggested in one of the posts here. During my tests the magic number of stripes was 5; over 5, the duration stayed pretty much the same. (Before this approach, the full backup ran for 12-13 hrs and often failed after that amount of time.)
• Database maintenance creates an I/O hotspot because all tables and indexes are on the same FG and the same drive
• The restore runs in 7 hrs 55 mins. The RTO for this database is 3 hrs, so I have to come up with a solution to shorten the restore time. (We run an automated backup-testing process every week.)
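
For reference, a minimal sketch of the striped FULL backup described above; the paths, stripe targets and parameter values are placeholders, not my production settings:

```sql
-- Striped FULL backup across 5 files, roughly as described above.
-- All paths and parameter values are placeholders.
BACKUP DATABASE [AX2009_PROD]
TO  DISK = N'F:\Backup\AX2009_PROD_1of5.bak',
    DISK = N'G:\Backup\AX2009_PROD_2of5.bak',
    DISK = N'H:\Backup\AX2009_PROD_3of5.bak',
    DISK = N'I:\Backup\AX2009_PROD_4of5.bak',
    DISK = N'J:\Backup\AX2009_PROD_5of5.bak'
WITH CHECKSUM,
     COMPRESSION,              -- available on the Enterprise edition we run
     BUFFERCOUNT = 512,        -- tuned by testing; too high can pressure memory
     MAXTRANSFERSIZE = 4194304,
     STATS = 5;
```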

My suggestions for the business were as follows:

To improve the I/O, remove the I/O hotspots and improve storage utilization:
• Group AX tables based on functional criteria (LEDGER, SALES, etc.) and assign each group to a filegroup (FG)
• Add sufficient storage for each table group and place each group and its corresponding filegroup on dedicated storage
• Relocate data out of the PRIMARY FG
• Split data from indexes (move data to dedicated FGs and all nonclustered indexes to separate FGs)
• For FGs, I will end up with something like LEDGER_DATA, LEDGER_INDEX, SALES_DATA, SALES_INDEX and so on
• Each FG will have its own set of files (NDF files)

To address the long backups and the even longer restores (the second main reason for having multiple FGs):
• Switch to FG-level backups (aka partial backups)
• In case of an incident, do a piecemeal restore instead of a full database restore

To further improve I/O efficiency and optimize storage utilization:
• Add data compression
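
A minimal sketch of what this proposal would look like in T-SQL, assuming placeholder database, filegroup, file, table and path names that follow the LEDGER_DATA / LEDGER_INDEX pattern above:

```sql
-- 1) One filegroup per functional area, each with its own NDF files on dedicated storage.
ALTER DATABASE [AX2009_PROD] ADD FILEGROUP [LEDGER_DATA];
ALTER DATABASE [AX2009_PROD] ADD FILE
    (NAME = N'LEDGER_DATA_01', FILENAME = N'L:\SQLData\LEDGER_DATA_01.ndf',
     SIZE = 100GB, FILEGROWTH = 10GB)
    TO FILEGROUP [LEDGER_DATA];

-- 2) Relocate a table out of PRIMARY and compress it in the same operation:
--    rebuilding the clustered index ON the new filegroup moves the in-row data with it.
--    Table, index and column names are placeholders.
CREATE UNIQUE CLUSTERED INDEX [I_LEDGERTRANS_RECID]
    ON [dbo].[LEDGERTRANS] ([RECID])
    WITH (DROP_EXISTING = ON, ONLINE = ON, DATA_COMPRESSION = PAGE)
    ON [LEDGER_DATA];

-- 3) Filegroup-level backups instead of one monolithic FULL backup.
BACKUP DATABASE [AX2009_PROD]
    FILEGROUP = N'LEDGER_DATA'
    TO DISK = N'F:\Backup\AX2009_PROD_LEDGER_DATA.bak'
    WITH CHECKSUM, COMPRESSION;

-- 4) Piecemeal restore: bring PRIMARY online first, then restore the remaining
--    filegroups while the already-restored part of the database is usable.
RESTORE DATABASE [AX2009_PROD]
    FILEGROUP = N'PRIMARY'
    FROM DISK = N'F:\Backup\AX2009_PROD_PRIMARY.bak'
    WITH PARTIAL, NORECOVERY;
RESTORE LOG [AX2009_PROD]
    FROM DISK = N'F:\Backup\AX2009_PROD_LOG.trn'
    WITH RECOVERY;

RESTORE DATABASE [AX2009_PROD]
    FILEGROUP = N'LEDGER_DATA'
    FROM DISK = N'F:\Backup\AX2009_PROD_LEDGER_DATA.bak'
    WITH NORECOVERY;
-- ...followed by the covering log backups, ending WITH RECOVERY, to bring
-- LEDGER_DATA consistent with the rest of the database.
```

The piecemeal restore path is what would actually address the RTO: the filegroups needed first can be brought online while the remaining ones are still being restored.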

And my questions:

Is the multiple dedicated filegroups (not counting the PRIMARY FG) and data files design supported in AX 2009? None of the posts I came across explicitly says NO, but none of them says YES either. I read the article from MS that says YES, but it refers to table partitioning and to having multiple files within the existing PRIMARY FG. I don't want to use table partitioning, at least not yet.

  • What do you hope to achieve by splitting the indexes across multiple files? The combined file size will still increase... Do you intend to move the secondary (.ndf) file to another disk? In case of a second disk I would personally just add the new file and let SQL Server populate the files (distribute the table and index data). – Sander Nov 10 '20 at 20:08
  • Use 2 different disk drives; this is also to increase performance – Daniel Rendón Nov 10 '20 at 22:14
  • Woa, 6TB on AX09?! Sounds like a very unique issue. You might be tackling the problem and not the cause/source. Have you considered archiving/truncating data or moving old data to a data warehouse? Oftentimes there are processes in AX that do not clean up after themselves, so you need to (1) know how to identify them in AX and (2) manually clean up (not always truncate). Tables like `EventCUD`, `InventSumLogTTS`, `SysDatabaseLog`, `InventSettlement`, `[Sales/Purch]Parm[Table/Line/Update]`, `SysTraceTableSQL` are off the top of my head. Some might be `AX2012` tables. That would be the FIRST route. – Alex Kwitny Nov 11 '20 at 17:38
  • One important part of the solution is to distribute the workload to multiple disk arrays to gain more performance in the database – Daniel Rendón Nov 11 '20 at 19:40
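
Following up on Alex Kwitny's comment, a minimal sketch (standard DMVs only) to see which tables actually consume the space before deciding what to archive or clean up:

```sql
-- Sketch: list the largest tables by reserved space to find archiving/cleanup
-- candidates such as the log and *Parm* tables mentioned in the comment above.
SELECT TOP (50)
       s.name AS schema_name,
       t.name AS table_name,
       SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb,
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables  AS t ON t.object_id = ps.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
GROUP BY s.name, t.name
ORDER BY reserved_mb DESC;
```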

0 Answers