
I'm looking at implementing filegroups on an existing database that's roughly 600 GB. The biggest hurdle is actually setting this up in production: I need the database to remain available, but moving some of the tables will take a considerable amount of time. In development it took nearly 5 days to move all the indexes to their new filegroups and then shrink the database.

So what is the best way to implement file groups on a large database where you need to keep the database live?

nikolifish
  • You may receive great answers if you ask this on [dba.stackexchange](http://dba.stackexchange.com/) – void Apr 08 '15 at 17:14
  • Create the new database structure on the new server, i.e. set the new server up completely, then set up transactional replication between the old and new server. The replication will import the data into the new filegroups, and your old and new servers will stay in sync. There are other ways too. – Namphibian Apr 08 '15 at 21:11
  • Are you creating the new filegroups on a separate RAID array, i.e. different spindles, or are you just creating them on the same drive? – Namphibian Apr 08 '15 at 22:51
  • It will be on the same drive – nikolifish Apr 09 '15 at 00:05
  • Putting them on the same drive? Then there are no benefits to filegroups. – Namphibian Apr 09 '15 at 06:45
  • I'm under the assumption that I'd be able to partially restore the database if I were using filegroups. It also gives me the ability to move the NDF to a new drive if I felt the need. – nikolifish Apr 09 '15 at 12:08
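To make those two benefits concrete, here is a minimal T-SQL sketch. All database, filegroup, file, and path names are hypothetical; the piecemeal restore shown requires a filegroup-aware backup strategy:

```sql
-- Add a new filegroup and place its data file on a different drive.
ALTER DATABASE MyDb ADD FILEGROUP FG_Archive;

ALTER DATABASE MyDb
    ADD FILE (
        NAME = 'MyDb_Archive1',
        FILENAME = 'E:\Data\MyDb_Archive1.ndf',  -- separate drive/array
        SIZE = 10GB,
        FILEGROWTH = 1GB
    )
    TO FILEGROUP FG_Archive;

-- Piecemeal (partial) restore: bring PRIMARY online first,
-- then restore the remaining filegroups later.
RESTORE DATABASE MyDb
    FILEGROUP = 'PRIMARY'
    FROM DISK = 'E:\Backups\MyDb.bak'
    WITH PARTIAL, NORECOVERY;
```

Note that an *online* piecemeal restore (the rest of the database stays available while other filegroups are still offline) is an Enterprise Edition feature; on other editions the restore sequence works but the database is offline until it completes.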

1 Answer


If you have Enterprise Edition, you can do your rebuilds online. There will still be a short-duration lock taken on the table at the beginning and end of the process, but the table is available for reads and writes for the majority of the duration of the rebuild. So the fact that it takes a couple of days shouldn't really be an issue.
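As a sketch of what that looks like (assuming Enterprise Edition; the table, index, and filegroup names are hypothetical), `CREATE INDEX ... WITH (DROP_EXISTING = ON, ONLINE = ON)` rebuilds an index onto a new filegroup while the table remains readable and writable:

```sql
-- Moving the clustered index moves the table's data pages
-- to the target filegroup. ALTER INDEX ... REBUILD cannot
-- change the filegroup, which is why DROP_EXISTING is used.
CREATE UNIQUE CLUSTERED INDEX IX_Orders_OrderID
    ON dbo.Orders (OrderID)
    WITH (DROP_EXISTING = ON, ONLINE = ON)
    ON FG_Archive;
```

The same pattern applies to nonclustered indexes; repeat per index, largest objects during the quietest window.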

Ben Thul
    Yes, he can rebuild online, but it would require double the space. Still, 600 GB is small. Just a note, not criticism. – Namphibian Apr 08 '15 at 21:10
  • @Namphibian: Good point. Though my assumption is that it isn't 600 GB in one object, so you'd only need free space equal to the largest object being rebuilt. – Ben Thul Apr 08 '15 at 22:11
  • Indeed. There seems to be another problem though: 5 days for 600 GB is way too much. I suspect they have created the filegroups on the same drives, which does not reduce disk contention, and that's what I suspect the OP wants. – Namphibian Apr 08 '15 at 22:51
  • I chalked it up to it being a dev system and potential blocking/locking. – Ben Thul Apr 08 '15 at 23:49