
I recently came to know that in SQL Server, when you delete or modify a column, the space is still held at the back end, so you need to reindex and shrink the database. I did this and my database size went down

from 2.82 GB to 1.62 GB,

which is good. But now I am confused, and many questions on this subject come to mind. Please help me with these:

  1. Is it necessary to recreate (refresh) indexes at some particular interval?

  2. Is it necessary to shrink the database periodically so that performance stays up to date?

  3. If yes to the above, how often should I shrink the database?

  4. I have no idea what to do about the disk space problem. I have 77,000 records that take 2.82 GB of data space, which is not acceptable. I have two tables, and only one of them has an nvarchar(max) column, so the database should need far less space. Can anyone help me with this? Thanks in advance.

Just code
  • Column deletion: if you want to reclaim the space after dropping a variable-length column (ALTER TABLE ... DROP COLUMN) you could use DBCC CLEANTABLE: "Reclaims space from dropped variable-length columns in tables or indexed views." ([ref](http://technet.microsoft.com/en-us/library/ms174418.aspx)) – Bogdan Sahlean Nov 27 '13 at 04:39
  • OK Vijay Kumar, but what about shrinking and indexes? – Just code Nov 27 '13 at 04:47
  • Shrink data files -> fragmentation -> bad for performance – Bogdan Sahlean Nov 27 '13 at 04:54
  • @BogdanSahlean can you explain the steps to me? I am a newbie at this. – Just code Nov 27 '13 at 05:17
  • When you shrink a database, there is disk activity (I/O) as the data file is shrunk. Then as your database grows, there is more I/O as your data file has to be grown again. I/O is bad: you want to avoid it where possible. So you should avoid shrinking your database file, as it will most likely just need to grow again, and while it is growing your system has to wait for it to finish. – Nick.Mc Nov 27 '13 at 05:17
  • With regards to the space used: why don't you use `sp_spaceused` to isolate which table is the issue (see the sketch after these comments), then take a look at the data in that table? If you are going to use NVARCHAR(MAX), you are going to have space issues. Do you really need that datatype? Are you really storing very long (>8000 chars) data in multiple languages? 3 GB really isn't a very big database. It comes down to the value of the data you are storing vs. the value of a bigger disk drive. – Nick.Mc Nov 27 '13 at 05:23
  • @ElectricLlama those are fair points, but I have more questions: `how can you measure user input? It can be up to 8000 because I am using CKEditor` and `3 GB is a big database when I have only 77,000 products`. – Just code Nov 27 '13 at 05:52
  • Your database is as big as the data you have in it. I don't understand your other comment; suffice to say, try changing your datatype to VARCHAR(8000) and see if it saves you any space without truncating data. Did you establish which table is the issue (or whether it is both)? If you want to save space, only use the correct datatypes. That might actually mean saving your CKEditor content to an external file and accessing it through FILESTREAM. You'll save a lot of space that way. – Nick.Mc Nov 27 '13 at 06:32
  • Ya, I totally agree about saving to an external file, but I would like a little more help. We have been looking for external file storage, maybe Notepad :( but it hangs, as I have images in my output. So what kind of external file should it be? – Just code Nov 27 '13 at 06:37
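
Following up on the `sp_spaceused` suggestion in the comments, a minimal sketch for isolating which table uses the space (`dbo.Products` is a placeholder table name):

```sql
-- Overall database size, including unallocated (free) space
EXEC sp_spaceused;

-- Reserved, data, index and unused space for a single table,
-- to isolate which of the two tables is taking the room
-- (dbo.Products is a placeholder name)
EXEC sp_spaceused N'dbo.Products';
```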

1 Answer


I am going to simplify things a little for you, so you might want to read up on the topics I cover in this answer.

There are two concepts you must understand: allocated space vs. free space. A database might be 2 GB in size but only using 1 GB, so it has 2 GB allocated with 1 GB of free space. When you shrink a database it removes the free space, bringing it down to about zero. But don't assume a smaller file is faster. As your database grows it has to allocate space again, and when a file is shrunk and then regrown every so often it cannot allocate that space contiguously. This creates fragmentation of the files, which slows you down even more.
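
To see the allocated vs. used split for yourself, one way (a sketch, assuming you are connected to the database in question) is to query `sys.database_files`; `size` and the `SpaceUsed` property are reported in 8 KB pages:

```sql
-- Allocated vs. actually used space per database file,
-- converted from 8 KB pages to MB
SELECT name,
       size * 8 / 1024                                     AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024          AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024 AS free_mb
FROM sys.database_files;
```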

With data files (.mdf) this is not so bad, but with the transaction log, shrinking can lead to virtual log file (VLF) fragmentation issues that slow you down. So, in a nutshell, there is very little reason to shrink your database on a schedule. Go read about virtual log files in SQL Server; there are a lot of articles about them and about why shrinking log files is bad. Use those as a starting point.
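
To check how fragmented your log already is, one option on SQL Server versions of that era is the undocumented but widely used `DBCC LOGINFO` (newer versions expose `sys.dm_db_log_info` instead):

```sql
-- Returns one row per virtual log file (VLF) in the current
-- database's transaction log; a very high row count (hundreds or
-- thousands) suggests VLF fragmentation, often caused by repeated
-- shrink-and-regrow cycles on the log file
DBCC LOGINFO;
```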

Secondly, indexes get fragmented over time. This mainly degrades the performance of SELECT queries, but it affects other queries as well. Thus you need to perform some index maintenance on the database. See this answer on how to defragment your indexes.
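
As a sketch of what that maintenance looks like (the thresholds are a common rule of thumb, and `dbo.MyTable`/`IX_MyIndex` are placeholder names):

```sql
-- Find fragmented indexes in the current database
SELECT OBJECT_NAME(ips.object_id)       AS table_name,
       i.name                           AS index_name,
       ips.avg_fragmentation_in_percent AS frag_pct
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5
  AND i.name IS NOT NULL               -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Rule of thumb: REORGANIZE between roughly 5% and 30% fragmentation,
-- REBUILD above 30%
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE;
-- ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;
```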

Update:

Well, the right time to rebuild indexes is not clear cut. An index rebuild locks the index for the duration; essentially the index is offline while the rebuild runs. In your case it would be fast, as 77,000 rows is nothing for SQL Server, but rebuilding indexes still consumes server resources. If you have Enterprise edition you can do online index rebuilds, which do NOT lock the index but consume more space.
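
For example (placeholder names again; `ONLINE = ON` requires Enterprise edition):

```sql
-- Rebuild while keeping the index available to queries
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD WITH (ONLINE = ON);
```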

So what you need to do is find a maintenance window. For example, if your system is used from 8:00 till 17:00, you can schedule maintenance rebuilds after hours with SQL Server Agent. The script in the link above can be automated to run.
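
A hypothetical sketch of such a schedule in T-SQL (every job, schedule and procedure name here is made up; the job step would call your own rebuild script):

```sql
USE msdb;

-- Create a job with a single T-SQL step that runs the rebuild script
EXEC dbo.sp_add_job      @job_name  = N'NightlyIndexMaintenance';
EXEC dbo.sp_add_jobstep  @job_name  = N'NightlyIndexMaintenance',
                         @step_name = N'Rebuild fragmented indexes',
                         @subsystem = N'TSQL',
                         @command   = N'EXEC dbo.usp_RebuildFragmentedIndexes;',
                         @database_name = N'YourDatabase';

-- Run it every day at 02:00, outside the 8:00-17:00 usage window
EXEC dbo.sp_add_schedule @schedule_name = N'Nightly2AM',
                         @freq_type = 4,        -- daily
                         @freq_interval = 1,
                         @active_start_time = 020000;
EXEC dbo.sp_attach_schedule @job_name = N'NightlyIndexMaintenance',
                            @schedule_name = N'Nightly2AM';
EXEC dbo.sp_add_jobserver   @job_name = N'NightlyIndexMaintenance';
```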

Your database is not big. I have seen SQL Server handle tables of 750 GB without strain when the IO is split over several disks. The slowest part of any database server is not the CPU or the RAM but the IO path to the disks, though that is a huge topic in its own right. Back to your point: you are storing data in NVARCHAR(MAX) fields, which I assume is large text. After the shrink your database is 1.62 GB, which works out to about 1.62 GB / 77,000 rows, or roughly 22 KB per row. That seems reasonable. Export the table to a text file and check the size; you will be surprised, as it will probably be larger than 1.62 GB.
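
If you want to verify that estimate, a rough check of how many bytes the big text column actually stores per row (`dbo.Products` and `Description` are placeholder names):

```sql
-- Average number of bytes stored in the NVARCHAR(MAX) column;
-- DATALENGTH counts bytes, i.e. two per character for NVARCHAR
SELECT COUNT(*)                      AS row_count,
       AVG(DATALENGTH(Description)) AS avg_text_bytes
FROM dbo.Products;
```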

Feel free to ask more detail if required.

Namphibian