Will an update query that uses joins take more time if the joined fields are declared as nvarchar(255), compared to declaring them as nvarchar(50)? If the answer is yes, why does this happen? The data inside the field is at most 5 characters long, so SQL Server should be allocating space for only those 5 characters.
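For context, here is a minimal sketch of the kind of statement being asked about (all table and column names are hypothetical, assuming SQL Server):

```sql
-- Hypothetical scenario: MainTable and SubTable each hold ~20 million rows,
-- and the join keys are declared nvarchar(255) even though the stored
-- values are at most 5 characters long.
UPDATE m
SET    m.TargetField = s.SourceField
FROM   MainTable m
JOIN   SubTable  s
  ON   m.Key1 = s.Key1
 AND   m.Key2 = s.Key2
 AND   m.Key3 = s.Key3
 AND   m.Key4 = s.Key4
 AND   m.Key5 = s.Key5;
```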
-
Is the field indexed? If so, what type of index? – David Colwell Jul 25 '16 at 04:39
-
No, the field is not indexed. I am from a non-programming background, and I wanted to understand the mechanism of joins and data types. – mantd Jul 25 '16 at 04:41
-
I'll expand the answer then :) – David Colwell Jul 25 '16 at 04:42
-
If you are using nvarchar(5), it would be 2*5 = 10 bytes of storage used by SQL Server. If it's a simple varchar(5), then it would be 5 bytes. Also, 255 vs. 50 needs to be evaluated in the context of how many rows you have and whether there is string manipulation involved before updating records. – objectNotFound Jul 25 '16 at 04:55
-
The problem was that we have two tables (a main table and a subsidiary table) with 20 million rows each, and we needed to update a field in the main table using the subsidiary table by linking 5 fields, each nvarchar(255). The question was whether the update would be faster if the field sizes were, say, 20. Based on the answer suggested to me, I guess there must be no performance variation. – mantd Jul 25 '16 at 05:08
-
A caveat to that: SQL Server sets aside more space in the index and storage files for larger fields. Something to consider. – David Colwell Jul 25 '16 at 05:35
-
Please do correct me if I'm wrong; my understanding now is that if the field were nvarchar(20) and had 5 characters in it, some additional space (5+X) would be set aside, but if the field were nvarchar(255) with the same 5 characters in it, more space (5+Y) would be assigned, with Y>X, leading to performance issues. – mantd Jul 25 '16 at 06:20
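To illustrate the storage point raised in the comments above, DATALENGTH returns the number of bytes actually stored for a value (a quick check, assuming SQL Server):

```sql
-- nvarchar stores 2 bytes per character of actual data, regardless of the
-- declared maximum, so the same 5-character value occupies 10 bytes in both.
DECLARE @short nvarchar(50)  = N'hello';
DECLARE @long  nvarchar(255) = N'hello';

SELECT DATALENGTH(@short) AS short_bytes,  -- 10
       DATALENGTH(@long)  AS long_bytes;   -- 10
```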
1 Answer
TL;DR: The simple answer is that the performance should be comparable.
That being said, some databases (such as SQLite) do not handle this gracefully, as they use tree-based text indexes with larger memory requirements for deeper trees.
This question has similar answers: nvarchar column size impact on performance
Longer answer
There are two basic types of search.
- Index Search
- Table Scan
Index Search
If there is an index on the table being searched, then the database will search the index first. This is essentially like using a street directory: you go to the country, then the suburb, then alphabetically to the letter the street starts with, and then scan those entries to find the street. This saves you reading every street in the directory. As I am saying this, I realise that, depending on your age, you may never have seen a street directory...
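If the join columns from the question were indexed, the engine could seek instead of scanning. A sketch of what that might look like (hypothetical names, assuming SQL Server):

```sql
-- Hypothetical: index the subsidiary table's join keys so the update can
-- seek into the index instead of scanning 20 million rows.
-- Note: five nvarchar(255) keys can exceed the index key size limit
-- (900 bytes historically, 1700 bytes for nonclustered indexes in
-- SQL Server 2016+), whereas nvarchar(50) keys would fit comfortably.
CREATE NONCLUSTERED INDEX IX_SubTable_JoinKeys
    ON SubTable (Key1, Key2, Key3, Key4, Key5);
```

This is one place where the declared size genuinely matters: it affects what the engine will allow and set aside for index keys, even when the stored data is short.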
Table Scan
Table scanning is what happens if there is no index. The database engine will go through each row, one after another (usually in order of insertion) and compare each row to the expected result. When it finds the correct one, it will return it.
The way it does the comparison is left to right, letter by letter, so with the same data it will take the same time. If the data in the nvarchar(255) field had the same row count but not the same length (i.e. there are the same number of rows, but the content is longer), then the search would take longer, as it would need to scan more letters.
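Rather than reasoning purely from declared sizes, you can measure the update on your own data (a sketch assuming SQL Server, reusing the hypothetical tables from the question):

```sql
-- Compare elapsed time and reads for the same update with the key columns
-- declared nvarchar(255) vs nvarchar(50); with identical data in both
-- versions, the timings should be comparable.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

UPDATE m
SET    m.TargetField = s.SourceField
FROM   MainTable m
JOIN   SubTable  s ON m.Key1 = s.Key1 AND m.Key2 = s.Key2;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```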
