
My Dapper code with its select query is given below:

const string Sql = @"SELECT  [id]
                              ,[groupName]
                              ,[reqCap]
                              ,[impCap]
                              ,[player]
                              ,[resumeDate]
                              ,[whitelist]
                              ,[blacklist]
                              ,[Macros]
                              FROM [VideoServer].[dbo].[TagGroup]";
return await dc.Connection.QueryAsync<TagGroup>(Sql);

My table design is given below:

[id] [int] IDENTITY(1,1) NOT NULL,
[groupName] [varchar](500) NOT NULL,
[reqCap] [int] NULL CONSTRAINT [DF_TagGroup_reqCap]  DEFAULT ((0)),
[impCap] [int] NULL CONSTRAINT [DF_TagGroup_impCap]  DEFAULT ((0)),
[player] [varchar](500) NULL,
[resumeDate] [date] NULL,
[whitelist] [nvarchar](max) NULL,
[blacklist] [nvarchar](max) NULL,
[Macros] [nvarchar](max) NULL

When I run this select query in SQL Server Management Studio it returns almost instantly (SSMS reports 0 milliseconds), but the same query through Dapper (the code above) takes far too long (around 15 seconds).

Any ideas? Is this because of nvarchar(max)?

If I clear the data in the nvarchar(max) fields, it returns data very fast.
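To isolate the cost, the same Dapper call can be run without the nvarchar(max) columns; if this variant comes back fast, the large columns are the bottleneck. A minimal diagnostic sketch, reusing the same TagGroup mapping (Dapper simply leaves unselected properties at their defaults):

// Diagnostic sketch: the same query minus the three nvarchar(max)
// columns. whitelist/blacklist/Macros stay at their default values.
const string SlimSql = @"SELECT [id]
                              ,[groupName]
                              ,[reqCap]
                              ,[impCap]
                              ,[player]
                              ,[resumeDate]
                        FROM [VideoServer].[dbo].[TagGroup]";
return await dc.Connection.QueryAsync<TagGroup>(SlimSql);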

Manu Mohan
  • How much data is in the `nvarchar(max)` fields? Indexing isn't performed on `nvarchar(max)` fields, so that could be one reason. Also, max fields are stored differently from non-max fields. – user1666620 Apr 20 '16 at 13:50
  • 200 KB or more in each nvarchar(max) column – Manu Mohan Apr 20 '16 at 13:52
  • There is no WHERE clause, so an index can't help here – Piotr Lasota Apr 20 '16 at 13:54
  • Ah, so you are trying to pull 600+ KB out of the database for every record? 20 rows makes that almost 6 MB per query **at a minimum**. When you run it in SQL Server Management Studio, it doesn't actually return the full column; it returns only the first X characters. – user1666620 Apr 20 '16 at 13:55
  • @user1666620 I agree, though 6 MB should not take 15 seconds to load. If it were 6 MB of tiny but numerous rows, then possibly. But 20 rows? Seems odd. – Rob Apr 20 '16 at 13:56
  • @user1666620 Then is it not a good idea to store this in SQL Server? Should I use MongoDB or something else? – Manu Mohan Apr 20 '16 at 13:57
  • To add on to what user1666620 said, the maximum SSMS can display is 65,535 characters (adjustable in `Options -> Query Results -> Results to Grid`); that is only 65k. – Scott Chamberlain Apr 20 '16 at 13:57
  • @Rob I think the key point is "200KB and more". 600 KB is the minimum being returned per row. – user1666620 Apr 20 '16 at 13:57
  • @ManuMohanT It doesn't matter which database you use; you will have the same problem. Use the filesystem to store files, and the database to store references to those files. – user1666620 Apr 20 '16 at 13:59
  • In my case, storing this in the file system is not a good idea. I need this string as-is. – Manu Mohan Apr 20 '16 at 14:02
  • @ManuMohanT What makes you think you can't store a string in a file...? – Rob Apr 20 '16 at 14:04
  • @ManuMohanT files are the best way to store massive amounts of data like this. – user1666620 Apr 20 '16 at 14:05
  • I have to read all these files from the file system, deserialize them to objects, and process them; it is a long process. I query the database only once per minute and cache the data in memory to improve performance. – Manu Mohan Apr 20 '16 at 14:07
  • @ManuMohanT I hope you have lots of free memory then in that case... – user1666620 Apr 20 '16 at 14:09
  • I have 16 GB of RAM. So, no direct solution in SQL Server!!! :( – Manu Mohan Apr 20 '16 at 14:18
  • It shouldn't happen, because my application and SQL Server are in the same region, and the data is not that big (I feel). – Manu Mohan Apr 20 '16 at 14:23

2 Answers


You are trying to pull 600+ KB out of the database for every record. 20 rows makes that almost 6 MB per query at a minimum.

The reason it runs quickly in SQL Server Management Studio is that SSMS doesn't actually display the full column; it shows only the first X characters, so not all 6+ MB is being rendered. When you run the query through code (Dapper in this case), all 6+ MB is returned.

If you are storing files in the database, you need to stop doing that: store them in the filesystem, and use the database to store the locations and metadata of the files.
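A minimal sketch of that pattern, assuming a new [payloadPath] varchar column on TagGroup and a baseDir for the files (both are illustrative names, not part of the original schema):

// Write the large JSON payload to disk and keep only its relative
// path in the row. [payloadPath] and baseDir are assumed names.
var fileName = tagGroup.id + ".json";
File.WriteAllText(Path.Combine(baseDir, fileName), tagGroup.Macros);
await dc.Connection.ExecuteAsync(
    "UPDATE [VideoServer].[dbo].[TagGroup] SET [payloadPath] = @path WHERE [id] = @id",
    new { path = fileName, id = tagGroup.id });

// Reading it back: fetch the path, then load the file contents.
var path = await dc.Connection.ExecuteScalarAsync<string>(
    "SELECT [payloadPath] FROM [VideoServer].[dbo].[TagGroup] WHERE [id] = @id",
    new { id });
var macros = File.ReadAllText(Path.Combine(baseDir, path));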

user1666620
  • It is a JSON string; I'm converting a CSV to a JSON array. I need it this way, and I don't want to store it as a file – Manu Mohan Apr 20 '16 at 14:01
  • @ManuMohanT Then you need to figure out how to reduce the amount of data you are pulling at once. Pull only the minimum rows/columns needed for each query. If a user wants to see the CSV, then and only then pull that specific column. But you need to figure out a better way to store the data than as a massive JSON string in the database. – user1666620 Apr 20 '16 at 14:03
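A sketch of that on-demand approach from the comment above: list rows without the heavy columns, then pull a single row's JSON only when it is actually requested (TagGroupSummary is a hypothetical slim DTO without the nvarchar(max) properties):

// List view: skip the nvarchar(max) columns entirely.
var summaries = await dc.Connection.QueryAsync<TagGroupSummary>(
    @"SELECT [id], [groupName], [reqCap], [impCap], [player], [resumeDate]
      FROM [VideoServer].[dbo].[TagGroup]");

// Detail view: fetch one heavy column for one row, only when needed.
var whitelist = await dc.Connection.ExecuteScalarAsync<string>(
    "SELECT [whitelist] FROM [VideoServer].[dbo].[TagGroup] WHERE [id] = @id",
    new { id });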

I'm not against storing JSON/XML in the database, but it can mean returning very big lumps of data, which takes time; more time than in SSMS, which typically doesn't return the full contents to you.

BUT: when you're returning this much data, it's important to filter. I doubt your application really needs all the fields or all the records for what it's trying to do; if you filter down to what you actually need in your query, you should get a faster result.
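For example, a parameterized Dapper query can trim both the column list and the row set at once; the resumeDate cutoff below is an assumed criterion, so substitute whatever filter the application actually needs:

// Sketch: return only the columns and rows the caller needs.
// The resumeDate filter is illustrative, not from the question.
const string FilteredSql = @"SELECT [id], [groupName], [reqCap], [impCap]
                             FROM [VideoServer].[dbo].[TagGroup]
                             WHERE [resumeDate] IS NULL OR [resumeDate] <= @today";
var groups = await dc.Connection.QueryAsync<TagGroup>(
    FilteredSql, new { today = DateTime.Today });

Dapper maps only the columns present in the result set, so the remaining TagGroup properties are simply left at their default values.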

eftpotrm