I am just starting to use tSQLt inside Redgate's SQL Test. I have to deal with quite large tables (a large number of columns) in a legacy database. What is the best practice for inserting fake data into such tables? The 'script as' insert statements are quite large, so they would make the 'arrange' part of my unit tests practically unreadable. Can I factor out such code? Also, is there a way to not only script the insert statement but also fill in some values automagically? Thanks.
-
Have you tried Redgate's SQL Data Generator? – Jayvee Jan 10 '14 at 15:28
-
Yes, it does not really help as it puts data into the 'real' db. The unit test should just fake inserting (one advantage of tSQLt), but maybe I am missing something ... – cs0815 Jan 10 '14 at 15:29
-
Just realised that tSQLt.FakeTable makes most columns nullable - hence you do not have to insert values for all columns. This may be the solution ... – cs0815 Jan 10 '14 at 15:49
-
Same question here, four years later: did you ever find a way to dynamically generate these INSERT INTO statements during the test? I understand that FakeTable makes the columns nullable; however, there might be cases where we want to test a newly added column, etc. Any thoughts? Thanks – dim_user Jun 11 '18 at 21:49
1 Answer
I would agree with your comment that you don't need to fill out all of the columns in your insert statement.
tSQLt.FakeTable removes all non-null constraints from the columns, as well as computed columns and identity columns (although these last two can be reinstated using particular parameters to FakeTable).
Therefore, you only need to populate the columns that are relevant to your code under test, which is usually only a small subset of the columns in the table(s). For example, the test below shows this approach.
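Here is a minimal sketch of what such a test might look like. The table, columns, and procedure under test (`dbo.Customer`, `CustomerId`, `IsActive`, `dbo.GetActiveCustomers`) are made-up names for illustration; only the tSQLt calls (`NewTestClass`, `FakeTable`, `AssertEqualsTable`) are the real framework API.

```sql
EXEC tSQLt.NewTestClass 'CustomerTests';
GO
CREATE PROCEDURE CustomerTests.[test GetActiveCustomers returns only active rows]
AS
BEGIN
    -- Arrange: replace the real table with a faked copy (constraints removed, columns nullable)
    EXEC tSQLt.FakeTable @TableName = 'dbo.Customer';

    -- The real table may have dozens of columns; we only fill the two the code under test reads
    INSERT INTO dbo.Customer (CustomerId, IsActive) VALUES (1, 1);
    INSERT INTO dbo.Customer (CustomerId, IsActive) VALUES (2, 0);

    -- Act: call the (hypothetical) code under test
    SELECT CustomerId
    INTO #Actual
    FROM dbo.GetActiveCustomers();

    -- Assert
    SELECT CustomerId INTO #Expected FROM #Actual WHERE 1 = 0;  -- empty table with same shape
    INSERT INTO #Expected (CustomerId) VALUES (1);
    EXEC tSQLt.AssertEqualsTable '#Expected', '#Actual';
END;
GO
```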
I wrote about this in a bit more detail in this article, which also covers a few other 'gotchas' you may want to know about.
Additionally, if you have a number of tests which all need the same table faked and the same data inserted, I'd suggest you consider using a SetUp routine. This is a stored procedure in the test class (schema) which is named SetUp, and tSQLt calls it automatically before each test in that schema. SetUp procedures won't show in Redgate's SQL Test window as yet (I've suggested that as an improvement), but they will still work. This can make the arrange code harder to see at a glance, but it does modularise that code and reduces identical, repeated code.
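As a rough sketch (reusing the hypothetical `CustomerTests` class and `dbo.Customer` table from above), a SetUp routine might look like this:

```sql
CREATE PROCEDURE CustomerTests.SetUp
AS
BEGIN
    -- Shared arrange code: every test in the CustomerTests class starts
    -- with the faked table and this baseline row already in place.
    EXEC tSQLt.FakeTable @TableName = 'dbo.Customer';
    INSERT INTO dbo.Customer (CustomerId, IsActive) VALUES (1, 1);
END;
GO
```

Individual tests then only add or modify the rows that matter to the behaviour they are checking.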

-
You may also get some value from [this Pluralsight course I recorded on how to use tSQLt / SQL Test](http://pluralsight.com/training/Courses/TableOfContents/unit-testing-t-sql-tsqlt) – DaveGreen Jan 11 '14 at 08:22
-
Excellent, DaveGreen, thanks - I actually came across this blog when I tried to answer the question myself. Thanks for pointing out the SetUp stuff - I presume you do not need to clean anything up with this framework (TearDown?) as it never really touches the underlying 'real' db. What happens if the sproc under test has side effects (i.e. performs some CUD operations)? – cs0815 Jan 11 '14 at 11:06
-
You're correct that no TearDown script is needed - each test is carried out inside a transaction, which is rolled back at the end of the test (whether it passes or not) so any data changes will be rolled back (it is possible to deliberately perform actions outside this transaction from a test, but that's a more advanced action that doesn't happen by default). This also means that the order in which tests are run is not important, as all tests will start from the same database state. – DaveGreen Jan 11 '14 at 14:40
-
I tend to use FakeTable in these scenarios. I only populate the columns that are relevant to the test I am writing. Another option, however, is to use the test data builder pattern (http://datacentricity.net/2011/11/unit-testing-databases-adapting-the-test-data-builder-pattern-for-t-sql/). – Andrew Jan 13 '14 at 08:40
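For illustration, the test data builder idea mentioned in the comment above could be sketched as a helper procedure with sensible defaults, so each test only states the values it cares about. All names and defaults here are hypothetical, not taken from the linked article:

```sql
CREATE PROCEDURE CustomerTests.InsertTestCustomer
    @CustomerId INT,
    @IsActive   BIT          = 1,              -- assumed default for a "typical" customer
    @Name       NVARCHAR(50) = N'Any Customer' -- assumed default
AS
BEGIN
    INSERT INTO dbo.Customer (CustomerId, IsActive, Name)
    VALUES (@CustomerId, @IsActive, @Name);
END;
GO

-- In a test, after faking dbo.Customer, a single inactive customer could be built with:
-- EXEC CustomerTests.InsertTestCustomer @CustomerId = 2, @IsActive = 0;
```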