You have to be super careful with a multi-tenant environment. There are so many easy ways to hork things up. As you've indicated, your DDL script has to be edited per schema, and that means changing object names inside the script. This is scary - it introduces a vector for SQL injection - but I understand it's necessary.
I hope you have one or more "safe" schemas to test with. Heck, I hope you have a whole test world to test with. But - all worries aside, here's a scaffold of a script I've used in the past to apply changes to a multi-schema environment:
create type dbo.ObjectNamesType as table ( Name sysname )
go
create procedure dbo.RunDDL
(
    @scriptTemplate nvarchar( max ),
    @objName sysname,
    @objType nvarchar( 10 ),
    @schemas dbo.ObjectNamesType readonly
)
as
begin
    set nocount on

    declare @script nvarchar( max )
    -- two bracket-quoted 128-character names plus punctuation can run to 261
    -- characters, so nvarchar( 256 ) risks silent truncation
    declare @objectName nvarchar( 300 )

    -- resolve each requested schema's object through sys.objects rather than
    -- trusting the inputs blindly
    declare c cursor local for
        select N'[' + s.name + N'].[' + o.name + N']'
        from
            sys.objects o
            inner join
            @schemas s
            on
            o.schema_id = schema_id( s.Name )
        where
            o.name = @objName
            and
            o.type = @objType
    open c
    while ( 1 = 1 )
    begin
        fetch next from c into @objectName
        if ( @@fetch_status != 0 ) break;
        -- edit a copy of the template; the template itself stays intact
        select @script = replace( @scriptTemplate, N'@objectName', @objectName )
        exec sys.sp_executesql @script
    end
    close c
    deallocate c
end
go
...and to test it...
declare @script nvarchar( max )
declare @objName sysname
declare @objType nvarchar( 10 )
declare @schemas dbo.ObjectNamesType

insert @schemas values ( N'dbo' )
select @objName = N'someTable'
select @objType = N'U'
select @script = N'select * from @objectName' --> not really ddl, eh?

exec dbo.RunDDL @script, @objName, @objType, @schemas
The one I actually used is a lot more...complicated - so I've left the juicy parts out and kept just the scaffold. A couple of notes:
The inputs are set up in such a way that the script can be run against a group of schemas. This lets you run it on your test schema first and see if it's okay - and assuming you like it, you can then run it against the remaining schemas en masse.
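In practice that two-pass run looks something like this (a minimal sketch - the tenant schema names here are made up):
declare @script nvarchar( max ) = N'select * from @objectName'
declare @objName sysname = N'someTable'
declare @objType nvarchar( 10 ) = N'U'

-- first pass: the "safe" schema only
declare @testSchemas dbo.ObjectNamesType
insert @testSchemas values ( N'tenant_test' )
exec dbo.RunDDL @script, @objName, @objType, @testSchemas

-- happy with the result? hit the rest en masse
declare @liveSchemas dbo.ObjectNamesType
insert @liveSchemas values ( N'tenant_001' ), ( N'tenant_002' ), ( N'tenant_003' )
exec dbo.RunDDL @script, @objName, @objType, @liveSchemas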
In my world, the @scriptTemplate, @objName and @objType reside in a table which I join to - and are not passed in. I wouldn't recommend running such a procedure with inputs from the outside world, as this is an invitation to catastrophe...but for illustration/testing, it serves the purpose. Also, in my world, the input table has a version ID and a sequence. For any schema at version x, we run all the scripts in sequence and, assuming success, bump that schema's version to y. Each script applies to a single object.
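A stripped-down sketch of what those tables might look like (all names here are made up for illustration):
create table dbo.SchemaScripts
(
    VersionId int not null, -- the version this batch of scripts brings a schema up to
    Sequence int not null, -- run order within the version
    ObjectName sysname not null, -- each script applies to a single object
    ObjectType nvarchar( 10 ) not null,
    ScriptTemplate nvarchar( max ) not null, -- contains the @objectName placeholder
    constraint PK_SchemaScripts primary key ( VersionId, Sequence )
)

create table dbo.SchemaVersions
(
    SchemaName sysname not null primary key,
    VersionId int not null -- bumped only after all of a version's scripts succeed
)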
The point here is that you'll want to make a routine workflow out of updates to the database - and this procedure is at the core of that workflow.
Note that it selects from sys.objects rather than just believing the inputs. This is just another small safeguard to keep your scripts from barfing over typos in names. If the number of objects we edit fails to match the number of objects specified, we log a warning for ourselves.
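That count check is just a few lines inside the procedure, before the cursor opens (a sketch - here it just prints, but you'd write to whatever log you keep):
declare @expected int, @found int
select @expected = count( * ) from @schemas
select @found = count( * )
from
    sys.objects o
    inner join
    @schemas s
    on
    o.schema_id = schema_id( s.Name )
where
    o.name = @objName
    and
    o.type = @objType
if ( @found != @expected )
    print N'warning: expected ' + cast( @expected as nvarchar( 12 ) )
        + N' objects but found ' + cast( @found as nvarchar( 12 ) )
        + N' - check for typos in schema/object names'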
A procedure of this type should also try/catch the actual execution of the script and should log everything it tries along the way. It should roll things back on error. Make sure you have plenty of transaction log space as even tiny DDL has a way of causing immense amounts of change.
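A minimal version of that guard, replacing the bare sp_executesql call inside the loop (the dbo.DDLLog table is a made-up stand-in for whatever logging you actually use):
-- hypothetical log table, for illustration only
create table dbo.DDLLog
(
    RanAt datetime2 not null,
    ObjectName nvarchar( 300 ) not null,
    Script nvarchar( max ) not null,
    Error nvarchar( max ) null
)
go
-- inside the cursor loop, in place of the bare exec:
begin try
    begin transaction
    insert dbo.DDLLog ( RanAt, ObjectName, Script )
    values ( sysutcdatetime(), @objectName, @script )
    exec sys.sp_executesql @script
    commit transaction
end try
begin catch
    if ( @@trancount > 0 ) rollback transaction
    insert dbo.DDLLog ( RanAt, ObjectName, Script, Error )
    values ( sysutcdatetime(), @objectName, @script, error_message() );
    throw -- rethrow so the caller knows this object failed (SQL Server 2012+)
end catch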
It works its way through the specified objects, editing the @scriptTemplate into a @script variable and then running the @script variable with sys.sp_executesql. This way, it never changes the source variable, so the replace target remains intact.
It is unfortunate that you can't use variables for T-SQL object names, or you could reduce the surface for injection attacks - thus the recommendation for an inputs table rather than arguments. It also means SQL Server can't parameterize/reuse the statement/execution plan - but then again, this is not something that gets run billions of times, right?
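One mitigation that does help here: build the two-part name with quotename() instead of raw concatenation, so a stray bracket inside a name can't break out of the quoting. That's a one-line tweak to the cursor's select in the procedure above:
-- quotename() doubles any embedded ] so the name can't escape its brackets
select quotename( s.name ) + N'.' + quotename( o.name )
from
    sys.objects o
    inner join
    @schemas s
    on
    o.schema_id = schema_id( s.Name )
where
    o.name = @objName
    and
    o.type = @objType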