I'm not convinced you need to prematurely optimize for a performance problem you don't know is going to exist. If your table has 100 rows and it's referenced often, it will almost certainly be in memory 100% of the time and access will be a non-issue.
One way we have made code forward-compatible is to add a parameter to the procedure with a default value; the app can then "upgrade" whenever it is ready to do so. The version the app passes could even be driven by a config file setting, but presumably the app would have to be re-compiled to take advantage of the new functionality anyway.
As a quick example:
CREATE PROCEDURE dbo.doStuff
    @version DECIMAL(10,2) = 1.0
AS
BEGIN
    SET NOCOUNT ON;

    IF @version >= 1.1
    BEGIN
        PRINT 'This only executes if the app tells us it is 1.1 or newer.';
    END

    IF @version >= 2.5
    BEGIN
        PRINT 'This only executes if the app tells us it is 2.5 or newer.';
    END
END
GO
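To opt in, the app simply passes the version it was built against; a quick illustration (the version numbers are just the examples from the procedure above):

-- An app that has not been updated calls with no argument
-- and gets only the baseline (1.0) behavior:
EXEC dbo.doStuff;

-- An app built against the 2.5 release opts in to the newer code paths:
EXEC dbo.doStuff @version = 2.5;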
When all of the apps are up to date, you can raise the base version on the parameter (see the sketch below). In the meantime, each app can be updated at its own pace, and the schema can progress at a different rate. If you can correlate each feature to a sequential point release, this shouldn't be too difficult to manage. But again, I'll insist that a 100-row table is not going to drag your performance down as much as you seem to think it will...
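Raising the base version later is just an ALTER PROCEDURE that bumps the parameter's default; a minimal sketch, reusing the example procedure and version numbers from above:

ALTER PROCEDURE dbo.doStuff
    @version DECIMAL(10,2) = 2.5  -- new baseline: callers that pass nothing now get the 2.5 behavior
AS
BEGIN
    SET NOCOUNT ON;

    IF @version >= 1.1
    BEGIN
        PRINT 'This only executes if the app tells us it is 1.1 or newer.';
    END

    IF @version >= 2.5
    BEGIN
        PRINT 'This only executes if the app tells us it is 2.5 or newer.';
    END
END
GO

Callers that still pass an explicit version are unaffected; only the default for callers that pass nothing moves forward.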