
When we define our interfaces in C# 4.0, we are allowed to mark each generic parameter as in or out. If we try to mark a generic parameter as out when that would lead to a problem, the compiler raises an error and doesn't allow it.

Question:

If the compiler has ways of inferring what are valid uses for both covariance (out) and contravariance (in), why do we have to mark interfaces as such? Wouldn't it be enough to just let us define the interfaces as we always did, and, when we tried to use them in our client code, raise an error if we tried to use them in an unsafe way?

Example:

interface MyInterface<out T> {
    T abracadabra();
}
//works OK

interface MyInterface2<in T> {
    T abracadabra();
}
//compiler raises an error.
//This makes me think that the compiler is capable
//of understanding what situations might generate 
//run-time problems and then prohibits them.
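To make it concrete what kind of run-time problem I think the compiler is guarding against, here is a small sketch (toy types of my own, just for illustration) of the covariant version being used. The contravariant version above gets rejected precisely because the same sort of assignment, in the opposite direction, could force abracadabra() to return something of the wrong type:

interface MyInterface<out T> {        // same interface as above
    T abracadabra();
}

class StringMaker : MyInterface<string> {
    public string abracadabra() { return "presto"; }
}

class Program {
    static void Main() {
        MyInterface<string> specific = new StringMaker();
        // Covariance (out) lets us widen the type argument:
        MyInterface<object> general = specific;
        object o = general.abracadabra(); // a string is an object, so this is safe
        System.Console.WriteLine(o);
        // With "in T", the allowed assignment would go the other way
        // (from MyInterface2<object> to MyInterface2<string>), and then
        // abracadabra() would have to return a string it never promised.
    }
}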

Also,

Isn't this what Java does in the same situation? From what I recall, you just do something like

IMyInterface<? extends whatever> myInterface; //covariance
IMyInterface<? super whatever> myInterface2; //contravariance

Or am I mixing things up?

Thanks

devoured elysium

2 Answers


If the compiler has ways of inferring what are valid uses for both covariance (out) and contravariance (in), why do we have to mark interfaces as such?

I'm not quite sure I understand the question. I think you're asking two things.

1) Can the compiler deduce the variance annotations?

and

2) Why does C# not support call-site variance like Java does?

The answer to the first is:

interface IRezrov<V, W> 
{
    IRezrov<V, W> Rezrov(IRezrov<W, V> x);
}

I invite you to attempt to deduce what all the possible legal variance annotations on V and W are. You might get a surprise.

If you cannot figure out a unique best variance annotation for this method, why do you think the compiler can?
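In case you want to check your work, here is a sketch of the surprise, worked out by hand against the C# 4.0 variance rules (so take it as an illustration rather than compiler output): two mutually exclusive sets of annotations are both perfectly legal.

// Both of these declarations compile, and they are exact opposites.
interface IRezrovA<out V, in W>
{
    IRezrovA<V, W> Rezrov(IRezrovA<W, V> x);
}

interface IRezrovB<in V, out W>
{
    IRezrovB<V, W> Rezrov(IRezrovB<W, V> x);
}

There is no unique best choice, so someone, namely the author of the interface, has to make the call.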

More reasons here:

http://blogs.msdn.com/ericlippert/archive/2007/10/29/covariance-and-contravariance-in-c-part-seven-why-do-we-need-a-syntax-at-all.aspx

More generally: your question indicates fallacious reasoning. The ability to cheaply check whether a solution is correct does not logically imply that there is a cheap way of finding a correct solution. For example, a computer can easily verify whether p * q == r is true or false for two thousand-digit prime numbers p and q. That does not imply that it is easy to take r and find p and q such that the equality is satisfied. The compiler can easily check whether a variance annotation is correct or incorrect; that does not mean that it can find a correct variance annotation amongst the potentially billions of possible annotations.
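To make the analogy concrete, a throwaway sketch (the class and method names here are made up): checking a proposed factorization is a single multiplication, while recovering the factors from r alone is a search over an enormous space.

using System.Numerics;

static class CheckVersusFind
{
    // Verification: one multiplication and one comparison.
    static bool Verify(BigInteger p, BigInteger q, BigInteger r)
    {
        return p * q == r;
    }

    // Search: naive trial division; for thousand-digit numbers this
    // (and every known algorithm) is hopelessly expensive.
    static BigInteger SmallestFactor(BigInteger r)
    {
        for (BigInteger candidate = 2; candidate * candidate <= r; candidate++)
        {
            if (r % candidate == 0)
            {
                return candidate;
            }
        }
        return r; // r itself is prime
    }
}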

The answer to the second is: C# isn't Java.

Eric Lippert
  • I think his second question was more like "How are C#'s variance annotations different from Java's wildcard types?" – Gabe Apr 29 '10 at 00:29
  • @Gabe: C# does *declaration-site* variance. Java does *call-site* variance. Call-site variance is an interesting idea to be sure, but it feels strange to me to have a type be variant based on how it is used at a particular site, as opposed to how it is defined to behave. – Eric Lippert Apr 29 '10 at 00:37
  • Yes, I now get what the problem is with the Java usage. It has the benefit of not having to state the interface's parameters as in or out, but then some client might use it in some way right now that might not be supported later if I decide to update my interface. – devoured elysium Apr 29 '10 at 00:42
  • Actually, in your Rezrov example, there would be just 4 situations: V and W can each be in or out (or neither, but that doesn't count). Or am I wrong? Anyway, you wouldn't need to check all the situations, just the situations the client code tries to use. That is, you only check something when you try to compile code that uses the interface in a certain way. – devoured elysium Apr 29 '10 at 00:44
  • IRezrov<in V, out W> is good. IRezrov<out V, in W> is also good. Which one of those is the correct choice? Now suppose one caller uses V as covariant and another uses it as contravariant. Which one gets the error? Or do both work? – Eric Lippert Apr 29 '10 at 01:38
  • My (initial) idea was not to try to decide whether V and W are in or out when compiling the interface code, but for the compiler to check, for any client code using the interface, whether what it asked for was safe. – devoured elysium Apr 29 '10 at 02:42
  • It's interesting that Java and C# have such different implementations of variance, given the influence that Mads Torgersen had on both of them. – Gabe Apr 29 '10 at 03:32
  • @Gabe: Actually, generic variance was designed and implemented when generics were added to the CLR in v2; we just didn't surface the feature in the language. Mads came along quite a bit later. Of course we certainly made good use of his expertise in making sure that the feature was well-specified when we did add it to C# in v4. – Eric Lippert Apr 29 '10 at 03:47

OK, here is the answer to what I asked (from Eric's answer): http://blogs.msdn.com/ericlippert/archive/2007/10/29/covariance-and-contravariance-in-c-part-seven-why-do-we-need-a-syntax-at-all.aspx

First, it seems to me that variance ought to be something that you deliberately design into your interface or delegate. Making it just start happening with no control by the user works against that goal, and also can introduce breaking changes. (More on those in a later post!)

Doing so automagically also means that as the development process goes on and methods are added to interfaces, the variance of the interface may change unexpectedly. This could introduce unexpected and far-reaching changes elsewhere in the program.

I decided to spell it out explicitly here because, although the link in his answer does contain the answer to my question, his answer itself does not.

devoured elysium