
I've tried to come up with a composition scenario in which a self-type and extends behave differently, and so far I have not found one. The basic example always points out that with a self-type the class/trait does not become a subtype of the dependent type, but even in that scenario the behavior of self-type and extends seems to be identical.

trait Fooable { def X: String }
trait Bar1 { self: Fooable => // requires a Fooable to be mixed in at instantiation
  def Y = X + "-bar"
}
trait Bar2 extends Fooable { // is itself a Fooable
  def Y = X + "-bar"
}
trait Foo extends Fooable {
  def X = "foo"
}
val b1 = new Bar1 with Foo
val b2 = new Bar2 with Foo
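
Both compositions compile, and at least here they behave identically:

println(b1.Y) // prints "foo-bar"
println(b2.Y) // prints "foo-bar"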

Is there a scenario where some form of composition or functionality of composed object is different when using one vs. the other?

Update 1: Thanks for the examples of things that are not possible without self-types; I appreciate the information, but I am really looking for compositions where both self-type and extends are possible yet not interchangeable.

Update 2: I suppose the particular question I have is why the various Cake Pattern examples generally insist on using a self-type instead of extends. I have yet to find a Cake Pattern scenario that doesn't work just as well with extends.

Arne Claassen
  • possible duplicate of [What is the difference between self-types and trait subclasses?](http://stackoverflow.com/questions/1990948/what-is-the-difference-between-self-types-and-trait-subclasses) – rxg Mar 27 '15 at 10:30

5 Answers


Cyclic references can be done with self-types but not with extends:

// Legal
trait A { self: B => }
trait B { self: A => }

// Illegal
trait C extends D
trait D extends C

I use this sometimes to split up implementations across multiple files, when there are cyclic dependencies.
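
For example, here is a minimal sketch (the Parser/Lexer names are made up for illustration) of two traits that call into each other through their self-types, a mutual dependency that extends cannot express:

trait Parser { self: Lexer =>
  def parse: String = "parsed(" + nextToken + ")" // calls a Lexer member via the self-type
}
trait Lexer { self: Parser =>
  def nextToken: String = "tok"
}
// both self-types are satisfied at the point where the traits are finally mixed together
object Compiler extends Parser with Lexer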

Owen
  • I found one other thing that you can do with self-types that you cannot do with extends: self-types can be structural types – Arne Claassen Sep 02 '14 at 04:47

Also,

scala> trait A { def a: String ; def s = "A" }
defined trait A

scala> trait B { _: A => def s = "B" + a }
defined trait B

scala> trait C extends A { def a = "c" ; override def s = "C" }
defined trait C

scala> new C {}.s
res0: String = C

scala> new A with B { def a = "ab" }.s
<console>:10: error: <$anon: A with B> inherits conflicting members:
  method s in trait A of type => String  and
  method s in trait B of type => String
(Note: this can be resolved by declaring an override in <$anon: A with B>.)
              new A with B { def a = "ab" }.s
                  ^

scala> new A with B { def a = "ab" ; override def s = super[B].s }.s
res2: String = Bab

The point, if there is one, is that B.s doesn't override A.s: with the self-type the two definitions conflict and the mix-in has to resolve them explicitly, whereas C, which extends A, can simply override.

That's not as motivational as the other answer.

som-snytt

With a self-type, a trait can require that it only ever be mixed into its own type parameter:

trait Gen[T] { self: T => ... }

I don't see how you can get this constraint in, say, Java or C#. It can, however, be approximated with:

trait Gen[T] {
  def asT: T // abstract
}
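
As a concrete sketch (the Builder names here are hypothetical), the constraint lets this be used at the precise subtype:

trait Builder[T] { self: T =>
  def build: T = this // the self-type guarantees `this` is a T
}
// each concrete class must pass itself as T to satisfy the self-type
class RequestBuilder extends Builder[RequestBuilder]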
Didier Dupont

Also,

a self-type can only be satisfied by mixing in a trait; a class or object cannot be mixed in. The odd part is that the compiler lets you declare a class whose self-type is another class, and compilation only fails when you try to instantiate it. See this question:

why self-type class can declare class
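
A minimal sketch of that compile-versus-instantiate behavior (A and B are illustrative names):

class A
class B { self: A => } // compiles: a class may declare a class as its self-type
// but B cannot be instantiated: `new B with A` is illegal because A is a
// class rather than a trait, and `new B {}` does not conform to the self-type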

Xiaohe Dong

The biggest difference is in the public interface that you end up with. Let's take the example you give (slightly simplified):

trait Fooable { def foo: String = "foo" }
trait Bar1 { self: Fooable =>
  def Y = foo + "-bar"
}
trait Bar2 extends Fooable {
  def Y = foo + "-bar"
}
// If we let type inference do its thing, foo would also be in the public interface of b1, but we can choose to hide it
def b1: Bar1 = new Bar1 with Fooable
// b2 will always have the type members from Bar2 and Fooable
def b2: Bar2 = new Bar2 {}

// Doesn't compile - 'foo' definition is only visible inside the definition of Bar1
println(b1.foo)
// Compiles - 'foo' definition is visible outside the definition of Bar2
println(b2.foo)

So if you want to use the capabilities of a trait without necessarily letting your clients know that you are mixing the trait in, then you should use the self-type annotation.

A self-type annotation does not expose the public interface of the dependency; extending another type always exposes the public interface of the parent type.

rxg