
I grok that for capturing lambdas, there needs to be an object allocated (be it an Object[] or some abc$Lambda$xyz type). Is it possible to customize this process in any way? Let's say I have this code:

private void test() {
    int x = 5;
    Supplier<Integer> supplier = () -> x;
    foo(supplier); // potentially passes the supplier to another thread etc.
}

and I don't want to allocate the object capturing x, but instead just get it from a pool and fill in the value; I also know that at some point I can return the object to the pool.

I could write

Supplier<Integer> supplier = pool.get(x, v -> v);

and I could have specialized versions for different argument types (since using Object... would itself cause an allocation, though there's a chance that it would be eliminated by escape analysis), but that would render the code quite unreadable. Therefore I am looking for a more aspect-like approach.

Is such thing possible?


EDIT: to make the pool's functionality more obvious, the get could be implemented as

class IntHolderSupplier implements Supplier<Integer> {
    int value;
    IntFunction<Integer> func;
    @Override public Integer get() {
        return func.apply(value);
    }        
}

class Pool {
    Supplier<Integer> get(int arg, IntFunction<Integer> func) {
        IntHolderSupplier holder = ...;
        holder.value = arg;
        holder.func = func;
        return holder;
    }
}

and I would need such holders, with specific signatures, for all possible types of lambdas I want to use.

Maybe I have complicated the example a bit by providing the function, but I wanted to capture the fact that there may be additional computation applied to the captured argument at the time of the Supplier.get() invocation.

And please ignore the fact that the int is boxed, which can itself produce an allocation.
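
For illustration, the elided pool mechanics could be completed roughly like this (a minimal sketch only; the free list and the explicit release method are assumptions for illustration, and a real pool would also need bounds and thread safety):

import java.util.ArrayDeque;
import java.util.function.IntFunction;
import java.util.function.Supplier;

class Pool {
    // free list of reusable holders
    private final ArrayDeque<IntHolderSupplier> free = new ArrayDeque<>();

    Supplier<Integer> get(int arg, IntFunction<Integer> func) {
        IntHolderSupplier holder = free.poll();
        if (holder == null)
            holder = new IntHolderSupplier(); // pool is empty: fall back to allocation
        holder.value = arg;
        holder.func = func;
        return holder;
    }

    void release(IntHolderSupplier holder) {
        holder.func = null; // drop the reference so the captured function can be collected
        free.push(holder);
    }
}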

Radim Vansa
  • Do you mean something like `supplier = () -> pool.getInteger();`? – bradimus Jul 29 '16 at 13:00
  • @bradimus Sorry, I don't understand the question. The intention is getting a pooled object that, as its business logic, returns 5. The pool can't know about this business logic. – Radim Vansa Jul 29 '16 at 13:20
  • This holder implementation should help you avoid object creation if the Function is also non-capturing. I am very curious where you have to optimize this part; could you please give some context about where it is going to be used? – Nazarii Bardiuk Jul 29 '16 at 18:01
  • No, this is not possible. This is _the JVM's job to optimize, not yours._ (To say nothing of the JVM being tremendously optimized for short-lived objects, and reallocating fresh objects often being faster than using an explicit pool.) – Louis Wasserman Jul 29 '16 at 18:20
  • Object pooling is sooooo 1997. – Brian Goetz Jul 30 '16 at 15:28
  • @LouisWasserman I would love to see additional resources for that claim. AFAIK I haven't said that the object is short-lived - in the use cases I am thinking about, object lifespan is hundreds of milliseconds, though that could be considered short - but my question was general. It's the JVM's job, but it's quite tough for generic optimizations, as the object escapes the creator thread and its lifespan is bound to finishing RPCs/timeouts. Therefore I feel that the JVM could use some aid here. – Radim Vansa Aug 01 '16 at 07:41
  • @BrianGoetz And what is the 2016 panacea for reducing object allocations, to keep GC pauses as short and infrequent as possible (besides Shenandoah, but that's like Java 10, so people will use that in 2020+)? Ideally, I would love to see scoped memory in Java SE/EE, and just allocate all the temporary objects I need for a request in a scope that I could throw away when the request is done, but regrettably that's not available. – Radim Vansa Aug 01 '16 at 07:45
  • Even more so than in the past, the panacea is getting out of the JVM's way and writing the simplest code you can. – Louis Wasserman Aug 01 '16 at 15:17
  • @LouisWasserman That's certainly where I would start. But then I see high allocation rates, and simple fixes that reduce them show performance improvements. Now, some parts of the system are being rewritten to async processing using CompletableFutures, but I have concerns that it will generate much more garbage. I can't foretell whether pooling those objects will have a positive effect in the end; only a test can tell that (though I understand that experienced engineers can guess ahead of time). – Radim Vansa Aug 01 '16 at 20:00
  • Anyway, I think that the question has been answered (by "no"), so feel free to put that as an answer and I'll tick it. – Radim Vansa Aug 01 '16 at 20:01
  • The thing you called “scoped memory” already exists. All objects (besides very large arrays) are allocated in thread-local storage that is cleared all at once (after the few escaped objects have been moved to a different place). – Holger Aug 26 '16 at 14:51
  • @Brian Goetz: [not everyone heard the message](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8u40-b25/javax/swing/JComponent.java#452). I’m still curiously waiting for when this will eventually disappear… – Holger Aug 26 '16 at 15:40
  • @Holger It takes a long time for obsolete idioms to be purged from the ecosystem -- especially in "legacy" codebases, which Swing unfortunately is. Even if the message is _heard_, it may not be immediately _acted upon_ -- but we'll try and continue to put out the message. (Patches welcome!) – Brian Goetz Aug 26 '16 at 16:03
  • @Holger http://openjdk.java.net/guide/ -- a bit out of date, but a good start. – Brian Goetz Aug 26 '16 at 16:25

1 Answer


To “pool capturing lambdas” is a misnomer. Lambda expressions are a technical solution for getting an instance of a functional interface. Since you don’t pool the lambda expressions but the interface instances, dropping every technical aspect of lambda expressions, like their immutability or the fact that the JRE/JVM controls their lifetime, you should rather call it “pooling functional interface instances”.

So you can implement a pool for these instances, just like you can implement a pool for any kind of object. It’s rather unlikely that such a pool performs better than the JVM-managed objects created for lambda expressions, but well, you can try it.

It’s simple if you keep them immutable, i.e. don’t try to reuse them for a different value, but only hand out an existing instance when encountering a previously captured value again. Here is an example of a Supplier cache holding the suppliers for the last 100 encountered values:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

class SupplierCache {
    static final int SIZE = 100;
    // access-ordered LinkedHashMap used as an LRU cache: the eldest entry is
    // evicted once more than SIZE suppliers are cached
    static final LinkedHashMap<Object, Supplier<Object>> CACHE =
        new LinkedHashMap<Object, Supplier<Object>>(SIZE, 1f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Object, Supplier<Object>> eldest) {
                return size() > SIZE;
            }
        };

    @SuppressWarnings("unchecked")
    static <T> Supplier<T> getSupplier(T t) {
        // reuse the supplier capturing an equal value; create one only on a cache miss
        return (Supplier<T>) CACHE.computeIfAbsent(t, key -> () -> key);
    }
}

(Add thread safety if you need it.) So by replacing Supplier<Integer> supplier = () -> x; with Supplier<Integer> supplier = SupplierCache.getSupplier(x); you’ll get the cache functionality, and since you don’t have to release the suppliers, you don’t have to make error-prone assumptions about their life cycle.
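
For instance, a small sketch of the resulting behaviour (assuming single-threaded use of the SupplierCache above; the variable names are just illustrative):

Supplier<Integer> a = SupplierCache.getSupplier(5);
Supplier<Integer> b = SupplierCache.getSupplier(5);
Supplier<Integer> c = SupplierCache.getSupplier(6);

System.out.println(a == b);  // true: the supplier capturing 5 is reused, nothing new is allocated
System.out.println(a == c);  // false: a different value gets its own supplier
System.out.println(a.get()); // 5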

Creating a pool of objects that implement Supplier and return the value of a mutable field, so that you can manually reclaim instances, is not too hard if you simply create an ordinary class implementing Supplier. But you open a whole can of worms with manual memory management, including the risk of reclaiming an object that is still in use. Such objects can’t be shared like the immutable ones in the example above. And you replace object allocation with the action of finding a reclaimable pooled instance plus the action of explicitly putting an instance back after use; there’s no reason why this should be faster.
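
To make that hazard concrete, here is a minimal sketch, assuming the Pool with the hypothetical release method sketched under the question's EDIT (not a real API):

Pool pool = new Pool();

Supplier<Integer> s = pool.get(5, v -> v);
pool.release((IntHolderSupplier) s);        // reclaimed while 's' is still reachable elsewhere
Supplier<Integer> t = pool.get(7, v -> v);  // the pool hands back the very same holder

System.out.println(t.get()); // 7
System.out.println(s.get()); // 7 as well: the "old" supplier silently changed its value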

Holger
  • Replacing `() -> x` with `SupplierCache.getSupplier(x)` is the exact thing I wanted to avoid. – Radim Vansa Aug 29 '16 at 15:45
  • Well, if you want to keep the source code form of the lambda expressions, there’s no built-in solution, as explained, due to the questionable outcome. However, implementing it atop compiled class files would be possible. Generally, the JRE is in charge, so if you want to inject a different behavior without changing the JRE, you’d have to use byte code manipulation/Instrumentation to redirect the bootstrap method to an alternative implementation having that caching feature. – Holger Aug 29 '16 at 17:22
  • @Holger The nice trick with the LRU cache… definitely a plus! – Eugene Mar 11 '17 at 02:09