Comments on: Solving Problems With Asynchrony: Build Your Own Future

I used to use multicast delegates and a traditional CAS retry-if-modified loop, but multicast delegates create a lot of garbage (they're backed by an object[] internally, for some reason). This solution doesn't create any garbage for futures with 0-1 listeners, which are the most common cases.
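
As a rough illustration of that zero-garbage idea (a sketch only; SimpleFuture and RegisterOnComplete are made-up names, not the article's actual API), the listener slot can be a single object field that holds nothing, one delegate, or a list, updated with a CAS retry loop:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

public class SimpleFuture<T>
{
    // null = no listeners, Action<T> = exactly one listener,
    // List<Action<T>> = two or more listeners.
    private object _listeners;

    public void RegisterOnComplete(Action<T> listener)
    {
        while (true)
        {
            object current = _listeners;
            object desired;

            if (current == null)
                desired = listener;                        // first listener: no allocation at all
            else if (current is Action<T> single)
                desired = new List<Action<T>> { single, listener }; // second listener: now we allocate
            else
                // Copy-on-write so concurrent readers never see a half-built list.
                desired = new List<Action<T>>((List<Action<T>>)current) { listener };

            // Classic CAS retry loop: start over if another thread changed the field.
            if (Interlocked.CompareExchange(ref _listeners, desired, current) == current)
                return;
        }
    }
}
```

Completion would read the same field once and invoke whichever shape it finds; with zero or one listener, nothing beyond the delegate itself is ever allocated.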

By: Hendry (#comment-4618), Fri, 20 May 2011 01:49:39 +0000

In hindsight I am sorry that I hijacked your article to vent some thoughts that had been brewing for quite some time – I should have put that into a blog rant.

The cool thing about Futures is that they open a way to go from sequential to parallel code and introduce asynchrony in a nearly pain-free way. Your article is great because it introduces this important technique and also shows how it works below the surface – and to master parallelism and concurrency, they belong in the picture.

Bjoern

By: Kevin Gadd (#comment-3660), Fri, 06 May 2011 14:11:20 +0000

Nice article and insight into Futures, thanks!

Personally I'm a bit skeptical about Futures. They are great for moving from classical sequential code to asynchronous, aka parallel, implementations. However, they might be a bit too easy to apply without thinking about the data flow and high-level app design, while introducing sharing of variables/state (possibly lock-free, but sharing nonetheless) – and ad-hoc or careless sharing impairs scalability.

This means that Futures are great for synchronizing access to a small number of data items but bad if a Future is used per item in a data batch.
The moment huge amounts of data are involved, a more high-level approach to synchronization and coordination should be selected (IMHO).
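
To make that contrast concrete, here is a hypothetical sketch in terms of the .NET Task Parallel Library (PerItemFutures, BatchFuture, and ProcessItem are invented names): one task per element versus a single batch-level task that leaves per-element partitioning to a data-parallel loop.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

static class BatchExample
{
    // Anti-pattern for large batches: one future per element means one
    // allocation and one scheduling decision per element.
    static Task<int[]> PerItemFutures(IReadOnlyList<int> items)
    {
        var tasks = new Task<int>[items.Count];
        for (int i = 0; i < items.Count; i++)
        {
            int item = items[i];
            tasks[i] = Task.Run(() => ProcessItem(item));
        }
        return Task.WhenAll(tasks);
    }

    // Higher-level coordination: one future for the whole batch, with
    // partitioning handled by a data-parallel loop instead of per-item futures.
    static Task<int[]> BatchFuture(IReadOnlyList<int> items)
    {
        return Task.Run(() =>
        {
            var results = new int[items.Count];
            Parallel.For(0, items.Count, i => results[i] = ProcessItem(items[i]));
            return results;
        });
    }

    static int ProcessItem(int x) => x * x; // stand-in for real per-item work
}
```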

I'd also advise using a continuation style of programming when creating new parallel code, e.g. don't let code wait on a certain result (even if it checks in a loop without blocking for a long time). Instead, enqueue a result-computation task which itself enqueues the next computation task using the result when it's done – less synchronization and busy-waiting will (well, might…) help scalability.
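
A minimal sketch of that continuation style, again in TPL terms (FetchDataAsync, Parse, and Summarize are invented stage names): each stage is enqueued from the completion of the previous one, so nothing blocks or polls for a result.

```csharp
using System;
using System.Threading.Tasks;

static class PipelineExample
{
    static Task<string> FetchDataAsync() => Task.Run(() => "raw data");

    static void Run()
    {
        // Each continuation is scheduled only once the previous result exists;
        // no stage waits in a loop for a value to appear.
        FetchDataAsync()
            .ContinueWith(t => Parse(t.Result))
            .ContinueWith(t => Summarize(t.Result))
            .ContinueWith(t => Console.WriteLine(t.Result));
    }

    static int Parse(string raw) => raw.Length;
    static string Summarize(int length) => "parsed " + length + " characters";
}
```

The same shape works with a hand-rolled future if it exposes an on-completion callback: register the next stage as the callback instead of calling ContinueWith.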

Ok, so much for my skepticism in regard to Futures ;-)

I’m absolutely looking forward to your next article!
Bjoern
