[Pharo-project] Explicit control of Parallel execution in Smalltalk ? (was Counting Messages as a Proxy for Average Execution Time in Pharo)

Nicolas Cellier nicolas.cellier.aka.nice at gmail.com
Fri Apr 29 14:58:32 CEST 2011


2011/4/29 Stefan Marr <pharo at stefan-marr.de>:
> Hi:
>
> On 29 Apr 2011, at 02:18, Nicolas Cellier wrote:
>
>> 2011/4/29 Stefan Marr <pharo at stefan-marr.de>:
>>>
>>> On 29 Apr 2011, at 01:07, Nicolas Cellier wrote:
>>>>>> However, as I understand it, it's entirely up to the user to write
>>>>>> code that exploits parallel Processes explicitly, right?
>>>>> Sure, you have to do: n timesRepeat: [ [ 1 expensiveComputation. ] fork ].
>>>>>
>>>>> I don't believe in holy grails or silver bullets.
>>>>> Automatic parallelization is something nice for the kids, like Santa Claus or the Easter Bunny...
>>>>
>>>> Unless your hobby is oriented toward functional languages, maybe...
>>>> But I know just enough about them to shut up now ;)
>>> Would be nice, but that is also a fairy tale.
>>>
>>> The best we have today, in terms of being 'easily' automatically schedulable, are data-flow/stream languages. Things like StreamIT or Lucid have interesting properties, but well, it's still a search for the holy grail.
>>>
>>> Best regards
>>> Stefan
>>>
>>
>> Interestingly, I was thinking of Xtreams as an obvious Smalltalk test
>> case for parallelism.
> Hm, you refer to http://code.google.com/p/xtreams/ ?
> Maybe it's just my lack of imagination, but Xtreams alone do not look like they help a lot here.
>

It's just that the Xtreams wrapper structure is equivalent to Unix pipes,
but that's only a particular form of parallelism.
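To make the analogy concrete, here is a minimal sketch of that kind of pipeline in plain Pharo, using only SharedQueue and #fork (not the Xtreams API); each stage runs in its own Process, the queues play the role of the pipes, and #done is an ad hoc end-of-stream marker:

    | source squeezed sink |
    source := SharedQueue new.
    squeezed := SharedQueue new.
    sink := OrderedCollection new.

    "stage 1: produce the numbers 1..100"
    [ 1 to: 100 do: [:i | source nextPut: i].
      source nextPut: #done ] fork.

    "stage 2: square each item and pass it downstream"
    [ | item |
      [ (item := source next) == #done ]
          whileFalse: [ squeezed nextPut: item * item ].
      squeezed nextPut: #done ] fork.

    "stage 3: collect the results"
    [ | item |
      [ (item := squeezed next) == #done ]
          whileFalse: [ sink add: item ] ] fork.

An optimizer in the StreamIT spirit would have to decide how to fuse or split such stages; here the mapping of stages to Processes is fixed by hand, which is exactly the part I would like not to do manually.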

Nicolas

>
>> But of course, manually choosing an optimal multi-Process
>> implementation is far from easy...
> Right, and that is also not what I meant by 'automatic parallelization does not work'.
> I just think that you will need to formulate your problems in a way that makes the opportunity for parallelism obvious.
> There is no smart compiler that will parallelize something _in the spirit of_ a recursively defined factorial.
> I think we will be forced to formulate our problems in a way that is not inherently sequential. (That is basically also what Michael referred to with Guy Steele's work; Fortress is all about trees, and I also assume that this is the best bet for the moment.)
> Data-flow/stream languages force you to do that; Fork/Join and map-reduce are also about forcing your problem into a manageable structure.
> And then we can apply something like a work-stealing scheduler, and we do not need to specify how the computation is to be distributed over the available computational resources.
>
>
> However, there is always a big constant factor in all this on today's machines, and we are still forced to think about a sequential cut-off. Hopefully we can apply some of that automatic fusing of fine-grained parallelism to get a good-enough solution that avoids having to think about sequential cut-offs. (Which will still remain interesting, since certain sequential strategies will always outperform naive divide-and-conquer strategies.)
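Just to make the 'sequential cut-off' point concrete: below is a minimal fork/join-style sketch in plain Pharo (Processes and a Semaphore, no work-stealing), summing an array by divide-and-conquer and falling back to a plain loop below an arbitrary threshold of 1000 elements:

    | parallelSum data |
    parallelSum := [:array :from :to | | mid done left right |
        to - from < 1000
            ifTrue: [ "sequential cut-off: plain loop below the threshold"
                (from to: to) inject: 0 into: [:sum :i | sum + (array at: i)] ]
            ifFalse: [
                mid := from + to // 2.
                done := Semaphore new.
                "fork the left half; compute the right half in this Process"
                [ left := parallelSum value: array value: from value: mid.
                  done signal ] fork.
                right := parallelSum value: array value: mid + 1 value: to.
                done wait.
                left + right ] ].
    data := (1 to: 100000) asArray.
    parallelSum value: data value: 1 value: data size.

The threshold (1000 here, purely arbitrary) is exactly the kind of constant one would prefer the runtime or scheduler to determine instead of the programmer.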
>
>
>
>> Reading http://cag.lcs.mit.edu/commit/papers/06/gordon-asplos06-slides.pdf
>> reinforces my feeling that it should not be optimized manually, and that
>> the user should not design/assign Processes directly.
>> I don't know what the
>> right abstraction would be in Smalltalk, though, such that a VM could do
>> it, and neither do I understand how StreamIT solves the optimization,
>> but thanks for these interesting clues.
> Fine-grained parallelism and a good scheduling approach already help a lot. Work-stealing seems to be the state-of-the-art solution for that. And hopefully we will get to replace the stupid scheduler in the RoarVM with a work-stealing one soon. (But a CogJIT RoarVM would help real-world performance more ;))
>
> Best regards
> Stefan
>
>
> --
> Stefan Marr
> Software Languages Lab
> Vrije Universiteit Brussel
> Pleinlaan 2 / B-1050 Brussels / Belgium
> http://soft.vub.ac.be/~smarr
> Phone: +32 2 629 2974
> Fax:   +32 2 629 3525
>
>
>


