[Pharo-project] Preallocation behavior

Andres Valloud avalloud at smalltalk.comcastbiz.net
Thu Apr 28 10:04:43 CEST 2011


:).  It can be improved in a number of ways.  Instead of atAllPut:, the 
work should be done by from:to:put: (so atAllPut: x becomes self from: 1 
to: self size put: x); it should do 8 or so at:put:s by hand before 
starting the loop; and it should keep the copied block to 2048 or so 
elements for the iteration (less memory traffic on the CPU that way: you 
read *the same* 8 KB or so every time, and write *different* 8 KB each 
time).  All of this is thoroughly explained and measured in the book.  I 
forget the minute details right now, but tinkering with them does make a 
difference.
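
For readers who do not have the book at hand, here is a rough sketch of 
that shape; the selector, the 8 warm-up writes and the 2048-element cap 
come from the description above, while the control flow is my own 
filling-in, untested, and not the code from the book:

====
from: start to: stop put: anObject
    "Sketch: seed a few slots with plain at:put:, then grow the filled
    prefix by copying it onto the unfilled part with
    replaceFrom:to:with:startingAt:, doubling the copied block each time
    but capping it at about 2048 elements so the same small source block
    is re-read on every iteration."
    | index filled count |
    stop < start ifTrue: [^self].
    index := start.
    [index <= stop and: [index - start < 8]] whileTrue: [
        self at: index put: anObject.
        index := index + 1].
    filled := index - start.
    [filled < (stop - start + 1)] whileTrue: [
        count := (filled min: 2048) min: stop - start + 1 - filled.
        self
            replaceFrom: start + filled
            to: start + filled + count - 1
            with: self
            startingAt: start.
        filled := filled + count].
    ^self

atAllPut: anObject
    ^self from: 1 to: self size put: anObject
====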

On 4/28/11 1:00, Henrik Sperre Johansen wrote:
> On 28.04.2011 09:30, Andres Valloud wrote:
>> As a side comment, I do not know if an atAllPut: method I wrote back
>> in about 2000 or so is still in the image... but if it is not, keep in
>> mind that you can use something like replaceFrom:to:with:startingAt:
>> using the receiver as the source of data, duplicating the amount of
>> data copied each time.  Back then, if you had to write the same object
>> more than 26 or so times, it was faster to use the duplicating "block
>> copy" method.  The key is to avoid multiple primitive calls (which
>> have expensive overhead compared to what is actually done), and
>> replace them with a single primitive call that does a bunch of
>> writes.  With the duplication method, you could get away with doing
>> 9000 writes in no more than 15 primitive calls.  I wrote about this in
>> much more detail in the Fundamentals volume 2 book.
> It is. It's still beautiful :D
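
For the archive, the basic doubling fill looks roughly like this (my 
reconstruction from the description above, without the warm-up writes and 
the block-size cap mentioned at the top of this mail; not necessarily the 
method that is in the image):

====
atAllPut: anObject
    "Seed one slot, then repeatedly copy the already-filled prefix onto
    the rest of the receiver, doubling its size each time.  For 9000
    elements this is 1 at:put: plus 14 copies, i.e. 15 primitive calls."
    | filled count |
    self isEmpty ifTrue: [^self].
    self at: 1 put: anObject.
    filled := 1.
    [filled < self size] whileTrue: [
        count := filled min: self size - filled.
        self
            replaceFrom: filled + 1
            to: filled + count
            with: self
            startingAt: 1.
        filled := filled + count].
    ^self
====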
>
>>
>> On 4/27/11 23:35, jannik.laval wrote:
>>> Hi all,
>>>
>>> I am playing with MessageTally, and I have a strange result with
>>> preallocation.
>>> Here is my example. I am working on PharoCore 1.3, with VM 4.2.5.
>>>
>>> An optimization is to use a Stream. Here is my source code:
>>> ===
>>> MessageTally spyOn:
>>>   [ 500 timesRepeat: [
>>>     | str |
>>>     str := WriteStream on: (String new).
>>>     9000 timesRepeat: [ str nextPut: $A ]]].
>>> ===
>>>
>>> The result appears after *812 ms*, which is a large improvement.
>>> Now, we could optimize again using preallocation. Here is my source
>>> code:
>>>
>>> ====
>>> MessageTally spyOn:
>>>   [ 500 timesRepeat: [
>>>     | str |
>>>     str := WriteStream on: (String new: 10000).
>>>     9000 timesRepeat: [ str nextPutAll: 'A' ]]].
>>> ====
>>>
>>> And the result is strange: it is about two times slower to display the
>>> result.
>>> The result appears after 1656 ms.
>
> In the first example, you are making a single string with all A's of
> size 9000, repeated 500 times.
> In the second example, you are making 9000 strings with all A's of size
> 10000, repeated 500 times.
>
> I.e., you are not measuring equivalent operations.
>
> That it only takes twice as long is due to Andres' excellent nextPutAll:
> implementation :)
>
> Cheers,
> Henry
>
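
As a footnote to Henrik's point: a like-for-like version of the 
preallocated benchmark keeps nextPut: $A and changes only the initial size 
of the underlying string, along these lines (timings will of course depend 
on the machine and VM):

====
MessageTally spyOn:
    [ 500 timesRepeat: [
        | str |
        "same loop body as the first snippet; only the preallocation differs"
        str := WriteStream on: (String new: 10000).
        9000 timesRepeat: [ str nextPut: $A ]]].
====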


