[Pharo-project] #& in Socket >> #waitForSendDoneFor:

Levente Uzonyi leves at elte.hu
Thu Nov 11 00:39:58 CET 2010


On Thu, 11 Nov 2010, Levente Uzonyi wrote:

> On Tue, 9 Nov 2010, Philippe Marschall wrote:
>
>> On 09.11.2010 07:58, Schwab,Wilhelm K wrote:
>>> What does your patch do?
>> 
>> It replaces the #& with #and: and swaps receiver and argument to
>> preserve the same semantics. That saves a primitive call if the data is
>> already sent. It's basically the same as the one posted by Levente.
>> 
>>> At a minimum, it deserves a little attention.  Things that come to mind 
>>> are that one version does less work due to some type of optimization (and 
>>> runs faster as a result) or that one is too quick to detect a loss of 
>>> connection and sends less data per opportunity, appearing to run slower as 
>>> a result.
>>> 
>>> Can you elaborate on "I'm able to push about 1 Mbyte/s more"?  I guess I'm 
>>> asking how that manifests itself?  Are there a bunch of connections that 
>>> form, send and fail?  Do they each get a little farther or do they go 
>>> faster?
>> 
>> Outgoing throughput from the Pharo image was about 1 Mbyte/s higher. Now
>
> 1MB/s throughput sounds pretty low. On Windows I could transfer 160MB/s 
> between two processes in the same image by using the Socket primitives 
> directly in 4k chunks. With the high-level methods (#sendData:, 
> #receiveData:) it went down to 110MB/s. With a two-image setup 
> (client-server) I got 66MB/s (one CPU per image).

Looks like I missed the word "higher" in your mail. Anyway, I'm interested 
in your absolute numbers too.
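
For the archives, the change boils down to eager vs. lazy evaluation of the 
loop condition. The snippet below is only a sketch of the idea, not the 
exact method body:

	"Before (sketch): #& evaluates both operands, so the isConnected
	primitive call happens even when the send has already completed."
	self isConnected & (sendDone := self primSocketSendDone: socketHandle) not

	"After (sketch): #and: evaluates its block argument only when the
	receiver is true, so the isConnected call is skipped once sendDone
	is already true. The assignment stays in the receiver, so sendDone
	is always set and the semantics are preserved."
	(sendDone := self primSocketSendDone: socketHandle) not and: [self isConnected]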
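
The single-image numbers above came from two processes in one image; a rough 
sketch of that kind of loopback test, using the high-level #sendData: / 
#receiveData: calls, could look like this (port and sizes are illustrative, 
this is not the actual benchmark code):

	| server client payload total elapsed |
	payload := String new: 4096 withAll: $x.	"4k chunks"
	total := 64 * 1024 * 1024.	"64 MB overall"
	server := Socket newTCP.
	server listenOn: 54321 backlogSize: 4.
	"The receiver runs in a second process in the same image."
	[ | conn received |
		conn := server waitForAcceptFor: 10.
		received := 0.
		[received < total] whileTrue: [received := received + conn receiveData size].
		conn closeAndDestroy ] fork.
	client := Socket newTCP.
	client connectTo: NetNameResolver localHostAddress port: 54321.
	elapsed := Time millisecondsToRun: [
		1 to: total // payload size do: [:i | client sendData: payload]].
	Transcript show: total printString, ' bytes in ', elapsed printString, ' ms'; cr.
	client closeAndDestroy.
	server closeAndDestroy

Dividing the byte count by the elapsed time gives the MB/s figures quoted above.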


Levente

>
> Though our machines are different (yours is probably faster), the 
> combination of machine/OS differences + concurrent access + ab + Apache + 
> AJP + Seaside still caused a 66x slowdown. That's more than acceptable IMHO.
>
>
> Levente
>
>
