[Pharo-project] Networking change in Pharo 1.2?

Chris Muller asqueaker at gmail.com
Mon Apr 18 17:12:25 CEST 2011


This is the VM I used:

3.9-7 #1 Sun Feb  6 18:58:21 PST 2011 gcc 4.1.2
Croquet Closure Cog VM [CoInterpreter VMMaker-oscog.47]
Linux mcqfes 2.6.18-128.el5 #1 SMP Wed Jan 21 10:44:23 EST 2009 i686
i686 i386 GNU/Linux
plugin path: /opt/4dst/thirdparty/squeak/lib/squeak/3.9-7/ [default:
/opt/4dst/thirdparty/squeak/lib/squeak/3.9-7/]

However, I use this same VM when I run the tests in Pharo 1.1.1, and it's solid.

 - Chris


On Mon, Apr 18, 2011 at 3:23 AM, Henrik Sperre Johansen
<henrik.s.johansen at veloxit.no> wrote:
> On 17.04.2011 22:48, Chris Muller wrote:
>>
>> I was able to work on getting Magma 1.2 going in Pharo.  It was quite
>> easy to get the code loaded and functioning in Pharo 1.1.1, Pharo 1.2,
>> and Pharo 1.3.
>>
>> But something seems to have changed in Pharo's networking from 1.1.1
>> to 1.2.  All Magma functionality seems to work fine for low-volume
>> activity.  However, when the test suite gets to the HA test cases (at
>> the end), one of the images performing heavy networking activity
>> consistently gets very slow and bogged down for some reason, causing
>> the clients to time out and disrupting the test suite.  Fortunately,
>> it happens in the same place in the test suite every time.
>>
>> The UI of the image in question becomes VERY sluggish, but
>> MessageTally spyAllOn: didn't reveal anything useful.  What is it
>> doing?  I did verify that the Magma server in that image is still
>> functioning; clients were committing, but I had to increase their
>> timeouts from 10 to 45 seconds to keep them from timing out.
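>>
>> (For reference, the profiling was along these lines -- a minimal
>> sketch, with a Delay standing in for the real workload while the
>> sluggishness is reproduced:
>>
>>   MessageTally spyAllOn: [ (Delay forSeconds: 30) wait ].
>>
>> spyAllOn: samples every process in the image, not just the active
>> one, so I expected it to show where the time was going.)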
>>
>> Unfortunately, after two days of wrangling in Pharo (because I'm an
>> old Squeak dog) I could not nail the problem down, but I have one
>> suspect.  A couple of times, I caught a process seemingly hung up in
>> NetNameResolver, trying to resolve an IP address from 'localhost'.
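>>
>> (A quick way to check whether name resolution itself is the
>> bottleneck -- a sketch, using the NetNameResolver selectors from the
>> Squeak lineage and an arbitrary 30-second timeout -- is to time a
>> lookup by hand:
>>
>>   | addr ms |
>>   ms := Time millisecondsToRun:
>>     [ addr := NetNameResolver addressForName: 'localhost' timeout: 30 ].
>>   Transcript show: 'localhost lookup took ', ms printString, ' ms'; cr.
>>   addr isNil ifTrue: [ Transcript show: 'lookup failed'; cr ].
>>
>> If that do-it hangs or takes whole seconds in the 1.2 image but not
>> in 1.1.1, the resolver rather than Magma is the suspect.)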
>>
>> This exact set of Magma packages is rock-solid on Pharo 1.1.1 and on
>> Squeak, but that doesn't mean the problem necessarily lies in Pharo
>> 1.2; maybe a networking bug in 1.1.1 is letting Magma "misuse" the
>> network and get away with it, and Pharo 1.2 is now stricter?  I don't
>> know.  I would just like to ask for help from the experts here who
>> know what went into Pharo 1.2, so that hopefully we can get to the
>> bottom of it.
>>
>> Thanks,
>>   Chris
>>
> Which VM did you run these tests on?
> IIRC, Cog has a hard limit on how many external semaphores are available,
> and each Socket consumes 3 of those.
> So if you are running on Cog, the problem when under heavy load may be that
> there simply aren't enough free external semaphores to create enough
> sockets...
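>
> (A rough way to see how full that table is from the image side -- a
> sketch using selectors from the Squeak lineage, which may differ a
> bit between Pharo versions -- is to count the registered external
> objects:
>
>   | table used |
>   table := Smalltalk externalObjects.
>   used := table count: [ :each | each notNil ].
>   Transcript show: used printString, ' of ', table size printString,
>     ' external object slots in use'; cr.
>
> Each Socket registers three semaphores there (read, write, and one
> for connection/state changes), so a few hundred concurrent sockets
> can exhaust a small table.)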
>
> Cheers,
> Henry
>
>


