[Cado-nfs-discuss] Help starting out using two machines

Emmanuel Thomé Emmanuel.Thome at inria.fr
Fri Mar 6 00:31:45 CET 2020


Hi,

This is an excellent occasion to complement the README.

In my opinion, we should be almost silent regarding slaves.hostnames and
friends. They exist as a convenience, so that all jobs can be spawned
from the server, which comes in handy for baby-size factorizations. But
for anything beyond that "testing-only" approach, you need to proceed
differently.

The process is simple.

    1 - build all binaries on the server node, and run the server script.
    2 - build all binaries on the client nodes, and run the client
    scripts, pointing them to the server. (the server node can also
    act as a client).

Concerning step 1, you need binaries, because some steps are run on the server
no matter what (creating the factor base files, but also linear algebra,
unless you choose to do linear algebra separately). To run a "bare"
server, you must do:

    make
    ./cado-nfs.py 90377629292003121684002147101760858109247336549001090677693 server.whitelist=0.0.0.0/0 --server --workdir /some/path/to/a/fresh/directory/

(adjust the network mask to match the clients that will connect.)
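
For example, if all your clients sit on one local network, a tighter
whitelist (the address range here is of course hypothetical) could be:

    server.whitelist=192.168.1.0/24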

The server's standard output contains two lines that you really want to
pay attention to, namely:
    
    Info:root: If this computation gets interrupted, it can be resumed with ./cado-nfs.py /some/path/to/a/fresh/directory/c60.parameters_snapshot.0

and

    Info:HTTP server: You can start additional cado-nfs-client.py scripts with parameters: --server=https://localhost:40047 --certsha1=108fcb0e4961f195b3011d4c2e0c6077c82b238a


After going through the very early setup, the server will be essentially
idle, waiting for clients to connect. You might want to run it in a
screen session. As the first message quoted above says, it is perfectly
fine to interrupt and restart the server, provided that you use the
suggested command line.
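
For instance, a minimal sketch (start a named screen session, then run
the resume command inside it; the snapshot path is whatever your own
server printed, not necessarily this one):

    screen -S cado-server
    ./cado-nfs.py /some/path/to/a/fresh/directory/c60.parameters_snapshot.0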


Now for step 2. Clients are not necessarily the same machine as the
server. They may have different architectures and so on. So, despite the
fact that the server does have the functionality to ship a binary to the
client, I very much advise against using it. Instead, you should build
your binaries on the client, and instruct the client script to use them,
and skip the server-shipped binaries. This is done as follows
(./cado-nfs-client.py --help might be worth a visit):

    ./cado-nfs-client.py --basepath $wdir --server=https://localhost:40047 --certsha1=108fcb0e4961f195b3011d4c2e0c6077c82b238a --bindir $(eval `make show` ; echo $build_tree)

where --server and --certsha1 are as suggested by the server, and $wdir
is some local working directory that you create specifically for the
client. Side note: you can even automate this temp-dir handling with a
helper script from the cado-nfs testing suite:

    ./tests/provide-wdir.sh --arg --basepath -- ./cado-nfs-client.py --server=.......[rest of the cmdline above]


The part:
    --bindir $(eval `make show` ; echo $build_tree)
uses cado-nfs's top-level makefile to retrieve the build directory, so
that you can pass it to the script. Of course, you can specify the build
directory by any less automatic means you see fit.
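
For instance, with the usual in-tree build layout (the exact path below
is an assumption about where you cloned the source; cado-nfs typically
builds under build/<hostname>):

    --bindir $HOME/cado-nfs/build/$(hostname)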


Clients work mostly fine as is with these settings, but will still obey
all parameters that are suggested by the server. That includes in
particular the number of threads per client, which is the server's
--client-threads argument.

Again, deciding the number of threads per client should rather be the
client's business. Hence there's a way, in cado-nfs-client.py, to
override the server-suggested setting. You just add, for example:

    --override -t 4

to the cado-nfs-client.py command line, to force 4-thread clients. Note
that you may have several clients per node, and at least for polyselect
this is preferable to having one client with many threads.
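
Putting the pieces together, a complete client invocation might look
like this (the server URL, certificate hash, and build-tree lookup are
the same placeholders as above; substitute your own):

    ./cado-nfs-client.py --basepath $wdir \
        --server=https://localhost:40047 \
        --certsha1=108fcb0e4961f195b3011d4c2e0c6077c82b238a \
        --bindir $(eval `make show` ; echo $build_tree) \
        --override -t 4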

Once your computation is in the sieving phase, you can switch to a setup
where you have only one client per node, and use automatic thread
placement (alas, this is _only_ for sieving, for the moment). You just
add this to the cado-nfs-client.py command line:

    --override -t auto


This should normally get you going for the sieving part of
factorizations. As long as you don't point too many machines to the
server, it should cope. You might want to enlarge the adrange and qrange
parameters so that the clients don't ask for work too often.
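
These are server-side parameters. As a sketch (the parameter names are
from the shipped parameter files; the values here are made up, tune them
to your setup), you could add something like this to the server's
command line:

    tasks.polyselect.adrange=5000 tasks.sieve.qrange=10000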

Filtering, then, happens on the server. Linear algebra too. If you want
to do linear algebra on several machines, it's a different topic.

E.

On Thu, Mar 05, 2020 at 05:16:15PM -0500, David Willmore wrote:
> Hello, all.  I just picked up a second box which is pretty much
> identical to my first one WRT their CPUs and memory configuration.
> I'm trying to use both of them to sieve an example number from the
> README.  I have ssh working between them and the clients can reach the
> 'server' on port 8001 to get/report work.
> 
> What I'm stuck on is how to tell cado-nfs.py what clients to use and
> what their CPU configuration is.  If I use either device for a job, it
> properly detects the CPU configuration and makes the best use of
> things--as far as I can tell.  What I can't seem to do is to figure
> out how to tell it about the worker machines--or will it just guess
> and do the right thing?  The example from the README:
> ./cado-nfs.py 353493749731236273014678071260920590602836471854359705356610427214806564110716801866803409
> slaves.hostnames=hostname1,hostname2,hostname3 --slaves 4
> --client-threads 2
> seems a bit confusing.  The machine running the initial cado-nfs.py
> script isn't listed in the list of slave hostnames.  But the # of
> slaves is 4, so is the localhost implicitly added to the three
> hostnames provided for the slaves?  What does the "--client-threads"
> parameter do?
> 
> Any help you can provide would be welcome.  Thank you.
> 
> Cheers,
> David