[Cado-nfs-discuss] [error] caused by a larger special-q

canny georgina cannysiska at gmail.com
Tue Jul 24 07:19:26 CEST 2018


I have tried to resume the failed workunits from the snapshot, but it still
fails with the same tasks.sieve.allow_largesq problem. So instead I tried
importing the relations for c110, following the guide in
/cado-nfs/scripts/cadofactor/README.
I set the qmin value as the guide says, but I get the output shown below.

The previous computation directory for c110 contains files like these:
c110.9950000-9960000._5x3xfb2.gz
c110.9960000-9970000.12pnspj7.gz
c110.9970000-9980000.it8kvwm9.gz
c110.9980000-9990000.pcz3e3l6.gz
c110.9990000-10000000.680ccgha.gz

So the last completed range ends at 10000000, and that becomes the qmin value.
I then re-ran the job with that qmin, importing the already-retrieved
relations from before, but it shows this:

root@ubuntu:/home/chunnie/Desktop/Math3/cado-nfs# ./cado-nfs.py
35794234179725868774991807832568455403003778024228226193532908190484670252364677411513516111204504060317568667
tasks.polyselect.import=c110_nonlinear.poly tasks.sieve.I=13
tasks.sieve.allow_largesq=true
tasks.sieve.import=@/tmp/cado.5tpo3b5l/c110.upload/
tasks.sieve.qmin=10000000
Info:root: Using default parameter file ./parameters/factor/params.c110
Warning:Parameters: Parameter tasks.sieve.qmin, previously set to value
500000, overwritten with value 10000000
Info:root: No database exists yet
Info:root: Created temporary directory /tmp/cado._98j71v7
Info:Database: Opened connection to database /tmp/cado._98j71v7/c110.db
Info:root: Set tasks.threads=4 based on detected logical cpus
Info:root: tasks.polyselect.threads = 2
Info:root: tasks.sieve.las.threads = 2
Info:root: slaves.scriptpath is /home/chunnie/Desktop/Math3/cado-nfs
Info:root: Command line parameters: ./cado-nfs.py
35794234179725868774991807832568455403003778024228226193532908190484670252364677411513516111204504060317568667
tasks.polyselect.import=c110_nonlinear.poly tasks.sieve.I=13
tasks.sieve.allow_largesq=true
tasks.sieve.import=@/tmp/cado.5tpo3b5l/c110.upload/
tasks.sieve.qmin=10000000
Info:root: If this computation gets interrupted, it can be resumed with
./cado-nfs.py /tmp/cado._98j71v7/c110.parameters_snapshot.0
Info:Server Launcher: Adding ubuntu to whitelist to allow clients on
localhost to connect
Info:HTTP server: Using non-threaded HTTPS server
Info:HTTP server: Using whitelist: localhost,ubuntu
Info:Complete Factorization: Factoring
35794234179725868774991807832568455403003778024228226193532908190484670252364677411513516111204504060317568667
Info:HTTP server: serving at https://ubuntu:33369 (0.0.0.0)
Info:HTTP server: For debugging purposes, the URL above can be accessed if
the server.only_registered=False parameter is added
Info:HTTP server: You can start additional cado-nfs-client.py scripts with
parameters: --server=https://ubuntu:33369
--certsha1=c7feef53360f1f753edaae4c46f6ff3e272e37f1
Info:HTTP server: If you want to start additional clients, remember to add
their hosts to server.whitelist
Info:Client Launcher: Starting client id localhost on host localhost
Info:Client Launcher: Starting client id localhost+2 on host localhost
Info:Client Launcher: Running clients: localhost+2 (Host localhost, PID
30745), localhost (Host localhost, PID 30742)
Info:Polynomial Selection (size optimized): Skipping this phase, as we will
import the final polynomial
Warning:Polynomial Selection (size optimized): some stats could not be
displayed for polyselect1 (see log file for debug info)
Info:Polynomial Selection (root optimized): Starting
Info:Polynomial Selection (root optimized): Importing file
c110_nonlinear.poly
Warning:Polynomial Selection (root optimized): Polynomial in file
c110_nonlinear.poly has no Murphy E value
Info:Polynomial Selection (root optimized): New best polynomial from file
c110_nonlinear.poly: Murphy E = 0
Info:Polynomial Selection (root optimized): Best polynomial previously
found in c110_nonlinear.poly has Murphy_E = 0
Info:Polynomial Selection (root optimized): Imported polynomial, skipping
this phase
Warning:Polynomial Selection (root optimized): some stats could not be
displayed for polyselect2 (see log file for debug info)
Info:Generate Factor Base: Starting
Info:Generate Factor Base: Finished
Info:Generate Factor Base: Total cpu/real time for makefb: 3.02/2.07714
Info:Generate Free Relations: Starting
Info:Generate Free Relations: Found 515539 free relations
Info:Generate Free Relations: Finished
Info:Generate Free Relations: Total cpu/real time for freerel: 54.42/16.2913
Info:Lattice Sieving: Starting
Info:Lattice Sieving: Importing files listed in
/tmp/cado.5tpo3b5l/c110.upload/
Traceback (most recent call last):
  File "./cado-nfs.py", line 122, in <module>
    factors = factorjob.run()
  File "./scripts/cadofactor/cadotask.py", line 5754, in run
    last_status, last_task = self.run_next_task()
  File "./scripts/cadofactor/cadotask.py", line 5829, in run_next_task
    return [task.run(), task.title]
  File "./scripts/cadofactor/cadotask.py", line 3058, in run
    super().run()
  File "./scripts/cadofactor/cadotask.py", line 1121, in run
    super().run()
  File "./scripts/cadofactor/cadotask.py", line 956, in run
    self.import_files(self.params["import"])
  File "./scripts/cadofactor/cadotask.py", line 962, in import_files
    with open(input_filename[1:], "r") as f:
IsADirectoryError: [Errno 21] Is a directory:
'/tmp/cado.5tpo3b5l/c110.upload/'

It seems something goes wrong while reading the imported files in
c110.upload, but I can't figure out why.
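The only clue I see in the traceback is that import_files() strips the
leading '@' and opens the remaining path as a plain file, so my guess is
that the '@' form of tasks.sieve.import expects a text file listing the
relation files one per line, not a directory. A sketch of that guess
(untested; <N110> stands for the 110-digit number from the command above,
and /tmp/c110_relfiles.txt is just a made-up name):

  # Write one relation-file name per line into a list file.
  ls /tmp/cado.5tpo3b5l/c110.upload/c110.*.gz > /tmp/c110_relfiles.txt

  # Point tasks.sieve.import at the list file, keeping the leading '@'.
  ./cado-nfs.py <N110> tasks.polyselect.import=c110_nonlinear.poly \
      tasks.sieve.I=13 tasks.sieve.allow_largesq=true \
      tasks.sieve.qmin=10000000 tasks.sieve.import=@/tmp/c110_relfiles.txt

Thank you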


2018-07-24 9:13 GMT+07:00 canny georgina <cannysiska at gmail.com>:

> I have tried tasks.sieve.allow_largesq = true; thank you for the suggestion.
> I think this option is more flexible than enlarging lpb0 and lpb1 (because I
> can't monitor the computation 24/7).
> However, I changed the largesq option by first interrupting the computation
> (with Ctrl+C) and then restarting from the snapshot with the
> tasks.sieve.allow_largesq = true option added. Will it take effect that way?
>
> I'm also curious whether it is possible to resume the computation of a
> failed workunit. This "not-enabled-allow-largesq" problem stopped the
> computation in the middle (because of so many failed workunits), and when I
> tried to resume from its snapshot using the same trick (adding the
> tasks.sieve.allow_largesq = true option), it failed to resume.
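>
> Concretely, that resume attempt was along these lines (a sketch only; the
> snapshot path is whatever the failed run printed, with its temporary
> directory replaced by a placeholder here):
>
>   ./cado-nfs.py /tmp/cado.XXXXXXXX/c110.parameters_snapshot.0 \
>       tasks.sieve.allow_largesq=true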
>
>
>
> 2018-07-23 21:57 GMT+07:00 Pierrick Gaudry <pierrick.gaudry at loria.fr>:
>
>> > Well, I run it as root because I need to install some dependencies
>> > first before using cado, so it feels rather bothersome to re-enter the
>> > password every time it needs root permission. But will it be unsafe, then?
>>
>> It depends on whether you trust the cado-nfs developers ;-)
>>
>> Even if we are honest, we could have made a mistake in the code that
>> ends up doing "rm -rf /" or something similarly bad for the system, if
>> you give it weird, untested parameters.
>>
>> At your own risk...
>>
>> Regards,
>> Pierrick
>>
>
>