[Cado-nfs-discuss] [error] caused by a larger special-q

canny georgina cannysiska at gmail.com
Tue Jul 24 04:13:45 CEST 2018


I have tried tasks.sieve.allow_largesq = true, thank you for the suggestion.
I think this is more flexible than enlarging lpb0 and lpb1 (because I can't
monitor the computation 24/7).
However, I changed the allow_largesq option by first interrupting the
computation (with Ctrl+C) and then re-running the snapshot with the
tasks.sieve.allow_largesq = true option added. Will it take effect that way?
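
(To be concrete, the resume command I mean is something along these lines,
with <workdir> and <name> standing in for my actual work directory and job
name:)

    # interrupt the running job with Ctrl+C, then re-run the snapshot file
    # and append the extra parameter on the command line
    ./cado-nfs.py <workdir>/<name>.parameters_snapshot.0 \
        tasks.sieve.allow_largesq=true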

However, I'm also curious whether it's possible to resume the computation of
a failed workunit. What I mean is that running without allow_largesq made the
computation stop in the middle (because of the many failed workunits), and
when I tried to resume from its snapshot with the same trick (adding the
tasks.sieve.allow_largesq = true option), it failed to resume.



2018-07-23 21:57 GMT+07:00 Pierrick Gaudry <pierrick.gaudry at loria.fr>:

> > well, I run it as a root because I need to install some dependencies
> first
> > before using cado, so it feels kinda bothersome to reenter password
> > everytime It needs root permission. But, will it be unsafe, then?
>
> It depends on whether you trust the cado-nfs developers ;-)
>
> Even if we are honest, we could have made a mistake in the code that
> implies doing "rm -rf /" or something similarly bad for the system, if
> you give it weird, untested parameters.
>
> At your own risk...
>
> Regards,
> Pierrick
>

