[Cado-nfs-discuss] [error] caused by a larger special-q
cannysiska at gmail.com
Tue Jul 24 04:13:45 CEST 2018
I have tried tasks.sieve.allow_largesq = true; thank you for the suggestion.
I think this is more flexible than enlarging lpb0 and lpb1 (because I
can't monitor the computation 24/7).
However, I changed the allow_largesq option by first interrupting the
computation (with Ctrl+C) and then re-running from the snapshot with the
tasks.sieve.allow_largesq = true option added. Will it take effect that way?
I'm also curious whether it's possible to resume the computation of a
failed workunit. What I mean is: running without allow_largesq enabled made
the computation stop partway through (because of the many failed workunits),
and when I tried to resume from its snapshot using the same trick (adding
the tasks.sieve.allow_largesq = true option), it failed to resume.
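For reference, this is roughly the invocation I used; a sketch only, since
the snapshot path shown here is illustrative (use the one cado-nfs.py prints
when it is interrupted), and command-line parameters are passed as key=value
pairs:

```
# Resume from the parameter snapshot of the interrupted run,
# adding the allow_largesq option on the command line.
# The snapshot path is hypothetical; substitute your own working directory.
./cado-nfs.py /tmp/cado.xxxxx/name.parameters_snapshot.0 tasks.sieve.allow_largesq=true
```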
2018-07-23 21:57 GMT+07:00 Pierrick Gaudry <pierrick.gaudry at loria.fr>:
> > Well, I run it as root because I need to install some dependencies
> > before using cado, so it feels rather bothersome to re-enter the password
> > every time it needs root permission. But will that be unsafe, then?
> It depends on whether you trust the cado-nfs developers ;-)
> Even if we are honest, we could have made a mistake in the code that
> results in doing "rm -rf /" or something similarly bad for the system, if
> you give it weird, untested parameters.
> At your own risk...