I am getting WUs with absolutely huge time estimates. Initial estimates are in the hundreds of hours and they are actually running maybe 10 to 20 minutes.
Jim |
I am getting WUs with absolutely huge time estimates. Initial estimates are in the hundreds of hours and they are actually running maybe 10 to 20 minutes.
Jim
Probably because of the last duration correction factor.
On an old client (which doesn't know the DCF), the estimated time is 5h52min.
I am getting WUs with absolutely huge time estimates. Initial estimates are in the hundreds of hours and they are actually running maybe 10 to 20 minutes.
Jim
I noticed the same thing. But, if you think about it, it is much better this way than the reverse - where completion times are much longer than the estimates. That situation was prevalent a year or so ago, and it was not uncommon to download huge caches of WUs that were impossible to finish before the deadline.
I am getting WUs with absolutely huge time estimates. Initial estimates are in the hundreds of hours and they are actually running maybe 10 to 20 minutes.
Jim
I noticed the same thing. But, if you think about it, it is much better this way than the reverse - where completion times are much longer than the estimates. That situation was prevalent a year or so ago, and it was not uncommon to download huge caches of WUs that were impossible to finish before the deadline.
I agree, except BOINC absolutely panicked and dropped everything to process this workunit, which it is designed to do. So here I am, trying to reduce or eliminate my workload, since I like to have nothing running when I am away, and this rather long-looking WU shows up. After about 10 minutes, it had completed enough of the WU to quit panicking. No harm, no foul, but I posted this just in case it was a symptom of something more severe.
Jim
I run off-line. For me, these huge estimates cause complications. When I connect, I need to fetch enough work to keep my system busy until the next time I connect. When my system downloads a Sztaki workunit that is estimated to take 100 hours, it downloads fewer workunits from other projects (because it thinks there is a lot of work to do). But when that Sztaki workunit finishes in less than an hour, 99 hours of work have "evaporated" from my work queue. That means that my system will run out of crunching that much sooner -- and if I don't want my system to go idle I have to make the next connection sooner than I had planned.
Open the file client_state.xml in the BOINC directory, find the Sztaki project section, and look for the line <duration_correction_factor>x.xxxxxx</duration_correction_factor>.
What's the value of x.xxxxxx?
What's the value of x.xxxxxx?
13.2 (some more digits I didn't write down)
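For context, the client multiplies its base runtime estimate by the DCF, so a factor around 13 inflates every estimate by the same ratio. A sketch of the arithmetic, assuming the 5h52min old-client estimate mentioned earlier in the thread:

```python
# Hedged sketch: estimated runtime = base estimate * duration correction factor.
# The 5 h 52 min base and the 13.2 DCF are the values reported in this thread.
base_hours = 5 + 52 / 60        # old-client estimate: 5 h 52 min
dcf = 13.2                      # duration_correction_factor reported above
estimate = base_hours * dcf
print(f"{estimate:.1f} hours")  # roughly 77 hours instead of ~6
```

That is not quite the "hundreds of hours" reported in the first post, but it shows how a large DCF blows the estimate up by an order of magnitude.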
What's the value of x.xxxxxx?
13.2 (some more digits I didn't write down)
Close your BOINC client, edit the line <duration_correction_factor>x.xxxxxx</duration_correction_factor>,
setting the value x.xxxxxx to something like 1.000000.
Start your client again.
Next time, you will get more work from the project.
It's only a workaround, but it can help.
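The manual edit can also be scripted. A minimal sketch, assuming a hypothetical Python helper; the element name matches the line quoted above, but note that a real client_state.xml contains one such line per attached project, and this simple version rewrites all of them rather than only the Sztaki block. Stop the BOINC client before touching the real file.

```python
import re

def reset_dcf(xml_text: str, new_value: float = 1.0) -> str:
    """Rewrite every <duration_correction_factor> value in the given XML text.

    Caution: this touches all projects, not just Sztaki; for a targeted fix,
    edit only the matching <project> block by hand.
    """
    return re.sub(
        r"(<duration_correction_factor>)[^<]*(</duration_correction_factor>)",
        lambda m: f"{m.group(1)}{new_value:.6f}{m.group(2)}",
        xml_text,
    )

# Example on a fragment shaped like the line from client_state.xml:
fragment = "<duration_correction_factor>13.200000</duration_correction_factor>"
print(reset_dcf(fragment))
# -> <duration_correction_factor>1.000000</duration_correction_factor>
```

Reading the whole file, passing it through reset_dcf, and writing it back (with a backup copy first) would reproduce the manual steps above.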