job.save()'d jobs are immediately cancelled? #9
What options are being set on the jobs? Are they set to repeat? Sent from Vaughn's iPad
No options. Initially I had a repeat forever and wait 300 seconds, but then I commented out the code and dropped the …
In particular, take a look at the `cancelRepeats` option of `job.save()`.
Thanks for the pointer - this might be it:
Will have to do more testing tomorrow. In my particular use case, parsing feeds, it wouldn't really make sense to cancel parsing all feeds if parsing one of them fails. Yet they're all of the same type, 'parseFeed', just with different data. Is `cancelRepeats` applied across all jobs of the same type, regardless of their data?
Let me recap and see if I understand your workload: you have a set of feeds, each of which needs to be re-parsed periodically (e.g. every 300 seconds); the jobs are all of the same type, 'parseFeed', differing only in the feed in their data; and a failure on one feed shouldn't affect the others.
If I have all of that correct, then yes — `cancelRepeats` applies to all repeating jobs of the same type, regardless of their data. A few options come to mind: 1) check whether a job for a given feed already exists before creating it, and save with `cancelRepeats: false`; 2) give each feed its own job type; 3) use a single repeating job that creates one job per feed each time it runs.
Those are just some thoughts, I'm sure there are other possibilities as well.
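To make that concrete — a minimal sketch of the behaviour under discussion, assuming the option in question is `cancelRepeats` on `job.save()` and using hypothetical feed URLs:

```js
// Sketch: assumes cancelRepeats defaulted to true in job.save() at the time,
// so saving a second repeating job of the same type cancels the first.
var jobA = feedParsingJobs.createJob('parseFeed', { feed: { url: 'http://a.example.com/rss' } });
jobA.repeat({ repeats: Job.forever, wait: 300 * 1000 });
jobA.save(); // jobA is saved and waiting

var jobB = feedParsingJobs.createJob('parseFeed', { feed: { url: 'http://b.example.com/rss' } });
jobB.repeat({ repeats: Job.forever, wait: 300 * 1000 });
jobB.save(); // under the old default this acts like save({ cancelRepeats: true }):
             // jobA, a repeating job of the same 'parseFeed' type, is cancelled
             // even though its data points at a different feed

// Passing the option explicitly avoids that:
// jobB.save({ cancelRepeats: false });
```

That would also explain the `jobCancel succeeded` console messages reported in the issue: each `save()` cancels the repeating 'parseFeed' jobs saved before it.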
Thanks - you have all of that correct. I was thinking about option 2 as well, i.e. name the job type according to the feed, since the feed name is unlikely to change. That seems a bit brittle, and conceptually somewhat misleading since the jobs are really of the same type. What I've used so far is an unsophisticated option #1:

    // for each feed...
    if (!feedParsingJobs.findOne({ 'data.feed.url': feed.url })) {
      // add the job only if it doesn't already exist in the persisted collection
      var job = feedParsingJobs.createJob('parseFeed', { ... feed: ... });
      job.repeat({
        repeats: Job.forever,
        wait: 300 * 1000
      });
      job.delay(0).after(new Date()); // start ASAP
      job.save({cancelRepeats: false}); // if one feed parsing fails, don't stop the entire queue
    }

Is that existence check sufficient? Do I need to be concerned with various feed states at server restart? An advantage to this …
Having thought about this for a bit, if it were up to me, I think I'd use some version of option 3). It has the advantage that failures of individual feeds are isolated to single jobs, and the "über-job" could check up on which jobs from last time it ran succeeded / failed. It also has the advantage that the list of feeds can change dynamically without restarting the server or modifying the repeating job. The only case that doesn't seem to be handled by your code above is what to do with defunct feeds. As coded, their jobs will keep running (and failing) forever...
Also, I think the right answer here probably depends heavily on the ultimate number of feeds the system needs to manage. 50 is very different from 500 or 5000.
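For reference, a minimal sketch of option 3 as described above — one repeating "über-job" that fans out a non-repeating job per feed each time it runs — might look like this (the 'parseAllFeeds' type name, the `feeds` list, and the worker body are assumptions, not from the thread):

```js
// Schedule the über-job once; it repeats forever on a fixed interval.
var uberJob = feedParsingJobs.createJob('parseAllFeeds', {});
uberJob.repeat({ repeats: Job.forever, wait: 300 * 1000 });
uberJob.save();

// Worker for the über-job: create one one-shot 'parseFeed' job per feed.
// Failures stay isolated to individual feed jobs, and feeds removed from
// the `feeds` list simply stop being scheduled.
feedParsingJobs.processJobs('parseAllFeeds', {}, function (job, callback) {
  feeds.forEach(function (feed) {
    var j = feedParsingJobs.createJob('parseFeed', { feed: feed });
    j.save();
  });
  job.done();
  callback();
});
```

The über-job worker could also inspect the results of the previous batch (e.g. flag feeds whose jobs keep failing) before scheduling the next one, which is the "check up" behaviour mentioned above.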
A problem with option 3 is that different feeds may have different update frequencies, so the über-job would end up reinventing the jobCollection wheel. There are about 70 feeds right now, expected to increase to about 100 once fully in production.
Okay, then it seems like 1) is the right answer for your case. My takeaway from this discussion is that the default value of `cancelRepeats` probably needs to change. Also, in your code: …
I just published v0.0.10 with a change in the default value of `cancelRepeats`.
My code creates a bunch of jobs in a `foreach` loop like this. Within each iteration, I see `jobCancel succeeded` in the console. Inspecting db.feedParsing.jobs shows three log messages for each job, and `status` is `cancelled`. The above happens before I get to `var queue = feedParsingJobs.processJobs(...)`. Shouldn't the job status be "waiting"?
If I add `job.restart();` after `job.save()`, the queue appears to run correctly, but I get `(STDERR) jobRestart failed` messages.
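A minimal sketch of the kind of loop being described, pieced together from the snippet shared elsewhere in the thread (the `feeds` array and the worker body are assumptions):

```js
// Create one repeating 'parseFeed' job per feed.
feeds.forEach(function (feed) {
  var job = feedParsingJobs.createJob('parseFeed', { feed: feed });
  job.repeat({ repeats: Job.forever, wait: 300 * 1000 });
  job.save(); // under the pre-v0.0.10 default, without an explicit
              // { cancelRepeats: false } each save cancels the previously saved
              // repeating 'parseFeed' jobs -- hence the "jobCancel succeeded"
              // messages and the 'cancelled' statuses
});

// The worker is only set up afterwards.
var queue = feedParsingJobs.processJobs('parseFeed', {}, function (job, callback) {
  // ... fetch and parse job.data.feed ...
  job.done();
  callback();
});
```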