When I ran my data through the pipeline, some reads made it through even though they were quite short (as low as 145 bp, which was my truncLen on the forward read).
The default minLen is 20, which seems far too low. I'm thinking something like +/- 2 bp (maybe up to 5 bp) around the expected merged length might be better, given that we don't expect much length variation (and for most of the reads we don't actually see much).
I just realized that minLen "is enforced after trimming and truncation" but NOT on the merged sequences... so I think we probably need to set a minimum length on the sequence lengths in the seqtable instead.
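One way to do that is to subset the sequence table columns by sequence length after merging. A minimal sketch, assuming `seqtab` is a dada2 sequence table and an expected merged length of 253 bp with the +/- 2 bp window suggested above (both numbers are placeholders for whatever amplicon we settle on):

```r
# Keep only ESVs whose merged length falls within the expected window
# (253 +/- 2 bp here is an assumed example, not a recommendation)
expected_len <- 253
keep <- nchar(colnames(seqtab)) %in% (expected_len - 2):(expected_len + 2)
seqtab_filtered <- seqtab[, keep, drop = FALSE]
```

This drops the unexpectedly short (or long) merged reads without touching the sequences themselves.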
This might be a helpful way to trim the final lengths:

# truncate every ESV in the sequence table to its first 240 bp
colnames(seqtab_showerhead_run2_nodust_240) <- substr(colnames(seqtab_showerhead_run2_nodust_240), 1, 240)
This is something we should discuss and decide how we want to approach. The old pipeline didn't bother trimming, but its output did keep track of the lengths. Below is some code to compute statistics on the length of each ESV and plot a histogram.
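A sketch of what that could look like, again assuming `seqtab` is a dada2 sequence table (the breaks and labels are just illustrative defaults):

```r
library(dada2)

# Length of each ESV (the column names of a dada2 sequence table are the sequences)
seq_lengths <- nchar(getSequences(seqtab))

# Summary statistics and a length-frequency table
summary(seq_lengths)
table(seq_lengths)

# Histogram of ESV lengths
hist(seq_lengths,
     breaks = 30,
     main = "ESV length distribution",
     xlab = "Merged length (bp)")
```

Looking at this distribution first would also tell us how tight a +/- window we can get away with.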