Cycling a buffer with DMA #789
I don't think this is currently supported. It also looks like the documentation is not correct: https://docs.rs/rp2040-hal/latest/rp2040_hal/dma/index.html contains the sentence "This API tries to provide three types of buffers: Single buffers, double-buffered transfers where the user can specify the next buffer while the previous is being transferred, and automatic continuous ring buffers consisting of two aligned buffers being read or written alternatingly." I think this is wrong: the "continuous ring buffer" mode is not currently implemented, or at least it's not obvious how to use the API in that mode. Due to the way the hardware is implemented, even if you only want to send the contents of a single buffer continuously, you need two DMA channels; it is not possible to chain a channel to itself. And if the continuous mode needs two DMA channels anyway, it might be reasonable to extend the existing double-buffering API to support it.
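For illustration, here is a rough sketch of that two-channel loop using raw register writes rather than the HAL API. The register offsets and bit fields come from the RP2040 datasheet; the channel numbers, buffer size, DREQ value and the PIO0 TX FIFO target are placeholders, so treat it as a sketch rather than tested code:

```rust
// Channel 0 streams BUFFER to a PIO TX FIFO, then chains to channel 1, which
// rewrites channel 0's read address via the READ_ADDR trigger alias and
// thereby restarts it, so the buffer loops forever.
use core::ptr::{addr_of, write_volatile};

const DMA_BASE: usize = 0x5000_0000;
const CH_SIZE: usize = 0x40;            // register stride per channel
const READ_ADDR: usize = 0x00;          // alias-0 registers
const WRITE_ADDR: usize = 0x04;
const TRANS_COUNT: usize = 0x08;
const CTRL_TRIG: usize = 0x0c;          // writing this triggers the channel
const AL1_CTRL: usize = 0x10;           // CTRL without triggering
const AL3_READ_ADDR_TRIG: usize = 0x3c; // READ_ADDR, triggers the channel

const PIO0_TXF0: u32 = 0x5020_0010;     // PIO0 state machine 0 TX FIFO

static BUFFER: [u32; 256] = [0u32; 256];
static mut BUFFER_ADDR: u32 = 0;        // word the control channel re-reads

fn reg(ch: usize, offset: usize) -> *mut u32 {
    (DMA_BASE + ch * CH_SIZE + offset) as *mut u32
}

// Assemble a CTRL value (field positions per CHx_CTRL_TRIG in the datasheet).
fn ctrl(data_size: u32, incr_read: bool, incr_write: bool, chain_to: u32, treq: u32) -> u32 {
    1 // EN
        | (data_size << 2)
        | ((incr_read as u32) << 4)
        | ((incr_write as u32) << 5)
        | (chain_to << 11)
        | (treq << 15)
}

unsafe fn start_looping_tx() {
    BUFFER_ADDR = addr_of!(BUFFER) as u32;

    // Data channel 0: 256 words from BUFFER to the FIFO, paced by the PIO
    // DREQ (0 assumed to be DREQ_PIO0_TX0), chained to channel 1. READ_ADDR
    // is left for the control channel to supply; CTRL is written via the
    // non-trigger alias so nothing starts yet.
    write_volatile(reg(0, WRITE_ADDR), PIO0_TXF0);
    write_volatile(reg(0, TRANS_COUNT), BUFFER.len() as u32);
    write_volatile(reg(0, AL1_CTRL), ctrl(2, true, false, 1, 0));

    // Control channel 1: one un-paced word from BUFFER_ADDR into channel 0's
    // READ_ADDR trigger alias. CHAIN_TO = its own index means "no chain".
    // Writing CTRL_TRIG starts it, which in turn kicks off channel 0.
    write_volatile(reg(1, READ_ADDR), addr_of!(BUFFER_ADDR) as u32);
    write_volatile(reg(1, WRITE_ADDR), reg(0, AL3_READ_ADDR_TRIG) as u32);
    write_volatile(reg(1, TRANS_COUNT), 1);
    write_volatile(reg(1, CTRL_TRIG), ctrl(2, false, false, 1, 0x3f));

    // Each time channel 0 finishes the buffer it re-triggers channel 1, which
    // resets channel 0's read address and restarts it. The loop runs until
    // the channels are disabled (clear EN) or aborted via CHAN_ABORT (0x444).
}
```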
As a side note, I was surprised that the API doesn't let you set up a chain before starting the transfer. With the current API, if you know there will be a chain, you're essentially racing against the transfer already in flight, which is awkward.
Also, it doesn't have a way to group channels so you can start them simultaneously.
The current API makes sense for the designated use case: in a common double-buffer setup, you want to have full access to the second buffer from your Rust code while the first buffer is used by DMA. To avoid data races (and therefore undefined behavior), the API can't set up the chaining to the second buffer before the caller cedes its reference to that buffer. Of course, this focus on a particular use case is also limiting. If your buffers are small and the DMA speed is fast, buffer underruns become possible, and the API provides no way to compensate. But it is difficult to design an API that is both safe and flexible. And for full flexibility, it's still possible to configure the DMA registers using the PAC. (That doesn't mean there's no room for improvement! And tickets like this one are useful and important for learning about actual use cases that could be better supported. I'm just explaining the status quo.)
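For reference, the pattern the API is designed around looks roughly like this. This is a sketch from memory of the `double_buffer` module; the construction of the two channels, the write target and the two `&'static mut` buffers is elided, and the exact signatures may differ from the released API:

```rust
// given: `ch0`, `ch1`    - two claimed DMA channels,
//        `tx`            - a WriteTarget such as a PIO TX FIFO,
//        `buf_a`, `buf_b` - two &'static mut sample buffers.
use rp2040_hal::dma::double_buffer;

// Starting the transfer cedes the first buffer to the DMA engine.
let tx_transfer = double_buffer::Config::new((ch0, ch1), buf_a, tx).start();

// Only once the second buffer is also handed over can the chain to it be
// armed - before this call the CPU still owns `buf_b` exclusively.
let mut tx_transfer = tx_transfer.read_next(buf_b);

loop {
    // wait() hands the finished buffer back; no chain points at it any
    // more, so refilling it here cannot race with the DMA.
    let (done, next) = tx_transfer.wait();
    // ... refill `done` with the next batch of samples here ...
    tx_transfer = next.read_next(done);
}
```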
Can you provide an example where this would be important?
Right. I'm thinking about things like scatter/gather cases where you've already set things up and you just want to DMA them. But you probably want to do that with a data/control pair of channels rather than dedicate one channel per buffer.
Yeah I'm thinking I'll fall back to that to explore the design space and then see if a reasonable safe API makes sense.
My current project is trying to drive 3 SPI DACs in lockstep so that the samples are emitted together. The DACs each have their own PIO SM, which are fully synchronous, but each SM is triggered when its first output word becomes available. There's one DMA channel per PIO SM / DAC. Right now, starting the DMA transfers manually, there's a noticeable skew across the output streams. I'd like to start all the DMA transfers simultaneously to minimize the skew. I'm hoping that the SM FIFO buffering will be enough to cover any delayed samples due to things like DMA bus contention (as the DMA is much faster than the PIO clocks).
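One way to do that outside the current API would be a raw write to the DMA block's MULTI_CHAN_TRIGGER register (offset 0x430 from the DMA base in the RP2040 datasheet), which triggers every channel whose bit is set in the same cycle. A minimal sketch, assuming the channels are already fully configured (read/write address, transfer count, and an enabled CTRL written via a non-trigger alias) but not yet triggered; nothing here goes through the HAL:

```rust
use core::ptr::write_volatile;

// MULTI_CHAN_TRIGGER: writing a 1 to a bit acts like writing that channel's
// trigger register, so all selected channels start together.
const DMA_MULTI_CHAN_TRIGGER: *mut u32 = (0x5000_0000 + 0x430) as *mut u32;

/// Start every pre-configured channel whose bit is set in `mask` at once.
unsafe fn trigger_channels(mask: u32) {
    write_volatile(DMA_MULTI_CHAN_TRIGGER, mask);
}

// e.g. start channels 0, 1 and 2 (one per DAC) in the same cycle:
// unsafe { trigger_channels(0b111); }
```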
I'd like to set up a single buffer which is sent continuously to a PIO FIFO, and then be able to cancel it. It's not clear to me whether this is possible with the current DMA API.
`double_buffer` seems the most plausible, but it doesn't seem possible because you can only call `read_next` on `Transfer`, not in `Config`. Is `double_buffer` intended to support this kind of use case? Otherwise, would it make sense to have a dedicated `cycle` module (or something) to support this use?