I'll scope this to the DiViNa format but it may be generalized.
For making DiViNa publications accessible, I'm thinking about the following use cases:
a/ the comic is a turbomedia. Each image can be accompanied by text or audio. The text can then be read by the reading app with a synthesized voice; the audio can be played directly. The audio starts when the user reaches the image and stops when the user moves to another image.
b/ the comic is a webtoon. The audio (synthesized or not) starts and stops at certain visual points in the continuous visual narrative.
c/ the comic is a traditional comics page with guided navigation (each panel and even each speech bubble can be isolated by a rectangular shape, placed in sequence). Text or audio is associated with each rectangular box defined by the guided navigation. The audio starts when the user reaches the box and stops when the user moves to another box.
Do you foresee other useful use cases?
Notes:
We've defined guided navigation as a collection of {href, title}, out of the reading order.
In parallel, webpub will reuse the Synchronized Narration format defined by the W3C, which associates text and audio through a recursive narration JSON object made of {text, audio}, where text isolates a segment of HTML and audio isolates a segment of an audio resource of the publication.
Neither of these structures is currently able to fulfill these needs (both are sketched below).
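For reference, a minimal sketch of the two existing structures; file names, fragments and timings are invented for illustration. Guided navigation is just a list of Link Objects (here using Media Fragments `#xywh` selectors to isolate panels):

```json
{
  "guided": [
    { "href": "page1.jpg#xywh=0,0,400,300", "title": "Panel 1" },
    { "href": "page1.jpg#xywh=400,0,400,300", "title": "Panel 2" }
  ]
}
```

And, roughly, the Synchronized Narration object pairs HTML fragments with audio time ranges, recursively:

```json
{
  "narration": [
    { "text": "chapter1.html#par1", "audio": "chapter1.mp3#t=0,17" },
    {
      "narration": [
        { "text": "chapter1.html#note1", "audio": "chapter1.mp3#t=17,25" }
      ]
    }
  ]
}
```

As defined, guided navigation carries no text or audio payload beyond `title`, and Synchronized Narration only pairs HTML with audio, hence the gap described above.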
I think that having text for everything is more useful at this point than expecting everyone to produce audio content as well.
We've had an issue open for quite some time now about description (see #21), which could be very relevant in this context:
each image in the readingOrder could contain a description
then using guided, each Link Object could also contain an additional description
This would be 100% compatible with several other things that are possible with our model:
some items in the readingOrder would have both title (for a minimal TOC) and description (for accessibility)
in addition to these text nodes, alternate could point to an audio rendition of each image resource or fragment (in the case of guided), which IMO works better than the Synchronized Narration document for such use cases
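A rough sketch of how this could look in a DiViNa manifest, assuming a `description` property is added to the Link Object (the subject of #21); hrefs, fragments and wording are invented for illustration:

```json
{
  "readingOrder": [
    {
      "href": "page1.jpg",
      "type": "image/jpeg",
      "title": "Page 1",
      "description": "The heroine lands on a rooftop under a full moon.",
      "alternate": [
        { "href": "page1.mp3", "type": "audio/mpeg" }
      ]
    }
  ],
  "guided": [
    {
      "href": "page1.jpg#xywh=0,0,400,300",
      "title": "Panel 1",
      "description": "Close-up on the heroine's face.",
      "alternate": [
        { "href": "panel1.mp3", "type": "audio/mpeg" }
      ]
    }
  ]
}
```

A reading app could then speak the `description` with TTS or play the `alternate` audio when the user reaches the corresponding image or panel.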