DiViNa and accessibility #49

Open
llemeurfr opened this issue Feb 21, 2020 · 1 comment

@llemeurfr
Contributor

I'll scope this to the DiViNa format but it may be generalized.

For making DiViNa publications accessible, I'm thinking about the following use cases:

a/ the comic is a turbomedia. Each image can be accompanied by text or audio. The text can then be read aloud by the reading app using a synthesized voice; the audio can be played directly. The audio starts when the user reaches the image and stops when the user moves to another image.
b/ the comic is a webtoon. The audio (synthesized or not) starts and stops at certain visual points in the continuous visual narrative.
c/ the comic is a traditional comic page with guided navigation (each panel, and even each speech bubble, can be isolated by a rectangular shape, placed in sequence). Text or audio is associated with each rectangular box defined by the guided navigation. The audio starts when the user reaches the box and stops when the user moves to another box.

Do you foresee other useful use cases?

Notes:
We've defined guided navigation as a collection of {href, title} objects, outside of the reading order.
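As a rough sketch of that structure (property names follow the {href, title} description above; the `#xywh` Media Fragment syntax and file names are illustrative assumptions, not a finalized spec):

```json
{
  "guided": [
    { "href": "page1.jpg#xywh=0,0,300,200", "title": "Panel 1" },
    { "href": "page1.jpg#xywh=300,0,300,200", "title": "Panel 2" }
  ]
}
```

Each rectangular box from use case c/ would map naturally to one such entry.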

In parallel, in webpub we'll reuse the Synchronized Narration format defined by the W3C, which defines a recursive narration JSON object made of {text, audio} pairs, where text isolates a segment of HTML and audio isolates a segment of an audio resource of the publication.
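For reference, a minimal narration object along the lines of the W3C Synchronized Narration draft pairs HTML fragments with audio clips addressed by `#t=` time ranges; file names and fragment identifiers here are illustrative:

```json
{
  "narration": [
    { "text": "chapter1.html#par1", "audio": "chapter1.mp3#t=0,20" },
    { "text": "chapter1.html#par2", "audio": "chapter1.mp3#t=20,45" }
  ]
}
```

Note that `text` requires an HTML segment to point at, which image-based DiViNa resources don't provide.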

None of these structures can currently fulfill these needs.

@HadrienGardeur
Collaborator

I think that having text for everything is more useful at this point than expecting everyone to produce audio content as well.

We've had an issue open for quite some time now about description (see #21), which could be very relevant in this context:

  • each image in the `readingOrder` could contain a `description`
  • then using `guided`, each Link Object could also contain an additional `description`
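A hypothetical sketch of how this could look in a manifest, assuming a `description` property is added to the Link Object per #21 (descriptions and file names are invented for illustration):

```json
{
  "readingOrder": [
    {
      "href": "page1.jpg",
      "type": "image/jpeg",
      "description": "A knight rides across a stormy plain."
    }
  ],
  "guided": [
    {
      "href": "page1.jpg#xywh=0,0,300,200",
      "title": "Panel 1",
      "description": "Close-up of the knight's face."
    }
  ]
}
```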

This would be 100% compatible with several other things that are possible with our model:

  • some items in the `readingOrder` would have both a `title` (for a minimal TOC) and a `description` (for accessibility)
  • in addition to these text nodes, `alternate` could point to an audio rendition of each image resource or fragment (in the case of `guided`), which IMO works better than the Synchronized Narration document for such use cases
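Combining the two bullets above, a single Link Object might then carry both the text nodes and a pointer to an audio rendition via `alternate` (a sketch under the same assumptions, not a normative example):

```json
{
  "href": "page1.jpg",
  "type": "image/jpeg",
  "title": "Page 1",
  "description": "A knight rides across a stormy plain.",
  "alternate": [
    { "href": "page1.mp3", "type": "audio/mpeg" }
  ]
}
```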
