Whenever doing something new, it's important to take time for reflection. Creating accessible content means adopting new work habits and workflows, which can be especially challenging when you are used to established workflows that don't include accessibility as part of the creation process. That is why I love teaching accessible design to students. It is possible to make accessible design workflows, like captioning, part of the creation process and, along the way, invest kids in the why behind producing content that can be accessed by a wide variety of users.
Personally, I have found producing daily content for this challenge to be, in fact, challenging, though I do appreciate the self-imposed constraint of daily publishing. I have a ton of draft blog posts I never published because I wanted to return to them and keep improving them. Looking at them now, some are no longer even relevant because the technology has changed or been updated. This workflow has not only pushed me to publish but also made me even more cognizant of how accessible technology continues to evolve.
For example, back in October, when you uploaded a transcript to Google Drive to caption a video, the file had to have the .srt extension. It didn't actually have to be in the .srt format, but it needed that extension for the captions to sync. That is no longer the case: a simple .txt file achieves the same result. It gives me hope that maybe someday it will be possible to push or attach a Google Doc directly to a video and have the captions autosync, eliminating the download step. Or better yet, with the update to Google Slides that allows embedding Google Drive videos, wouldn't it be cool to produce a transcript using voice typing within Google Slides (look for a post on how to do this in Google Docs soon!), then edit it for accuracy and add it as captions? Anything that streamlines the process and makes it more kid-friendly moves accessibility closer to reality.
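If you're curious what the .srt format actually adds beyond a plain transcript, here is a minimal Python sketch, purely for illustration and not part of any Google tool, that wraps a plain-text transcript in SRT-style numbered cues with timestamps. The file names and the fixed four-second cue length are assumptions; in a real workflow the timing comes from the caption-syncing step, which is exactly why a plain .txt upload is so much friendlier.

```python
# Minimal sketch: wrap a plain-text transcript in SRT cue formatting.
# Assumes one caption per line of transcript.txt and a fixed duration
# per cue (real timing would come from the caption-sync step).

SECONDS_PER_CUE = 4  # assumed fixed cue length, for illustration only


def to_timestamp(seconds):
    """Format a number of seconds as an SRT timestamp, e.g. 00:00:04,000."""
    hours, remainder = divmod(int(seconds), 3600)
    minutes, secs = divmod(remainder, 60)
    return f"{hours:02}:{minutes:02}:{secs:02},000"


def transcript_to_srt(txt_path, srt_path):
    """Read a plain transcript and write numbered, timestamped SRT cues."""
    with open(txt_path, encoding="utf-8") as f:
        lines = [line.strip() for line in f if line.strip()]

    cues = []
    for i, text in enumerate(lines):
        start = to_timestamp(i * SECONDS_PER_CUE)
        end = to_timestamp((i + 1) * SECONDS_PER_CUE)
        cues.append(f"{i + 1}\n{start} --> {end}\n{text}\n")

    with open(srt_path, "w", encoding="utf-8") as f:
        f.write("\n".join(cues))


# Hypothetical file names, just to show how the sketch would be called.
transcript_to_srt("transcript.txt", "transcript.srt")
```

The point of the sketch is the contrast: the SRT version needs cue numbers and timestamps baked into the file, while the plain .txt version lets the syncing do that work for you, which is a much more kid-friendly ask.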
Also, this week Apple released Clips, which was fun to play with. I don't anticipate using it much myself, but it does provide another captioning solution with its Live Titles, which you can see during the creation process. That immediate feedback is huge! The technology still relies on voice recognition, which is highly variable, and the editing process is terribly cumbersome. But it does signal the continued evolution of technology in this space, which gives me hope for a more accessible future.
Hope you enjoyed Week 1! Coming up during Week 2 of the #CreateA11y challenge, I look forward to sharing more captioning workflows for both YouTube and Google Drive.