Stephen F. Austin State University

American Sign Language Media Development Laboratory

Welcome to the American Sign Language Media Development Laboratory

Mission of the Laboratory
The ASL Media Development Laboratory at Stephen F. Austin State University is dedicated to developing interactive media for enhancing American Sign Language instruction and educating students with disabilities.

Principal Activities

Goals and Objectives

Short-Term Goals
· Continue media and software development for all ASL courses in the form of study enhancement software.
· Develop a usable and accessible set of instructor tools for assessment and data collection with the software.
· Develop promotional materials for recruiting deaf and hard of hearing majors.
· Duplicate and distribute study resources to ASL and deaf education classes.

Long-Term Goals
· Develop and test software for use with deaf and hard of hearing children in public school settings.
· Market software and media developed in the laboratory.
· Develop and test media and software for children with other disabilities.

Projects

ASL Acquisition Software

Flash Drives - ASL Study Tool (Prior to 2014)

The ASL Study Tool supported classroom instruction by providing students with abundant vocabulary and phrase practice. ASL students and deaf education majors could practice both receptive and expressive skills, although expressive practice generally works best with a highly skilled language model judging the accuracy of signs and phrases. The ASL Study Tool was developed in Macromedia Director and relies on Adobe Shockwave technology.

Figure 1. The ASL Study Tool is distributed on a flash drive to all our ASL students so they may review it on their own outside class.

Figure 2. ASL Study Tool Flash Drive

ASLexpress (Since 2014)

As Adobe Shockwave declined in popularity, it became increasingly difficult for students to run the Study Tool. The software developer therefore decided to switch to a fully online application that takes advantage of HTML5's built-in video functions, using PHP to access a MySQL database of videos and answers. The decision offered several advantages.
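The lookup pattern described above can be sketched as follows. The actual ASLexpress application uses PHP and MySQL; this stand-in uses Python with SQLite, and the table and column names are hypothetical, chosen only to illustrate the idea of serving a video and its expected answer from a database.

```python
import sqlite3

# Stand-in for the ASLexpress lookup pattern. The real application uses PHP
# and MySQL; this sketch uses SQLite with hypothetical table/column names.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE items (id INTEGER PRIMARY KEY, video_url TEXT, answer TEXT)"
)
conn.execute("INSERT INTO items VALUES (1, 'videos/sign_001.mp4', 'MOTHER')")

def lookup_item(item_id):
    """Fetch the video URL and expected answer for one practice item."""
    return conn.execute(
        "SELECT video_url, answer FROM items WHERE id = ?", (item_id,)
    ).fetchone()

url, answer = lookup_item(1)
# A client page would embed `url` in an HTML5 <video> element and compare
# the student's response against `answer` for scoring.
```

Because the videos are served through standard HTML5 video elements, the student needs nothing beyond a modern browser, with no plugin such as Shockwave installed.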

ASLexpress Features


Figure 3. ASLexpress Teacher Resources

In Progress - Teacher Tools

Motion Capture Research and Media Development

Science Signs 3D Project

The Science Signs Study Tool allows comparison of recorded performances from live actors with 3D animations generated through motion capture of deaf performers. The 3D animations can be viewed from any angle, allowing students to clarify handshapes, positions and motions that may be occluded due to camera angles in a two-dimensional QuickTime video.


Figure 4. Science Signs Study Tool

Improving the Efficiency, Affordability and Quality of the Motion Capture Process

Motion capture has been promoted as a potential replacement for two-dimensional video, currently the most widespread medium in American Sign Language instruction and communication. To replace two-dimensional formats, however, motion capture must overcome the advantages of video. This project was launched to explore three-dimensional media as a replacement for two-dimensional formats in ASL instructional materials.

QuickTime videos have the following advantages:

  1. The equipment is far more affordable than traditional motion capture equipment.
  2. Recording, editing, and publishing a video requires a fraction of the time and effort needed to record, process, and publish motion-captured animations of equal quality.
  3. Facial expressions and finger movements are much easier to record with an off-the-shelf video camera than with motion capture.


First Effort
The laboratory's first attempt used mechanical motion capture, which was prone to slippage and required constant recalibration. Converting the data into usable animations demanded a great deal of effort from skilled 3D animators. The process also required performers to wear equipment that often impeded the natural production of signs.

One major drawback, from a researcher's perspective, was that the 3D animations and the QuickTime videos had to be recorded separately, making a direct comparison of identical signs differing only in 3D versus 2D presentation impossible.

Figure 5. Mechanical Motion Capture

Second Effort

The second approach took advantage of recent optical motion capture software from IPIsoft. The setup used four Sony PlayStation Eye cameras to record video from four angles. The footage was somewhat grainy, but it served only as input for the IPIsoft applications, which converted the four camera angles into motion data. Converting a 30-second clip took approximately four hours on the laboratory's computer. One major advantage of this approach is that deaf performers do not have to wear any bulky equipment, so identical signs can be compared directly in 2D QuickTime videos and 3D animations; they do not need to be recorded separately.

Figure 6. IPI Recorder

Current Efforts - Snapshot ASL Recorder and Snapshot ASL Recognizer
The current approach investigates low-cost depth sensors, such as the Microsoft Kinect and Intel RealSense, that cost approximately $100 to $300.

In 2015, Intel held the Intel RealSense app developer challenge. Dr. Scott Whitney entered the competition with the goal of developing a fingerspelling recognition app. The resulting project was not intended to be state of the art but to explore the potential of the Intel RealSense depth sensor for recognizing American Sign Language.

In a short development cycle (45 days), the project reached the point where it could recognize approximately 40 handshapes with 50% to 90% accuracy, depending on the handshape. It used a simple method: the joint angles for each finger joint were stored in a SQLite database, and as the depth sensor detected handshapes from a live signer, the app queried the database for the closest match to the handshape the signer formed.
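The matching idea can be sketched as a nearest-neighbor search over stored joint angles. This is a minimal Python/SQLite illustration, not the Snapshot ASL implementation; the schema, function names, and the use of Euclidean distance as the similarity measure are assumptions for the sake of the example.

```python
import math
import sqlite3

# Minimal sketch of handshape matching: reference handshapes are stored as
# joint angles in SQLite, and a live reading is matched to the nearest one.
# Schema, names, and the Euclidean distance metric are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE handshapes (name TEXT, angles TEXT)")

def save_handshape(name, angles):
    """Store a reference handshape as comma-separated joint angles (degrees)."""
    conn.execute(
        "INSERT INTO handshapes VALUES (?, ?)",
        (name, ",".join(str(a) for a in angles)),
    )

def closest_handshape(live_angles):
    """Return the stored handshape nearest to the live joint-angle reading."""
    best_name, best_dist = None, math.inf
    for name, angle_text in conn.execute("SELECT name, angles FROM handshapes"):
        stored = [float(a) for a in angle_text.split(",")]
        dist = math.dist(stored, live_angles)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

# Two toy reference handshapes, four joint angles each.
save_handshape("A", [85.0, 90.0, 88.0, 80.0])  # fingers curled (closed fist)
save_handshape("B", [5.0, 2.0, 4.0, 3.0])      # fingers extended (flat hand)
closest_handshape([80.0, 85.0, 90.0, 78.0])    # nearest stored shape is "A"
```

A per-handshape accuracy spread like the 50% to 90% reported above is what one would expect from such a nearest-match scheme, since visually similar handshapes produce nearby joint-angle vectors.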

The RealSense depth sensor and developer tools have the potential to streamline the media development process and solve the issues of comparing identical performances in 2D and 3D.

To see the Snapshot ASL apps in action, follow this link:

https://www.youtube.com/watch?v=ZLLb_WjFOQM&feature=youtu.be