Stephen F. Austin State University

American Sign Language Media Development Laboratory

Welcome to the American Sign Language Media Development Laboratory

Mission of the Laboratory
The ASL Media Development Laboratory at Stephen F. Austin State University is dedicated to developing interactive media that enhance American Sign Language instruction and support the education of students with disabilities.

Principal Activities
Research - The ASL Media Development Laboratory generates the media and software needed to research and improve teacher preparation programs, American Sign Language instruction, and the materials available to Deaf and hard of hearing students. Media formats include video, images, and 3D animations.

Service - The ASL Media Development Laboratory has produced materials currently used to enhance American Sign Language courses. Additional materials will be released for use in Deaf Education classrooms with deaf and hard of hearing children.

Goals and Objectives

Short term goals:
· Continue media and software development for all ASL courses in the form of study enhancement software.
· Develop a usable, accessible set of instructor tools that use the software for assessment and data collection.
· Develop promotional materials for recruiting Deaf and Hard of Hearing majors.
· Duplicate and distribute study resources for ASL and Deaf Education classes.

Long term goals:
· Develop and test software for use with Deaf and Hard of Hearing children in public school settings.
· Market the software and media developed in the laboratory.
· Develop and test media and software for children with other disabilities.


ASL Acquisition Software

Flash Drives - ASL StudyTool (Prior to 2014)

The ASL StudyTool supported classroom instruction by giving students abundant vocabulary and phrase practice. ASL students and Deaf Education majors may practice both receptive and expressive skills, although expressive practice generally works better with a highly skilled language model to judge the accuracy of signs and phrases. The ASL StudyTool was developed in Macromedia Director and uses Adobe Shockwave technology.

The ASL StudyTool is distributed on a flash drive to all of our ASL students so that they may review on their own outside of class.

Figure 2: ASL StudyTool Flash Drive

ASLexpress (Since 2014)

As Adobe Shockwave became less popular, it became increasingly difficult for students to use the StudyTool. The software developer decided to switch to a fully online application that takes advantage of HTML5's built-in video functions, using PHP to access a MySQL database of videos and answers. Moving online offered several advantages over the plugin-based StudyTool.

ASLexpress Features
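The core interaction behind software of this kind - serve a video clip, collect the student's response, and check it against a stored answer - can be sketched as follows. This is a hypothetical illustration in Python, with sqlite3 standing in for the MySQL database; the table layout, file names, and answers are invented and do not reflect the actual ASLexpress code.

```python
import sqlite3

# Hypothetical sketch only: sqlite3 stands in for the MySQL database,
# and the table layout, file names, and answers are invented.

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE practice_items (
    id INTEGER PRIMARY KEY,
    video_url TEXT,   -- clip served through an HTML5 <video> element
    answer TEXT       -- accepted English gloss for the signed clip
)""")
conn.execute(
    "INSERT INTO practice_items VALUES (1, 'clips/mother.mp4', 'MOTHER')"
)

def check_response(item_id, response):
    """Compare a student's typed gloss with the stored answer."""
    row = conn.execute(
        "SELECT answer FROM practice_items WHERE id = ?", (item_id,)
    ).fetchone()
    return row is not None and response.strip().upper() == row[0]

print(check_response(1, "mother"))  # prints True
```

In the real application the comparison logic runs server-side in PHP, but the lookup pattern is the same: the browser only needs HTML5 video playback, with no plugin required.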

Figure 5: ASLexpress Teacher Resources

In Progress - Teacher Tools

Motion Capture Research and Media Development

Science Signs 3D Project

The Science Signs StudyTool allows comparison of recorded performances from live actors with 3D animations generated through motion capture of Deaf performers. The 3D animations can be viewed from any angle, allowing students to clarify handshapes, positions, and motions that may be occluded by the camera angle in a two-dimensional QuickTime video.

Figure 6: Science Signs Study Tool

Improving the Efficiency, Affordability, and Quality of the Motion Capture Process

Motion capture has been promoted as a potential replacement for two-dimensional media, currently the most widespread format used in American Sign Language instruction and communication. To replace two-dimensional formats, however, motion capture must overcome the advantages of video. This project was launched to explore 3D media as a replacement for 2D formats in ASL instructional materials.

QuickTime videos have the following advantages:

1. The equipment is much more affordable than traditional motion capture equipment.

2. The process of recording, editing, and publishing requires a fraction of the time and effort needed to record, process, and publish motion-captured animations of equivalent quality.

3. Facial expressions and finger movements are much more difficult to record with motion capture than with an off-the-shelf video camera.

First Efforts
The first approach used in the ASL Media Development Laboratory involved mechanical motion capture, which was prone to slippage and required constant calibration. Converting the data to usable animations required a great deal of effort by skilled 3D animators. The process also required performers to wear equipment that often impeded the natural production of signs.

One major drawback from a researcher's perspective is that the 3D animations and the QuickTime videos had to be recorded separately, so a direct comparison of identical signs differing only in 3D vs. 2D presentation was impossible.

Figure 7: Mechanical Motion Capture

Second Effort

The second approach to motion capture took advantage of recent software from IPIsoft, which uses optical motion capture. Four Sony PlayStation Eye cameras recorded video from four angles. The video quality was somewhat grainy, but it served only as input for the IPIsoft applications, which converted the four-angle footage into motion data. Converting a 30-second clip into motion data required approximately 4 hours on our available computer. One major advantage of this approach is that the Deaf performers do not have to wear any bulky equipment. This makes it possible to directly compare identical signs in 2D QuickTime videos and 3D animations - they do not need to be recorded separately.

Figure 8: IPI Recorder

Current Efforts - Snapshot ASL recorder and Snapshot ASL recognizer
The current approach investigates the use of low-cost depth sensors, such as the Microsoft Kinect and Intel RealSense, that cost approximately $100 to $300.

In 2015 Intel held the Intel RealSense app developer challenge. Dr. Scott Whitney entered the competition with the goal of developing a fingerspelling recognition app. The resulting project was not intended to be state-of-the-art, but rather an exploration of the Intel RealSense depth sensor's potential for recognizing American Sign Language.

In a short development cycle (45 days), the project reached the point where it could recognize approximately 40 handshapes with 50 to 90 percent accuracy, depending on the handshape. It used a simple method: the joint angles for each finger joint of a known handshape were stored in a SQLite database, and as the depth sensor detected a handshape from a live signer, the app queried the database for the closest match.
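The store-and-match idea can be sketched as follows. This is a hypothetical illustration in Python: the handshape labels, the number of joints, and all angle values are invented, and the nearest-match metric shown (sum of squared joint-angle differences) is one plausible choice, not necessarily the one the app used.

```python
import sqlite3

# Hypothetical sketch only: labels, joint counts, and angle values
# are invented for illustration.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE handshapes (label TEXT, angles TEXT)")

# One row per known handshape; angles stored as comma-separated degrees,
# simplified here to a single flexion angle per finger.
known = {
    "A": [10, 95, 100, 98, 102],
    "B": [5, 10, 8, 9, 11],
    "C": [45, 50, 48, 52, 47],
}
for label, angles in known.items():
    conn.execute("INSERT INTO handshapes VALUES (?, ?)",
                 (label, ",".join(map(str, angles))))

def closest_handshape(live_angles):
    """Return the stored label nearest the live sensor reading,
    measured by the sum of squared joint-angle differences."""
    best_label, best_dist = None, float("inf")
    for label, stored in conn.execute("SELECT label, angles FROM handshapes"):
        stored_angles = [float(a) for a in stored.split(",")]
        dist = sum((s - l) ** 2 for s, l in zip(stored_angles, live_angles))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

print(closest_handshape([8, 90, 97, 100, 99]))  # prints A
```

A nearest-neighbor lookup like this is simple and fast for a few dozen handshapes, which fits the exploratory scope of the 45-day project.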

The RealSense depth sensor and developer tools have the potential to streamline the media development process and solve the issue of comparing identical performances in 2D and 3D.

To see the Snapshot ASL apps in action, click the following link:

ASL Lab Annual Reports

Annual Report 2014