FARMINGDALE, NY – November 25, 2020: EEG Video, the leader in closed captioning technology, has announced the launch of its Lexi 2.0 Automatic Captioning Service. The second generation of its popular automated cloud-hosted closed captioning service, Lexi 2.0 provides users with even higher accuracy, new workflow enhancements, and an expansive new control set.
With Lexi 2.0, EEG’s industry-leading automatic captioning service is faster, easier, and more cost-effective than ever before, while offering increased control and interoperability within IP-based workflows. Broadcasters, streaming content creators, government and municipal agencies, and corporate and educational users will all benefit from the new advances and features in Lexi 2.0.
With Lexi 2.0, EEG is deploying the most significant accuracy upgrade for Lexi since it was released in 2017. This is a generational improvement for automatic captioning that sees Lexi capable of achieving over 95% accuracy, depending on the content. In local news program samples, for instance, EEG has reported jumps from 87% to 95.5% accuracy.
Lexi 2.0 now stands among the most accurate automatic captioning solutions available, largely thanks to multiple advances:
Reduction in word errors of 30-50% measured for many users
Improved response to fast speech such as weather reports and casual conversation
Improved response to background noise
The increased automatic captioning accuracy performance of Lexi 2.0 is available now to all existing Lexi users at no extra charge.
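The accuracy figures above are typically derived from word error rate (WER) measurements. As an illustrative sketch only, and not EEG’s internal measurement tooling, here is how a WER-based accuracy number like those cited can be computed with a word-level edit distance:

```python
# Illustrative sketch: how caption accuracy figures such as "87% -> 95.5%"
# are commonly computed from word error rate (WER). Generic example only.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

ref = "the storm will reach the coast by noon"
hyp = "the storm will reach coast by new"
wer = word_error_rate(ref, hyp)
print(f"WER: {wer:.3f}, accuracy: {1 - wer:.1%}")  # -> WER: 0.250, accuracy: 75.0%
```

A 30-50% reduction in word errors, as reported for many users, corresponds to roughly the jump from 87% to 95.5% accuracy cited for local news samples.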
Core Models represent a powerful expansion of EEG’s robust Topic Models system for accurately displaying custom vocabulary and phrases.
With the addition of Core Models, Lexi users and their audiences experience even higher captioning accuracy for a wider variety of content. Core Models further expand the usability and quality of affordable AI captioning for live broadcasting, live streaming and events, videoconferencing, and even VOD programming.
The Core Models system builds on the usefulness of EEG’s Topic Models feature. This breakthrough technology enables Lexi to recognize topics, immerse itself in distinctive vocabulary, and observe context through the absorption of relevant web data unique to each implementation. Topic Models enable Lexi to perform in real-time with a high degree of accuracy by better addressing the poor recognition of topic-specific vocabulary often displayed by previous speech-to-text captioning systems. This can include less common proper nouns, such as names of people, places, or products. Topic Models also address entire phrases, jargon, vocabularies or speaking styles that would be typical in one context, such as a baseball telecast, but unusual in another context, such as a medical imaging conference.
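One piece of the Topic Models idea described above is mining candidate names and phrases from relevant text, such as pages at user-supplied URLs, so the recognizer can be biased toward them. The following toy sketch (a simple regex pass, far simpler than a production system) shows the concept; the sample text and function name are illustrative only:

```python
# Toy sketch of vocabulary mining for topic-specific captioning models:
# collect runs of capitalized words from relevant text as candidate
# proper nouns and phrases. A real system would do far more than this.

import re

def candidate_phrases(text: str) -> set:
    """Collect runs of capitalized words as likely names and phrases."""
    return set(re.findall(r"(?:[A-Z][a-z]+ )+[A-Z][a-z]+|[A-Z][a-z]+", text))

sample = "The Yankees face the Tampa Bay Rays at Yankee Stadium tonight."
print(sorted(candidate_phrases(sample)))
# -> ['Tampa Bay Rays', 'The Yankees', 'Yankee Stadium']
```

Biasing recognition toward phrases mined this way is what lets a system favor, say, baseball rosters during a telecast without degrading general vocabulary.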
Additionally, people and phrases in the news can change rapidly. However, many systems update basic vocabulary infrequently, on a quarterly basis or less, and still do not necessarily weight recent developments heavily enough compared to older sources of training data.
Phrases like “coronavirus” and “COVID-19,” for example, have been used continuously in TV news coverage for several months, yet several off-the-shelf commercial speech-to-text engines still do not recognize these phrases, producing poor phonetic substitutions, like “culvert” for the previously unknown word “COVID.” EEG recognizes that eliminating such discrepancies is critical for breaking-news broadcasters assessing the viability and credibility of AI captioning, as well as for improving engagement and satisfaction for audiences.
With more than three years of experience in providing AI captioning for a myriad of events and media, EEG’s AI Team has distilled some of the most common vocabulary training cases into a set of Core Models, a subset of Topic Models maintained by EEG experts and available to all Lexi customers. Customers can also build their own individualized Topic Models on top of an EEG Core Model, creating multiple layers of accuracy-enhancing customization. As EEG continues to evolve and add to the Core Model, the derived individual customer models are also automatically updated to merge individual and Core changes.
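The layering described above, where a customer’s own Topic Model sits on top of an EEG-maintained Core Model and inherits Core updates automatically, can be sketched as a simple merge. All names and structures below are hypothetical illustrations, not EEG’s actual data model:

```python
# Hypothetical sketch of Core/Topic Model layering: a customer model
# built on a Core Model inherits Core updates whenever it is rebuilt.
# Phrase sets and field names here are illustrative only.

CORE_MODEL_NEWS = {
    "phrases": {"COVID-19", "coronavirus", "Capitol Hill"},
}

def build_customer_model(core, custom_phrases):
    """Merge a customer's own phrases on top of a Core Model."""
    return {"phrases": core["phrases"] | set(custom_phrases)}

station_model = build_customer_model(
    CORE_MODEL_NEWS, {"WXYZ Action News", "Mayor Jane Doe"}
)
# When EEG adds phrases to the Core Model, rebuilding the derived model
# picks up the merged update with no customer action required.
print(len(station_model["phrases"]))  # -> 5
```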
Current EEG Core Models exist in English for:
Headline News (United States-focused): more than 15,000 entities and phrases
Sports: Baseball (MLB-focused): more than 10,000 entities and phrases
Christian Broadcasting: more than 15,000 entities and phrases
Legislative and Municipal Sessions: more than 1,000 entities and phrases
Weather (United States-focused): more than 1,000 entities and phrases
The list of Core Models is expected to grow in the coming months, including models in additional languages.
Available for both Lexi and the Lexi Local on-premises/off-cloud solution, Scheduling is a powerful new workflow enhancement that makes it easy to schedule, monitor, and manage captioning jobs.
Scheduling is an advanced calendaring feature that allows Lexi users to program a future start and end time for automatic captioning of their content. Recurring events can also be set up for regularly scheduled programming. With the addition of Scheduling, show or event technical producers can set and forget Lexi’s automatic captioning service, ensuring that captions will appear as planned so they can concentrate on other real-time details of their content.
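The recurring-event behavior described above amounts to computing the next start/end window from a first occurrence and a repeat period. As a minimal sketch, not EEG’s scheduling API, the idea looks like this:

```python
# Illustrative sketch (not EEG's API) of a recurring caption job:
# given a weekly recurrence, compute the next start/end window.

from datetime import datetime, timedelta

def next_occurrence(first_start, duration, now, period=timedelta(days=7)):
    """Return the (start, end) of the next scheduled captioning window."""
    start = first_start
    while start + duration <= now:   # skip windows that have already ended
        start += period
    return start, start + duration

# A newscast that first aired Monday at 6:00 PM, 30 minutes long, weekly:
first = datetime(2020, 11, 2, 18, 0)
start, end = next_occurrence(first, timedelta(minutes=30),
                             now=datetime(2020, 11, 25, 12, 0))
print(start, end)  # -> 2020-11-30 18:00:00 2020-11-30 18:30:00
```

Starting and stopping captioning exactly at these window boundaries is what yields the cost savings described below.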
The new scheduling feature also helps Lexi users save money, since automatic caption delivery can now be set precisely for the content’s beginning and end times. Scheduling is particularly beneficial to users of EEG’s Falcon Live Streaming RTMP Caption Encoder, who increasingly depend on Lexi for live captioning of streaming content and events for education, e-learning, government, corporate, entertainment, independent productions, and more.
Scheduling further streamlines Lexi workflows with the new Instances capability. Lexi users can employ Instances to create a template for easy duplication of settings across different sessions, ensuring that language, Topic Models, Access Code, caption appearance, and more remain uniform for each desired session. In this way, Instances save Lexi users time during setup while also reducing the chance of error.
Lexi is capable of delivering over 90% accuracy in English, Spanish, and French for many common media types, making it ideal for improving compliance and accessibility on currently uncaptioned material. As a cloud service, Lexi continually learns from new global news and entertainment data, and can also monitor user-supplied URLs to absorb and leverage new data that matches current on-air media.
An Instance is a template of settings that can be applied when creating captioning jobs with Lexi and Lexi Local. This capability is part of the new Scheduling feature. Lexi users can employ an Instance to apply the same language, Topic Models, Access Codes, caption appearance, and other settings to every session created from it. For users whose broadcasts, streams, or live events are not likely to change settings, Instances save time during setup while also reducing the chance of error.
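A settings template of this kind can be sketched as an immutable record that each session copies from, with one-off tweaks leaving the template untouched. The field names below are hypothetical illustrations, not EEG’s actual configuration schema:

```python
# Hypothetical sketch of the Instances idea: a saved template of job
# settings applied uniformly to each new captioning session.
# Field names are illustrative, not EEG's actual schema.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Instance:
    language: str
    topic_model: str
    access_code: str
    caption_position: str

weekly_council = Instance(
    language="en-US",
    topic_model="Legislative and Municipal Sessions",
    access_code="1234",
    caption_position="bottom",
)

# Each session starts from the template; a one-off tweak for tonight's
# session does not modify the saved Instance itself.
tonight = replace(weekly_council, caption_position="top")
print(weekly_council.caption_position, tonight.caption_position)  # -> bottom top
```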
“The second generation of Lexi was developed by listening to our users, who told us they wanted even faster and more efficient workflows, opportunities for reduced costs, and easier interoperability with IP-based video infrastructure and SMPTE 2110 standards,” says Bill McLaughlin, VP of Product Development for EEG Video. “Just as important, Lexi 2.0 delivers increased accuracy by fully leveraging the latest AI captioning advances. With its many new features to improve both the user and audience experience, Lexi 2.0 reinforces EEG’s commitment to accelerating the benefits of automatic captioning.”
About EEG Video
EEG Video is the industry leader in closed captioning technology. For more than 35 years, the company’s cutting-edge solutions have been advancing captioning, subtitling, and metadata workflows for customers in live broadcasting, post production, and streaming media.
EEG was recognized by the National Academy of Television Arts & Sciences with a 2014 Technology and Engineering Emmy® Award for “Development of Low Latency Video Streaming Live Captioning Solutions.”
All product names are registered trademarks of their respective owners.