In the new media era, more people are accessing news content through online and mobile platforms. This can be clearly seen in the 2012 News Consumption Survey shown below. The conventional paradigm of receiving news in fixed time slots through fixed channels is being replaced by a dynamic, topic-driven behavior. This behavior draws news from a variety of sources, including online articles, broadcast videos, and social networks. Furthermore, a user preference is clearly on the rise: users want to access news on any device, anytime, and anywhere.

In this project, we seek to develop and demonstrate a platform for integrated multimodal news to replace the traditional news consumption model. We forecast that next-generation video news consumption will be more personalized, organized by trending topic, device agnostic, and pooled from many different sources. The technology for our project represents a major step in this direction. We provide each viewer with a one-stop solution for the stories that matter most to them, covering both current news and relevant news stories from the past. We believe that such a model can provide a vastly superior user experience and fine-grained analytics to content providers.

While integrated personalized news is increasingly popular for text-based systems, a comprehensive platform mixing videos with information from online sources is critically missing. One of our objectives is to enhance automatic indexing by leveraging the existing metadata associated with the sources and by extracting deep content information from a variety of media modalities, such as speech, image, closed caption, and screen caption. The diagram below illustrates the components of our comprehensive multimodal information extraction pipeline, used to discover Who, Where, and What information from news videos. We have deployed a real-time system to record 100 channels of news video and to download news articles from online portals and social networks. Currently, the system has indexed close to half a million video stories organized around 24,500 topics over a period of 13 months.

On the consumption side, we have explored a few paradigms enabling differing but complementary user experiences, which we have termed lean forward and lean backward. For the former, we incorporate the natural tag-and-search step in the news consumption process. While a user is following a news source (e.g., New York Times or Twitter), she can tag any snippet of interest and use it to search our News Rover system seamlessly. The search runs over the comprehensive indexes of the longitudinal content archive and topic list in the News Rover system, which returns a set of matched topics populated with videos, articles, and tweets linked to each topic.

For the lean backward mode, we provide a novel tool that shows the dynamic interaction of important players and topics through a Serendipity user interface. Breaking from the traditional design of a linear playlist, the tool renders a new viewing experience in which video clips capturing “Who said What on Topic X,” automatically extracted from the massive video sources, are dynamically selected and played in a serendipitous way. To explore the results further, the user can filter them by time, channel, or source.

We are excited to be pushing the frontier of this emerging technology and grateful for the support of the Brown Institute for Media Innovation.
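The filtering step in the Serendipity interface can be pictured as a simple predicate over clip metadata. The sketch below is illustrative only; the field names (`channel`, `source`, `aired`) and the sample records are hypothetical, not the system's actual schema.

```python
from datetime import datetime

# Hypothetical clip records; the real system extracts these from news video.
clips = [
    {"topic": "Topic X", "channel": "CNN", "source": "broadcast",
     "aired": datetime(2013, 5, 1)},
    {"topic": "Topic X", "channel": "BBC", "source": "online",
     "aired": datetime(2013, 6, 15)},
]

def filter_clips(clips, channel=None, source=None, since=None):
    """Keep only the clips that match every filter the user supplied."""
    return [c for c in clips
            if (channel is None or c["channel"] == channel)
            and (source is None or c["source"] == source)
            and (since is None or c["aired"] >= since)]

print(len(filter_clips(clips, channel="CNN")))  # → 1
```

Leaving each filter optional mirrors the user experience: unset filters simply pass everything through, so filters can be combined freely.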
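To make the tag-and-search flow concrete, here is a minimal sketch of matching a tagged snippet against a topic index. The `Topic` class and the keyword-overlap matching are hypothetical simplifications for illustration; the actual system searches far richer longitudinal indexes.

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    # Hypothetical stand-in for an indexed topic and its linked media.
    name: str
    keywords: set[str]
    videos: list[str] = field(default_factory=list)
    articles: list[str] = field(default_factory=list)
    tweets: list[str] = field(default_factory=list)

def tag_and_search(snippet: str, topics: list[Topic]) -> list[Topic]:
    """Match a user-tagged text snippet against topic keywords and
    return the matched topics, each carrying its linked media."""
    terms = set(snippet.lower().split())
    return [t for t in topics if terms & t.keywords]

index = [
    Topic("budget debate", {"budget", "deficit"}, videos=["clip_01"]),
    Topic("election night", {"election", "votes"}, tweets=["@example"]),
]
matches = tag_and_search("Senate passes budget bill", index)
print([t.name for t in matches])  # → ['budget debate']
```

The returned `Topic` objects correspond to the matched topics described above, populated with the videos, articles, and tweets linked to each one.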
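A multimodal pipeline of this kind can be sketched as independent per-modality extractors whose outputs are merged into one Who/Where/What record. The function names and the toy speaker heuristic below are hypothetical illustrations, not the project's actual extraction methods.

```python
# Hypothetical per-modality extractors; the real pipeline analyzes speech,
# image, closed caption, and screen caption.
def from_closed_caption(cc: str) -> dict:
    # Toy heuristic: take the first capitalized word as the "Who".
    who = next((w for w in cc.split() if w.istitle()), None)
    return {"who": who}

def from_screen_caption(sc: str) -> dict:
    # e.g., an on-screen location banner supplies the "Where".
    return {"where": sc}

def extract(cc: str, sc: str) -> dict:
    """Merge evidence from several modalities into one record."""
    record = {}
    record.update(from_closed_caption(cc))
    record.update(from_screen_caption(sc))
    return record

print(extract("Senator Smith discussed the budget", "Washington, D.C."))
```

Keeping each modality behind its own function lets extractors be improved or added independently, with the merge step unchanged.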