Irish researchers at the forefront of the field tell us that the future of video is high-resolution, searchable and immersive.
The Adapt Centre for Digital Content Technology, in association with Huawei, recently hosted ‘Watch! Video Everywhere’, an event in Science Gallery Dublin to explore the future of video.
The pedigree of the experts gathered at this event was unmistakable. There was Oscar-winner Prof Anil Kokaram, founder of the Sigmedia research group at Trinity College Dublin and the man whose breakthrough editing software enabled the explosive visual effects seen in the likes of Casino Royale, X-Men and The Lord of the Rings.
There was Prof Carol O’Sullivan, who has carried out research at Trinity and Seoul National University, and with Disney Research in California, to figure out how to make virtual crowds, avatars and scenes more believable to us humans.
And there was Prof Aljosa Smolic, Science Foundation Ireland’s (SFI) recently appointed professor of creative technologies, who was recruited from Disney Research Switzerland to lead a cutting-edge research programme intent on positioning Ireland as a global centre of creative industries and production.
The purpose of gathering this unique mix of experts was to showcase, dissect and discuss the world-leading research informing the next generation of video and media technologies.
Smolic, for example, is leading the V-SENSE project with a substantial €4.5m in SFI funding to grow a team investigating visual computing: how algorithms can analyse and manipulate images and video.
“The big overarching goal is to create something that would make visual communication indistinguishable from reality,” said Smolic.
Thinking in sci-fi terms, what Smolic is looking for is something like the holodeck, the holographic technology from the Star Trek universe.
“As a long-term ultimate goal, it would be something like the ultimate entertainment platform,” he said.
While these researchers see the value of video as a medium for entertainment, the potential application for this futuristic technology goes beyond TV and film.
Kokaram’s key research interest now is how to enable the transmission and distribution of detailed, rich video content that can immerse users in virtual reality, an ambition that is more than just a Hollywood dream.
“A lot of what I am thinking about right now is involved with efficient manipulation and transmission and distribution of that sort of media. And, actually, a lot of things to do with the application of those new technologies in life sciences,” he said.
‘The big overarching goal is to create something that would make visual communication indistinguishable from reality’
– PROF ALJOSA SMOLIC
Higher resolution is key to enabling much of video’s aspirational future, with Dr François Pitié, a research fellow at the Sigmedia Lab, seeing 2017 as the coming of age of 4K. After that, 8K is just around the corner, but 16K is the ultimate goal for rich VR video.
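Those jumps are not incremental in data terms: each doubling of linear resolution quadruples the pixel count, which is precisely the transmission and distribution problem Kokaram describes. A back-of-the-envelope calculation makes the scale clear; the frame rate and pixel format below are illustrative assumptions, not figures from the event.

```python
# Back-of-the-envelope: raw, uncompressed bit rates at each resolution.
# Assumes 60fps and 8-bit 4:2:0 chroma subsampling (12 bits per pixel);
# real streams are heavily compressed, so these are upper bounds.

RESOLUTIONS = {"4K": (3840, 2160), "8K": (7680, 4320), "16K": (15360, 8640)}
FPS = 60
BITS_PER_PIXEL = 12  # 8-bit luma plus subsampled chroma

for name, (width, height) in RESOLUTIONS.items():
    gbps = width * height * FPS * BITS_PER_PIXEL / 1e9
    print(f"{name}: {gbps:.1f} Gbit/s uncompressed")
```

That works out at roughly 6 Gbit/s for 4K, 24 Gbit/s for 8K and 96 Gbit/s for 16K before compression. Even allowing for codecs that shrink this by two orders of magnitude, 16K VR streams remain a formidable distribution problem, which is why efficient transmission sits at the heart of this research.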
Behind all this VR and immersion in hyper-real, high-resolution video content is the data. As with all digital media, data is the backbone of video, and this information becomes useful once we are able to identify and extract it.
Some of the Adapt research showcased at the Science Gallery event demonstrated how this can be put to use. A stand-out among them was PEEP, the Personalised Event Engagement Portal, which makes hours upon hours of video, audio and presentation slides searchable.
Imagine every minute of a conference recorded and documented in such a way that any mention of your area of interest can be found in a moment. Eamonn Kenny and his team have also applied PEEP to footage from the Dáil, which could be transformative in supporting people’s engagement with the political system.
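The researchers didn’t go into PEEP’s internals, but the core idea behind this kind of search, indexing time-stamped transcript segments so a keyword query jumps straight to the right moment in the footage, can be sketched in a few lines. The data and function names here are hypothetical illustrations, not taken from the project itself.

```python
from collections import defaultdict

# Illustrative sketch of keyword search over time-stamped transcript
# segments. The segment data and names are hypothetical, not from PEEP.

def build_index(segments):
    """Map each spoken word to the timestamps (in seconds) where it occurs."""
    index = defaultdict(list)
    for start_sec, text in segments:
        for word in text.lower().split():
            index[word].append(start_sec)
    return index

segments = [
    (0, "welcome to the annual conference"),
    (95, "our research covers virtual reality video"),
    (240, "questions on video compression and transmission"),
]

index = build_index(segments)
print(index["video"])  # [95, 240]: jump straight to those moments
```

In practice, a system like this would sit on top of automatic speech recognition and slide text extraction, but the principle is the same: once every word carries a timestamp, hours of footage collapse into an instantly searchable index.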
So that’s how researchers are enabling you to find what you are looking for without having to watch endless hours of footage. But there is more to be extracted from video data, if these teams can crack it.
“2017 is all about what are going to be the next new sensors providing higher resolution of data, even more data,” said Dr Rozenn Dahyot, associate professor in statistics at Trinity.
“What I think might be happening is a sort of merging in between different sorts of information to make some intelligence out of it. So, how to merge social media data with Google data, with Netflix data, and try to make some form of intelligence to provide better service for the users down the line.”
‘We’re going to be looking at how we can transform video across languages so that we can actually have a global reach in real time’
– PROF VINCENT WADE
Dahyot’s thoughts were echoed by her peers, and the application of this data-driven insight raises the possibility of personalised video content.
“Video tends to be thought of as a very linear type of media. We’re going to be looking at how we can recompose it at run-time. We’re going to be looking at how we can transform it across languages so that we can actually have a global reach in real time. And we’re going to be looking at new techniques of, really, how to pull out specific pieces of it and then to really make it more personal for an individual,” said Adapt CEO Prof Vincent Wade.
“My prediction for video in 2017 is that ability to look at video, not as a completely edited thing, but to start looking at the little bits that were created and figure out what might be of value … To be able to tune in on the bit that is of individual value to users,” said Prof Owen Conlan, theme leader of a personalisation research project at Adapt.
Speaking to these researchers, you get the sense that this future of rich, immersive and searchable video, with insights readily extracted, is not so far off. And, with the Adapt Centre and Huawei’s newly announced video R&D lab right here in Dublin, Ireland could be leading the charge.