Video4IMX-2024

1st International Workshop on Video for Immersive Experiences

at the ACM Interactive Media Experiences Conference (IMX 2024), 12-14 June 2024

The tentative program has been posted. The workshop will take place on the morning of June 12, 2024.

WORKSHOP AIM AND TOPICS

The aim of the Video4IMX workshop, sponsored by the TRANSMIXR and XReco projects, is to address the increasing importance and relevance of classical (linear 2D), interactive (non-linear), 360° and volumetric video assets in the creation of immersive experiences. Richly granular and semantically expressive descriptive metadata about these assets is necessary for tools to facilitate their discovery, adaptation and re-use. This is needed if those assets are to form part of immersive content experiences, both in automated ways (e.g. automated insertion of a journalist’s video recordings into an immersive experience for a breaking news story) and in semi-automated ways (e.g. creatives searching for and re-using, re-mixing or summarising videos as part of a theatrical or cultural immersive experience).


Such descriptive metadata requires extraction (potentially adapted to the particular characteristics of interactive, 360° and volumetric video), modelling (according to shared vocabularies and schemas) and management (in appropriate storage tools with expressive query support) before it can be meaningfully used to discover and organise videos for new, innovative data-driven immersive content experiences. There should also be means to adapt, summarise or remix video content according to its usage purpose in the immersive experience, even to the extent that video could serve as input to generative AI models that generate 3D objects or scenes.
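
As an illustrative sketch of what such shared-vocabulary metadata can look like (our example, not a workshop specification), the short Python snippet below builds a minimal descriptive record for a 360° clip as JSON-LD, using schema.org's VideoObject and Clip types; the field choices and the shot-level decomposition are assumptions made purely for illustration.

    import json

    # Minimal descriptive metadata record for a 360-degree video clip,
    # expressed as JSON-LD with schema.org's VideoObject vocabulary.
    # All field values here are invented for illustration.
    record = {
        "@context": "https://schema.org",
        "@type": "VideoObject",
        "name": "City square, 360-degree capture",
        "encodingFormat": "video/mp4",
        "duration": "PT2M30S",  # ISO 8601 duration: 2 min 30 s
        "keywords": ["breaking news", "360", "city square"],
        "hasPart": [
            {  # shot-level decomposition as schema.org Clip objects
                "@type": "Clip",
                "name": "Crowd wide shot",
                "startOffset": 0,  # seconds from the start of the video
                "endOffset": 42,
            }
        ],
    }

    # Serialised as JSON-LD, the record can be stored in a document or
    # triple store and queried with standard Linked Data tooling.
    print(json.dumps(record, indent=2, ensure_ascii=False))

Because the record uses a shared vocabulary, discovery and authoring tools from different vendors can interpret the same metadata without bespoke mappings, which is precisely what re-use in immersive experiences requires.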


The workshop will solicit the latest research and development in all areas around the creation and management of descriptive metadata for video (particularly as used for integration into immersive experiences), as well as approaches to adapt or convert video according to its purpose and use in that immersive experience. It aims to support the growth of a community of researchers and practitioners interested in creating an ecosystem of tools, specifications and best practices for metadata extraction from video, as well as for video discovery, adaptation, summarisation or conversion, especially in the context of video use in immersive experiences.


Topics for the workshop include, but are not limited to:


  • Extraction and modelling of descriptive metadata about traditional 2D video, as well as 360° and volumetric video (decomposition, semantic representation, categorisation, annotation, emotion/mood, etc.)
  • Use of common schemas and vocabularies (incl. Linked Data) for descriptive metadata models that are also supported by tools for metadata storage and query, browsing engines for video discovery, and creation tools for the insertion of video into immersive environments
  • Tools and algorithms for the adaptation, summarisation or remixing of any type of video asset (classical, interactive, 360°, volumetric)
  • Generative AI or equivalent for converting video input into immersive content (3D objects, 3D scenes)
  • Examples and use cases for the use of video (esp. 360° or volumetric), or of immersive content generated from video, in immersive experiences
  • Evaluations of user experience with video (esp. 360° or volumetric) or immersive content generated from video as part of an immersive experience

Video4IMX-2024 continues from the successful DataTV workshops held at IMX 2019 and 2021, where a range of topics related to the data-driven personalisation of television were presented, as reported in the workshop proceedings at http://datatv2019.iti.gr and http://datatv2021.iti.gr, and which also led to a Special Issue on Data Driven Personalisation of Television Content in the Multimedia Systems journal.

CALL FOR PAPERS

Video4IMX foresees two types of submission, both handled via a dedicated EasyChair page. Full papers will have an oral presentation at the workshop, and short papers may be presented as either a poster or a demo. All accepted papers will be included in the ACM IMX 2024 Workshop Proceedings, which will be published in the ACM ICPS and will be available in the ACM Digital Library.


Concerning the submission format, please follow the instructions and the templates provided by the IMX2024 conference in the "Submission Format" section of the IMX2024 Call for Papers page, with one notable exception: submissions to the Video4IMX workshop should not be anonymised; they should include the name, affiliation and contact information of all authors.


Full papers


These are to be between 7000 and 9000 words in the SIGCHI Proceedings Format, with a 150-word abstract, describing original research which has been completed or is close to completion and which covers at least one of the workshop topics. Accepted papers will be presented in the oral session.


Short papers


These are to be between 3500 and 5500 words in the SIGCHI Proceedings Format, with a 150-word abstract, describing work in progress or demos to be included in the poster and demo session. Submitters will be asked to provide links to the work that will be presented, to outline in the short paper why it is relevant to a topic of the workshop, and to identify whether the submission is for a poster or a demo to be shown at the workshop. We expect new concepts and early work-in-progress to be reported here.

SUBMIT A PAPER
The submission system is now open!
You can initiate a new submission or edit a previously initiated (or completed) one; any such submission can be updated an unlimited number of times up to the submission deadline. Please DO NOT start a new submission for the purpose of updating one that you have already started or completed.

IMPORTANT DATES


  • Paper submission by 3 April 2024 (extended from 17 March 2024)
  • Notification of acceptance by 17 April 2024 (extended from 7 April 2024)
  • Camera-ready submission by 28 April 2024

SCHEDULE

The workshop will take place on the morning of June 12, 2024. The program below is tentative. All times are local times in Stockholm, Sweden, i.e., CEST (Central European Summer Time).

09:00 CEST: Workshop opening (Welcome, logistics)
09:05 CEST: Keynote talk 1. Niall Murray, “Towards privacy aware Quality of Experience evaluations of Immersive Multisensory Experiences”
09:35 CEST: Paper 1. Colm O Fearghail, Nivesh Gadipudi and Gareth W. Young, “Back to the Virtual Future: Presence in Cinematic Virtual Reality”
09:55 CEST: Paper 2. Ioannis Kontostathis, Evlampios Apostolidis and Vasileios Mezaris, “A Human-Annotated Video Dataset for Training and Evaluation of 360-Degree Video Summarization Methods”
10:15 CEST: Paper 3. Helmut Neuschmied and Werner Bailer, “Efficient Few-Shot Incremental Training for Landmark Recognition”
10:35 CEST: Coffee break
10:55 CEST: Paper 4. Nivesh Gadipudi, Colm O Fearghail and John Dingliana, “Auto-summarization of Human Volumetric Videos”
11:15 CEST: Paper 5. Ilias Poulios, Theodora Pistola, Spyridon Symeonidis, Sotiris Diplaris, Konstantinos Ioannidis, Stefanos Vrochidis and Ioannis Kompatsiaris, “Enhanced real-time motion transfer to 3D avatars using RGB-based human 3D pose estimation”
11:35 CEST: Paper 6. Giulio Federico, Fabio Carrara, Giuseppe Amato and Marco Di Benedetto, “Spatio-Temporal 3D Reconstruction from Frame Sequences and Feature Points”
11:55 CEST: Keynote talk 2. Ivan Huerta, “From Information Retrieval to Content Generation: a Unified Framework for the XR Media Ecosystem (XReco)”
12:25 CEST: Wrap-up and closing of the workshop


Keynote talk 1: Towards privacy aware Quality of Experience evaluations of Immersive Multisensory Experiences

In this talk, an overview of the evolution of the field of Quality of Experience (QoE) research will place in context the need for developing privacy-aware QoE evaluations of immersive and multisensory multimedia experiences. With an array of challenges and opportunities ahead as we navigate towards the concept of virtual worlds and Web 4.0, this talk will highlight the need for privacy-aware modelling of users' experiences.

Speaker: Niall Murray, Technological University of the Shannon, Ireland.

Niall Murray is a Senior Lecturer in the Department of Computer & Software Engineering of the Faculty of Engineering & Informatics at the TUS Midlands campus. His main areas of research are Quality of Experience (QoE) evaluation of immersive and interactive multimedia experiences, and human-centric AI. He is a Science Foundation Ireland funded investigator in the ADAPT Centre and the coordinator of the Horizon Europe TRANSMIXR project (https://transmixr.eu/). The TRANSMIXR project, funded by the European Union as part of the Horizon Europe Framework Programme (HORIZON) under grant agreement 101070109, is a collaborative effort of 22 partners from 12 countries with expertise in European research, media and innovation programmes, and in-depth knowledge of AI & XR and their application to the media sector.


Keynote talk 2: From Information Retrieval to Content Generation: a Unified Framework for the XR Media Ecosystem (XReco)

In this talk, we will present how the XReco platform can facilitate the integration of XR content into the media industry by providing a comprehensive solution for content sharing, search, discovery, and creation. We will walk through the entire process, starting with the data-driven ecosystem for the media industry, where data sharing, search, and discovery occur, and moving through the essential XR services such as 3D reconstruction, new view synthesis, content enhancement, free viewpoint video, and holoportation, to the final creation of XR experiences. We will demonstrate how the XReco platform can transform the use of XR media content from occasional involvement in media production to regular integration within the media industry. XReco is a Horizon Europe Innovation Project co-financed by the EC under Grant Agreement ID: 101070250.

Speaker: Ivan Huerta, i2CAT, Spain.

Ivan Huerta leads the Multi-modal AI Perception research line at i2CAT in Spain. He holds a PhD in Computer Vision and AI from UAB and has extensive post-doctoral experience at prestigious institutions such as IRI-CSIC, Università di Venezia, and Disney Research. He brings over 15 years of diverse experience in academia, startups, and project leadership. His contributions span European projects and networks, national initiatives, and industry collaborations, in which he has served as PI, co-PI, WP leader, and task leader. Dr. Huerta has authored 2 patents and over 30 publications, specialising in Computer Vision, AI, and Machine Learning, with particular expertise in multi-sensor fusion, motion capture, deep learning, and extended reality.



CHAIRS

Lyndon Nixon, MODUL Technology GmbH, Austria
Vasileios Mezaris, CERTH-ITI, Greece
Stefanos Vrochidis, CERTH-ITI, Greece
