Live Action Data Input / Output



      LADIO will create a central hub with structured access to all data generated on set. This will enable the post-production team to be part of the on-set dataflow, building on the momentum of editing and color grading, which are already initiated directly on set. It will also support digital visual effects (VFX), where seamless integration between live-action and computer-graphics elements is even more demanding. LADIO will bring about a paradigm shift in VFX by investing more effort on set to drastically improve efficiency, without intruding on the work of the live-action crew. The LADIO hardware and software will streamline the setup of all devices for data collection and monitoring, and track the location of all recorded data in time and space in a common 3D reference system. 3D acquisition of the set with a dedicated in-house pipeline is already common practice for high-end productions. LADIO will improve on this practice and release new open-source software libraries to foster interoperability and collaboration.


  • 8th October 2018 - The Virtual Assist published an interview with the AliceVision team
  • 17th January 2018 - Intermediate Project Review
  • 1st December 2017 - Quine joins the project consortium
  • 28th June 2017 - Advisory Board meeting in Toulouse with the project consortium and Ben HAGEN, Jon Michael PUNTERVOLD and Lars 'Lalo' NIELSEN
  • 28-29th June 2017 - Plenary Meeting in Toulouse
  • 28th June 2017 - Thomas Eskénazi (MIK) presented the implementation of the LADIO data model as an extension of the EBU Class Conceptual Data Model (CCDM) at the EBU MDN workshop in Geneva.
  • 16th February 2017 - Tomas Pajdla (CTU) visited Mikros Image to see datasets and discuss the evolution of the CMPMVS algorithms.
  • 23rd to 27th January 2017 - One-week workshop in Prague with Simula, Mikros Image and CTU to analyse CMPMVS and discuss possible improvements in quality and performance.
  • 15th and 16th December 2016 - LADIO Kickoff meeting in Prague


We love to test our algorithms with popular models, which are often considered reference datasets. Here are some examples gathered on AliceVision's Sketchfab page.

We have also collected datasets from a couple of film sets, comprising data for point-cloud creation and reconstruction, as well as several challenging scenes for camera tracking.

The original (and still growing) dataset can be found online, but you can take a look at the following previews.

Deliverables and Reports of the project can be found here.


Michal Polic, Wolfgang Förstner, Tomas Pajdla. Fast and Accurate Camera Covariance Computation for Large 3D Reconstruction. ECCV 2018.

Hajime Taira, Masatoshi Okutomi, Torsten Sattler, Mircea Cimpoi, Marc Pollefeys, Josef Sivic, Tomas Pajdla, Akihiko Torii. InLoc: Indoor Visual Localization with Dense Matching and View Synthesis. CVPR 2018.

Torsten Sattler, Will Maddern, Carl Toft, Akihiko Torii, Lars Hammarstrand, Erik Stenborg, Daniel Safari, Masatoshi Okutomi, Marc Pollefeys, Josef Sivic, Fredrik Kahl, Tomas Pajdla. Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions. CVPR 2018.

Viktor Larsson, Magnus Oskarsson, Kalle Åström, Alge Wallis, Zuzana Kukelova, Tomas Pajdla. Beyond Gröbner Bases: Basis Selection for Minimal Solvers. CVPR 2018.

Relja Arandjelovic, Petr Gronat, Akihiko Torii, Tomas Pajdla, Josef Sivic. NetVLAD: CNN Architecture for Weakly Supervised Place Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(6):1437-1451, 2018.

Torii, A., Arandjelovic, R., Sivic, J., Okutomi, M., & Pajdla, T. (2018). 24/7 Place Recognition by View Synthesis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(2), 257–271.

Sattler, T., Torii, A., Sivic, J., Pollefeys, M., Taira, H., Okutomi, M., & Pajdla, T. (2017). Are Large-Scale 3D Models Really Necessary for Accurate Visual Localization? In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 6175–6184). IEEE.

Kileel, J., Kukelova, Z., Pajdla, T., & Sturmfels, B. (2017). Distortion Varieties. Foundations of Computational Mathematics.

Albl, C., Kukelova, Z., Fitzgibbon, A., Heller, J., Smid, M., & Pajdla, T. (2017). On the Two-View Geometry of Unsynchronized Cameras. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 5593–5602). IEEE.

Kato, T., Shimizu, I., & Pajdla, T. (2017). Selecting image pairs for SfM by introducing Jaccard Similarity. IPSJ Transactions on Computer Vision and Applications, 9(1), 12.

Rashwan, H. A., Chambon, S., Gurdjos, P., Morin, G., & Charvillat, V. (2016). Towards multi-scale feature detection repeatable over intensity and depth images. In 2016 IEEE International Conference on Image Processing (ICIP) (pp. 36–40). IEEE.

Rashwan, H. A., Chambon, S., Morin, G., Gurdjos, P., & Charvillat, V. (2017). Towards Recognizing of 3D Models Using A Single Image. In I. Pratikakis, F. Dupont, & M. Ovsjanikov (Eds.), Eurographics Workshop on 3D Object Retrieval. The Eurographics Association.

Polic, M., & Pajdla, T. (2017). Uncertainty Computation in Large 3D Reconstruction (pp. 110–121).

Open Source

We are building fully integrated software for 3D reconstruction, photo modeling and camera tracking. We aim to provide a strong software basis with state-of-the-art computer vision algorithms that can be tested, analyzed and reused. Strong links between academia and industry are required to provide cutting-edge algorithms with the robustness and quality needed throughout the visual effects and shooting process.

The LADIO software builds on a set of core libraries, both newly written and adapted from existing software. All of these core libraries are based on open standards and released as open source. This open approach enables both us and other users to achieve a high degree of integration and easy customisation for any studio pipeline.
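To illustrate the kind of geometric building block such reconstruction libraries expose, here is a minimal sketch of two-view linear (DLT) triangulation, the step that lifts matched 2D observations into a 3D point. This is a generic textbook formulation in NumPy, not LADIO's actual API:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D image points (pixel coordinates).
    Returns the 3D point in non-homogeneous coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] @ X) - P[0] @ X = 0, etc.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With noise-free correspondences this recovers the 3D point exactly; a real pipeline follows it with a non-linear refinement over reprojection error.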

Beyond our project objectives, open source is a way of life. We love to exchange ideas, improve ourselves while improving things for others, and discover new collaboration opportunities that expand everybody's horizons.

More details about our Open Source Photogrammetry Pipeline can be found here:


Photogrammetric Computer Vision Framework

AliceVision is a Photogrammetric Computer Vision framework for 3D Reconstruction and Camera Tracking. The library's scope is not limited to the creative sectors; it can be used in a wide spectrum of fields, such as urban planning, industrial design, archeology, medical applications, etc.
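Camera tracking ultimately comes down to estimating a pose that minimizes reprojection error. The following is a minimal pinhole-camera sketch of that error, in generic NumPy rather than AliceVision's actual API:

```python
import numpy as np

def project(K, R, t, X):
    """Project Nx3 world points into pixels with a pinhole camera (K, R, t)."""
    Xc = X @ R.T + t             # world -> camera coordinates
    x = Xc @ K.T                 # apply intrinsics
    return x[:, :2] / x[:, 2:3]  # perspective divide

def reprojection_rmse(K, R, t, X, observed):
    """RMS pixel distance between projected and observed 2D points --
    the quantity a tracker or bundle adjuster minimizes."""
    d = project(K, R, t, X) - observed
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))
```

A correct pose drives this error toward zero on the tracked features; even a few centimetres of pose error typically shows up as several pixels of reprojection error at film-set depths.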


Scale-Invariant Feature Transform (SIFT)

This library provides a GPU implementation of SIFT, running at 25 fps on HD images on recent graphics cards.
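SIFT detects keypoints as extrema of a Difference-of-Gaussians (DoG) scale space, which is exactly the stage a GPU implementation parallelizes. As a small CPU-side illustration of the DoG response (pure NumPy, for exposition only, not the GPU code):

```python
import numpy as np

def gaussian_kernel(sigma):
    """1D normalized Gaussian kernel, truncated at ~3 sigma."""
    r = int(3 * sigma) + 1
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur: convolve rows, then columns."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, tmp, k, mode="same")

def dog(img, s1, s2):
    """Difference-of-Gaussians: band-pass response between scales s1 < s2."""
    return blur(img, s2) - blur(img, s1)
```

A bright blob produces a strong (negative) DoG response at its center, and SIFT keeps the locations where this response is extremal across both space and scale.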


This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 731970.