LADIO will create a central hub with structured access to all data generated on set. This will make the post-production team part of the on-set dataflow, building on the momentum of editing and color grading, which are already initiated directly on set. It will also support digital visual effects (VFX), where the seamless integration of live-action and computer-graphics elements is even more demanding. LADIO will bring about a paradigm shift in VFX by investing more effort on set to drastically improve efficiency, without intruding on the work of the live-action crew. The LADIO hardware and software will streamline the setup of all devices for data collection and monitoring, and track the location of all recorded data in time and space in a common 3D reference system. 3D acquisition of the set with a dedicated in-house pipeline is already common practice for high-end productions. LADIO will improve and release new open source software libraries to foster interoperability and collaboration.
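To give a concrete, if simplified, picture of what registering data in a common 3D reference system involves, here is a minimal Python sketch. It applies a rigid transform that maps a camera pose from a per-shot reconstruction frame into the shared set frame and stores it next to the clip's timecode; the identifiers and the record layout are hypothetical and are not LADIO's actual data model.

# Minimal sketch (hypothetical names, not LADIO's actual data model):
# register a camera pose from a local per-shot reconstruction into the
# common 3D reference frame of the set, together with its timecode.
import numpy as np

def to_homogeneous(R, t):
    # Build a 4x4 transform from a 3x3 rotation and a translation vector.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Transform mapping the per-shot reconstruction frame to the set frame
# (in practice estimated from surveyed markers or a reference 3D scan).
set_from_shot = to_homogeneous(np.eye(3), np.array([12.0, 0.0, -3.5]))

# Camera pose expressed in the per-shot reconstruction frame.
shot_from_camera = to_homogeneous(np.eye(3), np.array([0.0, 1.7, 0.0]))

# Compose to obtain the camera pose in the common set frame.
set_from_camera = set_from_shot @ shot_from_camera

record = {
    "clip": "A042_C007",                         # hypothetical clip identifier
    "timecode": "14:02:31:12",                   # position in time
    "pose_set_frame": set_from_camera.tolist(),  # position in space
}
print(record["clip"], record["timecode"], record["pose_set_frame"][0][3])

With every pose expressed in the same set frame and tagged with a timecode, any recorded asset can later be located both in space and in time.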
We love to test our algorithms on popular models, which often serve as reference datasets. Here are some examples from the AliceVision page on Sketchfab.
Donaldby (AliceVision on Sketchfab)
We have also collected datasets from a couple of film sets, comprising data for point cloud creation and reconstruction, as well as several challenging scenes for camera tracking.
The original (and still growing) dataset can be found on quine.no, but you can take a look at the following previews.
Deliverables and Reports of the project can be found here.
Polic, M., Förstner, W., & Pajdla, T. (2018). Fast and Accurate Camera Covariance Computation for Large 3D Reconstruction. In European Conference on Computer Vision (ECCV). https://arxiv.org/pdf/1808.02414.pdf
Taira, H., Okutomi, M., Sattler, T., Cimpoi, M., Pollefeys, M., Sivic, J., Pajdla, T., & Torii, A. (2018). InLoc: Indoor Visual Localization with Dense Matching and View Synthesis. In 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). http://openaccess.thecvf.com/content_cvpr_2018/CameraReady/2096.pdf
Sattler, T., Maddern, W., Toft, C., Torii, A., Hammarstrand, L., Stenborg, E., Safari, D., Okutomi, M., Pollefeys, M., Sivic, J., Kahl, F., & Pajdla, T. (2018). Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions. In 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). http://openaccess.thecvf.com/content_cvpr_2018/papers/Sattler_Benchmarking_6DOF_Outdoor_CVPR_2018_paper.pdf
Larsson, V., Oskarsson, M., Åström, K., Wallis, A., Kukelova, Z., & Pajdla, T. (2018). Beyond Gröbner Bases: Basis Selection for Minimal Solvers. In 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). http://openaccess.thecvf.com/content_cvpr_2018/papers_backup/Larsson_Beyond_GroBner_Bases_CVPR_2018_paper.pdf
Arandjelovic, R., Gronat, P., Torii, A., Pajdla, T., & Sivic, J. (2018). NetVLAD: CNN Architecture for Weakly Supervised Place Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(6), 1437–1451. https://ieeexplore.ieee.org/document/7937898
Torii, A., Arandjelovic, R., Sivic, J., Okutomi, M., & Pajdla, T. (2018). 24/7 Place Recognition by View Synthesis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(2), 257–271. http://doi.org/10.1109/TPAMI.2017.2667665
Sattler, T., Torii, A., Sivic, J., Pollefeys, M., Taira, H., Okutomi, M., & Pajdla, T. (2017). Are Large-Scale 3D Models Really Necessary for Accurate Visual Localization? In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 6175–6184). IEEE. http://doi.org/10.1109/CVPR.2017.654
Kileel, J., Kukelova, Z., Pajdla, T., & Sturmfels, B. (2017). Distortion Varieties. Foundations of Computational Mathematics. http://doi.org/10.1007/s10208-017-9361-0
Albl, C., Kukelova, Z., Fitzgibbon, A., Heller, J., Smid, M., & Pajdla, T. (2017). On the Two-View Geometry of Unsynchronized Cameras. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 5593–5602). IEEE. http://doi.org/10.1109/CVPR.2017.593
Kato, T., Shimizu, I., & Pajdla, T. (2017). Selecting image pairs for SfM by introducing Jaccard Similarity. IPSJ Transactions on Computer Vision and Applications, 9(1), 12. http://doi.org/10.1186/s41074-017-0021-8
Rashwan, H. A., Chambon, S., Gurdjos, P., Morin, G., & Charvillat, V. (2016). Towards multi-scale feature detection repeatable over intensity and depth images. In 2016 IEEE International Conference on Image Processing (ICIP) (pp. 36–40). IEEE. http://doi.org/10.1109/ICIP.2016.7532314
Rashwan, H. A., Chambon, S., Morin, G., Gurdjos, P., & Charvillat, V. (2017). Towards Recognizing of 3D Models Using A Single Image. In I. Pratikakis, F. Dupont, & M. Ovsjanikov (Eds.), Eurographics Workshop on 3D Object Retrieval. The Eurographics Association. http://doi.org/10.2312/3dor.20171062
Polic, M., & Pajdla, T. (2017). Uncertainty Computation in Large 3D Reconstruction (pp. 110–121). http://doi.org/10.1007/978-3-319-59126-1_10
We are building fully integrated software for 3D reconstruction, photomodeling and camera tracking. We aim to provide a strong software basis with state-of-the-art computer vision algorithms that can be tested, analyzed and reused. Close links between academia and industry are required to deliver cutting-edge algorithms with the robustness and quality needed throughout the shooting and visual effects process.
The LADIO software builds on a set of core libraries, some newly written and some adapted from existing software. All of these core libraries are based on open standards and released as open source. This open approach enables both us and other users to achieve a high degree of integration and easy customisation for any studio pipeline.
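As an illustration of that integration model (and only an illustration: the tool names and flags below are placeholders, not the actual LADIO or AliceVision command lines), a studio pipeline script can chain independent steps that exchange their results through files on disk.

# Illustrative only: chain pipeline steps as external tools exchanging files.
# The tool names and flags are placeholders, not real LADIO/AliceVision CLIs.
import subprocess
from pathlib import Path

work = Path("work")
work.mkdir(exist_ok=True)

steps = [
    # (name, command); each step reads the previous step's output from disk.
    ("features", ["extract_features", "--images", "shots/", "--out", str(work / "features")]),
    ("matching", ["match_features", "--features", str(work / "features"), "--out", str(work / "matches")]),
    ("sfm", ["reconstruct", "--matches", str(work / "matches"), "--out", str(work / "scene.json")]),
]

dry_run = True  # set to False once real tools are available on PATH
for name, cmd in steps:
    print(f"[{name}]", " ".join(cmd))
    if not dry_run:
        subprocess.run(cmd, check=True)  # abort the chain if a step fails

Because every intermediate result lives on disk in an open format, a studio can replace or customise any single step with its own tool without rebuilding the rest of the pipeline.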
Beyond our project objectives, open source is a way of life. We love to exchange ideas, improve ourselves while improving things for others, and discover new collaboration opportunities that expand everybody's horizons.
More details about our Open Source Photogrammetry Pipeline can be found here: https://alicevision.org
AliceVision is a Photogrammetric Computer Vision framework for 3D Reconstruction and Camera Tracking. The library's scope is not limited to the creative sectors: it can be used in a wide range of fields, such as urban planning, industrial design, archaeology, and medical applications.
This library provides a GPU implementation of SIFT, running at 25 fps on HD images on recent graphics cards.
The coordinator of this project may be reached at griff@simula.no
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 731970.