POPART aims to democratize previz for all kinds of film and TV
production.
Previz is not only required for highly complex VFX shots: it gives you
more artistic control, reduces production risks and optimizes your budget.
Previz contributes to film-making from the planning stage, to filming with
actors on-set, and even during the post-production steps.
POPART proposes an original approach that combines a quick and easy setup
with robust and precise camera tracking. Finally, it allows everything to be
optimized at full quality from the RAW data collected during the shoot.
Public project deliverables to the EU commission can be found here.
POPART delivers a fully integrated commercial solution, including software and hardware, for on-set previz, and also delivers a free solution for 3D reconstruction, photomodeling and camera tracking.
The POPART camera tracking solution provides a camera rig with two witness cameras shutter-synchronized with the main camera. We provide scripts to conform the data and perform fully automatic camera tracking of your footage. Additionally, we provide scripts to generate scenes for your preferred camera tracking software for manual corrections.
The POPART previz solution provides real-time previsualization on-set, with full integration to improve your post-production.
LABO and Mikros Image offer previz integrated with post-production services.
We are building fully integrated software for 3D reconstruction, photomodeling and camera tracking. We aim to provide a strong software basis with state-of-the-art computer vision algorithms that can be tested, analyzed and reused. Links between academia and industry are a requirement for providing cutting-edge algorithms with the robustness and quality required throughout the visual effects and shooting process.
The POPART software builds on a set of core libraries, both newly written and adapted from existing software. All of these core libraries are based upon open standards and released as open source. This open approach enables both us and other users to achieve a high degree of integration and easy customisation for any studio pipeline.
Beyond our project objectives, open source is a way of life. We love to exchange ideas, improve ourselves while making improvements for others, and discover new collaboration opportunities to expand everybody’s horizons.
The openMVG library solves classical problems in Multiple View Geometry and provides a full Structure-from-Motion pipeline with a strong focus on accuracy. The library’s scope is not limited to the creative sectors: it can be used in a wide spectrum of fields such as urban planning, industrial design, archeology and medical applications.
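One of the core Multiple View Geometry operations inside any Structure-from-Motion pipeline is triangulation: recovering a 3D point from its projections in two calibrated views. The sketch below is a minimal numpy illustration of the classical linear DLT method, not openMVG's actual implementation; the camera parameters and point are made up for the synthetic check.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Triangulate one 3D point from two views via the linear DLT.

    P1, P2: 3x4 camera projection matrices; x1, x2: 2D image points.
    Each observation contributes two linear constraints derived from
    the cross product x × (P X) = 0.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector associated
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point with a 3x4 projection matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic check (made-up intrinsics): two cameras one metre apart
# observing a known point; exact correspondences recover it exactly.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # shifted on x
X_true = np.array([0.2, -0.1, 5.0])
X_est = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_est)
```

In a full SfM pipeline this linear estimate would be the starting point for a non-linear refinement (bundle adjustment) over all cameras and points.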
MayaMVG allows graphic artists to do photomodeling on top of a 3D reconstruction (point cloud and cameras) with pixel precision.
The CameraLocalizer plugin estimates the camera pose of an image with respect to an existing 3D reconstruction generated by openMVG. The plugin accepts multiple clips as input in order to localize a rig of cameras (multiple cameras rigidly fixed together). The LensCalibration plugin estimates the distortion parameters for a given camera and lens combination.
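The text does not specify which distortion parameterization LensCalibration estimates, so as an illustration only, here is the common Brown radial model applied to normalized image coordinates, together with the fixed-point iteration typically used to undistort footage before tracking. The coefficients below are made up.

```python
import numpy as np

def distort_radial(x, y, k1, k2, k3=0.0):
    """Apply Brown radial distortion to normalized image coordinates
    (x, y), i.e. coordinates after removing the camera intrinsics."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x * factor, y * factor

def undistort_radial(xd, yd, k1, k2, k3=0.0, iters=10):
    """Invert the distortion by fixed-point iteration: repeatedly
    re-evaluate the distortion factor at the current estimate."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
        x, y = xd / factor, yd / factor
    return x, y

# Round-trip check with a mild barrel distortion (made-up coefficients):
# distorting then undistorting should recover the original point.
xd, yd = distort_radial(0.3, -0.2, k1=-0.1, k2=0.02)
x, y = undistort_radial(xd, yd, k1=-0.1, k2=0.02)
print(x, y)
```

The fixed-point inversion converges quickly for mild distortion; production tools would also estimate tangential terms and the distortion centre.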
This library detects and identifies CCTag markers. This marker system delivers sub-pixel precision while remaining largely robust to challenging shooting conditions (e.g. motion blur, occlusion and poor lighting).
This library provides a GPU implementation of SIFT, running at 25 fps on HD images on recent graphics cards.
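For readers unfamiliar with what SIFT computes, its detection stage finds extrema of a difference-of-Gaussians (DoG) scale-space pyramid. The sketch below is a slow, CPU-only numpy toy of that detection stage (no orientation assignment or descriptors, and nothing like the GPU implementation above); the sigma values and threshold are illustrative choices.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via 1D convolutions along rows, then columns."""
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, radius, mode='edge')
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, tmp)

def dog_extrema(img, sigmas=(1.0, 1.6, 2.56, 4.1), thresh=0.01):
    """Find difference-of-Gaussians extrema, the detection stage of SIFT.

    Each candidate must be the max or min of its 3x3x3 neighbourhood
    across space and scale, and exceed a contrast threshold.
    """
    blurred = [gaussian_blur(img, s) for s in sigmas]
    dogs = np.stack([b2 - b1 for b1, b2 in zip(blurred, blurred[1:])])
    keypoints = []
    for s in range(1, dogs.shape[0] - 1):
        for i in range(1, dogs.shape[1] - 1):
            for j in range(1, dogs.shape[2] - 1):
                patch = dogs[s - 1:s + 2, i - 1:i + 2, j - 1:j + 2]
                v = dogs[s, i, j]
                if abs(v) > thresh and (v == patch.max() or v == patch.min()):
                    keypoints.append((i, j, sigmas[s]))
    return keypoints

# Synthetic check: a single Gaussian blob should fire an extremum at its
# centre, at a scale related to the blob size.
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((yy - 32.0) ** 2 + (xx - 32.0) ** 2) / (2 * 2.0 ** 2))
kps = dog_extrema(img)
```

A real implementation works on an image pyramid with several octaves and refines each extremum to sub-pixel accuracy, which is where the GPU parallelism pays off.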
We released datasets for evaluating 3D reconstruction and camera tracking.
This dataset consists of 596 rendered images along with 3D ground truth.
It allows running 3D reconstruction solutions on these 596 images and
measuring the reconstruction quality against the 3D ground truth. We also
provide the rendering of a virtual camera, with different rendering options,
to evaluate camera tracking.
This dataset is an example of an everyday production with professional
cameras in a green-screen studio with an artificial lighting environment.
The dataset contains raw video files shot on a RED EPIC, with additional
witness-camera footage shot in a mixed-reality studio setting with CCTag
fiducial markers attached to the wall and ceiling. The goal of this dataset
is to provide reference footage for improving the open-source CCTag fiducial
marker library. It contains a reference camera tracking and all the material
required to build a virtual composite using the POPART camera tracking
system.
This dataset collects 56 images taken at the Place du Capitole in Toulouse
(France) and 4 related videos taken while moving around the square. The
aim of the dataset is to test and evaluate the camera tracking algorithms
developed for the POPART project, and to help the reproducibility of the experiments.
The camera tracking algorithms developed for the POPART project are based
on model tracking against an existing 3D reconstruction of the scene. First,
a collection of still images is taken and an SfM pipeline is used to perform
the 3D reconstruction of the scene; the result of this first step is a 3D
point cloud. The camera tracking then relies on camera localization
techniques: each frame is individually localized
w.r.t. the 3D point cloud using the photometric information (SIFT features)
associated with each point. This makes it possible to align the point cloud
with the current frame and thus compute the camera pose.
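Once SIFT matching has associated 2D image features with 3D points of the cloud, computing the camera pose is a perspective-n-point (PnP) problem. The numpy sketch below shows only its geometric core, the linear DLT estimate of a projection matrix from known 2D-3D correspondences; it is not POPART's actual localizer, which would additionally use RANSAC against outlier matches and non-linear refinement. The camera and points are synthetic.

```python
import numpy as np

def pnp_dlt(points_3d, points_2d):
    """Estimate a 3x4 projection matrix P from n >= 6 known 2D-3D
    correspondences via the linear DLT. Each correspondence (X, u)
    with u ~ P X (up to scale) yields two linear equations in the
    12 entries of P."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # Null-space solution: right singular vector of the smallest
    # singular value, reshaped to 3x4.
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 4)

# Synthetic scene (made-up intrinsics and pose): project known 3D points
# with a known camera, then recover the projection matrix from the
# correspondences alone.
rng = np.random.default_rng(0)
K = np.array([[1000.0, 0, 640], [0, 1000, 360], [0, 0, 1]])
P_true = K @ np.hstack([np.eye(3), np.array([[0.5], [-0.2], [2.0]])])
pts3d = rng.uniform(-1, 1, size=(8, 3)) + np.array([0, 0, 4.0])
pts2d = []
for X in pts3d:
    x = P_true @ np.append(X, 1.0)
    pts2d.append(x[:2] / x[2])

P_est = pnp_dlt(pts3d, np.array(pts2d))
P_est *= P_true[2, 3] / P_est[2, 3]  # remove the arbitrary DLT scale
```

Decomposing the recovered P (e.g. with an RQ factorization of its left 3x3 block) separates the intrinsics from the rotation and translation, i.e. the camera pose the tracker needs.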
This dataset collects 197 images taken in the courtyard of the Cap Digital
building in Paris (France) and 2 video sequences taken with a camera rig
composed of 1 main camera and 2 witness cameras on the sides looking
outwards. The aim of the dataset is to test and evaluate the camera tracking
algorithms developed for the POPART project, and to help the reproducibility
of the experiments.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 644874.