ANR LabCom Cineviz

ANR project - 2016-2019. Amount: 300 k€

Topics

The goal of this joint laboratory, named Cineviz, is to offer the film industry a set of new previsualisation tools that (i) ease the creation of cinematographic sequences in virtual environments before the shooting (previsualisation stage, or previs), and (ii) prepare the technical implementation of these sequences during the shooting (technical visualisation stage, or techvis). These tools stand in stark contrast with existing ones, which are essentially based on generic 3D modelers: complex to use, not easily accessible to film professionals, and not adapted to the specific needs of the movie industry.
This LabCom pursues two major shifts in the field: (i) merging the stages of shooting and lighting for the rapid and creative exploration of cinematographic sequences in 3D environments, and (ii) merging the previs and techvis stages for a better preparation of the shooting stage. The expected impacts are a reduction of production costs through better preparation, and support for creativity through a rapid exploration of possibilities.
The LabCom is organized around three research axes to drive the proposed shifts: (i) the design of an automated framing model that relies on a formalization of cinematographic knowledge and integrates constraints related to real cameras and real rigs, (ii) the design of an interactive model to create camera trajectories that are also tied to real camera rigs, and (iii) the formalisation of lighting knowledge from the film industry in a computationally efficient model to assist not only the placement of light sources but also smart interaction with them.
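To make the first axis concrete, here is a minimal sketch in Python (with invented names and values; this is not the project's code) of how a single framing constraint, the shot size, can be turned into a camera placement: the camera distance is derived from the fraction of the frame the subject should occupy, while the viewing angle is left as a free parameter. A real framing model would also handle on-screen position, occlusion, and physical rig constraints.

    import math

    def place_camera(subject_pos, subject_height, shot_fraction,
                     view_angle_deg, fov_deg=60.0):
        """Place a camera so the subject fills `shot_fraction` of the frame
        height, seen from `view_angle_deg` around the vertical axis.
        Hypothetical sketch; all names and defaults are illustrative."""
        # The frame height at distance d is 2 * d * tan(fov / 2), so solve
        # subject_height = shot_fraction * frame_height for d.
        half_fov = math.radians(fov_deg) / 2.0
        distance = subject_height / (2.0 * shot_fraction * math.tan(half_fov))

        # Orbit around the subject at that distance.
        theta = math.radians(view_angle_deg)
        x = subject_pos[0] + distance * math.cos(theta)
        z = subject_pos[2] + distance * math.sin(theta)
        y = subject_pos[1] + subject_height / 2.0  # aim roughly mid-body
        return (x, y, z)

    # Example: a medium shot (subject spans half the frame) from a 3/4 angle.
    print(place_camera(subject_pos=(0.0, 0.0, 0.0), subject_height=1.8,
                       shot_fraction=0.5, view_angle_deg=45.0))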
The advances will be integrated in the SolidFrame software, the result of an ongoing technological transfer between INRIA and SolidAnim. Each advance in the axes will correspond to a software milestone in SolidFrame and will be proposed to its clients. To reach these ambitions, Cineviz proposes to share resources, technologies and knowledge from the INRIA project-team Mimetic, on one side, and the SME SolidAnim on the other. Mimetic is a joint team between Inria Rennes - Bretagne Atlantique, the IRISA laboratory (UMR 6074) and the M2S laboratory. The team is a world leader on challenges related to cinematography in virtual environments and publishes in the leading scientific venues of computer animation (Siggraph, Eurographics, Siggraph Asia, Symposium on Computer Animation), artificial intelligence (AAAI) and multimedia (ACM Multimedia).
SolidAnim is a French SME located in Angoulême and Ivry, and now also in Los Angeles. The SME specializes in developing digital preproduction tools for the film industry, more precisely in motion capture for special effects. It develops the SolidTrack software, which composites virtual and real images in real time on the shooting stage (on-set previs). SolidAnim works on many productions, including the next episodes of James Cameron's Avatar.
The proposed innovations will allow SolidAnim to extend the range of services the company can offer during film preparation, and to address new markets, typically those which do not rely on previsualisation techniques.

Challenges

Workflows in the movie industry are changing at a fast pace. As new technologies emerge, they transform the possibilities and impact the content of movies. They deeply alter the way content is created, not only by reducing costs, but also by fostering the creativity of film crews.
A relevant example is the recent and significant rise of digital tools at the pre-production stage, through techniques like previsualisation. Previsualisation consists in prototyping a cinematographic sequence in a 3D virtual environment before shooting it on stage. It enables a rough layout of the scene, the cinematography (placement of cameras and lights), and the editing. Such previsualisation enables the rapid exploration of alternatives in framing, lighting, shooting and editing, providing film crews with strong support for creativity at an early stage of the production. It also reduces mistakes on the stage by anticipating issues, and helps to prepare and coordinate the shooting.
However, existing previsualisation tools offer very little support for creativity, simply because they do not formalize cinematographic knowledge.

ANR JCJC Cinecitta - Interactive Virtual Cinematography

ANR project - 2012-2016 - Cinecitta. Amount: 208 k€

Topics

The main objective of this research is to explore and evaluate a new workflow that mixes user interaction and automated computation for interactive virtual cinematography, in order to better support user creativity. In particular, following preliminary results presented at ACM Multimedia 2011, we intend to propose a novel workflow in which artificial intelligence techniques are employed to generate a large range of viewpoint suggestions, to be explored by the users as a starting point for creating shots and performing cuts. Typically, users would reframe the selected viewpoints to their needs, shoot the sequence, and request further suggestions for the next shots. Each subsequent suggestion will rely on the existing shots to generate relevant viewpoints that follow classical continuity rules between shots. A further, original way of interacting with such a system is through motion-tracked cameras: devices whose position and motion are tracked in a real environment and mapped to a virtual camera in a virtual environment. Enabling a proper mix between hints provided by an automated system and the interactive possibilities offered by a motion-tracked camera represents an important scientific challenge and potentially leads to a strong industrial impact.
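As an illustration of the suggestion step, the following Python sketch (with hypothetical names and thresholds; this is not the Cinecitta implementation) enumerates candidate viewpoint angles around the subjects and filters them with two classical continuity rules: the 30-degree rule and the line-of-action rule.

    import math

    def suggest_viewpoints(prev_angle_deg, line_of_action_deg, n_candidates=36):
        """Generate candidate viewpoint angles around the subjects and keep
        those compatible with two classical continuity rules. Hypothetical
        sketch: a real system would also score visibility, composition,
        and shot size before presenting suggestions to the user."""
        suggestions = []
        for i in range(n_candidates):
            angle = i * 360.0 / n_candidates

            # 30-degree rule: a cut must change the camera angle by at
            # least 30 degrees, otherwise it reads as a jump cut.
            delta = abs((angle - prev_angle_deg + 180.0) % 360.0 - 180.0)
            if delta < 30.0:
                continue

            # Line-of-action rule: stay on the same side of the line
            # joining the subjects to preserve screen direction.
            side = math.sin(math.radians(angle - line_of_action_deg))
            prev_side = math.sin(math.radians(prev_angle_deg - line_of_action_deg))
            if side * prev_side < 0.0:
                continue

            suggestions.append(angle)
        return suggestions

    # Example: previous shot at 40 degrees, line of action along 0 degrees.
    print(suggest_viewpoints(prev_angle_deg=40.0, line_of_action_deg=0.0))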

Challenges

The underlying scientific and technical challenges are:

Results

The results of the ANR JCJC project have been transferred to the SolidAnim company, a French SME that works on many well-known international productions. The software is now available: the SolidFrame tool, a MotionBuilder 2014 & 2015 plugin compatible with SolidTrack technologies. More information on SolidFrame.
Publications and patents are here: https://cinecitta.inria.fr/publications/

FP7 Network of Excellence - Integrating Research in Interactive Storytelling

EU project - 2009-2011 - IRIS. Amount: 270 k€

Topics

The IRIS project aims at creating a virtual centre of excellence that will be able to achieve breakthroughs in the understanding of Interactive Storytelling and the development of corresponding technologies.

Our contributions

The objective of this work package is to explore the use of declarative methods and the combination of off-line and real-time solving techniques for the maintenance of cinematographic idioms in accordance with narrative progression. Previous approaches to automating cinematography have addressed each of these aspects in isolation (e.g. allowing the declarative specification of camera shots for fixed lights and actor positions). By contrast, IRIS will both develop a unified language for cinematographic expression that relates narrative goals directly to cinematographic templates, and configure cameras, staging and lighting concurrently in the real-time satisfaction of these goals.

Cinematography relates the interdependent problems of camera control, staging (the positioning of actors and other scene elements) and lighting design. Current approaches to specifying cameras, staging and lighting are based on traditional modelling techniques founded on abstract mathematical notions (e.g. splines and velocity graphs), more or less hidden by high-level manipulators that allow the positioning and animation of objects as well as cameras in a 3D world. These models and interaction processes are far removed from cinematographic notions; instead, we propose the use of optimization and constraint-based techniques to allow the explicit specification of a consistent cinematographic style. Fully automating cinematography for interactive storytelling requires a number of separate innovations both within and across the cinematographic elements: cameras, staging and lighting.
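As a toy illustration of this declarative idea, the sketch below (a deliberately simplified assumption, not the IRIS language) expresses a shot template as weighted soft constraints over a camera angle and a key-light direction, and satisfies them jointly with a naive generate-and-test solver.

    import random

    # A shot "template": preferred values with weights, in the spirit of a
    # declarative cinematographic specification (all values illustrative).
    TEMPLATE = {
        "camera_angle": (45.0, 1.0),      # preferred 3/4 view, weight 1.0
        "key_light_offset": (30.0, 0.5),  # key light ~30 deg off camera axis
    }

    def cost(camera_deg, light_deg):
        """Weighted squared deviation from the template's preferred values."""
        target_cam, w_cam = TEMPLATE["camera_angle"]
        target_off, w_off = TEMPLATE["key_light_offset"]
        return (w_cam * (camera_deg - target_cam) ** 2
                + w_off * ((light_deg - camera_deg) - target_off) ** 2)

    def solve(samples=1000, seed=0):
        """Naive generate-and-test over joint camera/light configurations.
        A real solver would use constraint propagation or continuous
        optimization, and would treat staging as a third set of variables."""
        rng = random.Random(seed)
        candidates = ((rng.uniform(0.0, 360.0), rng.uniform(0.0, 360.0))
                      for _ in range(samples))
        return min(candidates, key=lambda cl: cost(*cl))

    camera_deg, light_deg = solve()
    print(camera_deg, light_deg)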

CNRS / NSC - Smart Motion Planning

CNRS PICS framework (joint with the National Science Council, Taiwan) - 2009-2011. Amount: 20 k€

Topics

The main objective of this Franco-Taiwanese project is to provide the foundational models and techniques that will enable the design of a new generation of more intelligent motion planners in virtual environments. These models are based on the automated extraction of topological information to build an informed, abstract representation of the environment, which is used in turn to plan motions at symbolic levels rather than at numeric and geometric levels. This reduces the complexity of the planning while augmenting its expressiveness.

Motion planning is a complex and critical problem that finds its application in a large range of domains. Results from the robotics field have long been projected onto planning problems in virtual spaces, and have mainly been adapted to character motion planning, object planning and camera path planning. There are numerous important applications of these techniques (crowd simulation for emergency situations, creating intelligent behaviours in training and edutainment scenarios, dynamic target tracking, smart exploration and navigation in virtual environments, and virtual camera planning for storytelling) that would profit from more informed environments and a higher level of reasoning.

This project proposes to make a fundamental and qualitative step in motion planning within virtual environments by integrating the computation and use of topological and semantic information in the exploration of navigable search spaces. This will improve both the expressiveness and the realism of planning techniques, while reducing the complexity and the amount of hand-authored input on raw geometry.

Our contributions

To achieve these goals, the project is organized around four major axes: (1) propose a semantic information scheme allowing a semantic abstraction of the environment, (2) combine the semantic model with automatic topology extraction to provide a model enabling a coupled abstraction of both semantic and topological relations, (3) adapt planning techniques to include semantic and topological reasoning, and (4) demonstrate the advantages of the method on two classes of problems: character navigation and camera planning.

The first two steps rely on the knowledge of both the French and Taiwanese partners to study the nature and levels of topological and semantic information required to improve the planning. The teams will capitalize on the experience gained with the TopoPlan (Topological Planner) tool [Lamarche, 2009], designed by the French partners. This model provides an exact spatial decomposition scheme that automatically extracts the environment topology (areas, spatial relations). It will be extended to take into account the semantics associated with the environment and to provide a multi-level abstraction.

We then propose to integrate the semantic and topological information at the heart of the planning technique by relying on the Taiwanese expertise. The planning will be performed at two levels: first by reasoning on the semantic and topological levels (e.g. through the construction of extended aspect graphs), and second at the geometric level. The dynamic aspects of the applications will require the design of specific local and global reactive processes (i.e. plan globally while reacting locally). Expressiveness can be improved by using constraints, established on the different levels of abstraction, to control the behaviour of the planning.
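The sketch below illustrates this two-level idea on a toy example (the area graph, semantic labels and waypoints are invented for illustration): a symbolic pass plans over a topological graph of semantically labeled areas, and a geometric pass expands the resulting area path into waypoints.

    import heapq

    # Topological level: areas with semantic labels and traversal costs.
    AREAS = {
        "hall":     {"label": "public",  "neighbors": {"corridor": 1.0}},
        "corridor": {"label": "public",  "neighbors": {"hall": 1.0, "office": 2.0}},
        "office":   {"label": "private", "neighbors": {"corridor": 2.0}},
    }

    # Geometric level: a representative waypoint per area (illustrative).
    WAYPOINTS = {"hall": (0.0, 0.0), "corridor": (5.0, 0.0), "office": (5.0, 4.0)}

    def plan_topological(start, goal, forbidden_labels=()):
        """Dijkstra over the area graph with semantic filtering: areas whose
        label is forbidden are never entered (e.g. 'private' for a visitor)."""
        queue = [(0.0, start, [start])]
        visited = set()
        while queue:
            dist, area, path = heapq.heappop(queue)
            if area == goal:
                return path
            if area in visited:
                continue
            visited.add(area)
            for nxt, cost in AREAS[area]["neighbors"].items():
                if nxt not in visited and AREAS[nxt]["label"] not in forbidden_labels:
                    heapq.heappush(queue, (dist + cost, nxt, path + [nxt]))
        return None

    def refine_geometric(area_path):
        """Second pass: expand the symbolic plan into geometric waypoints.
        A real planner would run a local geometric planner between areas."""
        return [WAYPOINTS[a] for a in area_path]

    path = plan_topological("hall", "office")
    print(path, refine_geometric(path))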

The last step consists in applying the proposed techniques to two specific and motivating problems in motion planning, on which both partners have gathered experience: camera and character control in cluttered environments. The partners will share their issues, example scenes and previous results to provide an expressive framework for evaluating the approach.

This work takes place in a more general framework that aims to provide intuitive authoring tools, thereby bringing expressiveness and realism to people's engagement with virtual 3D worlds. Better planning through qualitative reasoning in 3D environments represents a strong move in this direction.


FUI Sustains project - Constraint-based Prototyping of Urban Environments

National FUI 10 funding - 2010-2013. Amount: 160 k€.

Topics

The Sustains project develops a decision-making tool for urban planning and the selection of energy systems. The aim is to offer a smart assistant for understanding the complexity of urban models (residential, industrial, public services) in their social, economic, energy, mobility, and sustainability dimensions. The integration, visualization, and manipulation of these dimensions in an operational computerized platform of the city aim to re-position the various players (elected officials, financiers, civil society) within the decision-making process. Ultimately, Sustains will be able to offer a representation of the city that integrates assumptions on development choices, together with the management of energy, water, air, and waste.

Our contributions

To be detailed ...