
[nodes] New node "RenderAnimatedCamera" using blender API #1432

Merged
merged 9 commits into from
Jul 23, 2021

Conversation

DanielDelaporus
Contributor

Description

This node uses Blender's API to display the SfM data at specific points in the pipeline, using the imported animated camera to render the result.
Before going into details: the user needs to set the path to their Blender executable. This is mandatory for the node to work.

  • First off, if you import your data as an Alembic file, the node will display it as a point cloud using Blender's particle system.
  • If you import your data as an OBJ file, the node will display it as a mesh. More precisely, for visibility purposes, the edges of the mesh are rendered.
  • Importing the animated camera is required: it rebuilds the sequence in Blender so the render can be produced.
  • Although not mandatory, the node can also display the background images corresponding to your dataset, if you link the image folder.
  • Finally, the result of the render is written to the path given in the node, in any of the supported video formats.
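The extension-based behaviour above can be sketched as a small dispatch helper. This is a hypothetical simplification for illustration (the function name and mode strings are not from the PR; the real node makes this decision inside its Blender script):

```python
import os

# Hypothetical helper: map the SfM data extension to the render mode
# described above (Alembic -> point cloud, OBJ -> mesh edges).
def render_mode(sfm_data_path):
    ext = os.path.splitext(sfm_data_path)[1].lower()
    if ext == ".abc":
        return "pointCloud"   # displayed via Blender's particle system
    if ext == ".obj":
        return "mesh"         # displayed via its rendered edges
    raise ValueError("Unsupported SfM data format: " + ext)

print(render_mode("scene.abc"))  # -> pointCloud
print(render_mode("result.obj")) # -> mesh
```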

Features list

  • The SfM data extension drastically changes how the data is processed, and completely changes the render options, so I added a system that dynamically displays the relevant render features based on the SfM data path.

  • If the data is a point cloud, the user can change the size, color, and density of the particles used to display it in Blender.

  • If the data is a mesh, the user can change the color of the edges displayed in Blender.

  • Displaying the background images is optional.

  • Finally, the sequence is rendered in one of 4 supported formats. GIF is not included because it is not supported by Blender; however, it could be added in the future using external conversion tools.

  • Something else we could add is the possibility of producing several renders if the imported Alembic file contains several cameras.
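The dynamic display of render features described in the first feature can be sketched as follows. The parameter names here are hypothetical (the real node wires this through Meshroom's attribute descriptions), but the logic matches the behaviour described above:

```python
# Hypothetical parameter lists per render mode; the actual node exposes
# its parameters through Meshroom attribute descriptions.
POINT_CLOUD_PARAMS = ["particleSize", "particleColor", "pointCloudDensity"]
MESH_PARAMS = ["edgeColor"]

def visible_params(sfm_data_path):
    # Show particle options for Alembic point clouds,
    # edge options for OBJ meshes, nothing otherwise.
    path = sfm_data_path.lower()
    if path.endswith(".abc"):
        return POINT_CLOUD_PARAMS
    if path.endswith(".obj"):
        return MESH_PARAMS
    return []

print(visible_params("scene.abc"))  # -> ['particleSize', 'particleColor', 'pointCloudDensity']
```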

Implementation remarks

Most of the choices were made so the client could view and compare the Meshroom result against the original dataset.
We tried to multiply the features so that the client has choices in their render method.

Every render was made using blender.

@@ -0,0 +1,405 @@
import bpy
Member
Rename the file to something like "renderAnimatedCameraInBlender.py"


class RenderAnimatedCamera(desc.CommandLineNode):
    commandLine = '{blenderPathValue} -b --python {scriptPathValue} -- {allParams}'
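In this command line, Blender runs headless (`-b`) and executes the Python script; everything after the `--` separator is ignored by Blender and left for the script to parse. A common pattern for recovering those arguments inside the script (a sketch of the convention, not the PR's exact code):

```python
import sys

def script_args(argv):
    # Blender stops parsing at "--"; everything after it belongs
    # to the user script.
    if "--" not in argv:
        return []
    return argv[argv.index("--") + 1:]

# Simulated headless invocation (normally this would be sys.argv):
argv = ["blender", "-b", "--python", "render.py", "--", "--sfmData", "scene.abc"]
print(script_args(argv))  # -> ['--sfmData', 'scene.abc']
```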

Member
Add category and documentation variables.

@natowi natowi mentioned this pull request Jul 2, 2021
10 tasks
@natowi natowi added wip work in progress and removed wip work in progress labels Jul 2, 2021
The node is almost functional. The animated camera works, but the imported point cloud isn't visible in the rendering... I'll need to find a way to display it.
(For now there is a cube as a placeholder in the scene to show the movement of the camera.)
The returns of ExportAnimatedCamera didn't include the path to the undistorted images, so I added it.

The Blender rendition node can now (among other things) display point clouds. The code is cleaned up; only the background image sequence remains to be implemented...
Added a numeric scale to the node to make the density of the point cloud rendering more customisable...
I had to use the node graph to render the image in the background. I also made the node much more adaptable. I'll verify that it works with another set of images.
Almost complete version of the node: I added a background that can render with Eevee, and replaced the cubes used as particles with a plane that always faces the camera. One of the only things left is the option to change the particle color (among other things).
Many minor bug fixes, and added the possibility to change the particle color of the rendering so the user can choose the clearest color for their case. Commented a lot of my code to make it readable to someone other than myself.
Added the possibility of rendering the output of the Meshing node as an edge-detection render of the OBJ. Added the activation and deactivation of the background images as an option. Improved how the arguments are shown, with conditional display of some arguments.
@fabiencastan fabiencastan merged commit 0055f1e into develop Jul 23, 2021
@fabiencastan fabiencastan deleted the dev/blenderRender branch July 23, 2021 14:27
@fabiencastan fabiencastan added this to the Meshroom 2021.2.0 milestone Jul 23, 2021