
Articulation #209

Open · wants to merge 33 commits into base: main
33 commits
8cd5ce2
add first articulation example
Feb 3, 2022
8b642f1
Merge branch 'main' of https://github.com/google-research/kubric into…
Feb 3, 2022
4c64383
add articulated animation rendering option
Feb 8, 2022
87bde1f
add dynamic camera pose to articulation.py
Feb 15, 2022
6e9d298
improve resolution
Feb 16, 2022
1aec48f
add instruction for rendering animation.
Feb 16, 2022
fea6bae
add instructions for animation rendering
Feb 16, 2022
ed8a8d3
Edit Readme.md
Feb 16, 2022
1692e49
improved camera poses
Feb 24, 2022
9e48a6b
Merge branch 'articulation' of https://github.com/google-research/kub…
Feb 24, 2022
ff719d0
add first articulation example
Feb 3, 2022
6f3c3eb
add articulated animation rendering option
Feb 8, 2022
44275b8
add dynamic camera pose to articulation.py
Feb 15, 2022
bdd544e
improve resolution
Feb 16, 2022
658c817
add instruction for rendering animation.
Feb 16, 2022
04df5ab
add instructions for animation rendering
Feb 16, 2022
a5af1a4
improved camera poses
Feb 24, 2022
126eeee
Edit Readme.md
Feb 16, 2022
744cd47
Merge branch 'articulation' of https://github.com/google-research/kub…
Feb 24, 2022
66d055b
update articulation gif
Feb 24, 2022
5cef8d7
add dynamic camera
Mar 22, 2022
9cb85bc
merged main
Mar 22, 2022
93d9e51
Merge branch 'main' into articulation
matansel Mar 28, 2022
dd8e9de
Merge branch 'main' into articulation
matansel Mar 28, 2022
22278c5
add dynamic nerf challenge
matansel Mar 28, 2022
bc5ce4e
Merge branch 'main' into articulation
matansel Mar 30, 2022
5d62cd8
add dynamic_nerf challenge
matansel Mar 30, 2022
fe1664c
change Readmes
matansel Mar 30, 2022
9ed785d
Update README.md
Mar 30, 2022
6b6d9bb
update Readme.md
matansel Mar 30, 2022
f8dc68e
--amend
matansel Mar 30, 2022
5084098
rename example.py to worker.py
matansel Mar 30, 2022
2655c5f
update README.md
matansel Mar 30, 2022
2 changes: 2 additions & 0 deletions .gitignore
@@ -4,6 +4,7 @@
*.bullet
.vscode/settings.json
output/*
output_tmp/*
docs/_build
.idea
**/.ipynb_checkpoints
@@ -14,4 +15,5 @@ dist/
*.blend*
shapenet2kubric/shapenet2kubric.log
examples/KuBasic/Cone/*
examples/KuBasic/rain*/*
Dockerfile
2 changes: 1 addition & 1 deletion README.md
@@ -45,7 +45,6 @@ Kubric employs **Blender 2.93** (see [here](https://github.com/google-research/k
- Access to rich ground truth information about the objects in a scene for the purpose of evaluation (eg. object segmentations and properties)
- Control the train/test split to evaluate compositionality and systematic generalization (for example on held-out combinations of features or objects)


## Challenges and datasets
Generally, we store datasets for the challenges in this [Google Cloud Bucket](https://console.cloud.google.com/storage/browser/kubric-public).
More specifically, these challenges are *dataset contributions* of the Kubric CVPR'22 paper:
@@ -59,6 +58,7 @@ More specifically, these challenges are *dataset contributions* of the Kubric CV
* [Single View Reconstruction](challenges/single_view_reconstruction)
* [Video Based Reconstruction](challenges/video_based_reconstruction)
* [Point Tracking](challenges/point_tracking)
* [Dynamic NeRF](challenges/dynamic_nerf)

Pointers to additional datasets/workers:
* [ToyBox (from Neural Semantic Fields)](https://nesf3d.github.io)
14 changes: 14 additions & 0 deletions challenges/dynamic_nerf/README.md
@@ -0,0 +1,14 @@
# Dynamic NeRF Challenge
An example of rendering an animation from Blender is given in examples/articulation.py. To render the following example, run:
```
mkdir examples/KuBasic/rain_v22
gsutil cp gs://research-brain-kubric-xgcp/articulation/* examples/KuBasic/rain_v22/
docker build -f docker/KubruntuDev.Dockerfile -t kubricdockerhub/kubruntudev:latest .
docker run --rm --interactive --user `id -u`:`id -g` \
--volume `pwd`:/workspace \
kubricdockerhub/kubruntudev \
python3 challenges/dynamic_nerf/worker.py
```

![](/docs/images/articulation.gif)
95 changes: 95 additions & 0 deletions challenges/dynamic_nerf/worker.py
@@ -0,0 +1,95 @@
# Copyright 2021 The Kubric Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import imageio
import logging
import numpy as np
import kubric as kb
from kubric.renderer.blender import Blender as KubricRenderer

logging.basicConfig(level="INFO")


# --- CLI arguments
parser = kb.ArgumentParser()
FLAGS = parser.parse_args()

# --- create scene and attach a renderer
image_size = (512, 512)
scene = kb.Scene(resolution=image_size)
scene.frame_start = 1
scene.frame_end = 180  # < number of frames to render
scene.frame_rate = 24  # < rendering framerate
camera_radius = 0.7
camera_height = 1.5
camera_look_at = (0, 0, camera_height)
use_static_scene = False

if use_static_scene:
  blend_file = "examples/KuBasic/rain_v22/rain_v2.1.blend"
  scratch_dir = "output/static"
else:
  blend_file = "examples/KuBasic/rain_v22/rain_v2.1_face_animated.blend"
  scratch_dir = "output/dynamic"

scene += kb.DirectionalLight(name="sun", position=(10, -10, 1.5), look_at=(0, 0, 1), intensity=3.0)
scene += kb.DirectionalLight(name="sun", position=(-10, -10, 1.5), look_at=(0, 0, 1), intensity=3.0)
scene += kb.PerspectiveCamera(name="camera", position=(0, -camera_radius, camera_height), look_at=camera_look_at)
renderer = KubricRenderer(scene, scratch_dir=scratch_dir,
                          custom_scene=blend_file)

frames = []

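# --- animate the camera: sweep it back and forth along the front half (y <= 0) of a
# circle of radius camera_radius around the z-axis, keyframing its pose at every frame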
for frame_nr in range(scene.frame_start - 1, scene.frame_end + 2):
  interp = float(frame_nr - scene.frame_start + 1) / float(scene.frame_end - scene.frame_start + 3)
  scene.camera.position = (-camera_radius * np.sin(interp * 2 * np.pi),
                           -camera_radius * np.abs(np.cos(interp * 2 * np.pi)),
                           camera_height)
  scene.camera.look_at(camera_look_at)
  scene.camera.keyframe_insert("position", frame_nr)
  scene.camera.keyframe_insert("quaternion", frame_nr)

  kb.write_json(filename=f"{scratch_dir}/camera/frame_{frame_nr:04d}.json",
                data=kb.get_camera_info(scene.camera))

  frames.append(frame_nr)

frames_dict = renderer.render(frames=frames, ignore_missing_textures=True)
kb.write_image_dict(frames_dict, f"{scratch_dir}/output/")

# --- save the blender file
renderer.save_state(f"{scratch_dir}/articulation.blend")

# --- generate a summary gif and an mp4 video
gif_writer = imageio.get_writer(f"{scratch_dir}/summary.gif", mode="I")
mp4_writer = imageio.get_writer(f"{scratch_dir}/video.mp4", fps=scene.frame_rate)
all_files = os.listdir(f"{scratch_dir}/output/")
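# for each frame: tile all rendered PNG modalities (except object coordinates) into a
# two-row summary image for the gif, and append the corresponding frame from
# {scratch_dir}/images to the mp4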
for frame in range(scene.frame_start - 1, scene.frame_end + 2):
  image_files = [os.path.join(f"{scratch_dir}/output", f) for f in all_files if str(frame).zfill(4) in f]
  image_files = [f for f in image_files if f.endswith(".png") and "coordinates" not in f]
  image_files = sorted(image_files)
  assert len(image_files) > 0
  images = [imageio.imread(f, "PNG") for f in image_files]
  images = [np.dstack(3 * [im[..., None]]) if len(im.shape) == 2 else im for im in images]
  images = [im[..., :3] if im.shape[2] > 3 else im for im in images]
  if len(images) % 2 == 1:
    images += [np.zeros_like(images[0])]
  images = [np.hstack(images[:len(images) // 2]), np.hstack(images[len(images) // 2:])]
  images = np.vstack(images)
  gif_writer.append_data(images)
  rgb_image_file = os.path.join(f"{scratch_dir}/images", f"frame_{frame:04d}.png")
  image = imageio.imread(rgb_image_file)
  mp4_writer.append_data(image)
gif_writer.close()
mp4_writer.close()
Binary file added docs/images/articulation.gif
4 changes: 2 additions & 2 deletions kubric/file_io.py
@@ -270,11 +270,11 @@ def write_normal_batch(data, directory, file_template="normal_{:05d}.png", max_w
  multi_write_image(data, path_template, write_fn=write_png, max_write_threads=max_write_threads)


def write_coordinates_batch(data, directory, file_template="object_coordinates_{:05d}.png",
def write_coordinates_batch(data, directory, file_template="object_coordinates_{:05d}.tiff",
Collaborator comment:
Please do not change from PNG to TIFF here; that is a breaking change. If you need the coordinates as a TIFF file, add a new write_coordinates_batch_tif method and, in your worker, set kb.file_io.DEFAULT_WRITERS["object_coordinates"] = write_coordinates_batch_tif.
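A minimal sketch of that suggestion, for illustration only (the helper name comes from this comment; its body is an assumption that mirrors the existing write_coordinates_batch and is not code from this PR):

```python
# Hypothetical sketch: leave the PNG default untouched and add a separate TIFF
# writer that the dynamic_nerf worker opts into explicitly.
import kubric as kb
from kubric.file_io import as_path, multi_write_image, write_tiff


def write_coordinates_batch_tif(data, directory,
                                file_template="object_coordinates_{:05d}.tiff",
                                max_write_threads=16):
  # Same contract as write_coordinates_batch, but writes TIFF files instead of PNGs.
  assert data.ndim == 4 and data.shape[-1] == 3, data.shape
  path_template = str(as_path(directory) / file_template)
  multi_write_image(data, path_template, write_fn=write_tiff,
                    max_write_threads=max_write_threads)


# In the worker, override the default writer for object coordinates only:
kb.file_io.DEFAULT_WRITERS["object_coordinates"] = write_coordinates_batch_tif
```

This keeps the library default backwards compatible while the worker still receives TIFF coordinates.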

                            max_write_threads=16):
  assert data.ndim == 4 and data.shape[-1] == 3, data.shape
  path_template = str(as_path(directory) / file_template)
  multi_write_image(data, path_template, write_fn=write_png, max_write_threads=max_write_threads)
  multi_write_image(data, path_template, write_fn=write_tiff, max_write_threads=max_write_threads)


def write_depth_batch(data, directory, file_template="depth_{:05d}.tiff", max_write_threads=16):
13 changes: 10 additions & 3 deletions kubric/renderer/blender.py
@@ -233,6 +233,9 @@ def save_state(self, path: PathLike, pack_textures: bool = True):
    path.parent.mkdir(parents=True, exist_ok=True)  # ensure directory exists
    logger.info("Saving '%s'", path)
    tf.io.gfile.copy(tmp_path, path, overwrite=True)

  def load_state(self, path: PathLike):
    bpy.ops.wm.open_mainfile(filepath=path)

  def render(self,
             frames: Optional[Sequence[int]] = None,
@@ -263,8 +266,10 @@ def render(self,
- "normal": shape = (nr_frames, height, width, 3) (uint16)
"""
    logger.info("Using scratch rendering folder: '%s'", self.scratch_dir)
    if not ignore_missing_textures:
      self._check_missing_textures()
    missing_textures = sorted({img.filepath for img in bpy.data.images
                               if tuple(img.size) == (0, 0) or len(img.filepath) > 1})
    if missing_textures and not ignore_missing_textures:
      raise RuntimeError(f"Missing textures: {missing_textures}")
Collaborator comment:
Any particular reason for inlining _check_missing_textures here?
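For illustration, a sketch of the non-inlined form the comment seems to suggest (an assumption about the intended structure, with the condition copied from the inlined version above):

```python
import bpy  # only available inside Blender's bundled Python, as in kubric's renderer


def _check_missing_textures(self):
  # Sketch of keeping the check as a method on the Blender renderer class.
  missing_textures = sorted({img.filepath for img in bpy.data.images
                             if tuple(img.size) == (0, 0) or len(img.filepath) > 1})
  if missing_textures:
    raise RuntimeError(f"Missing textures: {missing_textures}")


# ...and render() would keep the original two-line call:
# if not ignore_missing_textures:
#   self._check_missing_textures()
```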

    self.set_exr_output_path(self.scratch_dir / "exr" / "frame_")
    # --- starts rendering
    if frames is None:
@@ -357,6 +362,9 @@ def clear_and_reset_blender_scene(verbose: bool = False, custom_scene: str = Non
    else:
      logger.info("Loading scene from '%s'", custom_scene)
      bpy.ops.wm.open_mainfile(filepath=custom_scene)
      dirname = os.path.dirname(custom_scene)
      for img in bpy.data.images:
        img.filepath = img.filepath.replace("//", dirname + "/")

  @singledispatchmethod
  def add_asset(self, asset: core.Asset) -> Any:
@@ -716,7 +724,6 @@ def __call__(self, change):
if self.converter:
# use converter if given
new_value = self.converter(new_value)

setattr(self.blender_obj, self.attribute, new_value)


1 change: 1 addition & 0 deletions requirements.txt
@@ -6,6 +6,7 @@ etils[epath_no_tf]
cloudml-hypertune
google-cloud-storage
imageio
imageio-ffmpeg
Collaborator comment:
I don't think this should be a mandatory kubric dependency just because the dynamic nerf worker uses it. I'd prefer adding specific instructions for additional dependencies to that worker.

munch
numpy>=1.17
pandas