Python随身听 Tech Picks - 2020-11-12
🤩Python随身听 Tech Picks: /donnemartin/system-design-primer
👉Learn how to design large-scale systems. Prep for the system design interview. Includes Anki flashcards.
😎TOPICS:
programming,development,design,design-system,system,design-patterns,web,web-application,webapp,python,interview,interview-questions,interview-practice
⭐️STARS: 112085, gained today ↑: 313
👉README:
URL: https://github.com/donnemartin/system-design-primer
🤩Python随身听 Tech Picks: /3b1b/manim
👉Animation engine for explanatory math videos
😎TOPICS:
python,animation,explanatory-math-videos,3b1b-videos
⭐️STARS: 27777, gained today ↑: 92
👉README:
Manim is an animation engine for explanatory math videos. It's used to create precise animations programmatically, as seen in the videos at 3Blue1Brown.
This repository contains the version of manim used by 3Blue1Brown. There is also a community maintained version at https://github.com/ManimCommunity/manim/.
To get help or to join the development effort, please join the Discord.
Installation
Manim runs on Python 3.6 or higher. You can install it from PyPI via pip:
pip3 install manimlib
System requirements are cairo, ffmpeg, sox (optional, if you want a prompt tone played after rendering), and latex (optional, if you want to use LaTeX).
You can now use it via the manim command. For example:
manim my_project.py MyScene
For more options, take a look at the Using manim section further below.
URL: https://github.com/3b1b/manim
🤩Python随身听 Tech Picks: /apache/airflow
👉Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
😎TOPICS:
airflow,apache,apache-airflow,python,scheduler,workflow
⭐️STARS: 19054, gained today ↑: 82
👉README:
URL: https://github.com/apache/airflow
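Airflow's "programmatically author, schedule" idea boils down to declaring tasks and their upstream dependencies as a directed acyclic graph, then running tasks only once their dependencies have finished. A toy dependency resolver in plain Python sketches that core (illustrative only, not Airflow's API; the ETL task names are made up):

```python
def topological_order(deps):
    """Run order for a DAG given {task: set of upstream tasks};
    Kahn's algorithm, raising if the graph has a cycle."""
    deps = {t: set(u) for t, u in deps.items()}  # defensive copy
    order = []
    while deps:
        # tasks whose dependencies have all completed
        ready = sorted(t for t, u in deps.items() if not u)
        if not ready:
            raise ValueError("cycle detected")
        for t in ready:
            order.append(t)
            del deps[t]
        for u in deps.values():
            u.difference_update(ready)
    return order

# extract -> transform -> {load, report}
dag = {"extract": set(), "transform": {"extract"},
       "load": {"transform"}, "report": {"transform"}}
order = topological_order(dag)
```

In real Airflow the same structure is declared with operators and `>>` dependencies inside a `DAG` object, and the scheduler adds retries, time-based triggering, and distributed execution on top of this ordering.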
🤩Python随身听 Tech Picks: /nocomplexity/ArchitecturePlaybook
👉The Open Architecture Playbook. Use it to create better and faster (IT)Architectures. OSS Tools, templates and more for solving IT problems using real open architecture tools that work!
😎TOPICS:
architecture,design-tools
⭐️STARS: 507, gained today ↑: 42
👉README:
ArchitecturePlaybook
Smart people have been thinking about how to create IT architectures for as long as there have been computers. Ideas come and go, but creating a good architecture can still be complex and time consuming, especially when you try to reinvent the wheel yourself. With this interactive playbook you can create your IT architecture better and faster.
This architecture playbook is divided into the commonly used architecture sections:
Business
Data
Applications and of course
Technology Infrastructure (TI)
This playbook is primarily created for online use.
HELP?!
Share this book! The best way to help is to share this eBook!
This is ...
URL: https://github.com/nocomplexity/ArchitecturePlaybook
🤩Python随身听 Tech Picks: /slundberg/shap
👉A game theoretic approach to explain the output of any machine learning model.
😎TOPICS:
interpretability,machine-learning,deep-learning,gradient-boosting,shap,shapley,explainability
⭐️STARS: 10765, gained today ↑: 23
👉README:
SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions (see papers for details and citations).
Install
Shap can be installed from either ...
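The classic Shapley values that SHAP builds on can be computed exactly for a small number of features by enumerating all coalitions. A minimal pure-Python sketch of that definition (the toy value function and feature names are made up for illustration; this is not shap's API, which approximates these values efficiently for real models):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values: each feature's marginal contribution,
    averaged over all coalitions with the combinatorial weights
    k!(n-k-1)!/n! from the Shapley formula."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for coal in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(coal) | {f}) - value(set(coal)))
        phi[f] = total
    return phi

# Toy additive "model": a coalition's value is the sum of fixed
# per-feature effects, so each Shapley value recovers that effect.
effects = {"age": 2.0, "income": 5.0, "zip": -1.0}
v = lambda coalition: sum(effects[f] for f in coalition)
phi = shapley_values(list(effects), v)
```

The exact enumeration is exponential in the number of features, which is why shap relies on model-specific approximations (TreeSHAP, KernelSHAP) rather than this brute-force form. Note that the values always sum to `v(all features) - v(empty set)`, the "efficiency" property of Shapley values.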
URL: https://github.com/slundberg/shap
🤩Python随身听 Tech Picks: /google-research-datasets/Objectron
👉Objectron is a dataset of short, object-centric video clips. The videos also contain AR session metadata, including camera poses, sparse point-clouds, and planes. In each video, the camera moves around and above the object and captures it from different views. Each object is annotated with a 3D bounding box, which describes the object's position, orientation, and dimensions. The dataset contains about 15K annotated video clips and 4M annotated images in the following categories: bikes, books, bottles, cameras, cereal boxes, chairs, cups, laptops, and shoes.
😎TOPICS:
deep-learning,computer-vision,machine-learning,python,tensorflow,pytorch,3d-vision,3d-reconstruction,ai,3d,neural-network,dataset,augmented-reality
⭐️STARS: 228, gained today ↑: 124
👉README:
Objectron Dataset
Objectron is a dataset of short, object-centric video clips with pose annotations.
Website • Dataset Format • Tutorials • License
The Objectron dataset is a collection of short, object-centric video clips, which are accompanied by AR session metadata that includes camera poses, sparse point-clouds and characterization of the planar surfaces in the surrounding environment. In each video, the camera moves around the object, capturing it from different angles. The data also contain manually annotated 3D bounding boxes for each object, which describe the object’s position, orientation, and dimensions. The dataset consists of 15K annotated video clips supplemented with over 4M annotated images in the following categories: `bikes, books, bottles, c...
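A 3D bounding box of the kind described above is fully determined by a center, an orientation, and per-axis dimensions. A small plain-Python sketch (not the Objectron toolchain, which uses full 3x3 rotation matrices; here the orientation is simplified to a single yaw angle about the vertical axis) that generates the box's 8 corners from those quantities:

```python
from math import cos, sin
from itertools import product

def box_corners(center, dims, yaw):
    """8 corners of a box with per-axis dimensions `dims`, rotated by
    `yaw` radians about the z axis and translated to `center`."""
    cx, cy, cz = center
    dx, dy, dz = dims
    corners = []
    for sx, sy, sz in product((1, -1), repeat=3):
        # corner offset in the box's local frame: half-dimensions
        x, y, z = sx * dx / 2, sy * dy / 2, sz * dz / 2
        # rotate about z, then translate to the box center
        corners.append((cx + x * cos(yaw) - y * sin(yaw),
                        cy + x * sin(yaw) + y * cos(yaw),
                        cz + z))
    return corners

corners = box_corners(center=(0.0, 0.0, 0.0), dims=(2.0, 4.0, 6.0), yaw=0.0)
```

Annotations like Objectron's store exactly this kind of parameterization (plus a full rotation), from which corner coordinates are derived for projection into each video frame.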
URL: https://github.com/google-research-datasets/Objectron
🤩Python随身听 Tech Picks: /pytorch/vision
👉Datasets, Transforms and Models specific to Computer Vision
😎TOPICS:
computer-vision,machine-learning
⭐️STARS: 7675, gained today ↑: 15
👉README:
torchvision
.. image:: https://travis-ci.org/pytorch/vision.svg?branch=master
:target: https://travis-ci.org/pytorch/vision
.. image:: https://codecov.io/gh/pytorch/vision/branch/master/graph/badge.svg
:target: https://codecov.io/gh/pytorch/vision
.. image:: https://pepy.tech/badge/torchvision
:target: https://pepy.tech/project/torchvision
.. image:: https://img.shields.io/badge/dynamic/json.svg?label=docs&url=https%3A%2F%2Fpypi.org%2Fpypi%2Ftorchvision%2Fjson&query=%24.info.version&colorB=brightgreen&prefix=v
:target: https://pytorch.org/docs/stable/torchvision/index.html
The torchvision package consists of popular datasets, model architectures, and common image transformations for computer vision.
Installation
We recommend Anaconda as the Python package management system. Please refer to
`pytorch.org <https://pytorch.org/>`_
for details of the PyTorch (``torch``) installation. The following are the corresponding
``torchvision`` versions and supported Python versions.
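Among the "common image transformations" torchvision ships, per-channel normalization is representative. A pure-Python sketch of the arithmetic behind ``torchvision.transforms.Normalize``, which computes ``(x - mean) / std`` per channel (no torch dependency here; the nested-list image stands in for a tensor):

```python
def normalize(image, mean, std):
    """Per-channel (x - mean) / std, the arithmetic performed by
    torchvision.transforms.Normalize; `image` is channels-first
    nested lists indexed as image[channel][row][col]."""
    return [[[(px - mean[c]) / std[c] for px in row]
             for row in channel]
            for c, channel in enumerate(image)]

# A 1-channel, 2x2 "image" with pixel values already scaled to [0, 1]
img = [[[0.0, 0.5], [1.0, 0.25]]]
out = normalize(img, mean=[0.5], std=[0.5])
```

With mean 0.5 and std 0.5 this maps the [0, 1] range onto [-1, 1], a common convention before feeding images to a pretrained model.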
URL: https://github.com/pytorch/vision
🤩Python随身听 Tech Picks: /AtsushiSakai/PythonRobotics
👉Python sample codes for robotics algorithms.
😎TOPICS:
python,robotics,algorithm,path-planning,control,animation,localization,slam,cvxpy,ekf,autonomous-vehicles,autonomous-driving,mapping,autonomous-navigation,robot
⭐️STARS: 10640, gained today ↑: 16
👉README:
PythonRobotics
Python code samples for robotics algorithms.
Table of Contents
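To give a flavor of the path-planning algorithms the repo collects, here is a minimal breadth-first grid planner (a generic sketch, not code from the repository; the example grid is made up):

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on a grid where 1 marks an obstacle.
    Returns the list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}  # visited set doubling as parent pointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk parents back to start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = bfs_path(grid, (0, 0), (2, 0))
```

BFS guarantees the fewest-cells path on a uniform-cost grid; the repo's planners (A*, RRT, etc.) extend the same idea with heuristics, sampling, and kinematic constraints.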
URL: https://github.com/AtsushiSakai/PythonRobotics
🤩Python随身听 Tech Picks: /fchollet/deep-learning-with-python-notebooks
👉Jupyter notebooks for the code samples of the book "Deep Learning with Python"
😎TOPICS:
⭐️STARS: 11427, gained today ↑: 13
👉README:
Companion Jupyter notebooks for the book "Deep Learning with Python"
This repository contains Jupyter notebooks implementing the code samples found in the book Deep Learning with Python (Manning Publications). Note that the original text of the book features far more content than you will find in these notebooks, in particular further explanations and figures. Here we have only included the code samples themselves and immediately related surrounding comments.
These notebooks use Python 3.6 and Keras 2.0.8. They were generated on a p2.xlarge EC2 instance.
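The Keras models in these notebooks are stacks of layers such as Dense. The computation one such layer performs, output = activation(W·x + b), can be sketched without Keras (a pure-Python stand-in; the weights and inputs below are made-up numbers, and real layers operate on batched tensors):

```python
def dense_forward(x, weights, bias, activation=lambda z: max(0.0, z)):
    """One Dense layer: activation(W.x + b). `weights` holds one row
    per output unit; the default activation is ReLU."""
    return [activation(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, bias)]

# 2 inputs -> 2 units: unit 1 lands below zero and is clipped by ReLU
out = dense_forward([1.0, 2.0],
                    weights=[[0.5, -1.0], [1.0, 1.0]],
                    bias=[0.25, -0.5])
```

In Keras the same computation is declared as `Dense(2, activation='relu')`, with the weights learned during training rather than fixed by hand.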
Table of contents
URL: https://github.com/fchollet/deep-learning-with-python-notebooks