Unstable branch sync with dmlc/tvm 20190309 #11

Closed
wants to merge 82 commits
cabe9b2
Fix fusion bug when call symbol that is not an operator. (#2630)
jroesch Feb 20, 2019
39580f7
[RUNTIME][NDArray] Allowing External Libraries to Subclass NDArrays (…
junrushao Feb 21, 2019
02db9ed
Fix pylint 2.2.2 gripes. (#2642)
mshawcroft Feb 21, 2019
adf0bbd
add MXNet converter for where operator for both NNVM and Relay (#2647)
haojin2 Feb 22, 2019
482952a
[Quantization][RELAY] Add check against NCHWc ops in the quantization…
eqy Feb 22, 2019
31c03b1
Stop pylint complaining about useless import alias. (#2655)
mshawcroft Feb 22, 2019
9f64a42
Explicitly disable pylint warning subprocess-popen-preexec-fn (#2656)
mshawcroft Feb 22, 2019
aa0d12b
[RELAY][PASS]use attribute registration style in the mac count pass (…
yidawang Feb 22, 2019
7814755
[Relay] fix anf for reference and pattern matching (#2637)
MarisaKirisame Feb 22, 2019
96f324b
fix lint (#2649)
were Feb 22, 2019
4f5676d
[RELAY/OP] Gradient of relay level1 ops (#2633)
ZihengJiang Feb 22, 2019
fcda217
Update community.rst
tqchen Feb 22, 2019
4a14e2a
[Relay] GNF (#2492)
MarisaKirisame Feb 22, 2019
8a68378
add committer (#2661)
icemelon Feb 23, 2019
20ef1e4
[Relay/TOPI][OP] Add arange op in Relay and TOPI (#2621)
icemelon Feb 23, 2019
44302fc
Fix -Wreturn-std-move and -Wself-assign-overloaded (#2669)
junrushao Feb 24, 2019
beac50c
[Relay] add more function to prelude (#2660)
MarisaKirisame Feb 25, 2019
f96b27b
[BUILD] Simplify after bind device type (#2670)
tqchen Feb 25, 2019
dfffc83
[Hybrid Script] Add `max_num_threads` (#2672)
were Feb 26, 2019
741b222
fix (#2674)
MarisaKirisame Feb 26, 2019
78d4427
[Relay] fix error in ANF (too agressively inline atomic expression an…
MarisaKirisame Feb 26, 2019
51cfb73
Add CONCATENATION to tflite frontend, support Inception V3 (#2643)
ariwaranosai Feb 26, 2019
1015c0d
[AUTOTVM][Bugfix] Fix history loader for heterogeneous execution
imorinaga Feb 27, 2019
4092611
[Graph Runtime] Run_individual for benchmarking individual layers (#2…
hlu1 Feb 27, 2019
3136e21
REGION op removed from topi and added in darkent frontend (#2275)
siju-samuel Feb 27, 2019
c107135
yolo reorg op for relay (#1941)
siju-samuel Feb 27, 2019
00fe71a
[Relay] Ensure nested higher-order functions are treated correctly (#…
slyubomirsky Feb 27, 2019
0f1d941
[Relay] add more descriptive error for checked_type (#2652)
MarisaKirisame Feb 27, 2019
052eb97
[Relay] Port param dict save/load from NNVM (#2620)
weberlo Feb 27, 2019
a977b7d
add converter for MXNet slice in nnvm and relay (#2662)
haojin2 Feb 27, 2019
edf5f7b
[PYLINT] Disable consider-using-get (#2654)
mshawcroft Feb 27, 2019
9b39048
[DOC] CoreML frontend tutorial (#2667)
kazum Feb 27, 2019
5fdaffd
Support mean in NNVM to Relay converter. (#2680)
lixiaoquan Feb 27, 2019
acd8219
Stop pylint complaining about unnecessary return statement. (#2684)
mshawcroft Feb 27, 2019
4cf98f4
[RUST] Fix typo (#2681)
take-cheeze Feb 27, 2019
13536cc
Handle Select in IntSetEvaluator (#2687)
derisavi Feb 27, 2019
d30a4e7
[CODEGEN LLVM GPU] Initialize llvm before lookup for the target (#2683)
denis0x0D Feb 27, 2019
c267349
[RELAY] Fix get_int_tuple for shape like '(1001,)' (#2691)
lixiaoquan Feb 28, 2019
11eb9fa
[AUTOTVM] tweak `sample_int` implementation (#2677)
eqy Feb 28, 2019
1d4dc80
[Lang] Layout in TVM node system (#2509)
yzhliu Feb 28, 2019
25a2e75
[DOC] Using External Libraries in Relay (#2694)
SiNZeRo Feb 28, 2019
6f4d7c7
[RELAY][PASS] Enable switching CanonicalizeOps in pass_enabled (#2696)
vinx13 Feb 28, 2019
73bcd57
Docker updates (#2702)
mshawcroft Feb 28, 2019
5005283
[Relay][Doc] Separate arguments types formatting with comma (#2690)
wweic Feb 28, 2019
6d5028e
[DOC] MXNet frontend tutorial (#2688)
kazum Feb 28, 2019
d9263ea
Few docs fixes (#2703)
ruslo Feb 28, 2019
08ae245
Pin pylint version 2.2.2 (#2698)
mshawcroft Mar 1, 2019
bcee9b1
[Relay] fix checkwellform (#2705)
MarisaKirisame Mar 1, 2019
f4c6ede
support MXNet _minimum and _maximum (#2709)
haojin2 Mar 1, 2019
8023059
[TOPI][Relay] Fix default `out_dtype` for `conv2d_NCHWc` and Relay (#…
eqy Mar 1, 2019
05dccba
Improve task_lint.sh robustness (#2711)
mshawcroft Mar 1, 2019
c2de9c8
Docker build script robustness (#2710)
mshawcroft Mar 1, 2019
d32112f
[Doc] Relay tutorial - Deploy the Pretrained Model on Raspberry Pi (#…
makihiro Mar 1, 2019
1184dae
Defined a common base class for TensorComputeOp and ComputeOp (#2587)
derisavi Mar 1, 2019
8e3058d
[Relay/TOPI][Op] Add batch_matmul in relay and TOPI (#2561)
icemelon Mar 1, 2019
da8b8ae
[ARITH] Analyzer Infra, ConstIntBound, Modular (#2668)
tqchen Mar 2, 2019
820136b
[EXPR] ir_operator.h->expr_operator.h Centralize const folder logic (…
tqchen Mar 3, 2019
c2d7430
[RELAY][PASS] Common subexpression elimination (#2639)
vinx13 Mar 3, 2019
f8dceb7
[Tensorflow, NNVM, TOPI] Support for logical operators (#2453)
ashutoshparkhi Mar 3, 2019
ffc050c
[Relay][Frontend] Add a few mxnet ops in relay frontend (#2704)
icemelon Mar 3, 2019
f083fd0
[Relay][Frontend] Add slice axis op in mxnet converter (#2706)
icemelon Mar 4, 2019
a507613
[DOCS] Fix tutorial (#2724)
imorinaga Mar 4, 2019
75e19e3
[Relay] Higher order reverse mode automatic differentiation that work…
MarisaKirisame Mar 4, 2019
027be62
Fix compilation on XCode 10 (#2731)
ajtulloch Mar 4, 2019
4f0cbde
[DOCKER] Pin pylint==1.9.4 (#2727)
mshawcroft Mar 4, 2019
42aba5c
Docs: pip dependencies for testing (#2728)
ruslo Mar 4, 2019
09e70c7
[COMMUNITY] @sgrechanik-h -> Reviewer (#2732)
ZihengJiang Mar 5, 2019
b06f6a6
use LLVM linker (#2713)
mnboos Mar 5, 2019
b8179e4
[RELAY][OP] Faster-RCNN Proposal OP (#2725)
vinx13 Mar 5, 2019
dc33e75
[Relay][Frontend][Bugfix] Fix bug in converting slice_axis when axis …
icemelon Mar 5, 2019
db70eb3
[VERSION] Update to 0.6.dev (#2736)
ZihengJiang Mar 6, 2019
0d7c51b
[Relay][TOPI][OP] intel_graphics conv2d alterlayout support relay, ad…
Laurawly Mar 6, 2019
789240b
[RUNTIME][OPENCL] clFinish before releasing memory (#2737)
kazum Mar 7, 2019
6897580
[Bugfix][Relay][Frontend] Fix bug in mxnet converter for slick_like (…
icemelon Mar 7, 2019
2db78d4
Improve NNVM to Relay conversion (#2734)
kazum Mar 8, 2019
4773a62
[Relay] Add logical operators (#2743)
Mar 9, 2019
f04400e
Fix vmlal.s16 code generation for int8 x int8 -> int32 (#2748)
ajtulloch Mar 9, 2019
2470364
revert PR#2420 nms changes (#2747)
Laurawly Mar 9, 2019
4b29df7
[Relay][Quantization] Speed-aware quantization scheme improvement (#2…
vinx13 Mar 9, 2019
6238bcb
[RUNTIME][OPENCL] set type_key even when platform is not available (#…
kazum Mar 9, 2019
532632e
[DLPACK] fix flaky ctypes support (#2759)
tqchen Mar 9, 2019
433f393
Improvements to the conda build (#2742)
Mar 9, 2019
2 changes: 2 additions & 0 deletions CONTRIBUTORS.md
@@ -19,6 +19,7 @@ We do encourage everyone to work anything they are interested in.
- [Yizhi Liu](https://github.com/yzhliu) (PMC): @yzhliu - jvm, topi, relay
- [Masahiro Masuda](https://github.com/masahi): @masahi - topi, relay
- [Thierry Moreau](https://github.com/tmoreau89) (PMC): @tmoreau89 - vta
- [Jared Roesch](https://github.com/jroesch): @jroesch - relay
- [Siva](https://github.com/srkreddy1238): @srkreddy1238 - frontends, golang
- [Haichen Shen](https://github.com/icemelon9) (PMC): @icemelon9 - relay, topi
- [Zhixun Tan](https://github.com/phisiart): @phisiart - opengl, web
@@ -32,6 +33,7 @@ We do encourage everyone to work anything they are interested in.
- [Tianqi Chen](https://github.com/tqchen): @tqchen
- [Liangfu Chen](https://github.com/liangfu): @liangfu
- [Zhi Chen](https://github.com/zhiics): @zhiics
- [Sergei Grechanik](https://github.com/sgrechanik-h): @sgrechanik-h
- [Nick Hynes](https://github.com/nhynes): @nhynes
- [Yuwei Hu](https://github.com/Huyuwei): @Huyuwei
- [Yizhi Liu](https://github.com/yzhliu) : @yzhliu
2 changes: 1 addition & 1 deletion apps/extension/Makefile
@@ -6,7 +6,7 @@ PKG_CFLAGS = -std=c++11 -O2 -fPIC\
-I${TVM_ROOT}/3rdparty/dlpack/include\
-I${TVM_ROOT}/3rdparty/HalideIR/src

-PKG_LDFLAGS =-L${TVM_ROOT}/lib
+PKG_LDFLAGS =-L${TVM_ROOT}/build
UNAME_S := $(shell uname -s)

ifeq ($(UNAME_S), Darwin)
29 changes: 28 additions & 1 deletion apps/extension/python/tvm_ext/__init__.py
@@ -31,7 +31,7 @@ def __init__(self, handle):
def __del__(self):
# You can also call your own customized
# deleter if you can free it via your own FFI.
-tvm.nd.free_extension_handle(self.handle, 17)
+tvm.nd.free_extension_handle(self.handle, self.__class__._tvm_tcode)

@property
def _tvm_handle(self):
@@ -42,3 +42,30 @@ def __getitem__(self, idx):

# Register IntVec extension on python side.
tvm.register_extension(IntVec, IntVec)


nd_create = tvm.get_global_func("tvm_ext.nd_create")
nd_add_two = tvm.get_global_func("tvm_ext.nd_add_two")
nd_get_addtional_info = tvm.get_global_func("tvm_ext.nd_get_addtional_info")

class NDSubClass(tvm.nd.NDArrayBase):
"""Example for subclassing TVM's NDArray infrastructure.

By inheriting TMV's NDArray, external libraries could
leverage TVM's FFI without any modification.
"""
# Should be consistent with the type-trait set in the backend
_array_type_code = 1

@staticmethod
def create(addtional_info):
return nd_create(addtional_info)

@property
def addtional_info(self):
return nd_get_addtional_info(self)

def __add__(self, other):
return nd_add_two(self, other)

tvm.register_extension(NDSubClass, NDSubClass)
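The hunk above registers `NDSubClass` with `tvm.register_extension`, pairing the Python-side `_array_type_code` with the C++ `array_type_info` trait so the FFI can pick the right wrapper class for a returned handle. As a rough illustration of that type-code dispatch idea, here is a minimal pure-Python sketch; it does not use TVM, and every name in it (`register_extension`, `wrap`, `additional_info`) is illustrative only:

```python
# Sketch of type-code based extension dispatch, loosely modeled on the
# registration pattern in this diff. Hypothetical names; no TVM dependency.

_registry = {}  # type_code -> Python class used to wrap returned handles


def register_extension(cls):
    # Map the class's declared type code to the class, analogous to how
    # tvm.register_extension associates _array_type_code with a wrapper.
    _registry[cls._array_type_code] = cls
    return cls


def wrap(type_code, payload):
    # Emulate the FFI boundary: choose the Python wrapper by type code.
    return _registry[type_code](payload)


@register_extension
class NDSubClass:
    _array_type_code = 1  # must stay consistent with the backend trait

    def __init__(self, additional_info):
        self.additional_info = additional_info

    def __add__(self, other):
        # Mirrors tvm_ext.nd_add_two: combine the payloads of two arrays.
        return wrap(self._array_type_code,
                    self.additional_info + other.additional_info)


a, b = NDSubClass(3), NDSubClass(5)
print((a + b).additional_info)  # 8
```

The point of the indirection is that the backend only ever reports a numeric type code; the frontend registry turns that code back into the right Python class.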
85 changes: 84 additions & 1 deletion apps/extension/src/tvm_ext.cc
@@ -7,24 +7,87 @@
#include <tvm/runtime/packed_func.h>
#include <tvm/runtime/module.h>
#include <tvm/runtime/registry.h>
#include <tvm/runtime/ndarray.h>
#include <tvm/packed_func_ext.h>
#include <tvm/runtime/device_api.h>

namespace tvm_ext {
using IntVector = std::vector<int>;
class NDSubClass;
} // namespace tvm_ext

namespace tvm {
namespace runtime {
template<>
-struct extension_class_info<tvm_ext::IntVector> {
+struct extension_type_info<tvm_ext::IntVector> {
static const int code = 17;
};
template<>
struct array_type_info<tvm_ext::NDSubClass> {
static const int code = 1;
};
} // namespace tvm
} // namespace runtime

using namespace tvm;
using namespace tvm::runtime;

namespace tvm_ext {
/*!
* \brief A subclass of TVM's NDArray.
*
* To use this extension, an external library should
*
* 1) Inherit TVM's NDArray and NDArray container,
* and define the trait `array_type_info` for this class.
*
* 2) Define a constructor in the inherited class that accepts
* a pointer to TVM's Container, which is nullable.
*
* 3) On Python frontend, inherit `tvm.nd.NDArrayBase`,
* define the class attribute `_array_type_code` consistent to
* the C++ type trait, and register the subclass using `tvm.register_extension`.
*/
class NDSubClass : public tvm::runtime::NDArray {
public:
class SubContainer : public NDArray::Container {
public:
SubContainer(int addtional_info) :
addtional_info_(addtional_info) {
array_type_code_ = array_type_info<NDSubClass>::code;
}
static bool Is(NDArray::Container *container) {
SubContainer *c = static_cast<SubContainer*>(container);
return c->array_type_code_ == array_type_info<NDSubClass>::code;
}
int addtional_info_{0};
};
NDSubClass(NDArray::Container *container) {
if (container == nullptr) {
data_ = nullptr;
return;
}
CHECK(SubContainer::Is(container));
container->IncRef();
data_ = container;
}
~NDSubClass() {
this->reset();
}
NDSubClass AddWith(const NDSubClass &other) const {
SubContainer *a = static_cast<SubContainer*>(data_);
SubContainer *b = static_cast<SubContainer*>(other.data_);
CHECK(a != nullptr && b != nullptr);
return NDSubClass(new SubContainer(a->addtional_info_ + b->addtional_info_));
}
int get_additional_info() const {
SubContainer *self = static_cast<SubContainer*>(data_);
CHECK(self != nullptr);
return self->addtional_info_;
}
};
} // namespace tvm_ext

namespace tvm_ext {

TVM_REGISTER_EXT_TYPE(IntVector);
@@ -64,6 +127,26 @@ TVM_REGISTER_GLOBAL("device_api.ext_dev")
.set_body([](TVMArgs args, TVMRetValue *rv) {
*rv = (*tvm::runtime::Registry::Get("device_api.cpu"))();
});

TVM_REGISTER_GLOBAL("tvm_ext.nd_create")
.set_body([](TVMArgs args, TVMRetValue *rv) {
int addtional_info = args[0];
*rv = NDSubClass(new NDSubClass::SubContainer(addtional_info));
});

TVM_REGISTER_GLOBAL("tvm_ext.nd_add_two")
.set_body([](TVMArgs args, TVMRetValue *rv) {
NDSubClass a = args[0];
NDSubClass b = args[1];
*rv = a.AddWith(b);
});

TVM_REGISTER_GLOBAL("tvm_ext.nd_get_addtional_info")
.set_body([](TVMArgs args, TVMRetValue *rv) {
NDSubClass a = args[0];
*rv = a.get_additional_info();
});

} // namespace tvm_ext

// External function exposed to runtime.
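On the C++ side, each `TVM_REGISTER_GLOBAL("tvm_ext.…")` call publishes a function under a dotted name, and the Python side in `tvm_ext/__init__.py` retrieves it with `tvm.get_global_func`. A small pure-Python sketch of such a name-keyed registry follows; it is an analogy only, not TVM's implementation, and `register_global`/`get_global_func`/`nd_create` here are hypothetical stand-ins:

```python
# Sketch of a name -> function registry, analogous in spirit to
# TVM_REGISTER_GLOBAL on the C++ side and tvm.get_global_func on the
# Python side. Illustrative names; no TVM dependency.

_global_funcs = {}


def register_global(name):
    # Decorator: publish a function under a dotted name.
    def deco(fn):
        _global_funcs[name] = fn
        return fn
    return deco


def get_global_func(name):
    # Look up a previously registered function by name.
    try:
        return _global_funcs[name]
    except KeyError:
        raise KeyError(f"global function {name!r} is not registered")


@register_global("tvm_ext.nd_create")
def nd_create(additional_info):
    # In the real extension this would construct an NDSubClass container;
    # here a dict stands in for the payload.
    return {"additional_info": additional_info}


f = get_global_func("tvm_ext.nd_create")
print(f(3)["additional_info"])  # 3
```

This name-based lookup is what lets the Python frontend stay decoupled from the C++ symbols: only the registered string names cross the FFI boundary.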
16 changes: 16 additions & 0 deletions apps/extension/tests/test_ext.py
@@ -32,6 +32,7 @@ def test_sym_add():
c = tvm_ext.sym_add(a, b)
assert c.a == a and c.b == b


def test_ext_vec():
ivec = tvm_ext.ivec_create(1, 2, 3)
assert(isinstance(ivec, tvm_ext.IntVec))
@@ -44,6 +45,7 @@ def ivec_cb(v2):

tvm.convert(ivec_cb)(ivec)


def test_extract_ext():
fdict = tvm.extract_ext_funcs(tvm_ext._LIB.TVMExtDeclare)
assert fdict["mul"](3, 4) == 12
@@ -68,7 +70,21 @@ def check_llvm():
check_llvm()


def test_nd_subclass():
a = tvm_ext.NDSubClass.create(addtional_info=3)
b = tvm_ext.NDSubClass.create(addtional_info=5)
c = a + b
d = a + a
e = b + b
assert(a.addtional_info == 3)
assert(b.addtional_info == 5)
assert(c.addtional_info == 8)
assert(d.addtional_info == 6)
assert(e.addtional_info == 10)


if __name__ == "__main__":
test_nd_subclass()
test_extern_call()
test_ext_dev()
test_ext_vec()
20 changes: 20 additions & 0 deletions conda/cross-linux.cmake
@@ -0,0 +1,20 @@
# this one is important
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_PLATFORM Linux)
#this one not so much
set(CMAKE_SYSTEM_VERSION 1)

# specify the cross compiler
set(CMAKE_C_COMPILER $ENV{CC})

# where is the target environment
set(CMAKE_FIND_ROOT_PATH $ENV{PREFIX} $ENV{BUILD_PREFIX}/$ENV{HOST}/sysroot)

# search for programs in the build host directories
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
# for libraries and headers in the target directories
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)

# god-awful hack because it seems to not run correct tests to determine this:
set(__CHAR_UNSIGNED___EXITCODE 1)
4 changes: 2 additions & 2 deletions conda/nnvm/meta.yaml
@@ -1,4 +1,4 @@
-{% set version = "0.5.dev" %}
+{% set version = "0.6.dev" %}

package:
name: nnvm
@@ -8,7 +8,7 @@ source:
path: ../..

build:
-  number: 1
+  number: 0
skip: True # [win]

requirements:
4 changes: 2 additions & 2 deletions conda/topi/meta.yaml
@@ -1,4 +1,4 @@
-{% set version = "0.5.dev" %}
+{% set version = "0.6.dev" %}

package:
name: topi
@@ -8,7 +8,7 @@ source:
path: ../..

build:
-  number: 1
+  number: 0

requirements:
host:
26 changes: 23 additions & 3 deletions conda/tvm-libs/build.sh
@@ -1,5 +1,9 @@
#!/bin/bash

# Fix for OSX build to hide the clang LLVM
rm -f ${BUILD_PREFIX}/bin/llvm-config
rm -rf ${BUILD_PREFIX}/lib/cmake

set -e

if [ -z "$PREFIX" ]; then
@@ -9,13 +13,29 @@ fi
if [ -z "$cuda" ] || [ "$cuda" == "False" ]; then
CUDA_OPT=""
else
-  CUDA_OPT="-DUSE_CUDA=ON"
+  CUDA_OPT="-DUSE_CUDA=ON -DUSE_CUBLAS=ON"
fi

if [ "$target_platform" == "osx-64" ]; then
# macOS 64 bits
METAL_OPT="" # Conda can only target 10.9 for now
TOOLCHAIN_OPT=""
else
METAL_OPT=""
if [ "$target_platform" == "linux-64" ]; then
# Linux 64 bits
TOOLCHAIN_OPT="-DCMAKE_TOOLCHAIN_FILE=${RECIPE_DIR}/../cross-linux.cmake"
else
# Windows (or 32 bits, which we don't support)
METAL_OPT=""
TOOLCHAIN_OPT=""
fi
fi

rm -rf build || true
mkdir -p build
cd build
-cmake $CUDA_OPT -DUSE_LLVM=ON -DINSTALL_DEV=ON -DCMAKE_INSTALL_PREFIX="$PREFIX" ..
-make -j4 VERBOSE=1
+cmake $METAL_OPT $CUDA_OPT -DUSE_LLVM=ON -DINSTALL_DEV=ON -DCMAKE_INSTALL_PREFIX="$PREFIX" $TOOLCHAIN_OPT ..
+make -j${CPU_COUNT} VERBOSE=1
make install
cd ..
16 changes: 6 additions & 10 deletions conda/tvm-libs/meta.yaml
@@ -1,4 +1,4 @@
-{% set version = "0.5.dev" %}
+{% set version = "0.6.dev" %}

package:
name: tvm-libs
@@ -8,21 +8,17 @@ source:
path: ../..

build:
-  number: 1
+  number: 0
string: cuda{{ cuda_version }}_{{ PKG_BUILDNUM }} # [cuda]

requirements:
build:
- {{ compiler('cxx') }} # [linux]
- llvmdev ==6.0.0 # [osx]
host:
# The OS X build will require some manual setup or it will break
# See https://conda.io/docs/user-guide/tasks/build-packages/compiler-tools.html#macos-sdk
# It is also ass-backward because of llvm brokeness when mixed with the
# conda OS X compiler
- {{ compiler('cxx') }} # [osx]
# See https://docs.conda.io/projects/conda-build/en/latest/source/resources/compiler-tools.html#macos-sdk
- {{ compiler('cxx') }}
host:
- cmake
- llvmdev ==6.0.0 # [linux]
- llvmdev ==6.0.0
- zlib # [linux]
run:
- {{ pin_compatible('cudatoolkit', lower_bound=cuda_version, max_pin='x.x') }} # [cuda]
4 changes: 2 additions & 2 deletions conda/tvm/meta.yaml
@@ -1,4 +1,4 @@
-{% set version = "0.5.dev" %}
+{% set version = "0.6.dev" %}

package:
name: tvm
@@ -8,7 +8,7 @@ source:
path: ../..

build:
-  number: 1
+  number: 0

requirements:
build:
2 changes: 1 addition & 1 deletion docker/Dockerfile.ci_gpu
@@ -24,7 +24,7 @@ COPY install/ubuntu_install_sphinx.sh /install/ubuntu_install_sphinx.sh
RUN bash /install/ubuntu_install_sphinx.sh

# Fix recommonmark to latest version
-RUN git clone https://github.com/rtfd/recommonmark
+RUN git clone --depth=1 https://github.com/rtfd/recommonmark
RUN cd recommonmark; python3 setup.py install

# Enable doxygen for c++ doc build
2 changes: 1 addition & 1 deletion docker/Dockerfile.ci_lint
@@ -6,4 +6,4 @@ RUN apt-get update && apt-get install -y sudo wget
COPY install/ubuntu_install_python.sh /install/ubuntu_install_python.sh
RUN bash /install/ubuntu_install_python.sh
RUN apt-get install -y doxygen graphviz
-RUN pip3 install cpplint pylint mypy
+RUN pip3 install cpplint pylint==1.9.4 mypy
2 changes: 1 addition & 1 deletion docker/Dockerfile.demo_opencl
@@ -45,7 +45,7 @@ RUN echo "Cloning TVM source & submodules"
ENV TVM_PAR_DIR="/usr"
RUN mkdir -p TVM_PAR_DIR && \
cd ${TVM_PAR_DIR} && \
-    git clone https://github.com/dmlc/tvm --recursive
+    git clone --depth=1 https://github.com/dmlc/tvm --recursive
#RUN git submodule update --init --recursive


8 changes: 7 additions & 1 deletion docker/install/install_tvm_cpu.sh
100644 → 100755
@@ -1,5 +1,11 @@
#!/bin/bash

set -e
set -u
set -o pipefail

cd /usr
-git clone https://github.com/dmlc/tvm --recursive
+git clone --depth=1 https://github.com/dmlc/tvm --recursive
cd /usr/tvm
echo set\(USE_LLVM llvm-config-6.0\) >> config.cmake
echo set\(USE_RPC ON\) >> config.cmake