Remove CMake files #4493
Conversation
@@ -91,7 +91,7 @@ setup_commands:
    # - sudo dpkg --configure -a
    # Install basics.
    - sudo apt-get update
-   - sudo apt-get install -y cmake pkg-config build-essential autoconf curl libtool unzip flex bison python
+   - sudo apt-get install -y pkg-config build-essential autoconf curl libtool unzip flex bison python
Should we get rid of even more? E.g., `pkg-config`, `autoconf`, `libtool`, `flex`, `bison`, `python`
in other places as well
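For illustration only, here is a sketch (not part of this PR) of what the install line could look like if those extra build dependencies also turn out to be unnecessary; whether they can actually be dropped is exactly the open question in this thread:

```bash
# Hypothetical further-trimmed setup command (an assumption, not this PR's change);
# each package would have to be removed one at a time and validated against a full CI run.
sudo apt-get update
sudo apt-get install -y build-essential curl unzip
```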
Jenkins seems to have failed with
I've seen this error before. Maybe we should use an absolute path to bazel?
I think I fixed the bazel docker error now (it was not installed, and also the path needed to be set).
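For context, a minimal sketch of the kind of fix described here, i.e. installing Bazel in the image and putting it on the PATH; the version number and install location are assumptions for illustration, not what this PR actually did:

```bash
# Hypothetical Bazel setup for the Docker image (version and paths are assumed,
# not taken from this PR). The upstream installer places bazel in $HOME/bin,
# which is not on PATH by default, hence the export below.
BAZEL_VERSION=0.23.2
wget -q "https://github.com/bazelbuild/bazel/releases/download/${BAZEL_VERSION}/bazel-${BAZEL_VERSION}-installer-linux-x86_64.sh"
bash "bazel-${BAZEL_VERSION}-installer-linux-x86_64.sh" --user
export PATH="$HOME/bin:$PATH"
bazel version  # sanity check that bazel resolves on PATH
```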
docker/base-deps/Dockerfile
Outdated
@@ -26,6 +21,7 @@ RUN apt-get update \
    && /opt/conda/bin/conda clean -y --all \
    && /opt/conda/bin/pip install \
    flatbuffers \
-   cython==0.29.0
+   cython==0.29.0 \
+   numpy==1.15.4
without this I'm getting
++ docker run --rm --shm-size=20G --memory=20G 87fffcd575c38f478db77c643c872b49299baae2dc8ff16094ea1d76d4e45656 /ray/ci/suppress_output /ray/python/ray/rllib/train.py --env PongDeterministic-v0 --run A3C --stop '{"training_iteration": 1}' --config '{"num_workers": 2}'
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
real 0m4.461s
user 0m3.665s
sys 0m3.451s
/opt/conda/lib/python2.7/site-packages/sklearn/ensemble/weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release.
from numpy.core.umath_tests import inner1d
2019-03-28 17:56:50,670 WARNING worker.py:1397 -- WARNING: Not updating worker name since `setproctitle` is not installed. Install this with `pip install setproctitle` (or ray[debug]) to enable monitoring of worker processes.
2019-03-28 17:56:50,671 INFO node.py:423 -- Process STDOUT and STDERR is being redirected to /tmp/ray/session_2019-03-28_17-56-50_8/logs.
2019-03-28 17:56:50,781 INFO services.py:363 -- Waiting for redis server at 127.0.0.1:60548 to respond...
2019-03-28 17:56:50,920 INFO services.py:363 -- Waiting for redis server at 127.0.0.1:50558 to respond...
2019-03-28 17:56:50,923 INFO services.py:760 -- Starting Redis shard with 4.29 GB max memory.
2019-03-28 17:56:50,971 INFO services.py:1383 -- Starting the Plasma object store with 6.44 GB memory using /dev/shm.
2019-03-28 17:56:51,533 INFO tune.py:64 -- Did not find checkpoint file in /root/ray_results/default.
2019-03-28 17:56:51,533 INFO tune.py:211 -- Starting a new experiment.
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 0/32 CPUs, 0/0 GPUs
Memory usage on this node: 12.2/135.1 GB
Traceback (most recent call last):
File "/ray/python/ray/rllib/train.py", line 151, in <module>
run(args, parser)
File "/ray/python/ray/rllib/train.py", line 145, in run
resume=args.resume)
File "/ray/python/ray/tune/tune.py", line 311, in run_experiments
raise_on_failed_trial=raise_on_failed_trial)
File "/ray/python/ray/tune/tune.py", line 235, in run
runner.step()
File "/ray/python/ray/tune/trial_runner.py", line 234, in step
next_trial = self._get_next_trial() # blocking
File "/ray/python/ray/tune/trial_runner.py", line 397, in _get_next_trial
self._update_trial_queue(blocking=wait_for_trial)
File "/ray/python/ray/tune/trial_runner.py", line 531, in _update_trial_queue
trials = self._search_alg.next_trials()
File "/ray/python/ray/tune/suggest/basic_variant.py", line 57, in next_trials
trials = list(self._trial_generator)
File "/ray/python/ray/tune/suggest/basic_variant.py", line 87, in _generate_trials
experiment_tag=experiment_tag)
File "/ray/python/ray/tune/config_parser.py", line 200, in create_trial_from_spec
**trial_kwargs)
File "/ray/python/ray/tune/trial.py", line 269, in __init__
Trial._registration_check(trainable_name)
File "/ray/python/ray/tune/trial.py", line 329, in _registration_check
from ray import rllib # noqa: F401
File "/ray/python/ray/rllib/__init__.py", line 11, in <module>
from ray.rllib.evaluation.policy_graph import PolicyGraph
File "/ray/python/ray/rllib/evaluation/__init__.py", line 2, in <module>
from ray.rllib.evaluation.policy_evaluator import PolicyEvaluator
File "/ray/python/ray/rllib/evaluation/policy_evaluator.py", line 20, in <module>
from ray.rllib.evaluation.sampler import AsyncSampler, SyncSampler
File "/ray/python/ray/rllib/evaluation/sampler.py", line 16, in <module>
from ray.rllib.evaluation.tf_policy_graph import TFPolicyGraph
File "/ray/python/ray/rllib/evaluation/tf_policy_graph.py", line 15, in <module>
from ray.rllib.models.lstm import chop_into_sequences
File "/ray/python/ray/rllib/models/__init__.py", line 1, in <module>
from ray.rllib.models.catalog import ModelCatalog, MODEL_DEFAULTS
File "/ray/python/ray/rllib/models/catalog.py", line 19, in <module>
from ray.rllib.models.fcnet import FullyConnectedNetwork
File "/ray/python/ray/rllib/models/fcnet.py", line 6, in <module>
import tensorflow.contrib.slim as slim
File "/opt/conda/lib/python2.7/site-packages/tensorflow/contrib/__init__.py", line 41, in <module>
from tensorflow.contrib import distributions
File "/opt/conda/lib/python2.7/site-packages/tensorflow/contrib/distributions/__init__.py", line 44, in <module>
from tensorflow.contrib.distributions.python.ops.estimator import *
File "/opt/conda/lib/python2.7/site-packages/tensorflow/contrib/distributions/python/ops/estimator.py", line 21, in <module>
from tensorflow.contrib.learn.python.learn.estimators.head import _compute_weighted_loss
File "/opt/conda/lib/python2.7/site-packages/tensorflow/contrib/learn/__init__.py", line 93, in <module>
from tensorflow.contrib.learn.python.learn import *
File "/opt/conda/lib/python2.7/site-packages/tensorflow/contrib/learn/python/__init__.py", line 28, in <module>
from tensorflow.contrib.learn.python.learn import *
File "/opt/conda/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/__init__.py", line 30, in <module>
from tensorflow.contrib.learn.python.learn import estimators
File "/opt/conda/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/__init__.py", line 302, in <module>
from tensorflow.contrib.learn.python.learn.estimators.dnn import DNNClassifier
File "/opt/conda/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn.py", line 34, in <module>
from tensorflow.contrib.learn.python.learn.estimators import dnn_linear_combined
File "/opt/conda/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py", line 36, in <module>
from tensorflow.contrib.learn.python.learn.estimators import estimator
File "/opt/conda/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 52, in <module>
from tensorflow.contrib.learn.python.learn.learn_io import data_feeder
File "/opt/conda/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_io/__init__.py", line 26, in <module>
from tensorflow.contrib.learn.python.learn.learn_io.dask_io import extract_dask_data
File "/opt/conda/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_io/dask_io.py", line 33, in <module>
import dask.dataframe as dd
File "/opt/conda/lib/python2.7/site-packages/dask/dataframe/__init__.py", line 3, in <module>
from .core import (DataFrame, Series, Index, _Frame, map_partitions,
File "/opt/conda/lib/python2.7/site-packages/dask/dataframe/core.py", line 20, in <module>
from .. import array as da
File "/opt/conda/lib/python2.7/site-packages/dask/array/__init__.py", line 8, in <module>
from .routines import (take, choose, argwhere, where, coarsen, insert,
File "/opt/conda/lib/python2.7/site-packages/dask/array/routines.py", line 245, in <module>
@wraps(np.matmul)
File "/opt/conda/lib/python2.7/functools.py", line 33, in update_wrapper
setattr(wrapper, attr, getattr(wrapped, attr))
AttributeError: 'numpy.ufunc' object has no attribute '__module__'
on Jenkins (not clear why). The fix is from scikit-image/scikit-image#3654.
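A quick way to confirm that diagnosis (an illustrative check, not something from the PR): the traceback points at `functools.wraps` failing because the `np.matmul` object in the image has no `__module__` attribute, so printing the NumPy version together with that attribute check shows whether the pin takes effect:

```bash
# Hypothetical diagnostic (not part of the PR): <image-id> is a placeholder for
# the built image. Prints the NumPy version and whether np.matmul exposes
# __module__, the attribute functools.wraps fails to copy in the traceback above.
docker run --rm <image-id> /opt/conda/bin/python -c \
  "import numpy as np; print(np.__version__, hasattr(np.matmul, '__module__'))"
```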
@@ -6,14 +6,9 @@ RUN apt-get update \
    git \
    wget \
    cmake \
    pkg-config \
can we get rid of `cmake` above?
I think I tried and it was needed to pip install gym
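One way to double-check that (an illustrative command, not something run in this PR) is to try installing gym with its Atari extras in a bare image that has no cmake; the Atari extras are where a native, cmake-driven build (atari-py) is most likely involved, so the install should fail there if cmake really is required:

```bash
# Hypothetical check (an assumption, not from the PR): a bare Ubuntu 16.04
# container without cmake; the gym[atari] install is expected to fail while
# building atari-py if cmake is in fact required.
docker run --rm ubuntu:16.04 bash -c \
  "apt-get update && apt-get install -y python-pip zlib1g-dev && pip install 'gym[atari]'"
```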
Jenkins retest this please
This is ready to merge now!
Where does dask get installed that we have to remove it?
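One way to find out (an illustrative check, not from the PR): dask most likely comes in with the Anaconda distribution under /opt/conda rather than from anything Ray itself installs, and conda can confirm that directly:

```bash
# Hypothetical check (not part of the PR): <image-id> is a placeholder for the
# built image; lists the dask package and the channel it was installed from.
docker run --rm <image-id> /opt/conda/bin/conda list dask
```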
This will fix #2887 (which is already basically done, but this does the finishing touches).