
[rllib] [rfc] Proposed class renames #4813

Closed
ericl opened this issue May 17, 2019 · 10 comments · Fixed by #4820
Labels: RFC
ericl (Contributor) commented May 17, 2019

Per discussion with several users, there are some possible renames we could do to clarify the internal architecture:

  • Rename rllib.evaluation.PolicyGraph to rllib.policy.Policy. Similarly, rllib.evaluation.TFPolicyGraph becomes rllib.policy.TFPolicy and so on. The files are moved to a new rllib/policy dir.

  • Move the rllib/agents directory to rllib/train.

  • Add TF qualifier for consistency with Torch policies
    A3CPolicyGraph => A3CTFPolicy
    A3CTorchPolicyGraph => A3CTorchPolicy

  • Rename PolicyEvaluator to RolloutWorker
    deprecate compute_gradients() / apply_gradients()

  • Consolidate [local_evaluator, remote_evaluators] to single WorkerSet object

    workers = WorkerSet(...)
    local_evaluators => workers.local_worker()
    remote_evaluators => workers.remote_workers()
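
A minimal sketch of what such a WorkerSet could look like (these are hypothetical stand-in classes, not the real RLlib API):

```python
class RolloutWorker:
    """Stand-in for the renamed PolicyEvaluator; just holds policy weights."""

    def __init__(self):
        self.weights = {}

    def get_weights(self):
        return dict(self.weights)

    def set_weights(self, weights):
        self.weights = dict(weights)


class WorkerSet:
    """Container replacing the [local_evaluator, remote_evaluators] pair."""

    def __init__(self, local_worker, remote_workers=None):
        self._local_worker = local_worker
        self._remote_workers = list(remote_workers or [])

    def local_worker(self):
        return self._local_worker

    def remote_workers(self):
        return self._remote_workers

    def sync_weights(self):
        """Push the local (learner) weights to every remote worker."""
        weights = self._local_worker.get_weights()
        for worker in self._remote_workers:
            worker.set_weights(weights)
```

With this shape, `workers.local_worker()` and `workers.remote_workers()` replace the old attribute accesses directly, and cross-cutting operations like weight broadcast get one obvious home.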
    

For the most part, I think we can do these renames without breaking backwards compatibility by leaving aliases behind, though I'm not sure how easy moving an entire directory is.
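A backwards-compatibility shim for the rename could look roughly like this (a sketch only: the `deprecated_alias` helper is made up, and `Policy` here is a stand-in for the real class):

```python
import warnings


class Policy:
    """Stand-in for the renamed rllib.policy.Policy."""

    def __init__(self, config=None):
        self.config = config or {}


def deprecated_alias(new_cls, old_name):
    """Build a subclass that warns each time it is instantiated
    under the old name, while remaining fully interchangeable."""

    class Alias(new_cls):
        def __init__(self, *args, **kwargs):
            warnings.warn(
                "{} was renamed to {}; please update your imports.".format(
                    old_name, new_cls.__name__),
                DeprecationWarning,
                stacklevel=2,
            )
            super().__init__(*args, **kwargs)

    Alias.__name__ = old_name
    return Alias


# The old name keeps working (isinstance checks included), but warns:
PolicyGraph = deprecated_alias(Policy, "PolicyGraph")
```

Because the alias is a subclass, existing user code that subclasses or isinstance-checks `PolicyGraph` keeps working while the warning nudges people toward the new name.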

ericl pinned this issue May 17, 2019
ericl changed the title [rllib] Proposed class renames → [rllib] [rfc] Proposed class renames May 17, 2019
ericl (Contributor, Author) commented May 17, 2019

cc @hartikainen @bbalaji-ucsd

bbalaji-ucsd commented

+1 for all changes, except agents

Move the rllib/agents directory to rllib/train

I'm conflicted about this one, given that agents is used in Coach, TF Agents, RLGraph, and ChainerRL. I like the point you raised in one of the threads that the name is confusing when multiple agents are used. I quite like the use of algos in Spinning Up.

There are also two concepts here:

  1. The algorithm that puts together various aspects of learning a policy: loss, model, replay buffer sampling, exploration, filters, etc.
  2. The orchestrator that manages things like initializing the policy, interacting with the env, collecting samples, updating the policy, evaluating, and so on.

Which of these two is train? The verb aligns well with the latter in my opinion, and perhaps can be algo-agnostic.

ericl (Contributor, Author) commented May 17, 2019

We could leave it as agents, since this would just be the status quo. Algos does seem slightly nicer, but maybe not nice enough to do a rename.

The algorithm that puts together various aspects of learning a policy - loss, model, replay buffer sampling, exploration, filters, etc.
The orchestrator that manages things like initialize policy, interact with env, collect samples, update policy, evaluate and so on.
Which of these two is train? The verb aligns well with the latter in my opinion, and perhaps can be algo agnostic.

Here's the current division of these concepts:

loss, model, exploration -> Policy (rllib.policy)

sampling, filters -> RolloutWorker (rllib.evaluation) (arguably, filters should be made into a trainable part of the policy, though this would require some work to implement)

replay, sgd -> PolicyOptimizer (rllib.optimizers) (or perhaps TrainingStrategy?)

putting it all together -> Trainer (rllib.agents)
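
To make that division concrete, here is a toy sketch of how the four pieces would fit together; all names and method signatures are illustrative stand-ins, not the real RLlib interfaces:

```python
# Toy sketch of the proposed division of labor (hypothetical signatures).

class Policy:
    """loss, model, exploration (rllib.policy)."""

    def compute_actions(self, obs_batch):
        return [0] * len(obs_batch)  # trivial fixed action


class RolloutWorker:
    """sampling, filters (rllib.evaluation)."""

    def __init__(self, policy):
        self.policy = policy

    def sample(self):
        obs = [None] * 4  # pretend we stepped an env for 4 timesteps
        return {"obs": obs, "actions": self.policy.compute_actions(obs)}


class PolicyOptimizer:
    """replay, sgd (rllib.optimizers)."""

    def step(self, worker):
        batch = worker.sample()
        # ...compute and apply gradients from `batch` here...
        return len(batch["obs"])


class Trainer:
    """puts it all together (rllib.agents)."""

    def __init__(self):
        self.worker = RolloutWorker(Policy())
        self.optimizer = PolicyOptimizer()

    def train(self):
        return {"timesteps_this_iter": self.optimizer.step(self.worker)}
```

In this toy version, `Trainer().train()` returns `{"timesteps_this_iter": 4}`; the point is only which responsibility lives in which layer.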

richardliaw (Contributor) commented May 17, 2019

I think PolicyOptimizer can stay as is; I've never found it too confusing (and TrainingStrategy reminds me of "DistributedStrategy", which isn't too informative).

bbalaji-ucsd commented May 17, 2019

This is where there is a lack of consensus in implementations across libraries.

loss, model, exploration -> Policy (rllib.policy)

Most libraries call this the agent.

putting it all together -> Trainer (rllib.agents)

Coach calls this graph_manager. I like the name trainer. I can't find an equivalent in TF Agents.

Fine with sticking to PolicyOptimizer.

Just to point out why things are inconsistent and confusing: TF Agents refers to exploration as policies, and ChainerRL refers to optimizers for rmsprop_async and policies for the action distribution.

hartikainen (Contributor) commented May 17, 2019

Generally I really like the idea of clarifying the class/module names.

Rename rllib.evaluation.PolicyGraph to rllib.policy.Policy.
This makes sense. Unrelated to the naming, though, I think it's a little confusing that currently all the RL logic is incorporated in the policy graphs: PolicyGraph includes things like value functions, which in my opinion are separate from the policies, and the interaction (e.g. training the policy using the value functions) should be handled by the Algorithm or Trainer. That is, RLlib currently seems to be organized something like the following:

  • Trainer
    • PolicyGraph
      • Policies pi(a|s)
      • Value functions V(s), Q(s,a)
    • Environment
    • PolicyEvaluator (or more generally EnvironmentSampler)
    • Replay Pool

Whereas I think it's more natural to have the value functions etc. on the same level with the Policy:

  • Trainer
    • Policies pi(a|s)
    • Value functions V(s), Q(s,a)
    • Reward functions r(s,a,s')
    • Whatever else is needed to learn by the algorithm
    • Environment
    • PolicyEvaluator (or more generally EnvironmentSampler)
    • Replay Pool

Move the rllib/agents directory to rllib/train.

At some point I thought that train/trainer would be a good name for these. In hindsight, though, I actually think that Algorithm might be a better name, mainly because these classes seem to correspond exactly to what we generally call a reinforcement learning algorithm, e.g. SAC/PPO/DDPG/whatever. I don't feel strongly, though.

Add TF qualifier for consistency with Torch policies
A3CPolicyGraph => A3CTFPolicy
A3CTorchPolicyGraph => A3CTorchPolicy

👍

Rename PolicyEvaluator to RolloutWorker

RolloutWorker sounds good to me. Could also be something like EnvironmentSampler?

deprecate compute_gradients() / apply_gradients()

Can you elaborate on this?

Consolidate [local_evaluator, remote_evaluators] to single WorkerSet object

👍

ericl (Contributor, Author) commented May 17, 2019

Policies pi(a|s)
Value functions V(s), Q(s,a)
Reward functions r(s,a,s')
Whatever else is needed to learn by the algorithm

While conceptually there isn't a reason to combine them from the RL point of view, it's valuable to group them into one opaque object that "the system" can manipulate. Otherwise, things like weight synchronization, multi-agent, etc. become harder to manage, especially in a distributed setting.

However, that doesn't mean we can't internally decompose Policy into different orthogonal concepts that can be easily put together. That should be easy to implement incrementally. Note that custom models already have things like value_function().
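
As a sketch of the "opaque object" argument: if everything the algorithm learns lives behind one Policy interface, the system can synchronize weights in a distributed setting without caring what is inside (hypothetical names and shapes):

```python
class Policy:
    """Opaque container: pi, value functions, target networks, etc. all
    live behind one weights interface (a sketch, not the real class)."""

    def __init__(self):
        self.variables = {"pi": [0.0], "vf": [0.0]}

    def get_weights(self):
        return {k: list(v) for k, v in self.variables.items()}

    def set_weights(self, weights):
        self.variables = {k: list(v) for k, v in weights.items()}


def sync_weights(local_policy, remote_policies):
    """'The system' broadcasts learner weights to all sampling copies
    without knowing whether a value function, target net, etc. is inside."""
    weights = local_policy.get_weights()
    for policy in remote_policies:
        policy.set_weights(weights)
```

If value functions were peers of the policy rather than inside it, every such system-level operation would need to enumerate all the learnable components separately.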

RolloutWorker sounds good to me. Could also be something like EnvironmentSampler?

Yeah, my only requirement here is that it ends with *Worker, to connect with the num_workers config. I'm partial to EvaluationWorker, since it seems like a nice clarification and isn't that much of a change from the current naming.

deprecate compute_gradients() / apply_gradients()

Originally, having these two functions available on PolicyEvaluator and PolicyGraph was important for asynchronous optimization (i.e., A3C). However, it turns out asynchronous optimization isn't that great when async sampling will do, so it makes sense to deprecate these functions.

bbalaji-ucsd commented

I'd prefer RolloutWorker over EvaluationWorker. In my mind, EvaluationWorker refers to evaluating the policy without exploration. OpenAI Five uses similar terminology:

[screenshot of OpenAI Five terminology not preserved]

bbalaji-ucsd commented

In the Agent class, we can have a train() function, so that the Trainer abstraction is maintained.

ericl (Contributor, Author) commented May 18, 2019

Ok, I agree RolloutWorker is less ambiguous. No intent to rename anything to agent, though; we just went through getting rid of the term agent in the last release :)

ericl self-assigned this May 19, 2019
richardliaw pushed a commit that referenced this issue May 20, 2019
[rllib] Rename PolicyGraph => Policy, move from evaluation/ to policy/ (#4819)

This implements some of the renames proposed in #4813. We leave behind backwards-compatibility aliases for *PolicyGraph and SampleBatch.
stefanpantic added a commit to wingman-ai/ray that referenced this issue May 28, 2019
* [rllib] Rename PolicyGraph => Policy, move from evaluation/ to policy/ (ray-project#4819)

This implements some of the renames proposed in ray-project#4813. We leave behind backwards-compatibility aliases for *PolicyGraph and SampleBatch.
ericl unpinned this issue Jun 2, 2019
stefanpantic added a commit to wingman-ai/ray that referenced this issue Jun 6, 2019
stefanpantic added a commit to wingman-ai/ray that referenced this issue Jun 21, 2019
* [rllib] Remove dependency on TensorFlow (ray-project#4764)

* remove hard tf dep

* add test

* comment fix

* fix test

* Dynamic Custom Resources - create and delete resources (ray-project#3742)

* Update tutorial link in doc (ray-project#4777)

* [rllib] Implement learn_on_batch() in torch policy graph

* Fix `ray stop` by killing raylet before plasma (ray-project#4778)

* Fatal check if object store dies (ray-project#4763)

* [rllib] fix clip by value issue as TF upgraded (ray-project#4697)

*  fix clip_by_value issue

*  fix typo

* [autoscaler] Fix submit (ray-project#4782)

* Queue tasks in the raylet in between async callbacks (ray-project#4766)

* Add a SWAP TaskQueue so that we can keep track of tasks that are temporarily dequeued

* Fix bug where tasks that fail to be forwarded don't appear to be local by adding them to SWAP queue

* cleanups

* updates

* updates

* [Java][Bazel]  Refine auto-generated pom files (ray-project#4780)

* Bump version to 0.7.0 (ray-project#4791)

* [JAVA] setDefaultUncaughtExceptionHandler to log uncaught exception in user thread. (ray-project#4798)

* Add WorkerUncaughtExceptionHandler

* Fix

* revert bazel and pom

* [tune] Fix CLI test (ray-project#4801)

* Fix pom file generation (ray-project#4800)

* [rllib] Support continuous action distributions in IMPALA/APPO (ray-project#4771)

* [rllib] TensorFlow 2 compatibility (ray-project#4802)

* Change tagline in documentation and README. (ray-project#4807)

* Update README.rst, index.rst, tutorial.rst and  _config.yml

* [tune] Support non-arg submit (ray-project#4803)

* [autoscaler] rsync cluster (ray-project#4785)

* [tune] Remove extra parsing functionality (ray-project#4804)

* Fix Java worker log dir (ray-project#4781)

* [tune] Initial track integration (ray-project#4362)

Introduces a minimally invasive utility for logging experiment results. A broad requirement for this tool is that it should integrate seamlessly with Tune execution.

* [rllib] [RFC] Dynamic definition of loss functions and modularization support (ray-project#4795)

* dynamic graph

* wip

* clean up

* fix

* document trainer

* wip

* initialize the graph using a fake batch

* clean up dynamic init

* wip

* spelling

* use builder for ppo pol graph

* add ppo graph

* fix naming

* order

* docs

* set class name correctly

* add torch builder

* add custom model support in builder

* cleanup

* remove underscores

* fix py2 compat

* Update dynamic_tf_policy_graph.py

* Update tracking_dict.py

* wip

* rename

* debug level

* rename policy_graph -> policy in new classes

* fix test

* rename ppo tf policy

* port appo too

* forgot grads

* default policy optimizer

* make default config optional

* add config to optimizer

* use lr by default in optimizer

* update

* comments

* remove optimizer

* fix tuple actions support in dynamic tf graph

* [rllib] Rename PolicyGraph => Policy, move from evaluation/ to policy/ (ray-project#4819)

This implements some of the renames proposed in ray-project#4813
We leave behind backwards-compatibility aliases for *PolicyGraph and SampleBatch.

* [Java] Dynamic resource API in Java (ray-project#4824)

* Add default values for Wgym flags

* Fix import

* Fix issue when starting `raylet_monitor` (ray-project#4829)

* Refactor ID Serial 1: Separate ObjectID and TaskID from UniqueID (ray-project#4776)

* Enable BaseId.

* Change TaskID and make python test pass

* Remove unnecessary functions and fix test failure and change TaskID to
16 bytes.

* Java code change draft

* Refine

* Lint

* Update java/api/src/main/java/org/ray/api/id/TaskId.java

Co-Authored-By: Hao Chen <[email protected]>

* Update java/api/src/main/java/org/ray/api/id/BaseId.java

Co-Authored-By: Hao Chen <[email protected]>

* Update java/api/src/main/java/org/ray/api/id/BaseId.java

Co-Authored-By: Hao Chen <[email protected]>

* Update java/api/src/main/java/org/ray/api/id/ObjectId.java

Co-Authored-By: Hao Chen <[email protected]>

* Address comment

* Lint

* Fix SINGLE_PROCESS

* Fix comments

* Refine code

* Refine test

* Resolve conflict

* Fix bug in which actor classes are not exported multiple times. (ray-project#4838)

* Bump Ray master version to 0.8.0.dev0 (ray-project#4845)

* Add section to bump version of master branch and cleanup release docs (ray-project#4846)

* Fix import

* Export remote functions when first used and also fix bug in which rem… (ray-project#4844)

* Export remote functions when first used and also fix bug in which remote functions and actor classes are not exported from workers during subsequent ray sessions.

* Documentation update

* Fix tests.

* Fix grammar

* Update wheel versions in documentation to 0.8.0.dev0 and 0.7.0. (ray-project#4847)

* [tune] Later expansion of local_dir (ray-project#4806)

* [rllib] [RFC] Deprecate Python 2 / RLlib (ray-project#4832)

* Fix a typo in kubernetes yaml (ray-project#4872)

* Move global state API out of global_state object. (ray-project#4857)

* Install bazel in autoscaler development configs. (ray-project#4874)

* [tune] Fix up Ax Search and Examples (ray-project#4851)

* update Ax for cleaner API

* docs update

* [rllib] Update concepts docs and add "Building Policies in Torch/TensorFlow" section (ray-project#4821)

* wip

* fix index

* fix bugs

* todo

* add imports

* note on get ph

* note on get ph

* rename to building custom algs

* add rnn state info

* [rllib] Fix error getting kl when simple_optimizer: True in multi-agent PPO

* Replace ReturnIds with NumReturns in TaskInfo to reduce the size (ray-project#4854)

* Refine TaskInfo

* Fix

* Add a test to print task info size

* Lint

* Refine

* Update deps commits of opencensus to support building with bzl 0.25.x (ray-project#4862)

* Update deps to support bzl 2.5.x

* Fix

* Upgrade arrow to latest master (ray-project#4858)

* [tune] Auto-init Ray + default SearchAlg (ray-project#4815)

* Bump version from 0.8.0.dev0 to 0.7.1. (ray-project#4890)

* [rllib] Allow access to batches prior to postprocessing (ray-project#4871)

* [rllib] Fix Multidiscrete support (ray-project#4869)

* Refactor redis callback handling (ray-project#4841)

* Add CallbackReply

* Fix

* fix linting by format.sh

* Fix linting

* Address comments.

* Fix

* Initial high-level code structure of CoreWorker. (ray-project#4875)

* Drop duplicated string format (ray-project#4897)

This string format is unnecessary. java_worker_options has been appended to the commandline later.

* Refactor ID Serial 2: change all ID functions to `CamelCase` (ray-project#4896)

* Hotfix for change of from_random to FromRandom (ray-project#4909)

* [rllib] Fix documentation on custom policies (ray-project#4910)

* wip

* add docs

* lint

* todo sections

* fix doc

* [rllib] Allow Torch policies access to full action input dict in extra_action_out_fn (ray-project#4894)

* fix torch extra out

* preserve setitem

* fix docs

* [tune] Pretty print params json in logger.py (ray-project#4903)

* [sgd] Distributed Training via PyTorch (ray-project#4797)

Implements distributed SGD using distributed PyTorch.

* [rllib] Rough port of DQN to build_tf_policy() pattern (ray-project#4823)

* fetching objects in parallel in _get_arguments_for_execution (ray-project#4775)

* [tune] Disallow setting resources_per_trial when it is already configured (ray-project#4880)

* disallow it

* import fix

* fix example

* fix test

* fix tests

* Update mock.py

* fix

* make less convoluted

* fix tests

* [rllib] Rename PolicyEvaluator => RolloutWorker (ray-project#4820)

* Fix local cluster yaml (ray-project#4918)

* [tune] Directional metrics for components (ray-project#4120) (ray-project#4915)

* [Core Worker] implement ObjectInterface and add test framework (ray-project#4899)

* [tune] Make PBT Quantile fraction configurable (ray-project#4912)

* Better organize ray_common module (ray-project#4898)

* Fix error

* [tune] Add requirements-dev.txt and update docs for contributing (ray-project#4925)

* Add requirements-dev.txt and update docs.

* Update doc/source/tune-contrib.rst

Co-Authored-By: Richard Liaw <[email protected]>

* Unpin everything except for yapf.

* Fix compute actions return value

* Bump version from 0.7.1 to 0.8.0.dev1. (ray-project#4937)

* Update version number in documentation after release 0.7.0 -> 0.7.1 and 0.8.0.dev0 -> 0.8.0.dev1. (ray-project#4941)

* [doc] Update developer docs with bazel instructions (ray-project#4944)

* [C++] Add hash table to Redis-Module (ray-project#4911)

* Flush lineage cache on task submission instead of execution (ray-project#4942)

* [rllib] Add docs on how to use TF eager execution (ray-project#4927)

* [rllib] Port remainder of algorithms to build_trainer() pattern (ray-project#4920)

* Fix resource bookkeeping bug with acquiring unknown resource. (ray-project#4945)

* Update aws keys for uploading wheels to s3. (ray-project#4948)

* Upload wheels on Travis to branchname/commit_id. (ray-project#4949)

* [Java] Fix serializing issues of `RaySerializer` (ray-project#4887)

* Fix

* Address comment.

* fix (ray-project#4950)

* [Java] Add inner class `Builder` to build call options. (ray-project#4956)

* Add Builder class

* format

* Refactor by IDE

* Remove unnecessary dependency

* Make release stress tests work and improve them. (ray-project#4955)

* Use proper session directory for debug_string.txt (ray-project#4960)

* [core] Use int64_t instead of int to keep track of fractional resources (ray-project#4959)
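The change above reflects a fixed-point scheme: fractional resource amounts are stored as integer multiples of a small granularity so that repeated bookkeeping operations stay exact. A minimal sketch of the idea (the granularity constant and function names are illustrative, not Ray's actual internals):

```python
# Fixed-point resource bookkeeping: store fractional amounts as
# integer "resource units" so repeated adds/subtracts stay exact.
RESOURCE_UNIT = 1e-4  # hypothetical granularity (1 unit = 0.0001 CPU)

def to_units(amount: float) -> int:
    """Convert a fractional resource amount to whole units."""
    return round(amount / RESOURCE_UNIT)

def from_units(units: int) -> float:
    """Convert units back to a fractional amount for display."""
    return units * RESOURCE_UNIT

# With plain floats, 0.1 added ten times drifts away from 1.0;
# with integer units the sum is exact.
total_units = sum(to_units(0.1) for _ in range(10))
```

Using a wide integer type (int64_t) for the unit count avoids both float drift and overflow for realistically large resource totals.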

* [core worker] add task submission & execution interface (ray-project#4922)

* [sgd] Add non-distributed PyTorch runner (ray-project#4933)

* Add non-distributed PyTorch runner

* use dist.is_available() instead of checking OS

* Nicer exception

* Fix bug in choosing port

* Refactor some code

* Address comments

* Address comments

* Flush all tasks from local lineage cache after a node failure (ray-project#4964)

* Remove typing from setup.py install_requirements. (ray-project#4971)

* [Java] Fix bug of `BaseID` in multi-threading case. (ray-project#4974)

* [rllib] Fix DDPG example (ray-project#4973)

* Upgrade CI clang-format to 6.0 (ray-project#4976)

* [Core worker] add store & task provider (ray-project#4966)

* Fix bugs in the a3c code template. (ray-project#4984)

* Inherit Function Docstrings and other metadata (ray-project#4985)

* Fix a crash when unknown worker registering to raylet (ray-project#4992)

* [gRPC] Use gRPC for inter-node-manager communication (ray-project#4968)
stefanpantic added a commit to wingman-ai/ray that referenced this issue Jun 26, 2019
* [rllib] Remove dependency on TensorFlow (ray-project#4764)

* remove hard tf dep

* add test

* comment fix

* fix test

* Dynamic Custom Resources - create and delete resources (ray-project#3742)

* Update tutorial link in doc (ray-project#4777)

* [rllib] Implement learn_on_batch() in torch policy graph

* Fix `ray stop` by killing raylet before plasma (ray-project#4778)

* Fatal check if object store dies (ray-project#4763)

* [rllib] fix clip by value issue as TF upgraded (ray-project#4697)

* fix clip_by_value issue

* fix typo

* [autoscaler] Fix submit (ray-project#4782)

* Queue tasks in the raylet in between async callbacks (ray-project#4766)

* Add a SWAP TaskQueue so that we can keep track of tasks that are temporarily dequeued

* Fix bug where tasks that fail to be forwarded don't appear to be local by adding them to SWAP queue

* cleanups

* updates

* updates

* [Java][Bazel]  Refine auto-generated pom files (ray-project#4780)

* Bump version to 0.7.0 (ray-project#4791)

* [JAVA] setDefaultUncaughtExceptionHandler to log uncaught exception in user thread. (ray-project#4798)

* Add WorkerUncaughtExceptionHandler

* Fix

* revert bazel and pom

* [tune] Fix CLI test (ray-project#4801)

* Fix pom file generation (ray-project#4800)

* [rllib] Support continuous action distributions in IMPALA/APPO (ray-project#4771)

* [rllib] TensorFlow 2 compatibility (ray-project#4802)

* Change tagline in documentation and README. (ray-project#4807)

* Update README.rst, index.rst, tutorial.rst and  _config.yml

* [tune] Support non-arg submit (ray-project#4803)

* [autoscaler] rsync cluster (ray-project#4785)

* [tune] Remove extra parsing functionality (ray-project#4804)

* Fix Java worker log dir (ray-project#4781)

* [tune] Initial track integration (ray-project#4362)

Introduces a minimally invasive utility for logging experiment results. A broad requirement for this tool is that it should integrate seamlessly with Tune execution.
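The shape of such a minimally invasive result logger can be sketched as an append-only JSON-lines writer; the class name and file format below are illustrative, not the actual `ray.tune.track` API:

```python
import json
import os
import time

class TrackLogger:
    """Sketch of a 'track'-style experiment logger: append one JSON
    line per reported result, independent of how training is run."""

    def __init__(self, logdir):
        os.makedirs(logdir, exist_ok=True)
        self.path = os.path.join(logdir, "result.json")

    def log(self, **metrics):
        """Record one result row; a timestamp is added automatically."""
        metrics.setdefault("timestamp", time.time())
        with open(self.path, "a") as f:
            f.write(json.dumps(metrics) + "\n")
```

Because each call only appends a line, the logger can be dropped into an existing training loop without restructuring it, which is what "minimally invasive" implies here.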

* [rllib] [RFC] Dynamic definition of loss functions and modularization support (ray-project#4795)

* dynamic graph

* wip

* clean up

* fix

* document trainer

* wip

* initialize the graph using a fake batch

* clean up dynamic init

* wip

* spelling

* use builder for ppo pol graph

* add ppo graph

* fix naming

* order

* docs

* set class name correctly

* add torch builder

* add custom model support in builder

* cleanup

* remove underscores

* fix py2 compat

* Update dynamic_tf_policy_graph.py

* Update tracking_dict.py

* wip

* rename

* debug level

* rename policy_graph -> policy in new classes

* fix test

* rename ppo tf policy

* port appo too

* forgot grads

* default policy optimizer

* make default config optional

* add config to optimizer

* use lr by default in optimizer

* update

* comments

* remove optimizer

* fix tuple actions support in dynamic tf graph

* [rllib] Rename PolicyGraph => Policy, move from evaluation/ to policy/ (ray-project#4819)

This implements some of the renames proposed in ray-project#4813
We leave behind backwards-compatibility aliases for *PolicyGraph and SampleBatch.
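A backwards-compatibility alias of the kind mentioned above can be as simple as a subclass that warns on instantiation. A minimal sketch (the shim below is illustrative, not RLlib's actual code):

```python
import warnings

class Policy:
    """The class under its new name (stand-in for rllib.policy.Policy)."""
    def __init__(self, config=None):
        self.config = config or {}

def renamed_class(new_cls, old_name):
    """Build an alias class that behaves like new_cls but emits a
    DeprecationWarning when instantiated under the old name."""
    class _Alias(new_cls):
        def __init__(self, *args, **kwargs):
            warnings.warn(
                "{} has been renamed to {}".format(old_name, new_cls.__name__),
                DeprecationWarning, stacklevel=2)
            new_cls.__init__(self, *args, **kwargs)
    _Alias.__name__ = old_name
    return _Alias

# The old import path keeps working, but warns on use:
PolicyGraph = renamed_class(Policy, "PolicyGraph")
```

Because the alias is a real subclass, existing `isinstance` checks and user subclasses of the old name keep working while the warning nudges callers toward the new name.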

* [Java] Dynamic resource API in Java (ray-project#4824)

* Add default values for Wgym flags

* Fix import

* Fix issue when starting `raylet_monitor` (ray-project#4829)

* Refactor ID Serial 1: Separate ObjectID and TaskID from UniqueID (ray-project#4776)

* Enable BaseId.

* Change TaskID and make python test pass

* Remove unnecessary functions, fix test failure, and change TaskID to 16 bytes.

* Java code change draft

* Refine

* Lint

* Update java/api/src/main/java/org/ray/api/id/TaskId.java

Co-Authored-By: Hao Chen <[email protected]>

* Update java/api/src/main/java/org/ray/api/id/BaseId.java

Co-Authored-By: Hao Chen <[email protected]>

* Update java/api/src/main/java/org/ray/api/id/BaseId.java

Co-Authored-By: Hao Chen <[email protected]>

* Update java/api/src/main/java/org/ray/api/id/ObjectId.java

Co-Authored-By: Hao Chen <[email protected]>

* Address comment

* Lint

* Fix SINGLE_PROCESS

* Fix comments

* Refine code

* Refine test

* Resolve conflict

* Fix bug in which actor classes are not exported multiple times. (ray-project#4838)

* Bump Ray master version to 0.8.0.dev0 (ray-project#4845)

* Add section to bump version of master branch and cleanup release docs (ray-project#4846)

* Fix import

* Export remote functions when first used and also fix bug in which rem… (ray-project#4844)

* Export remote functions when first used and also fix bug in which remote functions and actor classes are not exported from workers during subsequent ray sessions.

* Documentation update

* Fix tests.

* Fix grammar

* Update wheel versions in documentation to 0.8.0.dev0 and 0.7.0. (ray-project#4847)

* [tune] Later expansion of local_dir (ray-project#4806)

* [rllib] [RFC] Deprecate Python 2 / RLlib (ray-project#4832)

* Fix a typo in kubernetes yaml (ray-project#4872)

* Move global state API out of global_state object. (ray-project#4857)

* Install bazel in autoscaler development configs. (ray-project#4874)

* [tune] Fix up Ax Search and Examples (ray-project#4851)

* update Ax for cleaner API

* docs update

* [rllib] Update concepts docs and add "Building Policies in Torch/TensorFlow" section (ray-project#4821)

* wip

* fix index

* fix bugs

* todo

* add imports

* note on get ph

* note on get ph

* rename to building custom algs

* add rnn state info

* [rllib] Fix error getting kl when simple_optimizer: True in multi-agent PPO

* Replace ReturnIds with NumReturns in TaskInfo to reduce the size (ray-project#4854)

* Refine TaskInfo

* Fix

* Add a test to print task info size

* Lint

* Refine
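Storing only the number of returns works because each return object's ID can be re-derived from the task ID plus an index, so the full list never needs to travel with the TaskInfo. A sketch of that idea (the hash construction is illustrative, not Ray's actual ID scheme):

```python
import hashlib

def return_object_id(task_id: bytes, index: int) -> bytes:
    """Derive the ID of the index-th return object of a task, so a
    TaskInfo only needs to carry NumReturns rather than every ID."""
    return hashlib.sha1(task_id + index.to_bytes(4, "little")).digest()

# Any worker holding the task ID and NumReturns can reconstruct the
# same IDs deterministically:
task = b"\x01" * 16
return_ids = [return_object_id(task, i) for i in range(3)]
```

The derivation must be deterministic and collision-resistant across indices, which is why a hash over (task ID, index) is the natural shape for it.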

* Update deps commits of opencensus to support building with bzl 0.25.x (ray-project#4862)

* Update deps to support bzl 2.5.x

* Fix

* Upgrade arrow to latest master (ray-project#4858)

* [tune] Auto-init Ray + default SearchAlg (ray-project#4815)

* Bump version from 0.8.0.dev0 to 0.7.1. (ray-project#4890)

* [rllib] Allow access to batches prior to postprocessing (ray-project#4871)

* [rllib] Fix Multidiscrete support (ray-project#4869)

* Refactor redis callback handling (ray-project#4841)

* Add CallbackReply

* Fix

* fix linting by format.sh

* Fix linting

* Address comments.

* Fix

* Initial high-level code structure of CoreWorker. (ray-project#4875)

* Drop duplicated string format (ray-project#4897)

This string format is unnecessary; java_worker_options is already appended to the command line later.

* Refactor ID Serial 2: change all ID functions to `CamelCase` (ray-project#4896)

* Hotfix for change of from_random to FromRandom (ray-project#4909)

* [rllib] Fix documentation on custom policies (ray-project#4910)

* wip

* add docs

* lint

* todo sections

* fix doc

* [rllib] Allow Torch policies access to full action input dict in extra_action_out_fn (ray-project#4894)

* fix torch extra out

* preserve setitem

* fix docs

* [tune] Pretty print params json in logger.py (ray-project#4903)

* [sgd] Distributed Training via PyTorch (ray-project#4797)

Implements distributed SGD using distributed PyTorch.

* [rllib] Rough port of DQN to build_tf_policy() pattern (ray-project#4823)


* Fix Java CI failure (ray-project#4995)

* fix handling of non-integral timeout values in signal.receive (ray-project#5002)
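A typical fix for a non-integral timeout bug is to stop truncating float seconds when converting to the underlying unit. A sketch of the pattern (the function name and millisecond convention are assumptions, not the actual signal.receive internals):

```python
def timeout_to_ms(timeout_s):
    """Convert a timeout in (possibly fractional) seconds to whole
    milliseconds. A naive int(timeout_s) would silently turn 0.5s
    into 0 and make the call non-blocking."""
    if timeout_s is None:
        return -1  # convention assumed here: block indefinitely
    if timeout_s < 0:
        raise ValueError("timeout must be non-negative")
    return max(1, int(round(timeout_s * 1000)))
```

Rounding and clamping to at least one unit preserves the caller's intent for sub-second timeouts instead of degenerating to a zero (poll-only) wait.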

* temp fix for build (ray-project#5006)

* [tune] Tutorial UX Changes (ray-project#4990)

* add integration, iris, ASHA, recursive changes, set reuse_actors=True, and enable Analysis as a return object

* docstring

* fix up example

* fix

* cleanup tests

* experiment analysis

* Fix valgrind build by installing new version of valgrind (ray-project#5008)

* Fix no cpus test (ray-project#5009)

* Fix tensorflow-1.14 installation in jenkins (ray-project#5007)

* Add dynamic worker options for worker command. (ray-project#4970)

* Add fields for fbs

* WIP

* Fix compilation errors

* Add java part

* FIx

* Fix

* Fix

* Fix lint

* Refine API

* address comments and add test

* Fix

* Address comment.

* Address comments.

* Fix linting

* Refine

* Fix lint

* WIP: address comment.

* Fix java

* Fix py

* Refine

* Fix

* Fix

* Fix linting

* Fix lint

* Address comments

* WIP

* Fix

* Fix

* minor refine

* Fix lint

* Fix raylet test.

* Fix lint

* Update src/ray/raylet/worker_pool.h

Co-Authored-By: Hao Chen <[email protected]>

* Update java/runtime/src/main/java/org/ray/runtime/AbstractRayRuntime.java

Co-Authored-By: Hao Chen <[email protected]>

* Address comments.

* Address comments.

* Fix test.

* Update src/ray/raylet/worker_pool.h

Co-Authored-By: Hao Chen <[email protected]>

* Address comments.

* Address comments.

* Fix

* Fix lint

* Fix lint

* Fix

* Address comments.

* Fix linting

* [docs] docs for running Tensorboard without sudo (ray-project#5015)

* Instructions for running Tensorboard without sudo

When running TensorBoard to visualize Ray results on multi-user clusters where we don't have sudo access, such as RISE clusters, a few commands need to be run first so that TensorBoard can write to the tmp directory. This is a common enough use case that it's worth documenting for Tune.
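The commands in question boil down to pointing TensorBoard's scratch space at a directory the user owns. A sketch of the setup (paths are examples):

```shell
# Give TensorBoard a user-owned temp directory instead of the
# shared /tmp location it cannot write to without sudo.
export TMPDIR="/tmp/$(id -un)"
mkdir -p "$TMPDIR"
# Then launch TensorBoard as usual (logdir path is an example):
# tensorboard --logdir="$HOME/ray_results"
```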

* Update tune-usage.rst

* [ci] Change Jenkins to py3 (ray-project#5022)

* conda3

* integration

* add nevergrad, remotedata

* pytest 0.3.1

* otherdockers

* setup

* tune

* [gRPC] Migrate gcs data structures to protobuf (ray-project#5024)

* [rllib] Add QMIX mixer parameters to optimizer param list (ray-project#5014)

* add mixer params

* Update qmix_policy.py

* [grpc] refactor rpc server to support multiple io services (ray-project#5023)

* [rllib] Give error if sample_async is used with pytorch for A3C (ray-project#5000)

* give error if sample_async is used with pytorch

* update

* Update a3c.py

* [tune] Update MNIST Example (ray-project#4991)

* Add entropy coeff schedule

* Revert "Merge with ray master"

This reverts commit 108bfa2, reversing
changes made to 2e0eec9.

* Revert "Revert "Merge with ray master""

This reverts commit 92c0f88.

* Remove entropy decay stuff
@rkooo567 rkooo567 added the RFC RFC issues label Dec 27, 2020