
[tune] Show the name of training func, instead of just ImplicitFunction. #21029

Merged: 1 commit merged into ray-project:master on Dec 13, 2021

Conversation

@xwjiang2010 (Contributor) commented on Dec 11, 2021

Why are these changes needed?

With this change, the per-worker log prefix shows the name of the user's training function (here, train_mnist) instead of the generic ImplicitFunction wrapper class. See sample output:

(train_mnist pid=39303) WARNING:tensorflow:From /Users/xwjiang/ray/python/ray/tune/examples/tf_distributed_keras_example.py:49: _CollectiveAllReduceStrategyExperimental.__init__ (from tensorflow.python.distribute.collective_all_reduce_strategy) is deprecated and will be removed in a future version.
(train_mnist pid=39303) Instructions for updating:
(train_mnist pid=39303) use distribute.MultiWorkerMirroredStrategy instead
(train_mnist pid=39303) 2021-12-10 17:45:26.471905: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
(train_mnist pid=39303) To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
(train_mnist pid=39303) 2021-12-10 17:45:26.476495: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:272] Initialize GrpcChannelCache for job worker -> {0 -> 127.0.0.1:62516, 1 -> 127.0.0.1:62518}
(train_mnist pid=39303) 2021-12-10 17:45:26.477083: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:427] Started server with target: grpc://127.0.0.1:62518
(train_mnist pid=39304) WARNING:tensorflow:From /Users/xwjiang/ray/python/ray/tune/examples/tf_distributed_keras_example.py:49: _CollectiveAllReduceStrategyExperimental.__init__ (from tensorflow.python.distribute.collective_all_reduce_strategy) is deprecated and will be removed in a future version.
(train_mnist pid=39304) Instructions for updating:
(train_mnist pid=39304) use distribute.MultiWorkerMirroredStrategy instead
(train_mnist pid=39304) 2021-12-10 17:45:26.471349: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
(train_mnist pid=39304) To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
(train_mnist pid=39304) 2021-12-10 17:45:26.476290: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:272] Initialize GrpcChannelCache for job worker -> {0 -> 127.0.0.1:62516, 1 -> 127.0.0.1:62518}
(train_mnist pid=39304) 2021-12-10 17:45:26.476866: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:427] Started server with target: grpc://127.0.0.1:62516
(train_mnist pid=39303) 2021-12-10 17:45:26.988813: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:695] AUTO sharding policy will apply DATA sharding policy as it failed to apply FILE sharding policy because of the following reason: Found an unshardable source dataset: name: "TensorSliceDataset/_2"
(train_mnist pid=39303) op: "TensorSliceDataset"

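For context, here is a minimal sketch of the general technique, not necessarily the exact change in this PR: when Tune wraps a user-supplied training function into a trainable class, the wrapper can copy the function's __name__ onto the generated class, so actor log prefixes and string representations report train_mnist rather than ImplicitFunction. The wrap_trainable helper and the bare-bones ImplicitFunction class below are hypothetical stand-ins for illustration.

import inspect


def wrap_trainable(train_func):
    """Hypothetical helper that wraps a training function in a trainable class."""

    class ImplicitFunction:
        def step(self):
            return train_func()

    # Rename the generated class after the user function so that anything
    # printing the class name (e.g. the actor log prefix) shows "train_mnist".
    if inspect.isfunction(train_func):
        name = train_func.__name__
    else:
        name = type(train_func).__name__
    ImplicitFunction.__name__ = name
    ImplicitFunction.__qualname__ = name
    return ImplicitFunction


def train_mnist():
    pass  # placeholder for the real training loop


TrainableCls = wrap_trainable(train_mnist)
print(TrainableCls.__name__)  # -> "train_mnist"
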
Related issue number

Checks

  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

@krfricke merged commit f395b63 into ray-project:master on Dec 13, 2021.
@xwjiang2010 deleted the implicit_func branch on July 26, 2023.