This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Bug of group2ctxs for model parallelism #10455

Closed
majia-yu opened this issue Apr 7, 2018 · 8 comments
Labels
Backend (Issues related to the backend of MXNet), Bug, Pending Requester Info

Comments

@majia-yu

majia-yu commented Apr 7, 2018

Description

mx.mod.Module with group2ctxs gives different results for num_gpus <= 7 versus num_gpus >= 8.

A short example is the module below, which behaves differently depending on the number of GPUs and args.group2ctxs, using a customized softmax.

(Its complete code sample follows later.)

graph = mx.mod.Module(
    context=[mx.cpu(0)],
    # softmax takes a list of GPUs and splits the computation across them
    symbol=softmax((512, 256), 500, gpus),
    group2ctxs=({'dev_%d' % i: [mx.gpu(gpu)] for i, gpu in enumerate(gpus)}
                if args.group2ctxs else None)
)
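
For illustration only (the actual magic.py was never attached to this issue), below is a minimal sketch of how such a split softmax is typically written with MXNet context groups; the fc_%d layer names and the even column-wise split of num_classes are assumptions, not the reporter's code.

import mxnet as mx

def softmax(data_shape, num_classes, gpus):
    data = mx.sym.Variable('data')            # expects input of shape data_shape
    label = mx.sym.Variable('softmax_label')
    parts = []
    for i, _ in enumerate(gpus):
        # Symbols created inside this scope carry ctx_group='dev_i'; Module
        # later maps that group name to a real device through group2ctxs.
        with mx.AttrScope(ctx_group='dev_%d' % i):
            part = mx.sym.FullyConnected(data=data,
                                         num_hidden=num_classes // len(gpus),
                                         name='fc_%d' % i)
        parts.append(part)
    logits = mx.sym.Concat(*parts, dim=1)     # gather the per-GPU slices
    return mx.sym.SoftmaxOutput(data=logits, label=label, name='softmax')

With group2ctxs set as above, each 'dev_%d' group (and therefore each fc slice) is placed on its own GPU; with group2ctxs=None every group falls back to the Module's default context.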

Steps to reproduce

The script magic.py (attached later) gives different results, as follows:

$ python magic.py --group2ctxs 0 --num_gpus 1    # A single-CPU no-group2ctxs case.
loss: 22.12132072448730468750000000000000
input_grads: 0.00030235992744565010070800781250

$ python magic.py --group2ctxs 1 --num_gpus 1    # Changing group2ctxs alone does not matter.
loss: 22.12132072448730468750000000000000
input_grads: 0.0003023596364073455333709716796

$ python magic.py --group2ctxs 1 --num_gpus 4    # 4 GPUs still give the same result.
loss: 22.12132072448730468750000000000000
input_grads: 0.00030235989834181964397430419922

$ python magic.py --group2ctxs 1 --num_gpus 7    # 7 GPUs still give the same result.
loss: 22.12132072448730468750000000000000
input_grads: 0.00030235946178436279296875000000

$ python magic.py --group2ctxs 1 --num_gpus 8    # 8 GPUs gives a totally different result!
loss: 22.12132263183593750000000000000000
input_grads: 0.00453650718554854393005371093750

Below is magic.py.

Environment info (Required)

----------Python Info----------
Version : 3.5.2
Compiler : GCC 5.4.0 20160609
Build : ('default', 'Sep 14 2017 22:51:06')
Arch : ('64bit', 'ELF')
------------Pip Info-----------
Version : 9.0.3
----------MXNet Info-----------
Version : 1.1.0
Commit Hash : 07a83a0
----------System Info----------
Platform : Linux-4.13.0-36-generic-x86_64-with-Ubuntu-16.04-xenial
system : Linux
release : 4.13.0-36-generic
version : #40~16.04.1-Ubuntu SMP Fri Feb 16 23:25:58 UTC 2018
----------Hardware Info----------
machine : x86_64
processor : x86_64
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 56
On-line CPU(s) list: 0-55
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
Stepping: 1
CPU MHz: 2400.215
CPU max MHz: 3300.0000
CPU min MHz: 1200.0000
BogoMIPS: 4800.43
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 35840K
NUMA node0 CPU(s): 0-13,28-41
NUMA node1 CPU(s): 14-27,42-55
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti retpoline intel_ppin intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
----------Network Test----------
Setting timeout: 10
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.1864 sec, LOAD: 1.3491 sec.
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0119 sec, LOAD: 3.8785 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0096 sec, LOAD: 0.4216 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.1880 sec, LOAD: 0.8841 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.2153 sec, LOAD: 0.4721 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0118 sec, LOAD: 0.9239 sec.

@sxjscience added the Bug label Apr 9, 2018
@eric-haibin-lin
Member

For the 8 gpus case, is the result deterministic across multiple runs?
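
A hypothetical way to check, assuming the reporter's magic.py is at hand (it was not attached here): rerun the 8-GPU case a few times and compare the printed values.

import subprocess

# Rerun the 8-GPU reproduction several times; if the collected outputs differ,
# the result is not deterministic across runs. magic.py is the reporter's
# script and is not part of this issue.
outputs = set()
for _ in range(5):
    out = subprocess.check_output(
        ['python', 'magic.py', '--group2ctxs', '1', '--num_gpus', '8'])
    outputs.add(out)
print('deterministic' if len(outputs) == 1 else 'differs across runs')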

@rahul003
Member

rahul003 commented May 29, 2018

Could you provide the script magic.py and the softmax implementation as an example to investigate?

@eric-haibin-lin
Member

@majia-yu do you mind providing a minimum reproducible example?

@zachgk
Contributor

zachgk commented Nov 16, 2018

@mxnet-label-bot add [Backend, Pending Requester Info]

@marcoabreu added the Backend and Pending Requester Info labels Nov 16, 2018
@vrakesh
Contributor

vrakesh commented Nov 26, 2018

@majia-yu Are you still facing this issue? Requesting a reproducible example for the same (magic.py)

@majia-yu
Author

@vrakesh No, everything is fine, thanks.

@access2rohit
Contributor

@majia-yu Hi, could you share how you solved the issue?

@access2rohit
Contributor

@majia-yu Were you able to solve this issue? If so, could you share your solution?
