
Use of mapreduce by Flux doesn't rewrite intrinsics #154

Closed · asbisen opened this issue Oct 3, 2018 · 4 comments

asbisen commented Oct 3, 2018

While running a simple Dense-layer MNIST model on Julia 1.0.1 with CUDA 10 on Ubuntu 18.04, I am encountering the following warning.

┌ Warning: calls to Base intrinsics might be GPU incompatible
│   exception =
│    You called exp(x::T) where T<:Union{Float32, Float64} in Base.Math at special/exp.jl:75, maybe you intended to call exp(x::Float32) in CUDAnative at /home/ubuntu/.julia/packages/CUDAnative/AGfq2/src/device/libdevice.jl:90 instead?
│    Stacktrace:
│     [1] exp at special/exp.jl:75
│     [2] mapreducedim_kernel_parallel at /home/ubuntu/.julia/packages/CuArrays/clDeS/src/mapreduce.jl:29
└ @ CUDAnative ~/.julia/packages/CUDAnative/AGfq2/src/compiler/irgen.jl:111
using Flux
using Flux: crossentropy
using Flux: @epochs
using CuArrays

using Random

push!(LOAD_PATH, "/data/projects/snippets/")
using Snippets


function getdata(split=0.8; onehot=true)
    d = Flux.Data.MNIST.images()
    nr = length(d)

    # Convert to a Tensor
    data = zeros(Float64, (28,28,1,nr))
    foreach(i -> data[:,:,:,i] = Float64.(d[i]), 1:nr)
    labels = Flux.Data.MNIST.labels()

    # shuffle data and labels
    rperm = randperm(length(labels))
    data = data[:,:,:, rperm]
    labels = labels[rperm]

    (a,b)=split_index(labels)

    return ((data[:,:,:,a], labels[a]), (data[:,:,:,b], labels[b]))
end


function net(input_size, hidden_size, num_classes)
    model = Chain(
        Dense(input_size, hidden_size, relu),
        Dense(hidden_size, num_classes),
        softmax
    ) |> gpu
    return model
end


function accuracy(m, x, y)
    nr = length(y)
    x = reshape(x, 28*28,nr) |> gpu
    pred_y = m(x)
    pred_y = Flux.onecold(pred_y) .- 1
    return sum(pred_y .== y)/nr
end


function main()
    input_size = 28*28
    hidden_size = input_size * 2
    num_classes = 10
    lr = 0.01
    num_epochs = 5
    batch_size = 100

    model = net(input_size, hidden_size, num_classes)
    
    loss(x, y) = crossentropy(model(x), y)
    opt = ADAM(params(model), lr)

    (train_x, train_y), (test_x, test_y) = getdata()

    for e in 1:num_epochs
        for batch in Iterators.partition(1:length(train_y), batch_size)
            x = reshape(train_x[:,:,:,batch], 28*28, length(batch)) .|> Float32 |> gpu
            y = Flux.onehotbatch(train_y[batch], 0:9) |> gpu
            Flux.train!(loss, [(x,y)], opt)
        end
        println("Epoch $e")
    end

    return model
end



m=main()
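
For reference, the stack trace points at CuArrays' mapreducedim_kernel_parallel, i.e. a reduction whose mapping function is Base exp. A minimal sketch of that shape, independent of Flux (assuming CuArrays' cu helper and a plain Float32 array), would be:

using CuArrays

# Small device array; cu converts the host array to a CuArray.
x = cu(rand(Float32, 1024))

# The mapping function is handed to the reduction as-is, so Base exp
# (rather than a device intrinsic) ends up in the generated kernel,
# which is what the warning above complains about.
s = sum(exp, x)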
maleadt (Member) commented Oct 4, 2018

Package versions?

@vchuravy should our new broadcast catch this?

maleadt changed the title from "(Flux) Warning: calls to Base intrinsics might be GPU incompatible" to "Use of broadcast by Flux doesn't rewrite intrinsics" on Oct 4, 2018
maleadt added the bug label on Oct 4, 2018
asbisen (Author) commented Oct 4, 2018

I have tried with the release version of Flux and now with master. Below is the current state of the package versions.

(v1.0) pkg> st
    Status `~/.julia/environments/v1.0/Project.toml`
  [3a865a2d] CuArrays v0.8.0
  [a93c6f00] DataFrames v0.14.0
  [31c24e10] Distributions v0.16.4
  [587475ba] Flux v0.6.7+ #master (https://github.com/FluxML/Flux.jl.git)
  [28b8d3ca] GR v0.34.1
  [438e738f] PyCall v1.18.4
  [d330b81b] PyPlot v2.6.3

MikeInnes (Collaborator) commented

Flux accounts for this here by explicitly calling broadcasted(f, args...) to hit CuArrays' function swapping. Not sure if that's changed lately.
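
As a rough illustration (not the actual Flux or CuArrays code) of what that swapping buys, the two broadcasts below should end up calling the same libdevice intrinsic, the first via CuArrays' rewrite and the second spelled out by hand:

using CuArrays, CUDAnative

x = cu(rand(Float32, 16))

# Broadcast over a CuArray: CuArrays swaps Base exp for the device
# version before the kernel is compiled.
y1 = exp.(x)

# The same thing with the intrinsic written out explicitly; this is
# roughly what the rewrite does on the caller's behalf.
y2 = CUDAnative.exp.(x)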

vchuravy self-assigned this on Oct 4, 2018
vchuravy (Member) commented Oct 4, 2018

That is not coming from broadcasting; it is coming from mapreduce, which doesn't do function swapping.
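
Until mapreduce gets the same treatment, one possible workaround (a sketch only, assuming these reductions compile on current CuArrays) is to either broadcast first and reduce the result, or pass the CUDAnative intrinsic to the reduction yourself:

using CuArrays, CUDAnative

x = cu(rand(Float32, 1024))

# Option 1: do the elementwise part via broadcast, which does swap
# functions, then reduce the already-transformed array.
s1 = sum(exp.(x))

# Option 2: hand the reduction the device intrinsic directly, so
# Base exp never appears in the generated kernel.
s2 = sum(CUDAnative.exp, x)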

vchuravy changed the title from "Use of broadcast by Flux doesn't rewrite intrinsics" to "Use of mapreduce by Flux doesn't rewrite intrinsics" on Oct 4, 2018
maleadt closed this as completed on Feb 25, 2020