This repository has been archived by the owner on Mar 12, 2021. It is now read-only.
While running a simple Dense MNIST model on Julia 1.0.1 with CUDA 10 on Ubuntu 18.04, I encounter the following warning.
┌ Warning: calls to Base intrinsics might be GPU incompatible
│ exception =
│ You called exp(x::T) where T<:Union{Float32, Float64} in Base.Math at special/exp.jl:75, maybe you intended to call exp(x::Float32) in CUDAnative at /home/ubuntu/.julia/packages/CUDAnative/AGfq2/src/device/libdevice.jl:90 instead?
│ Stacktrace:
│ [1] exp at special/exp.jl:75
│ [2] mapreducedim_kernel_parallel at /home/ubuntu/.julia/packages/CuArrays/clDeS/src/mapreduce.jl:29
└ @ CUDAnative ~/.julia/packages/CUDAnative/AGfq2/src/compiler/irgen.jl:111
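The warning indicates that the broadcast kernel generated for the model (here via softmax's call to `exp` inside `mapreducedim_kernel_parallel`) picked up `Base.exp` rather than the libdevice version that CUDAnative provides. A minimal sketch, separate from the reporter's script, of the difference the warning points at; it assumes CuArrays/CUDAnative are installed and a CUDA device is available:

```julia
using CuArrays, CUDAnative

x = cu(rand(Float32, 10))

# Broadcasting Base.exp can leave the Base intrinsic in the
# generated kernel and trigger the warning above:
y_base = exp.(x)

# Qualifying the call selects CUDAnative's libdevice
# implementation (the one the warning suggests):
y_dev = CUDAnative.exp.(x)
```

Both broadcasts compute the same values; the distinction is only which `exp` ends up compiled into the GPU kernel.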
using Flux
using Flux: crossentropy
using Flux: @epochs
using CuArrays
using Random

push!(LOAD_PATH, "/data/projects/snippets/")
using Snippets

function getdata(split=0.8; onehot=true)
    d = Flux.Data.MNIST.images()
    nr = length(d)
    # Convert to a tensor
    data = zeros(Float64, (28, 28, 1, nr))
    foreach(i -> data[:, :, :, i] = Float64.(d[i]), 1:nr)
    labels = Flux.Data.MNIST.labels()
    # Shuffle data and labels
    rperm = randperm(length(labels))
    data = data[:, :, :, rperm]
    labels = labels[rperm]
    (a, b) = split_index(labels)
    return ((data[:, :, :, a], labels[a]), (data[:, :, :, b], labels[b]))
end

function net(input_size, hidden_size, num_classes)
    model = Chain(
        Dense(input_size, hidden_size, relu),
        Dense(hidden_size, num_classes),
        softmax
    ) |> gpu
    return model
end

function accuracy(m, x, y)
    nr = length(y)
    x = reshape(x, 28 * 28, nr) |> gpu
    pred_y = m(x)
    pred_y = Flux.onecold(pred_y) .- 1
    return sum(pred_y .== y) / nr
end

function main()
    input_size = 28 * 28
    hidden_size = input_size * 2
    num_classes = 10
    lr = 0.01
    num_epochs = 5
    batch_size = 100
    model = net(input_size, hidden_size, num_classes)
    loss(x, y) = crossentropy(model(x), y)
    opt = ADAM(params(model), lr)
    (train_x, train_y), (test_x, test_y) = getdata()
    for e in 1:num_epochs
        for batch in Iterators.partition(1:length(train_y), batch_size)
            x = reshape(train_x[:, :, :, batch], 28 * 28, length(batch)) .|> Float32 |> gpu
            y = Flux.onehotbatch(train_y[batch], 0:9) |> gpu
            Flux.train!(loss, [(x, y)], opt)
        end
        println("Epoch $e")
    end
    return model
end

m = main()
maleadt changed the title from "(Flux) Warning: calls to Base intrinsics might be GPU incompatible" to "Use of broadcast by Flux doesn't rewrite intrinsics" on Oct 4, 2018.