Towards Julia 1.6 #50

Open · wants to merge 2 commits into base: master
523 changes: 284 additions & 239 deletions Manifest.toml

Large diffs are not rendered by default.

8 changes: 5 additions & 3 deletions Project.toml
@@ -1,13 +1,12 @@
name = "Rayuela"
uuid = "84bd14ec-51ef-568a-9c69-e494d1752004"
author = ["Julieta Martinez <[email protected]>"]
author = ["Julieta Martinez <[email protected]>"]
version = "0.0.0"

[deps]
BinDeps = "9e28174c-4ba2-5203-b857-d8d62c4213ee"
CUDAdrv = "c5f51814-7f29-56b8-a69c-e4d8f6be1fde"
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
Clustering = "aaaa29a8-35af-508c-8bc3-b662a17a0fe5"
CuArrays = "3a865a2d-5b23-5a0f-bc46-62713ec82fae"
Distances = "b4f34e82-e78d-54a5-968a-f98e89d6e8f7"
Distributed = "8ba89e20-285c-5b6f-9357-94700520ee1b"
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
@@ -22,6 +21,9 @@ SharedArrays = "1a1011a3-84de-559e-8e89-a11a2f7dc383"
SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"

[compat]
julia = "1.6.2"

[extras]
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"

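For context, the new `[compat]` entry follows Pkg's default caret semantics, so `julia = "1.6.2"` admits any Julia release in `[1.6.2, 2.0.0)`. A minimal sketch of a version guard a downstream script could add (illustrative only, not part of the package):

```julia
# Caret semantics: the declared compat bound `julia = "1.6.2"` resolves on any
# Julia version >= 1.6.2 and < 2.0.0.
@assert VERSION >= v"1.6.2" "Rayuela.jl now requires Julia 1.6.2 or newer"
```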
63 changes: 30 additions & 33 deletions README.md
@@ -46,37 +46,10 @@ The code in this package was written by

## Requirements

This package is written in [Julia](https://github.com/JuliaLang/julia) 1.0, with some extension in C++ and CUDA.
This package is written in [Julia](https://github.com/JuliaLang/julia) 1.6, with some extension in C++ and CUDA.
You also need a CUDA-ready GPU. We have tested this code on an Nvidia Titan Xp GPU.

## Installing

Before all else, make sure that you have the `g++` compiler available from the command line, and the `nvcc` compiler availible at path `/usr/local/cuda/bin/nvcc`.

Then, open julia and type `]` to enter Package mode:

```julia
julia>
(v1.0) pkg>
```

Now you can clone our repo:

```julia
(v1.0) pkg> develop https://github.com/una-dinosauria/Rayuela.jl.git
```

This should put our code under `~/.julia/dev/Rayuela`.

Due to an [open bug](https://github.com/JuliaLang/Pkg.jl/issues/465) with the package manager, you have to manually
pull the latest changes:

```bash
cd ~/.julia/dev/Rayuela
git pull
```

## Demo and data
## Data and experiment setup

You may explore the library with `SIFT1M`, a classical retrieval dataset:

@@ -94,24 +67,48 @@ Also make a directory for the results
```
mkdir -p results/sift1m
```
## Installing

Before anything else, make sure that the `g++` compiler is available from the command line and that the `nvcc` compiler is available at `/usr/local/cuda/bin/nvcc`.

Clone this repo:
```bash
git clone git@github.com:una-dinosauria/Rayuela.jl.git
```

Then, open julia and type `]` to enter Package mode:

```julia
julia>
(@v1.6) pkg>
```

Now you can activate this repo:

```julia
(@v1.6) pkg> activate .
(Rayuela) pkg>
```

This should load the relevant dependencies into your environment.
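If activating alone does not pull the packages, here is a minimal non-interactive sketch, assuming `Pkg.instantiate` and `Pkg.build` are what resolve the pinned Manifest and compile the C++ extensions:

```julia
using Pkg
Pkg.activate(".")      # activate the Rayuela project environment (same as `activate .` above)
Pkg.instantiate()      # install the dependency versions pinned in Manifest.toml
Pkg.build("Rayuela")   # run deps/build.jl, which compiles the C++ helpers
```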

Finally, run the demo:
Now you can run the demo:

```julia
julia> include("~/.julia/dev/Rayuela/demos/demos_train_query_base.jl")
julia> include("./demos/demos_train_query_base.jl")
```

For the train/query/base protocol (the example runs on SIFT1M by default), or

```julia
julia> include("~/.julia/dev/Rayuela/demos/demos_query_base.jl")
julia> include("./demos/demos_query_base.jl")
```

For the query/base protocol (the example runs on LabelMe22K by default).

This will showcase PQ, OPQ, RVQ, ERVQ, ChainQ and LSQ/LSQ++ (SR-C and SR-D).
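The demo can also be driven directly from the function defined in `demos/demos_train_query_base.jl`; a sketch using the defaults shown in that file, where `m` and `h` are assumed to be the number of codebooks and of entries per codebook (note this PR lowers the default `ntrain` from `1e5` to `1e4`):

```julia
include("./demos/demos_train_query_base.jl")
# dataset name, ntrain, m, h, niter — the defaults from the demo file
run_demos("SIFT1M", Int(1e4), 8, 256, 25)
```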

The rest of the datasets used in our ECCV'18 publication can be found on [gdrive](https://drive.google.com/drive/folders/1MnJLHpg5LP6pPQxQuL0VjnM03vHPvgP1?usp=sharing).
The rest of the datasets used in our ECCV'18 publication can be found at [gdrive](https://drive.google.com/drive/folders/1MnJLHpg5LP6pPQxQuL0VjnM03vHPvgP1?usp=sharing).

## Roadmap

1 change: 0 additions & 1 deletion REQUIRE

This file was deleted.

2 changes: 1 addition & 1 deletion demos/demos_train_query_base.jl
@@ -8,7 +8,7 @@ include("experiment_utils.jl")
# === experiment functions ===
function run_demos(
dataset_name="SIFT1M",
ntrain::Integer=Int(1e5),
ntrain::Integer=Int(1e4),
m::Integer=8, h::Integer=256, niter::Integer=25)

nquery, nbase, knn = 0, 0, 0
39 changes: 22 additions & 17 deletions deps/build.jl
@@ -2,12 +2,13 @@ using BinDeps

@BinDeps.setup

cudautils = library_dependency("cudautils")
# cudautils = library_dependency("cudautils")
linscan_aqd = library_dependency("linscan_aqd", aliases=["linscan_aqd","linscan_aqd.so"])
linscan_aqd_pairwise_byte = library_dependency("linscan_aqd_pairwise_byte", aliases=["linscan_aqd_pairwise_byte","linscan_aqd_pairwise_byte.so"])
encode_icm_so = library_dependency("encode_icm_so", aliases=["encode_icm_so", "encode_icm_so.so"])

deps = [cudautils, linscan_aqd, linscan_aqd_pairwise_byte, encode_icm_so]
# deps = [cudautils, linscan_aqd, linscan_aqd_pairwise_byte, encode_icm_so]
deps = [linscan_aqd, linscan_aqd_pairwise_byte, encode_icm_so]

prefix=joinpath(BinDeps.depsdir(linscan_aqd))
linscan_aqdbuilddir = joinpath(BinDeps.depsdir(linscan_aqd),"builds")
@@ -48,20 +49,24 @@ provides(BuildProcess,
end
end),encode_icm_so, os = :Unix, installed_libpath=joinpath(prefix,"builds"))

@BinDeps.install Dict([(:linscan_aqd => :linscan_aqd),
(:linscan_aqd_pairwise_byte => :linscan_aqd_pairwise_byte),
(:encode_icm_so => :encode_icm_so)])

# === CUDA code ===
provides(BuildProcess,
(@build_steps begin
CreateDirectory(linscan_aqdbuilddir)
@build_steps begin
ChangeDirectory(linscan_aqdbuilddir)
FileRule(joinpath(prefix,"builds","cudautils.so"),@build_steps begin
`/usr/local/cuda/bin/nvcc -ptx ../src/cudautils.cu -o cudautils.ptx -arch=compute_35`
`/usr/local/cuda/bin/nvcc --shared -Xcompiler -fPIC -shared ../src/cudautils.cu -o cudautils.so -arch=compute_35`
end)
end
end),cudautils, os = :Unix, installed_libpath=joinpath(prefix,"builds"))
# provides(BuildProcess,
# (@build_steps begin
# CreateDirectory(linscan_aqdbuilddir)
# @build_steps begin
# ChangeDirectory(linscan_aqdbuilddir)
# FileRule(joinpath(prefix,"builds","cudautils.so"),@build_steps begin
# `/usr/local/cuda/bin/nvcc -ptx ../src/cudautils.cu -o cudautils.ptx -arch=compute_35`
# `/usr/local/cuda/bin/nvcc --shared -Xcompiler -fPIC -shared ../src/cudautils.cu -o cudautils.so -arch=compute_35`
# end)
# end
# end),cudautils, os = :Unix, installed_libpath=joinpath(prefix,"builds"))

@BinDeps.install Dict([(:linscan_aqd, :linscan_aqd),
(:linscan_aqd_pairwise_byte, :linscan_aqd_pairwise_byte),
(:encode_icm_so, :encode_icm_so),
(:cudautils, :cudautils)])
# @BinDeps.install Dict([(:linscan_aqd, :linscan_aqd),
# (:linscan_aqd_pairwise_byte, :linscan_aqd_pairwise_byte),
# (:encode_icm_so, :encode_icm_so),
# (:cudautils, :cudautils)])
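
The GPU helper library is dropped from the build for now. A hedged sketch of the disabled step, reproduced as plain Julia commands in case someone wants to compile `cudautils` by hand (the nvcc path, output names, and `compute_35` architecture are carried over from the commented-out rule and may need adjusting for other setups):

```julia
# Illustrative only: rebuild the cudautils kernels the way the disabled BinDeps rule did.
nvcc = "/usr/local/cuda/bin/nvcc"                # path assumed by the original rule
src  = joinpath("deps", "src", "cudautils.cu")   # assumed repo-relative location of the source
run(`$nvcc -ptx $src -o cudautils.ptx -arch=compute_35`)
run(`$nvcc --shared -Xcompiler -fPIC $src -o cudautils.so -arch=compute_35`)
```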
2 changes: 1 addition & 1 deletion src/ChainQ.jl
@@ -386,7 +386,7 @@ function train_chainq(
obj = zeros(T, niter+1)

# TODO expose these
use_cuda = true
use_cuda = false
use_cpp = false

CB = zeros(T, size(X))
2 changes: 1 addition & 1 deletion src/ERVQ.jl
@@ -86,7 +86,7 @@ function train_ervq(
weights = nothing # use unweighted version of update_centers!
to_update = zeros(Bool, h)
to_update[B[j,:]] .= true # In. Whether a codebook entry needs update
cweights = zeros(Float64, h) # Out. Cluster weights. We do not use this.
cweights = zeros(Int, h) # Out. Cluster weights. We do not use this.
Clustering.update_centers!(Xd, weights, B[j,:], to_update, C[j], cweights)

# Check if some centres are unasigned
2 changes: 1 addition & 1 deletion src/RVQ.jl
@@ -48,7 +48,7 @@ function quantize_rvq(
# Create new codebook entries that we are missing
if !isempty(unused)
temp_codebook = similar(C[i]) # Out. The new codebooks will be added here
Clustering.repick_unused_centers(Xr, costs, temp_codebook, unused)
Clustering.repick_unused_centers(Xr, costs, temp_codebook, unused, Distances.SqEuclidean())
singletons[i] = temp_codebook[:, unused]
end

12 changes: 6 additions & 6 deletions src/Rayuela.jl
@@ -13,8 +13,8 @@ using Distributed
using Distributions

# For LSQ encoding in the GPU
using CUDAdrv
using CuArrays
# using CUDAdrv
# using CuArrays

using HDF5
using LinearAlgebra
@@ -44,7 +44,7 @@ else
error("Rayuela is not properly Installed.
Please run Pkg.build(\"Rayuela\") and make sure you have nvcc and g++ available from the command line.")
end
cudautilsptx = cudautils[1:end-2] * "ptx"
# cudautilsptx = cudautils[1:end-2] * "ptx"
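For orientation, the `else` branch above is the tail of the usual BinDeps consumer guard; a hedged sketch of what the full pattern typically looks like (the exact path of the generated `deps.jl` is an assumption, not the verbatim source):

```julia
# Standard BinDeps pattern implied by the error branch above: deps/build.jl writes
# deps/deps.jl, which defines the paths of the compiled helper libraries.
const depsjl = joinpath(@__DIR__, "..", "deps", "deps.jl")
if isfile(depsjl)
    include(depsjl)
else
    error("Rayuela is not properly Installed.
           Please run Pkg.build(\"Rayuela\") and make sure you have nvcc and g++ available from the command line.")
end
```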

# === Functions to read data ===
include("xvecs_read.jl")
@@ -74,8 +74,8 @@ include("LSQ.jl") # Local search quantization
include("SR_perturbations.jl") # Utils for SR

# === CUDA ports ===
include("CudaUtilsModule.jl")
include("SR.jl") # Stochastic relaxations
include("LSQ_GPU.jl") # Local search quantization
# include("CudaUtilsModule.jl")
# include("SR.jl") # Stochastic relaxations
# include("LSQ_GPU.jl") # Local search quantization

end # module