
Faster MTTKRPs Algorithm #40

Open · wants to merge 21 commits into master

Conversation

alexmul1114
Contributor

Addresses #17.


codecov bot commented Feb 26, 2024

Codecov Report

Attention: Patch coverage is 96.36364% with 2 lines in your changes missing coverage. Please review.

Project coverage is 99.34%. Comparing base (bb701f2) to head (49d34eb).

| Files | Patch % | Lines |
|-------|---------|-------|
| src/tensor-kernels/mttkrps.jl | 95.65% | 2 Missing ⚠️ |
Additional details and impacted files
@@             Coverage Diff             @@
##            master      #40      +/-   ##
===========================================
- Coverage   100.00%   99.34%   -0.66%     
===========================================
  Files           12       12              
  Lines          258      306      +48     
===========================================
+ Hits           258      304      +46     
- Misses           0        2       +2     


@alexmul1114
Contributor Author

For some reason, although one iteration of MTTKRPs with the new algorithm is much faster than with the old implementation, the gcp benchmarks show no improvement:

julia> using GCPDecompositions
Precompiling GCPDecompositions
  1 dependency successfully precompiled in 3 seconds. 20 already precompiled.

julia> using Random

julia> using BenchmarkTools

julia> T=Float64;

julia> r=2;

julia> using LinearAlgebra;

julia> function test_new(GU, M, X)
           N = ndims(X)
           R = size(M.U[1])[2]

           # Determine order of modes of MTTKRP to compute
           Jns = [prod(size(X)[1:n]) for n in 1:N]
           Kns = [prod(size(X)[n+1:end]) for n in 1:N]
           Kn_minus_ones = [prod(size(X)[n:end]) for n in 1:N]
           comp = Jns .<= Kn_minus_ones
           n_star = maximum(map(x -> comp[x] ? x : 0, 1:N))
           order = vcat([i for i in n_star:-1:1], [i for i in n_star+1:N])

           # Compute MTTKRPs recursively
           saved = similar(M.U[1], Jns[n_star], R)
           for n in order
               if n == n_star
                   saved = reshape(X, (Jns[n], Kns[n])) * khatrirao(M.U[reverse(n+1:N)]...)
                   GCPDecompositions.mttkrps_helper!(GU, saved, M, n, "right", N, Jns, Kns)
               elseif n == n_star + 1
                   if n == N
                       mul!(GU[n], reshape(X, (Jns[n-1], Kns[n-1]))', khatrirao(M.U[reverse(1:n-1)]...))
                   else
                       saved = (khatrirao(M.U[reverse(1:n-1)]...)' * reshape(X, (Jns[n-1], Kns[n-1])))'
                       GCPDecompositions.mttkrps_helper!(GU, saved, M, n, "left", N, Jns, Kns)
                   end
               elseif n < n_star
                   if n == 1
                       for r in 1:R
                           mul!(view(GU[n], :, r), reshape(view(saved, :, r), (Jns[n], size(X)[n+1])), view(M.U[n+1], :, r))
                           #GU[n][:, r] = reshape(view(saved, :, r), (Jns[n], size(X)[n+1])) * view(M.U[n+1], :, r)
                       end
                   else
                       saved = stack(reshape(view(saved, :, r), (Jns[n], size(X)[n+1])) * view(M.U[n+1], :, r) for r in 1:R)
                       GCPDecompositions.mttkrps_helper!(GU, saved, M, n, "right", N, Jns, Kns)
                   end
               else
                   if n == N
                       for r in 1:R
                           mul!(view(GU[n], :, r), reshape(view(saved, :, r), (size(X)[n-1], Kns[n-1]))', view(M.U[n-1], :, r))
                       end
                       #GU[n] = stack(reshape(view(saved, :, r), (size(X)[n-1], Kns[n-1]))' * view(M.U[n-1], :, r) for r in 1:R)
                   else
                       saved = stack(reshape(view(saved, :, r), (size(X)[n-1], Kns[n-1]))' * view(M.U[n-1], :, r) for r in 1:R)
                       GCPDecompositions.mttkrps_helper!(GU, saved, M, n, "left", N, Jns, Kns)
                   end
               end
           end
       end
test_new (generic function with 1 method)

julia> function test_old(GU, M, X)
           N = ndims(X)
           for k in 1:ndims(X)
               Yk = reshape(PermutedDimsArray(X, [k; setdiff(1:N, k)]), size(X, k), :)
               Zk = similar(Yk, prod(size(X)[setdiff(1:N, k)]), ncomponents(M))
               for j in Base.OneTo(ncomponents(M))
                   Zk[:, j] = reduce(kron, [view(M.U[i], :, j) for i in reverse(setdiff(1:N, k))])
               end
               mul!(GU[k], Yk, Zk)
           end
       end
test_old (generic function with 1 method)

julia> function khatrirao(A::Vararg{T,N}) where {T<:AbstractMatrix,N}
           r = size(A[1],2)
           # @boundscheck all(==(r),size.(A,2)) || throw(DimensionMismatch())
           R = ntuple(Val(N)) do k
               dims = (ntuple(i->1,Val(N-k))..., :, ntuple(i->1,Val(k-1))..., r)
               return reshape(A[k],dims)
           end
           return reshape(broadcast(*, R...),:,r)
       end
khatrirao (generic function with 1 method)

julia> X = rand(15,20,25);

julia> Random.seed!(0);

julia> M_old = CPD(ones(T, r), rand.(T, size(X), r));

julia> Random.seed!(0);

julia> M_new = CPD(ones(T, r), rand.(T, size(X), r));

julia> GU_old = [similar(M_old.U[k]) for k in 1:ndims(X)];

julia> GU_new = [similar(M_new.U[k]) for k in 1:ndims(X)];

julia> @btime test_old(GU_old, M_old, X);
  143.400 μs (135 allocations: 139.67 KiB)

julia> @btime test_new(GU_new, M_new, X);
  8.800 μs (75 allocations: 18.31 KiB)

julia> maximum(GU_new[1]-GU_old[1])
1.4210854715202004e-14

julia> maximum(GU_new[2]-GU_old[2])
1.4210854715202004e-14

julia> maximum(GU_new[3]-GU_old[3])
7.105427357601002e-15
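
As a side note on how `test_new` picks the split mode: the heuristic chooses the largest mode n with Jₙ ≤ Kₙ₋₁ and then sweeps outward from it. Below is a small worked sketch for the (15, 20, 25) tensor used above; it just re-runs the first few lines of `test_new`.

```julia
# Worked example of the split-mode selection in `test_new` for size(X) == (15, 20, 25).
dims = (15, 20, 25)
N = length(dims)

Jns = [prod(dims[1:n]) for n in 1:N]              # [15, 300, 7500]  (left dimensions)
Kns = [prod(dims[n+1:end]) for n in 1:N]          # [500, 25, 1]     (right dimensions)
Kn_minus_ones = [prod(dims[n:end]) for n in 1:N]  # [7500, 500, 25]

comp = Jns .<= Kn_minus_ones                      # [true, true, false]
n_star = maximum(map(x -> comp[x] ? x : 0, 1:N))  # 2
order = vcat([i for i in n_star:-1:1], [i for i in n_star+1:N])  # [2, 1, 3]
```

With this ordering, mode 2 is computed first, its intermediate `saved` is reused for mode 1, and mode 3 (being the last mode) is computed directly from the reshaped tensor.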

Benchmark Report for GCPDecompositions

Job Properties

  • Time of benchmarks:
    • Target: 4 Mar 2024 - 13:23
    • Baseline: 4 Mar 2024 - 13:24
  • Package commits:
    • Target: 4003e9
    • Baseline: 62ce59
  • Julia commits:
    • Target: 312098
    • Baseline: 312098
  • Julia command flags:
    • Target: None
    • Baseline: None
  • Environment variables:
    • Target: GCP_BENCHMARK_SUITES => gcp
    • Baseline: GCP_BENCHMARK_SUITES => gcp

Results

A ratio greater than 1.0 denotes a possible regression (marked with ❌), while a ratio less
than 1.0 denotes a possible improvement (marked with ✅). Only significant results - results
that indicate possible regressions or improvements - are shown below (thus, an empty table means that all
benchmark results remained invariant between builds).

| ID | time ratio | memory ratio |
|----|------------|--------------|
| `["gcp", "bernoulliOdds-size(X)=(15, 20, 25), rank(X)=1"]` | 0.85 (5%) ✅ | 0.94 (1%) ✅ |
| `["gcp", "bernoulliOdds-size(X)=(15, 20, 25), rank(X)=2"]` | 0.86 (5%) ✅ | 1.07 (1%) ❌ |
| `["gcp", "bernoulliOdds-size(X)=(30, 40, 50), rank(X)=1"]` | 0.80 (5%) ✅ | 1.04 (1%) ❌ |
| `["gcp", "bernoulliOdds-size(X)=(30, 40, 50), rank(X)=2"]` | 0.86 (5%) ✅ | 0.97 (1%) ✅ |
| `["gcp", "gamma-size(X)=(15, 20, 25), rank(X)=1"]` | 1.21 (5%) ❌ | 0.97 (1%) ✅ |
| `["gcp", "gamma-size(X)=(15, 20, 25), rank(X)=2"]` | 0.91 (5%) ✅ | 0.90 (1%) ✅ |
| `["gcp", "gamma-size(X)=(30, 40, 50), rank(X)=1"]` | 1.21 (5%) ❌ | 1.22 (1%) ❌ |
| `["gcp", "gamma-size(X)=(30, 40, 50), rank(X)=2"]` | 0.82 (5%) ✅ | 1.02 (1%) ❌ |
| `["gcp", "least-squares-size(X)=(15, 20, 25), rank(X)=2"]` | 0.89 (5%) ✅ | 1.00 (1%) |
| `["gcp", "least-squares-size(X)=(30, 40, 50), rank(X)=1"]` | 1.22 (5%) ❌ | 1.00 (1%) |
| `["gcp", "poisson-size(X)=(15, 20, 25), rank(X)=1"]` | 1.08 (5%) ❌ | 1.00 (1%) |
| `["gcp", "poisson-size(X)=(15, 20, 25), rank(X)=2"]` | 0.99 (5%) | 1.25 (1%) ❌ |
| `["gcp", "poisson-size(X)=(30, 40, 50), rank(X)=1"]` | 0.82 (5%) ✅ | 0.90 (1%) ✅ |
| `["gcp", "poisson-size(X)=(30, 40, 50), rank(X)=2"]` | 1.31 (5%) ❌ | 1.29 (1%) ❌ |

Benchmark Group List

Here's a list of all the benchmark groups executed by this job:

  • ["gcp"]

Julia versioninfo

Target

Julia Version 1.10.0
Commit 3120989f39 (2023-12-25 18:01 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
      Microsoft Windows [Version 10.0.22621.3155]
  CPU: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz: 
                 speed         user         nice          sys         idle          irq
       #1-16  2304 MHz   44819604            0     44240962    7935611915      1134929  ticks
  Memory: 31.726390838623047 GB (13396.0078125 MB free)
  Uptime: 1.04030714e6 sec
  Load Avg:  0.0  0.0  0.0
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, tigerlake)
  Threads: 1 on 16 virtual cores

Baseline

Julia Version 1.10.0
Commit 3120989f39 (2023-12-25 18:01 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
      Microsoft Windows [Version 10.0.22621.3155]
  CPU: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz: 
                 speed         user         nice          sys         idle          irq
       #1-16  2304 MHz   44924368            0     44263742    7937208507      1135085  ticks
  Memory: 31.726390838623047 GB (13700.35546875 MB free)
  Uptime: 1.04041489e6 sec
  Load Avg:  0.0  0.0  0.0
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, tigerlake)
  Threads: 1 on 16 virtual cores

@alexmul1114
Contributor Author

alexmul1114 commented Mar 5, 2024

For the gcp benchmark with sz=(15,20,25), r=2, and Poisson loss, it looks like most of the time is spent evaluating the function and filling out Y (the tensor of elementwise derivatives) rather than doing MTTKRPs. Below is the profiling result for the new MTTKRPs:

[profile screenshot: new MTTKRPs]

And below is the profiling result using the old implementation:

[profile screenshot: old implementation]

The MTTKRPs don't show up in the profiling for the new version, but for the old version they take up maybe 5% of the total time (the ntuple part is the loop over the N modes for MTTKRP).

Using r=100 with the same size and Poisson loss, the MTTKRPs take up a somewhat bigger chunk of the time, but still only about 5% for the new MTTKRPs (yellow gcp_grad_U!):

[profile screenshot: new MTTKRPs, r=100]

and 15% for the old (yellow gcp_grad_U!):

[profile screenshot: old MTTKRPs, r=100]
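
For reference, a rough sketch of how a comparable profile can be collected with the standard-library `Profile`. This assumes `gcp(X, r)` with its default least-squares loss works at this commit; the screenshots above were taken with the Poisson loss from the benchmark suite and viewed as flame graphs.

```julia
# Rough sketch (assumptions noted above): profile a full gcp solve.
using GCPDecompositions, Random, Profile

Random.seed!(0)
X = rand(15, 20, 25)

gcp(X, 2)                 # warm-up call so compilation is not profiled
Profile.clear()
@profile for _ in 1:50    # repeat the solve to accumulate enough samples
    gcp(X, 2)
end
Profile.print(format = :flat, sortedby = :count)
# (or view a flame graph with ProfileView.jl / VS Code's @profview)
```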

@alexmul1114
Contributor Author

Changing the `gcp_func` method from using `sum(value(loss, X[I], M[I]) for I in CartesianIndices(X) if !ismissing(X[I]))` to `mapreduce(I -> !ismissing(X[I]) ? value(loss, X[I], M[I]) : 0, +, CartesianIndices(X))` provides a decent memory benefit and some speed-up (both target and baseline here use the old MTTKRPs); a toy illustration of the two accumulation patterns is shown below, followed by the benchmark report:
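
The sketch below is standalone and only illustrates the accumulation patterns; `f` is a stand-in for `value(loss, X[I], M[I])`, not the package's code.

```julia
# Toy illustration of the two accumulation patterns over an array with missings.
X = [1.0 missing; 2.0 3.0]
f(x) = abs2(x - 0.5)   # stand-in for value(loss, X[I], M[I])

# Old pattern: generator with a filter clause.
total_filtered = sum(f(X[I]) for I in CartesianIndices(X) if !ismissing(X[I]))

# New pattern: mapreduce with a ternary, contributing 0 for missing entries.
total_mapreduce = mapreduce(I -> !ismissing(X[I]) ? f(X[I]) : 0, +, CartesianIndices(X))

total_filtered ≈ total_mapreduce  # true
```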

Benchmark Report for GCPDecompositions

Job Properties

  • Time of benchmarks:
    • Target: 5 Mar 2024 - 12:11
    • Baseline: 5 Mar 2024 - 12:13
  • Package commits:
    • Target: 2d8cd8
    • Baseline: 62ce59
  • Julia commits:
    • Target: 312098
    • Baseline: 312098
  • Julia command flags:
    • Target: None
    • Baseline: None
  • Environment variables:
    • Target: GCP_BENCHMARK_SUITES => gcp
    • Baseline: GCP_BENCHMARK_SUITES => gcp

Results

A ratio greater than 1.0 denotes a possible regression (marked with ❌), while a ratio less
than 1.0 denotes a possible improvement (marked with ✅). Only significant results - results
that indicate possible regressions or improvements - are shown below (thus, an empty table means that all
benchmark results remained invariant between builds).

| ID | time ratio | memory ratio |
|----|------------|--------------|
| `["gcp", "bernoulliOdds-size(X)=(15, 20, 25), rank(X)=1"]` | 0.90 (5%) ✅ | 0.35 (1%) ✅ |
| `["gcp", "bernoulliOdds-size(X)=(15, 20, 25), rank(X)=2"]` | 0.82 (5%) ✅ | 0.33 (1%) ✅ |
| `["gcp", "bernoulliOdds-size(X)=(30, 40, 50), rank(X)=1"]` | 0.88 (5%) ✅ | 0.34 (1%) ✅ |
| `["gcp", "bernoulliOdds-size(X)=(30, 40, 50), rank(X)=2"]` | 0.87 (5%) ✅ | 0.34 (1%) ✅ |
| `["gcp", "gamma-size(X)=(15, 20, 25), rank(X)=1"]` | 1.01 (5%) | 0.40 (1%) ✅ |
| `["gcp", "gamma-size(X)=(15, 20, 25), rank(X)=2"]` | 0.90 (5%) ✅ | 0.41 (1%) ✅ |
| `["gcp", "gamma-size(X)=(30, 40, 50), rank(X)=1"]` | 1.01 (5%) | 0.46 (1%) ✅ |
| `["gcp", "gamma-size(X)=(30, 40, 50), rank(X)=2"]` | 0.78 (5%) ✅ | 0.34 (1%) ✅ |
| `["gcp", "least-squares-size(X)=(15, 20, 25), rank(X)=2"]` | 1.10 (5%) ❌ | 1.00 (1%) |
| `["gcp", "least-squares-size(X)=(30, 40, 50), rank(X)=1"]` | 1.26 (5%) ❌ | 1.00 (1%) |
| `["gcp", "least-squares-size(X)=(30, 40, 50), rank(X)=2"]` | 1.35 (5%) ❌ | 1.00 (1%) |
| `["gcp", "poisson-size(X)=(15, 20, 25), rank(X)=1"]` | 0.84 (5%) ✅ | 0.56 (1%) ✅ |
| `["gcp", "poisson-size(X)=(15, 20, 25), rank(X)=2"]` | 1.05 (5%) ❌ | 0.83 (1%) ✅ |
| `["gcp", "poisson-size(X)=(30, 40, 50), rank(X)=1"]` | 0.97 (5%) | 0.56 (1%) ✅ |
| `["gcp", "poisson-size(X)=(30, 40, 50), rank(X)=2"]` | 0.84 (5%) ✅ | 0.55 (1%) ✅ |

Benchmark Group List

Here's a list of all the benchmark groups executed by this job:

  • ["gcp"]

Julia versioninfo

Target

Julia Version 1.10.0
Commit 3120989f39 (2023-12-25 18:01 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
      Microsoft Windows [Version 10.0.22621.3155]
  CPU: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz: 
                 speed         user         nice          sys         idle          irq
       #1-16  2304 MHz   49475134            0     49141931    8601542102      1332603  ticks
  Memory: 31.726390838623047 GB (13258.26171875 MB free)
  Uptime: 1.122432015e6 sec
  Load Avg:  0.0  0.0  0.0
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, tigerlake)
  Threads: 1 on 16 virtual cores

Baseline

Julia Version 1.10.0
Commit 3120989f39 (2023-12-25 18:01 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
      Microsoft Windows [Version 10.0.22621.3155]
  CPU: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz: 
                 speed         user         nice          sys         idle          irq
       #1-16  2304 MHz   49564492            0     49151838    8603080366      1332882  ticks
  Memory: 31.726390838623047 GB (13464.0390625 MB free)
  Uptime: 1.122534359e6 sec
  Load Avg:  0.0  0.0  0.0
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, tigerlake)
  Threads: 1 on 16 virtual cores

@alexmul1114
Contributor Author

Benchmarking comparison of the old `getindex` for `CPD` versus a new `getindex` that doesn't use `sum`; a simplified sketch of the two styles is shown below, followed by the benchmark report:
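
For context, the entry `M[I]` of a CP model is Σᵣ λ[r] ∏ₙ U[n][Iₙ, r]. The sketch below contrasts a `sum`/`prod` generator style with an explicit loop style, using a hypothetical `SimpleCPD` container rather than the package's actual `CPD` type; the PR's actual `getindex` may differ in detail.

```julia
# Hypothetical simplified container; not the package's actual CPD struct.
struct SimpleCPD{T}
    λ::Vector{T}
    U::Vector{Matrix{T}}
end

# "Old"-style entry evaluation: nested generators with sum/prod.
entry_sum(M::SimpleCPD, I...) =
    sum(M.λ[r] * prod(M.U[n][I[n], r] for n in 1:length(I)) for r in eachindex(M.λ))

# "New"-style entry evaluation: explicit accumulation loops, no generators.
function entry_loop(M::SimpleCPD{T}, I...) where {T}
    val = zero(T)
    for r in eachindex(M.λ)
        term = M.λ[r]
        for n in 1:length(I)
            term *= M.U[n][I[n], r]
        end
        val += term
    end
    return val
end

M = SimpleCPD([1.0, 2.0], [rand(3, 2), rand(4, 2), rand(5, 2)])
entry_sum(M, 1, 2, 3) ≈ entry_loop(M, 1, 2, 3)  # true
```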

Benchmark Report for GCPDecompositions

Job Properties

  • Time of benchmarks:
    • Target: 7 Mar 2024 - 19:33
    • Baseline: 7 Mar 2024 - 19:35
  • Package commits:
    • Target: 6aa305
    • Baseline: aef707
  • Julia commits:
    • Target: 312098
    • Baseline: 312098
  • Julia command flags:
    • Target: None
    • Baseline: None
  • Environment variables:
    • Target: GCP_BENCHMARK_SUITES => gcp
    • Baseline: GCP_BENCHMARK_SUITES => gcp

Results

A ratio greater than 1.0 denotes a possible regression (marked with ❌), while a ratio less
than 1.0 denotes a possible improvement (marked with ✅). Only significant results - results
that indicate possible regressions or improvements - are shown below (thus, an empty table means that all
benchmark results remained invariant between builds).

| ID | time ratio | memory ratio |
|----|------------|--------------|
| `["gcp", "bernoulliOdds-size(X)=(15, 20, 25), rank(X)=1"]` | 0.33 (5%) ✅ | 0.08 (1%) ✅ |
| `["gcp", "bernoulliOdds-size(X)=(15, 20, 25), rank(X)=2"]` | 0.50 (5%) ✅ | 0.09 (1%) ✅ |
| `["gcp", "bernoulliOdds-size(X)=(30, 40, 50), rank(X)=1"]` | 0.29 (5%) ✅ | 0.08 (1%) ✅ |
| `["gcp", "bernoulliOdds-size(X)=(30, 40, 50), rank(X)=2"]` | 0.48 (5%) ✅ | 0.07 (1%) ✅ |
| `["gcp", "gamma-size(X)=(15, 20, 25), rank(X)=1"]` | 0.47 (5%) ✅ | 0.06 (1%) ✅ |
| `["gcp", "gamma-size(X)=(15, 20, 25), rank(X)=2"]` | 0.48 (5%) ✅ | 0.07 (1%) ✅ |
| `["gcp", "gamma-size(X)=(30, 40, 50), rank(X)=1"]` | 0.44 (5%) ✅ | 0.06 (1%) ✅ |
| `["gcp", "gamma-size(X)=(30, 40, 50), rank(X)=2"]` | 0.45 (5%) ✅ | 0.05 (1%) ✅ |
| `["gcp", "poisson-size(X)=(15, 20, 25), rank(X)=1"]` | 0.42 (5%) ✅ | 0.02 (1%) ✅ |
| `["gcp", "poisson-size(X)=(15, 20, 25), rank(X)=2"]` | 0.97 (5%) | 0.83 (1%) ✅ |
| `["gcp", "poisson-size(X)=(30, 40, 50), rank(X)=1"]` | 0.39 (5%) ✅ | 0.03 (1%) ✅ |
| `["gcp", "poisson-size(X)=(30, 40, 50), rank(X)=2"]` | 0.46 (5%) ✅ | 0.03 (1%) ✅ |

Benchmark Group List

Here's a list of all the benchmark groups executed by this job:

  • ["gcp"]

Julia versioninfo

Target

Julia Version 1.10.0
Commit 3120989f39 (2023-12-25 18:01 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
      Microsoft Windows [Version 10.0.22621.3155]
  CPU: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz: 
                 speed         user         nice          sys         idle          irq
       #1-16  2304 MHz   70951977            0     64394166    10622768227      1747463  ticks
  Memory: 31.726390838623047 GB (12502.3671875 MB free)
  Uptime: 1.321744875e6 sec
  Load Avg:  0.0  0.0  0.0
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, tigerlake)
  Threads: 1 on 16 virtual cores

Baseline

Julia Version 1.10.0
Commit 3120989f39 (2023-12-25 18:01 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
      Microsoft Windows [Version 10.0.22621.3155]
  CPU: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz: 
                 speed         user         nice          sys         idle          irq
       #1-16  2304 MHz   71135820            0     64411196    10624224353      1747806  ticks
  Memory: 31.726390838623047 GB (12590.13671875 MB free)
  Uptime: 1.321848437e6 sec
  Load Avg:  0.0  0.0  0.0
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, tigerlake)
  Threads: 1 on 16 virtual cores

@alexmul1114
Contributor Author

Benchmark report using the new `getindex` for `CPD`, comparing the MTTKRPs implementations; a sketch of how such a comparison can be reproduced is shown below, followed by the report itself.
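
The reports in this thread follow the PkgBenchmark `judge`/`export_markdown` format. Below is a rough sketch of how a comparable comparison could be run locally; the commit ids and the `GCP_BENCHMARK_SUITES` setting are taken from the report below, and the repository may use its own wrapper script with different options.

```julia
# Rough sketch (assumptions noted above): compare two commits with PkgBenchmark.
using PkgBenchmark

env = Dict("GCP_BENCHMARK_SUITES" => "gcp")           # from the report's environment variables
target   = BenchmarkConfig(id = "49d34e", env = env)  # target commit from the report
baseline = BenchmarkConfig(id = "6aa305", env = env)  # baseline commit from the report

results = judge("GCPDecompositions", target, baseline)
export_markdown("benchmark_report.md", results)
```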

Benchmark Report for GCPDecompositions

Job Properties

  • Time of benchmarks:
    • Target: 11 Mar 2024 - 19:22
    • Baseline: 11 Mar 2024 - 19:24
  • Package commits:
    • Target: 49d34e
    • Baseline: 6aa305
  • Julia commits:
    • Target: 312098
    • Baseline: 312098
  • Julia command flags:
    • Target: None
    • Baseline: None
  • Environment variables:
    • Target: GCP_BENCHMARK_SUITES => gcp
    • Baseline: GCP_BENCHMARK_SUITES => gcp

Results

A ratio greater than 1.0 denotes a possible regression (marked with ❌), while a ratio less
than 1.0 denotes a possible improvement (marked with ✅). Only significant results - results
that indicate possible regressions or improvements - are shown below (thus, an empty table means that all
benchmark results remained invariant between builds).

| ID | time ratio | memory ratio |
|----|------------|--------------|
| `["gcp", "bernoulliOdds-size(X)=(15, 20, 25), rank(X)=1"]` | 1.11 (5%) ❌ | 1.07 (1%) ❌ |
| `["gcp", "bernoulliOdds-size(X)=(15, 20, 25), rank(X)=2"]` | 0.87 (5%) ✅ | 0.88 (1%) ✅ |
| `["gcp", "bernoulliOdds-size(X)=(30, 40, 50), rank(X)=1"]` | 0.97 (5%) | 0.96 (1%) ✅ |
| `["gcp", "gamma-size(X)=(15, 20, 25), rank(X)=1"]` | 1.02 (5%) | 1.02 (1%) ❌ |
| `["gcp", "gamma-size(X)=(15, 20, 25), rank(X)=2"]` | 1.02 (5%) | 1.11 (1%) ❌ |
| `["gcp", "gamma-size(X)=(30, 40, 50), rank(X)=1"]` | 0.94 (5%) ✅ | 0.92 (1%) ✅ |
| `["gcp", "gamma-size(X)=(30, 40, 50), rank(X)=2"]` | 0.94 (5%) ✅ | 0.95 (1%) ✅ |
| `["gcp", "poisson-size(X)=(15, 20, 25), rank(X)=1"]` | 1.04 (5%) | 1.08 (1%) ❌ |
| `["gcp", "poisson-size(X)=(15, 20, 25), rank(X)=2"]` | 0.63 (5%) ✅ | 0.46 (1%) ✅ |
| `["gcp", "poisson-size(X)=(30, 40, 50), rank(X)=1"]` | 0.96 (5%) | 0.97 (1%) ✅ |
| `["gcp", "poisson-size(X)=(30, 40, 50), rank(X)=2"]` | 0.87 (5%) ✅ | 0.96 (1%) ✅ |

Benchmark Group List

Here's a list of all the benchmark groups executed by this job:

  • ["gcp"]

Julia versioninfo

Target

Julia Version 1.10.0
Commit 3120989f39 (2023-12-25 18:01 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
      Microsoft Windows [Version 10.0.22621.3155]
  CPU: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz: 
                 speed         user         nice          sys         idle          irq
       #1-16  2304 MHz  107470103            0     79055961    13638613649      2131432  ticks
  Memory: 31.726390838623047 GB (12977.23828125 MB free)
  Uptime: 1.663074234e6 sec
  Load Avg:  0.0  0.0  0.0
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, tigerlake)
  Threads: 1 on 16 virtual cores

Baseline

Julia Version 1.10.0
Commit 3120989f39 (2023-12-25 18:01 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
      Microsoft Windows [Version 10.0.22621.3155]
  CPU: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz: 
                 speed         user         nice          sys         idle          irq
       #1-16  2304 MHz  107567853            0     79065914    13640122431      2131572  ticks
  Memory: 31.726390838623047 GB (12880.9140625 MB free)
  Uptime: 1.663175265e6 sec
  Load Avg:  0.0  0.0  0.0
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, tigerlake)
  Threads: 1 on 16 virtual cores
