This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

Introduce Slab allocator to soak up smaller allocations in wasm #1615

Closed
gavofyork opened this issue Jan 29, 2019 · 1 comment
Labels
J0-enhancement An additional feature request.

Comments

@gavofyork
Member

gavofyork commented Jan 29, 2019

Follow up to #300

With the current buddy allocator, small allocations are extremely inefficient since they each take up the minimum buddy leaf size (originally 4K, currently 1K). Slab allocators can be used to take these leaves and essentially allow them to be used for many allocations of the same size.

One other allocator worth prototyping would be a freeing-bump allocator.

Basically, you store N linked list heads, where N is the number of allocation sizes to support. A simple set would be powers of two from 8 bytes to 16,777,216 bytes (2^3 to 2^24 inclusive), giving N = 22:

let mut heads: [u64; N] = [0; N];
fn size(n: u64) -> u64 { 8 << n }
let mut bumper = 0;
fn bump(n: u64) -> u64 { let res = bumper; bumper += n; res }
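As a quick sanity check on the mapping between allocation sizes and list indices implied above (a sketch; `size_class_index` is a hypothetical helper name, not from this issue):

```rust
// Map a requested size to its free-list index: round up to the next power
// of two, clamped to a minimum of 8 bytes, so index 0 covers 8-byte items.
fn size_class_index(size: u64) -> usize {
    (size.max(8).next_power_of_two().trailing_zeros() - 3) as usize
}

// The inverse: the item size served by a given list index.
fn size(n: u64) -> u64 { 8 << n }

fn main() {
    assert_eq!(size_class_index(1), 0);           // rounds up to 8 bytes
    assert_eq!(size_class_index(9), 1);           // rounds up to 16 bytes
    assert_eq!(size_class_index(16_777_216), 21); // largest class, so N = 22
    // Round trip: every class index maps back to a size in that class.
    for i in 0..22u64 {
        assert_eq!(size_class_index(size(i)), i as usize);
    }
    println!("ok");
}
```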

We assume there's a slab of heap to be allocated:

let mut heap = [0u8; HEAP_SIZE];

Whenever you allocate, you select the lowest linked list item size that will fit the allocation (i.e. the next highest Po2). You then check to see if the linked list is empty. If empty, use the bump allocator to get the allocation with an extra 8 bytes preceding it. Initialise those preceding 8 bytes to identify the list to which it belongs (e.g. 0x__ffffffffffffff where __ is the linked list index). If it is not empty, unlink the first item from the linked list and then reset the 8 preceding bytes so they now record the identity of the linked list.

To deallocate just use the preceding 8 bytes of the allocation to knit back the allocation into the linked list from the head.
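The 8-byte prefix scheme described above can be sketched as a round trip (hypothetical helper names; the layout follows the description: one list-index byte followed by seven 0xff filler bytes):

```rust
// Write the 8-byte prefix for an allocation whose user pointer is `ptr`:
// the list index in the lowest byte, 0xff in the remaining seven.
fn write_prefix(heap: &mut [u8], ptr: usize, list_index: u8) {
    heap[ptr - 8] = list_index;
    for i in 1..8 { heap[ptr - i] = 0xff; }
}

// Read the prefix back, asserting the 0xff filler is intact — a cheap
// corruption check on deallocation.
fn read_prefix(heap: &[u8], ptr: usize) -> u8 {
    for i in 1..8 { assert!(heap[ptr - i] == 0xff); }
    heap[ptr - 8]
}

fn main() {
    let mut heap = [0u8; 64];
    write_prefix(&mut heap, 16, 3);
    assert_eq!(read_prefix(&heap, 16), 3);
    println!("ok");
}
```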

Here's the code:

fn malloc(size: u64) -> u64 {
  let item_size = size.max(8).next_power_of_two();
  let list_index = (item_size.trailing_zeros() - 3) as usize;
  let ptr = if heads[list_index] != 0 {
    // Something from the free list
    let item = heads[list_index];
    heads[list_index] = le_bytes_to_u64(&heap[item as usize .. item as usize + 8]);
    item + 8
  } else {
    // Nothing to be freed. Bump.
    bump(item_size + 8) + 8
  };
  for i in 1..8 { heap[(ptr - i) as usize] = 255; }
  heap[(ptr - 8) as usize] = list_index as u8;
  ptr
}
fn free(ptr: u64) {
  let list_index = heap[(ptr - 8) as usize] as usize;
  for i in 1..8 { assert!(heap[(ptr - i) as usize] == 255); }
  let tail = heads[list_index];
  heads[list_index] = ptr - 8;
  write_u64_into_le_bytes(&mut heap[(ptr - 8) as usize .. ptr as usize], tail);
}
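For reference, here is a self-contained, runnable version of the sketch above — under some assumptions not in the issue: the global state is wrapped in a struct, `le_bytes_to_u64` / `write_u64_into_le_bytes` are replaced with `u64::from_le_bytes` / `to_le_bytes`, and the bumper starts at 8 so that offset 0 can safely serve as the "empty list" sentinel:

```rust
const N: usize = 22;
const HEAP_SIZE: usize = 1 << 16;

struct Allocator {
    heads: [u64; N], // 0 means "empty list"; entries point at a prefix
    bumper: u64,     // starts at 8 so offset 0 never looks like a valid head
    heap: [u8; HEAP_SIZE],
}

impl Allocator {
    fn new() -> Self {
        Allocator { heads: [0; N], bumper: 8, heap: [0; HEAP_SIZE] }
    }

    fn bump(&mut self, n: u64) -> u64 {
        let res = self.bumper;
        self.bumper += n;
        res
    }

    fn malloc(&mut self, size: u64) -> u64 {
        let item_size = size.max(8).next_power_of_two();
        let list_index = (item_size.trailing_zeros() - 3) as usize;
        let ptr = if self.heads[list_index] != 0 {
            // Pop the first item off the free list; its prefix holds the next link.
            let item = self.heads[list_index] as usize;
            let next = u64::from_le_bytes(self.heap[item..item + 8].try_into().unwrap());
            self.heads[list_index] = next;
            item as u64 + 8
        } else {
            // Nothing freed yet at this size: bump, reserving 8 bytes for the prefix.
            self.bump(item_size + 8) + 8
        };
        let p = ptr as usize;
        // Stamp the prefix: list-index byte, then seven 0xff bytes.
        self.heap[p - 8] = list_index as u8;
        for i in 1..8 { self.heap[p - i] = 0xff; }
        ptr
    }

    fn free(&mut self, ptr: u64) {
        let p = ptr as usize;
        let list_index = self.heap[p - 8] as usize;
        for i in 1..8 { assert!(self.heap[p - i] == 0xff); }
        // Push this allocation onto the head of its list, storing the old
        // head in the prefix as the "next" link.
        let tail = self.heads[list_index];
        self.heads[list_index] = ptr - 8;
        self.heap[p - 8..p].copy_from_slice(&tail.to_le_bytes());
    }
}

fn main() {
    let mut a = Allocator::new();
    let p1 = a.malloc(24); // rounds up to the 32-byte class
    let p2 = a.malloc(24);
    assert_ne!(p1, p2);
    a.free(p1);
    let p3 = a.malloc(30); // same 32-byte class: should reuse p1's slot
    assert_eq!(p3, p1);
    println!("ok");
}
```

Note the one deliberate deviation: with `bumper = 0` as in the pseudocode, the very first allocation's prefix would sit at offset 0, and freeing it would make the list head indistinguishable from the empty sentinel.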
@gavofyork
Member Author

Freeing bump allocator is fine.
