
Cache optimizations #3005

Merged · 10 commits · Jul 1, 2019

Commits on Jul 1, 2019

  1. arm.h: add CTR_WORD_SIZE

    Adds a common define for the word size used by the CTR (cache type)
    register.
    
    Reviewed-by: Jerome Forissier <[email protected]>
    Signed-off-by: Jens Wiklander <[email protected]>
    jenswi-linaro committed Jul 1, 2019 · 1e4c834
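
    The CTR expresses its IminLine/DminLine fields as log2 of a number of
    words, so a shared word-size define avoids magic numbers when deriving
    line sizes. A minimal sketch of such a derivation, assuming the field
    layout from the ARM ARM (DminLine at bits [19:16]); the values in the
    actual patch may differ:

        #include <stdint.h>

        #define CTR_WORD_SIZE           4       /* bytes per CTR "word" */
        #define CTR_DMINLINE_SHIFT      16      /* per the ARM ARM */
        #define CTR_DMINLINE_MASK       0xf

        /* Smallest dcache line size in bytes, from a CTR value. */
        static inline uint32_t dcache_line_size(uint32_t ctr)
        {
                return CTR_WORD_SIZE <<
                       ((ctr >> CTR_DMINLINE_SHIFT) & CTR_DMINLINE_MASK);
        }
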
  2. core: add icache_inv_user_range()

    Adds icache_inv_user_range(), which is used when invalidating
    currently mapped user space memory. This is needed since a different
    ASID is usually in use while in kernel mode, so icache_inv_range()
    would normally have no effect on user mode mappings.
    
    Reviewed-by: Etienne Carriere <[email protected]>
    Signed-off-by: Jens Wiklander <[email protected]>
    jenswi-linaro committed Jul 1, 2019 · d61bcfe
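
    The underlying issue: with an ASID-tagged L1 icache (e.g. AIVIVT),
    lines fetched under the user ASID cannot be hit by by-VA maintenance
    issued under the kernel ASID. A hedged sketch of detecting such an
    icache from the CTR's L1Ip field (bits [15:14] on AArch64); this is
    background, not the patch's implementation:

        #include <stdbool.h>
        #include <stdint.h>

        #define CTR_L1IP_SHIFT  14
        #define CTR_L1IP_MASK   0x3
        #define CTR_L1IP_PIPT   0x3     /* PIPT icaches are not ASID-tagged */

        /* True if by-VA icache ops may miss lines of another ASID. */
        static bool icache_may_be_asid_tagged(uint32_t ctr)
        {
                return ((ctr >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK) !=
                       CTR_L1IP_PIPT;
        }
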
  3. core: cache_helpers_a{32,64}.S: remove section assignments

    Since the FUNC and LOCAL_FUNC assembly macros now assign a section to
    each assembly function, the explicitly assigned sections in
    cache_helpers_a{32,64}.S are ignored. Remove these now-redundant
    section assignments.
    
    Reviewed-by: Jerome Forissier <[email protected]>
    Signed-off-by: Jens Wiklander <[email protected]>
    jenswi-linaro committed Jul 1, 2019 · 2b405e8
  4. core: add dcache_clean_range_pou()

    Adds dcache_clean_range_pou(), which cleans the data cache to the
    point of unification. This is exactly what's needed when later
    invalidating the icache due to updates in a page.
    
    Reviewed-by: Etienne Carriere <[email protected]>
    Signed-off-by: Jens Wiklander <[email protected]>
    jenswi-linaro committed Jul 1, 2019 · 51ffb71
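
    A hedged AArch64-only sketch of cleaning a range to the point of
    unification with "dc cvau": cleaning to PoU is the cheapest clean that
    still makes the data visible to the instruction side. The fixed line
    size is an assumption for brevity; real code derives it from CTR_EL0:

        #include <stddef.h>
        #include <stdint.h>

        static void dcache_clean_range_pou_sketch(uintptr_t va, size_t len)
        {
                const uintptr_t line = 64;      /* assumed line size */
                uintptr_t end = va + len;

                for (va &= ~(line - 1); va < end; va += line)
                        asm volatile ("dc cvau, %0" : : "r" (va) : "memory");
                /* Make the cleans visible before any icache maintenance. */
                asm volatile ("dsb ish" : : : "memory");
        }
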
  5. core: pager: use dcache_clean_range_pou()

    The pager now uses dcache_clean_range_pou() when cleaning pages
    before invalidating the icache for those pages. Prior to this patch
    dcache_clean_range() was used indirectly, which cleans the range to
    the point of coherency instead of the point of unification.

    With this patch we're likely to save one data cache level by cleaning
    only level 1 instead of levels 1 and 2. This assumes separate data
    and instruction caches at level 1 and a unified data cache at level
    2.
    
    Acked-by: Etienne Carriere <[email protected]>
    Signed-off-by: Jens Wiklander <[email protected]>
    jenswi-linaro committed Jul 1, 2019 · 0105e1b
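
    A hedged sketch of the pager's resulting clean-then-invalidate
    sequence; the two helpers are from this series, while the prototypes,
    SMALL_PAGE_SIZE and the wrapper are illustrative assumptions:

        #include <stddef.h>

        void dcache_clean_range_pou(void *va, size_t len);      /* assumed */
        void icache_inv_range(void *va, size_t len);            /* assumed */

        #define SMALL_PAGE_SIZE 0x1000  /* assumed 4 KiB pages */

        static void sync_paged_in_code(void *va)
        {
                /* Clean to PoU: with split L1 and unified L2, this can
                 * stop at L1 instead of also cleaning L2 (the PoC). */
                dcache_clean_range_pou(va, SMALL_PAGE_SIZE);
                /* The icache then refetches fresh data from the PoU. */
                icache_inv_range(va, SMALL_PAGE_SIZE);
        }
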
  6. core: arm32.h: add TLBI_{MVA_SHIFT,ASID_MASK}

    Adds TLBI macros to help format the source register for TLB
    invalidations.
    
    Reviewed-by: Jerome Forissier <[email protected]>
    Signed-off-by: Jens Wiklander <[email protected]>
    jenswi-linaro committed Jul 1, 2019 · c2c2439
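
    For AArch32, TLBIMVA(IS) takes VA[31:12] in the upper bits and the
    ASID in bits [7:0]. A sketch of how such macros might be used to pack
    the operand; the macro names come from the commit title, while the
    values are assumptions based on the ARMv7 ARM:

        #include <stdint.h>

        #define TLBI_MVA_SHIFT  12
        #define TLBI_ASID_MASK  0xff

        /* Pack VA[31:12] and ASID[7:0] into a TLBIMVA(IS) operand. */
        static inline uint32_t tlbi_mva_operand(uint32_t va, uint32_t asid)
        {
                return (va & ~((1U << TLBI_MVA_SHIFT) - 1)) |
                       (asid & TLBI_ASID_MASK);
        }
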
  7. core: arm64.h: add tlbi_vale1is()

    Adds tlbi_vale1is(), a wrapper around inline assembly code that
    executes "tlbi vale1is". The operation is described as "TLB
    Invalidate by VA, Last level, EL1, Inner Shareable" in the ARM ARM.
    
    Reviewed-by: Etienne Carriere <[email protected]>
    Signed-off-by: Jens Wiklander <[email protected]>
    jenswi-linaro committed Jul 1, 2019 · 70e5314
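
    A hedged sketch of what such a wrapper might look like: TLBI VALE1IS
    expects VA[55:12] in Xt[43:0] and the ASID in Xt[63:48], so the caller
    is assumed to pass an already packed value (AArch64 only):

        #include <stdint.h>

        static inline void tlbi_vale1is(uint64_t packed_va_asid)
        {
                asm volatile ("tlbi vale1is, %0"
                              : : "r" (packed_va_asid) : "memory");
        }
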
  8. core: add tlbi_mva_asid()

    Adds tlbi_mva_asid() to invalidate one TLB entry, typically page
    sized, selected by virtual address and address space identifier
    (ASID). The function targets both the kernel mode and user mode
    address identifiers at the same time.
    
    Reviewed-by: Etienne Carriere <[email protected]>
    Signed-off-by: Jens Wiklander <[email protected]>
    jenswi-linaro committed Jul 1, 2019 · 11c0157
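
    A hedged AArch64 sketch of the idea. That the kernel and user mode
    address identifiers form an (asid, asid | 1) pair is an assumption
    drawn from the commit text, as is the barrier placement:

        #include <stdint.h>

        static void tlbi_mva_asid_sketch(uint64_t va, uint64_t asid)
        {
                uint64_t mva = (va >> 12) & 0xfffffffffffULL; /* VA[55:12] */
                uint64_t user = mva | ((asid & 0xff) << 48);
                uint64_t kern = mva | (((asid | 1) & 0xff) << 48);

                asm volatile ("dsb ishst" : : : "memory");
                asm volatile ("tlbi vale1is, %0" : : "r" (user));
                asm volatile ("tlbi vale1is, %0" : : "r" (kern));
                asm volatile ("dsb ish" : : : "memory");
                asm volatile ("isb" : : : "memory");
        }
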
  9. core: pager: use tlbi_mva_asid() where applicable

    Instead of invalidating a virtual address for all ASIDs, only target
    the relevant ones. For kernel mode mappings all ASIDs still need to
    be targeted, though.
    
    Reviewed-by: Etienne Carriere <[email protected]>
    Signed-off-by: Jens Wiklander <[email protected]>
    jenswi-linaro committed Jul 1, 2019 · 11d6ccc
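
    A hedged sketch of the resulting choice in the pager;
    tlbi_mva_allasid() is an assumed all-ASID counterpart and the wrapper
    is illustrative:

        #include <stdbool.h>
        #include <stdint.h>

        void tlbi_mva_asid(uintptr_t va, uint32_t asid);        /* assumed */
        void tlbi_mva_allasid(uintptr_t va);                    /* assumed */

        static void inval_page_tlb(uintptr_t va, uint32_t asid, bool user)
        {
                if (user)
                        tlbi_mva_asid(va, asid); /* only the relevant pair */
                else
                        tlbi_mva_allasid(va);    /* kernel: every ASID */
        }
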
  10. core: pager: use icache_inv_user_range()

    Prior to this patch the entire icache was invalidated whenever an
    icache invalidation was needed, even if only a single page was
    affected. This was needed to reach a stable state with regard to
    paging user TAs.

    With this patch a new function, icache_inv_user_range(), is used to
    invalidate pages used by user TAs, while icache_inv_range() is used
    to invalidate kernel mode pages.
    
    Reviewed-by: Etienne Carriere <[email protected]>
    Signed-off-by: Jens Wiklander <[email protected]>
    jenswi-linaro committed Jul 1, 2019 · 2b0ebf9
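
    A hedged sketch of the patched behaviour: per-page invalidation with
    the helper that matches the mapping, instead of invalidating the
    whole icache. Prototypes and the wrapper are illustrative
    assumptions:

        #include <stdbool.h>
        #include <stddef.h>

        void icache_inv_user_range(void *va, size_t len);       /* assumed */
        void icache_inv_range(void *va, size_t len);            /* assumed */

        static void inval_icache_page(void *va, size_t len, bool user_page)
        {
                if (user_page)
                        icache_inv_user_range(va, len); /* user TA mapping */
                else
                        icache_inv_range(va, len);      /* kernel mapping */
        }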