
CVE-2017-5754 #2048

Merged
merged 13 commits into from
Jan 10, 2018
Conversation

@jenswi-linaro (Contributor) commented Jan 4, 2018

Fixes CVE-2017-5754

Note that the first commit "core: arm32: invalidate branch predictor" is also supplied in #2047 and should preferably be reviewed there. It's supplied here to avoid a later merge conflict.

@jenswi-linaro (Contributor Author)

Note that we're now using two ASIDs per context: one while in kernel mode and another while in user mode. This isolates which TLB entries are valid for the ASID used in user mode. Without this we would need to invalidate all TLB entries holding kernel mappings.

The key commits in this pull request are "core: use minimal kernel map in user space" and "core: make all mappings non-global". The former unmaps kernel mode memory while in user mode, and the latter makes all mappings non-global, thus binding every TLB entry to a specific ASID.

@jforissier (Contributor)

Seems to be working fine on HiKey960, and I have noticed no performance penalty with xtest --aes-perf.

However, with CFG_CORE_UNMAP_CORE_AT_EL0=n:

root@HiKey960:/ xtest --aes-perf
E/TC:0 tee_ta_open_session:608 Failed. Return error 0x3f085080
TEEC_OpenSession: 0x3f085080 (orig=4)
root@HiKey960:/

😭

@glneo (Contributor) commented Jan 7, 2018

Although OP-TEE should favor security over performance, do we have any numbers on the performance hit?

Anyway, unrelated to CVE-2017-5754, this separation is just good security and I'm surprised it wasn't like this before.

Acked-by: Andrew Davis <[email protected]>

@jenswi-linaro (Contributor Author)

Update, fixes CFG_CORE_UNMAP_CORE_AT_EL0=n

@jforissier (Contributor)

Update, fixes CFG_CORE_UNMAP_CORE_AT_EL0=n

Yup, tested OK on HiKey960.

@jenswi-linaro (Contributor Author)

@glneo, I've done a simple test now on HiKey. The results are below; as you can see, there's almost no difference at all.

64-bit: CFG_CORE_UNMAP_CORE_AT_EL0=y CFG_WITH_PAGER=n
time xtest:
real 1m 19.12s
user 0m 0.10s
sys 1m 20.82s

64-bit: CFG_CORE_UNMAP_CORE_AT_EL0=n CFG_WITH_PAGER=n
time xtest:
real 1m 19.82s
user 0m 0.09s
sys 1m 21.21s

@@ -83,10 +83,18 @@
mcr p15, 0, \reg, c2, c0, 0
.endm

.macro write_ttbr0_64bit reg0:req, reg1:req
Contributor:

It's a bit weird to use :req here because other macros don't use it. I don't think it's really needed since we'll get an assembly error anyway, should a parameter be missing.

Contributor Author:

You could on the other hand argue that it's weird to not require a parameter that is used unconditionally. I don't care that much, if you prefer that I remove it I'll do so.

Contributor Author:

Doing this in #2054 which I'll rebase this PR on at some stage.

@@ -261,6 +261,14 @@
mrc p15, 0, \reg, c13, c0, 3
.endm

.macro write_tpidrprw reg
Contributor:

The commit comment does not reflect the proper names. Please s/tidrprw/tpidrprw/ and s/TIPIDRPRW/TPIDRPRW/.

Contributor Author:

Doing this in #2054 which I'll rebase this PR on at some stage.

@@ -63,8 +64,9 @@
#define TEE_MMU_UCACHE_DEFAULT_ATTR (TEE_MATTR_CACHE_CACHED << \
TEE_MATTR_CACHE_SHIFT)

/* Support for 31 concurrent sessions */
static uint32_t g_asid = 0xffffffff;
#define MMU_NUM_ASIDS 64
Contributor:

Could we keep a comment about what this means regarding the number of concurrent sessions?

Contributor Author:

Yes, I'll add a comment.

dsb ishst /* Sync with table update */
write_tlbiasidis r0 /* Inval unified TLB by ASID Inner Sharable */
orr r0, r0, #1 /* Select the paired ASID */
Contributor:

Nitpicking: I'd rather s/paired/kernel/ since that's what it's used for.

lsl x0, x0, #TLBI_ASID_SHIFT
dsb ishst /* Sync with table update */
tlbi aside1is, x0 /* Invalidate tlb by asid in inner shareable */
orr x0, x0, #BIT(TLBI_ASID_SHIFT) /* Select the paired ASID */
Contributor:

Here also s/paired/kernel/

extern long thread_user_kdata_offset;
extern size_t thread_user_kdata_size;

extern void thread_vect_table(void);
Contributor:

extern not needed

extern size_t thread_user_kcode_size;
extern vaddr_t thread_user_kdata_va;
extern long thread_user_kdata_offset;
extern size_t thread_user_kdata_size;
Contributor:

In the end, only thread_user_kcode_offset seems to be used outside of core/arch/arm/kernel/thread.c, so why not keep most symbols static in thread.c?

Contributor Author:

Doing this in #2054 which I'll rebase this PR on at some stage.

str[3] = '\0';
str[3] = (attr & TEE_MATTR_PR) ? 'R' : '-';
str[4] = (attr & TEE_MATTR_PW) ? 'W' : '-';
str[5] = (attr & TEE_MATTR_PX) ? 'X' : '-';
Contributor:

Why use upper case? User vs. kernel mode is indicated by the position in the string, so there is no need to use different characters for read/write/execute.

Contributor Author:

I think it's easier to read this way as it stands out a bit.

Contributor:

OK, why not.

@jenswi-linaro mentioned this pull request Jan 8, 2018
@etienne-lms (Contributor)

Hi. Successfully tested on Cortex-A9/b2260.

No issue foreseen, but there is a bit of extra unpaged data due to the added tables in the nozi section, especially with the 32-bit MMU (CFG_WITH_LPAE=n):

  • 32B * number of cores on LPAE
  • 16kB on 32-bit MMU => that's quite a lot, but there seems to be no other way...


v = (vaddr_t)thread_vect_table;
thread_user_kcode_va = ROUNDDOWN(v, CORE_MMU_USER_CODE_SIZE);
thread_user_kcode_size = CORE_MMU_USER_CODE_SIZE * 2;
Contributor:

why page_size * 2?

Contributor Author:

In case the exception vector spans two pages.

Contributor:

OK, thanks. How about a comment at line 896? i.e., /* thread_vect_table always fits within at most 2 pages */

Contributor:

Separate issue maybe: I find CORE_MMU_USER_CODE_SIZE a bit misleading, if it's a page size then maybe it should be called CORE_MMU_USER_CODE_PAGE_SIZE.

Contributor:

Could you explain in the commit text what you mean by "required kernel mapping inside a user mapping"? (just the exception vector, if I understand correctly).


.macro maybe_invalidate_bp
#ifdef CFG_WITH_ARM_TRUSTED_FW
/* Invalidate the branch predictor. */
Contributor:

minor: indentation for the comment and the 2 instructions below.

Contributor Author:

This is from an old patch replaced by #2047

@@ -77,6 +77,11 @@
#define CFG_TEE_RAM_VA_SIZE CORE_MMU_PGDIR_SIZE
#endif

#ifdef CFG_WITH_LPAE
#define CORE_MMU_L1_TBL_OFFSET (CFG_TEE_CORE_NB_CORE * 4 * 8)
Contributor:

* 4 * 8 looks like magic, especially compared to how NUM_L1_ENTRIES is defined in core_mmu_lpae.c. It deserves a bit of explanation: why 4 64-bit wide MMU descriptors to cover 4x1GB?

Regarding the armv7, the comment is straightforward: we need a full L1 16kB wide table.

Contributor Author:

I could base this on NUM_L1_ENTRIES, but that means moving all the needed defines from core_mmu_lpae.c into this file, where they would overlap with the defines above.
That would warrant a cleanup that I don't think we want to do right now.

How about just adding a comment:

/*
 * CORE_MMU_L1_TBL_OFFSET is used when switching to/from reduced kernel mapping.
 * The actual value depends on internals in core_mmu_lpae.c and core_mmu_v7.c which we
 * rather not expose here. There's a compile time assertion to check that these magic numbers
 * are correct.
 */

@@ -618,7 +618,7 @@ void core_mmu_create_user_map(struct user_ta_ctx *utc,
memset(dir_info.table, 0, PGT_SIZE);
core_mmu_populate_user_map(&dir_info, utc);
map->user_map = virt_to_phys(dir_info.table) | TABLE_DESC;
map->asid = utc->context & TTBR_ASID_MASK;
map->asid = utc->mmu->asid;
Contributor:

This should check that the ASID does not exceed the hw limit (32).
As this is a void routine, maybe COMPILE_TIME_ASSERT(MMU_NUM_ASIDS <= 2 * 32); with an exported MMU_NUM_ASIDS.

Or tee_mmu.c could check MMU_NUM_ASIDS against the lpae/non-lpae header file.
Or tee_mmu.c could comment MMU_NUM_ASIDS with /* 64 = 2 * 32, matches all mappings supported by core */.

Contributor Author:

Hardware limit of 32? Do you have a reference?

Contributor:

Current AArch32 and AArch64 support up to 32 values for the ASID, no?

Contributor Author:

As far as I know it can be up to 8 bits in AArch32.

Contributor:

Right. 8 bits, 256 values for both the 32-bit MMU and AArch32/AArch64 LPAE.

Contributor:

Resolved in tee_mmu.c. thanks.

@@ -200,7 +200,7 @@ enum desc_type {
};

/* Main MMU L1 table for teecore */
static uint32_t main_mmu_l1_ttb[NUM_L1_ENTRIES]
uint32_t main_mmu_l1_ttb[2][NUM_L1_ENTRIES]
Contributor:

Replace 2 by 1 if !defined(CFG_CORE_UNMAP_CORE_AT_EL0)

Contributor Author:

Thanks, I'll fix.

* while in kernel mode and one to be used while in user mode. These are
* not static as the symbols are accessed directly from assembly.
*/
uint64_t l1_xlation_table[2][CFG_TEE_CORE_NB_CORE][NUM_L1_ENTRIES]
Contributor:

Replace 2 by 1 if !defined(CFG_CORE_UNMAP_CORE_AT_EL0)

Contributor Author:

Thanks, I'll fix

@jenswi-linaro (Contributor Author)

Update

@jforissier (Contributor) left a comment

  • For "core: make core_mmu.h asm friendly",
    "core: tlbi_asid() handle kernel mode ASID too",
    "core: add mobj_tee_ram",
    "core: thread_a32.S: move intr handler macros":
    Reviewed-by: Jerome Forissier <[email protected]>

  • For "core: refactor ASID management":
    Acked-by: Jerome Forissier <[email protected]>

  • For "core: thread: add thread_get_user_kcode()":
    with mention of 'the exception vector (thread_vect_table)' added to the commit message:
    Reviewed-by: Jerome Forissier <[email protected]>

  • For "core: arm32: exception handlers in one section":
    It would be nice to say why it is needed, because when considered on its own, it's no longer obvious that it's being done for KPTI. In any case:
    Reviewed-by: Jerome Forissier <[email protected]>

  • For "core: use minimal kernel map in user space":
    One comment about better documenting CFG_CORE_UNMAP_CORE_AT_EL0, other than that:
    Acked-by: Jerome Forissier <[email protected]>

@@ -43,6 +43,8 @@ ifeq ($(CFG_CORE_LARGE_PHYS_ADDR),y)
$(call force,CFG_WITH_LPAE,y)
endif

CFG_CORE_UNMAP_CORE_AT_EL0 ?= y
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Comment needed here: what it does and why, give the CVE ID, and justify why it's '?= y' always although Arm claims only Cortex-A75 is affected.



@jenswi-linaro (Contributor Author)

Update with the latest commit being 0809044
In the next step I'll rebase on master, update commit messages as requested and apply tags where appropriate.

@jenswi-linaro (Contributor Author)

Rebased, squashed, commit messages updated and tags applied.

/*
* This macro is a bit hard to read due to all the ifdefs,
* we're testing for two different configs which makes four
* differnt combinations.
Contributor:

s/differnt/different/

#ifdef CFG_CORE_UNMAP_CORE_AT_EL0
/* Update the mapping to use the full kernel mode mapping. */
sub r0, r0, #CORE_MMU_L1_TBL_OFFSET
#endif
Contributor:

Are we sure there is no offset MSB part to subtract from r1?
I would think there is none, but is it possible the L1 is above the first 4 GByte?

Contributor Author:

Thanks, I'll change this into 64-bit arithmetic:

                subs    r0, r0, #CORE_MMU_L1_TBL_OFFSET
                sbc     r1, r1, #0


TEE_MMU_UMAP_STACK_IDX;
struct tee_ta_region *prev_region = utc->mmu->regions +
TEE_MMU_UMAP_STACK_IDX - 1;
struct tee_ta_region *region = prev_region + 1;
Contributor:

minor: is this change still useful?

Contributor Author:

No, you're right. This is a leftover from an earlier approach.

@jforissier (Contributor)

@jenswi-linaro (Contributor Author)

I'd rather update the documentation in another pull request, as the documentation requires extensive updates; it doesn't even mention LPAE.

@jforissier (Contributor)

@jenswi-linaro I agree if the doc is already wrong, I don't agree if it is simply incomplete right now and this PR would make it wrong.

@etienne-lms (Contributor)

I guess https://github.com/OP-TEE/optee_os/blob/master/documentation/optee_design.md#5-mmu should be updated?

The doc is mostly still applicable, since it does not give much detail. However, it currently shows the non-LPAE setup only.

For the PR: Reviewed-by: Etienne Carriere <[email protected]> for all but "core: use minimal kernel map in user space", for which: Acked-by: Etienne Carriere <[email protected]>

Makes core_mmu.h assembly friendly by excluding C code with #ifndef ASM

Reviewed-by: Etienne Carriere <[email protected]>
Reviewed-by: Jerome Forissier <[email protected]>
Acked-by: Andrew Davis <[email protected]>
Signed-off-by: Jens Wiklander <[email protected]>
Refactors Address Space Identifier management. The field in struct
user_ta_ctx is moved into struct tee_mmu_info and renamed to asid.

Allocation refactored internally with asid_alloc() and asid_free()
functions, based on bitstring.h macros.

Base ASIDs start at 2 and are always even.

ASIDs with the lowest bit set are reserved as the second ASID when using
ASIDs in pairs.

Reviewed-by: Etienne Carriere <[email protected]>
Acked-by: Jerome Forissier <[email protected]>
Acked-by: Andrew Davis <[email protected]>
Signed-off-by: Jens Wiklander <[email protected]>
When invalidating an ASID (lowest bit 0), also clear the paired ASID
(lowest bit 1).

Reviewed-by: Etienne Carriere <[email protected]>
Reviewed-by: Jerome Forissier <[email protected]>
Acked-by: Andrew Davis <[email protected]>
Signed-off-by: Jens Wiklander <[email protected]>
Adds mobj_tee_ram to describe TEE RAM mapping inside a user mapping.

Reviewed-by: Etienne Carriere <[email protected]>
Reviewed-by: Jerome Forissier <[email protected]>
Acked-by: Andrew Davis <[email protected]>
Signed-off-by: Jens Wiklander <[email protected]>
Adds thread_get_user_kcode() to report required kernel mapping
(exception vector and some associated code in the same section as the
vector) inside a user mapping.

Reviewed-by: Etienne Carriere <[email protected]>
Reviewed-by: Jerome Forissier <[email protected]>
Acked-by: Andrew Davis <[email protected]>
Signed-off-by: Jens Wiklander <[email protected]>
Adds a minimal kernel mapping needed when user mapping is active.

Reviewed-by: Etienne Carriere <[email protected]>
Acked-by: Andrew Davis <[email protected]>
Signed-off-by: Jens Wiklander <[email protected]>
Adds a second translation table to be used while in user mode containing
user mode mapping and a minimal kernel mapping.

Reviewed-by: Etienne Carriere <[email protected]>
Acked-by: Andrew Davis <[email protected]>
Signed-off-by: Jens Wiklander <[email protected]>
Moves all exception handlers into the section of the vector,
.text.thread_vect_table. This makes it possible to later map just the
exception vector and the closest associated code while in user mode.

Reviewed-by: Etienne Carriere <[email protected]>
Reviewed-by: Jerome Forissier <[email protected]>
Acked-by: Andrew Davis <[email protected]>
Signed-off-by: Jens Wiklander <[email protected]>
Moves the interrupt handler macros closer to the vector.

Reviewed-by: Etienne Carriere <[email protected]>
Reviewed-by: Jerome Forissier <[email protected]>
Acked-by: Andrew Davis <[email protected]>
Signed-off-by: Jens Wiklander <[email protected]>
Adds a trampoline in the exception vector to switch to a minimal kernel
map when in user mode. When returning to kernel mode the full kernel
mode map is restored.

Arm32 tries to mimic the arm64 exception model somewhat by letting each
exception handler run with disabled asynchronous aborts, irq and fiq.

For arm32, accesses to the CPU's thread_core_local are only done via the
stack pointer in abort mode. Entry of user mode is only done via abort
mode, which means that the abort mode spsr register carries the new cpsr.
Care is taken to have all exceptions disabled while using abort mode.

ASIDs are paired: a user mode ASID with the lowest bit set and a
kernel mode ASID with the lowest bit cleared.

ASID 0 is reserved for kernel mode use when there's no user mode mapping
active.

With this change an active user mode mapping while in kernel mode uses
(asid | 0), and while in user mode (asid | 1). The switch is done via
the trampoline in the vector.

Acked-by: Etienne Carriere <[email protected]>
Acked-by: Jerome Forissier <[email protected]>
Acked-by: Andrew Davis <[email protected]>
Signed-off-by: Jens Wiklander <[email protected]>
Makes all mapping non-global to avoid the otherwise required tlb
invalidation when switching to user mode.

This change makes the fix for CVE-2017-5754 complete.

Reviewed-by: Etienne Carriere <[email protected]>
Acked-by: Andrew Davis <[email protected]>
Signed-off-by: Jens Wiklander <[email protected]>
Renames mattr_uflags_to_str() to mattr_perm_to_str() and reports all
permission bits using a 7-byte string instead.

This allows observing the permissions of the minimal kernel mapping
added to the user space context.

Reviewed-by: Etienne Carriere <[email protected]>
Acked-by: Andrew Davis <[email protected]>
Signed-off-by: Jens Wiklander <[email protected]>
@jenswi-linaro (Contributor Author)

Squashed and tags applied.
Note that the updated documentation is not yet reviewed.

@jforissier (Contributor)

For the doc update:
Acked-by: Jerome Forissier <[email protected]>

This only describes the already documented legacy ARMv7 (short) table
format.

Acked-by: Jerome Forissier <[email protected]>
Signed-off-by: Jens Wiklander <[email protected]>
@jenswi-linaro (Contributor Author)

Tag applied

@jforissier (Contributor)

Thanks @jenswi-linaro 👍

@jforissier jforissier merged commit 294a3c3 into OP-TEE:master Jan 10, 2018
@jenswi-linaro jenswi-linaro deleted the cve-2017-5754 branch January 10, 2018 15:52
@etienne-lms (Contributor) commented Jan 10, 2018 via email

@jenswi-linaro (Contributor Author)

Thanks, I've found the proper fix in #2060

@etienne-lms (Contributor) commented Jan 10, 2018 via email

@etienne-lms (Contributor) commented Jan 10, 2018 via email
