
[RFC] Provision TEE threads for system invocation (take2) #109

Closed
wants to merge 5 commits

Conversation

etienne-lms

Supersedes #108.

During the reviews on how to provision TEE threads for system services, we considered a scheme where the Linux kernel monitors system and regular TEE sessions, without extra services on the OP-TEE secure side.

This RFC pull request proposes such an implementation. It is shared here and will be posted to the LKML if it is the preferred way, versus what was proposed in #108.

Export find_session(), renamed optee_find_session(), to find a TEE context
session from its session ID.

Signed-off-by: Etienne Carriere <[email protected]>
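
For reference, a minimal sketch of what the exported lookup could look like, assuming the existing optee driver types (struct optee_context_data, struct optee_session) from optee_private.h; the exact prototype is the one defined by the patch:

struct optee_session *optee_find_session(struct optee_context_data *ctxdata,
					  u32 session_id)
{
	struct optee_session *sess;

	/* Caller is expected to hold ctxdata->mutex while walking the list */
	list_for_each_entry(sess, &ctxdata->sess_list, list_node)
		if (sess->session_id == session_id)
			return sess;

	return NULL;
}
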
Adds an argument to the do_call_with_arg() handler to tell whether the call
is a system call or not. This change always sets this info to false,
hence no functional change.

This change prepares for the management of system invocations proposed in a
later change.

Signed-off-by: Etienne Carriere <[email protected]>
Changes optee_close_session_helper() to get the session reference as an
argument instead of the session ID.

Signed-off-by: Etienne Carriere <[email protected]>
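
Roughly, the change swaps the session ID for the session reference the caller already holds; a sketch of the before/after prototypes (assuming the driver's struct optee_session, exact naming per the patch):

/* Before: the helper receives the session ID and looks the session up */
static int optee_close_session_helper(struct tee_context *ctx, u32 session);

/* After (sketch): the caller passes the session reference directly */
static int optee_close_session_helper(struct tee_context *ctx,
				      struct optee_session *sess);
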
Adds the kernel client API function tee_client_system_session() for a client
to request a system service entry in a TEE context.

The OP-TEE SMC ABI entry uses this information to monitor the TEE thread
contexts available in OP-TEE and to ensure a system session always has a
provisioned entry context ready to use.

This feature is needed to prevent a system deadlock when several TEE
client applications invoke the TEE and consume all the TEE thread contexts
available in OP-TEE. The deadlock can happen, for example, if all these
TEE threads issue an RPC call from the TEE to the Linux OS to access eMMC
RPMB partition data, where the device clock or regulator controller is
accessed through an OP-TEE SCMI service. In that case, the Linux SCMI
driver must reach the OP-TEE SCMI service without waiting for one of the
consumed TEE threads to be freed.

Signed-off-by: Etienne Carriere <[email protected]>
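
As an illustration only, a kernel client could request this as sketched below; ctx and service_uuid are placeholders, and the tee_client_system_session() prototype shown (TEE context plus session ID) is an assumption about this RFC, not a settled API:

struct tee_ioctl_open_session_arg sess_arg = { };
int ret;

/* Open a regular TEE session first (no parameters in this sketch) */
export_uuid(sess_arg.uuid, &service_uuid);
sess_arg.clnt_login = TEE_IOCTL_LOGIN_REE_KERNEL;
sess_arg.num_params = 0;
ret = tee_client_open_session(ctx, &sess_arg, NULL);
if (ret || sess_arg.ret)
	return ret ? ret : -EINVAL;

/*
 * Then ask the optee driver to treat it as a system session, so a TEE
 * thread context is kept provisioned for its invocations.
 */
ret = tee_client_system_session(ctx, sess_arg.session);
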
Changes the SCMI optee transport to call tee_client_system_session()
to request the optee driver to provision an entry context in OP-TEE
for processing OP-TEE messages. This prevents a possible deadlock
when all OP-TEE threads are consumed while they may be waiting for
a clock or regulator to be enabled through the SCMI OP-TEE service,
which itself requires a free thread context to execute.

Signed-off-by: Etienne Carriere <[email protected]>
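
In the SCMI optee transport, the call would typically sit right after the transport opens its TEE session during channel setup; a short sketch, reusing names from the existing drivers/firmware/arm_scmi/optee.c purely for illustration:

/* During SCMI optee channel setup (sketch) */
ret = open_session(scmi_optee_private, &channel->tee_session);
if (ret)
	return ret;

/*
 * Make it a system session so SCMI requests issued from an OP-TEE RPC
 * path (e.g. clock or regulator operations) always find a free TEE
 * thread context.
 */
ret = tee_client_system_session(scmi_optee_private->tee_ctx,
				channel->tee_session);
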

@jenswi-linaro left a comment


This is quite nice.
I'll try to propose something less complicated with regards to optee_smc_do_call_with_arg().

optee->smc.invoke_fn(OPTEE_SMC_GET_THREAD_COUNT,
0, 0, 0, 0, 0, 0, 0, &res);
if (!res.a0) {
thd->thread_cnt = res.a1;


This should be initialized during the call to optee_probe(), then there's no need for serialization.
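
For instance, something along these lines in optee_probe(), reusing the OPTEE_SMC_GET_THREAD_COUNT query quoted above (the thread-count field name is taken from this RFC and may differ):

/* Query once at probe time, before any session exists (sketch) */
struct arm_smccc_res res;

optee->smc.invoke_fn(OPTEE_SMC_GET_THREAD_COUNT,
		     0, 0, 0, 0, 0, 0, 0, &res);
if (!res.a0)
	optee->thread.thread_cnt = res.a1;	/* no concurrent callers yet */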

int rc = -EINVAL;
unsigned long flags;

if (thd->best_effort)


This can change in call_out_of_thread()

@@ -865,13 +952,15 @@ static void optee_handle_rpc(struct tee_context *ctx,
* Returns return code from secure world, 0 is OK
*/
 static int optee_smc_do_call_with_arg(struct tee_context *ctx,
-				       struct tee_shm *shm, u_int offs)
+				       struct tee_shm *shm, u_int offs,
+				       bool system_call)


"System call" in a kernel context will make most kernel developers think of something different; how about system_thread or sys_thread?

@@ -907,6 +996,12 @@ static int optee_smc_do_call_with_arg(struct tee_context *ctx,
}
/* Initialize waiter */
optee_cq_wait_init(&optee->call_queue, &w);

/* May wait for a thread to become available in secure world */
if (!optee->thread.best_effort)


This logic is too complicated, I'll try to propose something integrated with the optee_cq_*() functions.

Author

@etienne-lms left a comment


Thanks for looking at this.
The fact that some TEE threads may not be available (may be lost) complicates the logic a bit.

@jenswi-linaro

The fact that some TEE threads may not be available (may be lost) complicates the logic a bit.

That's one of the things I think can be simplified: as long as the logic doesn't break down completely, that problem can be ignored. I mean, there are much worse things that can happen if the secure world starts to misbehave.

@etienne-lms
Author

I mean, there are much worse things that can happen if the secure world starts to misbehave.

In that case, it is not a misbehaviour from the TEE; it is rather how the normal world software plays with OP-TEE threads. An earlier boot stage could have left a thread suspended. It is not expected, but what if it happens? The TEE and REE should not panic.

@jenswi-linaro

I mean, there are much worse things that can happen if the secure world starts to misbehave.

In that case, it is not a misbehaviour from the TEE; it is rather how the normal world software plays with OP-TEE threads. An earlier boot stage could have left a thread suspended. It is not expected, but what if it happens? The TEE and REE should not panic.

Agreed, panic should be avoided. However, a boot stage handing over with threads active in the secure world is an error. So whatever we do after that is only best-effort.

@jenswi-linaro

See another attempt at #110

@etienne-lms
Author

Closing, superseded by #110.
