Connecting RPMB to the storage APIs #296

Closed
youssefesmat opened this issue May 12, 2015 · 5 comments
@youssefesmat (Contributor)

Hello,

I have been working with Paul (paswan-ms) on RPMB support in OP-TEE. The following is a proposal for RPMB support that builds upon the encrypted FS changes (pull request 231).


Connecting the APIs

The proposal here is to add another tee_file_operations table that the platform can override.
Today, the tee_svc_storage_XX APIs (tee_svc_storage.c) call the lower-level APIs through the tee_file_ops function table. With the security changes, this table was redirected to the tee_enc_fs_XX APIs, which internally call the lower-level tee_common_fs_XX APIs.

This change adds another table, tee_phy_file_ops, through which the tee_enc_fs_XX APIs call the lower-layer physical storage APIs. Thus, instead of calling tee_common_fs_XX directly, the tee_enc_fs_XX APIs will call tee_phy_file_ops.XX.

The table will be prototyped in tee_fs.h as follows:

extern struct tee_file_operations tee_phy_file_ops;

The table will be defined in tee_fs_common.c in the following way:

#if !defined(CFG_PHY_FS)
struct tee_file_operations tee_phy_file_ops = {
    .open = tee_fs_common_open,
    .close = tee_fs_common_close,
    .read = tee_fs_common_read,
    .write = tee_fs_common_write,
    .lseek = tee_fs_common_lseek,
    .ftruncate = tee_fs_common_ftruncate,
    .rename = tee_fs_common_rename,
    .unlink = tee_fs_common_unlink,
    .mkdir = tee_fs_common_mkdir,
    .opendir = tee_fs_common_opendir,
    .closedir = tee_fs_common_closedir,
    .readdir = tee_fs_common_readdir,
    .rmdir = tee_fs_common_rmdir,
    .access = tee_fs_common_access,
    .stat = tee_fs_common_stat /* needs to be added */
};
#endif

And tee_file_ops in tee_enc_fs.c will be changed to:

struct tee_file_operations tee_file_ops = {
    .open = tee_enc_fs_open,
    .close = tee_enc_fs_close,
    .read = tee_enc_fs_read,
    .write = tee_enc_fs_write,
    .lseek = tee_enc_fs_lseek,
    .ftruncate = tee_enc_fs_ftruncate,
    .rename = tee_phy_file_ops.rename,
    .unlink = tee_phy_file_ops.unlink,
    .mkdir = tee_phy_file_ops.mkdir,
    .opendir = tee_phy_file_ops.opendir,
    .closedir = tee_phy_file_ops.closedir,
    .readdir = tee_phy_file_ops.readdir,
    .rmdir = tee_phy_file_ops.rmdir,
    .access = tee_phy_file_ops.access,
    .stat = tee_phy_file_ops.stat
};

The platform can then define CFG_PHY_FS and provide its own definition of the table in platform-specific files. In our case the platform will define the table to point to the RPMB APIs, for example:

struct tee_file_operations tee_phy_file_ops = {
    .open = tee_rpmb_fs_open,
    .close = tee_rpmb_fs_close,
    .read = tee_rpmb_fs_read,
    .write = tee_rpmb_fs_write,
    .lseek = tee_rpmb_fs_lseek,
    .ftruncate = tee_rpmb_fs_ftruncate,
    .rename = tee_rpmb_fs_rename,
    .unlink = tee_rpmb_fs_unlink,
    .mkdir = tee_rpmb_fs_mkdir,
    .opendir = tee_rpmb_fs_opendir,
    .closedir = tee_rpmb_fs_closedir,
    .readdir = tee_rpmb_fs_readdir,
    .rmdir = tee_rpmb_fs_rmdir,
    .access = tee_rpmb_fs_access,
    .stat = tee_rpmb_fs_stat
};

RPMB APIs

The current RPMB implementation maintains a FAT-like table that stores each file's name, location, and size in the RPMB block. The FAT sits at the beginning of the RPMB block.

Directory Support

Directory support is missing from the current RPMB implementation and needs to be added to support calls from the higher-level encryption APIs. OP-TEE today has only a one-level-deep directory hierarchy, defined as TA_ID/OBJECT_ID: there is a directory for every TA that contains the objects that TA owns.

Short Term Solution

The quickest short-term solution for adding “directory” support to the current RPMB implementation is to store the full path as the file name in the FAT table. The FAT currently supports a maximum file name of 48 characters. The maximum object ID length is 64 characters; adding the 16-byte TA UUID and one directory-separator character requires raising the maximum file name length to 81 characters, i.e. (sizeof(TEE_UUID) + TEE_OBJECT_ID_MAX_LEN + 1).

With this approach the RPMB APIs that need to be implemented will be as follows:

  • Open: This API will search the FAT for the file with the full path in question and return a file descriptor representing the file. A new FAT entry can be created if the file does not already exist. The descriptor can be the index into the FAT table, or a separate file-descriptor table can be implemented.
  • Close: Will release any used memory to represent the file.
  • Read: tee_rpmb_fs_read, this API will be changed to accept a file handle instead of a name since there must have been a preceding open call. This API does not require any functional changes. Note that the entire file must be read at once in the current implementation. In the short term this is acceptable because the encryption filesystem reads the entire file from the underlying storage on the open call.
  • Write: tee_rpmb_fs_write, this API will be changed to accept a file handle instead of a name. This API does not require functional changes. Note that the entire file must be written at once in the current implementation. In the short term this is acceptable because the encryption filesystem writes the entire file to the underlying storage during the close call.
  • Lseek: A call to lseek to 0 will return success, everything else will return a failure since the whole file must be written at once. There is a call to lseek to the end in tee_enc_fs.c to get the file length but this can be changed to use stat instead.
  • Ftruncate: Truncating to any length other than 0 will return an error; truncating to 0, which deletes the file contents, will need to be implemented. tee_enc_fs.c currently truncates the file to the new encrypted file size in tee_enc_fs_close() and then writes the complete new file contents. This can be changed to a truncate(0) followed by a write.
  • Rename: tee_rpmb_fs_rename currently works. This will just need to be changed to take into account the “directory” portion of the name.
  • Unlink: tee_rpmb_fs_rm will work here.
  • Mkdir: This will return a failure and should not be called, because the current implementation of tee_svc_storage_create_file calls open first with the full path; since that call will succeed, the mkdir call is skipped.
  • Opendir: A call to this API will traverse the FAT table and generate a list of entries. A read index will also be stored.
  • Closedir: Will free the generated list of entries.
  • Readdir: Will return the tee_fs_dirent at the current index and increment the index.
  • Rmdir: Will return success if all files in that “directory” have been unlinked.
  • Access: Will return success if the file exists.

Optimizations

One performance issue that arises from the proposed solution is that the entire FAT must be traversed to find a particular UUID/OBJECT_ID file, instead of traversing only that directory's files. This can be improved by:

  1. Creating a hash of the file names to FAT addresses for quick lookup.
  2. Longer term solution of adding directory entries to the FAT. In other words having special entries in the FAT that point to files that store the directory entries and their locations.

Currently, the RPMB code builds a tee_mm_pool_t representing the FAT structure on every write call. This can instead be done once during initialization and kept in memory, avoiding the need to traverse the FAT and rebuild the pool on every write. An in-memory representation of the FAT also avoids having to read the FAT via RPC calls to the normal world.

Open Questions

1. Is there a current ETA for pull request 231 such that we can take a dependency on it?
2. How is locking handled for the lower-level RPCs? In other words, can two threads attempt to write to the RPMB block at the same time?
@jbech-linaro (Contributor)

Hi @youssefesmat , thanks for your proposal. Please give us some time to digest and discuss your suggestions; we will come back with an answer soon.

@jbech-linaro (Contributor)

Hi @youssefesmat ,

Sorry for taking some time to get back to you, but now we have been looking at
your proposal and in general we like your suggestions. However, there are a few
things we would like to raise.

  1. Connecting the APIs
    tee_file_operations
    We would prefer not to have platform-specific tee_file_operations
    tables, i.e., try to have only one for each type in generic code:
    the flash-based one and the RPMB-based one. We could add platform
    specific hooks if we find out that a certain platform has special
    needs.

    To start with, we would like to switch between the two using a compile
    time flag. Later on we would most likely want to make it possible to
    mix the two. For example, it could be useful to leverage RPMB for
    rollback counters while still having the possibility to use flash for
    bulk storage. But for the first patchset we would prefer either/or.

  2. RPMB APIs
    We're OK with your suggestions here.

  3. Optimizations
    If implemented like a write-through cache, we're OK with this.

  4. Open Questions
    Q: Is there a current ETA for pull request 231 such that we can take a
    dependency on it?
    A: Yes, the due date for that particular work is end of June. So that must be
    taken into account. Also, we see that it is critical that we make use of
    "encfs" also when communicating with RPMB, since the communication goes
    via the REE (basically the AES-GCM is important).

    Q: How is locking handled to the lower level RPCs? In other words can two
    threads attempt to write to the RPMB block at the same time?
    A: No, they will be blocked in the Linux kernel driver. See this section in the kernel driver.

@youssefesmat (Contributor, Author)

Thank you very much @jbech-linaro for looking at this. I will apply your feedback.

@jbech-linaro (Contributor)

Hi @youssefesmat ,

So, finally we have a plan from Linaro and it is as follows:

  1. We will take the fork you created and rebase it on top of our master branch. In the first shot we will just use a flag to switch between tee_fs_common.c and tee_rpmb_fs_common.c (things might change slightly here).
  2. We will implement the missing pieces in tee-supplicant: basically accepting messages and calling the eMMC controller using ioctls to the Linux kernel.
  3. Eventually we will implement some kind of simple RPMB emulator used by tee-supplicant, which will make further development and verification a bit easier than having to rely solely on real hardware.

After that we will probably look into the optimizations. As you've already indicated, the FAT look-a-like file system might not be the most efficient way of handling the data.

I would like to close this particular "issue" now, since the work is actually ongoing as we speak. Related to this, I believe we should also close #323, since those patches are more or less the same code as in the fork you gave us (which is what we are trying to rebase and use). Right? Please let me know if you are OK with this proposal.

@ghost commented Dec 22, 2015

I will close this issue based on @jbech-linaro previous comment, please feel free to re-open if you have additional comments and questions about this topic.

@ghost closed this as completed Dec 22, 2015.