
Why should I trust an app? #46

Open
kjetilk opened this issue Oct 7, 2019 · 6 comments

Comments

@kjetilk
Member

kjetilk commented Oct 7, 2019

We're discussing a lot about the technical management of access control, but there's much more to it in the social space. I'm fortunate enough to have kids who trust me enough to ask me whether they can install a certain app on their mobiles and tablets. Although I'm not revealing that to them, the usual answer is "I have no idea". I have very little to go on in terms of deciding whether an app is trustworthy. It is just a bunch of heuristics, and funnily, the kids develop their own heuristics too.

I think this highlights a much bigger problem than the technical ones: we have to enable people to make much more informed decisions about access control on the open Web for Solid to be useful beyond tight social groups. If not, Solid is likely to end up accommodating only a small number of large players, or to become a place where social engineering to extract private information is rampant.

@elf-pavlik
Member

Both user- and resource-associated Authorization Servers (#43) could let users configure some kind of 'trust policies', for example requiring that an app hold specific certifications published by specific entities, etc.
This would affect the Consent Screen shown when the user authorizes an app: it would indicate whether the app meets those policies. IMO this is an even stronger case for dedicated Authorization Servers, which would take on all the responsibilities related to authorizing apps, revoking authorizations (#24), etc.
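
As a rough sketch of what evaluating such a policy at the Consent Screen could look like (all type names, fields, and the meetsPolicy helper below are hypothetical illustrations, not part of any Solid specification):

```typescript
// Sketch of a user-configured trust policy evaluated at the Consent Screen.
// All names below are hypothetical illustrations.
interface AppMetadata {
  clientId: string;
  certifications: { issuer: string; label: string }[]; // claims published about the app
}

interface TrustPolicy {
  requiredCertifications: string[]; // certification labels the user insists on
  trustedIssuers: string[];         // entities whose certifications the user accepts
}

// True if the app carries at least one required certification
// issued by an entity the user has chosen to trust.
function meetsPolicy(app: AppMetadata, policy: TrustPolicy): boolean {
  return app.certifications.some(
    (cert) =>
      policy.trustedIssuers.includes(cert.issuer) &&
      policy.requiredCertifications.includes(cert.label),
  );
}

// The Consent Screen could then annotate the authorization prompt:
const ok = meetsPolicy(
  { clientId: "https://app.example/id", certifications: [] },
  { requiredCertifications: ["privacy-audit"], trustedIssuers: ["https://auditor.example"] },
);
console.log(ok ? "App meets your trust policy" : "App does NOT meet your trust policy");
```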

@kjetilk
Member Author

kjetilk commented Oct 7, 2019

Yes, but that is still just the technical aspect. A random user would not have much to go on to formulate those policies, and dedicated Authorization Servers could become a centralization point. I think we need to do much more to involve the social fabric of the ecosystem: the humans, not the machines.

@coolharsh55

Hi. I agree that there are two forms of "trust" here: one related to security, which concerns certificates etc., and the other about humans, which concerns how and why the app is using their data. For the second one, I am of the strong opinion that users should be able to express their own policies (or reuse them from the community) about how they want to let others use their data, and that such policies should be used to assist them in making decisions and to reduce the ability of apps/companies to manipulate and take advantage of people.

For example, such (machine-readable) policies could help flag the following, or provide additional contextual information when an app requests access, so that the user can make a better-informed decision:

  1. Use of sensitive data categories
  2. Sharing with an absurdly large number of other third parties
  3. Asking access for too much data at once
  4. Vague explanations of why the data is needed
  5. Having to read stupidly large "privacy policies" and "terms and conditions" without actually understanding anything
  6. Not understanding the request and how it will impact them
  7. Not understanding who is the entity or company behind the app
  8. Identifying when the community has flagged an app or an app provider as malicious
  9. Ensuring the "consent" is appropriate to the context, for example not allowing one click to give access to all data at once
  10. Having an indication of a 'risk score' associated with the request - e.g. access to contacts for address books is low risk, access to medical records from a non-health entity is extremely high risk.

In order to enable this, both the app request and the user/community preferences or guidelines need to be in machine-readable form so that the agent can interpret and use them. Otherwise we are likely to continue the current malpractice where users get a notice that only links to a website's T&C, which they either don't read or don't fully understand, and they end up giving access to do something with their data that they had not anticipated or intended.
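
To make this concrete, here is a minimal sketch (hypothetical request shape, category names, and thresholds; not an existing Solid or DPV API) of how an agent could turn a machine-readable access request into flags for the consent UI:

```typescript
// Hypothetical machine-readable access request and flagging logic.
interface AccessRequest {
  dataCategories: string[]; // e.g. "contacts", "health"
  purpose: string;          // stated reason for requesting access
  thirdParties: string[];   // parties the data will be shared with
}

const SENSITIVE = new Set(["health", "biometrics", "financial"]);

// Produce human-readable warnings for the consent UI to display.
function flagRequest(req: AccessRequest): string[] {
  const flags: string[] = [];
  if (req.dataCategories.some((c) => SENSITIVE.has(c))) {
    flags.push("Requests access to sensitive data categories");
  }
  if (req.thirdParties.length > 10) {
    flags.push(`Shares data with ${req.thirdParties.length} third parties`);
  }
  if (req.purpose.trim().split(/\s+/).length < 3) {
    flags.push("Stated purpose is too vague");
  }
  return flags;
}

// Example: an address-book app also asking for health data gets flagged.
console.log(flagRequest({
  dataCategories: ["contacts", "health"],
  purpose: "sync",
  thirdParties: ["https://ads.example"],
}));
```

In practice the categories, purposes, and risk weights would come from the user's or community's published policies rather than being hard-coded.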

@elf-pavlik
Member

> Having to read stupidly large "privacy policies" and "terms and conditions" without actually understanding anything

IMO we should collaborate with community-driven services like https://tosdr.org/ to address this specific problem.

@woutermont
Contributor

Currently, at use.id we're using the OIDC Dynamic Client Registration metadata values policy_uri (policies) and tos_uri (terms of service) to provide users with links to those documents. Going forward, we aim to implement something like the ODRL vocabulary, combined with DPV or gConsent (as described in #55). This legal consent information could possibly be embedded within SAI Data Grants (as described here). There already exists a specification combining ODRL with DPV specifically for Solid. In the end, this should enable both the user and the application to express legal conditions that are also machine-readable, and which can therefore be displayed in a structured manner and even compared programmatically.
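
For illustration, the relevant part of a Dynamic Client Registration payload could look like the sketch below; policy_uri and tos_uri are standard OIDC registration metadata fields, while the client name and URLs are placeholders:

```typescript
// Sketch of an OIDC Dynamic Client Registration payload carrying links
// to the app's policy and terms-of-service documents. URLs are placeholders.
const clientRegistration = {
  client_name: "Example Solid App",
  redirect_uris: ["https://app.example/callback"],
  policy_uri: "https://app.example/privacy-policy",  // surfaced to the user as "policies"
  tos_uri: "https://app.example/terms-of-service",   // surfaced as "terms of service"
};

// A consent screen can link to these documents directly; a future version could
// additionally dereference machine-readable ODRL/DPV descriptions of the same terms.
console.log(JSON.stringify(clientRegistration, null, 2));
```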

@coolharsh55

Hi. I have written an article titled "Making Sense of Solid for Data Governance and GDPR" (https://osf.io/m29hn/) that analyses how Solid in its current state relates to the GDPR's requirements, outlines some possible governance models (for Pods and Apps), and shows how some issues already known to be problematic also apply to Solid. The aim is to emphasise the necessity and importance of answering, through concrete developments, the question this issue has raised. The article also explores some specific ideas for improving things (Section 8).
