By Christopher Allen - [email protected]
Originally published 2004-08-16 at http://www.lifewithalacrity.com/2004/08/progressive_tru.html with minor edits and technical and historical updates in 2015.
I believe that as we evolve social software to better serve our needs and the needs of the groups that we are involved in, we need to figure out how to apply an understanding of how human groups behave and work.
One useful concept I use is what I call "Progressive Trust". The basic idea is to model how trust works in the real world, between real people, rather than relying solely on mathematical or cryptographic trust.
This is how I typically explain progressive trust when I meet someone in person at a conference:
You are now spending your most precious resource, your most unrenewable commodity -- time, in order to listen and understand what I have to say.
Why do you do so? Because by the act of being here together in this common space, at this conference, you have received a very simple credential from me -- that I'm willing to spend time here in a place that you are interested in as well. In turn, I'm willing to spend more time chatting with you for the same reason.
Why do we continue to chat, rather than moving on to talk with other people? Because as we chat we are exchanging a number of credentials -- people we know in common, common interests, meaningful ideas, etc. We may also present credentials typically issued by others, like our business cards, or explain our relationship to the host.
As our unspoken agreement to continue discussion evolves, we typically will unconsciously check to see if others are listening, and adapt our conversation thereafter. If the discussion becomes more personal or serious, we will often find ourselves moving to a more private portion of the room. As our discussions become deeper, we may begin to speak of things that hint at a mutual respect for confidentiality.
Also early on we'll begin to scope out the nature of our time together. Is it only professional, or a potential friendship? Even intimate relationships go through this phase -- are we with someone who wants to date? Is it possible that a future date might lead to something more?
If we agree to meet later to discuss more, before we meet again we may go authenticate some of the credentials given to us. We'll not authenticate all of them, only enough to substantiate the level of assurance that we need for the risk we are taking (which may only be the future loss of wasted time, but even that is a form of risk). This authentication can consist simply of confirming the information given, or it can be as complex as asking for an endorsement from a mutual colleague.
As our collaboration grows, we will find ourselves seeking more and more credentials, endorsements, etc., but they will not be enough. The next level of trust can only be established by experience of commitment -- for instance, do we call back when we said we would? These tests typically start with small things, and then grow to larger ones. At some point this may grow to form simple verbal contracts; over time, richer and deeper social contracts are agreed upon that might never be written down.
Ultimately we may bring in third parties to witness, and thus possibly enforce, our mutual obligations, whether that means just having a mutual colleague view our handshake or friends see us kiss, or having a legal, signed document.
At some point our mutual interests may be so large that we decide not just to collaborate, but to share assets, whether through a partnership, a corporation, or a marriage. Before this is complete, there will be more credentials, authentication of those credentials, and endorsements (talking to former employees, engaging in credit checks, visiting each other's families, taking blood tests), as well as less risky tests of the full contract (signing a term sheet, or a marriage engagement).
This is the way human trust works. It is also very similar to the way that groups work -- a corporation will collaborate with another corporation in the same way, starting with small trust, going on to tests, and ultimately leading up to partnerships and mergers.
Computer trust rarely works the way that human trust does. It starts with mathematical proofs -- that such and such a mathematical problem is extremely difficult to solve, thus a system built on it must also be difficult to break. These are then built on top of each other until a system is created. It often seeks a level of "perfect trust" that is rarely required by human trust.
One of the reasons why I chose to back the then-nascent SSL (Secure Sockets Layer) standard back in 1992-3 was that I felt it mapped much better to the progressive trust model, and thus to human trust, than did its competitors.
At the time, the SET standard was backed by all the major players -- Visa, Mastercard, Microsoft, etc. However, it not only required strong mutual authentication, but it also required multiple authentications. SHTTP, backed by RSA, required digital signatures, preferably from both parties. SSL was not necessarily a clear winner.
But SSL starts out very simple -- first it just connects two parties with an integrity check, then it establishes simple confidentiality between them. If one party wants more confidentiality, they can upgrade to a stronger algorithm. Then one party can request a credential from the other, or both can. Either party has the option to request authentication of those credentials. This now prevents a man-in-the-middle attack. We could even ultimately choose to move into advanced options such as perfect forward secrecy, or non-repudiation.
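You can still see this layering in the modern descendants of SSL. As a minimal sketch, here is how Python's standard ssl module (used here purely for illustration) can express two rungs of that ladder -- an encrypted but unauthenticated channel, then one where the server's credential is requested and verified; the host example.com is a placeholder:

    import socket
    import ssl

    # Rung 1: confidentiality only. The channel is encrypted, but the
    # server's credential is deliberately left unchecked, so a
    # man-in-the-middle is still possible.
    anonymous = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    anonymous.check_hostname = False
    anonymous.verify_mode = ssl.CERT_NONE

    # Rung 2: request the server's certificate and authenticate it
    # against the system's trusted roots -- this is what prevents a
    # man-in-the-middle.
    verified = ssl.create_default_context()

    with socket.create_connection(("example.com", 443)) as raw:
        with verified.wrap_socket(raw, server_hostname="example.com") as tls:
            print(tls.version(), tls.cipher())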
Ultimately you could use SSL to come close to the level of trust that SET tried to establish, but SSL isn't generally used that way because the market decided that only one party needed to send a credential -- the merchant. SSL also proved to be more flexible than just securing web pages or credit cards -- now it is used for things like securing email and text messages, creating VPNs, and playing online games.
Yet advanced options such as client authentication, perfect forward secrecy, etc. are still available. As needs or the marketplace evolve, these features can be enabled, as recently happened after the Snowden revelations.
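Client authentication is a good example of a higher rung the protocol has always kept in reserve. Here is a minimal sketch, again with Python's ssl module, of a server stepping up to demand a credential from the client as well; the certificate file names are hypothetical placeholders:

    import ssl

    # Mutual authentication: the server now requires a credential from
    # the client too -- a higher rung on the same trust ladder.
    # The file names below are placeholders, not real paths.
    server = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server.load_cert_chain(certfile="server.pem", keyfile="server.key")
    server.verify_mode = ssl.CERT_REQUIRED  # demand a client certificate
    server.load_verify_locations("trusted-clients.pem")  # which clients we believe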
Thus progressive trust is a useful conceptual model for understanding how trust might be built using online tools. Look at the tools that you are using now -- do they support various levels of trust, and a natural path between them? Or is trust more binary -- someone is either trusted or not trusted? Are there implicit levels of progressive trust that are part of the culture of your group but are not embodied in the software itself?
Progressive trust also maps well to a user-interface design technique called Progressive Disclosure, which sequences information and actions across several different windows in order not to overwhelm the user. You disclose basic information and choices first, then reveal progressively more complex information and choices, helping the user manage the complexity of the application. Navigating group interactions and culture is also complex, so progressive trust allows you to hide some of the initial complexity of the trust model behind your tools, and thus lower the barriers to entry of your software.
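To make the parallel concrete, here is a minimal sketch of trust-keyed disclosure in Python; the feature names and trust levels are invented for illustration, not drawn from any particular product:

    # Hypothetical mapping from established trust to disclosed choices.
    OPTIONS_BY_TRUST_LEVEL = {
        1: ["read messages"],                     # newcomer
        2: ["read messages", "post messages"],    # credentials exchanged
        3: ["read messages", "post messages",
            "invite members", "share files"],     # commitments demonstrated
    }

    def disclosed_options(trust_level: int) -> list:
        """Reveal only the choices appropriate to the trust already earned."""
        capped = max(1, min(trust_level, 3))
        return OPTIONS_BY_TRUST_LEVEL[capped]

    print(disclosed_options(1))  # a new member sees only the basics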