Sharemind is based on modern cryptographic technologies and we feel that the world should know more about them. Thus, we are starting a series on the #fundamentals of secure computing. The first post in this series comes from Marcel Keller, a talented researcher in programmable secure multi-party computation (MPC) - a key technology behind the Sharemind MPC product.

MPC is a technology that allows multiple parties to come together and privately compute a result from everybody's private inputs. There are multiple aspects of this process that need securing, and Marcel's post is about exactly these security properties that MPC implementations may have.

As a researcher in MPC, I often encounter the question of which scheme to use. Trying to be as balanced as possible, my answer is always "it depends". The reason is that, in my opinion, one cannot talk about MPC without talking about security models. I like to compare this to the fact that encryption makes little sense without key management. In the following blog post, I will try to explain the basics of MPC security models.

Perhaps the easier question is how many parties in the computation are trusted. Arguably the most important distinction is between an honest majority and a dishonest majority. The latter includes two-party computation, because an honest majority has to be a strict majority, and with two parties a single corrupted party already controls half. The distinction matters because MPC with an honest majority can be done using secret sharing alone. Secret sharing is a technique in the domain of information-theoretic security (like the one-time pad), and it does not depend on assumptions such as the hardness of factoring. On the other hand, general MPC for a dishonest majority requires cryptographic primitives which in turn depend on some hardness assumption. The cost of these computational primitives is usually higher than that of secret sharing, which makes MPC with an honest majority cheaper.
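To make the information-theoretic flavor of secret sharing concrete, here is a minimal Python sketch of additive secret sharing. This is the simplest variant; honest-majority protocols typically use threshold schemes such as Shamir's, but the principle is the same. The names and the modulus are illustrative and not taken from any particular MPC product.

```python
import secrets

P = 2**61 - 1  # a Mersenne prime; any sufficiently large prime works here

def share(secret: int, n: int) -> list[int]:
    """Split `secret` into n additive shares modulo P.
    Any n-1 shares are uniformly random, so they reveal nothing about
    the secret (information-theoretic, like a one-time pad)."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Only the sum of *all* shares reveals the secret."""
    return sum(shares) % P

shares = share(42, 3)
assert reconstruct(shares) == 42

# Addition of shared values is local: each party adds its own shares,
# with no cryptographic hardness assumption involved.
a, b = share(10, 3), share(20, 3)
c = [(x + y) % P for x, y in zip(a, b)]
assert reconstruct(c) == 30
```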

The other question revolves around the behavior of dishonest parties and the security guarantee. At the lowest level, one can assume that every party follows the protocol and that dishonest parties only try to extract information from their view. This model is called passive or semi-honest security. Clearly, the result will always be correct in such a setting. If dishonest parties are assumed to deviate from the protocol, however, a stronger requirement is to guarantee correctness whenever the protocol finishes. Here we enter the domain of active or malicious security. It should come as no surprise that this is more expensive to implement, given the increased adversarial power.
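To see why passive security says nothing about correctness once parties deviate, the toy continuation of the sketch above shows how an actively corrupted party can shift an opened result without anyone being able to tell. Actively secure protocols add redundancy (for example, authentication tags or verifiable sharing) to catch exactly this kind of tampering, at extra cost.

```python
# Continuing the additive-sharing sketch: three parties jointly
# reveal a shared value by broadcasting their shares.
honest_shares = share(42, 3)

# A passive (semi-honest) party follows the protocol, so the
# opening is always correct:
assert reconstruct(honest_shares) == 42

# An active adversary controlling party 0 may send a wrong share.
# Plain additive sharing offers no way to detect this: the opening
# still "succeeds" but yields an incorrect result.
tampered = [(honest_shares[0] + 1) % P] + honest_shares[1:]
assert reconstruct(tampered) == 43  # wrong, and undetectably so
```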

Yet another level of security is called fairness, which requires that every party learns the result if any party learns it. This can be hard to achieve. Consider a two-party protocol where one party is supposed to send the last message to let the other learn the result. If the former is corrupted, it could skip that message and thus keep the result to itself. There are ways of solving this at the cost of several extra rounds. Finally, the highest level is guaranteed termination, where every party learns the correct result in any case. This is impossible for two parties: if one stops responding, there is no way for the other to compute any information that they do not hold already.
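The last-message problem is easy to demonstrate in code. In this deliberately simplified continuation of the sketch, two parties each hold one additive share of the result and must exchange shares to open it; whoever receives the other's share first can simply stop responding. The `Party` class is a hypothetical stand-in, not how real protocols structure their output phase.

```python
# Two parties each hold one additive share of the final result.
result_shares = share(42, 2)

class Party:
    def __init__(self, my_share: int):
        self.my_share = my_share
        self.result = None
    def receive(self, other_share: int) -> None:
        # Combining both shares reveals the result.
        self.result = (self.my_share + other_share) % P

alice, bob = Party(result_shares[0]), Party(result_shares[1])

# Protocol: Alice sends her share first; Bob is supposed to reply.
bob.receive(alice.my_share)  # Bob now knows the result...
# ...but a corrupted Bob can simply never send his share back.
assert bob.result == 42 and alice.result is None  # unfair outcome
```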

In conclusion, it should be easy to see that the use case determines the security model, which in turn suggests a particular scheme. For example, if one institution wants to distribute information (and computation on it) over several locations in order to mitigate a potential breach, passive security and a low corruption threshold might be sufficient. On the other hand, for a computation between parties on the open Internet where there is a low level of trust (e.g., key generation ceremonies for a cryptocurrency), one certainly would like resilience against a dishonest majority or even full-threshold security, where one honest party suffices to guarantee security.