The TGMA architecture offers the following benefits:
For example, a user might carry a key fob that uniquely identifies him (e.g., via near field communication) to his car's authentication module as he approaches the vehicle. He then uses the validator app on his smart phone, which has been configured for his identity and for the vehicle, communicating with the vehicle over Wi-Fi or a cellular network to prove his authorization to operate the car. Theft of the key fob, or interception of any of the wireless communication, cannot yield authorization to an attacker. If the app and/or phone employ password protection, as they should, theft of the smart phone does not help an attacker either. This might also be done without a fob if the distance between the authentication module and the validator can be measured with sufficient accuracy.
Recall that the M-AUTH protocol employed by TGMA is practically immune to communication eavesdropping, whether over LAN, cell, or Wi-Fi networks (even if communication is not encrypted, e.g. by WPA or WPA2), so an attacker cannot re-actuate a device by replaying or modifying captured network traffic, or by pretending to be a legitimate system interface.
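The details of M-AUTH are not reproduced here, but the following minimal sketch (with hypothetical names) illustrates the generic challenge-response idea behind replay resistance: because each attempt uses a fresh, unpredictable nonce, a captured response is useless against any later challenge.

    import hmac, hashlib, os

    # Hypothetical illustration only: M-AUTH's actual message formats are not
    # shown in this article. This sketches the generic reason a replayed or
    # modified message fails validation.

    shared_key = os.urandom(32)          # provisioned out of band to both parties

    def make_challenge() -> bytes:
        """System interface issues a fresh, unpredictable nonce per attempt."""
        return os.urandom(16)

    def respond(key: bytes, challenge: bytes) -> bytes:
        """Validator proves knowledge of the key without revealing it."""
        return hmac.new(key, challenge, hashlib.sha256).digest()

    def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    challenge = make_challenge()
    response = respond(shared_key, challenge)
    assert verify(shared_key, challenge, response)

    # Replaying an old response against a new challenge fails:
    assert not verify(shared_key, make_challenge(), response)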
Because a validator is independent of the system interface being accessed, a general-purpose validator implementation can be used to access a large set of system interfaces, even these non-traditional ones. Instead of each product vendor creating its own app for accessing its products, a single, generic validator app can give a user convenient access to many products (much like a person carries a keychain to hold keys for many locks).
Three general use cases for TGMA are presented here. To simplify the presentation, opportunities for various features and improvements are omitted.
This use case is like the previous one except that, as a final step, the system interface invokes a pre-configured sign-on approver (perhaps over HTTP with SSL/TLS, though not necessarily), identifying itself in the request. In this example, a web service on a corporate web server is called to record that Sam has signed on and to return an indication of whether sign-on may proceed.
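As a rough illustration of such an approver call, the sketch below posts the system interface's identity to a hypothetical corporate web service over HTTPS and reads back an approval flag. The endpoint, field names, and response format are assumptions for illustration, not part of TGMA.

    import json
    import urllib.request

    # Hypothetical sketch only: the approver URL, request fields, and response
    # format below are illustrative assumptions.
    APPROVER_URL = "https://corp.example.com/signon-approver"

    def request_signon_approval(system_interface_id: str, user: str) -> bool:
        """Record the sign-on with the corporate approver and ask whether it may proceed."""
        body = json.dumps({"system_interface": system_interface_id,
                           "user": user}).encode()
        req = urllib.request.Request(
            APPROVER_URL,
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:    # HTTPS provides transport security
            reply = json.load(resp)
        return bool(reply.get("approved", False))

    # e.g., proceed only if request_signon_approval("hr-portal", "sam") returns True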
In this standalone mode of operation:
Although the validator and the device have not mutually authenticated directly, they have both mutually authenticated with the server. If either the validator or device did not do this successfully in this transaction, or if an attacker is actively participating, message validation will fail (with very high probability).
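One generic way to see why this holds (not necessarily how M-AUTH does it) is to imagine the server binding both parties to the same transaction using the long-term keys it shares with each of them; a party that has not authenticated with the server, or an attacker injecting traffic, holds neither key and cannot produce a tag the endpoints will accept. The names below are hypothetical.

    import hmac, hashlib, os

    # Hypothetical illustration of server-mediated binding of two parties that
    # each share a long-term key with the server but not with each other.

    validator_server_key = os.urandom(32)   # from Bob's validator account
    device_server_key = os.urandom(32)      # from the device's account

    def server_issue(transaction_id: bytes):
        """Server tags the same transaction identifier under each party's key."""
        tag_for_validator = hmac.new(validator_server_key, transaction_id,
                                     hashlib.sha256).digest()
        tag_for_device = hmac.new(device_server_key, transaction_id,
                                  hashlib.sha256).digest()
        return tag_for_validator, tag_for_device

    def party_check(own_key: bytes, transaction_id: bytes, tag: bytes) -> bool:
        expected = hmac.new(own_key, transaction_id, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    txn = os.urandom(16)
    v_tag, d_tag = server_issue(txn)
    assert party_check(validator_server_key, txn, v_tag)   # validator accepts
    assert party_check(device_server_key, txn, d_tag)      # device accepts
    # An attacker without either long-term key cannot forge a valid tag, so
    # validation fails unless both endpoints authenticated with the genuine server.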
Note that the account information used by the server to authenticate the device would normally be different from that used by the server to authenticate Bob's validator.
It appears the procedure could be simplified by integrating the server component into the device, so why does the validator not mutually authenticate with the device directly in this configuration? We assume in this use case that, for practical reasons (e.g., limited memory, better configurability, simplified implementation, additional security), the device relies on the server to store and manage account information for the device's users. This reduces the amount of software on the device and its data storage needs. More importantly, account information stored on the server can be shared by all of Bob's validators, administered remotely, and so on.
Is it safe to delegate the server component to a third party? What are the implications if the server component is compromised by an attacker? In particular, could a malicious server connect to the device as a legitimate validator? One preventative measure is for both the device and the validator to send only part of the passcode to the server, or for both to send the same one-way hash of the passcode, so that only they know the complete passcode. The connection from the (purported) validator to the device must present the original passcode. In this way the server cannot use its copy of the shared key to impersonate the validator and thereby control the device.
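A minimal sketch of this measure, with hypothetical names: the server receives only a one-way digest of the passcode, which it can compare but not invert, while the device accepts only the original passcode.

    import hashlib, hmac, secrets

    # Hypothetical sketch of the measure described above.
    passcode = secrets.token_hex(8)      # known in full only to device and validator
    passcode_digest = hashlib.sha256(passcode.encode()).hexdigest()

    # What each party discloses to the server during the transaction:
    sent_by_device = passcode_digest
    sent_by_validator = passcode_digest

    # The server can confirm both parties refer to the same passcode ...
    assert hmac.compare_digest(sent_by_device, sent_by_validator)

    # ... but it cannot invert the digest, so it cannot present the original
    # passcode to the device and thus cannot pose as the validator.
    def device_accepts(presented: str) -> bool:
        return hmac.compare_digest(
            hashlib.sha256(presented.encode()).hexdigest(), passcode_digest)

    assert device_accepts(passcode)             # genuine validator succeeds
    assert not device_accepts(passcode_digest)  # a server replaying the hash fails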
In an architecture where the device cannot display a passcode, it does not send a passcode to the server. Instead, the server provides Bob with a passcode after Step (2). In some contexts, such as when the device is non-shareable, this can be taken a step further: rather than requiring Bob to physically activate the device (say, a remote webcam), he activates it by sending it a message.
To give another example, consider a use case where Bob wants to conduct some banking using an app on his smart phone. The app has been issued by Bob's bank and is preconfigured to authenticate by connecting to its server. Bob has a validator installed on his tablet (or laptop or desktop) that has been provisioned to mutually authenticate with the bank.
In a similar use case, Bob needs to interact with a Wi-Fi capable smart device in his home. The device has been configured to authenticate by connecting to a server, which might run on Bob's computer, be provided by the device's vendor, or be hosted as a cloud-based service.
Continued in Part 6.
Recent research has investigated password hashing algorithms that are also memory intensive and designed so that attackers cannot benefit from parallelization (bcrypt, scrypt, the Password Hashing Competition). The goal is to make brute-force searches of a captured password file impractical, even assuming that an attacker has custom hardware and powerful parallel processing capabilities. If weak passwords are permitted, however, there is no benefit, because an attacker will find them quickly anyway. If "strong enough" passwords are used with a "strong enough" digest algorithm, brute-force searches are already sufficiently difficult and there is not a lot to be gained. These brute-force-resistant algorithms have a place somewhere in the middle, where they straightforwardly address a real problem.
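For concreteness, here is a hedged example using scrypt from Python's standard library, one of the memory-hard functions mentioned above; the cost parameters are illustrative, not recommendations, and Argon2 (the Password Hashing Competition winner) would require a third-party package.

    import hashlib, hmac, os

    # Illustrative only: n, r, p control memory and CPU cost; the values below
    # are examples, not tuning advice.
    def hash_password(password: str, salt: bytes) -> bytes:
        return hashlib.scrypt(password.encode(), salt=salt,
                              n=2**14, r=8, p=1, dklen=32)

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        return hmac.compare_digest(hash_password(password, salt), digest)

    salt = os.urandom(16)                       # per-user random salt
    stored = hash_password("correct horse battery staple", salt)

    assert verify_password("correct horse battery staple", salt, stored)
    assert not verify_password("password123", salt, stored)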
But since a password hash computation must be performed for each (legitimate) authentication procedure, brute-force resistant algorithms have a cut-off-your-nose-to-spite-your-face aura because you are also making more work for yourself. Would it not be better to not use conventional passwords at all?
Extremely weak password management practices continue to be used ("7 million unsalted MD5 passwords leaked by Minecraft community Lifeboat").