An approver can consult a set of rules to decide whether to grant or deny user access. Rules might restrict a particular user's access based on the current date, day of the week, or time.
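For illustration only (TGMA does not specify a rule language or API), a day-of-week and time-of-day rule might be expressed in Python roughly as follows; the function name and the grant/deny convention are assumptions made for this sketch:

    # Illustrative approver rule: allow sign-on only on weekdays between
    # 16:00 and 20:00 local time. The rule/decision interface shown here
    # is an assumption, not part of the TGMA specification.
    from datetime import datetime

    def evening_weekday_rule(now=None):
        """Return True to grant access, False to deny."""
        now = now or datetime.now()
        is_weekday = now.weekday() < 5      # Monday=0 .. Friday=4
        in_window = 16 <= now.hour < 20     # 4 PM to 8 PM local time
        return is_weekday and in_window

    print("grant" if evening_weekday_rule() else "deny")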
An approver is specified by a URL configured on the server for a user's account. Communication is secure and the system interface authenticates itself.
For children, regardless of the underlying system, there are several applications:
Administration of the rules and monitoring/logging can be done remotely, in contrast to Parental Controls such as those provided by Mac OS X and other systems.
The sign-on validator might be able to provide some of the same features as an approver, except that a validator is not required to always have Internet access. Since an approver may be any web service, it has considerable flexibility.[2]
It would be useful to be able to automatically invoke a configurable callback when a user signs off. This would enable accurate session lengths to be determined, so that cumulative sign-on time could be computed and tested by an approver rule. Unfortunately, a reliable automatic sign-off callback is not possible in general. Sign off is sometimes implicit, as is often the case in web-based activities or when a system crashes. It may be possible for a system interface to provide an estimate, however, which could be made available to approver rules. As mentioned earlier, an approver could provide a system interface with a constraint on the maximum session duration, which could be used to terminate a session if there is adequate server-side support. An explicit sign-off operation through TGMA is possible and could be useful in some contexts.
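As a rough sketch of how an approver rule might combine estimated session lengths with a cumulative daily limit and a maximum-session-duration constraint; the record keeping, return shape, and Python representation are all assumptions, not part of TGMA:

    from datetime import date, timedelta

    DAILY_LIMIT = timedelta(hours=2)
    used = {}                      # date -> cumulative (estimated) sign-on time

    def record_session(day, length):
        # Estimated session length reported by a system interface.
        used[day] = used.get(day, timedelta()) + length

    def daily_limit_rule(day):
        remaining = DAILY_LIMIT - used.get(day, timedelta())
        if remaining <= timedelta():
            return {"decision": "deny"}
        # Ask the system interface to cap the session at the remaining time.
        return {"decision": "grant", "max_session": remaining}

    record_session(date.today(), timedelta(minutes=90))
    print(daily_limit_rule(date.today()))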
In a common situation where a user is allowed to sign on to any of several closely related systems, such as when they are all within the same organization, a single sign-on capability can simplify authentication both for users and system administrators.
To add single sign-on to Use Case A, immediately after Sam successfully signs on, s1.example.com sends a message to the validator that contains an arbitrary token and a new encryption key. The message is encrypted using the shared session key. The validator stores the token and key in a list, associating it with the server, username, and service:
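As an illustration of the kind of entry the validator might keep, here sketched in Python; the field names are assumptions, not the actual data format:

    import secrets

    sso_entries = []

    def store_sso_entry(server, username, service, token, key):
        sso_entries.append({
            "server": server,       # e.g. "s1.example.com"
            "username": username,   # e.g. "sam"
            "service": service,
            "token": token,         # arbitrary token supplied by the server
            "key": key,             # new encryption key sent with the token
        })

    store_sso_entry("s1.example.com", "sam", "login",
                    token=secrets.token_urlsafe(32),
                    key=secrets.token_bytes(32))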
The format of the token might be an extension of the HTTP Cookie Specification (RFC 6265), describing where the validator is allowed to forward the token (server-spec), a lifetime/expiry date for the token (expiry), and opaque data (opaque-data):
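Purely as an illustration of such a format, and not the actual syntax, a serialized token might resemble a Set-Cookie-style attribute list (the attribute values here are invented):

    opaque-data=dGhpcyBpcyBvcGFxdWU=; server-spec=*.example.com rick.fedroot.com; expiry=Sat, 01 Nov 2025 00:00:00 GMT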
When Sam later initiates sign-on at s2.example.com, which has no account information for sam but which is within the same federation as s1.example.com, Sam selects an identity within the SSO federation example.com on his validator. The validator determines that the previously stored token's server-spec allows the token to be forwarded. Sam allows single sign-on authentication at s2.example.com to proceed. The validator sends the token to the system interface, which has been configured to provide single sign-on for federation example.com and which interacts with s2.example.com to sign on sam in the normal way. The validator uses the encryption key it stored with the token to identify the user. Additional context can be encapsulated within the token and passed to s2.example.com.
Unlike the HTTP cookie mechanism, an arbitrary server/service matching method can be used so that hosts participating in single sign-on need not share a domain suffix. For example, if the token's server-spec also matches rick.fedroot.com, Sam could subsequently initiate sign-on with that server similarly.
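A sketch of how a validator might evaluate a server-spec follows; treating the spec as a list of glob patterns is an assumption made only for this example, since the matching method is left open:

    # Illustrative server-spec matching: a whitespace-separated list of
    # glob patterns, so that participating hosts need not share a domain
    # suffix. The pattern syntax is an assumption.
    from fnmatch import fnmatch

    def token_matches(server_spec, host):
        return any(fnmatch(host, pattern) for pattern in server_spec.split())

    spec = "*.example.com rick.fedroot.com"
    print(token_matches(spec, "s2.example.com"))    # True
    print(token_matches(spec, "rick.fedroot.com"))  # True
    print(token_matches(spec, "evil.example.org"))  # False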
Neither PAM nor the applications that use it have a facility for returning arbitrary information generated during authentication, such as an encryption key, to the application.
The main issue is that post-authentication, the shared encryption key resides on the server and validator, not on the client. But following authentication, it is operations between the client and server that require the encryption key.
It seems that the only practical way to leverage this is a two-step operation. Post-authentication, the validator and server share an ephemeral key. In the first step, the server uses that key in a transaction; the validator stores the key and associates it with the transaction. In the second step, the validator retrieves the key associated with the transaction and sends it to an authenticated server so that it can validate a signature or decrypt using the key.
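The flow might be sketched as follows; the transaction-identifier bookkeeping and the function names are assumptions made for illustration:

    # Illustrative two-step key handover on the validator side.
    keys_by_transaction = {}

    def step1_store(transaction_id, ephemeral_key):
        # Step 1: remember which ephemeral key belongs to which transaction.
        keys_by_transaction[transaction_id] = ephemeral_key

    def step2_release(transaction_id, server_authenticated):
        # Step 2: hand the key back only to an authenticated server, which
        # can then verify a signature or decrypt using it.
        if not server_authenticated:
            raise PermissionError("server not authenticated")
        return keys_by_transaction.pop(transaction_id)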
A satisfactory, practical way to securely transmit the key to the client has not yet been developed:
% ftp alice@example.com:/home/alice/myfile
TGMA passcode: DJLYE
FTP Key?
FTP Key: Nzb2a7
In one solution, the private key might be encrypted and stored in a regular file, but the encryption key would be generated by and stored within the validator. After authentication, the validator would pass the decryption key to the web server over a secure connection. Alternatively, instead of storing the private key with the web server, store it in the validator; after authentication, the validator passes the (unencrypted) private key to the web server.
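A minimal sketch of the first alternative, assuming a symmetric wrapping key held by the validator; Fernet (from the third-party cryptography package) stands in for whatever cipher would actually be used, and the file contents are placeholders:

    from cryptography.fernet import Fernet

    # Provisioning time, on the validator: generate and keep the wrapping key.
    wrap_key = Fernet.generate_key()

    # The server's private key is stored only in encrypted form.
    private_key_pem = b"-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
    encrypted_private_key = Fernet(wrap_key).encrypt(private_key_pem)

    # After a successful authentication, the validator passes wrap_key to
    # the web server over a secure connection; the server recovers the
    # private key in memory only.
    recovered = Fernet(wrap_key).decrypt(encrypted_private_key)
    assert recovered == private_key_pem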
Note that if a server is compromised by an attacker that has obtained superuser privileges, its system administrator must assume that all unencrypted keys, even those that might be in a memory image (core dump, running process), could have been copied. If the private key is stored on the validator, it would be much more difficult for an attacker to obtain a copy of it.
Digital co-signing is similar but, like the two-man rule, employs more than one signature.
One possible (rough) design might add functionality to the openssl(1) utility. Given a command-line flag, the utility initializes, acts as a TGMA server, and waits for a validator connection. After successful M-AUTH, the validator sends a (symmetric) encryption key to the utility, which uses it to perform the requested encrypt/decrypt/sign operation. The user of the validator manages the keys (create, delete, etc.).
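Since the proposed openssl(1) flag does not exist, the following Python stand-in only illustrates the intended division of labor: the validator supplies a symmetric key after M-AUTH, and the utility uses it for the requested operation (here, signing a file with HMAC-SHA256); everything shown is an assumption:

    import hashlib
    import hmac
    import sys

    def sign(path, key):
        # Sign the file contents with the symmetric key supplied by the validator.
        with open(path, "rb") as f:
            return hmac.new(key, f.read(), hashlib.sha256).hexdigest()

    if __name__ == "__main__":
        # In the proposed design the key would arrive from the validator
        # over the TGMA channel after M-AUTH; it is taken from the command
        # line here only to keep the sketch self-contained.
        print(sign(sys.argv[1], sys.argv[2].encode()))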
This architecture is a cross between TGMA and a password management application. It retains many of the features and characteristics of a password manager, but also some of the drawbacks. It offers some advantages, such as:
Its main drawbacks are that user-level passwords are stored in the validator and must be kept synchronized with the system interface, and a server-side TGMA component is still needed (unlike a password manager).
It is not clear that the hybrid architecture is superior to standard TGMA.
Once a server-side account has been configured, account information can be registered on the validator in any of several ways, largely automatically, depending on the degree of security required. For example, account information encoded in a text message or email message might be sent to the device and processed by the validator; a QR code image might be displayed, or supplied as a hardcopy, and scanned by the validator; or the validator might upload the account information over a secure connection with a preconfigured central server. So that it cannot be reused, the encoded account information expires immediately after the account has been enabled, or if it has not been used within a validity period. Configuration shared by the server and validator might be transmitted over an SSL/TLS connection, or using an ad hoc key-agreement protocol (such as Diffie-Hellman) and an encrypted communication channel. If SSL certificates are to be avoided, procedural precautions must be taken to ensure the legitimacy of the two parties; to prevent phishing, a paper copy of codewords might be provided to the user out-of-band, facilitating mutual authentication for provisioning.
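An illustrative provisioning payload, such as the text that might be encoded into a QR code or message; the field names and the fifteen-minute validity period are assumptions:

    import json
    import secrets
    from datetime import datetime, timedelta, timezone

    payload = {
        "server": "s1.example.com",
        "username": "sam",
        "service": "login",
        "enrollment_code": secrets.token_urlsafe(16),   # single use
        "expires": (datetime.now(timezone.utc) + timedelta(minutes=15)).isoformat(),
    }
    print(json.dumps(payload))   # e.g. the text encoded into a QR code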
Additional validators for an account can be registered in the same way. Once the account information is imported, the validator connects to the server to enable the account. Since only a registered validator can act on behalf of its user, any authentication attempt by an unregistered validator can be flagged as an attack (and its IP address blacklisted, for instance); although guessing attacks are impractical, attackers may not know that.
This account information can later be updated at any time, provided the information on the server and the validator is kept synchronized and appropriate security precautions are observed.
Since a shared encryption key is available after every successful mutual authentication procedure, a server can update its account information securely at that time by defining a post-authentication protocol.
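A sketch of such a post-authentication update, with Fernet standing in for the cipher and the message layout assumed purely for illustration:

    import json
    from cryptography.fernet import Fernet

    # Stand-in for the key shared after a successful mutual authentication.
    session_key = Fernet.generate_key()

    # Server side: encrypt the replacement account information.
    update = {"type": "account-update", "key_id": 42}
    message = Fernet(session_key).encrypt(json.dumps(update).encode())

    # Validator side: decrypt with the same session key and apply the update.
    print(json.loads(Fernet(session_key).decrypt(message)))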
A server can unilaterally revoke a user's access simply by deleting account information. If a new account is created with the same user name, it will almost certainly be assigned different key material, rendering invalid any old account information that may be stored on validators.
A user can easily revoke the ability to sign on by deleting an account on the validator. A parent can disable a child's access to an account in this way, for example. Or, access can be suspended by placing (or changing) a password on the account. In both cases the account continues to exist on the server.
Continued in Part 7.