As websites lag in taking action on fundamental, known security problems, Google and Mozilla have started to take matters into their own hands to alert users about server and infrastructure flaws. The latest iteration is Google rolling out, in a recent version of Chrome, a warning and an error that waggle a finger at outdated encryption methods used to secure sessions. Mozilla will follow by January at the latest, possibly sooner. Where are Apple and Microsoft hiding? More on them later.
Security is often a double-edged sword: one edge can be as dull as an old bread knife, while the other can slice you in two so neatly you never knew it happened. Most Internet connections remain in the client/server model, in which a client (a browser, email software, a photo app) contacts a central server or system which handles retrieving and storing data and other online interaction.
As a result, exploitation can happen on either end, because the two ends are asymmetrical: the client and server typically don't run any of the same code, nor do they carry out precisely the same tasks. A hacker doesn't have to crack a server's code to exploit a connection on any network over which the data flows. She or he just needs access to the client software to poke at and find weaknesses. And when a lot of different client software can access a lot of different servers, there are more chances of breaking in--although that heterogeneity also means fewer people are affected by any given hack.
In peer-to-peer networking, a security failure can affect every user all at once, because the software tends to act as client and server dynamically or simultaneously. This is bad (everyone is exposed) but there's also a much higher motivation to fix things.
The clever part about browser makers examining and alerting users is that they take advantage of the asymmetry and variety to the benefit of users. A website will never let you know that it's using outdated security, but a browser can do so with impunity, so long as it's technically accurate. And a site isn't going to lock out Chrome users because Google's browser is tut-tutting.
Will the real certificate please stand up?
Chrome 42, pushed out as a stable desktop release in April, implements the security warnings and errors that Google announced last September in its timeline for phasing out weak website security.
When a user visits a site that uses an outdated method of validating a digital certificate, and that certificate expires during 2016, Chrome will offer a warning, which appears as a yellow yield sign on top of the lock in its location/search bar. If the certificate expires after 2016, the site will still load, but the lock will have a red X overlaid and the "https://" portion in the bar will be struck through in red. It's dramatic.
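Chrome's published rule can be sketched as a simple date check. This is a hypothetical simplification (the function name and return strings are mine, and Chrome's real logic also examines every certificate in the chain), but it captures the policy described above:

```python
from datetime import date

def chrome42_sha1_treatment(expires: date) -> str:
    """Rough sketch of how Chrome 42 treats a SHA1-signed certificate,
    keyed only on the certificate's expiration year."""
    if expires.year <= 2015:
        return "no warning"           # expires before 2016: left alone
    elif expires.year == 2016:
        return "yellow yield sign"    # minor warning over the lock icon
    else:
        return "red X, https struck through"  # treated as insecure

print(chrome42_sha1_treatment(date(2016, 6, 1)))   # prints "yellow yield sign"
```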
Digital certificates are used by web and other servers and software to enable secure sessions in which one party's identity can be vouched for by so-called certificate authorities (CAs). Chrome's warning and error will also occur if any of the certificates used to validate the web server's certificate rely on outdated methods, because a chain of trust can be broken at its weakest link.
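That weakest-link idea can be made concrete with a toy model. The function below is purely illustrative (the names and the strength ranking are mine, and real validation checks signatures, not just algorithm labels), but it shows why one SHA1-signed intermediate taints an otherwise modern chain:

```python
def weakest_hash_in_chain(chain):
    """Toy model: a chain of trust is only as strong as the weakest
    hash algorithm used to sign any certificate in it."""
    strength = {"md5": 0, "sha1": 1, "sha256": 2}
    return min(chain, key=lambda alg: strength[alg])

# A SHA256 leaf signed by a SHA1 intermediate is still a SHA1-grade chain.
print(weakest_hash_in_chain(["sha256", "sha1", "sha256"]))  # prints "sha1"
```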
These certificates are cryptographically "signed": the data that makes them up is fed into a hashing algorithm that performs a series of complicated operations to produce a short, effectively unique sequence of numbers called a hash. The idea with a hash is that very similar inputs--two certificates with a single letter varying in the name of the domain, say--produce dramatically different hashes. Further, those hashes cannot be predicted by examining the text fed into the algorithm.
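You can see this "avalanche" behavior with Python's standard hashlib. The two domain names below (invented for illustration) differ by a single character, yet their digests share essentially nothing:

```python
import hashlib

# Two inputs differing by one byte: "l" vs. the digit "1".
a = hashlib.sha256(b"example.com").hexdigest()
b = hashlib.sha256(b"examp1e.com").hexdigest()

print(a)
print(b)

# Count hex positions where the two 64-character digests happen to match;
# for unrelated digests, roughly 1 in 16 positions match by chance.
matches = sum(1 for x, y in zip(a, b) if x == y)
print(matches, "of 64 positions match")
```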
The upshot? You can be sure a piece of plaintext or data you receive hasn't been tampered with by running it through the same hashing algorithm and confirming the same hash results. The certificate authority's role is to sign that hash cryptographically, and the signature can be verified against the CA's public key already stored in an operating system or browser.
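The hashing half of that check is simple to demonstrate (this sketch omits the CA's signature entirely; it only shows how recomputing a digest catches tampering):

```python
import hashlib

def verify(data: bytes, expected_digest: str) -> bool:
    """Recompute the SHA-256 digest of the data received and
    compare it to the digest that was published alongside it."""
    return hashlib.sha256(data).hexdigest() == expected_digest

original = b"certificate contents"
digest = hashlib.sha256(original).hexdigest()

print(verify(original, digest))                  # prints True: data intact
print(verify(b"Certificate contents", digest))   # prints False: one byte changed
```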
However, given enough time, hashing algorithms fall to brute force or to cleverer mathematics. The MD5 algorithm was widely used for years until (and, unfortunately, well past when) it became clear it was possible to create a collision: two different documents crafted to produce the identical hash, so that a CA's digital signature on one appears equally valid for the other. This allows a rogue MD5-signed certificate to be accepted as legitimate without gaining access to the secret materials used to create the original.
The SHA1 algorithm replaced MD5, but it, too, has long been teetering on the edge of computational advances that would allow the same sort of exploitation. Governments almost certainly can mount such attacks now in some cases; well-funded criminals, perhaps, too.
In 2011 (yes, 2011) leading browser makers and CAs agreed that SHA1's time had passed. And yet CAs continued to issue certificates signed with SHA1, because infrastructure takes time to catch up. NIST, the US standards agency, set 2014 as the last year SHA1 should be considered acceptable, and yet here we are in 2015.
In most cases, generating a new certificate signed with the perfectly acceptable replacement, SHA2, is a minor matter, and costs nothing but time and an infinitesimal amount of computing. (The SHA2 family comprises hashes of several bit lengths; 256, 384, and 512 bits are recommended, with longer being better.)
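Those SHA2 variants differ chiefly in digest length, which is easy to confirm with the standard library:

```python
import hashlib

# The SHA2 family offers several digest sizes; a longer digest
# leaves a larger security margin against collision attacks.
sizes = {}
for name in ("sha256", "sha384", "sha512"):
    h = hashlib.new(name, b"example")
    sizes[name] = h.digest_size * 8
    print(name, sizes[name], "bits")
```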
But CAs have been laggards, because they don't necessarily make any money from migrating their customers to SHA2, although some charge for swapping out SHA1 for SHA2 certificates as if they were being "reissued." It costs CAs money to revise their processes and handle tech support if they stop allowing SHA1 certificates at all, but the change is necessary and critical, and mostly a set of one-time fixed costs.
Thus the current scenario. Google is being aggressive, because the company has gone all-in on removing the weak points of Internet security and integrity. I've written previously about other work Google has done to improve and monitor how certificates are issued and for which domains a certificate may be deemed valid.
Mozilla is a community-driven group, and the sense of its community is that user-facing warnings shouldn't start in earnest until January 1, 2016. From that point, Firefox will tell surfers that a connection is untrusted if any SHA1-signed certificate in the chain of trust from web server to CA was issued on or after that date. Starting January 1, 2017, all SHA1 certificates will be rejected. Mozilla may start showing more user warnings earlier, at its discretion.
The explorers have gone on safari and we can't find them
The shame is that the two biggest operating system makers after Android--Apple and Microsoft--haven't made a stronger showing. As with the Chinese CNNIC registrar's misuse of a root certificate assignment I've written about a couple of times, neither Apple nor Microsoft is currently talking publicly about its plans, laying out a timeline, or warning users. Microsoft last posted about this in 2013, and its plans should loosely follow Mozilla's, unless it's quietly changed them. Apple has said nothing publicly.
The trouble is that CAs are sometimes tied to giant corporations and government agencies. When browser makers expose those ties, shame outdated and flawed security, and bar access, it can cause trouble at the highest levels of companies with global businesses.
But keeping silent isn't good enough in an era in which it's been verified just how much security has been broken, and new exploits are being discovered daily. Apple and Microsoft need to step up to embrace the transparency that Google and Mozilla already have.