
RSA's Corporate Network Security Breached

edited April 2011 in Technology
This has been kicking around the tech news for a couple of weeks now, but the fallout still seems uncertain as details slowly leak out.

First, a little background for those who need it. RSA was founded by three MIT professors who invented the RSA public/private key crypto algorithm (the letters "RSA" are the initials of their last names). A few years back, they were purchased by EMC and are now marketed as "the security division of EMC." Anyway, at least at first glance, you'd think these guys would know a thing or two about security given their backgrounds.

One of their main products is SecurID two-factor authentication. In a nutshell, this consists of a hardware gizmo (usually a fob with a little LCD display on it) or a software one (they have apps for smartphones like the iPhone) that spits out a new single-use password every minute or so. This gets combined with a regular password to authenticate you to whatever server/service you're logging on to -- such as a corporate VPN (which is where I had experience using it). Since the passwords generated by this gizmo are only valid for a minute at a time, and a hacker would need both the gizmo and the password of the user associated with it (each user has a unique gizmo), it makes it that much harder to gain unauthorized access.
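To make the idea concrete: RSA's actual SecurID algorithm is proprietary, but the general scheme works like the standard HOTP/TOTP approach (RFC 4226 / RFC 6238) -- the token and the server share a secret seed, and both derive the same short code from that seed plus the current time window. A minimal sketch, assuming a SHA-1 HMAC and a 60-second window (both assumptions, not RSA's real parameters):

```python
import hashlib
import hmac
import struct
import time

def one_time_code(seed, interval=60, digits=6, now=None):
    """Derive a time-based one-time code from a shared secret seed.

    The token and the authentication server each hold the seed; both
    compute the same code for the current time window, so the code
    itself never has to be stored or transmitted in advance.
    """
    t = time.time() if now is None else now
    counter = int(t // interval)                # which time window we're in
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The key point for this thread: anyone who holds the seed can compute exactly the same codes as the fob. That's why a stolen seed database is so damaging -- possession of the seed is equivalent to possession of the token.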

Well, while RSA hasn't come clean about it, rumors abound that the database containing the mappings between the gizmos' serial numbers and whatever security keys are used to calculate the one-time passwords may have been stolen, therefore nullifying the effectiveness of their product (though not similar products by other companies, mind you).

The attack basically consisted of figuring out the corporate emails of employees at RSA via social networking sites, emailing them innocuous-looking attachments that contained remote access trojans, and gaining access to the corporate network via those trojans -- a basic spearphishing attack. You can read up a bit more on it at Ars Technica.

It'll be interesting to see the fallout of this. As I used to work for EMC (twice) and still know folks there, I've heard about some of the draconian IT policies that they've implemented in response to this attack. They even went so far as to purchase a company that specializes in detecting malware such as this in order to cope. Now, my current employer doesn't use SecurID, so it's not affecting me, but I wouldn't be surprised if some of you are affected.

Comments

  • Yubikey FTW.
  • I have heard that some RSA customers have switched to using Yubikey or other SecurID competitors -- especially since RSA is so secretive about just what the hell happened.

    The idea behind products like Yubikey and SecurID is sound, although I'm not enough of an expert to compare the implementations to see which one did it better. It looks like Yubikey has you set up your own infrastructure as opposed to relying heavily on a "black box" that RSA sells you and that ties in to a big database back at the mothership.
  • rumors abound that the database containing the mappings between the gizmos' serial numbers and whatever security keys are used to calculate the one-time passwords may have been stolen
    That would suck pretty hard.

    We just had to change our passwords because of this attack, but I wonder if we'll just have to switch to a completely different security system.

    It does strike me as interesting that RSA even had an avenue where such an attack was possible. Why would you keep information like that on a machine on a network? Keep it completely separate from everything else and use physical security to control access.
  • The only thing I can think of is ease of distributing database updates to customers. If a customer just ordered 5000 RSA fobs, they are going to need the serial number/key pairs for those fobs for their local RSA black box (unless said black box comes preloaded with the entire database for every customer, which probably has its own security issues). Instead of mailing them a CD with keys on them, for example, why not let them just download them from RSA's website? Oh, and allow the website to access, either directly or indirectly, the database containing these mappings.

    I didn't say this is a smart way to do it... but it certainly is an easy way to do it. Especially if you figure they may have quite a bit of hubris, seeing as how they're RSA.
  • Oh snap. I just put in my request to get a SecurID fob ten minutes ago. I doubt my company even knows about these goings-on. Also, Yubikey FTW.
  • The whole security vs ease of use is a huge issue on the internet. All those fears about various infrastructures in the US that are susceptible to cyber-attack (nuclear power plants, power grids, air traffic control) would mostly disappear if they were to just disconnect them from the internet. Have a closed network running the power grid or whatever. The problem is that it's INSANELY more expensive to do this. You'd think Big Government, of all things, would know better, but ease of use and low cost beats out security in most cases.

    As a result, you have crazy laws being proposed advocating an internet kill-switch.
  • The whole security vs ease of use is a huge issue on the internet. All those fears about various infrastructures in the US that are susceptible to cyber-attack (nuclear power plants, power grids, air traffic control) would mostly disappear if they were to just disconnect them from the internet. Have a closed network running the power grid or whatever. The problem is that it's INSANELY more expensive to do this. You'd think Big Government, of all things, would know better, but ease of use and low cost beats out security in most cases.

    As a result, you have crazy laws being proposed advocating an internet kill-switch.
    Keeping those systems off the internet isn't a cure-all (although it does help). The Stuxnet worm, for example, spread via USB drives in addition to the internet. It first spreads itself over the network; once a computer is infected, it also copies itself to any USB drives attached to that system; and when those drives are attached to another system, it spreads from the USB drive to the new machine.

    This is how they infected the uranium enrichment systems in Iran. While the computers on those systems were not connected to the internet, the computers used by the scientists and technicians operating those systems were. So a technician downloads a driver update for the enrichment control computers over the internet to a USB drive, infects said USB drive in the process, and the worm jumps from the USB drive to the enrichment control computer after it's plugged in to install the driver update.
  • Why would you keep information like that on a machine on a network?
    You know how we talk about the ease of getting jobs and the shortage of good workers in the tech sector all the time? How most everyone is incompetent and we're desperate for more smart people?

    There aren't enough good IT/CS/CE/SE people in the world to handle the world's technology needs. So you end up with situations like this.

    However, it's also true that the "Mission Impossible" computer is useless. A non-networked computer effectively doesn't exist for any practical purpose: a hard drive in a safe would serve the same purpose. There are better solutions that are effectively secure, and the real failure was with the operators and the network security team. ;^)
  • Yep. Even RSA, perhaps the highest-profile security company around, isn't secure. They were infiltrated because they have people working there who aren't sharp enough, because there aren't enough people who are. Even if there were, hiring them would be impossibly expensive.
    The problem is that it's INSANELY more expensive to do this. You'd think Big Government, of all things, would know better, but ease of use and low cost beats out security in most cases.
    The financial world already does this. Extranets and networks like SFTI provide secure infrastructure for financial transactions. The cost of the technology isn't the crippling factor so much as the cost of the human capital to manage it. Even the financials are having a terrible time hiring enough skilled workers: there's no way the government will be able to out-bid them for the limited pool of talent.
  • Why would you keep information like that on a machine on a network?
    You know how we talk about the ease of getting jobs and the shortage of good workers in the tech sector all the time? How most everyone is incompetent and we're desperate for more smart people?

    There aren't enough good IT/CS/CE/SE people in the world to handle the world's technology needs. So you end up with situations like this.

    However, it's also true that the "Mission Impossible" computer is useless. A non-networked computer effectively doesn't exist for any practical purpose: a hard drive in a safe would serve the same purpose. There are better solutions that are effectively secure, and the real failure was with the operators and the network security team. ;^)
    An argument could be made that a computer like this should not be on a public network, but it's okay to have it on a private network. For example, at the Pentagon, your typical employee who requires external internet access would have two computers on their desk -- one hooked up to the public internet and one hooked up only to the Pentagon's private internal network. This, in theory, allows the computers with really secret stuff to still be networked to each other and remain useful while protecting them from leaking information to the public internet (at least without someone copying information using a USB disk or CD from a private network machine to a public one).

    I do agree that the failure in this case lies with the admins/operators/security team. It should not have been possible for the machine of a low-level HR or finance employee (the apparent targets of the initial spearphishing attacks) to access machines containing sensitive data such as the SecurID database.
  • edited April 2011
    The problem is that it's INSANELY more expensive to do this. You'd think Big Government, of all things, would know better, but ease of use and low cost beats out security in most cases.
    The financial world already does this. Extranets and networks like SFTI provide secure infrastructure for financial transactions. The cost of the technology isn't the crippling factor so much as the cost of the human capital to manage it. Even the financials are having a terrible time hiring enough skilled workers: there's no way the government will be able to out-bid them for the limited pool of talent.
    With the exception of maybe the NSA, but the NSA only needs so many IT/computer science geniuses to get its stuff done. Not every government department is going to have the budget necessary to hire NSA or finance-industry caliber people.
    Post edited by Dragonmaster Lou on
  • Well, RSA has finally admitted that its SecurID token system has been compromised. They even admitted that the recent hacking attacks on Lockheed Martin took advantage of the compromised SecurID system. They are now going to replace all 40 million SecurID keyfobs currently in circulation.