Sunday, December 19, 2010

New tools

If you are a corporate information security practitioner and want to try out some new tools during any free time you get over the holidays, check out these tools.

Flint examines firewalls, quickly computes the effect of all the configuration rules, and then spots problems.
    This tool helps forensic investigators parse various log files and artifacts found on suspect systems and produce a body file, which can be used to create a timeline with tools such as mactime from TSK.

    This tool currently supports various logs including Windows OS, IIS, AV logs, and Firefox.

    This is a Nessus reporting tool; its purpose is to let you quickly and easily browse and view your scan jobs without needing to start a Nessus session. Some features include:

    • Simply export scan jobs into XML format and copy to the XML folder
    • View by Risk
    • View by Severity
    • Executive summary as well as detailed reports
    • Ports and services report
    • Vulnerability category report
    • Export scan jobs to Excel (very useful with autofilter enabled).

    Need another tool in your web application testing arsenal? Netsparker has announced a free edition of their well-known commercial product. It has its limitations, but it is worth checking out.

    You can call this a poor man's DLP. It has some basic DLP-like search features, which are useful for organizations that are starting out and want to know what sensitive information is out there. It is a free and open-source, agent-based, centrally managed, massively distributable tool that can simultaneously identify sensitive data at rest on hundreds or thousands of Microsoft Windows systems.

    OWASP Code Crawler
    Are you looking for a simple code auditing tool to show developers how vulnerable their code is? Here is a nice tool developed by the OWASP project. It is a static code review tool that searches for key topics within .NET and J2EE/Java code.

    MANDIANT Web Historian helps users review the list of websites (URLs) stored in the history files of the most commonly used browsers, including Internet Explorer, Firefox and Chrome.

    • Collects web history, cookie history, file download history, and form history
    • Export data sets to XML, HTML or CSV
    • View page thumbnails and indexed content
    • Visualization using bar graphs, pie charts and timelines
    • Shows a quick “report card” of artifacts for various websites

    Saturday, November 20, 2010

    New Adobe Reader security feature - Protected Mode

    Adobe has introduced this new security feature in Adobe Reader X, the latest version of the product. The feature helps prevent exploits seeking to install malware or change the registry. This should help reduce web-based attacks using malicious PDFs, which have skyrocketed in the past year or so; according to Symantec, they accounted for 49% of web-based attacks in 2009.

    Protected Mode is a sandboxing technology based on Microsoft's Practical Windows Sandboxing technique.  All operations required by Adobe Reader to display the PDF file to the user are run in a very restricted manner inside a confined environment, the "sandbox." Should Adobe Reader need to perform an action that is not permitted in the sandboxed environment, such as writing to the user's temporary folder or launching an attachment inside a PDF file using an external application (e.g. Microsoft Word), those requests are funneled through a "broker process," which has a strict set of policies (ACLs) for what is allowed and disallowed to prevent access to dangerous functionality.
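    Conceptually, the broker pattern works like an allowlist gatekeeper. The sketch below is a simplified illustration in Python (the operation names and paths are hypothetical, not Adobe's actual policy), showing how a sandboxed process must route privileged requests through a policy check:

```python
# Simplified sketch of a sandbox "broker" policy check. Operation names and
# target paths are hypothetical illustrations, not Adobe's real policy set.
# The sandboxed renderer cannot perform privileged operations itself; it asks
# the broker, which consults an ACL-like allowlist before acting on its behalf.

ALLOWED = {
    ("write", "%TEMP%/AcroRd32_sbx"),   # scratch files are permitted
    ("read",  "%APPDATA%/Adobe"),       # reading Reader's own settings
}

def broker_check(operation: str, target: str) -> bool:
    """Return True if the sandboxed process may perform the operation."""
    return (operation, target) in ALLOWED

def request(operation: str, target: str) -> str:
    """Funnel a sandboxed process's request through the broker."""
    if broker_check(operation, target):
        return f"broker performs {operation} on {target}"
    return f"denied: {operation} on {target}"

print(request("write", "%TEMP%/AcroRd32_sbx"))  # allowed by policy
print(request("write", "HKLM/Software"))        # registry write is denied
```

    The key design point is that the dangerous capability lives only in the broker process; compromising the sandboxed renderer alone gains the attacker nothing outside the allowlist.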

    This option is enabled by default for all "write" calls on Windows 7, Windows Vista, Windows XP, Windows Server 2008, and Windows Server 2003. Of course, the effectiveness depends on the ACLs and what action is permitted and denied.

    Adobe Reader X is available here.

    Tuesday, November 2, 2010

    PCI 2.0 - What's new?

    The PCI Council has released the new version of the standard, PCI DSS 2.0. Overall I feel it is a welcome change, since it clarifies or modifies some of the requirements rather than leaving them to the QSA's interpretation. The requirements are now more practical, and many ambiguities have been removed. It is a good sign that the PCI Council is listening to the complaints and improving the standard while keeping the core standard and requirements untouched.

    Some of the major changes are below:

    • Added "virtualization components" and added "If virtualization technologies are used, verify that only one primary function is implemented per virtual system component or device. " - This has been one of the most debated controls and now we have more clarity in this area.
    • Clarified that segmentation may be achieved through physical or logical means. 
    • Clarified that direct connections should not be permitted between the Internet and internal networks.  - In the earlier version it was direct "route".
    • Removed specific references to IP masquerading and use of network address translation (NAT) technologies.  - PCI council has now realized that there are other methods to achieve the results. 
    • Clarified that it is permissible to store sensitive authentication data when there is a business justification and the data is stored securely. 
    • Clarified that key changes are required when keys reach the end of their defined cryptoperiod, rather than "at least annually."  - This is another big relief for IT operations folks and developers.
    • Clarified that key custodians should "formally acknowledge" their key-custodian responsibilities rather than "sign a form."  - When companies use online acknowledgements and digitally signed emails, this is no longer required.
    • Clarified that the test should confirm that audit log processes are in place to "immediately restore" log data, rather than that log data should be "immediately available" for analysis.  - This is the most welcome addition for SOC and monitoring groups since they don't need to keep the data "online". This will ensure that we have faster query time and faster analysis of near real-time data. This will also ensure that online data storage requirements and processing powers are no longer an issue, SIEM vendors will love this.
    • Clarified that the internal scan process includes rescans until passing results are obtained, or all "High" vulnerabilities as defined in PCI DSS Requirement 6.2 are resolved.  - This is another welcome addition, IT operations folks can now concentrate on fixing "high" risk vulnerabilities. However, this may force organizations to forget about medium and low risk vulnerabilities.
    • Modified the statement "Verify that noted vulnerabilities were corrected and testing repeated" to state "Verify that noted exploitable vulnerabilities were corrected and testing repeated".  - Even though it is good for the organizations, I have a feeling this is going to be controversial as to how organizations can verify the exploitability, using what services and tools. Is it really required to have a tool like Metasploit, CoreImpact, etc? This may require more clarification.

    The new version is available for download from here.

    Saturday, October 30, 2010

    Firesheep - New tool to hijack open wireless sessions

    Ian Gallagher and Eric Butler’s Firesheep plugin for Firefox has made a lot of news this week. They released the tool at the Toorcon conference.

    More than anything, it demonstrates the security risks of connecting to open wireless networks. Wireless networks are broadcast in nature, which means that clients associated with a particular network have the ability to “see” or “capture” all the traffic passing over that broadcast network. Certain network interface cards and operating systems come with that capture ability, and others don't.

    This tool makes it easy to capture that traffic: it shows all the users connected to the network who are accessing a pre-configured set of web sites (including many well-known social networking and public email sites). The tool then gives the option to access those users' accounts by taking over, or attaching to, their sessions. It does this using a method called sidejacking, or session hijacking: the session IDs (contained in session cookies) exchanged between the web site and the user’s browser over an unencrypted channel are stolen from the open wireless packet captures, and the tool uses those session IDs to establish its own connections to the web sites.

    Web servers typically generate these session IDs, which are unique to a user for a particular session. Session IDs are sent by the server to the client either in a cookie or as a hidden variable. A person who happens to hijack a session ID gets the same privileges as the real user. The problem lies in the lack of encryption of the traffic throughout the session between the web server and the client. Many web sites encrypt only the initial login, to ensure that the login credentials do not get stolen. However, the post-login traffic, which contains these session IDs and cookies, is exchanged over an unencrypted channel. The session IDs and cookies ensure that users do not have to log in every time they load a page during a session.
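    To see why the unencrypted post-login channel is fatal, consider what a sniffer actually captures: a plain-text HTTP request whose Cookie header carries the session ID. A minimal sketch using Python's standard library (the site name and cookie name are made up for illustration):

```python
from http.cookies import SimpleCookie

# A plain-text HTTP request as captured from an open wireless network
# (hypothetical host and cookie names, purely for illustration).
sniffed_request = (
    "GET /home HTTP/1.1\r\n"
    "Host: socialsite.example\r\n"
    "Cookie: session_id=a1b2c3d4e5f6; lang=en\r\n\r\n"
)

# Pull the Cookie header out of the captured request.
cookie_line = next(
    line for line in sniffed_request.split("\r\n") if line.startswith("Cookie:")
)
jar = SimpleCookie()
jar.load(cookie_line[len("Cookie:"):].strip())

# The attacker simply replays the same session ID in their own requests;
# the server has no way to tell the two clients apart.
stolen = jar["session_id"].value
replay_header = f"Cookie: session_id={stolen}"
print(replay_header)  # Cookie: session_id=a1b2c3d4e5f6
```

    No password ever changes hands; possession of the session ID alone is what the server trusts.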

    For those in the web application security world, this is a well-known attack and has been part of the OWASP Top 10 vulnerabilities, or risks, for many years. Firesheep is not the first tool to perform this type of attack: back in 2007, Robert Graham popularized sidejacking with the introduction of the Hamster and Ferret tools, which had similar capabilities. But Firesheep is more user friendly, and even non-geeks can use it on an open wireless network.

    The best preventive method is to force encryption during all stages of information exchange between the web server and the client. This is an effort on the web server side, and many sites are moving towards it. Other options include browser plugins such as HTTPS Everywhere, NoScript and Force-TLS, which essentially force encryption at all times for web sites that offer the option.
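    On the server side, the fix is twofold: serve the whole session over HTTPS, and mark the session cookie Secure so the browser refuses to send it over plain HTTP (HttpOnly additionally keeps it away from page scripts). A sketch of building such a Set-Cookie header with Python's standard library (cookie name and value are illustrative):

```python
from http.cookies import SimpleCookie

# Build a Set-Cookie header for a session cookie that the browser will
# refuse to transmit over unencrypted HTTP (Secure) and will hide from
# JavaScript's document.cookie (HttpOnly). Name/value are illustrative.
cookie = SimpleCookie()
cookie["session_id"] = "a1b2c3d4e5f6"
cookie["session_id"]["secure"] = True     # HTTPS-only transmission
cookie["session_id"]["httponly"] = True   # not readable by page scripts
cookie["session_id"]["path"] = "/"

header = cookie["session_id"].OutputString()
print("Set-Cookie:", header)
```

    With the Secure flag set, a sidejacking tool listening on an open wireless network never sees the cookie in the first place, because the browser only sends it inside the encrypted channel.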

    The slides of their Toorcon talk and the tool are available here

    Saturday, October 23, 2010

    Are privacy, information theft, and data breaches big issues today?

    You bet. It is evident from the fact that many leading newspapers, such as the New York Times, the Washington Post and the Wall Street Journal, are carrying news items and conducting their own investigations into privacy issues and data breaches. Indian newspapers are not far behind; the Times of India reports such cases on a regular basis. Even Dilbert is getting into this.

    The US government is also working on new legislation that enforces more privacy-related controls. It is also encouraging to see that Congressmen are more concerned about privacy breaches. The recent Facebook privacy issue caught the attention of Congressmen Edward Markey and Joe Barton, who have asked Facebook to answer questions regarding the Wall Street Journal report.

    Corporations are also becoming more concerned about privacy and information theft. In a recent survey of 800 senior executives at global firms, commissioned by Kroll, information theft was the most-reported form of fraud, with 27.3% of those surveyed reporting an incident of information theft in the previous 12 months, up from the 18% who reported one in the 2009 survey. The survey also found that, for the first time, data theft has surpassed physical theft.

    The complete report is here.

    As information security practitioners what can we do to help? This calls for better monitoring, new detective/preventive controls and more improvements in the areas of people, process and technology to tackle this problem.

    Tuesday, October 19, 2010

    Facebook - more privacy issues

    An investigation conducted by the Wall Street Journal, my former employer, found that many of the most popular applications on Facebook have been transmitting identifying information (in effect, providing access to people's names and, in some cases, their friends' names) to dozens of advertising and Internet tracking companies. This is true even if you set your profile to Facebook's strictest privacy settings.

    The article is here

    The problem is that applications like FarmVille run on top of Facebook inside "iframes", which lets the application developers do whatever they want with the application, including serving ads and sending whatever information they can collect from the browser, such as the IP address and browser cookies.

    Wednesday, October 13, 2010

    Stuxnet update

    A loyal reader commented on my Stuxnet post mentioning that BitDefender has a free tool to remove the malware.

    From the BitDefender blog:

    BitDefender has added generic detection covering all variants of Stuxnet as of July 19, thus protecting its customers since day zero. Computer users that are not running a BitDefender security solution can now eliminate Stuxnet from the infected systems by running the attached removal tool. The tool can be run on both 32- and 64-bit installations and will eliminate both the rootkit drivers and the worm.

    The tool can be downloaded from here.

    Saturday, October 2, 2010

    State of Software Security

    Veracode, a company involved in application security testing, published a report on the findings from their assessments. The report covers 2,922 applications assessed by Veracode over the last 18 months. Some of their observations are below.

    • More than half of all software failed to meet an acceptable level of security and 8 out of 10 web applications failed to comply with the OWASP Top 10
    • Cross-site Scripting remains the most prevalent of all vulnerabilities
    • No single method of application security testing is adequate by itself
    • The security quality of applications from Banks, Insurance, and Financial Services industries was not commensurate with their business criticality

    The complete report is available here.

    Friday, October 1, 2010

    What is Stuxnet?

    Stuxnet is a piece of malware that spreads via removable drives, and it has been getting a lot of press lately. Malware spreading through removable devices is not a new concept, so what is special about this one? It is the first malware designed to inject code into SCADA systems.

    The initial attack vector is a malicious shortcut file (.LNK) that takes advantage of a recently identified Windows operating system vulnerability (MS10-046). Back in July, I wrote about this vulnerability here.

    When a drive containing the malicious .LNK file is accessed by an application (Windows Explorer or Internet Explorer), the application tries to render the file, which points to a malicious executable. What is interesting is that the user need not double-click the .LNK file to trigger the vulnerability; just opening the folder containing the malicious file is enough to get infected.

    Once executed, the worm searches for SCADA systems manufactured by Siemens. Once the targeted SCADA systems are located, the malware uploads its own code to the programmable logic controllers of the SCADA system, changing the whole behavior of the system. Even though the initial attack vector is the malicious shortcut file, in the second stage the worm exploits an application vulnerability within the Siemens SCADA systems: a hard-coded password, which it uses to actually upload the code.

    Check the links below for more information on this worm 

    Sunday, September 26, 2010

    SiliconIndia Security Conference on October 2

    Received the following from SiliconIndia

    SiliconIndia is organizing a Security Conference on October 2, 2010 in Bangalore. The conference will have two tracks: System & Network Security and Web Security. There will be exciting technical sessions, delivered by the people who know security best. You will learn from top industry experts and leading-edge peers, and experience a complete technical immersion, shared with a developer community that is passionate about all the exciting developments in the security world.

    To register, please visit: 

    Conference on Improving the Technology Trustmark

    This would be of interest to India-based readers.

    Saturday, September 25, 2010

    Web Application Configuration Analyzer (WACA)

    Microsoft published a new tool, Web Application Configuration Analyzer (WACA). This tool scans a server against a set of best practices recommended for production servers. The list of best practices is derived from the Microsoft Information Security & Risk Management Deployment Review Standards used internally at Microsoft to harden production and pre-production environments for line of business applications.

    It uses an agent-less scan that requires the user to have admin privileges on the target server, as well as any SQL Server instances running on that machine.

    • Scan a machine for more than 140 rules
    • Generate HTML based reports
    • Compare two scans to view the differences
    • Export results to Excel
    • Export results to Team Foundation Server

    You can download the tool from Microsoft here .

    Twitter worm and the social networking security debate

    A cross-site scripting vulnerability in Twitter was exploited this week and used to send random tweets to all of a victim's followers. The attack leveraged a common JavaScript event handler, “onmouseover”, which allows developers to trigger actions when visitors move their mouse cursor over a designated area of a web page. So, however many followers a person had, all of them received these random tweets. Check the Kaspersky blog for more information on this
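    The underlying flaw was classic stored XSS: user-supplied text rendered into a page without output encoding. Escaping HTML metacharacters turns an onmouseover payload into inert text. A minimal illustration in Python (the payload below is a simplified stand-in, not the actual worm code):

```python
import html

# A simplified stand-in for the kind of payload the worm used: text that
# breaks out of its attribute context and attaches an onmouseover handler.
payload = '" onmouseover="alert(document.cookie)'

# Vulnerable rendering: the attacker's quote closes the href attribute and
# the browser now sees a live onmouseover handler on the element.
vulnerable = f'<a href="{payload}">tweet</a>'

# Safe rendering: html.escape turns quotes and angle brackets into entities,
# so the whole payload stays inside the attribute value as plain text.
safe = f'<a href="{html.escape(payload, quote=True)}">tweet</a>'

print(vulnerable)
print(safe)
```

    Had Twitter encoded user content this way at output time, the mouse-over tweets would have displayed as harmless text instead of executing.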

    Even though Twitter closed this vulnerability, a lot of damage was done, and it prompted the New York Times to assemble an online debate on social networking security.

    The contributors included some big names like Ross Anderson and Edward Felten

    I particularly liked Ross Anderson’s comments

    The discipline of security economics teaches us that large systems often fail because incentives are poorly aligned; if someone guards a system while someone else bears the cost of failure, then failure is likely. Persistent security failures have the same general causes as market failures, and monopolies are particularly bad

    So as people move from the open environment of the Internet to the walled garden of Facebook, we can expect security to get worse. But that's not all; there are at least three further problems. First, Facebook has a strong incentive to collect as much personal information as possible from its users for sale to advertisers.
    Second, Facebook is trying hard to be the world's identity service provider of choice, so that people use their Facebook account to leave comments on blogs, newspapers and community Web sites. This will make Facebook an even bigger target.

    The entire online debate is available here. This is great stuff and a must-read for social networking security enthusiasts.

    Saturday, September 11, 2010

    New Adobe Reader 0-day

    This week Adobe published an advisory for Reader; from the advisory:

    "This vulnerability (CVE-2010-2883) could cause a crash and potentially allow an attacker to take control of the affected system. There are reports that this vulnerability is being actively exploited in the wild."

    What is interesting about this vulnerability is that the exploit is so sophisticated that it affects all versions of Windows and bypasses Windows mitigations, including DEP and ASLR. I wrote the following while explaining DEP and its security benefits:

    "Last month's Adobe Acrobat critical vulnerability that existed in a function called util.printd, leading to a memory corruption causing code injection, also could have been prevented if organizations had DEP enabled on their machines."

    The Metasploit blog analyzed this exploit and identified the following:

    * Vulnerability Type: Stack Buffer Overflow
    * Bypasses DEP: Yes
    * Bypasses ASLR: Yes
    * Exploit Requires JS: Yes
    * Vulnerability Requires JS: No

    Friday, September 10, 2010

    OAuth and Twitter's implementation

    Last month Twitter officially started using OAuth for all third-party authorization to users' data. What is OAuth, and what does this mean for regular users?

    OAuth is a product of the Internet Engineering Task Force, published as RFC 5849. It provides a method for users to authorize third-party applications to access their resources without sharing their credentials. The protocol originated from the need to provide delegated access to user-controlled resources, such as in mashups; the first version was released in 2007. It is now a widely used protocol on many web sites.

    One good example is a web user granting a third-party service provider, such as a photo printing service, access to the user's private data (photos). In this scenario, the user doesn't need to share credentials, just an authorization to access the private data.

    The service provider is responsible for all the authentication with the third party. Typically, the third party signs up with the service provider and requests specific access to the user's private data, and the provider prompts the user to grant that specific authorization. Upon receiving the authorization, the provider lets the third party access the private data using an access token. The Yahoo developer site provides an excellent overview of this authorization process.
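    Under the hood, RFC 5849 requires each request to be signed: the third party proves possession of its client secret and the token secret without the user's password ever being involved. A condensed sketch of the HMAC-SHA1 signing step from section 3.4 of the RFC (the credentials and URL below are made-up examples):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_request(method, url, params, client_secret, token_secret):
    """Compute an OAuth 1.0a HMAC-SHA1 signature (RFC 5849, section 3.4)."""
    # 1. Normalize parameters: percent-encode each pair, sort, join with '&'.
    encoded = sorted(
        (quote(k, safe=""), quote(str(v), safe="")) for k, v in params.items()
    )
    param_str = "&".join(f"{k}={v}" for k, v in encoded)
    # 2. Signature base string: METHOD&URL&PARAMS, each percent-encoded.
    base = "&".join(quote(p, safe="") for p in (method.upper(), url, param_str))
    # 3. Signing key is client_secret&token_secret, both percent-encoded.
    key = f"{quote(client_secret, safe='')}&{quote(token_secret, safe='')}"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Made-up credentials and endpoint, purely for illustration.
sig = sign_request(
    "POST", "https://api.example.com/1/statuses/update.json",
    {"status": "hello", "oauth_consumer_key": "key123",
     "oauth_token": "tok456", "oauth_nonce": "abc",
     "oauth_timestamp": "1288834974",
     "oauth_signature_method": "HMAC-SHA1", "oauth_version": "1.0"},
    client_secret="csecret", token_secret="tsecret",
)
print(sig)
```

    This also explains why the leaked client secret discussed below matters: anyone holding it can produce valid signatures and impersonate the official application.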

    Are there any known risks?

    A recent article at Ars Technica discusses the insecurities of Twitter's OAuth implementation: the writer was able to compromise the secret OAuth key in Twitter's very own official client application for Android. Once the secret key is compromised, a token can be requested that provides access to a user's data. Users unknowingly click on the authorization request, which exposes their private data.

    Key takeaways

    Key takeaways for end users: 
    • Be aware of third party applications that you allow access to your data. 
    • You should periodically check what applications are installed and remove unnecessary ones. 
    • Also understand that changing your password does not revoke access for these applications. 

    Sunday, August 22, 2010

    Is the eight-character password dead?

    A recent news item on CNN caught my eye, it said "Say goodbye to those wimpy, eight-letter passwords".

    The article is based on research conducted at the Georgia Institute of Technology.

    Their research primarily focused on brute-forcing passwords using the powerful graphics cards available on today's PCs. According to them, any password shorter than 12 characters could be vulnerable.
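    The arithmetic behind the claim is simple: keyspace grows exponentially with length. A quick back-of-the-envelope comparison, assuming a 94-character printable set and an assumed rate of one billion guesses per second (both are illustrative figures, not numbers from the Georgia Tech study):

```python
# Back-of-the-envelope brute-force cost: 94 printable ASCII characters,
# and an assumed guessing rate of 10**9 guesses per second on GPU hardware.
# Both figures are illustrative assumptions, not measured values.
CHARSET = 94
RATE = 10**9  # guesses per second

def years_to_exhaust(length: int) -> float:
    """Worst-case time to try every password of the given length, in years."""
    seconds = CHARSET**length / RATE
    return seconds / (60 * 60 * 24 * 365)

for n in (8, 12):
    print(f"{n} chars: {years_to_exhaust(n):.3g} years")
```

    Under these assumptions an 8-character keyspace falls in a matter of months, while a 12-character one takes millions of years, which is the gap the researchers are pointing at.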

    Most organizations currently use either 6- or 8-character passwords. Considering this, 12-character passwords would be difficult to get buy-in for from the user community and to implement.

    So, should you be worried?

    Not so much, in my opinion, if you have a proper implementation of other controls such as the following:

    • Account lockout (after 3 to 5 attempts)
    • A controlled way to reset passwords
    • Proper verification mechanism for internal and third party users
    • Proper monitoring which looks for unusual account lockouts and brute force attempts
    • Proper segregation of duties
    • Proper server hardening, privilege access control and monitoring
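    The first control on that list, account lockout, is what actually blunts online brute forcing: the attacker gets a handful of guesses, not billions per second. A sketch of the counter logic (the threshold is an illustrative choice, and a real system would hash passwords and compare in constant time):

```python
# Minimal account-lockout counter: lock after N consecutive failures.
# Threshold and reset behavior are illustrative choices, not a standard.
# A production system would store a password hash, not the plain password.
LOCKOUT_THRESHOLD = 5

class Account:
    def __init__(self, password: str):
        self._password = password
        self.failed_attempts = 0
        self.locked = False

    def login(self, attempt: str) -> bool:
        if self.locked:
            return False               # even the right password is refused
        if attempt == self._password:
            self.failed_attempts = 0   # success resets the counter
            return True
        self.failed_attempts += 1
        if self.failed_attempts >= LOCKOUT_THRESHOLD:
            self.locked = True         # would also alert monitoring here
        return False

acct = Account("correct horse")
for guess in ("a", "b", "c", "d", "e"):   # five wrong guesses
    acct.login(guess)
print(acct.locked)                  # True
print(acct.login("correct horse"))  # False: locked out
```

    An attacker limited to a few guesses per lockout window cannot meaningfully explore even an 8-character keyspace, which is why the offline-cracking research above changes little for well-instrumented online systems.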

    While it is good to have more characters in a password, it is not a major concern if you have multiple controls to protect against malicious use.

    This is similar to the FPGA cracking introduced a few years ago to crack WPA keys and Bluetooth PINs; of course, FPGAs are much more expensive than graphics cards.

    Sunday, August 8, 2010

    2010 Verizon DBIR

    The 2010 Verizon Data Breach Investigations Report has been published; here are some of the highlights:

    • 98% of breaches came from servers and application assets, and the top asset type in this category was databases. 
    • 48% of breaches involved privilege misuse. 
    • 48% were caused by insiders, a 26% increase from last year; 90% of these were the result of deliberate and malicious activity.
    • 98% of breaches were avoidable through simple or intermediate controls, a 9% increase from last year.
    • 94% of all compromised records in 2009 were attributed to Financial Services.
    • Payment card data accounted for 78% of total records breached followed by personal information and bank account data.
    • The web continues to be a common path of malware infection. This is often accomplished through SQL injection or after the attacker has root access to a system.
    • In terms of enabling access, backdoors were logically atop the list again in 2009 (tied with keyloggers). 
    • 97% of the 140+ million records were compromised through customized malware.
    • The use of stolen credentials was the number one hacking type.
    • Breaches involving end-user devices nearly doubled from last year. Much of this growth can be attributed to credential-capturing malware.
    • 86% of victims of data breaches had evidence of the breach sitting in the log files of their databases.

    Apart from the recommendations provided by Verizon in the report, here are some more:

    • Identify where your data is.
    • Classify the data and identify the criticality.
    • Make the business people aware of the risk and have them classify the data they handle.
    • Identify compliance requirements such as PCI and implement required controls.
    • Apply additional controls such as DRM tools to secure financial data.
    • Implement tools to control and monitor privileged user activity.
    • Make users accountable for misuse of credentials.
    • Segment the network and implement proper filtering rules on the firewalls (both inbound and outbound).
    • Implement tools to monitor database activity.
    • Implement more effective tools such as application white listing to control malware activity on desktops and servers.
    • Perform proper log analysis and real time threat detection based on logs and network traffic patterns with tools such as network anomaly detection.
    • Practice incident response.

    The full report can be downloaded from here.


    Sunday, August 1, 2010

    How to defend against APT

    I attended a recent presentation by the folks from Mandiant on APT and how to defend against APT attacks.

    If you are still wondering what APT is, head over to my essay on demystifying APT. Richard from TaoSecurity wrote an article in the July issue of Information Security magazine on the same subject.

    The Mandiant talk involved some of the APT cases they handled over the years and discussed common problems they saw at client sites. They also provided remediation solutions and associated implementation challenges.

    Here are some of the notes on the remediation steps from that talk:

    • Limit DynDNS providers (more than 70% of their investigations involved one)
    • Provide appropriate training for information security staff
    • Segment the internal network
    • Patch third-party applications
    • Use password management tools to control privileged users
    • Deploy HIPS and put them in block mode
    • Train users to handle unsophisticated attacks, such as regular social engineering attempts

    Wednesday, July 28, 2010

    Facebook directory listing

    Ron Bowes sent the following to the SANS mailing list yesterday.

    "Hey everybody,

    I spent some time recently spidering Facebook to get every person's name who has an account and is searchable. I released the data from phase 1 of that project today, and thought I'd share:"

    Basically, if you are looking for a directory listing of all Facebook users, similar to your phone directory, then head over there. Some of the other interesting and publicly available searches: replace the ID with any digit above 4.

    Sunday, July 25, 2010

    Top 5 Threats for Banking Institutions

    If you work for a banking institution, or provide services for the banking industry, and are wondering what some of the biggest threats to look out for are, here is the list. According to the FDIC, the US bank deposit insurance organization, the top 5 threats are:

    1. Malware and Botnets
    2. Phishing
    3. Data Breaches
    4. Counterfeit Checks
    5. Mortgage Fraud

    Tuesday, July 20, 2010

    Microsoft 0-day Malformed Shortcut (.lnk file) Vulnerability

    This may not be breaking news for many. Brian Krebs posted this on his blog last Thursday; Microsoft published the advisory last Friday and followed it up with an update on Tuesday, where they mentioned:

    Microsoft is currently working to develop a security update for Windows to address this vulnerability.

    This post is not about the vulnerability itself but about an interesting observation from the Microsoft announcement. As you can see below, they have omitted Windows XP SP2 from the list; this may not be a surprise, as support for XP SP2 ended on July 13.

    It will be interesting to see whether Microsoft comes up with a patch, given that the vulnerability announcement and the support end date were very close and that this is a critical vulnerability.

    As for mitigating this specific vulnerability in large organizations, I recommend software restriction policies (SRP); there is an interesting article by Didier on this topic here. More information on SRP is available here.

    Saturday, July 17, 2010

    PCI updates

    VISA issued two "best practice" documents

    • Tokenization best practices. I touched on this topic here while discussing the new version of PCI; in this document VISA gives broader requirements for tokenization.

    • The second document, PAN truncation best practices, is a clarification of the requirements for merchants to store the card number for things like chargebacks and refunds. The National Retail Federation discussed this in detail in their review here.

    Here is an excellent guide that provides simple and quick information security steps for small to mid-size merchants that accept credit and/or debit cards as a form of payment. It covers topics such as:

    • Laws and Mandates Governing Securing Customer Data
    • Securing Customer Data
    • What are five minimum security actions a small business should implement?
    • Information Security "Do's" and "Don'ts"

    You can download the document here.

    Friday, July 9, 2010

    DSCI Best Practices Meet 2010 - 28 July 2010 Bangalore

    India-based readers may be familiar with DSCI; if not, it is an arm of NASSCOM involved in developing best practices for data security and data privacy in India.

    This meet will focus on addressing security challenges, which are becoming more complex in the wake of evolving threat scenarios and ever more stringent compliance regulations. How should the security organization respond as organizational boundaries disappear, and how should it structure itself to respond effectively? The meet will also give attendees an opportunity to interact with leaders in security, understand the practices that are evolving to address specific challenges, and deliberate on the different approaches being adopted when implementing technologies or establishing processes. We expect the meet to be attended by over 250 participants from diverse industry verticals.

    More information and registration details are available here.

    Sunday, June 27, 2010

    Twitter Settles Charges that it Failed to Protect Consumers' Personal Information

    It is not just information security professionals like me complaining about privacy issues on social networking sites; others are taking a hard look as well, including the US Federal Trade Commission (FTC). I reported in an earlier post that US Senators sent a letter to Facebook; now the FTC is involved in a complaint against Twitter.

    The FTC issues an administrative complaint when it has "reason to believe" that the law has been or is being violated and a proceeding appears to the Commission to be in the public interest. The FTC uses the FTC Act to impose sanctions on firms that engage in unfair or deceptive practices, such as practices it believes would likely result in the disclosure of personal information.

    There have been many similar complaints in the past, but this is the first case against a social networking service.

    According to the FTC's press release, Twitter has agreed to settle FTC charges that it deceived consumers and put their privacy at risk by failing to safeguard their personal information. According to the complaint, some of the breaches of Twitter's systems were possible due to a failure to implement reasonable safeguards. The complaint originated from several high-profile breaches, including the compromise of Barack Obama's account before he became President.

    According to the FTC, Twitter failed to implement safeguards such as the following:

    * requiring employees to use hard-to-guess administrative passwords that are not used for other programs, websites or networks; 
    * prohibiting employees from storing administrative passwords in plain text within their personal email accounts; 
    * suspending or disabling administrative passwords after a reasonable number of unsuccessful login attempts; 
    * providing an administrative login webpage that is made known only to authorized persons and is separate from the login page for users; 
    * enforcing periodic changes of administrative passwords by, for example, setting them to expire every 90 days; 
    * restricting access to administrative controls to employees whose jobs required it; and 
    * imposing other reasonable restrictions on administrative access, such as by restricting access to specified IP addresses. 
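Two of the safeguards above, lockout after repeated failures and periodic password expiry, are easy to prototype. This is a minimal sketch of my own; the function names, the five-attempt threshold, and the lockout policy are illustrative assumptions (only the 90-day expiry figure comes from the FTC's example), not code from Twitter or the FTC:

```python
from datetime import datetime, timedelta

MAX_FAILED_ATTEMPTS = 5                 # assumed lockout threshold
PASSWORD_MAX_AGE = timedelta(days=90)   # FTC example: expire every 90 days

def is_account_locked(failed_attempts: int) -> bool:
    """Suspend administrative access after repeated failed logins."""
    return failed_attempts >= MAX_FAILED_ATTEMPTS

def password_expired(last_changed: datetime, now: datetime) -> bool:
    """Enforce periodic changes of administrative passwords."""
    return now - last_changed > PASSWORD_MAX_AGE

now = datetime(2010, 6, 27)
print(is_account_locked(3))                         # below threshold
print(password_expired(datetime(2010, 1, 1), now))  # older than 90 days
```

The point is not the specific numbers but that both checks are a few lines of code; the hard part for small businesses is the surrounding process, not the mechanism.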

    As part of the settlement, Twitter is required to implement a variety of data security safeguards including "a comprehensive information security program, which will be assessed by an independent auditor every other year for 10 years".

    The main document of the FTC complaint is here.

    Some of the safeguards mentioned, even though highly important, are very hard to implement for many small businesses. The FTC has strong words for organizations regarding what they claim they do to secure consumer information:

    "When a company promises consumers that their personal information is secure, it must live up to that promise,"

    I touched on the question of whether we really need more regulations in the privacy area here. For many organizations there is no incentive to spend money on security-related activities; this is where the value of regulation comes in. Data privacy regulations require organizations to invest in a minimum level of security controls, which reduces the probability of a data breach and the resulting harm.

    Even though many US and other countries' privacy laws mandate only "reasonable" or minimum security, for many businesses that is not enough. While discussing the new Massachusetts privacy law, I commented:

    "organizations should look at this and other regulatory requirements as "minimum standards" and look upon setting up a higher level for themselves. Remember that Compliance != Security"

    The key takeaway is that organizations must take a hard look at their privacy policies and implement the specified controls to safeguard customer information. Information security practitioners should convey this to their business and technology leaders and implement such protection mechanisms or face sanctions.

    Tuesday, June 22, 2010

    Wireless Penetration Testers... SANS need your input

    I received this from the SANS mailing list for GIAC certified folks.

    The GIAC Wireless Penetration Testing and Ethical Hacking (GAWN) JTA
    committee has recommended an updated set of certification objectives, and we
    are conducting a formal Job Task Analysis.  We are seeking Wireless Security
    subject matter experts to vote on proposed changes and rate the relevance of
    each certification objective.

    If you have wireless security background and experience, especially if the
    experience involves penetration testing your input will be valuable in
    shaping this certification.  Please note that if your background does not
    include experience with wireless security, we are unable to use your input
    for the survey at this time.

    Your name may be listed in the validation report if this certification is
    submitted for ANSI accreditation.  This survey will take an estimated 15
    minutes of your time and can be accessed at the link below.  The survey will
    be available through 12:01 AM on 7/1.

    Thank You.

    Chris Carboni
    GIAC Technical Director

    Saturday, June 12, 2010

    Data leaks, 0-days, and mass infections

    June so far has been a busy month for 0-days, data leaks, and mass infections. If this is not news for you, jump to the analysis section at the end.

    Windows 0-day

    A new vulnerability has been identified, and POC code has been published, for a Windows 0-day affecting the help functionality.

    Windows uses what is called the HCP protocol: when the helpctr.exe executable is invoked to open help files, it connects using an HCP URI. HCP is similar to the HTTP protocol and uses a similar prefix, hcp://.

    The vulnerability is due to insufficient validation of URLs passed via the HCP protocol, which allows arbitrary scripts to be passed to the operating system. To exploit it, an attacker has to get the help system to connect to a specially crafted URL, which could be sent in an email enticing the user to click on it. Once exploited, the adversary assumes the rights of the logged-in user; if the user is logged on with administrative privileges, the adversary can take over the entire system.
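The general defense against this class of bug is to whitelist the URI schemes a handler will accept before dispatching them. As a hedged sketch of that idea only (this is my own illustration of scheme whitelisting, not Microsoft's fix, and the allowed set is an assumption):

```python
from urllib.parse import urlparse

# Hypothetical whitelist: only ordinary web schemes are dispatched.
ALLOWED_SCHEMES = {"http", "https"}

def is_safe_link(url: str) -> bool:
    """Reject links that use unexpected protocol handlers such as hcp://."""
    return urlparse(url).scheme.lower() in ALLOWED_SCHEMES

print(is_safe_link("https://example.com/help"))       # accepted
print(is_safe_link("hcp://services/search?query=x"))  # refused
```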

    Microsoft issued an advisory and recommends removing or unregistering the HCP protocol through a registry setting.

    The full disclosure and the POC are here, and the Microsoft advisory is here.

    If you recall, this is not the first time vulnerabilities have come up in the "help" function. Here is one of the earlier announcements.

    Vulnerability in HTML Help ActiveX Control Could Allow Remote Code Execution

    Mass script injection attacks

    Several sites were victims of a mass script injection attack. The common point was that all were running Microsoft IIS; the general behavior observed on the affected sites was the insertion of a particular script tag.

    Another round of injection attacks was reported yesterday, affecting about 1,000 sites. This time the injected script points to a different domain.

    More information available here, here, and here

    Wordpress script injection attacks

    Thousands of WordPress blogs and other PHP-based sites were victims of injection attacks; they were injected with a malicious script aimed at infecting visitors' machines with rogue security products.

    More information, available here

    AT&T iPad owners email leak

    Gawker reported that they were given data on 114,000 iPad user accounts by intruders who hacked an AT&T server.

    As per the technical details released by Gawker, the attack involved spoofing the User-Agent header to make AT&T's servers respond to requests that harvested the data.
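The reason this worked is that an HTTP User-Agent header is entirely client-controlled, so it can never serve as authentication. A minimal sketch of the flawed pattern (the gating logic, header values, and returned data here are hypothetical, not AT&T's actual code):

```python
def serve_account_data(headers: dict) -> str:
    """Naive server-side check that trusts the client-supplied User-Agent."""
    if "iPad" in headers.get("User-Agent", ""):
        return "user@example.com"   # sensitive data released
    return "access denied"

# A desktop browser is refused...
print(serve_account_data({"User-Agent": "Mozilla/5.0 (Windows NT 6.1)"}))
# ...but any script can simply claim to be an iPad:
print(serve_account_data({"User-Agent": "Mozilla/5.0 (iPad; U; CPU OS 3_2)"}))
```

Any value the client sends can be forged, which is exactly what the harvesting script did at scale.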


    What's common to all these attacks?

    A lack of proper input validation.

    Improper input validation is the root cause of various attack techniques, such as buffer overflows, cross-site scripting, SQL injection, and parameter manipulation (query string, form field, cookie, header, etc.). Input validation refers to how the application filters, scrubs, or rejects input; proper validation should check properties such as type, length, format, and range.
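As a minimal sketch of whitelist-style validation covering format, length, and type checks (the field names and rules are illustrative assumptions of my own, not from any particular application):

```python
import re

# Whitelist rules: each field gets an explicit pattern covering
# allowed characters, format, and length.
RULES = {
    "username": re.compile(r"[A-Za-z0-9_]{3,20}"),  # charset + length
    "zip_code": re.compile(r"\d{5}"),               # fixed format
}

def validate(field: str, value: str) -> bool:
    """Reject any input that does not fully match the field's whitelist."""
    rule = RULES.get(field)
    return bool(rule and rule.fullmatch(value))

print(validate("username", "alice_99"))                   # well-formed
print(validate("username", "<script>alert(1)</script>"))  # rejected
print(validate("zip_code", "90210"))
```

The key design choice is whitelisting (accept only known-good input) rather than blacklisting known-bad strings, which attackers routinely bypass with encoding tricks.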

    Detection and prevention methods include:

    • Network IPS, which can detect the inserted scripts and alert
    • Host IPS and file integrity monitoring tools
    • Web application firewalls that can block the inline scripts
    • Log monitoring - proper log monitoring can identify script and file injection attacks
    • URLScan - this Microsoft tool is an ISAPI filter that intercepts every request the web server receives from the Internet and scans each request for anything unusual, such as scripts
    • URL Rewrite - another tool with functionality similar to URLScan; the major difference is that URL Rewrite lets you define regular expressions, so it is much more flexible and powerful
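To illustrate the kind of regular-expression filtering these tools perform, here is a hedged sketch (the patterns below are examples of my own, not rules shipped with URLScan or URL Rewrite):

```python
import re

# Reject requests whose URL contains common script-injection markers,
# including the URL-encoded form of "<script".
SUSPICIOUS = re.compile(r"(<script|%3Cscript|javascript:)", re.IGNORECASE)

def allow_request(url: str) -> bool:
    """Return False for URLs that look like script-injection attempts."""
    return SUSPICIOUS.search(url) is None

print(allow_request("/products?id=42"))                         # clean
print(allow_request("/search?q=<script src=//evil></script>"))  # blocked
```

A pattern filter like this is a useful tripwire, but it is a blacklist and should supplement, never replace, proper input validation in the application itself.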

    One interesting aspect you may have noticed is the India connection in the mass injection attacks: the domain has an India TLD. Let's try to get more information on it.

    lab:$ whois

    Domain ID:D4266272-AFIN
    Domain Name:2677.IN
    Created On:10-Jun-2010 10:33:51 UTC
    Last Updated On:10-Jun-2010 10:33:52 UTC
    Expiration Date:10-Jun-2011 10:33:51 UTC
    Sponsoring Registrar:Transecute Solutions Pvt. Ltd. (R120-AFIN)
    Registrant ID:TS_11029084
    Registrant Name:liu xiaowei
    Registrant Organization:liu xiaowei
    Registrant Street1:huang he lu 28 Hao
    Registrant Street2:
    Registrant Street3:
    Registrant City:zhou zhou
    Registrant State/Province:henan
    Registrant Postal Code:450001
    Registrant Country:CN

    This domain has an India TLD but was registered in China. Let's look at where it is hosted.

    lab:$ host has address


    OrgName:    RIPE Network Coordination Centre
    OrgID:      RIPE
    Address:    P.O. Box 10096
    City:       Amsterdam
    PostalCode: 1001EB
    Country:    NL

    So, as you can see, the domain was registered in China, carries an India TLD, and is hosted in the Netherlands. This shows the international reach of cyber criminals, making it difficult for organizations and law enforcement to act against them.

    Sunday, June 6, 2010

    Another Adobe 0-day

    Adobe announced a new vulnerability affecting its Flash and Reader products. As per the report, it is being actively exploited in the wild.

    Over the past year or so we have seen more PDF reader-based attacks, and there have been numerous exploits during this time. A recent report published by F-Secure confirms this (source: F-Secure).

    Last year, some of the major Reader vulnerabilities included the JavaScript bugs, the JBIG2 compression algorithm vulnerabilities, and memory corruption vulnerabilities.

    Back in March this year, Didier Stevens published another interesting attack: a POC relating to the /Launch functionality in PDF files. More information is available here.

    So, with Adobe Reader having all these vulnerabilities, what are our options?

    Online services like Google Docs can display PDF documents right in the web browser. The advantage of this approach is that the PDF is not executed on the user's computer, so any exploits have no effect.

    Firefox has a plugin, GPDF, to open PDF documents in Google Docs; it can be found in the Mozilla add-ons repository.

    Saturday, June 5, 2010

    More on Cloud Computing

    In my three part essay (here, here, and here) on Cloud Computing, I covered mainly the security concerns and how organizations can prepare themselves before getting to cloud based computing solutions.

    If you are looking for real world case studies in the cloud computing space, then read on.

    In 2009, the US federal government started a cloud computing initiative for public sector agencies. As part of this initiative, a report was recently published that gives an overview of the effort.

    In this "state of public sector cloud computing", the Federal Chief Information Officer gives details on their approach to leverage cloud computing technology.

    The report also details the common characteristics and the various deployment models of cloud computing, and concludes with case studies of cloud computing implementations at various agencies. The major areas include software development, software testing, CRM, email systems, web-based applications, etc.

    The case studies are detailed and provide the rationale for, and some of the benefits achieved by, adopting cloud computing solutions.

    Friday, May 21, 2010

    Facebook and privacy issues

    Privacy on social networking sites is a hot topic these days, but in my opinion only among privacy professionals and a section of the general public. Even though there has been a spurt of people searching Google for "how to delete Facebook account", most social networking users love the way it is set up: its ability to connect people, the ability to share, and the sheer amount of access it provides. Needless to say, such users are putting themselves at risk, but privacy by definition does not exist if the user does not seek it. A recent study by Consumer Reports found that about 40% of users posted their date of birth on social networking sites; it also found that the user base almost doubled from 2009. This is one of the major reasons these sites are also popular with criminals, who indulge in a variety of nefarious activities including identity theft, marketing illegal products, spreading malware, and stealing credentials.

    While all this is going on, what are the providers of these social networking sites doing? They are coming up with new ways to set up privacy controls, but sites like Facebook change them far too often and create far too many options (Facebook has over 50 settings), confusing users and discouraging them from using the controls at all. While it is important for people to understand the privacy issues so they can make informed choices, it is also the responsibility of the providers to help users make those choices.

    Concern over privacy has increased, primarily due to the rise in privacy-related incidents and the media coverage they receive, the latest being a WSJ article. The New York Times also got involved, having Elliot Schrage, vice president for public policy at Facebook, answer some user concerns about Facebook's privacy settings; complete coverage is available here. Time magazine also covered this topic; check here.

    As far as Facebook is concerned, there have been many changes to the privacy settings over the years. For example, in the beginning, a user's personal information was visible only to their friends and their network, which is not the case now (with the default settings). Rather than walking through everything that changed over the years, I recommend readers head over to Matt's site, where he has a visual depiction of the changes; great stuff.

    The recent change that further complicated the privacy settings was Facebook's decision to partner with Microsoft Docs and Yelp and share any publicly available information with those partners. If you don't want this, you have to manually opt out of the feature for each individual partner. The data shared with these partners includes name, picture, friends list, city, gender, and fan pages. We are not yet sure what these companies will do with this data, but they are definitely getting more data than a typical advertising company gets when users click on an ad.

    If you want to restore your Facebook privacy settings, there are some easy methods available.

    • Untangle, a personal firewall vendor, announced a new bookmark utility to let Facebook users restore their privacy settings. Called SaveFace, it sets the privacy settings back to "friends only"; it is available from their site.
    • Brian Krebs announced in his blog yesterday the availability of a new open source tool that can help Facebook users quickly determine what type of information they are sharing with the rest of the world.

    More than privacy settings, I strongly believe user education is equally important, especially educating kids on privacy issues. Users should be aware of the newer threats affecting social networking sites and act responsibly so that they do not endanger their own privacy or the privacy of the organization they represent.


    We now have a recommended settings option and users need to click one button (“Everyone,” “Friends of Friends” or “Friends Only”) to restrict or open all their information to those groups. 
    EFF has detailed instructions on the new settings.