
Friday, May 29, 2009

Cross Site Request Forgery. What it is and how to work around it.

I was asked by my client if there's a way to solve the Cross Site Request Forgery (CSRF) issue that was highlighted by their Internal Security Team. Previously, I helped this client build a common filter (a J2EE Filter) that strips Cross Site Scripting characters from requests and responses. This helped them tremendously in passing their security assessment, as all they have to do is declare the filter on their applications and the filter does the rest.

Since I had provided that solution before, he asked me if there's a way to create a common standard so that individual applications need not worry about Cross Site Request Forgery. Basically, their Internal Security Team asked them to put hidden keys in their forms that will be validated once the forms are submitted (for both POST and GET); if a hidden key is missing or wrong, the form submission will fail.

Anyway, this was the first time I heard of Cross Site Request Forgery, so I did some research. 

Apparently, Cross Site Request Forgery is a form of attack that hijacks your application session in order to submit requests on your behalf that you never intended, and the application will happily process them. How does it hijack your session? And why is this only surfacing now?

Basically it works this way. Imagine you are using your online banking site and you need to transfer money to your girlfriend/wife/mother or whoever. You forgot the amount to transfer, and the information is actually in your email. So you decide to check your online email account (by going to New -> Window in I.E., which shares the same session with the existing browser window). While checking your email, you see a message from someone inviting you to check out a great holiday getaway. You open the email, find the place impressive, and click on the image in it, which takes you to a genuine-looking promotion site. After you read it, you close the email and go back to transfer the money. Seems like nothing happened, right? Wrong: you later notice that thousands of dollars are missing from your account.

How did this happen? Remember two things in this scenario:

  1. You are still logged in to your online banking. Technically, your session is still alive.

  2. You saw a genuine-looking email inviting you to check out the new holiday getaway. However, the image you clicked did more than link to a vacation site: it also caused your browser to issue a GET request to your bank, with parameters such as http://bank.example/withdraw?account=bob&amount=1000000&for=mallory, which actually withdraws money from your account and transfers it to another one. And since your session is alive, this gets executed on your behalf. It looks harmless to you, as you were just redirected to a vacation site, without knowing that you have just transferred thousands of dollars to another bank account.

Why only now? It became prevalent because of the tabbed behaviour of browsers. What people normally did before was launch a whole new IE browser (hardly anybody used New -> Window), which actually separates your banking session from your email session. With the prevalence of tabs, however, people just open a new tab, which shares the session with your existing banking application.

There are several ways to prevent this. Two of the most popular ones are:

  1. Checking the HTTP Referer header.
  2. Having a hidden validation key for every form submission.

The problem with checking the HTTP Referer header is that it can be suppressed by browsers, plugins, and proxies; it is also omitted in some HTTPS scenarios.

The Hidden Validation Key is one solution to this. How does it work? Basically, for every form request, the application issues a unique key that is validated upon submission of the form. Since the attacker does not know the correct key, if the form is submitted with an invalid or missing key, the application can assume it is under attack and fail the submission.
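To make this concrete, here is a minimal sketch of issuing and validating such a key in Java. The class name, session attribute, and "csrfToken" parameter are my own placeholders, not from any particular framework:

    import java.math.BigInteger;
    import java.security.SecureRandom;

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpSession;

    public class CsrfTokenHelper {

        private static final String TOKEN_ATTR = "CSRF_TOKEN";
        private static final SecureRandom RANDOM = new SecureRandom();

        // Generate a random key and remember it in the user's session.
        public static String issueToken(HttpSession session) {
            String token = new BigInteger(130, RANDOM).toString(32);
            session.setAttribute(TOKEN_ATTR, token);
            return token;
        }

        // Compare the submitted hidden field against the key stored in the session.
        public static boolean isValid(HttpServletRequest request) {
            HttpSession session = request.getSession(false);
            if (session == null) {
                return false;
            }
            String expected = (String) session.getAttribute(TOKEN_ATTR);
            String submitted = request.getParameter("csrfToken");
            return expected != null && expected.equals(submitted);
        }
    }

On render, the application calls issueToken() and emits the value in a hidden input named csrfToken; on submit, it calls isValid() before processing the form.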

The problem with implementing this solution is that you need to modify your applications to conform to it. If you are using an MVC framework, this might not be an issue, as you can have your controller issue the key and check it before the view generates the page. However, I don't think most of the applications out there actually use an MVC framework (take classic .NET WebForms, for example). If you are my client and you have a lot of applications, then you need to rewrite those applications to include Hidden Validation Keys.

However, there's another solution to this issue (for both .NET and Java/J2EE): have a J2EE Filter or IHttpModule do the job for you. How does this work?

If you are familiar with .NET and/or Java, you'll know that before a request or response reaches the application, any filters implemented via a J2EE Filter or IHttpModule get executed first. At this level, filters can inspect the contents of the request or response, and they can even modify them. So suppose I have a filter that will:

  1. Generate a random key and store it in the session.
  2. Add this random key as a hidden field to the response, as part of the form.
  3. When the form is submitted, validate this key.
  4. If the key is valid, pass the request to the application.
  5. If the key is invalid, fail the submission.

This will solve the issue. And since it's a J2EE Filter or IHttpModule, it can be re-used and shared across all applications. This means... Ta-daaaa... I DON'T even need to change a single line in my existing applications to defeat CSRF.
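Here is a rough sketch of how that filter could look in Java, reusing the hypothetical CsrfTokenHelper from above. The form rewriting is deliberately naive: it buffers the whole response and does a plain string replace on </form>, with no content-type or getOutputStream() handling, so treat it as an illustration of the five steps rather than production code:

    import java.io.CharArrayWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpServletResponseWrapper;

    // Hypothetical session-based CSRF filter: validates the hidden key on
    // form submissions (steps 3-5) and injects a fresh key into every
    // <form> in the outgoing HTML (steps 1-2).
    public class CsrfFilter implements Filter {

        public void init(FilterConfig config) { }

        public void destroy() { }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;

            // On form submissions, validate the key before the app sees the request.
            if ("POST".equalsIgnoreCase(request.getMethod())
                    && !CsrfTokenHelper.isValid(request)) {
                response.sendError(HttpServletResponse.SC_FORBIDDEN, "Invalid CSRF key");
                return;
            }

            // Buffer the response so we can rewrite the HTML on the way out.
            final CharArrayWriter buffer = new CharArrayWriter();
            HttpServletResponseWrapper wrapper = new HttpServletResponseWrapper(response) {
                public PrintWriter getWriter() {
                    return new PrintWriter(buffer);
                }
            };
            chain.doFilter(request, wrapper);

            // Generate the key, store it in the session, and append it to each form.
            String token = CsrfTokenHelper.issueToken(request.getSession());
            String hidden = "<input type=\"hidden\" name=\"csrfToken\" value=\""
                    + token + "\"/></form>";
            response.getWriter().write(buffer.toString().replace("</form>", hidden));
        }
    }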

However, storing a session variable may not always be a good idea, especially if you are working in an environment with load balancing. Session variables sometimes get lost, so there's a tendency for forms to fail not because of CSRF but because of infrastructure failure. How do you solve this? Well, why don't we generate a checksum key instead?

How does this work? It works like this:

  1. Read the contents of the form data. Based on the field names, generate a checksum with an algorithm known only to you.
  2. Add this key to the form data as a hidden field.
  3. When the form is submitted, read the field names and recalculate the checksum.
  4. If the key is valid, pass the request to the application.
  5. If the key is invalid, fail the submission.

Voila! I don't need a session variable after all. So I told my client that this can be done.
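Here is a minimal sketch of the checksum variant, assuming an HMAC over the sorted form field names with a server-side secret (the secret, class name, and "csrfChecksum" parameter are my own placeholders). Note that because the key depends only on the field names, it is identical for every user and every submission of the same form; whether that is strong enough depends on your threat model:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Enumeration;
    import java.util.List;

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import javax.servlet.http.HttpServletRequest;

    public class CsrfChecksumHelper {

        // Server-side secret; keep it off the page so attackers can't recompute the key.
        private static final byte[] SECRET = "change-this-secret".getBytes();

        // Compute an HMAC over the sorted form field names.
        public static String checksum(List<String> fieldNames) throws Exception {
            List<String> sorted = new ArrayList<String>(fieldNames);
            Collections.sort(sorted);
            StringBuilder data = new StringBuilder();
            for (String name : sorted) {
                data.append(name).append('|');
            }
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(SECRET, "HmacSHA1"));
            StringBuilder hex = new StringBuilder();
            for (byte b : mac.doFinal(data.toString().getBytes())) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }

        // Recompute the checksum from the submitted field names and compare.
        public static boolean isValid(HttpServletRequest request) throws Exception {
            List<String> names = new ArrayList<String>();
            Enumeration<?> e = request.getParameterNames();
            while (e.hasMoreElements()) {
                String name = (String) e.nextElement();
                if (!"csrfChecksum".equals(name)) {
                    names.add(name);
                }
            }
            String submitted = request.getParameter("csrfChecksum");
            return checksum(names).equals(submitted);
        }
    }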

Well, I emailed him that we can prevent CSRF without even changing the applications. At most, only the configuration (web.xml or web.config) needs to be changed to include the filter in each application.
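On the Java side, that configuration change is just a standard filter declaration in web.xml; the class name below refers to the hypothetical filter sketched earlier:

    <filter>
        <filter-name>CsrfFilter</filter-name>
        <filter-class>com.example.web.CsrfFilter</filter-class>
    </filter>
    <filter-mapping>
        <filter-name>CsrfFilter</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>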

So the filter will do this:

  1. User requests a page.
  2. Browser goes to the server and requests the page.
  3. Application sends the page to the browser.
  4. However, the Response Filter intercepts the response.
  5. The Response Filter generates an authentication key. If you use the checksum-based solution, this key is a calculated checksum.
  6. The Response Filter saves the key to the J2EE session (session-based solution only; with the checksum-based solution there is nothing to store).
  7. The Response Filter appends the key to the HTML form.
  8. The Response Filter sends the response to the browser.
  9. The browser renders the page (with the key embedded).
  10. The user interacts with the page and submits it.
  11. The information is sent to the application.
  12. However, the Request Filter intercepts the request.
  13. The Request Filter checks for the authentication key.
  14. The Request Filter authenticates the key, either by comparing it with the value in the J2EE session (session-based solution) or by validating the checksum.
  15. If the key is invalid, the Request Filter generates a response and shows an error page. The request ends.
  16. If the key is valid, the Request Filter passes the information to the application.
  17. Optionally, the hidden key field is removed before the request reaches the application. This is useful where an application checks for extra fields and invalidates the request if any extra field is found.
  18. The application processes the information.

Next would be the coding part. When I have time, I will probably implement this in both .NET and Java and share the code with you. It will be a simple filter that uses the session-based solution.

The Migration Story : Migrating from Websphere Portal 5.1 to 6.1 Part 2

For Part 1, click here

The first task is to migrate the production 5.1.0.1 Portal onto another server (also 5.1.0.1). This is an optional task; however, in my case it's required, as my client does not want to touch the production server.

Prior to my task, Operations helped us export the production 5.1.0.1 by doing the following:

  1. Run the following command: ./xmlaccess.sh -in ExportRelease.xml -user [username] -password [password] -url http://[server]:9081/wps/config -out /tmp/release20090429/config.xml

    You can get the ExportRelease.xml HERE

  2. Back up the following files: PortalServer/installableApps, PortalServer/installedApps, AppServer/installedApps/[server]/wps.ear, PortalServer/shared, PortalServer/deployed.

  3. Once done, Operations passed me the config.xml file, and I was good to go to import the settings and files into the new 5.1.0.1 server.

The steps below include some tips and tricks, as you will find out.

For the import to work, the following is required:

  1. Portal 5.1 must be set up as an empty portal. Portal Server 5.1 does not have an "action-empty-portal" task in WPSconfig.sh (bat), so you have to install it as empty. However, there's a workaround for this, as shown later.

  2. You need to install all the fixes from the following link (otherwise you will hit issues such as the import running very slowly, etc.). Download the files HERE. You need an IBM login to download the files. Don't install them just yet.

  3. In my case, since Portal 5.1 was not an empty portal when it was handed over to me (I forgot to tell the team that I needed an empty portal), I had to take the Portal 6.0 scripts and modify them. The scripts are compatible. Let me explain.

    action-empty-portal actually runs three XML scripts via xmlaccess, which are:

    1. CleanPortal.xml
    2. AddBasePortalResources.xml
    3. SchedulerCleanupTask.xml

    When you run WPSconfig.sh, it actually looks at a file called wps_cfg.xml. This file maps the commands to the XML actions that need to be run; for example, when you call action-empty-portal, the above scripts are run.

    In my case, I copied these scripts from Portal 6.0 to my Portal 5.1 and modified CleanPortal.xml (as shown HERE) so it works with Portal 5.1. I didn't need AddBasePortalResources.xml, as I didn't need to add the language resources. However, I did need SchedulerCleanupTask.xml (as shown HERE).

    Now, I had to run these jobs individually, in this sequence:

  4. First, I ran this command: ./xmlaccess.sh -in /config/work/CleanPortal.xml -user [username] -pwd [password] -out /tmp/xmlcleanportal.xml -url http://[server]:9081/wps/config

  5. Then I ran this command: ./xmlaccess.sh -in /config/work/SchedulerCleanupTask.xml -user [username] -pwd [password] -out /tmp/xmlcleanportal.xml -url http://[server]:9081/wps/config

  6. Before importing, I wanted to make sure that my credential-segment was added, so run this command: WPSconfig.sh action-create-deployment-credentials

  7. After you run these commands, check that your portal is empty by restarting it and browsing to it. You should see an error, something like "VP Failed". This is normal for Portal 5.1.

  8. My Portal 5.1 and the production Portal 5.1 are the same version, so I copied everything from the backed-up installableApps and deployed folders into the new Portal Server's installableApps. After that, I copied the latest WAR files of our custom applications. If the Portal versions were different, I would have copied only the new WAR files of our custom applications.

  9. Once done, install the fixes as specified in number 2.

  10. Copy your custom shared JAR files into the new PortalServer/shared folder.

  11. Copy your custom themes and skins to the new PortalServer.

  12. Install any custom configurations and files that you may have. On my side, I configured my JDBC resources for the databases used by the applications. You can do whatever custom configuration and installation is needed here (on the WAS side only).

  13. Once done, I edited my config.xml (the XML file backed up from the production server) to point to the correct WAR files. For more information, check out THIS LINK.

  14. I imported my configuration by running the following: ./xmlaccess.sh -in /xmlaccessfiles/config.xml -out /tmp/import.xml -user [username] -pwd [password] -url http://[server]:9081/wps/config. If you find that the import is running too slowly (like one line per minute), then you didn't install the fixes. Go back and check #2.

After doing so, I tested the portal server. You have to test the portal to make sure it works. For Portal 5.1, keep an eye on your SystemOut.log and the wps*.log files under PortalServer/log while testing, to catch any portlet issues. Mine were with the theme and skins, due to a missing shared library I forgot to put in.

Now that I have tested it, my next task is to dump this portal using the WPMigrate pre-upgrade task and import it into the new WebSphere Portal Server 6.1.

The Migration Story : Migrating from Websphere Portal 5.1 to 6.1 Part 1

My current project is to migrate my client's WebSphere Portal 5.1.0.1 to WebSphere Portal 6.1.0.1. I would like to list down the steps we took so that readers can understand the circumstances, failures, and successes of this experience.

For a start, I would like to mention the following information:

1. The team is segregated into the following: Engineering, Application Consultants (incl. Solution Architect and Project Manager), Quality Assurance, and Operations.

2. The Portal Server hosts more than 40 applications (one application may contain 10 or more pages, with each page containing 5 or more portlets), mostly written to JSR 168, except for the JSP Server portlets.

3. The production Portal is live, so it has to be duplicated on another machine.

4. The LDAP server is used globally and contains more than 1,000 groups and 100,000 users. The Portal is enabled to use Dynamic Groups.

5. The DB2 version used is 8.2.

6. The new architecture will be integrated with Omnifind Enterprise Search Server.

7. A new crawler will be created to crawl the proprietary document management system.

8. A proper code change management process will be implemented on WebSphere Portal.

Since the project is composed of different teams, the following are the roles and responsibilities of each team:

1. Engineering Team: set up the infrastructure, including database transfer and LDAP integration.

2. Application Team: migrate the applications from 5.1.0.1 to 6.1, create the crawler, install Omnifind Enterprise, and implement code change management.

3. Quality Assurance Team: provide load, performance, and security testing.

4. Operations Team: certify the installation and migration, and support the installed servers and migrated applications from an operational point of view.

Migrating is not the difficult part; the difficulty lies in making sure that the new environment makes my client's life easier, especially in managing WebSphere Portal, promoting code, and deploying applications. The issue my client faces right now is that each Portal environment is configured on its own, with no integration between them. I believe the majority of Portal infrastructures are configured this way. What does this mean? Let me give you an example.

Say you have 3 distinct portal environments, Development, Staging, and Production, each installed on its own. Imagine creating a business application for your users. This application requires 10 pages, each page having an average of 5 portlets, and each page has a different security configuration. As the developer, you know how to configure this in the development environment. If your company is not that big and you don't have processes in place, my guess is that your company will also ask you to deploy the portlets, pages, and configuration to staging and production. No problem: since you configured and developed all of this, you know how to do so. However, let me remind you that doing this means repeating every step in every environment; if you assign security to a portlet in development, you need to do the same in staging and production. Still no problem, since you're the one doing it.

However, let's bring the same scenario into a big MNC. MNCs normally have proper processes in place, meaning each environment is managed by a distinct team. My client has separate teams managing development, staging, and production. Development is open to developers, so that is not a problem. The problem arises when the application is promoted to staging: a different team needs to install it and re-do the configuration you did in development, and the same thing happens again in production. Since this is a manual process performed by different teams, there's a tendency for human error, like misconfiguring the ACL on a portlet. This is a big issue for UAT and, in my client's experience, it frequently delays UAT. Aside from this, it frustrates the teams managing staging and production, as they are the ones mostly blamed for the issues.

As this series progresses, I will show you how to solve this issue by implementing Release Builder to minimize the manual work. You may wonder: why Release Builder and not Site Management?

The reason is that Site Management requires the different environments to open up to each other, and per my client's policy, that is a security breach. You don't want your staging and production servers communicating with each other, as that opens a hole for potential hacking (if, say, your production server were compromised, your staging server could be compromised too, and the hacker could ultimately enter your network).

I'm part of the Application Migration team, so most of my posts will detail the tasks I'll be doing to migrate the applications.