
Sunday, June 14, 2009

The Migration Story : Migrating from Websphere Portal 5.1 to 6.1 Part 3

Click here for part 2

After testing the portal migrated to the new 5.1 server, we're now ready to migrate to 6.1. Since the servers are located in different environments, I have to export (dump) the 5.1 data and then import it into the 6.1 server.

First of all, do the following BEFORE migrating:

  1. Install Portal 6.1.0.1
  2. Enable security against the same backend LDAP that your previous server authenticates to. If it's a different LDAP, make sure that the LDAP branches (users and groups) are the same as in your previous LDAP.
  3. Install your target database server.
  4. Do the database transfer.
  5. Install the following fixes on your 5.1 environment : PK27753 PK28148 PK29999 PK30718 PK31425 PK32194 PK32211 PK32556 PK32626 PK34624 PK39530 PK42729 PK44723 PK47799 PK48653 PK53001 PK62044 PK63553 PK64160 PK40171
  6. If there's an RSS Portlet from 5.1, update it to the latest version.
  7. If Portal 5.1 has deleted pages, make sure you run the cleanup service for deleted pages. Check the Information Center.
  8. If you have deleted users or groups, deregister those users and groups. Follow the procedures in the WebSphere Portal 5.1 Information Center.
  9. Stop and start your Portal 5.1 to test.
  10. If you have installed an HTTP Server, change WpsHostPort back to port 9081 instead of 80.
  11. Edit the file soap.client.props and change com.ibm.SOAP.requestTimeout to a value higher than 180 (the suggestion is to increase it to 1800).
  12. Run WPmigrate with the portal-pre-upgrade task on WebSphere Portal 5.1 from the CD_root/migration/portal_migration directory on the WebSphere Application Server Network Deployment Version 6.1 Supplements CD. This CD comes with WebSphere Portal and can be used on either Windows or UNIX. Run it by executing the command below:

    WPmigrate.bat portal-pre-upgrade -DbackupDirectory=dirname -DcurrentPortalDirectory=dirname -DcurrentPortalAdminId=adminid -DcurrentPortalAdminPwd=adminpassword -DDbPassword=dbpassword -DGroupExport="true"

    Where :

    backupDirectory - The directory where data from the earlier server will be stored for subsequent use with the portal-post-upgrade task. If this directory does not exist, the portal-pre-upgrade task creates it. For example, if you specify mybackup, the migration task stores the data under the mybackup directory.

    currentPortalDirectory - The directory where the earlier portal server is installed.

    currentPortalAdminId - The administrator ID for the earlier portal server. This value is not required if it is already specified in the wpconfig.properties file on the earlier portal server.

    currentPortalAdminPwd - The administrator password for the earlier portal server.

    DbPassword - The DBMS user password for the earlier portal server. Specify this property value in the command line only if the property is not already specified as follows :

    In WebSphere Portal V5.1.0, 5.1.0.1, 5.1.0.2, 5.1.0.3, 5.1.0.4, or 5.1.0.5, the property value is specified in the wpconfig.properties file, next to the property name DbPassword.

    In WebSphere Portal V6.0.1.1 or later, the property value is specified in the wpconfig_dbdomain.properties file, next to the property name release.DbPassword.

    All other database properties must be specified in the properties file.

    GroupExport - This is optional. In my case, since the portal is integrated with LDAP, I didn't need to export the groups, and no errors occurred. This exports groups from the earlier version when the value is set to true; the property defaults to false.

  13. Once run, check the allout.xml file to see the results. If an Out of Memory condition occurs during the export process, see Technote 1299190 for additional instructions.

That's it for exporting. Next up is importing into the 6.1 environment.

Saturday, June 6, 2009

Google Wave. A true unified communications model ?

This is what I call a true Unified Communications model. For me, a unified communications model lets someone using email respond to a message triggered from an I.M. client, while another party to the conversation receives the same message through a social networking site. That is what deserves to be called a Unified Communications model.

Apparently, most of the U.C. solutions in the market today are half-baked and geared towards telecommunications and I.M. only. I have yet to see a solution that integrates I.M. communication with email communication beyond S.S.O., which really just gives us presence awareness and nothing else. Some vendors are even guilty of calling their product U.C. when, in fact, it only offers chat and telephony, with not a single integration between the two. Others provide everything a communication platform can offer, yet again without a single integration between the pieces.

Anyway, here's the video of what Google Wave can offer; tell me if this is not truly a unified communications model. I, for one, particularly want to see how it will integrate with audio and video.


Configuring a new folder as a common shared library to deploy your shared libraries instead of using PortalServer/Shared or PortalServer/config

Article from: WebSphere Portal Wiki, by yours truly.


If you are a developer at a company with a strong development process in place, you are normally asked to deploy shared JAR files in the PortalServer/shared or PortalServer/config directory. You may have wondered whether you can actually deploy your JAR files and property files in a separate folder instead of the directories above. The answer is YES, though I believe this is not documented anywhere.

During the portal migration project, I found that every custom portlet had three shared libraries configured, and the funny thing is that the configured shared libraries were the same for all of the portlets. This was a headache during migration: there are more than 100 portlets deployed, and I wasn't about to configure them one by one. So what I did was configure three new shared libraries and point my Portal Server at them, rather than deploying the libraries under PortalServer/shared or PortalServer/config as the IBM manuals normally say.

Instead, the JAR files are stored in a different directory, but whenever Portal loads, it knows that these JARs are to be loaded as a shared library available to all portlets. How? Follow the instructions below. Note that these instructions are for WAS 6.1; the same can be done on 5.1 and 6.0, though the steps differ slightly.

  1. Log on to the WebSphere Application Server console.
  2. Go to Environment -> Shared Libraries.
  3. Create your own shared library, adding a classpath entry pointing to your JAR files.
  4. Apply and save this configuration.
  5. Now, this is the fun part. Go to Servers -> Application Servers -> WebSphere_Portal -> Java and Process Management -> Class loader.
  6. The class loader list may vary, but it should contain only one entry. Click on that class loader.
  7. Click on Shared Library References.
  8. You will see all the shared libraries used by WebSphere Portal. Click Add.
  9. Add your shared library.
  10. Save and restart Websphere Portal.
  11. Enjoy!!!

Friday, May 29, 2009

Cross Site Request Forgery. What it is and how to work around it.

I was asked by my client whether there's a way to solve the Cross-Site Request Forgery (CSRF) issue that was highlighted by their internal security team. Previously, I helped the client build a common filter application (a J2EE Filter) that filters cross-site scripting characters out of requests and responses. This helped them tremendously in passing the security assessment, since all they have to do is declare the filter in their applications and the filter does the rest.
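
For reference, a filter of that kind typically wraps the incoming request and escapes the dangerous markup characters before the application ever reads a parameter. The sketch below is only an illustration of the idea, with made-up class names (XssEscapeFilter, XssRequestWrapper); it is not the actual filter that was delivered to the client.

    // Hypothetical illustration only -- not the client's actual filter.
    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.*;

    public class XssEscapeFilter implements Filter {

        public void init(FilterConfig config) { }
        public void destroy() { }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            // Wrap the request so every parameter the application reads is already escaped.
            chain.doFilter(new XssRequestWrapper((HttpServletRequest) req), res);
        }

        private static class XssRequestWrapper extends HttpServletRequestWrapper {

            XssRequestWrapper(HttpServletRequest request) { super(request); }

            public String getParameter(String name) {
                return escape(super.getParameter(name));
            }

            public String[] getParameterValues(String name) {
                String[] values = super.getParameterValues(name);
                if (values == null) return null;
                String[] escaped = new String[values.length];
                for (int i = 0; i < values.length; i++) {
                    escaped[i] = escape(values[i]);
                }
                return escaped;
            }

            // Neutralize the characters most commonly used for script injection.
            private static String escape(String value) {
                if (value == null) return null;
                return value.replace("&", "&amp;")
                            .replace("<", "&lt;")
                            .replace(">", "&gt;")
                            .replace("\"", "&quot;")
                            .replace("'", "&#39;");
            }
        }
    }

Applications pick it up simply by declaring the filter and a filter-mapping in their web.xml, which is exactly why the approach scales to many applications without code changes.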

Since I had provided them with that solution before, he asked me whether there's a way to create a common standard so that each application need not worry about Cross-Site Request Forgery. Basically, their internal security team asked them to add hidden keys that are validated once the forms are submitted (for both POST and GET); if the hidden keys are missing or wrong, the form submission fails.

Anyway, this was the first time I had heard of Cross-Site Request Forgery, so I did some research.

Apparently, Cross-Site Request Forgery is a way of hijacking your application session in order to make requests on the user's behalf that the application will happily process. How does it hijack your application? And why only now?

Basically, it works this way. Imagine you are using your online banking site and need to transfer money to your girlfriend/wife/mother or whoever. You forgot the amount to transfer, and the information is actually in your email. So you decide to check your online email account (by going to New -> Window in I.E., which shares the same session as the existing browser). While checking your email, you see a message from someone asking you to check out a good holiday getaway. You open the email, find the place impressive, and click the image in that email, which redirects you to a genuine-looking promotion site. After reading, you close the email and go back to transfer the money. Seems like nothing happened, right? Wrong: you notice thousands of dollars missing from your account.

How did this happen? Remember two things in this scenario:

  1. You are still logged in to your online banking. Technically, your session is still alive.

  2. You saw a genuine-looking email inviting you to check out the new holiday getaway. However, when you clicked the image, the image link actually executed JavaScript that contacted your bank with a GET request such as http://bank.example/withdraw?account=bob&amount=1000000&for=mallory, which withdraws from your account and transfers the money to another account. Since your session is alive, this gets executed on your behalf. It looks harmless to you, as you were just redirected to a vacation site, not knowing that you have just transferred thousands of dollars to another bank account.

Why only now? It became prevalent because of the tab behaviour of browsers. What people normally did before was launch a new I.E. browser (few go to New -> Window), which actually separates your banking session from your email session. With the prevalence of tabs, however, people just open a new tab, which shares the session with your existing banking application.

There are a number of ways to prevent this. Two of the most popular are:

  1. Checking the HTTP Referrer.
  2. Having a hidden validation key for every form submission
The problem with checking the HTTP Referrer is that it can be suppressed, and some HTTPS configurations also omit the Referrer.

A hidden validation key is one solution to this. How does it work? Basically, for every form request, the application issues a unique key that is validated upon submission of the request. Since the attacker does not know the correct key, if the form is submitted with an invalid or missing key, the application can assume that this is an attack and fail the submission.

The problem with implementing this solution is that you need to modify your application to conform to it. If you are using an MVC framework, this might not be an issue, as you can have your controller issue the key and check it before the view generates the page. However, I don't think most of the applications out there actually use an MVC framework (take .NET, for example). If you are my client and you have a lot of applications, you would need to rewrite those applications to include hidden validation keys.

However, there's another solution to this issue (for both .NET and Java/J2EE): have a J2EE Filter or IHttpModule do the job for you. How does this work?

If you are familiar with .NET and/or Java, you know that before the request or response reaches the application, any filters implemented as a J2EE Filter or IHttpModule are executed first. At this level, filters can inspect the contents of the request or response, and they can even modify both. So suppose I have a filter that will:

  1. Generate a random key and store the key in the session.
  2. Add this random key as a hidden field in the response, as part of the form.
  3. When the form is submitted, validate this key.
  4. If the key is valid, pass the request to the application.
  5. If the key is invalid, fail the submission.
This will solve the issue. And since it's a J2EE Filter or IHttpModule, it can be reused and shared across all applications. This means... ta-daaa... I DON'T even need to change a single line in my existing applications to defeat CSRF.
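
To make this concrete, here is a bare-bones sketch of the session-based variant, written as a plain J2EE Filter. The class and field names (CsrfTokenFilter, csrfToken) are hypothetical, and the piece that injects the hidden field into the generated HTML (which needs an HttpServletResponseWrapper) is left out for brevity; treat it as an illustration of the idea, not a finished filter.

    // A minimal, hypothetical sketch of the session-based CSRF filter idea.
    import java.io.IOException;
    import java.security.SecureRandom;
    import javax.servlet.*;
    import javax.servlet.http.*;

    public class CsrfTokenFilter implements Filter {

        private static final String TOKEN_NAME = "csrfToken"; // session key and hidden field name
        private final SecureRandom random = new SecureRandom();

        public void init(FilterConfig config) { }
        public void destroy() { }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;
            HttpSession session = request.getSession(true);

            // On form submissions (POST here; GET can be handled the same way), validate the
            // hidden key before the application ever sees the request.
            if ("POST".equalsIgnoreCase(request.getMethod())) {
                String expected = (String) session.getAttribute(TOKEN_NAME);
                String actual = request.getParameter(TOKEN_NAME);
                if (expected == null || !expected.equals(actual)) {
                    response.sendError(HttpServletResponse.SC_FORBIDDEN,
                            "Missing or invalid CSRF key");
                    return; // fail the submission; the application never processes it
                }
            }

            // Issue a random key and keep it in the session. A response wrapper (not shown)
            // would append it as a hidden field to every HTML form it writes out.
            if (session.getAttribute(TOKEN_NAME) == null) {
                session.setAttribute(TOKEN_NAME, newToken());
            }

            chain.doFilter(req, res);
        }

        // 32 hex characters from a cryptographically strong random source.
        private String newToken() {
            byte[] bytes = new byte[16];
            random.nextBytes(bytes);
            StringBuilder token = new StringBuilder();
            for (int i = 0; i < bytes.length; i++) {
                token.append(Integer.toHexString((bytes[i] & 0xFF) | 0x100).substring(1));
            }
            return token.toString();
        }
    }

Wiring it in is just a filter and filter-mapping entry in each application's web.xml (or the equivalent IHttpModule registration in web.config on the .NET side), which is the only change the applications see.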

However, storing a session variable is sometimes not a good idea, especially if you are working in an environment with load balancing. Session variables occasionally get lost, so there's a chance of form failures caused not by CSRF but by infrastructure problems. How do you solve this? Well, why don't we generate a checksum key instead?

How does this work? It works like this:

  1. Read the contents of the form data. Based on the field names, generate a checksum with an algorithm known only to you.
  2. Add this key to the form data.
  3. When the form is submitted, read the field names and recalculate the checksum.
  4. If the key is valid, pass the request to the application.
  5. If the key is invalid, fail the submission.
Voilà! I don't need a session variable after all. So I told my client that this can be done.
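
Here is a small, equally hypothetical sketch of how that checksum could be computed: hash the form's field names together with a secret that only the server knows. The response filter appends the result as the hidden field, and the request filter recomputes it from the submitted field names (excluding the hidden field itself) and compares.

    // Hypothetical sketch of the checksum idea; the secret and the algorithm are up to you.
    import java.security.MessageDigest;
    import java.util.Arrays;

    public final class FormChecksum {

        private static final String SECRET = "known-only-to-the-server";

        public static String checksum(String[] fieldNames) throws Exception {
            String[] sorted = fieldNames.clone();
            Arrays.sort(sorted); // make the result independent of field order

            MessageDigest digest = MessageDigest.getInstance("SHA-1");
            digest.update(SECRET.getBytes("UTF-8"));
            for (int i = 0; i < sorted.length; i++) {
                digest.update(sorted[i].getBytes("UTF-8"));
            }

            // Hex-encode the hash so it can travel as a hidden form field.
            byte[] hash = digest.digest();
            StringBuilder hex = new StringBuilder();
            for (int i = 0; i < hash.length; i++) {
                hex.append(Integer.toHexString((hash[i] & 0xFF) | 0x100).substring(1));
            }
            return hex.toString();
        }
    }

Because nothing is stored on the server, it does not matter which node behind the load balancer ends up handling the submission.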

Well, I emailed him that we can prevent CSRF without even changing the applications. At most, only the configuration (web.xml or web.config) needs to be changed to include the filter in each application.

So the filter will do this :


  1. User requests a page.
  2. Browser goes to the server and requests the page.
  3. Application sends the page to the browser.
  4. However, the Response Filter intercepts the response.
  5. Response Filter generates an authentication key. If you use the checksum-based solution, this key is a calculated checksum.
  6. Response Filter saves the key in the J2EE session if you follow the session-based solution; with the checksum-based solution, the key is the checksum and nothing needs to be stored.
  7. Response Filter appends the key to the HTML form.
  8. Response Filter sends the response to the browser.
  9. Browser renders the page (with the key embedded).
  10. User interacts with and submits the page.
  11. Information is sent to the application.
  12. However, the Request Filter intercepts the request.
  13. Request Filter checks for the authentication key.
  14. Request Filter authenticates the key by comparing it with the J2EE session if you're using the session-based solution; otherwise the filter validates the checksum.
  15. If the key is invalid, the Request Filter generates a response and shows an error page. The request ends.
  16. If valid, the Request Filter passes the information on to the application.
  17. Optionally, the hidden key field is removed. This is useful in cases where the application checks for extra fields and invalidates the request if any are found.
  18. Application processes the information.
Next comes the coding part. When I have time, I will probably code this in both .NET and Java and share the code with you. It will be a simple filter that uses the session-based solution.

The Migration Story : Migrating from Websphere Portal 5.1 to 6.1 Part 2

For Part 1, click here

The first task is to migrate the production 5.1.0.1 portal onto another server (also 5.1.0.1). This is an optional task; however, in my case it's required, as my client does not want to touch the production server.

Prior to my task, Operations helped us export the Production 5.1.0.1 by doing the following :

  1. Run the following command: ./xmlaccess.sh -in ExportRelease.xml -user [username] -password [password] -url http://[server]:9081/wps/config -out /tmp/release20090429/config.xml

    You can get the ExportRelease.xml HERE

  2. Back up the following: PortalServer/installableApps, PortalServer/installedApps, AppServer/installedApps/[server]/wps.ear, PortalServer/shared, PortalServer/deployed.

  3. Once done, Operations passed me the config.xml file, and I'm good to go to import the settings and files into the new 5.1.0.1 server.

The steps below contain some tips and tricks, as you will find out.

For our import to work, the following is required :

  1. Portal 5.1 must be set up as an empty portal. Portal Server 5.1 does not have an "action-empty-portal" task in WPSconfig.sh (.bat), so you have to install it as empty. However, there's a workaround for this, as shown later.

  2. You need to install all the fixes from the following link (otherwise you will hit issues such as the import running very slowly, etc.). Download the files HERE. You need an IBM login to download the files. Don't install them yet.

  3. In my case, since Portal 5.1 was not an empty portal when it was handed over to me (I forgot to tell them that I needed an empty portal), I had to use the Portal 6.0 scripts and modify them. These scripts are compatible. Let me explain.

    action-empty-portal actually runs three XML scripts via xmlaccess:

    1. CleanPortal.xml
    2. AddBasePortalResources.xml
    3. SchedulerCleanupTask.xml

    When you run WPSconfig.sh, it actually looks at a file called wps_cfg.xml. This file maps the commands to the XML actions that need to be run. For example, when you call action-empty-portal, the scripts above are run.

    In my case, I copied these scripts from Portal 6.0 into my Portal 5.1 and modified CleanPortal.xml (as shown HERE) so that it works with Portal 5.1. I didn't need AddBasePortalResources.xml, as I didn't need to add the language resources. However, I did need SchedulerCleanupTask.xml (as shown HERE).

    Now, I have to run these jobs individually. I ran them in this sequence:

  4. First, I ran this command : ./xmlaccess.sh -in /config/work/CleanPortal.xml -user [username]  -pwd [password] -out /tmp/xmlcleanportal.xml -url http://[server]:9081/wps/config

  5. And then, I ran this command : ./xmlaccess.sh -in  /config/work/SchedulerCleanupTask.xml -user [username] -pwd [password] -out /tmp/xmlcleanportal.xml -url http://[server]:9081/wps/config

  6. Before importing, I want to make sure that my credential segment is added, so run this command: WPSconfig.sh action-create-deployment-credentials

  7. After you run these commands, check that your portal is empty by restarting it and browsing to it. You should see an error, something like "VP Failed". This is normal for Portal 5.1.

  8. My Portal 5.1 and the production Portal 5.1 are the same version, so I copied everything from the backed-up installableApps and deployed directories into the new Portal Server's installableApps. After that, I copied the latest WAR files of our custom applications. If the Portal versions were different, I would have copied only the new WAR files of our custom applications.

  9. Once done, install the fixes as specified in number 2.

  10. Copy your custom shared JAR files into the new PortalServer/shared folder.

  11. Copy your custom themes and skins to the new Portal Server.

  12. Install any custom configurations and files that you may have. On my side, I configured JDBC resources for the databases used by the applications. You can do whatever custom configuration and installation you need here (on the WAS side only).

  13. Once done, I edited my config.xml (the XML file backed up from the production server) to point to the correct WAR files. For more information, check out THIS LINK.

  14. I imported my configuration by running the following: ./xmlaccess.sh -in /xmlaccessfiles/config.xml -out /tmp/import.xml -user [username] -pwd [password] -url http://[server]:9081/wps/config. If you find that the import is running very slowly (like one line per minute), then you didn't install the fixes; go back and check #2.

After doing so, I tested the portal server. You have to test the portal to make sure it works. For Portal 5.1, while testing, keep an eye on SystemOut.log and the wps*.log files under PortalServer/log to spot any portlet issues. Mine were with the theme and skins, due to a missing shared library I forgot to put in.

With testing done, my next task is to dump this portal using the WPmigrate portal-pre-upgrade task and import it onto the new WebSphere Portal Server 6.1.

The Migration Story : Migrating from Websphere Portal 5.1 to 6.1 Part 1

My current project is to migrate my client's WebSphere Portal 5.1.0.1 to WebSphere Portal 6.1.0.1. I would like to list the steps we took so that readers can understand the circumstances, failures, and successes of this experience.

For a start, I would like to mention the following information :

1. The team is divided into the following: Engineering, Application Consultants (incl. Solution Architect and Project Manager), Quality Assurance, and Operations.

2. The Portal Server has more than 40 applications (one application may contain 10 or more pages, each page containing 5 or more portlets), mostly written to JSR 168, except for the JSP Server portlets.

3.  The Production Portal is live, so this Portal has to be duplicated on another machine.

4. The LDAP server is used globally and contains more than 1,000 groups and 100,000 users. The Portal is enabled to use dynamic groups.

5. The DB2 version used is 8.2.

6. The new architecture will be integrated with Omnifind Enterprise Search Server.

7. A new crawler will be created to crawl the proprietary document management system.

8. A proper code change management process will be implemented on WebSphere Portal.

Since the project is composed of different teams, the following are the roles and responsibilities of each team:

1. Engineering Team: Set up the infrastructure, including database transfer and LDAP integration.

2. Application Team: Migrate applications from 5.1.0.1 to 6.1, create the crawler, install Omnifind Enterprise, and implement code change management.

3. Quality Assurance Team: Provide load, performance, and security testing.

4. Operations Team: Certify the installation and migration, and support the installed servers and migrated applications from an operational point of view.

Migrating is not the difficult part; the difficulty lies in making sure that the new environment makes my client's life easier, especially for managing WebSphere Portal, promoting code, and deploying applications. The issue my client faces right now is that each portal environment is configured on its own and there's no integration between them. I believe the majority of portal infrastructures are configured this way. What does this mean? Let me give you an example.

Say you have three distinct portal environments: development, staging, and production, each installed on its own. Imagine creating an application for your users, say a business application. This business application requires 10 pages, with each page having an average of 5 portlets, and each page has a different security configuration. As a developer, you know how to configure this in the development environment. If your company is not that big and you don't have processes in place, my guess is that your company will also ask you to deploy the portlets, pages, and configuration to staging and production. No problem; since you configured and developed this, you know how to do so. However, let me remind you that doing this means repeating every step in every environment: if you assign security to a portlet in development, you need to do the same in staging and production. Still no problem, since you're the one doing it.

However, let's bring the same scenario into a big MNC. MNCs normally have proper processes in place, meaning each environment is managed by a distinct team. My client has separate teams managing development, staging, and production. Development is open to developers, so that is not a problem. The problem arises when the application is promoted to staging: a different team needs to install it and redo the configuration you did in development, and the same thing happens in production. Since this is a manual process done by different teams, there is a tendency for human error, like misconfiguring the ACL on a portlet. This is a big issue for UAT, and in my client's experience it most often causes UAT delays. Aside from this, it frustrates the teams managing staging and production, as they are the ones mostly blamed for the issues.

As this series progresses, I will show you how to solve this issue by implementing Release Builder to minimize the manual work. You may wonder: why Release Builder and not Site Management?

The reason is that Site Management requires the different environments to open up to each other, and per my client's policy this is a security breach. You don't want staging and production servers communicating with each other, as you're opening a hole for potential hacking (if, say, your production server is compromised, your staging server can be compromised too, and the attacker may ultimately get into your network).

I'm part of the application migration team, so most of my posts will detail the tasks I'll be doing to migrate the applications.

Tuesday, April 14, 2009

Failed in r_gsk_secure_soc_init: GSK_ERROR_BAD_CERT(gsk rc = 414) with


I'm helping a client set up SSL for their Documentum application. Whenever I configure the SSL keystore, I hit this error in the http_plugin.log file:

Failed in r_gsk_secure_soc_init: GSK_ERROR_BAD_CERT(gsk rc = 414) with


Monday, April 13, 2009

Sorry for the long absence... anyway, here's a song for all of you

WAAHHHHHHHHHHHHH... it's been so long since I've blogged. Hayyyzz. Anyway, I found a song, a very beautiful song (melody-wise at least), though I don't understand the lyrics.

I just want to share with you all... if anyone can translate.. please help.. thanks!!



Friday, January 9, 2009

New find : Google App Engine

Hi Guys,

I saw something that's useful for all of us, called Google App Engine. Right now, it's open to only a few accounts, but it looks promising from its description:

Google App Engine is a platform for building and hosting web applications on Google Web Servers. It was first released as a beta version in April 2008. During its preview release, only free accounts are offered. According to Google, a free account can use up to 500 MB of persistent storage and enough CPU and bandwidth for about 5 million page views a month. Google has also indicated its intention to eventually provide additional resources for a price. During the preview stage, Google App Engine tries to accommodate sudden surges in traffic, but errors will occur if the quota continues to be exceeded. (http://en.wikipedia.org/wiki/Google_App_Engine)

Isn't it cool? I can use the same platform where my Gmail is hosted and the same technology that they are using. That seems pretty cool to me. The only thing is, it only supports Python... for now...



New Application Coming Soon

Hi Guys,

I might create a .NETCF application that can be used to:

1. Schedule an SMS to be sent.
2. Auto-send an SMS if you miss a call.
3. Auto-reply when an SMS is sent to you.
 
And probably another one that can add prefixes to calls whenever you need it (I especially do when I'm overseas).

I have yet to do so, but I think this will be the follow-up app.