
How to work around the limitations set by Same Origin Policy...

Last week, I wanted to make some Ajax requests from a page generated by one web application to another web application. The two applications are hosted on different servers, under different domains. You can't do this the normal way because of the limitations imposed by the "Same Origin Policy".

The Same Origin Policy is a security concept for client-side programming languages. It prevents client-side scripts from accessing pages that belong to a different web site. For example, if your application is hosted on "hostA" and one of its client-side scripts wants to send an Ajax request to another application hosted on "hostB", that request will be blocked by the Same Origin Policy. For such a request to be allowed, the protocol, host, and port must all be the same.
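
To make this concrete, here is a minimal sketch (the host names and URL path are hypothetical) of the kind of request that gets blocked, assuming the script runs on a page served from "hostA":

var xhr = new XMLHttpRequest();
xhr.open("GET", "http://hostB/some/resource"); // a different origin
xhr.onload = function () {
    // Never runs for a plain cross-origin request: the policy keeps the
    // script from reading hostB's response.
    console.log(xhr.responseText);
};
xhr.onerror = function () {
    // Depending on the browser, you either land here or get a security
    // exception from open()/send() instead.
    console.log("Blocked by the Same Origin Policy");
};
xhr.send();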

For more details:
http://en.wikipedia.org/wiki/Same_origin_policy
https://developer.mozilla.org/En/Same_origin_policy_for_JavaScrip

There are a few methods, listed below, to work around this limitation.
  • HTML5 formalizes a method for this, but only the newest browsers support it (see the first sketch after this list).
  • Cross Origin Resource Sharing (CORS): a way of enabling cross-domain access by having the server declare which other origins are allowed to read its responses (see the second sketch after this list). An excellent guide for this method: http://enable-cors.org/
  • JSONP: a method for requesting data from a server under a different domain (see the third sketch after this list). For more details: http://en.wikipedia.org/wiki/JSONP
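
The post above doesn't name the HTML5 feature, but presumably it is cross-document messaging (window.postMessage). A minimal sketch, assuming the page on "hostA" embeds a page from "hostB" in an iframe with the id "hostB-frame" (both names are made up):

// On the hostA page: wait until the frame has loaded, then send a message.
var frame = document.getElementById("hostB-frame");
frame.addEventListener("load", function () {
    frame.contentWindow.postMessage("ping", "http://hostB");
});

window.addEventListener("message", function (event) {
    if (event.origin !== "http://hostB") {
        return; // ignore messages from unexpected origins
    }
    console.log("Reply from hostB:", event.data);
});

// On the embedded hostB page:
window.addEventListener("message", function (event) {
    if (event.origin === "http://hostA") {
        event.source.postMessage("pong", event.origin);
    }
});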
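
For CORS, the change is on the server side: "hostB" has to opt in by sending the Access-Control-Allow-Origin response header (the URL below is again hypothetical). With that header in place, the plain Ajax request from the earlier sketch works:

// hostB's response must carry a header such as:
//   Access-Control-Allow-Origin: http://hostA
// (or * to allow any origin). The client-side code stays ordinary:
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://hostB/some/resource");
xhr.onload = function () {
    console.log(xhr.responseText); // readable now, because hostB allows hostA
};
xhr.send();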
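
JSONP sidesteps the policy by loading the data as a script, which is not restricted the way Ajax is. The endpoint and the "callback" parameter name below are assumptions; the server on "hostB" has to support wrapping its JSON in the named callback:

function handleData(data) {
    console.log("Data from hostB:", data);
}

var script = document.createElement("script");
script.src = "http://hostB/some/resource?callback=handleData";
document.head.appendChild(script);

// hostB responds with JavaScript instead of plain JSON, e.g.:
//   handleData({"id": 1, "name": "example"});
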
While researching these, I found an extremely cool JavaScript library called "easyXDM", which is built by combining the above methods. I highly recommend it to anyone who needs a workaround for cross-domain requests.
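
As a rough sketch of how easyXDM is used (see the links below for the authoritative documentation; the remote URL here is a placeholder), a socket is opened against a page hosted on the other domain and messages are passed over it:

var socket = new easyXDM.Socket({
    remote: "http://hostB/easyxdm/provider.html", // page served from the other domain
    onReady: function () {
        socket.postMessage("Hello from hostA");
    },
    onMessage: function (message, origin) {
        console.log("Received from " + origin + ": " + message);
    }
});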

More details:
http://easyxdm.net
https://github.com/oyvindkinsey/easyXDM
