A URL (Uniform Resource Locator)

In our day-to-day web life, we process thousands of URLs. A URL provides a way to access a resource on the web, the hypertext system that operates over the Internet. We save them, we share them with others, and sometimes we even create them (yes, you heard me right). A URL contains the name of the protocol to be used to access the resource and a resource name, and has two important parts. The first part identifies which protocol to use, e.g. http or https. URL protocols include HTTP (Hypertext Transfer Protocol) and HTTPS (HTTP Secure) for web resources, “mailto” for email addresses, “ftp” for files on a File Transfer Protocol (FTP) server, and “telnet” for a session to access remote computers. Note that the protocol identifier and the resource name are separated by a colon and two forward slashes. The second part identifies the IP address or domain name where the resource is located, e.g. rizdeveloperk.wordpress.com, and sometimes also contains a subdomain separated by a dot.

The resource name is the complete address to the resource. The format of the resource name depends entirely on the protocol used, but for many protocols, including HTTP, the resource name contains one or more of the following components:

Host Name
The name of the machine on which the resource lives.
Filename
The pathname to the file on the machine.
Port Number
The port number to which to connect (typically optional).
Reference
A reference to a named anchor within a resource that usually identifies a specific location within a file (typically optional).

For many protocols, the host name and the filename are required, while the port number and reference are optional. For example, the resource name for an HTTP URL must specify a server on the network (host name) and the path to the document on that machine (filename); it can also specify a port number and a reference.
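These components can be seen by parsing a URL with, for example, Python's standard urllib.parse module (the URL below is just an illustration):

```python
from urllib.parse import urlparse

# An illustrative URL containing all four components discussed above.
url = "http://example.com:8080/docs/index.html#section2"
parts = urlparse(url)

print(parts.scheme)    # protocol      -> http
print(parts.hostname)  # host name     -> example.com
print(parts.port)      # port number   -> 8080
print(parts.path)      # filename/path -> /docs/index.html
print(parts.fragment)  # reference     -> section2
```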

A URL is the most common type of Uniform Resource Identifier (URI). URIs are strings of characters used to identify a resource over a network. A URL is mainly used to point to a webpage, a component of a webpage, or a program on a website. The resource name consists of:

  • A domain name identifying a server or the web service; and
  • A program name or a path to the file on the server.

Optionally, it can also specify:

  • A network port to use in making the connection; or
  • A specific reference point within a file — a named anchor in an HTML (Hypertext Markup Language) file.

The resource available at any URL is accessed with the help of the Domain Name System (DNS), which resolves the domain name; the name may point to a single server or to a cluster of servers.

URL with WWW and Non WWW

It really doesn’t matter whether you use http://www.techomentous.com or techmomentous.com; a website can live at http://www.example.com or at example.com, and neither version has any special advantage. However, it is best for your site’s visibility to live at just one URL: having both versions active at the same time can cause problems such as duplicate content, so choose one version and force it with a 301 redirect from the other.
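As a sketch, with mod_rewrite enabled on Apache, forcing the non-www version could look something like this (example.com is a placeholder domain; the same effect can also be achieved with a Redirect directive):

```apache
# Send any request for www.example.com to example.com with a permanent (301) redirect
RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]
```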

History of URL

Uniform Resource Locators were defined in Request for Comments (RFC) 1738 in 1994 by Tim Berners-Lee, the inventor of the World Wide Web, and the URI working group of the Internet Engineering Task Force (IETF) as an outcome of collaboration started at the IETF Living Documents “Birds of a Feather” session in 1992.

The format combines the pre-existing system of domain names (created in 1985) with file path syntax, where slashes are used to separate directory and file names. Conventions already existed where server names could be prefixed to complete file paths, preceded by a double slash (//). Berners-Lee later expressed regret at the use of dots to separate the parts of the domain name within URIs, wishing he had used slashes throughout,[9] and also said that, given the colon following the first component of a URI, the two slashes before the domain name were unnecessary.

URL Normalization

URL normalization (or URL canonicalization) is the process of picking the best URL from the available choices. It is done to settle on one standard URL rather than having many. URL normalization is performed by crawlers to determine whether two syntactically different URLs are equivalent.

The ultimate aim of URL normalization is to reduce redundant web crawling by having a set of URLs that point to a unique set of web pages, and to help search engines return better, duplicate-free results. Search engines use URL normalization to determine the importance of web pages and to avoid indexing the same page twice. URL normalization is also described as the process of identifying similar and equivalent URLs: equivalent URLs point to the same resource, which is what the web user is interested in.

URL (Uniform Resource Locator) normalization is an important activity in web mining: an effective URL normalization technique makes retrieving web data smoother and avoids a lot of needless computation in web-mining activities. Web-page redirection and forward graphs can be used to measure the similarity between URLs and to build URL clusters, which in turn can be used for normalization.
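As a small illustration (not an exhaustive normalizer), a few common normalization rules can be applied with Python's urllib.parse: lower-casing the scheme and host, dropping default ports, and ensuring a path is present:

```python
from urllib.parse import urlparse, urlunparse

def normalize(url):
    """Apply a few common URL-normalization rules (illustrative only)."""
    p = urlparse(url)
    scheme = p.scheme.lower()
    host = (p.hostname or "").lower()
    # Keep the port only if it is not the default for the scheme.
    if p.port and (scheme, p.port) not in (("http", 80), ("https", 443)):
        host = f"{host}:{p.port}"
    path = p.path or "/"              # an empty path becomes "/"
    return urlunparse((scheme, host, path, "", p.query, ""))

print(normalize("HTTP://Example.COM:80/index.html"))  # -> http://example.com/index.html
print(normalize("http://example.com"))                # -> http://example.com/
```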

Canonical URL

Strictly speaking, “canonical URL” is not a separate concept: canonicalization, as mentioned above, is the process of picking the best URL from the available ones. When more than one URL points to the same page, the chosen one is often called the canonical URL. A few examples of URLs that typically serve the same page:

  1. example.com/
  2. example.com
  3. example.com/index.html (if html)
  4. example.com/index.jsp(if java)
  5. example.com/index.php (if php)
  6. example.com/home.asp (if IIS)

On most websites, the URLs above display the same content, but technically they are all different: a web server could return completely different content for each of them. A search engine will consider only one of them to be the canonical form of the URL. So it is necessary that you choose a preferred version and set up a 301 redirect from the other versions to it, in order to prevent duplicate content and keep your search ranking high.
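A related, widely supported mechanism (in addition to 301 redirects) is declaring the preferred version in the page’s HTML head, so search engines know which URL to index:

```html
<!-- tells crawlers that http://example.com/ is the preferred (canonical) URL -->
<link rel="canonical" href="http://example.com/" />
```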


5 Minute Guide to Clustering – Java Web Apps in Tomcat

What is Cluster ?

Enterprises are choosing Java 2, Enterprise Edition (J2EE) to deliver their mission-critical applications over the Web. Within the J2EE framework, clusters provide mission-critical services to ensure minimal downtime and maximum scalability. A cluster is a group of application servers that transparently run your J2EE application as if it were a single entity. To scale, you add machines to the cluster; to minimize downtime, you make sure every component of the cluster is redundant.

In this article we will gain a foundational understanding of setting up clustering with Apache and Tomcat. Yes, you heard me right. For the purposes of the rest of this article, when I say “Apache” I mean the Apache HTTP Server, and when I say “Tomcat” I mean Apache Tomcat.



There are pretty much two ways to set up basic clustering, which use two different Apache modules. The architecture for both is the same: Apache sits in front of the Tomcat nodes and acts as a load balancer.

Traffic is passed between Apache and the Tomcat node(s) using the binary AJP 1.3 protocol. The two modules are mod_jk and mod_proxy.

  • mod_jk stands for “Jakarta”, the original project under which Tomcat was developed. It is the older way of setting this up, but it still has some advantages.
  • mod_proxy is a newer and more generic way of setting this up. The rest of this guide will focus on mod_proxy, since it ships “out of the box” with newer versions of Apache.

You should be able to follow this guide by downloading Apache and Tomcat default distributions and following the steps.

Clustering Background:

You can cluster at the request level or the session level. Request level means that each request may go to a different node. This is the ideal, since traffic is balanced across all nodes and, if a node goes down, the user has no idea – but it requires session replication between all nodes: not just HttpSession, but ANY session state. For the purposes of this article I’m going to describe session-level clustering, since it is simpler to set up and works regardless of the dynamics of your application. After all, we only have 5 minutes!

Session-level clustering means that if your application requires a login or other session state, and one or more of your Tomcat nodes goes down, affected users will be asked to log in again on their next request, since they will hit a different node that has no stored session data for them.

This is still an improvement on a non-clustered environment where, if your node goes down, you have no application at all! And we still get the benefits of load balancing across nodes, which allows us to scale our application out horizontally across many machines. Anyhow without further ado, let’s get into the how-to.

Setting Up The Nodes:

In most situations you would deploy the nodes on physically separate machines, but in this example we will set them up on a single machine on different ports, which lets us test the configuration easily. Nothing much changes for the physically separate setup – just the hostnames of the nodes, as you would expect. Oh, and I’m working on Windows – but aside from the installation of Apache and Tomcat, nothing differs between platforms, since the configuration files are the same everywhere.

  1. Download the Tomcat .ZIP distribution.
  2. We’ll use one folder to install all this stuff in. Let’s say it’s “C:\cluster” for the purposes of the article.
  3. Unzip the Tomcat distro twice, into two folders – C:\cluster\tomcat-node-1 and C:\cluster\tomcat-node-2
  4. Start up each of the nodes, using the bin/startup.bat / bin/startup.sh scripts. Ensure they start. If they don’t, you may need to point Tomcat to the JDK installation on your machine.
  5. Open up the server.xml configuration in C:\cluster\tomcat-node-1\conf\server.xml

There are two places we need to (potentially) configure –
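As a sketch, the two relevant sections of a stock server.xml look roughly like this (the port and route values are the ones assumed in this guide; your distribution’s defaults may differ slightly):

```xml
<!-- 1. The AJP connector: leave port 8009 for node 1; use e.g. 8019 for node 2 -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

<!-- 2. The Engine element: add a jvmRoute that is unique per node -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">
  ...
</Engine>
```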


The first line is the connector for the AJP protocol. The “port” attribute is the important part here. We will leave this one as is, but for our second (or subsequent) Tomcat nodes, we will need to change it to a different value. The second part is the “engine” element. The “jvmRoute” attribute has to be added – this configures the name of this node in the cluster. The “jvmRoute” must be unique across all your nodes. For our purposes we will use “node1” and “node2” for our two node cluster.

7. This step is optional, but for production configs, you may want to remove the HTTP connector for Tomcat – that’s one less port to secure, and you don’t need it for the cluster to operate. Comment out the following lines of the server.xml –
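In a stock server.xml the HTTP connector looks roughly like the following; wrapping it in an XML comment disables it (attribute values may vary slightly between Tomcat versions):

```xml
<!--
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />
-->
```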


8. Now repeat this for C:\cluster\tomcat-node-2\conf\server.xml. Change the jvmRoute to “node2” and the AJP connector port to “8019”.

We’re done with Tomcat. Start each node up, and ensure it still works.


Setting Up The Apache Cluster

Now, this is an important part.

  1. Download and install Apache HTTP Server. Use the custom option to install it into C:\cluster\apache2.2
  2. Now open up C:\cluster\apache2.2\conf\httpd.conf in your favourite text editor.
  3. First, we need to uncomment the following lines (delete the ‘#’) –
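In a stock httpd.conf these are the LoadModule lines for the proxy modules we need (module file paths may differ depending on how your Apache was built):

```apache
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
```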


These enable the necessary mod_proxy modules in Apache.

4. Finally, go to the end of the file, and add the following:

<Proxy balancer://testcluster stickysession=JSESSIONID>
# node1 keeps Tomcat's default AJP port (8009); node2 was changed to 8019 above
BalancerMember ajp://localhost:8009 min=10 max=100 route=node1 loadfactor=1
BalancerMember ajp://localhost:8019 min=20 max=200 route=node2 loadfactor=1
</Proxy>

ProxyPass /examples balancer://testcluster/examples

The above is the actual clustering configuration. The first section configures a load balancer across our two nodes. The loadfactor can be modified to send more traffic to one node or the other – i.e., how much load can this member handle compared to the others? This lets you balance effectively if your servers have different hardware profiles.

Note also the “route” setting, which must match the “jvmRoute” names in each node’s Tomcat server.xml. This, in conjunction with the “stickysession” setting, is key for a Tomcat cluster, as it configures the session management: it tells mod_proxy to look for the node’s route in the given session cookie to determine which node that session is using. This allows all requests from a given client to go to the node holding that client’s session state.

The ProxyPass line configures the actual URL mapping from Apache to the load-balanced cluster. You may want this to be “/”, e.g. “ProxyPass / balancer://testcluster/”. In our case we’re just configuring the Tomcat /examples application for our test.
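The sticky-session mechanism described above can be sketched in a few lines of Python (a toy model, not mod_proxy’s actual implementation): the session cookie value ends with “.<route>”, and the balancer routes the request to the node whose jvmRoute matches.

```python
def pick_node(session_cookie, routes, fallback):
    """Toy sticky-session routing: JSESSIONID values look like '<id>.<route>'."""
    if session_cookie and "." in session_cookie:
        route = session_cookie.rsplit(".", 1)[1]
        if route in routes:
            return route          # stick to the node holding the session
    return fallback()             # no sticky info: let the balancer decide

routes = {"node1", "node2"}
print(pick_node("A1B2C3.node2", routes, lambda: "node1"))  # -> node2
print(pick_node(None, routes, lambda: "node1"))            # -> node1
```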

5. Save it, and restart your Apache server.

Test Run

With your Apache server running, you should be able to go to http://localhost/examples

You should get a 503 error page – this is because both Tomcat nodes are down.

Start up node1 (c:\cluster\tomcat-node-1\bin\startup) and reload http://localhost/examples

You should see the examples application from the default Tomcat installation


Shut down node1, and then start up node2. Repeat the test. You should see the same page as above – we have transparently moved from node1 to node2 since node1 went down. Start both nodes up, and your cluster is now working.

You’re done!

Thanks For Reading Folks.

Let me know in the comments whether it was useful.

What is Cloud Computing ? – Day 3

In the previous post we went through the various cloud computing service models: IaaS, PaaS, IDaaS, NaaS and SaaS.

Now in this post we will study cloud computing management and cloud data storage. The information in the What is Cloud Computing series is brought from http://www.tutorialspoint.com and howstuffworks.com; the main aim of this series is to provide information on cloud computing, and in the next few posts we will look into how clouds work with Java. Now let’s move on to cloud management.

Cloud Computing Management

It is the responsibility of the cloud provider to manage resources and their performance. Management may include several aspects of cloud computing such as load balancing, performance, storage and backups, capacity, and deployment. Management is required to access the full functionality of resources in the cloud. Cloud management involves a number of tasks to be performed by the cloud provider to ensure efficient use of cloud resources. Here, we will discuss some of these tasks:


It is necessary to audit the backups in a timely manner, to ensure you can successfully restore randomly selected files of different users. Backups can be performed in the following ways:

  • Backing up files by the company, from on-site computers to the disks that reside within the cloud.
  • Backing up files by the cloud provider.

It is necessary to know whether the cloud provider has encrypted the data, who has access to that data, and, if the backup is taken at different locations, where those locations are.


The managers should develop a diagram describing a detailed process flow, which describes the movement of the company’s data throughout the cloud solution.


The managers must know the procedure to exit from the services of a particular cloud provider. There must exist procedures enabling the managers to export the company’s data to a file and import it into another provider’s service.


The managers should know the security plans of the provider for different services:

  • Multitenant use
  • E-commerce processing
  • Employee screening
  • Encryption policy


The managers should know the capacity planning, in order to ensure that the cloud provider will meet the future capacity requirements of the business.

It is also required to manage scaling capabilities in order to ensure services can be scaled up or down as per the user need.


In order to identify the errors in the system, managers must audit the logs on a regular basis.


It is necessary to test the solutions provided by the provider in order to validate that they give correct results and are error-free. This is necessary for a system to be robust and reliable.

Cloud Computing Data Storage

Cloud storage is a service that allows you to save data on an offsite storage system managed by a third party and made accessible through a web services API.

Storage Devices

Storage devices can be broadly classified into two categories:

  • Block Storage Devices
  • File Storage Devices


Block Storage Devices offer raw storage to the clients. This raw storage can be partitioned to create volumes.


File storage devices offer storage to clients in the form of files, maintaining their own file system. This storage takes the form of Network Attached Storage (NAS).

Cloud Storage Classes

Cloud Storage can be broadly classified into two categories:

  • Unmanaged Cloud Storage
  • Managed Cloud Storage


Unmanaged cloud storage means that the storage is preconfigured for the consumer. The consumer can neither format the storage nor install their own file system or change drive properties.


Managed Cloud Storage offers online storage space on demand. Managed cloud storage system presents what appears to the user to be a raw disk that the user can partition and format.

Creating Cloud Storage System

The cloud storage system stores multiple copies of data on multiple servers and in multiple locations. If one system fails, it is only necessary to change the pointer to the stored object’s location.

To aggregate storage assets into cloud storage systems, the cloud provider can use storage virtualization software such as StorageGRID. It creates a virtualization layer that pools storage from different storage devices into a single management system. It can also manage data from CIFS and NFS file systems over the Internet. The following diagram shows how StorageGRID virtualizes storage into storage clouds:


Virtual Storage Containers

Virtual storage containers offer high-performance cloud storage systems. The Logical Unit Number (LUN) of a device, files, and other objects are created in virtual storage containers. The following diagram shows a virtual storage container defining a cloud storage domain:



Storing data in the cloud is not such a simple task. Apart from its flexibility and convenience, it also presents several challenges for consumers. Consumers require the ability to:

  • Provision additional storage on demand.
  • Know and restrict the physical location of the stored data.
  • Verify how data was erased.
  • Have access to a documented process for reliably disposing of data storage hardware.
  • Administer access control over data.


What is Cloud Computing ? – Day 2

This is the second post on What is Cloud Computing. In the previous post we learned what cloud computing is and how it works. We also studied its architecture, infrastructure components, and its various models. In this post we will study IaaS, PaaS, IDaaS, NaaS and SaaS.


IaaS provides access to fundamental resources such as physical machines, virtual machines, virtual storage, etc. Apart from these resources, IaaS also offers:

  •  Virtual machine disk storage
  •  Virtual local area network (VLANs)
  •  Load balancers
  •  IP addresses
  •  Software bundles

All of the above resources are made available to end users via server virtualization. Moreover, these resources are accessed by the customers as if they own them.



IaaS allows the cloud provider to freely locate the infrastructure over the Internet in a cost-effective manner. Some of the key benefits of IaaS are listed below:

  • Full Control of the computing resources through Administrative Access to VMs.
  • Flexible and Efficient renting of Computer Hardware.
  • Portability, Interoperability with Legacy Applications.


IaaS allows the consumer to access computing resources through administrative access to virtual machines in the following manner:

  • The consumer issues an administrative command to the cloud provider to run a virtual machine or to save data on the cloud’s server.
  • The consumer issues administrative commands to the virtual machines they own, to start a web server or install new applications.


IaaS resources such as virtual machines, storage, bandwidth, IP addresses, monitoring services, firewalls, etc., are all made available to consumers on rent. The consumer has to pay based on the length of time the resource is retained. With administrative access to virtual machines, the consumer can also run any software, even a custom operating system.


It is possible to move legacy applications and workloads between IaaS clouds. For example, network applications such as web servers or e-mail servers that normally run on consumer-owned server hardware can also be run from VMs in an IaaS cloud.


IaaS shares issues with PaaS and SaaS, such as network dependence and browser-based risks. It also has some specific issues associated with it, shown in the following diagram:



Because IaaS allows the consumer to run legacy software in the provider’s infrastructure, it exposes consumers to all of the security vulnerabilities of such legacy software.


A VM can become out of date with respect to security updates, because IaaS allows the consumer to keep virtual machines in running, suspended, or off states. The provider can update such VMs automatically, but this mechanism is hard and complex.


IaaS offers an isolated environment to individual consumers through a hypervisor: a software layer that includes hardware support for virtualization and splits a physical computer into multiple virtual machines.


The consumer uses virtual machines that in turn use common disk resources provided by the cloud provider. When the consumer releases a resource, the cloud provider must ensure that the next consumer to rent the resource does not observe data residue from the previous consumer.


Here are the characteristics of IaaS service model:

  • Virtual machines with pre-installed software.
  • Virtual machines with pre-installed Operating Systems such as Windows, Linux, and Solaris.
  • On-demand availability of resources.
  • The ability to store copies of particular data in different locations.
  • The computing resources can be easily scaled up and down.

Cloud Computing Platform as a Service(PaaS)

PaaS offers a runtime environment for applications. It also offers the development and deployment tools required to develop applications. PaaS has point-and-click tools that enable non-developers to create web applications.

Google’s App Engine and Force.com are examples of PaaS vendors. Developers may log on to these websites and use the built-in APIs to create web-based applications.

But the disadvantage of using PaaS is that the developer may get locked in to a particular vendor. For example, an application written in Python against Google’s API using Google’s App Engine is likely to work only in that environment. Therefore, vendor lock-in is the biggest problem with PaaS.

The following diagram shows how PaaS offers an API and development tools to the developers and how it helps the end user to access business applications.



Following are the benefits of PaaS model:



The consumer need not bother much about administration, because it is the responsibility of the cloud provider.


The consumer need not purchase expensive hardware, servers, power, or data storage.


It is very easy to scale up or down automatically based on application resource demands.


It is the responsibility of the cloud provider to maintain software versions and patch installations.


Like SaaS, PaaS also places a significant burden on the consumer’s browser to maintain reliable and secure connections to the provider’s systems. Therefore, PaaS shares many of the issues of SaaS. However, there are some specific issues associated with PaaS, as shown in the following diagram:



Although standard languages are used, implementations of platform services may vary. For example, the file, queue, or hash table interfaces of one platform may differ from those of another, making it difficult to transfer workloads from one platform to another.


PaaS applications are event-oriented, which poses resource constraints on applications; i.e., they have to answer a request within a given interval of time.


Since PaaS applications are dependent on the network, they must explicitly use cryptography and manage security exposures.


Here are the characteristics of PaaS service model:

  • PaaS offers a browser-based development environment. It allows the developer to create databases and edit application code via an Application Programming Interface or point-and-click tools.
  • PaaS provides built-in security, scalability, and web service interfaces.
  • PaaS provides built-in tools for defining workflow and approval processes and defining business rules.
  • It is easy to integrate with other applications on the same platform.
  • PaaS also provides web services interfaces that allow us to connect the applications outside the platform.

PaaS Types

Based on the functions, the PaaS can be classified into four types as shown in the following diagram:



Stand-alone PaaS works as an independent entity for a specific function. It does not include licensing or technical dependencies on specific SaaS applications.


The Application Delivery PaaS includes on-demand scaling and application security.


Open PaaS offers open-source software that helps a PaaS provider to run applications.


Add-on PaaS allows customization of an existing SaaS platform.

Cloud Computing Software as a Service(SaaS)

The Software as a Service (SaaS) model allows software applications to be provided as a service to end users. It refers to software that is deployed on a hosted service and accessible via the Internet. There are several kinds of SaaS applications, some of which are listed below:

  • Billing and Invoicing System
  • Customer Relationship Management (CRM) applications
  • Help Desk Applications
  • Human Resource (HR) Solutions

Some SaaS applications are not customizable, such as an office suite. But SaaS provides an Application Programming Interface (API) that allows the developer to develop customized applications.


Here are the characteristics of SaaS service model:

  • SaaS makes the software available over the Internet.
  • The software is maintained by the vendor rather than by those running it.
  • The license to the software may be subscription-based or usage-based, and is billed on a recurring basis.
  • SaaS applications are cost-effective, since they do not require any maintenance on the end-user side.
  • They are available on demand.
  • They can be scaled up or down on demand.
  • They are automatically upgraded and updated.
  • SaaS offers a shared data model; therefore, multiple users can share a single instance of the infrastructure. It is not required to hard-code functionality for individual users.
  • All users run the same version of the software.


Using SaaS has proved to be beneficial in terms of scalability, efficiency, performance and much more. Some of the benefits are listed below:

  • Modest Software Tools
  • Efficient use of Software Licenses
  • Centralized Management & Data
  • Platform responsibilities managed by provider
  • Multitenant solutions


SaaS application deployment requires little or no client-side software installation, which results in the following benefits:

  • No requirement for complex software packages at client side
  • Little or no risk of configuration at client side
  • Low distribution cost


The client can have a single license for multiple computers running at different locations, which reduces the licensing cost. Also, there is no requirement for license servers, because the software runs in the provider’s infrastructure.


The data stored by the cloud provider is centralized. However, the cloud providers may store data in a decentralized manner for sake of redundancy and reliability.


All platform responsibilities such as backups, system maintenance, security, hardware refreshes, power management, etc., are performed by the cloud provider. The consumer need not bother about them.


Multitenancy allows multiple users to share a single instance of a resource in virtual isolation. Consumers can customize their applications without affecting the core functionality.


There are several issues associated with SaaS, some of them are listed below:

  • Browser based risks
  • Network dependence
  • Lack of portability between SaaS clouds


If the consumer visits a malicious website and the browser becomes infected, subsequent access to the SaaS application might compromise the consumer’s data.

To avoid such risks, the consumer can use multiple browsers, dedicating a specific browser to SaaS applications, or can use a virtual desktop while accessing SaaS applications.


The SaaS application can be delivered only when the network is continuously available. The network should also be reliable, but network reliability cannot be guaranteed by either the cloud provider or the consumer.


Transferring workloads from one SaaS cloud to another is not easy, because workflows, business logic, user interfaces, and support scripts can be provider-specific.

Open SaaS and SOA

Open SaaS uses SaaS applications developed using open-source programming languages. These SaaS applications can run on any open-source operating system and database. Open SaaS has several benefits, some of which are listed below:

  • No License Required
  • Low Deployment Cost
  • Less Vendor Lock-in
  • More portable applications
  • More Robust Solution

The following diagram shows the SaaS implementation based on SOA:


Cloud Computing Identity as a Service(IDaaS)


Employees in a company need to log in to systems to perform various tasks. These systems may be based on a local server or cloud-based. The following are problems that an employee might face:

  • Remembering different username and password combinations for accessing multiple servers.
  • If an employee leaves the company, it is necessary to ensure that every account of that user has been disabled. This increases the workload on IT staff.

To solve the above problems, a new technique emerged, known as Identity as a Service (IDaaS).

IDaaS offers management of identity (information) as a digital entity. This identity can be used during electronic transactions.


Identity refers to a set of attributes associated with something that makes it recognizable. Two objects may have the same attributes, but their identities cannot be the same; a unique identity is assigned through a unique identification attribute.

There are several identity services that have been deployed to validate services such as validating web sites, transactions, transaction participants, client, etc. Identity as a Service may include the following:

  • Directory Services
  • Federated Services
  • Registration
  • Authentication Services
  • Risk and Event monitoring
  • Single sign-on services
  • Identity and Profile management

Single Sign-On (SSO)

To solve the problem of using a different username and password combination for each server, companies now employ single sign-on software, which allows the user to log in only once and manages the user’s access to the other systems.

SSO has single authentication server, managing multiple accesses to other systems, as shown in the following diagram:



There are several implementations of SSO. Here, we will discuss the common working of SSO:


Following steps explain the working of Single Sign-On software:

  1. User logs into the authentication server using a username and password.
  2. The authentication server returns the user’s ticket.
  3. User sends the ticket to intranet server.
  4. Intranet server sends the ticket to the authentication server.
  5. Authentication server sends the user’s security credentials for that server back to the intranet server.

If an employee leaves the company, it is then only necessary to disable the user at the authentication server, which in turn disables the user’s access to all the systems.
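The five steps above can be sketched as a toy in-memory model (all class and method names here are hypothetical, purely for illustration):

```python
class AuthServer:
    """Toy authentication server issuing and validating tickets (steps 1-2, 4-5)."""
    def __init__(self, users):
        self.users = dict(users)   # username -> password
        self.tickets = {}          # ticket -> username

    def login(self, username, password):
        if self.users.get(username) != password:
            return None
        ticket = f"ticket-for-{username}"
        self.tickets[ticket] = username
        return ticket

    def validate(self, ticket):
        return self.tickets.get(ticket)

    def disable(self, username):
        # Disabling a user here revokes access to every connected system.
        self.users.pop(username, None)
        self.tickets = {t: u for t, u in self.tickets.items() if u != username}

class IntranetServer:
    """Toy intranet server that trusts the auth server (step 3)."""
    def __init__(self, auth):
        self.auth = auth

    def handle(self, ticket):
        user = self.auth.validate(ticket)
        return f"welcome {user}" if user else "login required"

auth = AuthServer({"alice": "secret"})
intranet = IntranetServer(auth)
ticket = auth.login("alice", "secret")
print(intranet.handle(ticket))   # -> welcome alice
auth.disable("alice")
print(intranet.handle(ticket))   # -> login required
```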

Federated Identity Management (FIDM)

FIDM describes the technologies and protocols that enable a user to package security credentials across security domains. It uses the Security Assertion Markup Language (SAML) to package a user’s security credentials, as shown in the following diagram:



OpenID allows users to log in to multiple websites with a single account. Google, Yahoo!, Flickr, MySpace, and WordPress.com are some of the companies that support OpenID. Benefits of OpenID include:


  • Increased site conversion rates.
  • Access to greater user profile content.
  • Fewer problems with lost passwords.
  • Ease of content integration into social networking sites.

Cloud Computing Network as a Service(NaaS)


Network as a Service (NaaS) allows us to access network infrastructure directly and securely. NaaS makes it possible to deploy custom routing protocols.

NaaS uses virtualized network infrastructure to provide network services to the consumer. It is the responsibility of NaaS provider to maintain and manage the network resources which decreases the workload from the consumer. Moreover, NaaS offers network as a utility.

NaaS is also based on pay-per-use model.

How is NaaS delivered?

To use the NaaS model, the consumer logs on to a web portal, where he can access an online API and customize routes.

In turn, the consumer has to pay for the capacity used. It is also possible to turn the capacity off at any time.

Mobile NaaS

Mobile NaaS offers more efficient and flexible control over mobile devices. It uses virtualization to simplify the architecture to create more efficient processes.

Following diagram shows the Mobile NaaS service elements:


NaaS Benefits

NaaS offers a number of benefits, some of them are discussed below:



Each consumer is independent and can segregate the network.


Customers have to pay for high-capacity network only when needed.


There exist reliability treatments that can be applied for critical applications.


There exist data-protection solutions for highly sensitive applications.


It is very easy to integrate new service elements to the network.


There exist more open support models, which help to reduce operating costs.


The customer traffic is logically isolated.
