Saturday 11 November 2017

How to configure the C3P0 connection pool in Hibernate

Connection Pool
A connection pool is good for performance: it saves the Java application from creating a new connection every time it interacts with the database, and it minimizes the cost of opening and closing connections. See the Wikipedia explanation of connection pooling.


Hibernate ships with an internal connection pool, but it is not suitable for production use. In this tutorial, we show you how to integrate a third-party connection pool, C3P0, with Hibernate.

1. Get hibernate-c3p0.jar

To integrate c3p0 with Hibernate, you need hibernate-c3p0.jar; get it from the JBoss repository.
File : pom.xml
<project ...>

 <repositories>
  <repository>
   <id>JBoss repository</id>
   <url>http://repository.jboss.org/nexus/content/groups/public/</url>
  </repository>
 </repositories>

 <dependencies>

  <dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-core</artifactId>
   <version>3.6.3.Final</version>
  </dependency>

  <!-- Hibernate c3p0 connection pool -->
  <dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-c3p0</artifactId>
   <version>3.6.3.Final</version>
  </dependency>

 </dependencies>
</project>
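The hibernate-c3p0 artifact should pull the c3p0 jar itself in as a transitive dependency. If you want to confirm what Maven actually resolved (assuming Maven is installed and you run this from the project directory), a quick check looks like this:
mvn dependency:tree | grep -i c3p0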
 

2. Configure c3p0 properties

To configure c3p0, put the c3p0 configuration details in hibernate.cfg.xml, like this:
File : hibernate.cfg.xml
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
 "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
 "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
 <session-factory>
  <property name="hibernate.connection.driver_class">oracle.jdbc.driver.OracleDriver</property>
  <property name="hibernate.connection.url">jdbc:oracle:thin:@localhost:1521:MKYONG</property>
  <property name="hibernate.connection.username">mkyong</property>
  <property name="hibernate.connection.password">password</property>
  <property name="hibernate.dialect">org.hibernate.dialect.Oracle10gDialect</property>
  <property name="hibernate.default_schema">MKYONG</property>
  <property name="show_sql">true</property>
  <property name="hibernate.c3p0.min_size">5</property>
  <property name="hibernate.c3p0.max_size">20</property>
  <property name="hibernate.c3p0.timeout">300</property>
  <property name="hibernate.c3p0.max_statements">50</property>
  <property name="hibernate.c3p0.idle_test_period">3000</property>
  <mapping class="com.mkyong.user.DBUser"></mapping>
 </session-factory>
</hibernate-configuration>
  1. hibernate.c3p0.min_size – Minimum number of JDBC connections in the pool. Hibernate default: 1
  2. hibernate.c3p0.max_size – Maximum number of JDBC connections in the pool. Hibernate default: 100
  3. hibernate.c3p0.timeout – When an idle connection is removed from the pool (in seconds). Hibernate default: 0, never expire.
  4. hibernate.c3p0.max_statements – Number of prepared statements that will be cached. Increases performance. Hibernate default: 0, caching is disabled.
  5. hibernate.c3p0.idle_test_period – Idle time in seconds before a connection is automatically validated. Hibernate default: 0
Note
For details about the hibernate-c3p0 configuration settings, please read this article. 
 

3. Run it, output

Done. Run it and you should see output similar to the following:
[Image: c3p0 connection pool in Hibernate console output]
During connection initialization, 5 database connections are created in the connection pool, ready for reuse by your web application.

Reference

  1. http://docs.jboss.org/hibernate/core/3.6/reference/en-US/html_single/#d0e1748
  2. http://www.mchange.com/projects/c3p0/index.html#appendix_d
  3. https://www.mkyong.com/hibernate/how-to-configure-the-c3p0-connection-pool-in-hibernate/
 
 

Saturday 22 July 2017

How to: Linux / UNIX create soft link with ln command

Two types of links

There are two types of links:
  • symbolic links: refer to a symbolic path indicating the abstract location of another file
  • hard links: refer to the specific location of physical data (a quick comparison of the two follows below)
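As a quick illustration of the difference (the file names here are just examples), create one of each kind of link and compare the inode numbers that ls -i reports; the hard link shares the original file's inode, while the symbolic link gets its own:
$ touch file1
$ ln file1 hardlink1
$ ln -s file1 softlink1
$ ls -li file1 hardlink1 softlink1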

How do I create soft link / symbolic link?

Soft links are created with the ln command. For example, the following would create a soft link named link1 to a file named file1, both in the current directory:
$ ln -s file1 link1
To verify the new soft link, run:
$ ls -l file1 link1
Sample outputs:
-rw-r--r--  1 veryv  wheel  0 Mar  7 22:01 file1
lrwxr-xr-x  1 veryv  wheel  5 Mar  7 22:01 link1 -> file1
From the above outputs it is clear that a symbolic link named ‘link1’ contains the name of the file named ‘file1’ to which it is linked. So the syntax is as follows to create a symbolic link in Unix or Linux, at the shell prompt:
$ ln -s {source-filename} {symbolic-filename}

For example, to create a soft link to /webroot/home/httpd/test.com/index.php as /home/vivek/index.php, enter the following command:
$ ln -s /webroot/home/httpd/test.com/index.php /home/vivek/index.php
$ ls -l

Sample outputs:
lrwxrwxrwx 1 vivek  vivek    16 2007-09-25 22:53 index.php -> /webroot/home/httpd/test.com/index.php
You can now edit the soft link named /home/vivek/index.php and /webroot/home/httpd/test.com/index.php will get updated:
$ vi /home/vivek/index.php
Your actual file /webroot/home/httpd/test.com/index.php remains on disk even if you delete the soft link /home/vivek/index.php using the rm command:
$ rm /home/vivek/index.php ## <--- link gone ##
## But original/actual file remains as it is ##
$ ls -l /webroot/home/httpd/test.com/index.php

How to stop a domain name from pointing to your website's IP address

 
You cannot have it refuse connections, since the hostname (or IP) that the user is trying to use as their HTTP host is not known to the server until the client actually sends an HTTP request. The TCP listener is always bound to the IP address.
Would an HTTP error response be acceptable instead?

<VirtualHost *:80>
    ServerName catchall
    <Location />
        Order allow,deny
        Deny from all
    </Location>
</VirtualHost>

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/
    <Directory /var/www/>
        AllowOverride All
        Order allow,deny
        allow from all
    </Directory>
</VirtualHost>
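To check that the catch-all really answers for unknown host names, you can send a request with an arbitrary Host header and look at the status code, which should come back as 403 Forbidden (replace 203.0.113.10, a documentation placeholder, with your server's real IP):
curl -s -o /dev/null -w "%{http_code}\n" -H "Host: some-other-domain.example" http://203.0.113.10/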

Friday 21 July 2017

How To Set Up Apache Virtual Hosts on Ubuntu 14.04 LTS

 Introduction

The Apache web server is the most popular way of serving web content on the internet. It accounts for more than half of all active websites on the internet and is extremely powerful and flexible.
Apache breaks its functionality and components into individual units that can be customized and configured independently. The basic unit that describes an individual site or domain is called a virtual host.
These designations allow the administrator to use one server to host multiple domains or sites off of a single interface or IP by using a matching mechanism. This is relevant to anyone looking to host more than one site off of a single VPS.
Each domain that is configured will direct the visitor to a specific directory holding that site's information, never indicating that the same server is also responsible for other sites. This scheme is expandable without any software limit as long as your server can handle the load.
In this guide, we will walk you through how to set up Apache virtual hosts on an Ubuntu 14.04 VPS. During this process, you'll learn how to serve different content to different visitors depending on which domains they are requesting.

Prerequisites

Before you begin this tutorial, you should create a non-root user as described in steps 1-4 here.
You will also need to have Apache installed in order to work through these steps. If you haven't already done so, you can get Apache installed on your server through apt-get:
sudo apt-get update
sudo apt-get install apache2
After these steps are complete, we can get started.
For the purposes of this guide, my configuration will make a virtual host for example.com and another for test.com. These will be referenced throughout the guide, but you should substitute your own domains or values while following along.
To learn how to set up your domain names with DigitalOcean, follow this link. If you do not have domains available to play with, you can use dummy values.
We will show how to edit your local hosts file later on to test the configuration if you are using dummy values. This will allow you to test your configuration from your home computer, even though your content won't be available through the domain name to other visitors.

Step One — Create the Directory Structure

The first step that we are going to take is to make a directory structure that will hold the site data that we will be serving to visitors.
Our document root (the top-level directory that Apache looks at to find content to serve) will be set to individual directories under the /var/www directory. We will create a directory here for both of the virtual hosts we plan on making.
Within each of these directories, we will create a public_html folder that will hold our actual files. This gives us some flexibility in our hosting.
For instance, for our sites, we're going to make our directories like this:
sudo mkdir -p /var/www/example.com/public_html
sudo mkdir -p /var/www/test.com/public_html
The example.com and test.com portions represent the domain names that we want to serve from our VPS.

Step Two — Grant Permissions

Now we have the directory structure for our files, but they are owned by our root user. If we want our regular user to be able to modify files in our web directories, we can change the ownership by doing this:
sudo chown -R $USER:$USER /var/www/example.com/public_html
sudo chown -R $USER:$USER /var/www/test.com/public_html
The $USER variable will take the value of the user you are currently logged in as when you press "ENTER". By doing this, our regular user now owns the public_html subdirectories where we will be storing our content.
We should also modify our permissions a little bit to ensure that read access is permitted to the general web directory and all of the files and folders it contains so that pages can be served correctly:
sudo chmod -R 755 /var/www
Your web server should now have the permissions it needs to serve content, and your user should be able to create content within the necessary folders.

Step Three — Create Demo Pages for Each Virtual Host

We have our directory structure in place. Let's create some content to serve.
We're just going for a demonstration, so our pages will be very simple. We're just going to make an index.html page for each site.
Let's start with example.com. We can open up an index.html file in our editor by typing:
nano /var/www/example.com/public_html/index.html
In this file, create a simple HTML document that indicates the site it is connected to. My file looks like this:
<html>
  <head>
    <title>Welcome to Example.com!</title>
  </head>
  <body>
    <h1>Success!  The example.com virtual host is working!</h1>
  </body>
</html>
Save and close the file when you are finished.
We can copy this file to use as the basis for our second site by typing:
cp /var/www/example.com/public_html/index.html /var/www/test.com/public_html/index.html
We can then open the file and modify the relevant pieces of information:
nano /var/www/test.com/public_html/index.html
<html>
  <head>
    <title>Welcome to Test.com!</title>
  </head>
  <body>
    <h1>Success!  The test.com virtual host is working!</h1>
  </body>
</html>
Save and close this file as well. You now have the pages necessary to test the virtual host configuration.

Step Four — Create New Virtual Host Files

Virtual host files are the files that specify the actual configuration of our virtual hosts and dictate how the Apache web server will respond to various domain requests.
Apache comes with a default virtual host file called 000-default.conf that we can use as a jumping off point. We are going to copy it over to create a virtual host file for each of our domains.
We will start with one domain, configure it, copy it for our second domain, and then make the few further adjustments needed. The default Ubuntu configuration requires that each virtual host file end in .conf.

Create the First Virtual Host File

Start by copying the file for the first domain:
sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/example.com.conf
Open the new file in your editor with root privileges:
sudo nano /etc/apache2/sites-available/example.com.conf
The file will look something like this (I've removed the comments here to make the file more approachable):
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
As you can see, there's not much here. We will customize the items here for our first domain and add some additional directives. This virtual host section matches any requests that are made on port 80, the default HTTP port.
First, we need to change the ServerAdmin directive to an email that the site administrator can receive emails through.
ServerAdmin admin@example.com
After this, we need to add two directives. The first, called ServerName, establishes the base domain that should match for this virtual host definition. This will most likely be your domain. The second, called ServerAlias, defines further names that should match as if they were the base name. This is useful for matching hosts you defined, like www:
ServerName example.com
ServerAlias www.example.com
The only other thing we need to change for a basic virtual host file is the location of the document root for this domain. We already created the directory we need, so we just need to alter the DocumentRoot directive to reflect the directory we created:
DocumentRoot /var/www/example.com/public_html
In total, our virtualhost file should look like this:
<VirtualHost *:80>
    ServerAdmin admin@example.com
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /var/www/example.com/public_html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Save and close the file.

Copy First Virtual Host and Customize for Second Domain

Now that we have our first virtual host file established, we can create our second one by copying that file and adjusting it as needed.
Start by copying it:
sudo cp /etc/apache2/sites-available/example.com.conf /etc/apache2/sites-available/test.com.conf
Open the new file with root privileges in your editor:
sudo nano /etc/apache2/sites-available/test.com.conf
You now need to modify all of the pieces of information to reference your second domain. When you are finished, it may look something like this:
<VirtualHost *:80>
    ServerAdmin admin@test.com
    ServerName test.com
    ServerAlias www.test.com
    DocumentRoot /var/www/test.com/public_html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Save and close the file when you are finished.

Step Five — Enable the New Virtual Host Files

Now that we have created our virtual host files, we must enable them. Apache includes some tools that allow us to do this.
We can use the a2ensite tool to enable each of our sites like this:
sudo a2ensite example.com.conf
sudo a2ensite test.com.conf
When you are finished, you need to restart Apache to make these changes take effect:
sudo service apache2 restart
You will most likely receive a message saying something similar to:
 * Restarting web server apache2
 AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message
This is a harmless message that does not affect our site.
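It can also be worth letting Apache check its own configuration at this point. A syntax check and a dump of the parsed virtual host settings (both standard apache2ctl invocations on Ubuntu) look like this:
sudo apache2ctl configtest
sudo apache2ctl -S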

Step Six — Set Up Local Hosts File (Optional)

If you haven't been using actual domain names that you own to test this procedure and have been using some example domains instead, you can at least test the functionality of this process by temporarily modifying the hosts file on your local computer.
This will intercept any requests for the domains that you configured and point them to your VPS server, just as the DNS system would do if you were using registered domains. This will only work from your computer though, and is simply useful for testing purposes.
Make sure you are operating on your local computer for these steps and not your VPS server. You will need to know the computer's administrative password or otherwise be a member of the administrative group.
If you are on a Mac or Linux computer, edit your local file with administrative privileges by typing:
sudo nano /etc/hosts
If you are on a Windows machine, you can find instructions on altering your hosts file here.
The details that you need to add are the public IP address of your VPS server followed by the domain you want to use to reach that VPS.
For the domains that I used in this guide, assuming that my VPS IP address is 111.111.111.111, I could add the following lines to the bottom of my hosts file:
127.0.0.1   localhost
127.0.1.1   guest-desktop
111.111.111.111 example.com
111.111.111.111 test.com
This will intercept any requests for example.com and test.com made on our computer and send them to our server at 111.111.111.111. This is what we want if we are not actually the owners of these domains, in order to test our virtual hosts.
Save and close the file.

Step Seven — Test your Results

Now that you have your virtual hosts configured, you can test your setup easily by going to the domains that you configured in your web browser:
http://example.com
You should see a page that looks like this:
[Image: Apache virtual host – example.com success page]
Likewise, you can visit your second page:
http://test.com
You will see the file you created for your second site:
[Image: Apache virtual host – test.com success page]
If both of these sites work well, you've successfully configured two virtual hosts on the same server.
If you adjusted your home computer's hosts file, you may want to delete the lines you added now that you verified that your configuration works. This will prevent your hosts file from being filled with entries that are not actually necessary.
If you need to access this long term, consider purchasing a domain name for each site you need and setting it up to point to your VPS server.
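If you would rather not edit your hosts file at all, curl can map a name to your server for a single request with its --resolve option (111.111.111.111 again stands in for your real VPS IP); the response bodies should be the two different index.html pages you created:
curl --resolve example.com:80:111.111.111.111 http://example.com/
curl --resolve test.com:80:111.111.111.111 http://test.com/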

Conclusion

If you followed along, you should now have a single server handling two separate domain names. You can expand this process by following the steps we outlined above to make additional virtual hosts.
There is no software limit on the number of domain names Apache can handle, so feel free to make as many as your server is capable of handling.

Reference - https://www.digitalocean.com/community/tutorials/how-to-set-up-apache-virtual-hosts-on-ubuntu-14-04-lts
 

How to move, copy and delete a file or, folder on linux

Sometimes you don't remember the Linux commands for these basic operations: copying, moving, and deleting a file or folder. There are three commands, one for each operation:

  1. cp: copying files – keep the original file and make a duplicate of it.
  2. mv: moving (and renaming) files – move a file from one directory location to another.
  3. rm: deleting files.
The Linux command line offers far greater power and efficiency than the GUI. For instance, to instantly seek out and move all of the expense files used in the examples below (joe_expenses, cath_expenses, and so on) to a subdirectory called budget, your command line instruction would simply be something like:
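mv *_expenses budget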
Each of the Linux commands to move, copy, or delete files has options to make it more productive. Read on to find out more.

1. cp: Copying Files

A basic example of the cp command to copy files (keep the original file and make a duplicate of it) might look like:
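cp joe_expenses cashflow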
In this example, we copy the joe_expenses file to the cashflow directory, which (because we haven’t specified anything else) is in our login directory.

Additional Options

Options are similar to those for the mv command:
-i for interactive, asks you to confirm whether an existing file (perhaps a version of joe_expenses already exists in the cashflow directory) should be overwritten in the copying process.
-r for recursive, to copy all the subdirectories and files in a given directory and preserve the tree structure.
-v for verbose, shows the files being copied one by one. For example:
cp -v joe_expenses cath_expenses cashflow

2. mv: Moving (and Renaming) Files

The mv command lets you move a file from one directory location to another. It also lets you rename a file (there is no separate rename command).
Let’s start with the basic format:
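mv joe_expenses JOE1_expenses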
In this case, if JOE1_expenses does not exist, it will be created with the exact content of joe_expenses, and joe_expenses will disappear.
If JOE1_expenses already exists, its content will be replaced with that of joe_expenses (and joe_expenses will still disappear).

Additional Options

Options for mv include:
-i for interactive, asks you to confirm whether an existing file should be overwritten.
-f for force, overrides all interactivity and executes the mv instruction without returning any prompts. (You must be sure your instruction is exactly what you want if you decide to apply the -f option.)
-v for verbose, to show the files being moved one by one

3. rm: Deleting Files

File deletion is done using the rm (remove) command.
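rm joe_expenses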
This will delete the joe_expenses file forever (maybe Joe would like that!).

Additional Options

The rm command options include -i (interactive), -f (force), -v (verbose), and -r (recursive).
Like the commands above, it can also be applied to more than one file at a time, for example:
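rm joe_expenses cath_expenses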
This will remove both of these files.
Using the wildcard character: “*”
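rm *_expenses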
This will remove joe_expenses, cath_expenses, mike_expenses, and robin_expenses, forever.
Likewise, if you decide you want to remove everything you copied into the cashflow directory above and the directory itself, use:
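rm -r cashflow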

 Use Caution with These Commands

For each of these commands, the use of the -i (interactive) option is highly recommended, at least in the beginning. This gives you a second chance to spot any unfortunate mistakes.
Similarly, use caution if you apply either -f (force) or -r (recursive), especially if you are also using a wildcard character like “*” to apply the command to several files at once.

Beware of the -r Option!

We’ll say it once and once only. Don’t do this:
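rm -r *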
This will delete every file and every directory you have.

what is sitemap.xml file

Sitemaps are a URL inclusion protocol and complement robots.txt, a URL exclusion protocol.
The sitemap.xml file allows a webmaster to inform search engines about all URLs on a website that are available for crawling. It lets webmasters include additional information about each URL: when it was last updated, how often it changes, and how important it is in relation to other URLs on the site. This allows search engines to crawl the site more intelligently. A sitemap is an XML file that lists the URLs for a site.
Sitemaps are particularly beneficial on websites where:
  • some areas of the website are not available through the browsable interface,
  • webmasters use rich Ajax, Silverlight, or Flash content that is not normally processed by search engines,
  • the site is very large and there is a chance that web crawlers will overlook some of the new or recently updated content,
  • the website has a huge number of pages that are isolated or not well linked together, or
  • the website has few external links.
Example of a sitemap:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd">
    <url>
        <loc>http://www.pineapplelabs.in/</loc>
        <lastmod>2017-07-21</lastmod>
        <changefreq>daily</changefreq>
        <priority>1.0</priority>
    </url>
    <url>
        <loc>http://www.pineapplelabs.in/about.php</loc>
        <lastmod>2017-07-21</lastmod>
        <changefreq>daily</changefreq>
        <priority>0.9</priority>
    </url>
    <url>
        <loc>http://www.pineapplelabs.in/career.php</loc>
        <lastmod>2017-07-21</lastmod>
        <changefreq>daily</changefreq>
        <priority>0.9</priority>
    </url>
    <url>
        <loc>http://pineapple-labs.blogspot.in/</loc>
        <lastmod>2017-07-21</lastmod>
        <changefreq>daily</changefreq>
        <priority>0.9</priority>
    </url>
</urlset>
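As a quick sanity check that the file is well-formed XML before submitting it to a search engine, you can run it through xmllint (from the libxml2-utils package on Debian/Ubuntu); no output means no syntax errors:
curl -s http://www.pineapplelabs.in/sitemap.xml | xmllint --noout -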

Thursday 20 July 2017

What is the robots.txt file on a website?

Search engines generally crawl a website using computer programs known as bots; Google, for example, crawls websites using Googlebot. The robots.txt file restricts a bot's access to folders that contain confidential or otherwise unnecessary data.
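The file always lives at the root of the site, so you can inspect any public site's rules directly, for example:
curl -s https://en.wikipedia.org/robots.txt | head -n 20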

Below, the file format is explained with examples.

This example tells all robots to stay out of a website:

User-agent: *
Disallow: /
 
This example tells all robots that they can visit all files because the wildcard * stands for all robots and the Disallow directive has no value, meaning no pages are disallowed.

User-agent: *
Disallow:

The same result can be accomplished with an empty or missing robots.txt file.
 
This example tells all robots to stay away from one specific file (all other files in the specified directory will still be processed):

User-agent: *
Disallow: /directory/file.html
 
This example tells all robots not to enter three directories:
 
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /junk/

This example tells a specific robot to stay out of a website:

User-agent: BadBot # replace 'BadBot' with the actual user-agent of the bot
Disallow: /
 
This example tells two specific robots not to enter one specific directory:

User-agent: BadBot # replace 'BadBot' with the actual user-agent of the bot
User-agent: Googlebot
Disallow: /private/
 
Example demonstrating how comments can be used:
 
# Comments appear after the "#" symbol at the start of a line, or after a directive
User-agent: * # match all bots
Disallow: / # keep them out
 
It is also possible to list multiple robots with their own rules. The actual robot string is defined by the crawler. A few sites, such as Google, support several user-agent strings that allow the operator to deny access to a subset of their services by using specific user-agent strings.
Example demonstrating multiple user-agents:

User-agent: googlebot        # all Google services
Disallow: /private/          # disallow this directory

User-agent: googlebot-news   # only the news service
Disallow: /                  # disallow everything

User-agent: *                # any robot
Disallow: /something/        # disallow this directory