
Tuesday, February 5, 2013

MySQL dump

Taking a MySQL dump

# mysqldump -u user -ppassword DB_name --ignore-table=DB_name.table_name1 --ignore-table=DB_name.table_name2 > DB_dump.sql


Restoring from the dump

# mysql -uroot -proot DB_name < DB_dump.sql

Friday, January 25, 2013

Linux: see file sizes in MB with the ls -l command

A plain ls -l listing shows sizes in bytes:

manohar@mani:~/git/prime-services$ ls -l
total 76
drwxrwxr-x 6 manohar manohar  4096 2013-01-25 11:37 authentication
-rw-rw-r-- 1 manohar manohar 10020 2013-01-24 14:10 build.gradle
-rw-rw-r-- 1 manohar manohar 10085 2012-10-31 22:16 build.gradle.cobertura
drwxrwxr-x 4 manohar manohar  4096 2012-07-02 14:50 ci
drwxrwxr-x 6 manohar manohar  4096 2013-01-25 11:37 context
drwxrwxr-x 9 manohar manohar  4096 2013-01-25 11:37 eventrouter
drwxrwxr-x 3 manohar manohar  4096 2012-07-02 14:50 example
-rw-rw-r-- 1 manohar manohar   602 2012-07-02 14:50 README
drwxrwxr-x 6 manohar manohar  4096 2013-01-25 11:37 reporting-dao
drwxrwxr-x 5 manohar manohar  4096 2013-01-25 11:37 routing-service
drwxrwxr-x 2 manohar manohar  4096 2013-01-21 11:30 scripts
-rw-rw-r-- 1 manohar manohar   129 2012-11-24 17:26 settings.gradle
drwxrwxr-x 6 manohar manohar  4096 2013-01-25 11:37 shared
drwxrwxr-x 9 manohar manohar  4096 2013-01-25 11:37 web-service
drwxrwxr-x 5 manohar manohar  4096 2013-01-25 11:37 ws-reporting
manohar@mani:~/git/prime-services$ 
To show the size column in MB, pipe the listing through awk (dividing the size field by 1048576, i.e. 1024 * 1024):

ls -l | awk '{printf "%s %i %s %s %.3fMB %s %i %s %s\n", $1,$2,$3,$4,$5/1048576,$6,$7,$8,$9}'


manohar@mani:~/git/prime-services$ ls -l | awk '{printf "%s %i %s %s %.3fMB %s %i %s %s\n", $1,$2,$3,$4,$5/1048576,$6,$7,$8,$9}'
total 76   0.000MB  0  
drwxrwxr-x 6 manohar manohar 0.004MB 2013-01-25 11 authentication 
-rw-rw-r-- 1 manohar manohar 0.010MB 2013-01-24 14 build.gradle 
-rw-rw-r-- 1 manohar manohar 0.010MB 2012-10-31 22 build.gradle.cobertura 
drwxrwxr-x 4 manohar manohar 0.004MB 2012-07-02 14 ci 
drwxrwxr-x 6 manohar manohar 0.004MB 2013-01-25 11 context 
drwxrwxr-x 9 manohar manohar 0.004MB 2013-01-25 11 eventrouter 
drwxrwxr-x 3 manohar manohar 0.004MB 2012-07-02 14 example 
-rw-rw-r-- 1 manohar manohar 0.001MB 2012-07-02 14 README 
drwxrwxr-x 6 manohar manohar 0.004MB 2013-01-25 11 reporting-dao 
drwxrwxr-x 5 manohar manohar 0.004MB 2013-01-25 11 routing-service 
drwxrwxr-x 2 manohar manohar 0.004MB 2013-01-21 11 scripts 
-rw-rw-r-- 1 manohar manohar 0.000MB 2012-11-24 17 settings.gradle 
drwxrwxr-x 6 manohar manohar 0.004MB 2013-01-25 11 shared 
drwxrwxr-x 9 manohar manohar 0.004MB 2013-01-25 11 web-service 
drwxrwxr-x 5 manohar manohar 0.004MB 2013-01-25 11 ws-reporting 
manohar@mani:~/git/prime-services$ 
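To sanity-check the conversion, the same arithmetic can be run on a single known size (10020 bytes is build.gradle from the listing above). Note that recent GNU coreutils can also do this natively with ls -lh or ls -l --block-size=M.

```shell
# Convert a known byte count to MB exactly as the awk one-liner does
# (1 MB = 1048576 bytes here; use 1000000 for decimal megabytes).
bytes=10020   # size of build.gradle in the listing above
awk -v b="$bytes" 'BEGIN { printf "%.3fMB\n", b / 1048576 }'   # -> 0.010MB
```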

MySQL Insert From Cross Join

Think of a scenario like this: we have a database of doctors and patients. All the doctors have access to all the patients' information, and all the patients can visit any doctor.
-- ----------------------------------------
-- Table `Doctors`
-- ----------------------------------------
CREATE TABLE IF NOT EXISTS `Doctors` (
  `doc_id` int(10) unsigned NOT NULL,
  `doc_first_name` VARCHAR(104) NOT NULL,
  `doc_last_name` VARCHAR(104) NOT NULL,
  `doc_discipline` VARCHAR(104) NOT NULL,
  PRIMARY KEY (`doc_id`)
) 
ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO `Doctors` VALUES(1,'Sintu', 'Kumar', 'otolaryngology');
INSERT INTO `Doctors` VALUES(2,'R', 'Chinch', 'dentistry');
-- ----------------------------------------
-- Table `Patients`
-- ----------------------------------------
CREATE TABLE IF NOT EXISTS `Patients` (
  `p_id` int(10) unsigned NOT NULL,
  `p_first_name` VARCHAR(104) NOT NULL,
  `p_last_name` VARCHAR(104) NOT NULL,
  PRIMARY KEY (`p_id`)
) 
ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO `Patients` VALUES(1,'Jim','Cary');
INSERT INTO `Patients` VALUES(2,'John','Cook');
-- ----------------------------------------
-- Table `Doctors_Patients`
-- ----------------------------------------
CREATE TABLE IF NOT EXISTS `Doctors_Patients` (
  `id` int(10) unsigned NOT NULL auto_increment,
  `doctor_id` int(10) unsigned NOT NULL,
  `patient_id` int(10) unsigned NOT NULL,
  PRIMARY KEY (`id`),
  FOREIGN KEY (`doctor_id`) REFERENCES `Doctors` (`doc_id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
  FOREIGN KEY (`patient_id`) REFERENCES `Patients` (`p_id`) ON DELETE NO ACTION ON UPDATE NO ACTION

) 
ENGINE=InnoDB DEFAULT CHARSET=utf8;
Now we want to populate the table `Doctors_Patients` with the Cartesian product of the other two tables. For this we can simply use a cross join, as listed below.

INSERT INTO `Doctors_Patients`(doctor_id,patient_id) SELECT `doc_id`, `p_id` FROM `Doctors`  CROSS JOIN `Patients`;

And this is the result:
select * from Doctors_Patients;

+----+-----------+------------+
| id | doctor_id | patient_id |
+----+-----------+------------+
|  1 |         1 |          1 |
|  2 |         2 |          1 |
|  3 |         1 |          2 |
|  4 |         2 |          2 |
+----+-----------+------------+
4 rows in set (0.08 sec)

Wednesday, January 2, 2013

Ehcache refresh on DB change from external source

Problem statement:

  1. I have 2 web applications, say WebApp-1 and WebApp-2; both access the same database, say DB-1, using Hibernate 3.6. 
  2. WebApp-2 uses an L2 cache (Ehcache) whereas WebApp-1 doesn't use any L2 caching.
  3. When WebApp-1 modifies the DB state, the L2 cache of WebApp-2 doesn't reflect that; as a result, modifications done by WebApp-1 are not visible to WebApp-2.
Problem solution: 
I found the following three strategies in the Ehcache documentation.

Use one of the following strategies to keep the data in the cache in sync:
  • data expiration: use the eviction algorithms included with Ehcache along with the timeToIdleSeconds and timeToLiveSeconds settings to enforce a maximum time for elements to live in the cache (forcing a re-load from the database or SOR).
  • message bus: use an application to make all updates to the database. When updates are made, post a message onto a message queue with a key to the item that was updated. All application instances can subscribe to the message bus and receive messages about data that is updated, and can synchronize their local copy of the data accordingly (for example by invalidating the cache entry for updated data)
  • triggers: Using a database trigger can accomplish a similar task as the message bus approach. Use the database trigger to execute code that can publish a message to a message bus. The advantage to this approach is that updates to the database do not have to be made only through a special application. The downside is that not all database triggers support full execution environments and it is often unadvisable to execute heavy-weight processing such as publishing messages on a queue during a database trigger.
I thought of using a message bus or a JMS kind of solution, but I didn't use that; what I used instead is a lightweight multicast broadcasting solution.

So, as per my problem, WebApp-1 broadcasts all the change events, whereas WebApp-2 forks a thread that listens for any broadcast messages sent by WebApp-1. Here are the code snippets for the broadcaster and the receiver.

Broadcaster:

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CacheEventBroadcaster extends Thread {

        private String entityName = null;
        private Integer entityId = null;

        private DatagramSocket socket = null;

        private Logger logger = LoggerFactory.getLogger(CacheEventBroadcaster.class);

        
        public CacheEventBroadcaster(String entityName, Integer entityId)
                        throws IOException {                
                this("CacheEventThread", entityName, entityId);
        }

        public CacheEventBroadcaster(String name, String entityName,
                        Integer entityId) throws IOException {
                super(name);
                this.entityId = entityId;
                this.entityName = entityName;
                logger.trace("Starting cache event broadcaster...");
        }

        public void run() {
                try {
                        boolean connected = false;
                        while (!connected) {
                                // Create the datagram socket; retry until the port is free
                                try {
                                        socket = new DatagramSocket(4445);
                                        connected = true;
                                } catch (Exception e) {
                                        // do nothing but retry
                                        logger.trace("Exception: " + e.getMessage());
                                }
                        }

                        // Create the broadcast message
                        String dString = entityName + ":" + entityId;
                        byte[] buf = dString.getBytes();
                        logger.trace("Broadcast message created : [" + dString + "]");

                        // Create the broadcast group
                        InetAddress group = InetAddress.getByName("230.0.0.1");
                        DatagramPacket packet = new DatagramPacket(buf, buf.length, group,
                                        4446);
                        logger.trace("Sending message to : " + group.getHostAddress());

                        // Send the message to broadcast group
                        socket.send(packet);
                        logger.trace("Message sent successfully.");

                        // close the socket
                        socket.close();
                } catch (IOException e) {
                        logger.trace("An error occurred while broadcasting the message. "
                                        + e.getMessage());
                        e.printStackTrace();
                }
        }

        public static void broadcastCacheEvent(String entityName, Integer entityId) {
                try {
                        new CacheEventBroadcaster(entityName, entityId).start();
                } catch (IOException e) {
                        e.printStackTrace();
                }
        }
}


Use this piece of code to send the broadcast message.

CacheEventBroadcaster.broadcastCacheEvent("EntityName", id);


Receiver: 

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.util.List;

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;

import org.hibernate.Criteria;
import org.hibernate.criterion.Restrictions;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.komli.prime.service.shared.cache.HibernateCacheProvider;
import com.komli.prime.service.shared.util.db.DatabaseFactory;

public class CacheEventsReceiver {
        static final Logger logger = LoggerFactory.getLogger(CacheEventsReceiver.class);
        public static void receiveCacheEvents() throws IOException {
                logger.info("Cache event listener daemon thread started...");
                new ReceiverThread().start();
        }
}

class ReceiverThread extends Thread {

        static final String QUERY_CACHE = "org.hibernate.cache.StandardQueryCache";
        static final Logger logger = LoggerFactory.getLogger(ReceiverThread.class);
        static CacheManager cacheManager = HibernateCacheProvider.getCacheManager();

        public void run() {

                MulticastSocket socket;
                try {
                        socket = new MulticastSocket(4446);
                        InetAddress address = InetAddress.getByName("230.0.0.1");
                        socket.joinGroup(address);
                        DatagramPacket packet;
                        String entityName;
                        Integer entityId;

                        while (true) {

                                byte[] buf = new byte[1024];
                                packet = new DatagramPacket(buf, buf.length);
                                socket.receive(packet);

                                String message = new String(packet.getData(), 0,
                                                packet.getLength());

                                if (message != null && !message.equals("")) {

                                        String inputs[] = message.split(":");
                                        entityName = inputs[0];
                                        entityId = new Integer(inputs[1]);

                                        logger.info("Received new cache event, message is "
                                                        + message);

                                        @SuppressWarnings("unchecked")
                                        Object entity = getById(
                                                        CacheObjectMap.getMappedClass(entityName), entityId);
                                        
                                        //Remove the query cache to get the latest changes,                                        
                                        Cache queryCache = cacheManager.getCache(QUERY_CACHE);
                                        queryCache.removeAll();
                                        logger.info("Query Cache Cleared...");                                        

                                        // Evict the entity from session
                                        DatabaseFactory
                                                        .createDatabase()
                                                        .getSessionFactory()
                                                        .evict(CacheObjectMap.getMappedClass(entityName),
                                                                        entityId);
                                }
                        }
                } catch (IOException e) {
                        logger.info("An error occurred while receiving the message. "
                                                        + e.getMessage());
                        e.printStackTrace();
                }

        }

        public <T> T getById(Class<T> c, long id) {
                DatabaseFactory.createDatabase().getSessionFactory()
                                .getCurrentSession().beginTransaction();
                Criteria criteria = DatabaseFactory.createDatabase()
                                .getSessionFactory().getCurrentSession().createCriteria(c);
                criteria.add(Restrictions.eq("id", (int) id));

                @SuppressWarnings("unchecked")
                List<T> result = (List<T>) criteria.list();
                if (result == null || result.isEmpty())
                        return null;
                return result.get(0);
        }
}


Put this snippet somewhere in application startup to register/start the receiver above.

try {
        CacheEventsReceiver.receiveCacheEvents();
} catch (IOException e) {
        e.printStackTrace();
}


CacheObjectMap is a simple string-to-class mapper. A sample snippet is below.

import java.util.HashMap;

public class CacheObjectMap {

        private static HashMap<String, Class<?>> classMap;

        static {
                classMap = new HashMap<String, Class<?>>();

                classMap.put("EntityName", com.komli.prime.service.shared.db.model.Entity.class);

        }

        public static Class<?> getMappedClass(String className){
                return classMap.get(className);
        }
}

Thursday, December 27, 2012

mvn jetty:run port

To start the Jetty server on a port other than the default 8080, use this command.

mvn -Djetty.port=9999 jetty:run

Thursday, December 20, 2012

MySQL and Epoch Time

While working on MySQL, I found some useful functions for working with epoch time. They are well documented in the MySQL reference manual. Here are a few that I have used.
  • UNIX_TIMESTAMP() : Return a UNIX timestamp, i.e. the current time in epoch seconds. See the example below.
mysql> SELECT UNIX_TIMESTAMP() as current_epoch_time;
       ->1355992290
  • UNIX_TIMESTAMP(date) : Return the UNIX timestamp for the given date. See some examples below.
mysql> SELECT UNIX_TIMESTAMP(NOW()) as current_epoch_time;
       ->1355993021
mysql> SELECT UNIX_TIMESTAMP('2012-12-20 10:30:19');
       ->1355979619
  • FROM_UNIXTIME(unix_timestamp) : Format a UNIX timestamp as a date. See some examples below.
mysql> SELECT FROM_UNIXTIME(1355993021);
       ->2012-12-20 14:13:41
mysql> SELECT FROM_UNIXTIME(UNIX_TIMESTAMP());
       ->2012-12-20 14:22:19
  • FROM_UNIXTIME(unix_timestamp, format) : Format a UNIX timestamp as per the format specified. See some examples below.
mysql> SELECT FROM_UNIXTIME(1355993021, "%Y");
       ->2012
mysql> SELECT FROM_UNIXTIME(UNIX_TIMESTAMP(), "%M");
       ->December
mysql> SELECT FROM_UNIXTIME(UNIX_TIMESTAMP(),'%Y %D %M %h:%i:%s %x');
       ->2012 20th December 02:35:53 2012

Wednesday, December 19, 2012

Current epoch time

Perl                 time
PHP                  time()
Java                 long epoch = System.currentTimeMillis()/1000;
MySQL                SELECT UNIX_TIMESTAMP(NOW())
Oracle PL/SQL        SELECT (SYSDATE - TO_DATE('01-01-1970 00:00:00','DD-MM-YYYY HH24:MI:SS')) * 24 * 60 * 60 FROM DUAL
JavaScript           Math.round(new Date().getTime()/1000.0)   (getTime() returns time in milliseconds)
Unix/Linux Shell     date +%s
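The shell entry works in both directions with GNU date: +%s prints the current epoch, and -d @SECONDS formats a given epoch (this flag is GNU-specific; BSD/macOS date uses -r SECONDS instead).

```shell
# Current epoch time, then format a fixed epoch value back into a date.
# -d @SECONDS is GNU date syntax; on BSD/macOS use: date -r 1355993021
date +%s
date -u -d @1355993021 "+%Y-%m-%d %H:%M:%S"   # -> 2012-12-20 08:43:41 (UTC)
```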

Tuesday, December 18, 2012

Character count problem

You have a string of lowercase letters a-f, which can be represented as the regular expression [a-f]*

Examples: abcdef, eafdceeafe, etc.

Problem: Write a program that finds the longest run of a single character, along with its start and end indexes. For example, if the string is "abbbbdcdef", the answer is "b" : 4, starting at index 1 and ending at index 4.

public class TestStringCharCount {

   public static void main(String[] args) {

        int start = 0;
        int count = 0;
        int maxCount = 0;
        char maxChar = 'a';
        String str = "abcdefabbbbbbbbcdaeff";
        char[] chars = str.toCharArray();

        // The inner loop counts the run of identical characters starting at i
        // and leaves i at the last index of that run.
        for (int i = 0; i < chars.length; i++) {
            count = 0;
            for (int j = i; j < chars.length; j++) {

                if (chars[i] == chars[j]) {
                    count++;
                } else {
                    break;
                }
                i = j;
            }

            if (count > maxCount) {
                maxCount = count;
                maxChar = chars[i];
                start = i;
            }

        }
        System.out.println("string = "+ str + ", length = "+ str.length());
        
        System.out.println(maxChar + ":" + maxCount + " starting at "
                + (start - maxCount + 1) + " to " + start);
    }
}

Output: 
string = abcdefabbbbbbbbcdaeff, length = 21
b:8 starting at 7 to 14
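The same scan fits in a shell one-liner. This awk sketch tracks the current run and the best run in a single pass and prints the character, the run length, and the 0-based start and end indexes; for "abcdefabbbbbbbbcdaeff" the longest run is the eight b's at indexes 7 through 14.

```shell
# Longest run of a repeated character, in one pass over the string.
echo "abcdefabbbbbbbbcdaeff" | awk '{
  best = 0
  for (i = 1; i <= length($0); i++) {
    c = substr($0, i, 1)
    if (c == prev) run++; else { run = 1; start = i }
    if (run > best) { best = run; bc = c; bs = start }
    prev = c
  }
  # awk string indexes are 1-based; subtract 1 to report 0-based indexes
  printf "%s:%d starting at %d to %d\n", bc, best, bs - 1, bs + best - 2
}'
```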

Friday, December 14, 2012

Get current time in perl

#!/usr/bin/perl -w

print "current time in epoch is : ". time(). "\n";
print "current time is : ". localtime(time()). "\n";

my ($sec,$min,$hour,$day,$month,$yr19,@rest) =   localtime(time);

print "----------------------------------------\n";
print "Sec : ". $sec . "\n";
print "Min : ". $min . "\n";
print "Hour : ". $hour . "\n";
print "Day : ". $day . "\n";
print "Month : ". $month . "\n";
print "Year : ". $yr19 . "\n";
Output
current time in epoch is : 1355483046
current time is : Fri Dec 14 16:34:06 2012
-------------------------------------------
Sec : 6
Min : 34
Hour : 16
Day : 14
Month : 11
Year : 112
(Note: localtime returns the month as 0-11 and the year as years since 1900, so Month 11 means December and Year 112 means 2012.)

Monday, November 19, 2012

Hello World! Perl Example

My first Perl program. Create a new file:

vi hello_world.pl

#!/usr/bin/perl -w
#
#
print "Hello World!\n";

exit;


Execute the above script with the command below.

perl hello_world.pl

Saturday, November 17, 2012

MySQL Performance Tuning

Check whether the query cache is enabled and how big it is, then raise the size (here to roughly 256 MB):

SHOW VARIABLES LIKE 'have_query_cache';
SHOW VARIABLES LIKE 'query_cache_size';
SET GLOBAL query_cache_size = 256000000;

Gradle jettyRun debug

To start gradle jettyRun in debug mode, follow the steps below.

Set GRADLE_OPTS so the JVM listens for a debugger on port 9999:

$ export GRADLE_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,address=9999,server=y,suspend=n"
$ gradle jettyRun

Once the Jetty server has started:

  1. Open the debug configuration settings in Eclipse.
  2. Click on the debug button to start the debugger.
  3. Set the breakpoints.
  4. Call the methods and debug.

For example, set the debug points in your code and open the URL

http://localhost:9999/komliprime-trafficking-service/api/strategi?authtoken=6991a70eb121f3f623ebc2567927fef5


Truncating log files in UNIX/Linux

We often need log files to be truncated, for many reasons. The command below empties the file in place without deleting it.


sudo cp /dev/null trafficking.log
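A couple of equivalent alternatives, sketched here on a freshly created stand-in file: plain shell redirection, or truncate from GNU coreutils. Like cp /dev/null, both empty the file in place rather than deleting and recreating it.

```shell
# Stand-in log file for the demo.
echo "old log lines" > trafficking.log
: > trafficking.log             # plain redirection empties the file in place
truncate -s 0 trafficking.log   # coreutils equivalent
wc -c < trafficking.log         # -> 0
```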