We have a large pile of web log data that we need to sessionize, also generating the previous and next domain for each session. I'm testing via an interactive job flow on AWS EMR.
Right now I'm able to get the data sessionized using the code here: http://goo.gl/L52Wf. It took a little work to get familiar with compiling and using a UDF, but I've made it that far.
Here is the header row and first line from the input file (tab delimited):
ID Date Rule code Project UID respondent_uid Type Tab ID URL domain URL path Duration Exit cause Details
11111111 2012-09-25T11:21:20.000Z 20120914_START_USTEST 20120914_TESTSITE_US_TR test6_EN_9 PAGE_VIEWED FF1348568479042 http://www.google.fr 11 OTHER
This is a tuple from the SESSIONS relation (the steps to produce that relation are shown below):
(2012-09-27 04:42:20.000,11999603,20120914_URL_ALL,20120914_TESTSITE_US_TR,2082810875_US_9,PAGE_VIEWED,CH17,http://hotmail.com,_news/2012/09/26/14113684,28,WINDOW_DEACTIVATED,,3019222a-5c4d-4767-a82e-2b4df5d9db6d)
This is roughly what I'm running right now to sessionize the test data:
register s3://TestBucket/Sessionize.jar
define Sessionize datafu.pig.sessions.Sessionize('30m'); -- 30 minutes of inactivity ends a session
A = load 's3://TestBucket/party2.gz' USING PigStorage() as (id: chararray, data_date: chararray, rule_code: chararray, project_uid: chararray, respondent_uid: chararray, type: chararray, tab_id: chararray, url_domain: chararray, url_path: chararray, duration: chararray, exit_cause: chararray, details: chararray);
B = foreach A generate $1, $0, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11;
C = filter B by id != 'ID';
VIEWS = group C by (respondent_uid, url_domain);
SESSIONS = foreach VIEWS {
    VISITS = order C by data_date;
    generate FLATTEN(Sessionize(VISITS)) as (
        data_date: chararray, id: chararray, rule_code: chararray,
        project_uid: chararray, respondent_uid: chararray, type: chararray,
        tab_id: chararray, url_domain: chararray, url_path: chararray,
        duration: chararray, exit_cause: chararray, details: chararray,
        session_id);
};
(The step at B moves the date to the first position; the step at C filters out the file header row.)
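As an aside, I believe the reorder and header filter can also be written with the field aliases from the load schema instead of positional references; this is purely a readability tweak, and untested beyond my setup:

B = foreach A generate data_date, id, rule_code, project_uid, respondent_uid,
                       type, tab_id, url_domain, url_path, duration,
                       exit_cause, details;
C = filter B by id != 'ID';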
I'm not sure of the right direction to go from here. Can I iterate over my SESSIONS relation with foreach and get the next and previous domain from a Pig script, roughly along the lines of the sketch below? Or would it be better to write a custom UDF and pass the SESSIONS relation to that? (Writing my own UDF would be an adventure!)
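To make the foreach idea concrete, here is a rough, untested sketch of what I had in mind. It assumes Pig 0.11+, where (as I understand it) piggybank ships Over and Stitch UDFs with 'lead'/'lag' support; the piggybank.jar path is a placeholder, and the script collapses each session to one row keyed by its earliest timestamp:

register /path/to/piggybank.jar; -- placeholder path; piggybank ships with Pig 0.11+
define Over org.apache.pig.piggybank.evaluation.Over('chararray');
define Stitch org.apache.pig.piggybank.evaluation.Stitch;

-- Collapse each session to one row: (respondent, session, domain, start time).
-- MIN works on chararrays, and these timestamps sort lexicographically.
SESSION_GRP = group SESSIONS by (respondent_uid, session_id, url_domain);
PER_SESSION = foreach SESSION_GRP generate
    flatten(group) as (respondent_uid, session_id, url_domain),
    MIN(SESSIONS.data_date) as session_start;

-- For each respondent, order the sessions by start time and stitch the
-- next and previous domain onto every session row.
BY_USER = group PER_SESSION by respondent_uid;
WITH_NEIGHBORS = foreach BY_USER {
    ordered = order PER_SESSION by session_start;
    generate flatten(Stitch(ordered, Over(ordered.url_domain, 'lead'),
                                     Over(ordered.url_domain, 'lag')));
};

If the Pig version on EMR is older than that, I could imagine faking lead/lag with DataFu's Enumerate (numbering each respondent's sessions, then self-joining on index +/- 1), but that seems like a lot of extra data movement, which is part of why I'm asking.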
Any advice would be greatly appreciated. Even a recommendation of what NOT to do might be just as helpful, so I don't waste time researching a dead-end approach. I'm quite new to Hadoop and Pig, so this is definitely not one of my strong areas (yet).