Importing multi-level directories of logs in hadoop/pig

We store our logs in S3, and one of our (Pig) queries would grab three different log types. Each log type is in sets of subdirectories based upon type/date. For instance:

/logs/<type>/<year>/<month>/<day>/<hour>/lots_of_logs_for_this_hour_and_type.log*

My query would want to load all three types of logs for a given time. For instance:

type1 = load 's3:/logs/type1/2011/03/08' as ...
type2 = load 's3:/logs/type2/2011/03/08' as ...
type3 = load 's3:/logs/type3/2011/03/08' as ...
result = join type1 ..., type2, etc...

My queries would then run against all of these logs.

What is the most efficient way to handle this?

  1. Do we need to use bash script expansion? I'm not sure this works with multiple directories, and I doubt it would be efficient (or even possible) if there were 10k logs to load.
  2. Do we create a service to aggregate all of the logs and push them to hdfs directly?
  3. Custom Java/Python importers?
  4. Other thoughts?

If you could leave some example code, if appropriate, as well, that would be helpful.

Thanks

asked Dec 07 '25 by Joshua Ball

1 Answer

Globbing is supported by default with PigStorage, so you could just try:

type1 = load 's3:/logs/type{1,2,3}/2011/03/08' as ..

or even

type1 = load 's3:/logs/*/2011/03/08' as ..
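If each type has a different schema you still need one LOAD per type, but a glob over the hour subdirectories will pick up every file for that day. Here is a minimal sketch of the kind of join you described; the bucket name, the tab delimiter, and the (ts, user_id, msg) schema are made-up placeholders, not your real layout:

-- hypothetical bucket, delimiter, and schema; adjust to your actual layout
type1 = LOAD 's3://your-bucket/logs/type1/2011/03/08/*'
        USING PigStorage('\t')
        AS (ts:chararray, user_id:chararray, msg:chararray);
type2 = LOAD 's3://your-bucket/logs/type2/2011/03/08/*'
        USING PigStorage('\t')
        AS (ts:chararray, user_id:chararray, msg:chararray);
-- the trailing /* matches every hour subdirectory under the day
result = JOIN type1 BY user_id, type2 BY user_id;

If all three types happen to share a schema, the brace glob above lets you pull them into a single relation instead of joining separate loads.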

answered Dec 09 '25 by Romain