Copy the register statements above and paste them into the pig terminal. Then you can LOAD from and STORE into accumulo.
$ pig
2012-03-02 08:15:25,808 [main] INFO org.apache.pig.Main - Logging error messages to: /home/developer/workspace/accumulo-pig/pig_1330694125807.log
2012-03-02 08:15:25,937 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://127.0.0.1/
2012-03-02 08:15:26,032 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: 127.0.0.1:9001
grunt> register /home/developer/workspace/accumulo-pig/lib/accumulo-core-1.5.0-incubating-SNAPSHOT.jar
grunt> register /home/developer/workspace/accumulo-pig/lib/cloudtrace-1.5.0-incubating-SNAPSHOT.jar
grunt> register /home/developer/workspace/accumulo-pig/lib/libthrift-0.6.1.jar
grunt> register /home/developer/workspace/accumulo-pig/lib/zookeeper-3.3.1.jar
grunt> register /home/developer/workspace/accumulo-pig/target/accumulo-pig-1.5.0-incubating-SNAPSHOT.jar
grunt>
grunt> DATA = LOAD 'accumulo://webpage?instance=inst&user=root&password=secret&zookeepers=127.0.0.1:2181&columns=f:cnt' using org.apache.accumulo.pig.AccumuloStorage() AS (row, cf, cq, cv, ts, val);
grunt>
grunt> DATA2 = FOREACH DATA GENERATE row, cf, cq, cv, val;
grunt>
grunt> STORE DATA2 into 'accumulo://webpage_content?instance=inst&user=root&password=secret&zookeepers=127.0.0.1:2181' using org.apache.accumulo.pig.AccumuloStorage();
2012-03-02 08:18:44,090 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
2012-03-02 08:18:44,093 [main] INFO org.apache.pig.newplan.logical.rules.ColumnPruneVisitor - Columns pruned for DATA: $4
2012-03-02 08:18:44,108 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2012-03-02 08:18:44,110 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2012-03-02 08:18:44,110 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2012-03-02 08:18:44,117 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
2012-03-02 08:18:44,118 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2012-03-02 08:18:44,120 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - creating jar file Job7611629033341757288.jar
2012-03-02 08:18:46,282 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file Job7611629033341757288.jar created
2012-03-02 08:18:46,286 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
2012-03-02 08:18:46,375 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2012-03-02 08:18:46,876 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2012-03-02 08:18:46,878 [Thread-17] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
2012-03-02 08:18:47,887 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_201203020643_0001
2012-03-02 08:18:47,887 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - More information at: http://127.0.0.1:50030/jobdetails.jsp?jobid=job_201203020643_0001
2012-03-02 08:18:54,434 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 50% complete
2012-03-02 08:18:57,484 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2012-03-02 08:18:57,485 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:
HadoopVersion    PigVersion    UserId       StartedAt              FinishedAt             Features
0.20.2           0.9.2         developer    2012-03-02 08:18:44    2012-03-02 08:18:57    UNKNOWN
Success!
Job Stats (time in seconds):
JobId                    Maps    Reduces    MaxMapTime    MinMapTime    AvgMapTime    MaxReduceTime    MinReduceTime    AvgReduceTime    Alias         Feature     Outputs
job_201203020643_0001    1       0          3             3             3             0                0                0                DATA,DATA2    MAP_ONLY    accumulo://webpage_content?instance=inst&user=root&password=secret&zookeepers=127.0.0.1:2181,
Input(s):
Successfully read 288 records from: "accumulo://webpage?instance=inst&user=root&password=secret&zookeepers=127.0.0.1:2181&columns=f:cnt"
Output(s):
Successfully stored 288 records in: "accumulo://webpage_content?instance=inst&user=root&password=secret&zookeepers=127.0.0.1:2181"
Counters:
Total records written : 288
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_201203020643_0001
2012-03-02 08:18:57,492 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!
grunt>
Here are the Pig commands that were run, if you don't want to pick them out of the output above:
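(Collected verbatim from the grunt session above.)

register /home/developer/workspace/accumulo-pig/lib/accumulo-core-1.5.0-incubating-SNAPSHOT.jar
register /home/developer/workspace/accumulo-pig/lib/cloudtrace-1.5.0-incubating-SNAPSHOT.jar
register /home/developer/workspace/accumulo-pig/lib/libthrift-0.6.1.jar
register /home/developer/workspace/accumulo-pig/lib/zookeeper-3.3.1.jar
register /home/developer/workspace/accumulo-pig/target/accumulo-pig-1.5.0-incubating-SNAPSHOT.jar

DATA = LOAD 'accumulo://webpage?instance=inst&user=root&password=secret&zookeepers=127.0.0.1:2181&columns=f:cnt' using org.apache.accumulo.pig.AccumuloStorage() AS (row, cf, cq, cv, ts, val);

DATA2 = FOREACH DATA GENERATE row, cf, cq, cv, val;

STORE DATA2 into 'accumulo://webpage_content?instance=inst&user=root&password=secret&zookeepers=127.0.0.1:2181' using org.apache.accumulo.pig.AccumuloStorage();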
A more detailed blog post on how and why this is useful will follow.
–Jason
Update (2012/03/04): you may want to run this as the first line of the pig script:
SET mapred.map.tasks.speculative.execution false
This will avoid ingesting duplicate entries into Accumulo. For the data from this post, ingesting duplicate entries wouldn't cause any real issues because Accumulo's VersioningIterator would only keep the newest copy, but for columns/tables with aggregation configured (e.g. using aggregating iterators), duplicate entries would be counted more than once.
In this post I will outline the steps necessary to use Accumulo and Gora to store content retrieved by Nutch.
###Apache Accumulo
For those of you unfamiliar with Accumulo, it is an incubating Apache project, and in the project's own words:
“Accumulo is a sorted, distributed key/value store based on Google’s BigTable design. It is built on top of Apache Hadoop, Zookeeper, and Thrift. It features a few novel improvements on the BigTable design in the form of cell-level access labels and a server-side programming mechanism that can modify key/value pairs at various points in the data management process.”
Accumulo is conceptually very similar to HBase, but it has some nice features that HBase is currently lacking. Some of these features are:
Cell level security
No fat row problem - i.e. entire rows don’t need to fit in RAM
No limitation on Column Families or when Column Families can be created
Server side, data local, programming abstraction called Iterators. Iterators are incredibly useful for adding functionality to Tablet Servers such as data local filtering, aggregation, and search.
###Apache Gora
Gora is an object mapping framework for arbitrary data stores, covering both relational stores (MySQL) and non-relational stores (HBase, Cassandra, Accumulo, etc.). It was designed for Big Data applications and has support (interfaces) for Apache Pig, Apache Hive, Cascading, and generic Map/Reduce.
###Apache Nutch
Nutch is a highly scalable web crawler built over Hadoop Map/Reduce. It was designed from the ground up to be an Internet scale web crawler. This is a great overview of Nutch’s architecture: Nutch as a Web Data Mining Platform
###Accumulo + Nutch + Gora
I generally prefer git over svn, so in this post I use the source code hosted on GitHub.
####1. Obtain all sources (and Accumulo patch for GORA)
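The checkouts themselves are straightforward. As a sketch (the repository URLs are my assumption, using the Apache mirrors on GitHub, and GORA-65-1.patch is presumably the patch attached to the GORA-65 JIRA issue, saved alongside the checkouts):

git clone https://github.com/apache/accumulo.git
git clone https://github.com/apache/gora.git
git clone https://github.com/apache/nutch.git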
#####Standard build and maven install for accumulo
cd accumulo
mvn package install
#####Patching GORA for Accumulo Support
Gora needs to be patched for Accumulo support. This patch should be considered beta, but I found it works well enough for experimenting with Nutch/GORA. Note: -DskipTests is used because some of the tests seemed to hang indefinitely, so I skipped them for now.
cd ../gora
patch -p0 < ../GORA-65-1.patch
mvn package install -DskipTests
#####Building Nutch/GORA
So, getting Nutch/GORA to build was a bit of a challenge. I will outline some of the hoops I had to jump through. File paths mentioned below assume you are in the nutch project directory.
Run the following commands to checkout the nutchgora branch of Nutch.
cd ../nutch
git checkout origin/nutchgora
Modify the ivy/ivy.xml file. Change the gora-core and gora-sql dependency revs from "0.1.1-incubating" to "0.2-SNAPSHOT". This is to match the patched version we just installed. Also, add lines that configure ant/ivy to use your local maven repository when resolving dependencies (a sketch follows below). This is necessary because the patched version of GORA and the latest Accumulo version are not in any public maven repos.
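One common recipe is to add a resolver for the local ~/.m2 repository to the resolvers section of the ivy settings file (ivy/ivysettings.xml in the nutch tree) and reference it from the default chain. The snippet below is a sketch of that recipe, not the lines from the original post; the resolver name is arbitrary:

<!-- sketch: resolve artifacts from the local maven repository (~/.m2) -->
<ibiblio name="local-maven2" m2compatible="true"
         root="file:${user.home}/.m2/repository/"
         checkmodified="true" changingPattern=".*SNAPSHOT"/>
<!-- then add <resolver ref="local-maven2"/> inside the chain ivy already uses -->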
Edit $HOME/.ivy2/cache/jaxen/jaxen/ivy-1.1.3.xml and find and comment out the following lines. If anyone knows a more elegant way to accomplish this, please let me know.
For this post I am only going to cover the basics for getting these systems to run on a single machine. Deploying and running over a cluster may be covered in another post.
######Configure and Start Hadoop
cd ..
wget ftp://apache.cs.utah.edu/apache.org//hadoop/common/hadoop-0.20.2/hadoop-0.20.2.tar.gz
tar zxvf hadoop-0.20.2.tar.gz
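Next come the config edits under hadoop-0.20.2/conf/. A minimal pseudo-distributed sketch, consistent with the endpoints used elsewhere in this post (hdfs://127.0.0.1/ for the NameNode, 127.0.0.1:9001 for the JobTracker, ~/tmp/hadoop for temporary storage); treat the exact values as assumptions rather than the post's original settings:

core-site.xml (replace _USERNAME_ with your username):
<configuration>
  <property><name>fs.default.name</name><value>hdfs://127.0.0.1/</value></property>
  <property><name>hadoop.tmp.dir</name><value>/home/_USERNAME_/tmp/hadoop</value></property>
</configuration>

hdfs-site.xml (single node, so one replica):
<configuration>
  <property><name>dfs.replication</name><value>1</value></property>
</configuration>

mapred-site.xml:
<configuration>
  <property><name>mapred.job.tracker</name><value>127.0.0.1:9001</value></property>
</configuration>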
At this point, you should have a bare-bones configured Hadoop installation. It is time to start it…
Run the following commands:
mkdir -p ~/tmp/hadoop
cd hadoop-0.20.2
./bin/hadoop namenode -format
./bin/start-all.sh
If you configured everything properly, you should be able to open http://127.0.0.1:50070/dfshealth.jsp in a web browser and see a page that looks like this. If there is a message saying that the Namenode is in safe mode, wait a minute or two and refresh the page. It should go away.
You should also be able to open http://127.0.0.1:50030/jobtracker.jsp in a web browser and see a page that looks like this:
In both of these status webpages you should be able to see a 1 listed after "Live Nodes" and "Nodes", respectively.
######Configure and Start Zookeeper
cd ..
wget ftp://apache.cs.utah.edu/apache.org/zookeeper/zookeeper-3.4.3/zookeeper-3.4.3.tar.gz
tar zxvf zookeeper-3.4.3.tar.gz
Add the following to zookeeper-3.4.3/conf/zoo.cfg (create this file if it does not exist). NOTE: replace _USERNAME_ with your username.
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/home/_USERNAME_/tmp/zookeeper-data
# the port at which the clients will connect
clientPort=2181
maxClientCnxns=100
Edit bin/zkEnv.sh. Right after the following lines.
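However you tweak zkEnv.sh, the next step is to start ZooKeeper and confirm it is listening on the client port. A quick check, assuming the stock zkServer.sh script and ZooKeeper's built-in ruok probe (it should answer imok):

cd zookeeper-3.4.3
./bin/zkServer.sh start
echo ruok | nc 127.0.0.1 2181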
If nothing comes back, then zookeeper failed to start for some reason, isn't listening on 127.0.0.1:2181, or you may have a local firewall blocking access to that port.
######Configure and Start Accumulo
cd ../accumulo
mvn package -P assemble
cp src/assemble/target/accumulo-1.5.0-incubating-SNAPSHOT-dist.tar.gz ../
cd ../
tar zxf accumulo-1.5.0-incubating-SNAPSHOT-dist.tar.gz
cd accumulo-1.5.0-incubating-SNAPSHOT/conf
rename s/.example// *.example
# basically disabling the custom security policy file for now
# since I had issues getting accumulo to work with it enabled
mv accumulo.policy accumulo.policy.example
Edit accumulo-env.sh. At the top of the file, define HADOOP_HOME, ZOOKEEPER_HOME, and JAVA_HOME. Here is an example:
At this point you should have a fully configured Accumulo installation. It is time to initialize it and start it…
./bin/accumulo init
You should see similar output to this. I set my instance name to "inst" and my password to "secret". You may want to do the same for the sake of this tutorial or make sure to set the correct config parameters later.
23 08:15:26,635 [util.Initialize] INFO : Hadoop Filesystem is hdfs://127.0.0.1/
23 08:15:26,637 [util.Initialize] INFO : Accumulo data dir is /accumulo
23 08:15:26,637 [util.Initialize] INFO : Zookeeper server is localhost:2181
Warning!!! Your instance secret is still set to the default, this is not secure.
We highly recommend you change it.
You can change the instance secret in accumulo by using: bin/accumulo
org.apache.accumulo.server.util.ChangeSecret oldPassword newPassword.
You will also need to edit your secret in your configuration file by adding the property
instance.secret to your conf/accumulo-site.xml. Without this accumulo will not operate correctly
Instance name : inst
Enter initial password for root: ******
Confirm initial password for root: *****
23 08:15:34,100 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
23 08:15:34,337 [security.ZKAuthenticator] INFO : Initialized root user with
username: root at the request of user !SYSTEM
Note: If it appears to hang after you entered the instance name, zookeeper may not be running. <CTRL>-C the accumulo init and make sure zookeeper is running.
Now run:
./bin/start-all.sh
After this finishes, you should be able to open http://127.0.0.1:50095/ in a web browser and see a page very similar to this:
The important items to note on this page are that there is a 1 after "Tablet Servers", "Live Data Nodes", and "Trackers" in the "Accumulo Master", "NameNode", and "JobTracker" tables, respectively. There should also be an entry in the "Zookeeper" table.
###4. Crawl
At this point, you should have a fully functional Hadoop, Zookeeper, and Accumulo install, so we are ready to run a Nutch web crawl. Create a file with URLs, one per line; call it seeds.txt and place it in your home directory. I added a handful of URLs to my seeds file, for example:
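(Placeholder URLs; substitute whatever sites you want to crawl.)

cat > ~/seeds.txt <<EOF
http://accumulo.apache.org/
http://nutch.apache.org/
EOF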
cd ../nutch/
./runtime/local/bin/nutch crawl file://$HOME/seeds.txt -depth 1
You should see some log messages printed to the console, but hopefully no stack traces. If you see a stack trace, you may need to go back and check your configs to make sure they match the ones we created earlier.
After the crawler finishes, you should be able to explore the crawled data using the accumulo shell.
cd ../accumulo-1.5.0-incubating-SNAPSHOT/bin/
./accumulo shell -u root -p secret
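A few commands to look around once the shell is up (the webpage table name matches the table used in the Pig walkthrough earlier in this document; your scan output will differ):

root@inst> tables
root@inst> table webpage
root@inst webpage> scan -c f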
Further details and exploration of this data in Accumulo will have to wait for another blog post.
I ended up posting all the modified code from Gora (accumulo patch) and Nutchgora (patches for getting gora-accumulo working) to my github. Check it out.
Update (3/2): someone told me that they had problems getting nutch to build and that this patch worked for them (even though the patch is for GORA). I would be curious if anyone else has this same issue. Here is the error they encountered when building with ant:
[ivy:resolve] :: problems summary ::
[ivy:resolve] :::: WARNINGS
[ivy:resolve] ::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve] :: UNRESOLVED DEPENDENCIES ::
[ivy:resolve] ::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve] :: log4j#log4j;1.2.15: configuration not found in log4j#log4j;1.2.15: 'master'.
My name is Jason Trost. I am a developer and security researcher deeply interested in Big Data, cloud computing, and machine learning. I have a few years of experience using Hadoop and MapReduce to process and analyze computer network and security data. I also have experience developing applications with Apache Accumulo and BackType's Storm. I like Java and Python, as well as UNIX shell scripting. This blog will cover topics ranging from processing data with Hadoop and MapReduce, to applied machine learning techniques, to interesting hacking and computer network defense tools I encounter. I am always open to requests for blog posts…