Installing Snappy for Hadoop 2.2.0 and HBase 0.98
1. Install the required dependencies and supporting software
The dependency packages that must be installed are:
gcc, g++ (the C++ compiler), autoconf, automake, libtool
The supporting software that must be installed is:
Java 6 and Maven
The dependency packages can be installed with sudo apt-get install <package> on Ubuntu, or with sudo yum install <package> on CentOS.
For installing Java and Maven, see the post "Installing Java, Maven, and Tomcat on Linux".
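For example, the dependencies can be installed and the Java/Maven prerequisites checked with commands along the following lines; the package names shown are the usual defaults and may differ on your distribution:
$ sudo apt-get install gcc g++ autoconf automake libtool    # Ubuntu
$ sudo yum install gcc gcc-c++ autoconf automake libtool    # CentOS
$ java -version    # should report a Java 6 JDK
$ mvn -version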
2. Download snappy-1.1.2
Download locations:
------------------------------------------ divider ------------------------------------------
Free download address: http://linux.linuxidc.com/
The username and password are both www.linuxidc.com
The file is in the directory /2014年资料/12月/25日/Hadoop 2.2.0和HBase-0.98 安装snappy
For download instructions, see http://www.linuxidc.com/Linux/2013-07/87684.htm
------------------------------------------ divider ------------------------------------------
3. Build and install
After downloading, extract the archive somewhere; here we assume it is extracted into the home directory. Then run the following commands:
$ cd ~/snappy-1.1.2
$ sudo ./configure
$ sudo make
$ sudo make install
Then run the following commands to check whether the installation succeeded.
$ cd /usr/local/lib
$ ll libsnappy.*
-rw-r--r-- 1 root root 233506 Aug  7 11:56 libsnappy.a
-rwxr-xr-x 1 root root    953 Aug  7 11:56 libsnappy.la
lrwxrwxrwx 1 root root     18 Aug  7 11:56 libsnappy.so -> libsnappy.so.1.2.1
lrwxrwxrwx 1 root root     18 Aug  7 11:56 libsnappy.so.1 -> libsnappy.so.1.2.1
-rwxr-xr-x 1 root root 147758 Aug  7 11:56 libsnappy.so.1.2.1
If no errors occurred during installation and the files above are present in /usr/local/lib, snappy is installed successfully.
4. Build hadoop-snappy from source
1) Download the source code. There are two ways to get it.
The first is via svn:
a. Install svn: on Ubuntu, use sudo apt-get install subversion; on CentOS, use sudo yum install subversion.
b. Check the source out of the Google Code svn repository:
$ svn checkout http://hadoop-snappy.googlecode.com/svn/trunk/ hadoop-snappy
This places a copy of the hadoop-snappy source in a hadoop-snappy directory under the directory where the command was run.
The second: because Google's services are frequently unreachable from mainland China, you can instead download the source directly; see the Linuxidc download link near the top of this article.
2) Build the hadoop-snappy source
Change into the hadoop-snappy source directory and run one of the following commands:
a. If snappy was installed to the default path above, run:
mvn package
b. If snappy was installed to a custom path, run:
mvn package [-Dsnappy.prefix=SNAPPY_INSTALLATION_DIR]
where SNAPPY_INSTALLATION_DIR is the snappy installation path.
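For example, if snappy had been installed under a custom prefix such as /opt/snappy (a made-up path, purely for illustration), the build command would be:
mvn package -Dsnappy.prefix=/opt/snappy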
Problems that may come up during the build:
a) /root/modules/hadoop-snappy/maven/build-compilenative.xml:62: Execute failed: java.io.IOException: Cannot run program "autoreconf" (in directory "/root/modules/hadoop-snappy/target/native-src"): java.io.IOException: error=2, No such file or directory
Solution: the message suggests a file is missing, but that file lives under target and is generated automatically during the build, so it is expected not to exist beforehand. So what is really wrong? The root cause is not a missing file; Hadoop Snappy has prerequisites that must be installed first. Install the dependency packages described at the top of this article.
b) The following error appears:
[exec] make: *** [src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.lo] Error 1
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (compile) on project hadoop-snappy: An Ant BuildException has occured: The following error occurred while executing this line:
[ERROR] /home/ngc/Char/snap/hadoop-snappy/hadoop-snappy-read-only/maven/build-compilenative.xml:75: exec returned:
Solution: the official Hadoop Snappy documentation only lists gcc as a requirement, without saying which version. In practice, Hadoop Snappy needs gcc 4.4; if your gcc is newer than 4.4, the build fails with this error.
Assuming the system is CentOS, run the following commands (note: on Ubuntu, replace sudo yum install with sudo apt-get install), then link the 4.4 binary back to /usr/bin/gcc; the exact package and binary names for gcc 4.4 vary by distribution:
$ sudo yum install gcc-4.4
$ sudo rm /usr/bin/gcc
$ sudo ln -s /usr/bin/gcc-4.4 /usr/bin/gcc
Check whether the replacement succeeded:
$ gcc --version
gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3)
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
c) The following error appears:
[exec] /bin/bash ./libtool --tag=CC --mode=link gcc -g -Wall -fPIC -O2 -m64 -g -O2 -version-info 0:1:0 -L/usr/local//lib -o libhadoopsnappy.la -rpath /usr/local/lib src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.lo src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.lo -ljvm -ldl
[exec] /usr/bin/ld: cannot find -ljvm
[exec] collect2: ld returned 1 exit status
[exec] make: *** [libhadoopsnappy.la] Error 1
[exec] libtool: link: gcc -shared -fPIC -DPIC src/org/apache/hadoop/io/compress/snappy/.libs/SnappyCompressor.o src/org/apache/hadoop/io/compress/snappy/.libs/SnappyDecompressor.o -L/usr/local//lib -ljvm -ldl -O2 -m64 -O2 -Wl,-soname -Wl,libhadoopsnappy.so.0 -o .libs/libhadoopsnappy.so.0.0.1
This happens because libjvm.so from the JVM installation has not been symlinked into /usr/local/lib. If your system is 64-bit, you can see where libjvm.so lives under the JDK's jre/lib/amd64/server/ directory (for example /root/bin/jdk1.6.0_37/jre/lib/amd64/server/). Create the link with the following command:
$ sudo ln -s /usr/local/jdk1.6.0_45/jre/lib/amd64/server/libjvm.so /usr/local/lib/
That resolves the problem.
5. Configure snappy for Hadoop 2.2.0
After hadoop-snappy builds successfully, a number of files are generated in the target directory under hadoop-snappy, among them hadoop-snappy-0.0.1-SNAPSHOT.tar.gz.
1) Extract hadoop-snappy-0.0.1-SNAPSHOT.tar.gz under target and copy the lib files:
$ sudo cp -r ~/hadoop-snappy/target/hadoop-snappy-0.0.1-SNAPSHOT/lib/native/Linux-amd64-64/* $HADOOP_HOME/lib/native/Linux-amd64-64/
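If you want the extraction step spelled out as well, something along these lines (assuming the checkout lives at ~/hadoop-snappy and the default build output location) runs before the copy above:
$ cd ~/hadoop-snappy/target
$ tar -xzf hadoop-snappy-0.0.1-SNAPSHOT.tar.gz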
2) Copy hadoop-snappy-0.0.1-SNAPSHOT.jar from target into $HADOOP_HOME/lib.
3) Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh and add:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native/Linux-amd64-64/:/usr/local/lib/
4) Edit $HADOOP_HOME/etc/hadoop/mapred-site.xml. The compression-related configuration options for this file are:
<property>
  <name>mapred.output.compress</name>
  <value>false</value>
  <description>Should the job outputs be compressed?
  </description>
</property>
<property>
  <name>mapred.output.compression.type</name>
  <value>RECORD</value>
  <description>If the job outputs are to compressed as SequenceFiles, how should they be compressed? Should be one of NONE, RECORD or BLOCK.
  </description>
</property>
<property>
  <name>mapred.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  <description>If the job outputs are compressed, how should they be compressed?
  </description>
</property>
<property>
  <name>mapred.compress.map.output</name>
  <value>false</value>
  <description>Should the outputs of the maps be compressed before being sent across the network. Uses SequenceFile compression.
  </description>
</property>
<property>
  <name>mapred.map.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  <description>If the map outputs are compressed, how should they be compressed?
  </description>
</property>
Configure these according to your needs; for example, to compress map output with snappy, set mapred.compress.map.output to true and mapred.map.output.compression.codec to org.apache.hadoop.io.compress.SnappyCodec. The available codec classes are registered via io.compression.codecs:
<property>
  <name>io.compression.codecs</name>
  <value>
    org.apache.hadoop.io.compress.GzipCodec,
    org.apache.hadoop.io.compress.DefaultCodec,
    org.apache.hadoop.io.compress.BZip2Codec,
    org.apache.hadoop.io.compress.SnappyCodec
  </value>
</property>
SnappyCodec is the snappy compression codec.
5) Once everything is configured, restart the Hadoop cluster.
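For reference, a restart plus a quick end-to-end check might look like the following sketch. The sbin scripts are the standard Hadoop 2.2.0 ones; the examples jar path may differ in your distribution, and /input and /output are placeholder HDFS paths:
$ $HADOOP_HOME/sbin/stop-yarn.sh && $HADOOP_HOME/sbin/stop-dfs.sh
$ $HADOOP_HOME/sbin/start-dfs.sh && $HADOOP_HOME/sbin/start-yarn.sh
# Run a test job with snappy-compressed map output; the deprecated property
# names used in mapred-site.xml above are still honored in Hadoop 2.2.0.
$ $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount \
    -Dmapred.compress.map.output=true \
    -Dmapred.map.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec \
    /input /output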
6. Configure snappy for HBase 0.98
1) Set up the lib files under HBase's lib/native/Linux-amd64-64/ directory. For simplicity, just copy everything under $HADOOP_HOME/lib/native/Linux-amd64-64/ into the corresponding HBase directory:
$ sudo cp -r $HADOOP_HOME/lib/native/Linux-amd64-64/* $HBASE_HOME/lib/native/Linux-amd64-64/
2) Configure the HBase environment variables in hbase-env.sh:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native/Linux-amd64-64/:/usr/local/lib/
export HBASE_LIBRARY_PATH=$HBASE_LIBRARY_PATH:$HBASE_HOME/lib/native/Linux-amd64-64/:/usr/local/lib/
export CLASSPATH=$CLASSPATH:$HBASE_LIBRARY_PATH
Note: do not forget to set HADOOP_HOME and HBASE_HOME near the top of hbase-env.sh.
3) Once configured, restart HBase.
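If HBase manages its own daemons, the restart is typically just the standard scripts (a minimal sketch):
$ $HBASE_HOME/bin/stop-hbase.sh
$ $HBASE_HOME/bin/start-hbase.sh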
4) Verify the installation
From the HBase installation directory, run:
$ bin/hbase shell
2014-08-07 15:11:35,874 INFO  [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.2-hadoop2, r1591526, Wed Apr 30 20:17:33 PDT 2014
hbase(main):001:0>
Then create a table that uses snappy compression:
hbase(main):001:0> create 'test_snappy', {NAME => 'cf', COMPRESSION => 'SNAPPY'}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/q/hbase/hbase-0.98.2-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/q/hadoop2x/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
0 row(s) in 1.2580 seconds
=> Hbase::Table - test_snappy
hbase(main):002:0>
Describe the newly created test_snappy table:
hbase(main):002:0> describe 'test_snappy'
DESCRIPTION                                                                                ENABLED
 'test_snappy', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'SNAPPY', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}   true
1 row(s) in 0.0420 seconds
Note that COMPRESSION => 'SNAPPY' is set.
Next, try inserting a row:
hbase(main):003:0> put 'test_snappy', 'key1', 'cf:q1', 'value1'
0 row(s) in 0.0790 seconds
hbase(main):004:0>
Then scan the test_snappy table:
hbase(main):004:0> scan 'test_snappy'
ROW                COLUMN+CELL
 key1              column=cf:q1, timestamp=1407395814255, value=value1
1 row(s) in 0.0170 seconds
hbase(main):005:0>
If all of the steps above run correctly, the configuration is working.
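As an additional check, HBase's CompressionTest utility (the class that also appears in the error below) can exercise the snappy codec directly; /tmp/snappy-test.txt is just a placeholder path for the test file it writes:
$ $HBASE_HOME/bin/hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-test.txt snappy
If the command finishes without an UnsatisfiedLinkError, the native snappy library is being loaded.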
Troubleshooting:
a) After configuring, the following exception appears when HBase starts:
WARN  [main] util.CompressionTest: Can't instantiate codec: snappy
java.io.IOException: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
    at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:96)
    at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:62)