lz4 source code
1. How to decrypt the Oracle wrap encryption process
Oracle's wrap encryption works like this: first the source code is lz-compressed into lzstr; then SHA-1 is run over the compressed data to produce a 40-character digest string shstr; the digest is concatenated with the compressed data as shstr+lzstr; the concatenated string is passed through Oracle's byte-substitution (lookup table); finally the substituted string is base64-encoded, giving the wrapped string.
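As a concrete illustration of those four steps, here is a small C sketch using zlib and OpenSSL. It is only a sketch of the pipeline as just described: SUBST is a placeholder identity table standing in for Oracle's real substitution table (which is not reproduced here), and the sample string and buffer sizes are invented for the demo. Build with: gcc wrap_sketch.c -lz -lcrypto

#include <stdio.h>
#include <string.h>
#include <zlib.h>
#include <openssl/sha.h>
#include <openssl/evp.h>

int main(void) {
    /* Placeholder identity table; the real Oracle substitution table
       would have to come from published unwrap research. */
    unsigned char SUBST[256];
    for (int i = 0; i < 256; i++) SUBST[i] = (unsigned char)i;

    const unsigned char src[] = "CREATE OR REPLACE FUNCTION F_DAVE ...";

    /* 1. lz-compress the source (lzstr) */
    unsigned char lzstr[512];
    uLongf lzlen = sizeof(lzstr);
    compress(lzstr, &lzlen, src, sizeof(src) - 1);

    /* 2. SHA-1 over the compressed data, as a 40-char hex string (shstr) */
    unsigned char md[SHA_DIGEST_LENGTH];
    SHA1(lzstr, lzlen, md);
    char shstr[41];
    for (int i = 0; i < SHA_DIGEST_LENGTH; i++)
        sprintf(shstr + 2 * i, "%02x", md[i]);

    /* 3. Concatenate shstr + lzstr and run every byte through the table */
    unsigned char blob[552];
    memcpy(blob, shstr, 40);
    memcpy(blob + 40, lzstr, lzlen);
    for (uLongf i = 0; i < 40 + lzlen; i++)
        blob[i] = SUBST[blob[i]];

    /* 4. base64-encode the result to get the wrapped body */
    unsigned char b64[800];
    int n = EVP_EncodeBlock(b64, blob, (int)(40 + lzlen));
    printf("%.*s\n", n, b64);
    return 0;
}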
From the wrap utility's documentation: the default file extension for input_file is sql. The default name of output_file is input_file.plb. Therefore, these commands are equivalent:
wrap iname=/mydir/myfile
wrap iname=/mydir/myfile.sql oname=/mydir/myfile.plb
This example specifies a different file extension for input_file and a different name for output_file:
wrap iname=/mydir/myfile.src oname=/yourdir/yourfile.out
The steps for using wrap are:
(1) Save the SQL to be encrypted in a .sql text file.
(2) Process it with wrap, specifying the input SQL file from step (1) and the output path and file name; the default output extension is plb.
(3) Execute the wrap-processed SQL from step (2), i.e. the .plb file, to create the object.
Example 1: wrapping a function
-- Function
CREATE OR REPLACE FUNCTION F_DAVE (
n int
) RETURN string
IS
BEGIN
IF n = 1 THEN
RETURN 'Dave is DBA!';
ELSIF n = 2 THEN
RETURN 'Dave come from AnQing!';
ELSE
RETURN 'Dave come from HuaiNing!';
END IF;
END;
/
SYS@dave2(db2)> select F_DAVE(4) from dual;
F_DAVE(4)
--------------------------------------------------------------------------------
Dave come from HuaiNing!
BTW: someone in the group asked today why the examples on my blog mention AnQing; it's because I'm from HuaiNing, AnQing.
[oracle@db2 ~]$ pwd
/home/oracle
[oracle@db2 ~]$ cat dave.sql
CREATE OR REPLACE FUNCTION F_DAVE (
n int
) RETURN string
IS
BEGIN
IF n = 1 THEN
RETURN 'Dave is DBA!';
ELSIF n = 2 THEN
RETURN 'Dave come from AnQing!';
ELSE
RETURN 'Dave come from HuaiNing!';
END IF;
END;
/
[oracle@db2 ~]$ wrap iname=dave.sql
PL/SQL Wrapper: Release 10.2.0.1.0 - Production on Thu Aug 18 22:59:14 2011
Copyright (c) 1993, 2004, Oracle. All rights reserved.
Processing dave.sql to dave.plb
[oracle@db2 ~]$ ls
bifile.bbd dave.plb dave.sql Desktop log.bbd
[oracle@db2 ~]$ cat dave.plb
CREATE OR REPLACE FUNCTION F_DAVE wrapped
a000000
1
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
8
10d e7
xR
crtc/BRdQJjutbna/9+g0LlaSx87/znV+y926S1AeC0IRi/tjPJTyvJereDdk8mftMo8QMjV
fw0xXn0zVagAawwNVhSAiy//gJ5B
wAj75ph6EA==
/
SYS@dave2(db2)> @dave.plb
-- Call the function again; it still works normally:
SYS@dave2(db2)> select F_DAVE(4) from dual;
F_DAVE(4)
--------------------------------------------------------------------------------
Dave come from HuaiNing!
-- Look at the function source: it is now encrypted:
SYS@dave2(db2)> select text from dba_source where name='F_DAVE';
TEXT
--------------------------------------------------------------------------------
FUNCTION F_DAVE wrapped
a000000
1
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
TEXT
--------------------------------------------------------------------------------
abcd
abcd
abcd
abcd
abcd
abcd
abcd
8
10d e7
xR
crtc/BRdQJjutbna/9+g0LlaSx87/znV+y926S1AeC0IRi/tjPJTyvJereDdk8mftMo8QMjV
TEXT
--------------------------------------------------------------------------------
fw0xXn0zVagAawwNVhSAiy//gJ5B
wAj75ph6EA==
2. How to install the Hadoop native compression library
Installing and configuring snappy compression for Hadoop
[1] Environment
CentOS 6.3 64-bit
Hadoop 2.6.0
JDK 1.7.0_75
[2] Building and installing snappy
2.1 Download the source
Download the source from the official site http://code.google.com/p/snappy/ or from https://github.com/google/snappy ; the current version is 1.1.1.
2.2 Build and install
Unpack it with tar -zxvf snappy-1.1.1.tar.gz , then, as the root user, run the standard three-step build and install:
./configure
make
make install
By default it installs into /usr/local/lib ; check that directory:
[hadoop@micmiu ~]$ ls -lh /usr/local/lib |grep snappy
-rw-r--r-- 1 root root 229K Mar 10 11:28 libsnappy.a
-rwxr-xr-x 1 root root 953 Mar 10 11:28 libsnappy.la
lrwxrwxrwx 1 root root 18 Mar 10 11:28 libsnappy.so -> libsnappy.so.1.2.0
lrwxrwxrwx 1 root root 18 Mar 10 11:28 libsnappy.so.1 -> libsnappy.so.1.2.0
-rwxr-xr-x 1 root root 145K Mar 10 11:28 libsnappy.so.1.2.0
If the build and install finished without errors and the shared libraries above are present, snappy was compiled and installed successfully.
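As an optional extra check (my addition, not part of the original walkthrough), a tiny C program can exercise the freshly installed library directly through snappy's C API from snappy-c.h; if the round trip below succeeds, libsnappy itself works. Build with: gcc snappy_check.c -lsnappy

#include <stdio.h>
#include <string.h>
#include <snappy-c.h>

int main(void) {
    const char input[] = "hello snappy, hello snappy, hello snappy";
    size_t in_len = sizeof(input);

    /* Compress into a buffer sized by snappy's own upper bound */
    char compressed[128];
    size_t c_len = snappy_max_compressed_length(in_len);
    if (snappy_compress(input, in_len, compressed, &c_len) != SNAPPY_OK) {
        fprintf(stderr, "compress failed\n");
        return 1;
    }

    /* Uncompress and verify the round trip */
    char output[128];
    size_t out_len = sizeof(output);
    if (snappy_uncompress(compressed, c_len, output, &out_len) != SNAPPY_OK
            || out_len != in_len || memcmp(input, output, in_len) != 0) {
        fprintf(stderr, "round trip failed\n");
        return 1;
    }
    printf("snappy OK: %zu -> %zu bytes\n", in_len, c_len);
    return 0;
}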
[3] Installing and configuring hadoop-snappy
3.1 Rebuild the Hadoop native library with snappy support
For rebuilding the Hadoop native library, see the earlier posts on compiling Hadoop 2.2.0 from source and compiling Hadoop 2.x on Ubuntu; the only change is adding -Drequire.snappy to the final build command:
mvn package -Pdist,native -DskipTests -Dtar -Drequire.snappy
Then replace the original Hadoop native libraries with the newly built ones.
3.2 Download hadoop-snappy
No package is available on the official site at the moment, so the source must be fetched with svn:
svn checkout http://hadoop-snappy.googlecode.com/svn/trunk/ hadoop-snappy
3.3 Build hadoop-snappy
mvn package [-Dsnappy.prefix=SNAPPY_INSTALLATION_DIR]
PS: if snappy was installed to the default path above, i.e. /usr/local/lib , then [-Dsnappy.prefix=SNAPPY_INSTALLATION_DIR] can be omitted, or written explicitly as -Dsnappy.prefix=/usr/local/lib .
After a successful build, copy hadoop-snappy-0.0.1-SNAPSHOT.jar from the target directory to $HADOOP_HOME/lib , and copy the generated native libraries into $HADOOP_HOME/lib/native/ :
cp -r $HADOOP-SNAPPY_CODE_HOME/target/hadoop-snappy-0.0.1-SNAPSHOT/lib/native/Linux-amd64-64 $HADOOP_HOME/lib/native/
3.4 Common build errors and fixes
① Missing third-party dependencies
The official documentation lists the build prerequisites: gcc c++, autoconf, automake, libtool, java 6, JAVA_HOME set, Maven 3.
② Error message:
[exec] libtool: link: gcc -shared src/org/apache/hadoop/io/compress/snappy/.libs/SnappyCompressor.o src/org/apache/hadoop/io/compress/snappy/.libs/SnappyDecompressor.o -L/usr/local/lib -ljvm -ldl -m64 -Wl,-soname -Wl,libhadoopsnappy.so.0 -o .libs/libhadoopsnappy.so.0.0.1
[exec] /usr/bin/ld: cannot find -ljvm
[exec] collect2: ld returned 1 exit status
[exec] make: *** [libhadoopsnappy.la] Error 1
or
[exec] /bin/sh ./libtool --tag=CC --mode=link gcc -g -Wall -fPIC -O2 -m64 -g -O2 -version-info 0:1:0 -L/usr/local/lib -o libhadoopsnappy.la -rpath /usr/local/lib src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.lo src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.lo -ljvm -ldl
[exec] libtool: link: gcc -shared src/org/apache/hadoop/io/compress/snappy/.libs/SnappyCompressor.o src/org/apache/hadoop/io/compress/snappy/.libs/SnappyDecompressor.o -L/usr/local/lib -ljvm -ldl -m64 -Wl,-soname -Wl,libhadoopsnappy.so.0 -o .libs/libhadoopsnappy.so.0.0.1
[exec] /usr/bin/ld: cannot find -ljvm
[exec] collect2: ld returned 1 exit status
[exec] make: *** [libhadoopsnappy.la] Error 1
[ant] Exiting /home/hadoop/codes/hadoop-snappy/maven/build-compilenative.xml.
This error occurs because the JVM's libjvm.so has not been linked into /usr/local/lib. If your system is amd64, you can fix it with:
ln -s /usr/java/jdk1.7.0_75/jre/lib/amd64/server/libjvm.so /usr/local/lib/
[4] Hadoop configuration changes
4.1 Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh and add:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native/Linux-amd64-64/
4.2 Edit $HADOOP_HOME/etc/hadoop/core-site.xml:
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.GzipCodec,
         org.apache.hadoop.io.compress.DefaultCodec,
         org.apache.hadoop.io.compress.BZip2Codec,
         org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
4.3 Edit the compression-related properties in $HADOOP_HOME/etc/hadoop/mapred-site.xml to test snappy:
<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
[5] Testing and verification
Once everything is configured (every node in the cluster needs the native libraries and the configuration changes), restart the Hadoop cluster and run the built-in wordcount test example; if the mapreduce job runs without errors, the snappy compression setup is working.
Hadoop also provides its own native-library check, hadoop checknative :
[hadoop@micmiu ~]$ hadoop checknative
15/03/17 22:57:59 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
15/03/17 22:57:59 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /usr/local/share/hadoop-2.6.0/lib/native/libhadoop.so.1.0.0
zlib: true /lib64/libz.so.1
snappy: true /usr/local/share/hadoop/lib/native/Linux-amd64-64/libsnappy.so.1
lz4: true revision:99
bzip2: true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so
3. Error: Could not find or load main class Djava.library.path=.usr.hadoop.hadoop-2.8.0.lib:.
Recently, while planning to create an HBase table with snappy compression, I ran into some problems with the Hadoop native libraries. The problems had actually been there all along; they just never affected normal use, so I had ignored them. This time I wanted to fix the following issues for good:
Issue 1: start-dfs.sh prints the following log
xxxx: Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop-2.4.0/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
xxxx: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
This happens because the native library shipped on the official site is 32-bit and cannot run on a 64-bit host. You need to download the Hadoop source and build it yourself (build instructions are easy to find online); after a successful build, copy the files under native into the ${HADOOP_HOME}/lib/native directory.
Issue 2: start-dfs.sh prints the following log
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
All the articles I found online say to add these two lines to hadoop-env.sh:
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib/"
But in my tests the warning still appeared with this configuration, which means the native library was still not being loaded.
Turn on debug logging:
export HADOOP_ROOT_LOGGER=DEBUG,console
Run start-dfs.sh and the following log appears:
DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
The log shows that the hadoop library is not in any directory listed in java.library.path, so the configured path must be wrong. Reconfigure in hadoop-env.sh:
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib/native/"
Run start-dfs.sh again and the warning no longer appears. In fact, testing shows that exporting HADOOP_OPTS alone is enough to fix the problem.
Verify that the native libraries now load, again with hadoop checknative :
15/08/18 10:31:17 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
15/08/18 10:31:17 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /usr/local/hadoop-2.4.0/lib/native/libhadoop.so.1.0.0
zlib: true /lib64/libz.so.1
snappy: true /usr/local/hadoop-2.4.0/lib/native/Linux-amd64-64/libsnappy.so.1
lz4: true revision:99
bzip2: true /lib64/libbz2.so.1
The output above shows that the native libraries are now loaded successfully.
4. How to add and use a database library in Xcode 7.0
1> Add the mysql header directory to Xcode's header search path
Project settings --> Build Settings --> Search Paths --> Header Search Paths: add /usr/local/mysql/include
2> Add the mysql library directory to Xcode's library search path
Project settings --> Build Settings --> Search Paths --> Library Search Paths: add /usr/local/mysql/lib
3> Add the linker flags
Project settings --> Build Settings --> Linking --> Other Linker Flags: add the following flags:
-lmysqlclient
-lm
-lz
4> Symlink the mysql dynamic library into /usr/lib
ln -s /usr/local/mysql/lib/libmysqlclient.18.dylib /usr/lib
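With those settings in place, a minimal C test program can confirm the linkage (the host, user, password and database names below are placeholders); it is calls like these into libmysqlclient that make the -lmysqlclient flag necessary:

#include <stdio.h>
#include <mysql.h>   /* found via the Header Search Path added above */

int main(void) {
    MYSQL *conn = mysql_init(NULL);
    /* Placeholder credentials; replace with your own server details */
    if (!mysql_real_connect(conn, "localhost", "user", "password",
                            "testdb", 0, NULL, 0)) {
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        return 1;
    }
    if (mysql_query(conn, "SELECT VERSION()") == 0) {
        MYSQL_RES *res = mysql_store_result(conn);
        MYSQL_ROW row = mysql_fetch_row(res);
        printf("MySQL version: %s\n", row[0]);
        mysql_free_result(res);
    }
    mysql_close(conn);
    return 0;
}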