
lz4 source code

Published: 2022-05-05 02:24:43

1. How to decrypt the Oracle wrap encryption process

Oracle's wrap encryption works as follows: the source code is first LZ-compressed to produce lzstr; a SHA-1 hash of the compressed data yields a 40-character digest shstr; the digest and the compressed data are concatenated as shstr+lzstr; the concatenated string is then run through Oracle's byte substitution (a translation table); finally, the result is base64-encoded to give the wrapped string.
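Since every step above is a standard, reversible transform, unwrapping is the same pipeline run backwards. Here is a minimal Python sketch of that reversal, under two assumptions not stated above: the LZ step is zlib's deflate, and the digest is stored as 20 raw bytes. CHARMAP is a placeholder; the real 256-byte Oracle translation table must be substituted for it, and b64_body is the base64 block already extracted from the .plb file.

import base64, hashlib, zlib

# Placeholder: fill in the known 256-byte Oracle translation table.
CHARMAP = bytes(256)

def unwrap(b64_body: str) -> str:
    raw = base64.b64decode(b64_body)              # undo the final base64 step
    mapped = bytes(CHARMAP[b] for b in raw)       # undo the byte substitution
    shstr, lzstr = mapped[:20], mapped[20:]       # 20-byte SHA-1 digest + compressed source
    assert hashlib.sha1(lzstr).digest() == shstr  # digest covers the compressed data
    return zlib.decompress(lzstr).decode()        # inflate to recover the source

With the real table filled in, calling unwrap() on the base64 block of the dave.plb file shown later in this section should print the original CREATE FUNCTION source.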

The default file extension for input_file is sql. The default name of output_file is input_file.plb. Therefore, these commands are equivalent:
wrap iname=/mydir/myfile
wrap iname=/mydir/myfile.sql oname=/mydir/myfile.plb
This example specifies a different file extension for input_file and a different name for output_file:
wrap iname=/mydir/myfile.src oname=/yourdir/yourfile.out

The steps for using wrap are:
(1) Save the SQL statements to be encrypted in a .sql text file.
(2) Process it with wrap, specifying the input SQL file from step (1), plus the output path and file name; the default output extension is plb.
(3) Execute the wrap-processed SQL from step (2), i.e. the .plb file, to create the object.


Example 1: wrapping a function
-- Function
CREATE OR REPLACE FUNCTION F_DAVE (
n int
) RETURN string
IS
BEGIN
IF n = 1 THEN
RETURN 'Dave is DBA!';
ELSIF n = 2 THEN
RETURN 'Dave come from AnQing!';
ELSE
RETURN 'Dave come from HuaiNing!';
END IF;
END;
/

SYS@dave2(db2)> select F_DAVE(4) from dual;
F_DAVE(4)
--------------------------------------------------------------------------------
Dave come from HuaiNing!

BTW: someone in the group asked me today why the examples on my blog mention AnQing; it is because I am from HuaiNing, AnQing.

[oracle@db2 ~]$ pwd
/home/oracle
[oracle@db2 ~]$ cat dave.sql
CREATE OR REPLACE FUNCTION F_DAVE (
n int
) RETURN string
IS
BEGIN
IF n = 1 THEN
RETURN 'Dave is DBA!';
ELSIF n = 2 THEN
RETURN 'Dave come from AnQing!';
ELSE
RETURN 'Dave come from HuaiNing!';
END IF;
END;
/

[oracle@db2 ~]$ wrap iname=dave.sql

PL/SQL Wrapper: Release 10.2.0.1.0 - Production on Thu Aug 18 22:59:14 2011

Copyright (c) 1993, 2004, Oracle. All rights reserved.

Processing dave.sql to dave.plb
[oracle@db2 ~]$ ls
bifile.bbd dave.plb dave.sql Desktop log.bbd

[oracle@db2 ~]$ cat dave.plb
CREATE OR REPLACE FUNCTION F_DAVE wrapped
a000000
1
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
8
10d e7
xR
crtc/BRdQJjutbna/9+g0LlaSx87/znV+y926S1AeC0IRi/tjPJTyvJereDdk8mftMo8QMjV
fw0xXn0zVagAawwNVhSAiy//gJ5B
wAj75ph6EA==

/

SYS@dave2(db2)> @dave.plb

-- Call the function again; it still works normally:
SYS@dave2(db2)> select F_DAVE(4) from dual;

F_DAVE(4)
--------------------------------------------------------------------------------
Dave come from HuaiNing!

-- Check the function source; it is now encrypted:
SYS@dave2(db2)> select text from dba_source where name='F_DAVE';

TEXT
--------------------------------------------------------------------------------
FUNCTION F_DAVE wrapped
a000000
1
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd

TEXT
--------------------------------------------------------------------------------
abcd
abcd
abcd
abcd
abcd
abcd
abcd
8
10d e7
xR
crtc/BRdQJjutbna/9+g0LlaSx87/znV+y926S1AeC0IRi/tjPJTyvJereDdk8mftMo8QMjV

TEXT
--------------------------------------------------------------------------------
fw0xXn0zVagAawwNVhSAiy//gJ5B
wAj75ph6EA==

2. How to install the Hadoop native compression library

Installing and configuring snappy compression for Hadoop

[1] Test environment

CentOS 6.3 64-bit

Hadoop 2.6.0

JDK 1.7.0_75

[2] Building and installing snappy

2.1 Download the source

Download the source from the official site http://code.google.com/p/snappy/ or from https://github.com/google/snappy
; the current version is 1.1.1.

2.2 Build and install

Unpack with tar -zxvf snappy-1.1.1.tar.gz, then run the standard three build-and-install steps as root:

./configure

make

make install

By default it installs to /usr/local/lib; check that directory:

[hadoop@micmiu ~]$ ls -lh /usr/local/lib |grep snappy

-rw-r--r-- 1 root root 229K Mar 10 11:28 libsnappy.a

-rwxr-xr-x 1 root root 953 Mar 10 11:28 libsnappy.la

lrwxrwxrwx 1 root root 18 Mar 10 11:28 libsnappy.so -> libsnappy.so.1.2.0

lrwxrwxrwx 1 root root 18 Mar 10 11:28 libsnappy.so.1 -> libsnappy.so.1.2.0

-rwxr-xr-x 1 root root 145K Mar 10 11:28 libsnappy.so.1.2.0

If the installation produced no errors and the shared libraries above are present, snappy has essentially been built and installed successfully.
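As an extra sanity check (my own suggestion, not part of the original walkthrough), a short Python snippet can confirm that the freshly installed library loads and answers a trivial call from snappy's C API (snappy_max_compressed_length from snappy-c.h):

import ctypes

# Load the library from the default install path used above.
snappy = ctypes.CDLL("/usr/local/lib/libsnappy.so")
snappy.snappy_max_compressed_length.argtypes = [ctypes.c_size_t]
snappy.snappy_max_compressed_length.restype = ctypes.c_size_t

# A working library reports a compressed-size bound larger than the input size.
print(snappy.snappy_max_compressed_length(1024))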

[3] Installing and configuring hadoop-snappy

3.1 Rebuild the Hadoop native library with snappy support

For rebuilding the Hadoop native library, refer to the earlier posts on compiling Hadoop 2.2.0 from source and on compiling Hadoop 2.x source under Ubuntu; the only change is to add -Drequire.snappy to the final build command:

mvn package -Pdist,native -DskipTests -Dtar -Drequire.snappy

Replace the original Hadoop native libraries with the newly built ones.

3.2 Download hadoop-snappy

No prebuilt package is available from the official site; the source has to be fetched with svn:

svn checkout http://hadoop-snappy.googlecode.com/svn/trunk/ hadoop-snappy

3.3 Build hadoop-snappy

mvn package [-Dsnappy.prefix=SNAPPY_INSTALLATION_DIR]

PS: if snappy was installed to its default path, i.e. /usr/local/lib, the [-Dsnappy.prefix=SNAPPY_INSTALLATION_DIR] option can be omitted, or given explicitly as -Dsnappy.prefix=/usr/local/lib

After a successful build, copy hadoop-snappy-0.0.1-SNAPSHOT.jar from the target directory to $HADOOP_HOME/lib, and copy the generated native libraries to the $HADOOP_HOME/lib/native/ directory:

cp -r $HADOOP-SNAPPY_CODE_HOME/target/hadoop-snappy-0.0.1-SNAPSHOT/lib/native/Linux-amd64-64 $HADOOP_HOME/lib/native/

3.4 Common errors during the build and how to fix them

① Missing third-party dependencies

The official documentation lists the build prerequisites: gcc c++, autoconf, automake, libtool, java 6, JAVA_HOME set, Maven 3.

② Error message:

[exec] libtool: link: gcc -shared src/org/apache/hadoop/io/compress/snappy/.libs/SnappyCompressor.o src/org/apache/hadoop/io/compress/snappy/.libs/SnappyDecompressor.o -L/usr/local/lib -ljvm -ldl -m64 -Wl,-soname -Wl,libhadoopsnappy.so.0 -o .libs/libhadoopsnappy.so.0.0.1

[exec] /usr/bin/ld: cannot find -ljvm

[exec] collect2: ld returned 1 exit status

[exec] make: *** [libhadoopsnappy.la] Error 1

or

[exec] /bin/sh ./libtool --tag=CC --mode=link gcc -g -Wall -fPIC -O2 -m64 -g -O2 -version-info 0:1:0 -L/usr/local/lib -o libhadoopsnappy.la -rpath /usr/local/lib src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.lo src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.lo -ljvm -ldl

[exec] libtool: link: gcc -shared src/org/apache/hadoop/io/compress/snappy/.libs/SnappyCompressor.o src/org/apache/hadoop/io/compress/snappy/.libs/SnappyDecompressor.o -L/usr/local/lib -ljvm -ldl -m64 -Wl,-soname -Wl,libhadoopsnappy.so.0 -o .libs/libhadoopsnappy.so.0.0.1

[exec] /usr/bin/ld: cannot find -ljvm

[exec] collect2: ld returned 1 exit status

[exec] make: *** [libhadoopsnappy.la] Error 1

[ant] Exiting /home/hadoop/codes/hadoop-snappy/maven/build-compilenative.xml.

This error occurs because libjvm.so from the installed JVM has not been linked into /usr/local/lib. If your system is amd64, the following command resolves it:

ln -s /usr/java/jdk1.7.0_75/jre/lib/amd64/server/libjvm.so /usr/local/lib/

[4] Hadoop configuration changes

4.1 Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh and add:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native/Linux-amd64-64/

4.2 Edit $HADOOP_HOME/etc/hadoop/core-site.xml:

<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.GzipCodec,
         org.apache.hadoop.io.compress.DefaultCodec,
         org.apache.hadoop.io.compress.BZip2Codec,
         org.apache.hadoop.io.compress.SnappyCodec</value>
</property>

4.3 Edit the compression-related properties in $HADOOP_HOME/etc/hadoop/mapred-site.xml to test snappy:

<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>

[5] Testing and verification

Once everything is configured (every node in the cluster needs the native libraries and the modified configuration), restart the Hadoop cluster and run the bundled wordcount example; if the mapreduce job completes without errors, snappy compression has been installed and configured successfully.

Hadoop also ships its own native-library check, hadoop checknative:

[hadoop@micmiu ~]$ hadoop checknative

15/03/17 22:57:59 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native

15/03/17 22:57:59 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library

Native library checking:

hadoop: true /usr/local/share/hadoop-2.6.0/lib/native/libhadoop.so.1.0.0

zlib: true /lib64/libz.so.1

snappy: true /usr/local/share/hadoop/lib/native/Linux-amd64-64/libsnappy.so.1

lz4: true revision:99

bzip2: true /lib64/libbz2.so.1

openssl: true /usr/lib64/libcrypto.so

3. Error: Could not find or load main class Djava.library.path=.usr.hadoop.hadoop-2.8.0.lib:.

Recently, while planning to use snappy compression for an HBase table, I ran into some problems with the Hadoop native libraries. These problems had actually always been there; they just never affected normal use, so they were ignored. This time I wanted to solve the following issues once and for all:
Issue 1: running start-dfs.sh produces the following log output:
xxxx: Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop-2.4.0/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
xxxx: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
This happens because the native libraries shipped in the official release are 32-bit and cannot run on a 64-bit host. You need to download the Hadoop source and compile it yourself (instructions for compiling can be found online); after a successful build, copy the files under native into the ${HADOOP_HOME}/lib/native directory.
Issue 2: running start-dfs.sh produces the following log output:
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Every article found online says to add the following two lines to hadoop-env.sh:
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib/"
But in testing, the warning still appeared after adding this configuration, meaning the native library was still not loaded.
Enable debug output:
export HADOOP_ROOT_LOGGER=DEBUG,console
Run start-dfs.sh again and the following log line appears:
DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
The log shows that the hadoop library is not under any directory on java.library.path, so the configured java.library.path must be wrong. Reconfigure it in hadoop-env.sh:
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib/native/"
Run start-dfs.sh again and the warning is gone. Testing shows that exporting HADOOP_OPTS alone is in fact enough to fix the problem.
Verify that the native libraries load successfully with hadoop checknative:
15/08/18 10:31:17 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
15/08/18 10:31:17 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /usr/local/hadoop-2.4.0/lib/native/libhadoop.so.1.0.0
zlib: true /lib64/libz.so.1
snappy: true /usr/local/hadoop-2.4.0/lib/native/Linux-amd64-64/libsnappy.so.1
lz4: true revision:99
bzip2: true /lib64/libbz2.so.1
This output confirms that the native libraries are now loaded.
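If you want a check that is independent of Hadoop's logging (again my own suggestion, not part of the original article), dlopen-ing the library directly from Python will fail for a wrong-architecture build, which is exactly the 32-bit-versus-64-bit problem from issue 1:

import ctypes

# Path taken from the checknative output above; a 32-bit .so on a
# 64-bit host raises OSError here instead of loading.
ctypes.CDLL("/usr/local/hadoop-2.4.0/lib/native/libhadoop.so")
print("libhadoop.so loaded OK")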

4. How to add a database framework in Xcode 7.0

1> Add the MySQL header directory to Xcode's header search paths
Project settings --> Build Settings --> Search Paths --> Header Search Paths, add /usr/local/mysql/include
2> Add the MySQL library directory to Xcode's library search paths
Project settings --> Build Settings --> Search Paths --> Library Search Paths, add /usr/local/mysql/lib
3> Add linker flags
Project settings --> Build Settings --> Linking --> Other Linker Flags, add the following flags:

-lmysqlclient
-lm
-lz
4> Symlink the MySQL dynamic library into /usr/lib:
ln -s /usr/local/mysql/lib/libmysqlclient.18.dylib /usr/lib
