
Compiling Flink 1.12.2 for CDH 6.3.2 and Building Its Parcel

Published: 2021-05-28


Preface

A step-by-step, hands-on walkthrough of deploying the now-mature Flink 1.12 on CDH 6.x as a parcel.

Compilation

Download kafka-avro-serializer-5.3.0 and kafka-schema-registry-client-5.5.2.
Note: do not use the kafka-avro-serializer 5.5.2 referenced in the Flink source, because compilation fails with an error (the error screenshot is not reproduced here).
The fix for that error is:

[root@master flink-release-1.12.2]# vi ./flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/pom.xml
    110                 <dependency>
    111                         <!-- https://mvnrepository.com/artifact/io.confluent/kafka-avro-serializer -->
    112                         <groupId>io.confluent</groupId>
    113                         <artifactId>kafka-avro-serializer</artifactId>
    114                         <!-- <version>5.5.2</version> -->
    115                         <version>5.3.0</version>
    116                         <scope>test</scope>
    117                 </dependency>

In other words, comment out the original version on line 114 and add line 115 with version 5.3.0.
Because Maven could not download these two jars, I downloaded them manually and installed them into the local Maven repository with install:install-file. The full build sequence (flink-shaded first, then Flink itself):

[root@master flink-release-1.12.2]# cd /opt/module/flink-shaded-release-12.0/
[root@master flink-shaded-release-12.0]# mvn clean install -DskipTests -Dhadoop.version=3.0.0-cdh6.3.2
[root@master flink-release-1.12.2]# mvn install:install-file -Dfile=/root/jcz/my_jar/kafka-avro-serializer-5.3.0.jar -DgroupId=io.confluent -DartifactId=kafka-avro-serializer -Dversion=5.3.0 -Dpackaging=jar
[root@master flink-release-1.12.2]# mvn install:install-file -Dfile=/root/jcz/my_jar/kafka-schema-registry-client-5.5.2.jar -DgroupId=io.confluent -DartifactId=kafka-schema-registry-client -Dversion=5.5.2 -Dpackaging=jar
[root@master flink-release-1.12.2]# mvn -T2C clean install -DskipTests -Dfast -Pinclude-hadoop -Pvendor-repos -Dhadoop.version=3.0.0-cdh6.3.2 -Dflink.shaded.version=12.0 -Dscala-2.12
[root@master ~]# scp -r /opt/module/flink-release-1.12.2/flink-dist/target/flink-1.12.2-bin/flink-1.12.2 /var/www/html/
[root@master ~]# cd /var/www/html/
[root@master html]# tar -czvf flink-1.12.2-bin-scala_2.12.tgz flink-1.12.2
[root@master ~]# systemctl start httpd
[root@master soft]# git clone https://github.com/pkeropen/flink-parcel.git
[root@master soft]# chmod -R  777 flink-parcel
[root@master soft]# cd flink-parcel/
[root@master flink-parcel]# vi flink-parcel.properties
#Flink download URL
#FLINK_URL=https://mirrors.tuna.tsinghua.edu.cn/apache/flink/flink-1.9.1/flink-1.9.1-bin-scala_2.12.tgz
FLINK_URL=http://192.168.2.95/flink-1.12.2-bin-scala_2.12.tgz

#Flink version
FLINK_VERSION=1.12.2

#extension version
EXTENS_VERSION=BIN-SCALA_2.12

#OS version (CentOS in this example)
OS_VERSION=7

#CDH minor version range
CDH_MIN_FULL=5.2
CDH_MAX_FULL=6.3.3

#CDH major version range
CDH_MIN=5
CDH_MAX=6
[root@master flink-parcel]# ./build.sh parcel

Once the steps above in the html directory are done, open a browser, enter the server's IP, and you should see the directory listing served by httpd (screenshot omitted).
If you cannot reach this page from the CentOS 7 VM, the web server or firewall needs fixing (the original post gives the fix only as a screenshot).
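A minimal sketch of a typical remedy on CentOS 7 (my assumption; the author's exact steps are not recoverable from the screenshot): check that httpd is running and allow HTTP through firewalld. On a throwaway VM, stopping firewalld entirely also works.

[root@master html]# systemctl status httpd
[root@master html]# firewall-cmd --permanent --add-service=http
[root@master html]# firewall-cmd --reload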

Error fixes

1. Error 01:
[ERROR] Failed to execute goal on project cloudera-manager-schema: Could not resolve dependencies for project com.cloudera.cmf.schema:cloudera-manager-schema:jar:5.8.0: Could not find artifact commons-cli:commons-cli:jar:1.3-cloudera-pre-r1439998 in aliyun (http://maven.aliyun.com/nexus/content/groups/public)

Following the post “livy 集成cdh中编译parcel包出现问题解决” (on resolving parcel-build issues when integrating Livy with CDH), the fix is:

[root@master ~]# mvn install:install-file -Dfile=/root/jcz/my_jar/commons-cli-1.3-cloudera-pre-r1439998.jar -DgroupId=commons-cli -DartifactId=commons-cli -Dversion=1.3-cloudera-pre-r1439998 -Dpackaging=jar
[root@master flink-parcel]# ./build.sh parcel
Validation succeeded.
[root@master flink-parcel]# ll ./FLINK-1.12.2-BIN-SCALA_2.12_build
total 327520
-rw-r--r-- 1 root root 335369210 Apr  9 00:49 FLINK-1.12.2-BIN-SCALA_2.12-el7.parcel
-rw-r--r-- 1 root root        41 Apr  9 00:49 FLINK-1.12.2-BIN-SCALA_2.12-el7.parcel.sha
-rw-r--r-- 1 root root       583 Apr  9 00:49 manifest.json
[root@master flink-parcel]# ll
total 327540
-rwxrwxrwx 1 root root      5863 Apr  8 22:47 build.sh
drwxr-xr-x 6 root root       142 Apr  9 00:39 cm_ext
drwxr-xr-x 4 root root        29 Apr  9 00:48 FLINK-1.12.2-BIN-SCALA_2.12
drwxr-xr-x 2 root root       123 Apr  9 00:49 FLINK-1.12.2-BIN-SCALA_2.12_build
-rw-r--r-- 1 root root 335366412 Apr  8 02:45 flink-1.12.2-bin-scala_2.12.tgz
drwxrwxrwx 5 root root        53 Apr  8 22:47 flink-csd-on-yarn-src
drwxrwxrwx 5 root root        53 Apr  8 22:47 flink-csd-standalone-src
-rwxrwxrwx 1 root root       411 Apr  8 22:49 flink-parcel.properties
drwxrwxrwx 3 root root        85 Apr  8 22:47 flink-parcel-src
-rwxrwxrwx 1 root root     11357 Apr  8 22:47 LICENSE
-rwxrwxrwx 1 root root      4334 Apr  8 22:47 README.md
[root@master flink-parcel]# ./build.sh csd_on_yarn
Validation succeeded.
[root@master flink-parcel]# ll FLINK_ON_YARN-1.12.2.jar
-rw-r--r-- 1 root root 8259 Apr  9 00:53 FLINK_ON_YARN-1.12.2.jar
[root@master flink-parcel]# tar -cvf ./FLINK-1.12.2-BIN-SCALA_2.12.tar ./FLINK-1.12.2-BIN-SCALA_2.12_build/
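As a sanity check before publishing the parcel (my own addition, not in the original post): the 41-byte .parcel.sha file holds a SHA-1 digest, so you can confirm it matches the parcel itself; the two hashes should be identical.

[root@master flink-parcel]# sha1sum ./FLINK-1.12.2-BIN-SCALA_2.12_build/FLINK-1.12.2-BIN-SCALA_2.12-el7.parcel
[root@master flink-parcel]# cat ./FLINK-1.12.2-BIN-SCALA_2.12_build/FLINK-1.12.2-BIN-SCALA_2.12-el7.parcel.sha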

Upload the FLINK-1.12.2-BIN-SCALA_2.12.tar and FLINK_ON_YARN-1.12.2.jar produced above to the production server (the node that serves the LAN yum repository).
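For example (a sketch only; the original post does not show the transfer command, and it assumes /opt/soft/my-flink-parcel already exists on the target, which is created in the next step):

[root@master flink-parcel]# scp ./FLINK-1.12.2-BIN-SCALA_2.12.tar ./FLINK_ON_YARN-1.12.2.jar root@cdh632-master01:/opt/soft/my-flink-parcel/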

[root@cdh632-master01 ~]# mkdir /opt/soft/my-flink-parcel
[root@cdh632-master01 ~]# cd /opt/soft/my-flink-parcel/
[root@cdh632-master01 my-flink-parcel]# tar -xvf FLINK-1.12.2-BIN-SCALA_2.12.tar -C /var/www/html
[root@cdh632-master01 my-flink-parcel]# cd /var/www/html
[root@cdh632-master01 html]# ll
total 8
drwxr-xr-x. 2 root root 4096 Mar  4  2020 cloudera-repos
drwxr-xr-x  2 root root 4096 Apr  9 00:49 FLINK-1.12.2-BIN-SCALA_2.12_build
[root@cdh632-master01 html]# mv FLINK-1.12.2-BIN-SCALA_2.12_build flink
[root@cdh632-master01 html]# cd flink/
[root@cdh632-master01 flink]# createrepo .
[root@cdh632-master01 flink]# cd /etc/yum.repos.d
[root@cdh632-master01 yum.repos.d]# vi flink.repo
[flink]
name=flink
baseurl=http://cdh632-master01/flink
enabled=1
gpgcheck=0
[root@cdh632-master01 yum.repos.d]# cd
[root@cdh632-master01 ~]# yum clean all
[root@cdh632-master01 ~]# yum makecache

(Screenshots omitted: the FLINK parcel as it appears on the Cloudera Manager Parcels page.)
2. Error 02: after clicking “Download” on the Parcels page, CM reports an error for parcel FLINK-1.12.2-BIN-SCALA_2.12-el7.parcel: hash verification failed.
Following the post “cloudera manager手动安装flink、livy parcel出现哈希验证失败” (hash verification failure when manually installing Flink/Livy parcels in Cloudera Manager), the fix is:

[root@cdh632-master01 ~]# vi /etc/httpd/conf/httpd.conf
AddType application/x-gzip .gz .tgz .parcel #append .parcel to this line
[root@cdh632-master01 ~]# systemctl restart httpd
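To confirm the change took effect (my own check, not part of the original post), inspect the Content-Type the web server now returns for the parcel; it should be application/x-gzip:

[root@cdh632-master01 ~]# curl -sI http://cdh632-master01/flink/FLINK-1.12.2-BIN-SCALA_2.12-el7.parcel | grep -i content-type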

Click “Distribute”, then click “Activate” (screenshots omitted).
Next, copy the CSD jar into place and restart Cloudera Manager:

[root@cdh632-master01 ~]# cp /opt/soft/my-flink-parcel/FLINK_ON_YARN-1.12.2.jar /opt/cloudera/csd/
[root@cdh632-master01 ~]# systemctl stop cloudera-scm-agent
[root@cdh632-worker02 ~]# systemctl stop cloudera-scm-agent
[root@cdh632-worker03 ~]# systemctl stop cloudera-scm-agent
[root@cdh632-master01 ~]# systemctl restart cloudera-scm-server
[root@cdh632-master01 ~]# systemctl start cloudera-scm-agent
[root@cdh632-worker02 ~]# systemctl start cloudera-scm-agent
[root@cdh632-worker03 ~]# systemctl start cloudera-scm-agent
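To double-check the CSD after the restart (my own habit, not from the post), confirm the jar is in the csd directory; the Cloudera Manager server log may also mention the new FLINK_ON_YARN service type once the descriptor has been loaded.

[root@cdh632-master01 ~]# ls -l /opt/cloudera/csd/
[root@cdh632-master01 ~]# grep -i flink /var/log/cloudera-scm-server/cloudera-scm-server.log | tail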

(Screenshots omitted: adding the Flink service through the Cloudera Manager wizard; the first start ends with an error.)
Do not get hung up on this error; click “Back” repeatedly until you are back at the CDH home page.
Next, locate the flink-shaded-hadoop uber jar built earlier and copy it into the parcel's Flink lib directory on every node:

[root@cdh632-master01 ~]# find / -name flink-shaded-hadoop-*
/opt/module/repository/org/apache/flink/flink-shaded-hadoop-2-uber/3.0.0-cdh6.3.2-10.0/flink-shaded-hadoop-2-uber-3.0.0-cdh6.3.2-10.0.jar
[root@cdh632-master01 lib]# scp /opt/module/repository/org/apache/flink/flink-shaded-hadoop-2-uber/3.0.0-cdh6.3.2-10.0/flink-shaded-hadoop-2-uber-3.0.0-cdh6.3.2-10.0.jar root@cdh632-master01:/opt/cloudera/parcels/FLINK/lib/flink/lib/
[root@cdh632-master01 lib]# scp /opt/module/repository/org/apache/flink/flink-shaded-hadoop-2-uber/3.0.0-cdh6.3.2-10.0/flink-shaded-hadoop-2-uber-3.0.0-cdh6.3.2-10.0.jar root@cdh632-worker02:/opt/cloudera/parcels/FLINK/lib/flink/lib/
[root@cdh632-master01 lib]# scp /opt/module/repository/org/apache/flink/flink-shaded-hadoop-2-uber/3.0.0-cdh6.3.2-10.0/flink-shaded-hadoop-2-uber-3.0.0-cdh6.3.2-10.0.jar root@cdh632-worker03:/opt/cloudera/parcels/FLINK/lib/flink/lib/
On every machine that hosts a Flink-yarn role, do the following:
[root@cdh632-master01 ~]# vi /etc/profile
export HADOOP_CLASSPATH=/opt/cloudera/parcels/FLINK/lib/flink/lib
[root@cdh632-master01 ~]# source /etc/profile
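For reference, a commonly used alternative when running Flink on YARN (not what the author does here) is to point HADOOP_CLASSPATH at the cluster's own Hadoop jars instead of Flink's lib directory:

export HADOOP_CLASSPATH=`hadoop classpath`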
3. Error 03: after the steps above I tried to restart all Flink-yarn roles from the UI, but every attempt failed with: flink-yarn.sh: line 17: rotateLogFilesWithPrefix: command not found. Others have hit the same problem (https://download.csdn.net/download/orangelzc/15936248). The fix that finally worked:
[root@cdh632-master01 ~]# scp /opt/module/repository/org/apache/flink/flink-shaded-hadoop-2-uber/2.7.5-10.0/flink-shaded-hadoop-2-uber-2.7.5-10.0.jar root@cdh632-worker03:/opt/cloudera/parcels/FLINK/lib/flink/lib/
[root@cdh632-master01 ~]# scp /opt/module/repository/org/apache/flink/flink-shaded-hadoop-2-uber/2.7.5-10.0/flink-shaded-hadoop-2-uber-2.7.5-10.0.jar root@cdh632-worker02:/opt/cloudera/parcels/FLINK/lib/flink/lib/
[root@cdh632-master01 ~]# scp /opt/module/repository/org/apache/flink/flink-shaded-hadoop-2-uber/2.7.5-10.0/flink-shaded-hadoop-2-uber-2.7.5-10.0.jar root@cdh632-master01:/opt/cloudera/parcels/FLINK/lib/flink/lib/

Restarting all Flink-yarn roles from the UI again, it surprisingly succeeded. Although my earlier build with:
[root@master flink-shaded-release-12.0]# mvn clean install -DskipTests -Dhadoop.version=3.0.0-cdh6.3.2
did produce flink-shaded-hadoop-2-uber-3.0.0-cdh6.3.2-10.0.jar, that jar was not picked up; only flink-shaded-hadoop-2-uber-2.7.5-10.0.jar seems to be recognized. To be investigated.
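A quick way to see which shaded-Hadoop jars the parcel is actually shipping on a node (my own check):

[root@cdh632-master01 ~]# ls /opt/cloudera/parcels/FLINK/lib/flink/lib/ | grep flink-shaded-hadoop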
(Screenshots omitted: the Flink-yarn roles now start successfully in Cloudera Manager.)

References

基于CDH-6.2.0编译flink-1.12.1(Hadoop-3.0.0&Hive-2.1.1)
centos7 下httpd服务器开启目录
cdh6 flink 安装

Original article: https://blog.csdn.net/benpaodexiaowoniu/article/details/115500230
