
Tencent Cloud Big Data Platform (MapReduce): A Summary of Flink Job Submission Issues


3. Detailed Problems and Solutions
1. Declare all common Flink components with <scope>provided</scope>: the Flink, Hadoop, and HBase artifacts already exist on the cluster classpath, so bundling them into the job jar only invites class conflicts. Scope them as provided and let the fat jar carry business code and third-party libraries only; a packaging sketch follows the dependency list.

<properties>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <!-- Flink version used for local development -->
  <flink.version>1.14.3</flink.version>
  <hadoop.version>3.0.0</hadoop.version>
  <target.java.version>1.8</target.java.version>
  <scala.binary.version>2.11</scala.binary.version>
  <maven.compiler.source>${target.java.version}</maven.compiler.source>
  <maven.compiler.target>${target.java.version}</maven.compiler.target>
  <log4j.version>2.12.1</log4j.version>
</properties>
<dependencies>
  <dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.0.1</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-tools</artifactId>
    <version>2.0.1</version>
    <scope>provided</scope>
  </dependency>
 
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-core</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
  </dependency>
  <!-- Table program dependencies -->
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-api-java-bridge_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-planner_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-common</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
  </dependency>
  <!-- RocksDB state backend -->
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-statebackend-rocksdb_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>${hadoop.version}</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-hbase-base_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-hbase-2.2_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.9</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-common</artifactId>
    <version>2.2.1</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>2.2.1</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.jasypt</groupId>
    <artifactId>jasypt</artifactId>
    <version>1.9.3</version>
  </dependency>
  <dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.60</version>
  </dependency>
  <dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>3.7.0</version>
  </dependency>
  <dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid</artifactId>
    <version>1.1.16</version>
  </dependency>
  <dependency>
    <groupId>com.vividsolutions</groupId>
    <artifactId>jts-core</artifactId>
    <version>1.14.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.doris</groupId>
    <artifactId>flink-doris-connector-1.14_2.11</artifactId>
    <version>1.1.1</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>8.0.12</version>
  </dependency>
  <dependency>
    <groupId>com.ververica</groupId>
    <artifactId>flink-connector-mysql-cdc</artifactId>
    <version>2.2.1</version>
    <exclusions>
      <exclusion>
        <artifactId>flink-shaded-guava</artifactId>
        <groupId>org.apache.flink</groupId>
      </exclusion>
    </exclusions>
    <scope>provided</scope>
  </dependency>
 
</dependencies>
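
With every Flink, Hadoop, and HBase artifact above scoped as provided, the fat jar carries only the business code plus the compile-scoped libraries (commons-lang3, jasypt, fastjson, jedis, druid, jts-core, mysql-connector-java). A minimal maven-shade-plugin sketch for packaging such a jar; the plugin version and the main class below are assumptions, substitute your own:

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>3.2.4</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <filters>
              <filter>
                <!-- Strip signature files; leftover signatures break shaded jars -->
                <artifact>*:*</artifact>
                <excludes>
                  <exclude>META-INF/*.SF</exclude>
                  <exclude>META-INF/*.DSA</exclude>
                  <exclude>META-INF/*.RSA</exclude>
                </excludes>
              </filter>
            </filters>
            <transformers>
              <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                <!-- Hypothetical entry point; replace with your job's main class -->
                <mainClass>com.example.YourFlinkJobMain</mainClass>
              </transformer>
            </transformers>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

Because provided-scope dependencies are excluded from shading by default, the resulting jar stays small and none of its Flink classes can collide with the cluster's own.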
2. Additional packages

During job submission, you may run into the following error:

Setting HADOOP_CONF_DIR=/etc/hadoop/conf because no HADOOP_CONF_DIR or HADOOP_CLASSPATH was set.
Setting HBASE_CONF_DIR=/etc/hbase/conf because no HBASE_CONF_DIR was set.
2023-11-06 15:48:25,402 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: taskmanager.memory.process.size=6144m
2023-11-06 15:48:25,402 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: taskmanager.memory.managed.fraction=0
2023-11-06 15:48:25,402 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: taskmanager.memory.jvm-metaspace.size=256m
2023-11-06 15:48:25,403 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: taskmanager.memory.network.fraction=0.02
2023-11-06 15:48:25,403 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: jobmanager.memory.process.size=1024m
2023-11-06 15:48:25,403 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: yarn.application.name=【实时指标-区域(5分钟滚动30分钟)】hidcp-indexes-area
2023-11-06 15:48:25,403 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: jobmanager.memory.jvm-metaspace.size=256m
====配置文件解密完成=====
====配置文件加密完成=====
2023-11-06 15:48:26,031 WARN  org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The configuration directory ('/data/bigdata/tbds/usr/hdp/2.2.0.0-2041/flink-1.15.0/conf') already contains a LOG4J config file.If you want to use logback, then please delete or rename the log configuration file.
2023-11-06 15:48:26,263 INFO  org.apache.hadoop.yarn.client.AHSProxy                       [] - Connecting to Application History server at tbds-46-70-198-17/46.70.198.17:10200
2023-11-06 15:48:26,269 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
2023-11-06 15:48:26,281 WARN  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Job Clusters are deprecated since Flink 1.15. Please use an Application Cluster/Application Mode instead.
2023-11-06 15:48:26,508 INFO  org.apache.hadoop.conf.Configuration                         [] - resource-types.xml not found
2023-11-06 15:48:26,509 INFO  org.apache.hadoop.yarn.util.resource.ResourceUtils           [] - Unable to find 'resource-types.xml'.
2023-11-06 15:48:26,519 WARN  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Neither the HADOOP_CONF_DIR nor the YARN_CONF_DIR environment variable is set. The Flink YARN Client needs one of these to be set to properly load the Hadoop configuration for accessing YARN.
2023-11-06 15:48:26,560 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster specification: ClusterSpecification{masterMemoryMB=1024, taskManagerMemoryMB=6144, slotsPerTaskManager=5}
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
2023-11-06 15:48:26,679 INFO  org.apache.hadoop.conf.Configuration.deprecation             [] - No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2023-11-06 15:48:26,698 INFO  org.apache.hadoop.conf.Configuration.deprecation             [] - No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2023-11-06 15:48:26,910 WARN  org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory      [] - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
2023-11-06 15:48:27,123 INFO  org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient [] - SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
... (37 more nearly identical SaslDataTransferClient INFO lines omitted) ...
2023-11-06 15:48:43,243 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Adding delegation token to the AM container.
2023-11-06 15:48:43,244 INFO  org.apache.flink.yarn.Utils                                  [] - Obtaining delegation tokens for HDFS and HBase.
2023-11-06 15:48:43,262 INFO  org.apache.hadoop.hdfs.DFSClient                             [] - Created token for haixin: HDFS_DELEGATION_TOKEN owner=haixin/tbds.instance@TBDS-LWI9T5PK, renewer=yarn, realUser=, issueDate=1699256923276, maxDate=1699861723276, sequenceNumber=64314, masterKeyId=90 on ha-hdfs:hdfsCluster
2023-11-06 15:48:43,265 INFO  org.apache.hadoop.mapreduce.security.TokenCache              [] - Got dt for hdfs://hdfsCluster; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:hdfsCluster, Ident: (token for haixin: HDFS_DELEGATION_TOKEN owner=haixin/tbds.instance@TBDS-LWI9T5PK, renewer=yarn, realUser=, issueDate=1699256923276, maxDate=1699861723276, sequenceNumber=64314, masterKeyId=90)
2023-11-06 15:48:43,266 INFO  org.apache.flink.yarn.Utils                                  [] - Attempting to obtain Kerberos security token for HBase
2023-11-06 15:48:43,266 INFO  org.apache.flink.yarn.Utils                                  [] - HBase is not available (not packaged with this application): ClassNotFoundException : "org.apache.hadoop.hbase.HBaseConfiguration".
2023-11-06 15:48:43,280 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Submitting application master application_1694000934158_64042
2023-11-06 15:48:43,297 INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl    [] - Timeline service address: tbds-46-70-198-17:8188
java.lang.NoClassDefFoundError: javax/ws/rs/core/Link$Builder. It appears that the timeline client failed to initiate because an incompatible dependency in classpath. If timeline service is optional to this client, try to work around by setting yarn.timeline-service.enabled to false in client configuration.
    at java.lang.Class.getDeclaredConstructors0(Native Method)
    at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
    at java.lang.Class.getConstructor0(Class.java:3075)
    at java.lang.Class.newInstance(Class.java:412)
    at javax.ws.rs.ext.FactoryFinder.newInstance(FactoryFinder.java:65)
    at javax.ws.rs.ext.FactoryFinder.find(FactoryFinder.java:117)
    at javax.ws.rs.ext.RuntimeDelegate.findDelegate(RuntimeDelegate.java:105)
    at javax.ws.rs.ext.RuntimeDelegate.getInstance(RuntimeDelegate.java:91)
    at javax.ws.rs.core.MediaType.<clinit>(MediaType.java:44)
    at com.sun.jersey.core.header.MediaTypes.<clinit>(MediaTypes.java:64)
    at com.sun.jersey.core.spi.factory.MessageBodyFactory.initReaders(MessageBodyFactory.java:182)
    at com.sun.jersey.core.spi.factory.MessageBodyFactory.initReaders(MessageBodyFactory.java:175)
    at com.sun.jersey.core.spi.factory.MessageBodyFactory.init(MessageBodyFactory.java:162)
    at com.sun.jersey.api.client.Client.init(Client.java:343)
    at com.sun.jersey.api.client.Client.access$000(Client.java:119)
    at com.sun.jersey.api.client.Client$1.f(Client.java:192)
    at com.sun.jersey.api.client.Client$1.f(Client.java:188)
    at com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193)
    at com.sun.jersey.api.client.Client.<init>(Client.java:188)
    at com.sun.jersey.api.client.Client.<init>(Client.java:171)
    at org.apache.hadoop.yarn.client.api.impl.TimelineConnector.serviceInit(TimelineConnector.java:122)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
    at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceInit(TimelineClientImpl.java:130)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getTimelineDelegationToken(YarnClientImpl.java:410)
    at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.addTimelineDelegationToken(YarnClientImpl.java:386)
    at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:305)
    at org.apache.flink.yarn.YarnClusterDescriptor.startAppMaster(YarnClusterDescriptor.java:1246)
    at org.apache.flink.yarn.YarnClusterDescriptor.deployInternal(YarnClusterDescriptor.java:619)
    at org.apache.flink.yarn.YarnClusterDescriptor.deployJobCluster(YarnClusterDescriptor.java:487)
    at org.apache.flink.client.deployment.executors.AbstractJobClusterExecutor.execute(AbstractJobClusterExecutor.java:82)
    at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2095)
    at org.apache.flink.client.program.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:188)
    at org.apache.flink.client.program.StreamContextEnvironment.execute(StreamContextEnvironment.java:119)
    at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1969)
    at com.hisense.hidcp.indexes.realtime.cityvehicle.area.AreaVehiclesMain.main(AreaVehiclesMain.java:121)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355)
    at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
    at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
    at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:836)
    at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:247)
    at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1078)
    at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1156)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
    at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1156)
2023-11-06 15:48:43,484 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cancelling deployment from Deployment Failure Hook
2023-11-06 15:48:43,489 INFO  org.apache.hadoop.yarn.client.AHSProxy                       [] - Connecting to Application History server at tbds-46-70-198-17/46.70.198.17:10200
2023-11-06 15:48:43,491 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Killing YARN application
2023-11-06 15:48:43,613 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Deleting files in hdfs://hdfsCluster/user/haixin/.flink/application_1694000934158_64042.
2023-11-06 15:48:43,614 INFO  org.apache.hadoop.conf.Configuration.deprecation             [] - No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
2023-11-06 15:48:43,617 INFO  org.apache.hadoop.conf.Configuration.deprecation             [] - No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS

Solution: the root cause is a JAX-RS version conflict. javax.ws.rs.core.Link (and its inner Builder class) was only introduced in JAX-RS 2.0, while the Jersey 1.x client visible in the stack trace (com.sun.jersey.*) ships the older JAX-RS 1.1 API, so the YARN timeline client fails to initialize. Add the JAX-RS 2.x API to the job's dependencies in compile scope, so that it is bundled into the fat jar:

<dependency>
    <groupId>javax.ws.rs</groupId>
    <artifactId>javax.ws.rs-api</artifactId>
    <version>2.1</version>
    <scope>compile</scope>
</dependency>
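
Alternatively, the error message itself suggests a workaround: if the client does not need the YARN timeline service, disable it instead of patching the classpath. A sketch, assuming the client reads the standard yarn-site.xml (here under /etc/hadoop/conf, as shown in the log above; the exact location depends on your TBDS deployment):

<!-- yarn-site.xml on the submitting client: skip timeline-service initialization -->
<property>
  <name>yarn.timeline-service.enabled</name>
  <value>false</value>
</property>

With this set, the YARN client should no longer fetch a timeline delegation token during submitApplication, so the missing JAX-RS 2.0 classes are never loaded.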
