This article covers the Hive error "Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask" that appears after a connection has been open for a while. Along the way it also covers several related errors: the Eclipse error org.apache.maven.archiver.MavenArchiver.getManifest(org.apache.maven.project after upgrading the Maven plugin, "Encountered IOException running create table job: : org.apache.hadoop.hive.conf.HiveConf", "ERROR hive.HiveConfig: Could not load org.apache.hadoop.hive.conf.HiveConf. Make sure HIVE_CONF_D...", and "Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask".
Contents:
- hive: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask after the connection has been open for a while
- Eclipse reports org.apache.maven.archiver.MavenArchiver.getManifest(org.apache.maven.project after upgrading to a newer Maven plugin
- Encountered IOException running create table job: : org.apache.hadoop.hive.conf.HiveConf
- ERROR hive.HiveConfig: Could not load org.apache.hadoop.hive.conf.HiveConf. Make sure HIVE_CONF_D...
- Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
hive: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask after the connection has been open for a while
The likely cause is that no memory settings for the map and reduce tasks were supplied when the connection was opened. Passing them in the JDBC URL fixes it:
jdbc:hive2://xxxxxxx/yyyyy?mapreduce.map.memory.mb=3809;mapreduce.map.java.opts=-Xmx3428m;mapreduce.reduce.memory.mb=2560;mapreduce.reduce.java.opts=-Xmx2304m
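For example (a minimal sketch; the HiveServer2 host, port, and database are placeholders, and the URL is quoted so the shell does not split it at the semicolons), the same settings can be supplied when connecting through beeline:
beeline -u "jdbc:hive2://hiveserver-host:10000/default?mapreduce.map.memory.mb=3809;mapreduce.map.java.opts=-Xmx3428m;mapreduce.reduce.memory.mb=2560;mapreduce.reduce.java.opts=-Xmx2304m"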
Eclipse reports org.apache.maven.archiver.MavenArchiver.getManifest(org.apache.maven.project after upgrading to a newer Maven plugin
After upgrading Eclipse to a newer version of the Maven plugin, importing jar packages fails with the following error:
org.apache.maven.archiver.MavenArchiver.getManifest(org.apache.maven.project
The fix: go to Help -> Install New Software, click Add, and enter the following update site URL:
http://repo1.maven.org/maven2/.m2e/connectors/m2eclipse-mavenarchiver/0.17.2/N/LATEST/
Click through the installation wizard; when prompted, restart Eclipse. After the restart, right-click the project and choose Maven -> Update Project, and the error disappears.
Encountered IOException running create table job: : org.apache.hadoop.hive.conf.HiveConf
19/08/09 16:30:16 ERROR hive.HiveConfig: Could not load org.apache.hadoop.hive.conf.HiveConf. Make sure HIVE_CONF_DIR is set correctly.
19/08/09 16:30:16 ERROR tool.CreateHiveTableTool: Encountered IOException running create table job: java.io.IOException: java.lang.ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf
at org.apache.sqoop.hive.HiveConfig.getHiveConf(HiveConfig.java:50)
at org.apache.sqoop.hive.HiveImport.getHiveArgs(HiveImport.java:392)
at org.apache.sqoop.hive.HiveImport.executeExternalHiveScript(HiveImport.java:379)
at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:337)
at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:241)
at org.apache.sqoop.tool.CreateHiveTableTool.run(CreateHiveTableTool.java:57)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf
The error occurs when using Sqoop to create a Hive table with the same structure as a MySQL table:
sqoop create-hive-table --connect jdbc:mysql://10.136.198.112:3306/questions --username root --password root --table user --hive-table hhive
The command fails with the ClassNotFoundException above.
Fix: add the Hive jars to the Hadoop classpath via an environment variable. Append this line to the end of /etc/profile:
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HIVE_HOME/lib/*
Then reload the file: source /etc/profile
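An optional sanity check, not part of the original steps: hadoop classpath prints the effective Hadoop classpath, so the Hive jars should now show up in its output.
hadoop classpath | tr ':' '\n' | grep -i hive   # entries under $HIVE_HOME/lib should appear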
Running the sqoop command again now succeeds, but no table appears in Hive.
Fix: copy the hive-site.xml configuration file from Hive's conf directory to Sqoop's conf directory and run the command once more. This time it succeeds,
and the table is created in Hive.
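The copy step as a single command (a sketch assuming HIVE_HOME and SQOOP_HOME point at the two installations and both use the standard conf/ layout):
cp $HIVE_HOME/conf/hive-site.xml $SQOOP_HOME/conf/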
ERROR hive.HiveConfig: Could not load org.apache.hadoop.hive.conf.HiveConf. Make sure HIVE_CONF_D...
When using Sqoop to import data from a MySQL table into Hive, the error in the title appears.
Fix: append export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HIVE_HOME/lib/* to the end of /etc/profile, then reload the configuration with source /etc/profile (the same fix as in the previous section).
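For reference, a representative import command that exercises this code path (a hypothetical example reusing the connection details from the create-hive-table command above; --hive-import tells Sqoop to load the fetched rows into the named Hive table):
sqoop import --connect jdbc:mysql://10.136.198.112:3306/questions --username root --password root --table user --hive-import --hive-table hhive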
Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:542)
17/01/20 09:39:23 ERROR client.RemoteDriver: Shutting down remote driver due to error: java.lang.InterruptedException
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:623)
    at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:170)
    at org.apache.spark.scheduler.cluster.YarnClusterScheduler.postStartHook(YarnClusterScheduler.scala:33)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:595)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
    at org.apache.hive.spark.client.RemoteDriver.<init>(RemoteDriver.java:169)
    at org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:556)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:542)
17/01/20 09:39:23 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: Uncaught exception: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested virtual cores < 0, or requested virtual cores > max configured, requestedVirtualCores=4, maxVirtualCores=2
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:258)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:226)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndvalidateRequest(SchedulerUtils.java:233)
    at org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.normalizeAndValidateRequests(RMServerUtils.java:97)
    at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:504)
    at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
    at org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
)
17/01/20 09:39:23 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
17/01/20 09:39:24 INFO yarn.ApplicationMaster: Deleting staging directory .sparkStaging/application_1484288256809_0021
17/01/20 09:39:24 INFO storage.DiskBlockManager: Shutdown hook called
17/01/20 09:39:24 INFO util.ShutdownHookManager: Shutdown hook called
17/01/20 09:39:24 INFO util.ShutdownHookManager: Deleting directory /yarn/nm/usercache/anonymous/appcache/application_1484288256809_0021/spark-3f3ac5b0-5a46-48d7-929b-81b7820c9e81/userFiles-af94b1af-604f-4423-b1e4-0384e372c1f8
17/01/20 09:39:24 INFO util.ShutdownHookManager: Deleting directory /yarn/nm/usercache/anonymous/appcache/application_1484288256809_0021/spark-3f3ac5b0-5a46-48d7-929b-81b7820c9e81
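The decisive line is the InvalidResourceRequestException near the end: the job asked YARN for containers with 4 virtual cores while the cluster's per-container maximum is 2 (requestedVirtualCores=4, maxVirtualCores=2). The original log stops there, so the following is an inferred remedy rather than the post's stated solution: either lower the cores Hive on Spark requests, for example in the Hive session
set spark.executor.cores=2;
or raise yarn.scheduler.maximum-allocation-vcores in yarn-site.xml to at least the requested value and restart the ResourceManager.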
That concludes this look at the Hive error "Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask" after a connection has been open for a while. Thanks for reading; more information on the related errors covered above (the Eclipse Maven plugin issue, the HiveConf ClassNotFoundException, and the SparkTask return code 1) can be found by searching this site.