loadFileSystems error when calling a program that uses libhdfs

The code below is a small libhdfs test program.

#include <stdio.h>   /* fprintf */
#include <stdlib.h>  /* exit */
#include <string.h>  /* strlen */
#include <fcntl.h>   /* O_WRONLY, O_CREAT */
#include "hdfs.h"    /* libhdfs API */

int main(int argc, char **argv)
{
    hdfsFS fs = hdfsConnect("hdfs://labossrv14", 9000);
    if (!fs)
    {
        fprintf(stderr, "Failed to connect to HDFS!\n");
        exit(-1);
    }
    const char* writePath = "/libhdfs_test.txt";
    hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0, 0);
    if (!writeFile)
    {
        fprintf(stderr, "Failed to open %s for writing!\n", writePath);
        exit(-1);
    }
    const char* buffer = "Hello, libhdfs!";
    /* write the string including its trailing '\0' */
    tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer, strlen(buffer) + 1);
    if (hdfsFlush(fs, writeFile))
    {
        fprintf(stderr, "Failed to 'flush' %s\n", writePath);
        exit(-1);
    }
    hdfsCloseFile(fs, writeFile);
    hdfsDisconnect(fs);
    return 0;
}

It took me a lot of effort to get this code to compile, but the program does not work when I run it.
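For reference, a build command along the following lines is what this kind of program needs; the include and library paths here are illustrative (based on my Hadoop 2.5.2 and Oracle JDK 8 layout, with amd64 assumed for the JVM's server directory), so adjust them to your installation:

gcc libhdfs_test.c -o libhdfs_test \
    -I/home/junzhao/hadoop/hadoop-2.5.2/include \
    -L/home/junzhao/hadoop/hadoop-2.5.2/lib/native -lhdfs \
    -L/usr/lib/jvm/java-8-oracle/jre/lib/amd64/server -ljvm

At run time, both library directories also need to be on LD_LIBRARY_PATH. The error message I get is below.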

loadFileSystems error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
hdfsBuilderConnect(forceNewInstance=0, nn=labossrv14, port=9000, kerbTicketCachePath=(NULL), userName=(NULL)) error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
hdfsOpenFile(/libhdfs_test.txt): constructNewObjectOfPath error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
Failed to open /libhdfs_test.txt for writing!

I set all of this up by following the official documentation, and I suspect the problem is an incorrect CLASSPATH. My CLASSPATH is below; it combines the classpath generated by "hadoop classpath --glob" with the lib paths of the JDK and JRE.

export CLASSPATH=/home/junzhao/hadoop/hadoop-2.5.2/etc/hadoop:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/common/lib/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/common/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/hdfs:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/hdfs/lib/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/hdfs/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/yarn/lib/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/yarn/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar:/usr/lib/jvm/java-8-oracle/lib:/usr/lib/jvm/java-8-oracle/jre/lib:$CLASSPATH
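The export above was assembled roughly like this sketch (the JDK/JRE paths are the ones from my machine):

export CLASSPATH="$(hadoop classpath --glob):/usr/lib/jvm/java-8-oracle/lib:/usr/lib/jvm/java-8-oracle/jre/lib:$CLASSPATH"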

Does anyone have a good solution? Thanks!

asked Nov 20 '25 by Junyao Zhao


1 Answer

I read through the tutorials and some earlier questions again, and finally found that the problem is caused by the fact that JNI does not expand the wildcards in CLASSPATH. So I simply put all the jars into CLASSPATH explicitly, and the problem was solved. Since the command "hadoop classpath --glob" also generates wildcards, this explains why the official documentation says:

It is not valid to use wildcard syntax for specifying multiple jars. It may be useful to run hadoop classpath --glob or hadoop classpath --jar to generate the correct classpath for your deployment.

I misunderstood this paragraph yesterday.
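Concretely, a wildcard-free CLASSPATH can be built by listing every jar explicitly, for example with a shell loop like this sketch (HADOOP_HOME below is my installation path; adjust it to yours):

HADOOP_HOME=/home/junzhao/hadoop/hadoop-2.5.2
# start from the config directory, then append every Hadoop jar explicitly
CLASSPATH=$HADOOP_HOME/etc/hadoop
for jar in $(find $HADOOP_HOME/share/hadoop -name '*.jar'); do
    CLASSPATH=$CLASSPATH:$jar
done
export CLASSPATH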

See also Hadoop C++ HDFS test running Exception and Can JNI be made to honour wildcard expansion in the classpath?

answered Nov 22 '25 by Junyao Zhao


