KERBEROS_REALM is unset when creating a shell on HDFS pods #763

@Jimvin

Description

Affected Stackable version

25.11

Affected Apache HDFS version

3.4.2

Current and expected behavior

When I open a shell on a pod and run hdfs haadmin -getAllServiceState to check the NameNode HA status, I receive an error message:

stackable@simple-hdfs-namenode-default-0 /stackable/hadoop-3.4.2-stackable25.11.0 $ hdfs haadmin -getAllServiceState
2026-03-10 14:28:00,336 WARN  ipc.Client (Client.java:run(749)) - Exception encountered while connecting to the server simple-hdfs-namenode-default-0.simple-hdfs-namenode-default.default.svc.cluster.local/10.1.17.125:8020
javax.security.sasl.SaslException: Bad Kerberos server principal configuration [Caused by java.lang.IllegalArgumentException: Server has invalid Kerberos principal: nn/simple-hdfs.default.svc.cluster.local@KNAB.COM, expecting: nn/simple-hdfs.default.svc.cluster.local@${env.KERBEROS_REALM}]

core-site.xml references the environment variable KERBEROS_REALM (as ${env.KERBEROS_REALM}) in place of the literal Kerberos realm name, but this variable is not set in interactive shells, so Hadoop cannot expand the server principal.
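For illustration, the unexpanded principal likely comes from a property like the fragment below (the property name and its placement are assumptions inferred from the error message; Hadoop substitutes ${env.VAR} patterns in configuration values from the process environment at load time):

```xml
<!-- Illustrative only: property name assumed from the "nn/" principal in the error. -->
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <!-- Hadoop replaces ${env.KERBEROS_REALM} with the value of that environment
       variable; in a shell where it is unset, the literal string leaks through. -->
  <value>nn/simple-hdfs.default.svc.cluster.local@${env.KERBEROS_REALM}</value>
</property>
```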

Possible solution

Set the KERBEROS_REALM environment variable in the container so that command-line tools resolve the correct realm name.
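As an interim workaround sketch (the krb5.conf path is an assumption; a Stackable pod may mount it elsewhere, so adjust as needed), the realm can be read from the Kerberos client configuration and exported before invoking the CLI:

```shell
#!/bin/sh
# Workaround sketch: derive the realm from krb5.conf and export it so
# Hadoop's ${env.KERBEROS_REALM} substitution in core-site.xml resolves.
# Assumption: the Kerberos config is at /etc/krb5.conf (or $KRB5_CONFIG).
KRB5_CONF="${KRB5_CONFIG:-/etc/krb5.conf}"

# Pull the first default_realm entry out of the [libdefaults] section.
KERBEROS_REALM="$(sed -n 's/^[[:space:]]*default_realm[[:space:]]*=[[:space:]]*//p' "$KRB5_CONF" 2>/dev/null | head -n 1)"
export KERBEROS_REALM

echo "KERBEROS_REALM=${KERBEROS_REALM}"
# Then run the admin command as usual:
# hdfs haadmin -getAllServiceState
```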

Additional context

No response

Environment

No response

Would you like to work on fixing this bug?

None
