Forum

Information and discussion related to the Kognitio on Hadoop product

"NO RUNNING DAEMONS FOR INIT COMMAND!"

by markc » Fri Oct 28, 2016 11:15 am

I am bringing up a kodoop cluster and am getting this error:

Logging startup to startup.T_2016-10-26_23:13:55_BST.
--> Cleaning up unwanted files/processes.
--> No processes stopped on one or more nodes.
--> Examining system components.
--> Configuring WX2 software.
Generation results:
WARNING: New boota is the same as new bootb.
--> Initialising internal storage.
NO RUNNING DAEMONS FOR INIT COMMAND!
Either no daemons are configured or they exited with errors.
Unable to boot without at least 1 working daemon.

What has gone wrong, and how can I correct it?

Re: "NO RUNNING DAEMONS FOR INIT COMMAND!"

by markc » Fri Oct 28, 2016 11:22 am

One possible cause is that the default ulimit for the yarn user does not allow enough processes: the daemons fail during startup and the Yarn container eventually exits.

As noted in the "Setting user limits" subsection of "Preparing your Hadoop cluster" in http://www.kognitio.com/forums/Getting% ... Hadoop.pdf:
Some Linux distributions ship with a configuration that sets ulimit values for users. Most of these are not a problem, but the 'nproc' limit (max user processes, -u) can cause problems when running the Kognitio software because it also counts threads. The Kognitio software is aggressively multi-threaded and can often exceed this limit.

This limit should be set to a large number (minimum 100,000) for any user under which the Kodoop yarn tasks will run. Typically this will be the 'yarn' user but it might also be the edge node user if your Hadoop cluster is configured to run Yarn jobs with setuid.

To change the limit, edit the /etc/security/limits.conf file or add/edit a file in /etc/security/limits.d. The details will vary depending on your Linux distribution and cluster setup, but an example is:

$ cat /etc/security/limits.d/99-kognitio.conf
# increase the process limit for the yarn user.
yarn soft nproc 100000

Once you have changed the limits, restart your Yarn service so that it picks up the new values.
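As a quick sanity check, something like the sketch below confirms the nproc limit that a shell actually sees (the 100000 threshold is the minimum recommended above; to check the yarn user itself rather than your own account, you would run it under that user, e.g. via sudo):

```shell
#!/bin/sh
# Sketch: report the max-user-processes (nproc) limit for the current shell.
limit=$(ulimit -u)
echo "current nproc limit: $limit"

# Kognitio recommends at least 100000 for the user running Kodoop Yarn tasks.
if [ "$limit" != "unlimited" ] && [ "$limit" -lt 100000 ]; then
  echo "WARNING: nproc limit is below 100000; Kognitio daemons may fail to start"
fi
```

If the warning still appears in a fresh login shell for the yarn user after editing limits.conf, the pam_limits module may not be applied to that login path, which is worth checking before restarting Yarn again.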
