Release: 1.5 - This document was generated 2014-12-05
Copyright © 2014 init.at informationstechnologie GmbH
Abstract
This document is the official documentation for CORVUS® and NOCTUA®. It explains the general concepts of the software, gives an overview of its components and walks you through various installation and administration tasks; the appendix documents and explains the stable part of the API.
Although you are holding only one documentation in your hands (or reading it on screen), CORVUS® and NOCTUA® are not exactly the same software. In fact, NOCTUA® is part of CORVUS®. But why only one documentation for two different pieces of software? Because the two share so many parts and functions, it makes no sense to write two documentations. Instead of two nearly identical documentations repeating much of the same content, we decided to provide a single documentation with clearly structured content.
The purpose of CORVUS® is HPC cluster management. It allows administrators to set up and manage a large number of nodes in clusters, or even more than one cluster at once.
Especially in cooperation with NOCTUA® it offers both management and monitoring of your cluster, two essential tasks every HPC administrator sooner or later has to think about.
NOCTUA® is an extensive software package for monitoring devices. Monitoring means observing, recording, collecting and displaying different aspects of hardware and software. These monitored aspects can be close to the hardware, like CPU temperature, CPU voltage, fan speed or present and missing network devices, but also close to the services running on the monitored machines, like SSH daemons, Postfix daemons, HTTP services or simply the availability of devices via ping.
Not only the monitoring of devices is possible, but also different methods of reaction when predefined limits are reached. And best of all, NOCTUA® is free software, licensed under the GNU GPL 2.0 license.
You can find more information about free software and its benefits at http://www.fsf.org
Even though the main task of CORVUS® and NOCTUA® is to ease the configuration and administration of icinga, there are some special and unique features other software solutions have not implemented.
The following list contains the exclusive components that make our software unique on the software market:
Web front-end - User-friendly and easy to use web front-end to access all data, configurations and settings.
Livestatus - Graphical real-time display of the monitored data of devices or clusters. No more need to interpret raw data.
Graphing - Beautiful graphing of collected data, compound graphs for device groups or whole clusters, aggregation of graphs and many options to control the output.
Peering - Possibility to connect monitored devices to peers. Displays the whole network topology of connected devices.
Weather map [WIP] - Fancy graphical real-time image map to simply show the data flow of devices.
Localisation [WIP] - Integration of collected data into Google Maps™. Makes it easy to see cluster/device status at a glance.
Image Maps [WIP] - Option to upload user images or photos of cluster infrastructure, plant layout or server racks. Status data of clusters and devices can be displayed as an overlay on the image.
User management - Administrator tool to manage groups, users and the corresponding permissions.
Package install - Option to install system packages over the web front-end. (The operating system package management takes care of the exact installation procedure.)
RMS - Resource Management System - Job management system allowing users to easily handle their jobs via the web front-end.
Virtual desktops - Automatic virtual desktop management system. Full access to remote machines, displayed inside your favorite browser.
License Management - License management system to keep track of used, unused or locked licenses, with graphical output and a history function.
In this section you will find out what technical requirements CORVUS® and NOCTUA® have.
Like every other software, CORVUS® and NOCTUA® have certain system requirements. Because CORVUS® and NOCTUA® are free software, anybody with enough programming knowledge is able to port them to other free software systems.
The good news: you don't have to port anything if you already use one of the following Linux distributions:
Debian
Ubuntu
CentOS
openSUSE
SLES
For exact versions please take a look at Chapter 2, Installation.
Monitoring configurations are stored in databases for faster access, and therefore faster reaction times and more flexible administration of the data.
CORVUS® and NOCTUA® use Django as their database interface, so every database compatible with Django can be used. The recommended database is PostgreSQL (mostly due to license issues); another well tested one is MySQL.
CORVUS® packages are available for the following operating systems:
Debian Squeeze (6.x)
Debian Wheezy (7.x)
Ubuntu 12.04
CentOS 6.5
openSUSE (12.1, 12.3, 13.1)
SLES 11 (SP1, SP2, SP3)
It may well be possible to get the software up and running on other platforms, but this type of operation is not tested at all.
If you are running one of the supported operating systems, add the repository matching your installation to your package management software. Refer to the manual of your package management software on how to do that.
There are two main repositories and one extra repository you have to deal with to install either CORVUS® or NOCTUA®.
The main repository for CORVUS®. It includes, among others, mother, cluster-server, package-server, package-client, discovery-server and collectd-init.
This is the main repository for NOCTUA®. It includes, among others, noctua and init-monit.
This is the extra repository which contains packages CORVUS® and NOCTUA® are based on, like python-modules, icinga, nginx and uwsgi.
Repositories are available for the oldstable (1.x), stable (2.x) and master (devel) versions on the operating systems mentioned above. The system running on your machines and the version of CORVUS® or NOCTUA® you want determine which repository you must use in your package manager.
The current development version, containing the newest functions and modules. Very fast update and change cycle due to active development. Sometimes a bug may slip in, but usually it works fine. From time to time it is merged into stable.
The current stable version for productive environments. Most features and functions are included and there are no known bugs.
This is the oldest available software version. No new features or functions are included, and updates or changes are made only for security issues.
Based on the operating system, repository and desired software version mentioned above, the resulting repositories can be added.
There are two different ways to add the new repositories for monitoring software by init.at™ to the operating system. They can be added all at once in one central file, or in a repository directory. For each operating system there is a specific repository directory.
Table 2.1. Repository directories for Debian based systems

Operating system | Repository directory
---|---
Debian Wheezy | /etc/apt/sources.list.d/
Debian Squeeze | /etc/apt/sources.list.d/
Ubuntu 12.04 | /etc/apt/sources.list.d/
Below you can see some examples of sources.list content.
These are the lines you must add to your /etc/apt/sources.list.
The relevant parts for a deb based package manager look like this for the devel version on Wheezy:
deb http://www.initat.org/cluster/DEBs/debian_wheezy/cluster-devel wheezy main
deb http://www.initat.org/cluster/DEBs/debian_wheezy/monit-devel wheezy main
deb http://www.initat.org/cluster/DEBs/debian_wheezy/extra wheezy main
The relevant parts for a deb based package manager look like this for the stable version on Wheezy:
deb http://www.initat.org/cluster/DEBs/debian_wheezy/cluster-2.0 wheezy main
deb http://www.initat.org/cluster/DEBs/debian_wheezy/monit-2.0 wheezy main
deb http://www.initat.org/cluster/DEBs/debian_wheezy/extra wheezy main
The relevant parts for a deb based package manager look like this for the oldstable version on Wheezy:
deb http://www.initat.org/cluster/DEBs/debian_wheezy/cluster-1.0 wheezy main
deb http://www.initat.org/cluster/DEBs/debian_wheezy/monit-1.0 wheezy main
deb http://www.initat.org/cluster/DEBs/debian_wheezy/extra wheezy main
Of course, all of the above is also true for Debian Squeeze; just replace wheezy with squeeze and you are done.
The second way to add repositories to the system is to put *.list files into your repository directory. On Debian this directory is /etc/apt/sources.list.d/.
With wget you are able to download the *.list file into your repository directory. Go into /etc/apt/sources.list.d/ and enter this command to download the repository file:
wget http://www.initat.org/cluster/repository_files/initat_debian_wheezy_devel.list
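After downloading the file, refresh the package index:

apt-get update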
For Ubuntu 12.04 you can also add the repositories either in one file, /etc/apt/sources.list, or in the repository directory /etc/apt/sources.list.d/.
deb http://www.initat.org/cluster/DEBs/ubuntu_12.04/cluster-devel precise main
deb http://www.initat.org/cluster/DEBs/ubuntu_12.04/monit-devel precise main
deb http://www.initat.org/cluster/DEBs/ubuntu_12.04/extra precise main

deb http://www.initat.org/cluster/DEBs/ubuntu_12.04/cluster-2.0 precise main
deb http://www.initat.org/cluster/DEBs/ubuntu_12.04/monit-2.0 precise main
deb http://www.initat.org/cluster/DEBs/ubuntu_12.04/extra precise main
Almost the same procedure as for Debian applies to Ubuntu; the only difference is the link.
With wget you are able to download the *.list file into your repository directory. Go into /etc/apt/sources.list.d/ and enter this command to download the repository file:
wget http://www.initat.org/cluster/repository_files/initat_ubuntu_1204_devel.list
Debian and Ubuntu use a different package manager than CentOS, openSUSE or SLES. For that reason sources.list does not exist on rpm based operating systems. Instead there are several files for repository management, not just one. All relevant repository files reside in the directory /etc/zypp/repos.d/.
[cluster_devel_remote]
name=cluster_devel_remote
enabled=1
autorefresh=0
baseurl=http://www.initat.org/cluster/RPMs/suse_13.1/cluster-devel
type=rpm-md

[monit_devel_remote]
name=monit_devel_remote
enabled=1
autorefresh=0
baseurl=http://www.initat.org/cluster/RPMs/suse_13.1/monit-devel
type=rpm-md

[init-extra]
name=init-extra
enabled=1
autorefresh=0
baseurl=http://www.initat.org/cluster/RPMs/suse_13.1/extra
type=rpm-md
As mentioned before, you need at least two main repositories and one extra repository. Either create the files with your favorite editor or use the zypper command.
The zypper command for adding a repository is:
zypper ar http://www.initat.org/cluster/RPMs/suse_13.1/cluster-devel cluster-devel
You can use the same pattern for other SUSE versions; just replace suse_13.1 with your desired version, for example suse_12.3.
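Assuming the same URL scheme as in the repository files above, all three repositories can be added like this (the aliases are examples):

zypper ar http://www.initat.org/cluster/RPMs/suse_13.1/cluster-devel cluster-devel
zypper ar http://www.initat.org/cluster/RPMs/suse_13.1/monit-devel monit-devel
zypper ar http://www.initat.org/cluster/RPMs/suse_13.1/extra init-extra
zypper ref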
Alternatively, it is possible to download the repository files directly from the internet instead of editing files manually. There are two URLs you can get repository files from:
For deb repositories look at http://www.initat.org/cluster/DEBs/
For rpm repositories look at http://www.initat.org/cluster/RPMs/
The repository directory is /etc/yum.repos.d/. Place your desired *.repo files inside this directory, do a yum check-update and you are ready to install CORVUS® / NOCTUA®.
[initat_cluster]
autorefresh=1
enabled=1
type=rpm-md
name=initat_cluster
baseurl=http://www.initat.org/cluster/RPMs/rhel_6.2/cluster-devel

[initat_extra]
autorefresh=1
enabled=1
type=rpm-md
name=initat_extra
baseurl=http://www.initat.org/cluster/RPMs/rhel_6.2/extra

[initat_monit]
autorefresh=1
enabled=1
type=rpm-md
name=initat_monit
baseurl=http://www.initat.org/cluster/RPMs/rhel_6.2/monit-devel
Before installing packages make sure to remove the following conflicting packages:
nginx
uwsgi
memcached
Install the package containing the at daemon before attempting to install CORVUS® packages. Then install the following packages:
cluster-backbone-sql
webfrontend
host-monitoring
nginx-init
uswgi-init
cluster-server
mother
package-server
cluster-config-server
All of the above come from the newly added repositories; they pull in all the necessary dependencies to use CORVUS®. Ignore the output from the package post-install scripts on how to populate the database. Set up the following processes to be started at your default runlevel (this is done by the setup_noctua.sh script):
nginx
uwsgi
host-monitoring
at
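On SysV-init based systems this can be done with chkconfig; a sketch using the service names from the list above (the at daemon's init script may be called atd on your system):

chkconfig nginx on
chkconfig uwsgi-init on
chkconfig host-monitoring on
chkconfig atd on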
Refer to the documentation of your database system on how to create users and databases.
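For PostgreSQL, for example, a user and a database can be created like this (user and database names are placeholders, not prescribed by the software):

su - postgres
createuser -P corvus          # -P prompts for the new user's password
createdb -O corvus corvus_db  # create a database owned by the new user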
The database access data is stored in /etc/sysconfig/cluster/db.cf; a sample file is provided under /etc/sysconfig/cluster/db.cf.sample. If you want to connect via a local socket, leave DB_HOST empty. Fill in the user and database information.
Every daemon and process of CORVUS® uses this file to gain access to the database. The file has to be readable for the following system entities:
The user of the uwsgi-processes (wwwrun on SUSE systems)
The system group idg
A typical set of rights would look like this:
-rw-r----- 1 wwwrun idg 156 May 7 2013 /etc/sysconfig/cluster/db.cf
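The owner, group and mode shown above can be set with:

chown wwwrun:idg /etc/sysconfig/cluster/db.cf
chmod 640 /etc/sysconfig/cluster/db.cf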
Nearly every aspect of CORVUS® and NOCTUA® is administrated via the web front-end, so the next steps after the initial database setup are done there. The following system processes are needed to access the web front-end:
nginx (the web-server, started via /etc/init.d/nginx start)
uwsgi (serves the application, started via /etc/init.d/uwsgi-init start)
memcached (for storing session data, started via /etc/init.d/memcached start)
In order to check if these processes are running, simply issue the command /opt/cluster/sbin/check_scripts.py --system memcached nginx uwsgi-init, which should give output similar to:
Name       type   status
------------------------
memcached  system running
nginx      system running
uwsgi-init system running
The web front-end for NOCTUA® can be accessed via http://SERVERNAME/cluster or via http://IP_ADDRESS:18080/cluster/
The web front-end for CORVUS® can be accessed via http://SERVERNAME/cluster or via http://IP_ADDRESS:8080/cluster/
For booting nodes you have to have an NFS server up and running. The entries in your /etc/exports file are added automatically by the cluster software.
As an alternative to the usual installation of binary packages via repositories and the operating system package manager (zypper, apt-get or yum), you can install a virtual machine with CORVUS® or NOCTUA® already installed. We distribute two popular image file formats, for libvirt/qemu and for VMware. For information on how to set up your VM environment, please take a look at the corresponding documentation of your VM vendor.
The following steps have to be done to run a KVM libvirt/qemu virtual machine with preinstalled CORVUS®/NOCTUA®:
Download the KVM/libvirt image and move it into the right image directory, e.g. /usr/local/share/images/.
Copy an existing *.xml domain definition or create a new one.
Edit your new *.xml file and add or modify the required settings.
Define your new virtual machine.
If your machine is set up correctly, all you have to do is start the virtual machine and have fun with monitoring.
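With libvirt the last two steps typically look like this (the XML file and domain name are examples only):

virsh define corvus.xml   # register the domain described in the XML file
virsh start corvus        # boot the virtual machine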
From time to time new software packages are built and can be downloaded. Especially for the master development branch there are frequent updates which can be applied to get new functions and features or simply to fix some bugs. The update period for master is about every second day.
The stable branch gets less frequent updates than the master version. Because it is the stable branch, most updates for stable address security issues and bugfixes. Really big updates are done only when master is stable enough for productive environments. The update period is about 4-6 months.
The update procedure is very comfortable; it is based on the system's integrated package manager, for example zypper on openSUSE or apt-get on Debian.
The commands for updating/upgrading all installed software via the package manager are:
Refresh repositories and do a whole system upgrade on openSUSE:
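A typical invocation (exact options may vary with your setup):

zypper refresh && zypper update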
Refresh repositories and do a whole system upgrade on Debian:
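A typical invocation:

apt-get update && apt-get upgrade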
Of course, you are also able to update only single packages, for example the package handbook-init. The command looks similar to the command used to update all packages:
Refresh repositories and do a single package upgrade on openSUSE:
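For example:

zypper refresh && zypper update handbook-init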
Refresh repositories and do a single package upgrade on Debian:
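For example:

apt-get update && apt-get install --only-upgrade handbook-init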
For other distributions please look into your distribution's package management documentation.
The only thing you have to do is to set the nodes to boot from the network (PXE).
Go to /etc/sysconfig/cluster/cluster_license and set the required license entries to enabled="yes".
Most configuration work in CORVUS® that administrators have to do is accessed through a standard HTML compatible browser like Mozilla Firefox™ or Google Chrome™. Once CORVUS® is installed and all required services are running, all you have to do is connect to the server via the browser.
Type http://SERVER-IP-ADDRESS:18080/cluster/ or http://SERVERNAME/cluster/ into your browser address bar to connect to the server. When you connect to the server for the first time, you are redirected to the account info page.
You really have to change your password now. If you don't change it, CORVUS® keeps the password generated during the installation procedure, which you have never seen, and next time you will not be able to log in.
If you run the setup_noctua.sh script manually, a new password is generated. In this case you must look for the password in your shell output.
The NOCTUA® web front-end offers a very clear view. There are three areas you will work with:
Menu area (1)
Tree area (2)
Main area (3)
In the menu area you'll find submenus, buttons, the date and time, and the user section.
Submenus
Base
Users
Monitoring
Session
CORVUS® offers some additional menus:
RMS - Resource Management System
Cluster
Buttons
cluster server information
show cluster handbook as pdf
show index
number of background jobs
In the tree area you can find your device group tree and the associated devices. On top there is a search field and two buttons.
Search field
use selection button (green with arrow)
clear selection button (red with circle)
Group
FQDN (Fully Qualified Domain Name)
Category
All configuration and input takes place in the main area. According to the selected or preselected devices and settings, the corresponding page appears.
Figure 3.7. Possible main area
One possible view of the main area after selecting some devices in "device network"
The cluster server information button shows two overview tabs, one with information about defined cluster roles and one with information about the server.
Inside the upper tab there is a table showing the name, reachable IP and the defined cost of each of them. This tab is for display only.
The second tab deals with services providing special functionality to the server.
Inside this tab there is a table showing the following information:
Server information
Name of service
Type of service {node, server, system}
Kind of Check
Status showing whether the service is installed or not
Version number of the installed service
Number of processes started
Displays memory usage as a number and as a status bar
Button to apply actions to the services
After the installation of CORVUS® or NOCTUA®, the user admin and the group admingrp already exist. This is the user whose password you have to change after the first login into your freshly installed system.
The user admin has all possible rights and permissions to add, modify and delete devices/groups etc. The user admin is also able to reconfigure the database and, of course, to add or delete users.
If you want to set restrictions for some users or groups, for example for external staff, you have to create these new restricted users/groups with the following buttons:
To add a new group in user management, click the "create group" button, fill out the form and confirm your input by clicking the "Create" button.
The form is self-explanatory, but some inputs should be mentioned anyway:
Internal group ID
Sets basic permissions to get access to the selected device group
Another, extended form can be shown by clicking the newly created group in the user/group tree:
A more complex permission system appears.
A similar structure and procedure applies to creating new users.
Here as well, some fields should be mentioned:
Internal user ID
The superior group
Operating system group
Grants all rights and permissions, like those the admin owns
The permission system is divided into several parts which cover certain functions. Some permissions depend on other permissions; in other words, they are chained permissions. The more permissions users get, the more powerfully they can act. The user "admin" or "superuser" is the most powerful user; admin has all possible rights and permissions.
Below is a list of permissions and their functions.
background_job
Shows additional menu button:
Session Background Job Info
config
Shows additional menu button:
Base
Configurations
device
Shows the graphs tab for selected devices. Depends on the possibility to choose devices (access all devices)
Shows the disk tab for selected devices. Depends on the possibility to choose devices (access all devices)
Change basic settings (General) for selected devices. Depends on the possibility to choose devices (access all devices)
Shows new top-menu named Cluster
Shows the Config tab for selected devices. Depends on the possibility to choose devices (access all devices)
Shows the Category tab for selected devices. Depends on the possibility to choose devices (access all devices)
Shows new top-menu:
Base
Device connections
Shows the Location tab for selected devices. Depends on the possibility to choose devices (access all devices)
Shows 3 new tabs for selected devices:
Livestatus
Monconfig
MonHint
Shows new top menu content:
Base
device network. Depends on the possibility to choose devices (access all devices)
Shows the vars tab for selected devices and a new top menu:
Base
Device variables.
Depends on the possibility to choose devices (access all devices)
The main permission to show devices. Most of the above permissions depend on it. Shows existing devices in the device tree on the left.
group
...
image
...
kernel
...
mon_check_command
Shows new top menu content under:
Monitoring
Basic Setup / Build Info
network
...
...
package
Shows new top menu under:
Cluster
Package install. Additional software packages can be chosen and installed via this menu button.
partition_fs
...
user
Shows new top menu content under:
Session
Admin
...
Shows new top menu content under:
Base
Category tree
Shows two new top menu entries under:
Base
Create new device / Device tree
Shows new top menu content under:
Base
Domain name tree
...
The permission level defines what users are allowed to do. In combination with the permission itself, administrators are more flexible in assigning rights and permissions to users or groups.
Below are the four main permission levels which can be assigned.
Permits the user to read data. The user can't change, create or delete data.
Permits the user to change existing data. Includes the read-only level.
Permits the user to change and create new data. Deletion is not possible.
All permissions are granted.
Installation of packages via the web front-end is another helpful feature provided by CORVUS® and NOCTUA®. It allows you to install software packages on one or many systems over the web front-end, without the need to log in to each local machine and install packages manually on the command line.
Your CORVUS® or NOCTUA® server operates as the central package installation entity, stores its repositories in the database and can also distribute its repositories to the connected nodes.
It greatly eases software installation for less experienced users: a few clicks instead of typing long and cryptic terminal commands.
In this section you learn how to set up this feature, how to configure it and how to use it.
The two important services for this function are: package-client and package-server.
Before you are able to install packages via the web front-end, you have to configure your machines appropriately. Not only the server-side but also the client-side configuration is essential to make the installation and distribution of packages work.
In the top menu, go to Session → Settings. Enable the button for package installation (package) and reload the page.
Click your server device in the device tree on the left side, go into the "Config" tab, click on the blue arrow button and activate "package_server" in the dropdown menu.
Start the package-server by navigating to cluster server information and opening the lower dropdown menu with a click on the arrow. Push the "Action" button for package-server and choose start.
So far your server is ready for package installation. The clients/nodes also have to be prepared for package installation.
Make sure the package-client service is installed and running on the nodes/clients. To check the status of the package-client, use the check_cluster command. The status of package-client should be "running".
The last step of the package installation setup is to enter your server's (package-server's) IP address (or hostname) in /etc/packageserver on the client machine.
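For example (the address is a placeholder for your package server):

echo 192.168.1.10 > /etc/packageserver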
The main configuration file for the package-server is /etc/sysconfig/package-server. Its content should be self-explanatory and looks like this:
Table 5.1. package-server config options
option | default value | description
---|---|---
PID_NAME= | package-client/package-client | Name of PID files
KILL_RUNNING= | True | ...
USER= | idpacks | User name
GROUP= | idg | Group name
GROUPS= | ['idg'] | ...
LOG_DESTINATION= | uds:/var/lib/logging-server/py_log_zmq | Destination of log files
LOG_NAME= | package-server | Name of log file
SERVER_PUB_PORT= | 8007 | Server port for communication with clients
NODE_PORT= | 2003 | Client port for communication with the server
DELETE_MISSING_REPOS= | False | Whether to delete missing repos
The main configuration file for the package-client is /etc/sysconfig/package-client. Its content should be self-explanatory and looks like this:
Table 5.2. package-client config options
option | default value | description
---|---|---
PID_NAME= | package-client/package-client | Name of PID files
KILL_RUNNING= | True | ...
COM_PORT= | 2003 | Client port for communication with the server
SERVER_COM_PORT= | 8007 | Server port for communication with clients
LOG_DESTINATION= | uds:/var/lib/logging-server/py_log_zmq | Destination of log files
LOG_NAME= | package-client | Name of log file
NICE_LEVEL= | 15 | Nice level the log daemon runs at
MODIFY_REPOS= | False | Capability to modify repositories
PACKAGE_SERVER_FILE= | /etc/packageserver | File containing the package server address (see above)
PACKAGE_SERVER_ID_FILE= | /etc/packageserver_id | ...
Set "MODIFY_REPOS=False" to forbid repository modification.
There are two common ways to install additional packages:
Package installation with the operating system package manager
Package installation by uploading packages into a directory
Usually the first method is recommended for the standard installation of available packages. All software and packages your running system provides can be installed via "Package install". It starts your system package manager in the background (apt-get, yum, zypper) and installs the selected packages on the selected nodes.
In the top menu, go to Cluster → Package install.
Push the Rescan button to update your repositories.
Go to the Package search tab and search for the packages you want to install on the system.
If there are results, list all matching packages with the show button. In the list that appears below, choose your desired package version by pushing one of the buttons on the right (take exact/take latest).
Go to the Install tab, select the devices the package should be installed on and push the "attach" button.
On top, a new button "action" appears. Push the button, choose the "Target state" install and submit your settings. The package will be installed automatically on your selected nodes.
If your system does not provide a package you really want to install, there is another way to go. In this special case you can either download a fitting binary package from external sources and place it in the right directory, or you can compile and build your own package from source code.
Upload your package into the upload directory on your server: /opt/cluster/system/packages/
Execute the update script update_repo.sh in /opt/cluster/system/packages/ to refresh your repositories.
This script does:
#!/bin/bash
cd /opt/cluster/system/packages
createrepo .
yum clean all
yum makecache
You may have to "Sync to clients"/"Clear caches" to get the new repositories on all nodes.
Now, if you search for the uploaded package you should get some results. To install the uploaded package, follow the same procedure as for installing packages from the system package manager.
Download the source files and extract them.
Compile your software as usual and install it (./configure; make; make install).
Once your package is installed, use make_package.py to create a new *.rpm package.
Run update_repo.sh to refresh your repositories.
You may have to "Sync to clients"/"Clear caches" to get the new repositories on all nodes.
On top, a new button "action" appears. Push the button, choose the "Target state" install and submit your settings. The package will be installed automatically on your selected nodes.
To delete packages, do the following steps:
In the top menu navigate to Cluster → Package install and choose the Install tab.
Select the packages and the nodes to delete them from.
Push the Action button and choose erase from the Target state dropdown menu. To finish the deletion, click on the Submit button.
An essential aspect of CORVUS® is the job management system. The main reason for using clusters is the higher computing power available for calculating jobs. The calculation of data is split into pieces, and every node or slot can calculate each piece separately; this results in a higher calculation speed. The organisation of slots, clusters and job distribution is done by the SGE - Son of Grid Engine. SGE provides special commands and tools to control the jobs distributed to the nodes.
The RMS is the coupling between the SGE and our web front-end. With RMS enabled you are able to manage jobs without using any SGE commands.
As mentioned before, the RMS is a powerful addon for managing jobs on clusters. It consists of packages and services working together to provide management functions for submitted jobs.
Important parts of RMS are:
SGE part
SGE - Son of Grid Engine
Commandline tools like:
qdel
qstat
qacct
See the command reference or the manual page of sge_intro for the complete list of commands:
man sge_intro
init.at part
RMS-server.py - Server between SGE and Webfrontend
Webfrontend
Commandline tools like:
sjs
sns
Both commands, sjs and sns, are links to /opt/cluster/bin/sgestat.py.
The environment variables for setting up RMS can be found under /etc/:
/etc/sge_cell - Name of the SGE cell
/etc/sge_server - Hostname or IP address of the SGE server
/etc/sge_root - Directory SGE installs to
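A minimal example (cell name and host name are placeholders; /opt/sge62 matches the install directory mentioned below):

echo default > /etc/sge_cell
echo sgeserver.example.com > /etc/sge_server
echo /opt/sge62 > /etc/sge_root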
To get RMS working it is not enough to only install the package; you must also edit some config files and build the SGE part manually. The step-by-step guide below will help you install RMS and run the required services.
Even though it should be obvious: before you are able to install RMS, make sure you have already installed noctua and its dependencies.
Install rms-tools:
zypper ref; zypper in rms-tools
Set the environment variables in /etc/sge_cell, /etc/sge_server and /etc/sge_root.
The environment variables must be set before compiling SGE!
Download the latest version (the latest version as of 2014-09-25 is 8.1.7) of the SGE package from https://arc.liv.ac.uk/trac/SGE
wget http://arc.liv.ac.uk/downloads/SGE/releases/8.1.7/sge-8.1.7.tar.gz
Extract the sge-8.1.7.tar.gz archive to /src/, change into the extracted directory and run our build script located at /opt/cluster/sge/build_sge6x.sh.
tar xzf sge-8.1.7.tar.gz
cd /src/source/
/opt/cluster/sge/build_sge6x.sh
If your system cannot compile and outputs some error messages, make sure you have installed the necessary build tools and development packages. Depending on your operating system, the package names and their number may differ.
Now the directories under /opt/sge62 exist and the service sge_qmaster is running.
Test if sge_qmaster is running:
ps aux | grep sge_qmaster
Set the $PATH variables by sourcing the script located under /etc/profile.d/batchsys.sh:
. /etc/profile.d/batchsys.sh
Run the following scripts:
The RMS overview provides four tabs, not only for displaying information but also for controlling jobs. There are a couple of green buttons at the bottom of the overview page to hide or unhide columns.
The first tab of the RMS overview displays the jobs currently running in the grid engine. You get some background information like job IDs, owner, runtime or node list of each job. On the right side there is an action button to delete or force-delete running jobs.
The second tab of the RMS overview displays the currently waiting jobs. These are jobs waiting in the SGE queue for execution. Among other information, it shows the "WaitTime", "Depends" and the "LeftTime".
The third tab of the RMS overview displays done jobs, with specific columns like "ExitStatus", "Failed" or "RunTime".
For direct usage of the SGE, it provides a couple of commands:
qconf (Queue Configuration) allows the system administrator to add, delete, and modify the current Grid Engine configuration, including queue management, host management, complex management and user management.
qlogin initiates a telnet or similar login session with automatic selection of a suitable host.
qmake is a replacement for the standard Unix make facility. It extends make with an ability to distribute independent make steps across a cluster of suitable machines.
qmod allows the owner(s) of a queue to suspend and enable queues, e.g. all queues associated with his machine (all currently active processes in this queue are also signaled) or to suspend and enable jobs executing in the queues.
qmon provides a Motif command interface to all Grid Engine functions. The status of all, or a private selection of, the configured queues is displayed on-line by changing colors at corresponding queue icons.
qquota provides a status listing of all currently used resource quotas (see sge_resource_quota(5)).
qrsh can be used for various purposes such as providing remote execution of interactive applications via Grid Engine comparable to the standard Unix facility rsh, to allow for the submission of batch jobs which, upon execution, support terminal I/O (standard/error output and standard input) and terminal control, to provide a batch job submission client which remains active until the job has finished or to allow for the Grid Engine-controlled remote execution of the tasks of parallel jobs.
qselect prints a list of queue names corresponding to specified selection criteria. The output of qselect is usually fed into other Grid Engine commands to apply actions on a selected set of queues.
qsh opens an interactive shell (in an xterm(1)) on a low loaded host. Any kind of interactive job can be run in this shell.
qtcsh is a fully compatible replacement for the widely known and used Unix C-Shell (csh) derivative tcsh. It provides a command shell with the extension of transparently distributing execution of designated applications to suitable and lightly loaded hosts via Grid Engine.
The common way to submit jobs to the cluster is to use the grid engine's "q" commands. Assuming that your cluster configuration is correct, running jobs on the cluster is as easy as running jobs on local machines.
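A minimal submission with the standard grid engine commands might look like this (the script name and job name are examples only):

qsub -N myjob -cwd job_script.sh   # submit job_script.sh under the name myjob, run in the current directory
qstat                              # show the job waiting or running in the queue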
Virtual desktop is a technology to transfer the display output from a remote machine's graphics card to your local machine.
Users are often forced to work on remote machines because of computation power, license issues or simply geographical distance. In these cases users usually have to start their remote desktop manually via the command line or similar tools.
With our virtual desktop technology there is no need to start anything manually. The back-end of CORVUS® takes care of sessions, ports, passwords etc., makes the relevant settings and saves them in the global database for you. Not only are the settings and configuration done automatically by the back-end, it also provides the display output in cooperation with the web front-end.
That way you are able to access and work on remote machines via the web front-end in your favorite browser.
To activate the virtual desktop technology, you first have to define a virtual desktop session in User Management.
In the main menu on top of the page navigate to Users → Overview and left-click on the admin user.
Figure 7.2. Virtual desktop session
Before using virtual desktops you have to define a session for it.
Virtual desktop settings
Protocol which will be used for the virtual desktop session
Port number of the connecting client - if set to "0", the port is chosen at random
Port number of the VNC server
Window manager, for systems with more than one window manager
Preset of the virtual desktop size. It is the window size the virtual desktop will be displayed in.
Checkbox to make sure the server is always running
After at least one virtual desktop session is defined, the back-end takes control of the further process. It checks every 5 minutes for a running VNC server. After one is discovered, there will be new entries and buttons in the virtual desktop tab.
Now you have the choice to view your remote desktop on the main home page or in a new browser tab.
Connecting to the remote desktop is as simple as logging in to your local system, maybe even simpler. Just push one of the buttons and enjoy your virtual desktop inline or in a newly opened tab.
Figure 7.3. Virtual KDE Session
KDE session inside the web front-end with a started 3D application and an xterminal
To change your window manager or the virtual desktop screen size, simply navigate to Users → Overview and choose the user of the virtual desktop session.
Scroll down to the section "Virtual Desktops", change the settings and push the modify button to apply them.
The Simple Network Management Protocol is an official RFC internet standard protocol designed to handle variables of devices like switches, routers, servers, workstations, printers, bridges, hubs and more.
The variables contain hardware information and the configuration of devices and can be picked up manually with special SNMP commands like snmpwalk. NOCTUA® implements SNMP as an "autodiscovery" service, capable of scanning network devices and getting as much information about them as possible.
In the context of monitoring, SNMP can deliver a huge amount of information about devices. Unfortunately there are some implementation differences between hardware vendors; as a result it is very difficult to extract useful and realistic data out of the SNMP stack.
For this reason NOCTUA® uses some intelligent algorithms and filters to avoid the insertion of faulty data into the database.
To get SNMP data from devices, the target devices are first of all required to provide such SNMP data. Most hardware in the network segment, like switches, routers, servers, printers, etc., provides SNMP by default.
For operating systems like Windows or SUSE/RedHat machines there are SNMP daemons which first have to be started before they provide SNMP data.
Please read your operating system documentation or contact your administrator to find out how to activate the SNMP daemon on your machines.
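To verify manually that a device answers SNMP requests, the standard snmpwalk tool can be used (address and community string are examples):

snmpwalk -v 2c -c public 192.168.1.1 system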
To activate SNMP discovery for one device, simply select the checkbox Enable perfdata, check IPMI and SNMP. To get this checkbox, either select your device and left-click the home icon on top, or double-click the device.
To reach the SNMP scan, go to Base → Device network.
There are no SNMP schemes in the settings window yet. Now perform an SNMP scan with a left click on the orange update network button.
An SNMP settings window appears, where you are able to adjust some basic settings.
Settings
The IP address of the device, a valid domain name or a valid host name.
SNMP security settings (community), either public or private
SNMP version number, either 1 or 2
If this flag is set, previously created configuration which the SNMP auto discovery scan cannot read out will be deleted.
Depending on your network size and structure, it takes some time to get the complete SNMP data tree, apply the filters and algorithms to it and write the extracted data into the database.
After performing the SNMP scan you will get some new network config entries for the scanned device.
That way you automatically get a couple of netdevices with the according names, values, MAC addresses, MTU values, speed, etc., without having to invest much time or manpower. A very handy and timesaving tool for administrators.
One of the most interesting questions admins wonder about is where monitored devices are located. Location means, on the one hand, the real physical position of devices.
On the other hand, location can be a structural location representing the network infrastructure in the context of functionality, not in the context of realistic physical locations or network connections.
No matter whether structural or physical, both kinds of location are configured the same way.
To add new device locations, first of all we must create a new entry in the category tree. For this step you can, but do not have to, select any device beforehand.
Navigate to Base → Category tree and choose the Categories tab.
Left-click the create new button; a new window appears below. Enter a new category name and choose location as the parent category.
For advanced settings of the newly created category entry, left-click the category in the category tree or push the modify button beside it.
Advanced location settings
Name of the category tree entry and its parent category
Coordinates of defined Google Maps points
Checkbox to lock Google Maps points in place
Checkbox to define the location as a physical one
If we go back and choose the Google maps tab, we notice a red flag on the Google map; two new buttons, an icon and the category name have also appeared beside the map.
The blue locate button zooms the map in. With the green add location gfx button you are able to upload user image maps in two steps:
Define the location graphic name
Once you have named your new location graphic, a new modify button appears. Use the button to upload user images.
Modify the added graphic entry to upload a user image
Of course you can add more than just one user image, so you can create a stepwise zoom from the Google map down to detailed server room photographs.
Figure 9.5. Concepts of zoom levels with multiple image maps
Zoom levels with according user image maps
CORVUS® also allows you to edit uploaded images with the preview and enhance buttons.
The following self-explanatory buttons are available if you want to edit your uploaded image for quality reasons:
left/right rotation (rotates image 90° clockwise or counter clockwise)
increase/decrease image brightness
sharpen/unsharpen image
Filter (includes a set of predefined filters)
undo (undo last editing action)
restore original image
With localisation it is not only possible to display and locate the exact position of devices at different zoom levels, but also the status of the monitored devices. That way you can get the best possible overview of, for example, your server room.
Once you have created new location categories and added some photos or images, you can easily add device livestatus to them.
Select all devices you wish to add and click either the home button or the use selection button.
Navigate to the Location tab, select the checkbox and left-click the location category. A show location map button appears on the right side with some information about the image and a small preview of it. Push the button to show the image map.
Now you can place your livestatus burst at the right place on the image by clicking on the set button.
After placing the livestatus burst, left-click the lock button to prevent the livestatus burst from moving.
Use the remove button to remove the livestatus burst from the image.
The primary purpose of NOCTUA® is to monitor network devices and device groups. Nearly every measurable value, like disk space, speed, temperature, fan RPM, availability and much more, can be monitored, observed, recorded and evaluated.
There are almost no limits on which devices can be monitored. Typical devices are:
File servers
Clusters
Web servers
Switches
Printers
Routers
Telephone systems
Thin clients
To begin slowly, let's first do a basic example configuration. In this basic example we want to check the simple ping response of a host in a local network. With this monitoring information we can make assumptions about the network device itself and its surrounding network area.
Create a new device (connected to your monitoring server) and configure at least one network device for it, one IP address and one peer/network topology connection.
Select the new device in the device tree and navigate to the config tab.
Enable the check_ping config to activate the check.
Finally you must rebuild your config database. In the top menu, click Monitoring → rebuild config (cached, RC).
Now that we know how to create simple checks for single devices, let's do a more complex configuration with more than one device and more than one check.
For this plan we have to use device groups with defined check configs:
In the top menu navigate to Base → Device tree.
Create a new device group by pushing the create devicegroup button.
Create some new devices by navigating to Base → Create new device and entering some domains into the Fully qualified device name field. The IP address should be resolved automatically; if not, try to push the Resolve button.
Choose your monitoring server as the "Connect to" device.
There are three types of configuration data that can be associated with a configuration.
Variables
Monitoring Config
Scripts
Variables let you override CORVUS® specific settings or pass information into CORVUS®. Monitoring Configs are used to describe which checks should be performed against the devices that are associated with the config. The most powerful part of the configuration system are the Scripts. These allow you to execute arbitrary Python code to generate files and directories on the fly. Several utility functions are already accessible:
do_fstab()
do_etc_hosts()
do_nets()
do_routes()
do_uuid()
The Python dictionary conf_dict is available as well. It contains configuration information like the node IP ...
To include an already existing file in the node config, use show_config_script.py to render the content as Python code ready for inclusion:
show_config_script.py [FILENAME]
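For example (the file name serves only as an illustration):

show_config_script.py /etc/nsswitch.conf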
Now that you have your first node up and running, we have to say something about configuring nodes.
START_SCRIPTS, INIT_MODS
INIT_MODS specifies which modules are loaded (via TFTP from the server) when the initial ramdisk boots. The only module that has to be included in the initial ramdisk is the module required for your network hardware.
The monitoring configurations support the following syntax: $USER1$, $USER2$, $USER3$. Commands containing @ are special; they are created for all discs, network interfaces, ..., e.g. @DISC@.
Another special feature of CORVUS® / NOCTUA® is the ability to get partition data without any need to configure it. To get this feature running, the only thing you have to do is install the discovery-server on the desired machine and activate it.
Once it is installed you must activate it in the device config like you do for RMS or Package-Install (click on device → Config and on the blue arrow, select "discovery_server").
Now you can easily get partition data by pushing the fetch partition info button.
To explain parameterized checks, first of all we have to understand checks themselves. Usually a check is a command, created in the monitoring web interface and executed by icinga. Some possible icinga commands are:
check_apt check_breeze check_by_ssh check_clamd check_cluster check_dhcp check_dig check_disk check_disk_smb check_dns check_dummy check_file_age check_flexlm check_ping ...
For every single command there are some specific options. Below are some options of the check_ping command:
Options:
 -h, --help
    Print detailed help screen
 -V, --version
    Print version information
 --extra-opts=[section][@file]
    Read options from an ini file. See https://www.monitoring-plugins.org/doc/extra-opts.html for usage and examples.
 -4, --use-ipv4
    Use IPv4 connection
 -6, --use-ipv6
    Use IPv6 connection
 -H, --hostname=HOST
    host to ping
 -w, --warning=THRESHOLD
    warning threshold pair
 -c, --critical=THRESHOLD
    critical threshold pair
 -p, --packets=INTEGER
    number of ICMP ECHO packets to send (Default: 5)
 -L, --link
    show HTML in the plugin output (obsoleted by urlize)
 -t, --timeout=INTEGER
    Seconds before connection times out (default: 10)
Now that we know what checks really are, we can go ahead and explain parameterized checks.
In CORVUS® there are two different methods to create checks (icinga commands):
Checks are defined individually with fixed options and bound to specific devices. These checks are always specific; changing one option of the check means changing the whole check.
Checks are defined globally as parameterized checks and bound to devices. These checks are not specific; to change one option of the check it is enough to change its parameter.
There are 10 devices; 7 of them should be checked on port 80 and 3 of them on port 8080.
Solution with the fixed method:
You have to set up two different checks, one check with the option set to port 80 (-p 80) and one check with the option set to port 8080 (-p 8080).
Solution with the parameterized method:
You have to set up only one check with a parameterized option (-p $PORT_NUMBER). Now you are able to modify the port parameter to every desired value without changing the check itself.
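As an illustration (check_tcp is a standard monitoring plugin; the parameter notation $PORT_NUMBER is only a sketch, not the exact front-end syntax):

check_tcp -H $HOSTADDRESS$ -p 80            # fixed method: one command per port
check_tcp -H $HOSTADDRESS$ -p 8080
check_tcp -H $HOSTADDRESS$ -p $PORT_NUMBER  # parameterized method: one command, port as parameter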
Suppose we want to create checks with 5 different warning values.
Solution with the fixed method:
You have to set up five different checks with five different warning option values. If there are 10 different values, you have a lot to do, because you need to create 10 different checks.
Solution with the parameterized method:
You have to set up only one check with a parameterized warning option value and change the parameter for each of the five different warning values. If there are 10 different warning option values, you only have to change the warning option parameter for each device instead of recreating the check.
The main advantage of parameterized checks in contrast to fixed checks is a more flexible way of handling checks. Direct influence on check options is also a benefit.
With parameterizing it is possible to change check option values after creating the check. Additionally, it is faster to set option values than to set up whole checks, so your administration effort decreases.
The bigger and more complex a network is, the more efficient it is to use parameterized checks.
Another advantage of parameterizing is the possibility to react faster when the established setup changes.
Graphs are one of the most important tools for monitoring devices. They allow you to easily create graphs of the collected data for different time ranges. You do not have to write a couple of config files or modify existing ones; all configuration is done via the web front-end. Lean back and keep an eye on your automatically generated graphs.
Below the graph itself there is a legend and a table with numeric values. It contains the following parts:
RRD-graph legend
Describes the color of lines or areas and the corresponding data.
Physical unit of the displayed values.
Minimum value of the displayed graph
Average value of the displayed graph
Maximum value of the displayed graph
Last value in the timeline of the displayed graph
Total amount of the displayed graph
RRD stands for Round Robin Database and is a specially designed database structure that collects data in a circular fashion. That means the database must be set up for the right amount of data to be collected.
For that reason there are the following advantages and disadvantages:
No danger of overfilling the database
After some time data is overwritten and can no longer be displayed at higher resolution.
But the monitoring software takes care of these details, so you do not have to agonize over them.
To collect data and draw graphs in CORVUS® and NOCTUA®, several services work together. The figure below illustrates how they cooperate and what the data flow between them looks like.
The dotted parts are still in progress but will be implemented in CORVUS® and NOCTUA® very soon, for better data flow distribution, less read/write access and therefore less load on the server.
Data transfer should only take place when RRD graphs are requested.
To make RRD graphing work, the rrd-grapher and collectd-init services must already be running.
Select one or more devices you want RRD graphs for and click either the "house" button in the top menu or the green "use selection" button below the top menu.
If you only need RRD graphs for one device, just click on the device name in the device tree view.
In both cases some new tabs are displayed, one of them named Graphs.
RRD data is not necessarily collected for every device. To find out whether there are rrd-graphs for a device, look for a pencil logo beside the name of the device in the device tree. See Figure 12.3, " Available rrd graphs ".
The rrd front-end follows the same structure as other parts of NOCTUA®. There are buttons, list selections, input fields and, once drawn, of course the graphs themselves.
There is also a tree on the left side, but this time not for devices but for the monitored or collected data.
The size in pixels the output graph will have. This size applies only to the graph itself, not to the legend. Keep this in mind if you want to insert graphs somewhere else.
Output graph size
420x200
640x300
800x350
1024x400
1280x450
Selects which time range should be displayed. There are "last" and "current" selections.
Timerange
draw graphs of the last 24 hours from now ((now-24h) - now)
draw the whole last day (00:00 - 23:59)
draw the whole current week (Sunday - Saturday)
draw the whole last week (Sunday - Saturday)
draw the whole current month
draw the whole last month
draw the whole current year (Jan - Dec)
draw the whole last year (Jan - Dec)
With the timeshift option you get a tool to overlay current graphs with time-shifted ones. For example, this is handy for comparing current graphs with graphs drawn 1 week ago.
Timeshift
do not draw extra comparison graphs
draw one normal graph (solid) plus the same graph 1 hour later (dotted)
draw one normal graph plus the same graph 1 day later (dotted)
draw one normal graph plus the same graph 1 week later (dotted)
draw one normal graph plus the same graph 1 month (31 days) later (dotted)
draw one normal graph plus the same graph 1 year (365 days) later (dotted)
Show specific jobs
Set the current timeframe to now
Set the end time of the graph
Hide empty graphs
Always include y-axis = 0 in the graph
harmonize the ordinate, for direct comparison with other graphs
Draw one graph for all devices
Apart from typing the start time and end time of the graph into the input field or picking them from the calendar, you can also select the time area directly on the graph itself. To zoom into the desired time area, simply move your mouse over the graph; the mouse arrow changes to a crosshair and you are able to draw a rectangular selection over the graph. At the same time the area outside the selection gets darker. After releasing the mouse button, the area can be moved around or resized.
Push the apply button to zoom into the selected area or use the Esc key to abort the selection.
The rrd tree is, similar to the device tree, an overview. The difference is that a parent object can consist of one or more child objects. For instance, the parent object mem contains 4 child objects: avail, free, icsw and used.
For parent groups it makes sense to summarize some graphs. This kind of summarization is called aggregation in CORVUS® and NOCTUA®. The best way to get group information or an overview of a cluster is to use aggregation. Aggregation is more complex than a simple addition of data: it manages interferences and calculates the values in the most effective way to display sums as realistically as possible.
The graph above shows the 15 minute single load value for 3 devices (pink, purple and green) and a combined sum graph (brown) of all device graphs.
The compound view can be found at the top of the monitoring data tree. It combines several kinds of data (for example load, cpu, processes, memory and io) in one multigraph; there is no need to select n graphs.
Another advantage of compound graphs is stacking. Some graphs are more significant if their values are displayed stacked.
The figure above explains stacking in the context of memory graphing.
In this example you can see straight away the parts of memory usage in relation to the available memory.
To obtain information about the general status of CORVUS®, use check_scripts.py.
Example 13.1. Using check_scripts.py to view running services
clusterserver:~ # check_scripts.py -t --mode show --server ALL
Name                   type    Thread info             status
----------------------------------------------------------
logcheck-server        server  not installed           skipped
package-server         server  not installed           skipped
mother                 server  not installed           skipped
rrd-grapher            server  all 9 threads running   running
rms-server             server  not installed           skipped
cluster-server         server  all 5 threads running   running
cluster-config-server  server  not installed           skipped
host-relay             server  all 13 threads running  running
snmp-relay             server  all 21 threads running  running
md-config-server       server  all 21 threads running  running
To show the last errors from the logfile you can use lse:
lse [-l ERROR_NUMBER]
For more information type lse --help.
Example 13.2. Using lse to display the last error
clusterserver:~ # lse -l 1
Found 40 error records
Error 40 occured yesterday, 17:12:47, pid 11507, uid/gid is (30/8 [wwwrun/www]), source init.at.cluster.srv_routing, 72 lines:
 0 (err) : IOS_type : error
 1 (err) : args : None
 2 (err) : created : 1409152367.94
 3 (err) : exc_info : None
 4 (err) : exc_text : None
 5 (err) : filename : routing.py
 6 (err) : funcName : _build_resolv_dict
 7 (err) : gid : 8
 8 (err) : levelname : err
 9 (err) : levelno : 40
 10 (err) : lineno : 179
 11 (err) : message : device 'METADEV_server_group' (srv_type grapher) has an illegal device_type MD
Retrieving node information in an automated fashion is often useful in hunting down errors and bugs. To retrieve information about the nodes, use collclient.py:
collclient.py [--host NODENAME] [command]
For more information execute collclient.py --help
CORVUS® and NOCTUA® provide their own logging service. In case something goes wrong, the logging-server writes its logs under /var/log/cluster/. Access to these log files is given by the command lse. Of course it is also possible to read the logfiles directly with your favorite editor.
Critical error logs are also delivered by mail, so you do not have to check your logs permanently: you will be notified by mail if there are critical errors.
The setting for the recipient of error log mails is stored in /etc/sysconfig/logging-server.
Another configuration file for mail notification is /etc/sysconfig/meta-server.
Replace the given mail address in the line containing TO_ADDR= with your desired mail address.
# from name and addr
FROM_NAME=pythonerror
#FROM_ADDR=localhost.localdomain
# to addr
TO_ADDR=mymail@gmail.com
# mailserver
MAILSERVER=localhost
After editing the logging-server configuration file, the logging-server daemon must be restarted; the new configuration takes effect after the restart:
rclogging-server restart
A collection of frequently asked questions about CORVUS® and NOCTUA®.
If you get something like in the picture above, you have to install the fetchmsttfonts (openSUSE) and ... (Debian) package.
This is a server internal error; most likely the server can't find some files. Take a look into /var/log/nginx/error.log for the detailed error message.
For some reason the web server nginx isn't running. Start it manually, for example with "service nginx start".
Please wait a moment until the database connection is active and reload the page. If you still get this message after waiting a while, you have to start uwsgi-init, for example with "service uwsgi-init start".
For some changes in your configuration, especially the network configuration, you have to rebuild config (cached, RC) first. If your config is stored in the cache, you even have to rebuild config (refresh).
Most likely the discovery-server is not installed or the discovery-server service is not running. Make sure the discovery-server is installed and running. Run the check_cluster.sh command and look for "discovery-server". If it is not running, start it either on the command line with rcdiscovery-server start or via the web front-end under cluster server information.
Another possible reason for that malfunction could be a disabled discovery server config for your monitoring server. To enable it, select your monitoring server device, navigate to the config tab and select the discovery server settings.
Sometimes a complex network topology slows down the display output in Firefox. This issue affects Firefox up to version 31.0; the reason is likely bad JavaScript interpretation on the Firefox side. If you get bad graphic display performance, try another browser, e.g. Chromium or Google Chrome™.
Sometimes it can be useful to reset a user password, for example if someone forgets the password or something else goes wrong.
A short guide on how to reset a login password by direct access to the database via the clustershell follows:
Open a terminal (e.g. xterm, Konsole, gnome-terminal) on your system and start the clustershell:
clustershell
Python 2.7.8 (default, Jul 29 2014, 08:10:43)
[GCC 4.8.1 20130909 [gcc-4_8-branch revision 202388]] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>>
Import the relevant database content:
from initat.cluster.backbone.models import user
Define a new variable to work with:
my_user = user.objects.get(login="admin")
Set your new password.
my_user.password="MY-new-_passW0Rd17"
Please set a secure password with more than 8 characters.
Check your newly set password:
print my_user.password
Save your newly created password to the database:
my_user.save()
Exit the clustershell
exit()
From now on you are able to log in with your new password.
If you have to wait a long time during a pending upload and the info label "Please wait..." is still shown after uploading an image with the add location gfx button, reload the page to resolve the issue.
Django is a free and open source web application framework, written in Python.
Free software is software licensed under a free license, allowing you to use, change and redistribute the software under the same license.
A free software license named the GNU General Public License; more information at [http://www.gnu.org/licenses/]
A group of devices connected to each other over data connections.
Network File System
Some option values of this check (command) can be accessed by parameter.
Preboot Execution Environment