Optimizing database performance: 5 TIPS to optimize your database’s performance with a monitoring system

August 28, 2017 — by steve0




How frustrating is it when an application or a webpage is slow?

A database is the nerve center of any application, meaning that if you experience performance issues the cause is most probably located there. Moreover, if you provide an IT service, a misbehaving database means unhappy clients and an overflowing complaints box. That's why we're taking a look at ways of optimizing your database to increase performance and avoid unnecessary problems, while simultaneously improving your platform.

A monitoring system is key to increasing database performance: it provides feedback on your devices and applications and points you toward any problems, backed by detailed historical data on end-user response times and much more.

Here are our 5 recommendations for optimizing database performance:

1. The number one recommendation is to choose a good monitoring system. If you're still in the search phase you might like to take a look at our article Top 16 best monitoring tools for 2016. Whichever software you go with should offer a variety of tools and, very importantly, a global view of all the tools, services and applications running on your system: how well they are functioning and where they are located, whether physical or virtual, including your database. A well-configured monitoring tool provides the necessary oversight of your IT environment, giving you advance warning of anomalous network behavior, creating alerts, locating the chokepoints and bottlenecks slowing your system down, and allowing your system administrators to take preemptive action.

2. Continuing with the importance of a monitoring tool, the second tip is to know your database's history, so you know at which moments the database is susceptible to slowdowns and failure, and consequently when extra vigilance and attention are required. Comparing the database's performance over time shows which applications are active at the same time and when variations in use can cause problems. Once a variation has been located, your monitoring system lets your admins trace changes in code or configuration and deal speedily with the issue.
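
As a sketch of the kind of historical comparison a monitoring tool automates, the hypothetical Python snippet below flags response-time samples that deviate sharply from their recent baseline; the function name, window size and figures are all illustrative, not any product's actual algorithm.

```python
from statistics import mean, stdev

def find_slowdowns(samples, window=5, threshold=2.0):
    """Flag indices whose response time sits more than `threshold`
    standard deviations above the trailing window's mean."""
    flagged = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (samples[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Hourly average query times in milliseconds; index 7 is the spike.
times = [110, 105, 112, 108, 111, 109, 107, 290, 110, 106]
print(find_slowdowns(times))  # [7]
```

A real monitoring system applies the same idea continuously, over months of data and across many metrics at once.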

3. Thirdly, get your DB organized. Information is only helpful if it's accessible. Avoid fields and tables that you never use, and simplify the fields you do use as much as possible: save data in an orderly, easily searchable way, establish relations between tables, don't store large binary files in the database itself, and so on.
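
As a toy illustration of those points (table and column names are invented), the following SQLite sketch links tables through foreign keys and stores the path to a large binary file rather than the file itself:

```python
import sqlite3

# In-memory example: keep data relational and searchable, and store
# only a reference to heavy binaries instead of the binaries themselves.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
con.execute("""CREATE TABLE invoices (
                   id INTEGER PRIMARY KEY,
                   customer_id INTEGER NOT NULL REFERENCES customers(id),
                   pdf_path TEXT)""")  # a path, not the PDF blob itself

con.execute("INSERT INTO customers VALUES (1, 'ACME')")
con.execute("INSERT INTO invoices VALUES (10, 1, '/archive/acme/inv10.pdf')")

row = con.execute("""SELECT c.name, i.pdf_path
                     FROM invoices i
                     JOIN customers c ON c.id = i.customer_id""").fetchone()
print(row)  # ('ACME', '/archive/acme/inv10.pdf')
```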

4. If you've followed the three tips above, you now have your monitoring system deployed and you're managing your database correctly. The next step is to go beyond traditional systems, introducing intuitive screens that display your infrastructure's status visually, via easily interpretable graphs and charts, coupled with troubleshooting tools that make it easy to rapidly locate and solve any problems detected or anticipated. Tools like Pandora FMS let you view how application requests are executed and show the different processes and resources that the application is waiting for. You shouldn't underestimate the visual element, especially if you come from a technical background, as an intuitive display can effectively communicate information to non-technical staff, saving time and making the decision-making process more fluid.


5. Last but not least, make sure you know whether your applications or online elements are compatible with the different OSs, devices and technologies in your IT architecture. That's what a global and unified vision of an application's performance provides. Analyzing wait times tells you whether there is a problem with one platform or another, lets you see where the errors are and solve them, and keeps your personnel happy and productive.

If, in the unlikely case, you've followed our recommendations and are still experiencing database issues, it may just be that you need a new kind of DB, better suited to its environment. For example, if you operate a website that's starting to get more hits, your current database may not respond to the new demands placed on it, making a replacement practically imperative.

Getting the most out of your database (or even replacing it) needn’t be difficult and will be an asset to your organization. Hopefully these five tips will be the start of a wonderful new relationship with your DB, whether you’re toiling in the guts of the network or reading the reports sent to you from your IT staff. If your thirst for network system knowledge is still unquenched you can visit our blog or feast your eyes and ears at our YouTube channel.


Double bind: Network or application, which one is at fault?

August 17, 2017 — by steve2




Are you ready? First of all, I want you to not think of an elephant. An African or an Indian elephant, it makes no difference which one you don’t picture in your mind, just don’t imagine a large, grey land mammal, notorious for its prodigious memory, long trunk and big flappy ears.

If you’re like me, the first thing you did was to imagine an elephant. This kind of self-contradictory, unresolvable message creates a logical short circuit, and produces what Gregory Bateson was the first to term a “double bind”. Bateson was referring to mental and emotional states, but the terminology has passed over into the world of IT and network monitoring.

In the IT world, where networks and applications are so bound up together, when something goes wrong it is difficult to disentangle the connections and identify where the fault lies.




First step: investigate your network

Always a good place to start, even though the problem may lie somewhere else. To diagnose the workings of your network, use deep packet inspection (DPI) to analyze packets for the type of data they contain, their origin and their destination. DPI should be able to tell you if the problem is on your network, or at least let you eliminate network issues from your list. Beyond this essential purpose, DPI can warn about malware, prioritize and/or monitor network traffic, or, in this case, identify critical applications that could be impacting negatively on your network. If it's a network problem, go to step 2: solving the problem. Otherwise, you can rule out a network issue and proceed to diagnosing your application stack.
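
To make the idea concrete, here is a minimal Python sketch of the very first thing a packet inspector does: decoding a raw IPv4 header to recover the protocol, origin and destination. Real DPI goes much further, matching payloads against signatures; the function name and hand-built sample packet below are our own.

```python
import socket
import struct

def parse_ipv4_header(raw: bytes):
    """Decode the fixed 20-byte IPv4 header: version, protocol,
    source address and destination address."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,
        "protocol": {6: "TCP", 17: "UDP"}.get(proto, proto),
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# A hand-built header: IPv4, TCP, 192.168.0.10 -> 10.0.0.1
header = bytes([0x45, 0, 0, 40, 0, 0, 0, 0, 64, 6, 0, 0,
                192, 168, 0, 10, 10, 0, 0, 1])
print(parse_ipv4_header(header))
```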

Applications have different functions although they usually work harmoniously, forming the base of the system, and ensuring everything ticks along smoothly. However, this makes isolating the offending application all the more complicated, as they are so interconnected.

To solve this problem you have to know how the applications in the stack are related, including all the components that ensure their correct functioning. This is where your database and the storage systems that make up the infrastructure come in, giving you global oversight of your applications and their history.

Running manual diagnostics on your infrastructure or your Cloud-based resources is impossible. Nevertheless, a good monitoring system like our own Pandora FMS will give you the oversight and information control necessary to locate and solve problems the moment they are detected.

Of course, the best way to solve any kind of problem is to know when it's coming down the pipe, anticipate it, and rely on automated actions or alerts in case of failures, slowdowns or errors, so you can respond quickly. It won't be possible to anticipate every error, but you can still mitigate the negative consequences by evaluating the financial implications of shutting down the network to perform repairs or application changes. Keeping your eye on the bottom line might not be your number one priority as a systems administrator, but your CFO will appreciate it. If you follow our tips for identifying the source of a double bind, you are sure to save time and money, increase your network security, and restore your clients' peace of mind, as well as your own.


Best system tools: Tools every Windows systems administrator should know

August 15, 2017 — by steve2




If you have any experience administrating systems, or indeed if you are a bona fide systems administrator, you'll know that certain tools are fundamental, both for your job and for your mental health, making you an efficient and agile sysadmin. You'll also know that mastering those tools is practically indispensable. Most of them are either integrated into the OS or are third-party efforts that perform tasks or improve on the OS's default tools. This article covers all of the above, divided into the following categories:

  • Operating system
  • Web browser
  • Command lines
  • Server administration
  • Monitoring
  • Reference images
  • Virtual machine administration
  • Remote access

Operating system (OS)

We can divide this section into two overall groups, Windows 7 and Windows 8/8.1/10, each of which, depending on the environment you are administrating, has its pros and cons. If you're working in a Windows environment, I'll assume you aren't some heretic running Ubuntu. Of course it's also possible to administrate Windows from Linux, and some applications and tools will run on that OS, but it's more complicated, so we're going to go the official route and declare that if you're administrating Windows, then you should be using Windows.

Windows 7
A robust OS, proven over time, it's stable and has received many updates. It can be used for administrating any Windows environment, but it's only recommended for those based on Server 2008 R2 or earlier. The server management tools included in 7 are fine for administrating those systems, although with some losses, such as not being able to use the Server Manager admin console, and incompatibilities administrating virtual machines in Hyper-V if you're in a Server 2012 or 2016 ecosystem.

Windows 8/8.1 and 10


Windows 10 Creators Update

Windows 8/8.1 was justly criticized for its lack of aesthetic appeal and for the loss of the Start menu (quickly recovered in Windows 10). Apart from this oversight, it shares most of its characteristics with its successor. Windows 10, Microsoft's latest OS, had an abrupt launch, suffered from instability at the beginning, and the server management software wasn't available at first. At time of press I don't see any reason to still be using 7 when 10 is available, unless you're running some legacy software somewhere in your IT infrastructure. 10 also includes the Server Manager tool, capable of managing your servers, plus the latest version of PowerShell with a huge number of new cmdlets, and the ability to administer Hyper-V 2012 and 2016 from your workstation, or even to convert it into a hypervisor in order to create your own VMs.

Web browser

Why are we talking about web browsers in an article about sysadmin tools?

A latest-generation web browser is fundamental for a systems administrator, as many everyday applications have been designed as, or migrated to, webapps: VMware, monitoring tools like Pandora FMS, Zabbix or Nagios, and even Cloud-based administration tools like Azure, Office 365 or Amazon Web Services (AWS) are administrated from a web interface. In my particular case I use Chrome, which, yes, consumes a lot of RAM, but works better and faster and has other features that make it, in my opinion, the best browser currently available. Of course, others will disagree, and everyone will have their favorite, be it Firefox, Opera, Vivaldi or Microsoft Edge; take your pick.


Google Chrome Version 59.0.3071.109

Command lines

If you're using Windows then cmd.exe (Command Prompt) is irreplaceable, as is PowerShell. You could also use PowerShell ISE, which is helpful for command searches and writing scripts, but it takes a long time to load and the interface is uncomfortable. My personal recommendation is Cmder, a lightweight piece of software that doesn't need installing, is compatible with cmd and PowerShell, and even includes some Unix commands. It works in tabs, and you can open consoles with different credentials, meaning that in a single window you can have various tabs with cmd and PowerShell, both as a "limited" user and with your sysadmin user.


Cmder Console

Another Cmder particularity is that you can make SSH connections from any console, whether you're in cmd or PowerShell, with the same ssh command you use in bash, to connect to any Linux server, reconfigure something in Pandora FMS, or restart the Apache HTTP service on your web server.

Server administration

In this section there isn't much to say beyond the fact that the admin tools included in Windows 7, or, my particular favorite, Server Manager in 8 and 10, allow you to administrate your server as though you were there in situ, or to open your admin consoles, such as DNS, DHCP or Active Directory Users and Computers, from your PC. PowerShell will also allow you to perform blocks of tasks and send cmdlets or scripts to various servers simultaneously from a command line at your workstation.


Server Manager

If you work in a large environment and you have the opportunity to work with System Center, it will make life a lot easier for you, as you can administrate whole clusters of servers with just a few clicks, but that tool is a different beast altogether.


Monitoring

When you have to bend a cluster of servers, services or even workstations to your digital will, there is no time to follow each one closely. You need a tool that can monitor the services, send alerts if necessary, and even carry out predefined actions, such as restarting a service or capturing a log and sending it by email, if the programmed conditions are met.

There are many tools of this type; the majority of them work in online environments and operate best on Linux, which is why SSH connection tools are your best bet. Our personal preference is for Pandora FMS, naturally, but there are alternatives like Nagios or Zabbix.

I've been getting to know Pandora FMS recently and have come to recognize its awesome power and configuration potential. It can be as complex as your ecosystem demands, and is available in both open-source and Enterprise versions, the latter replete with extended functions. If they had this deployed at the CERN laboratory I wouldn't be surprised; it's that good.


Pandora FMS 7 Web Console

As in the administration section, if you have System Center you can also monitor your Windows servers from there, although if you are going to monitor third-party servers you will need something extra in your toolkit.

Reference images

Deploying and configuring servers and/or workstations, plus installing any required applications, is a Herculean task if we address each instance separately. Luckily we have tools like Clonezilla, OpenGnSys, Acronis Snap Deploy, Symantec Ghost or the Windows Server role WDS. In my own case I use a tool included in Windows called dism.exe to administrate, capture and deploy system images individually, and the Windows Server role WDS for mass deployment, because it allows me to use permissions from the Active Directory structure and automatically deploy different reference images across different hardware according to that structure.

Administrating virtual machines

The future has arrived, in the Cloud: batteries of servers running thousands of services in air-conditioned data centers, new servers deployed in seconds, maximum availability, all thanks to virtualization. These days a physical server is not just one server but N servers running on the same machine, offering different services to different groups of users, with no overlap and no domino effect if one server goes down. But what is behind this technology? A magic wizard, you ask? If you work in system administration then you'll know that it is not magic, but clusters of servers, often virtual ones, that provide those services. These days a sysadmin also has to know how to virtualize and how to administrate virtual machines correctly. Depending on the environment you find yourself in, there are different tools: VMware vSphere, XenServer, or Hyper-V, the tool included in Windows Server. They all have their pros and cons, their own modus operandi, and their own options for high availability and rapid deployment of VMs cloned from a reference VM. The most well-known and widespread is VMware; XenServer has a free version, and Hyper-V comes with Windows Server just like any other role, which is why these alternatives have taken off recently, competing strongly with VMware.


VMWare vSphere web client

That said, in this section there is not a lot of variety, and your choice is usually limited to the virtualization system already in use which you will have to get used to, using the specific corresponding tools. If you find yourself having to implement a virtualization tool from scratch, good luck! Unfortunately, that situation takes us beyond the remit of this article.

Remote access

You’re sitting at your work station, carrying out your daily tasks when you get a warning about a server. You’re on the fifth floor and the server is in the basement, and getting exercise during working hours was never part of the deal. Luckily, with remote access software the problem can generally be fixed from the comfort of your seat, with no physical exertion involved. Each environment has an implementation that works for it.

RDP. Remote Desktop Protocol comes as standard with a client on your Windows workstation, and access is easy to configure. Access your server just as though you were sitting right in front of it.


SSH. Secure Shell, a protocol used by UNIX. Yes, UNIX: even though we're talking here about Windows environments, 90% of the time you'll also be dealing with a Linux machine. To run a quick diagnostic you can connect remotely via SSH, and a window will open displaying the bash shell of the server you have to administrate. We've already spoken about Cmder, a multipurpose tool that allows you to make SSH connections. If you want a more specific tool for this purpose, you could do worse than the well-known PuTTY, easy to use and powerful enough to get you out of any kind of trouble.

TeamViewer. Remote access from anywhere, with the caveat that you need the client already installed and the connection goes through an intermediary server. Even so, it's a very useful tool that can help you out quickly and easily, one of its advantages being that it can connect via port 80 or 443, the same ports used for web browsing, which are usually open.


Remote Desktop Ehorus

Ehorus. A remote administration tool focused on enterprise environments. Like TeamViewer it connects through an intermediary server and uses an agent installed on the remote hardware. The difference is that Ehorus's intermediary server can be installed in your office and administrated by you, or alternatively you can contract it as SaaS and entrust its administration to the service provider. Another pro is that you don't need to install any software on the machine that initiates the connection: you can connect via web browser.

VNC. A remote desktop system that has been with us for a long time and, like RDP or SSH, works via a direct connection. What's more, it's an economical option. You can deploy a proxy server in your company in order to remotely access PCs from outside your own network. Examples include RealVNC or UltraVNC.

This is a list of the indispensable tools that any good systems administrator should know, especially those who work in a Windows ecosystem and are interested in streamlining workflow. There are others, of course, and other categories that this post does not cover but watch this space, as we’ll be looking at more tools and other applications in the future.


Meet the hottest databases of 2017!

August 9, 2017 — by steve2




The database ecosystem, like any ecosystem, is subject to the iron rule of digital Darwinism, which posits continuous evolution and adaptation to the IT environment. Last time we looked at the best databases of 2016, and, although 12 months is not a long time in evolutionary terms, we’re back again to check out the hottest databases of 2017! That’s right, it’s the swimsuit issue! (Of databases)

If you're in the market for a new database, the possibilities are basically the relational type on one hand and the NoSQL variety on the other. The earliest databases emerged from the primordial digital ooze back in the 1970s and dominated the IT world: large, hungry beasts that devoured all the data they could sink their teeth into. Then, relatively recently, in the last decade or so, a new breed of database evolved: NoSQL. Finally, like early hominids, these two distinct subtypes produced hybrid creatures, such as SQL/NoSQL databases, in-memory databases and DBaaS.

Storage technology that is easy to use and secure, with practical tools as standard and a support community contributing to the product's lifecycle, will make a real difference when it comes to managing your business and improving productivity, becoming a fundamental component of your organization. Why leave the correct functioning of your database (and your network in general) to chance? Database monitoring is a priority in any IT installation.

If you were going to hire a new technician for your support team you would first check out their résumé and hold a preliminary round of interviews. Well, it's no different when you're looking for a hot new database for the office; you have to look at the pros and cons, and not only the technical specs. You need to ask yourself: Where is the business at right now? Are we growing? Consolidating? Are we ready to scale up? Or to scale sideways? Will the database you choose adapt itself to the ecosystem in which it finds itself, or will it be a digital Dodo, unable to evolve quickly enough to survive?

The essential questions are:

  • How many clients do you need to service?
  • What amount of data will you need to manage?
  • Will you need to implement “batches” that access the database?
  • What kind of response time do your clients expect?
  • How will the database scale if your number of clients and transactions grows?
  • How will you monitor your database to avoid downtime?
  • Do you need a relational database or NoSQL?
  • How will the database behave if it crashes or runs into a problem?

Follow the link to check out a comparison of NoSQL vs SQL databases.
And now, the moment you’ve all been waiting for, the loveliest models of 2017 are here, with their finest characteristics on display for all to see.

Hottest commercial databases

The market is presently dominated by Microsoft's SQL Server, Oracle, and IBM's DB2. On Windows, SQL Server is the usual database of choice, while Oracle and DB2 are the apex predators of the Mainframe/Unix/Linux ecosystem.

Microsoft SQL Server

Developed by Microsoft, and exclusively compatible with Windows. Many are trained in its use, and it is easy to acquire. Since its integration with Microsoft Azure its flexibility and performance have improved, plus it now permits information from other servers to be administered, improving its usability.



Oracle

Oracle can run on practically any system, and is served by many who are trained in its ways, having many adepts. Also of note are its many tools oriented toward monitoring and admin.

Oracle Benchmark:


IBM DB2

The second most-used database in Unix/Linux ecosystems, right behind the dominant Oracle, and the #1 choice for Mainframe. DB2 has its followers, trained in its arts, though fewer are initiated in its ways than in Oracle's. On the other hand, a follower of IBM DB2 has no need to be able to operate in a Unix/Linux environment.

DB2 Benchmark:


Teradata

Designed with Big Data in mind, its storage and data analysis capacities are gargantuan.

SAP Sybase

A survivor from an earlier age, when it reigned over its competition, it remains a solid bet in terms of scalability and performance.


Informix

Another victim of the Dot-com extinction event. After an initial positive run it was devoured by a larger predator (IBM) following a series of bad management decisions. Like many technological evolutionary dead ends (see also Ascential) it became extinct as an independent product, although its vestigial digital DNA can still be found in some IBM tools and applications.

Hottest Open Source relational databases

In this category there are three outstanding beauties: MySQL, MariaDB and PostgreSQL. While each one is attractive in its own right, they share some characteristics: a powerful support community, open code that users can modify as needed, and, last but not least, they are free and readily available.


NoSQL databases

NoSQL databases evolved in response to external conditions and, although they belong to the same family group, they have many unique characteristics, developed in isolation from others of their species. The family group has split into sub-categories that each model data according to their particular evolutionary processes. We can classify them in four groups:

NoSQL Oriented to key-value

The simplest data model, ideal when you access your data by key. The difference is that in this case data can be stored without defining any specific schema. These databases are very efficient for reading and writing, and are designed to scale massively, achieving extremely fast response times. Data is usually stored in complex structures such as the amusingly named BLOB. Some examples of this kind of database:
  • Redis: open-source and free software.
  • Riak: a dedicated key-value database, with outstanding document storage and search functionality.
  • Oracle NoSQL
  • Microsoft Azure Table Storage
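
As an illustration of the model (not any particular product's API), a key-value store boils down to a schema-less set/get/delete interface over opaque values:

```python
class KeyValueStore:
    """A toy in-memory key-value store in the spirit of Redis:
    no schema, opaque values, O(1) reads and writes."""

    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

    def delete(self, key):
        return self._data.pop(key, None) is not None

store = KeyValueStore()
store.set("session:42", b'{"user": "ada", "cart": [1, 7]}')  # an opaque blob
print(store.get("session:42"))
```

Production systems add persistence, replication and massive horizontal scaling on top of exactly this interface.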

Document oriented NoSQL

This system uses various formats (JSON, XML) and features the capacity to change schema without stopping the database. Developers can upload indexed documents and access them through the database storage engine. Their flexibility makes them one of the most versatile tools, MongoDB and Couchbase Server being two noteworthy examples.
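
A toy Python sketch of the document model: records are schema-free JSON documents, a new field (here "city") can appear at any time, and queries go by example. The helper and field names are illustrative, not a real driver's API.

```python
import json

# Documents need not share a schema; "city" only exists on one of them.
docs = [
    json.loads('{"_id": 1, "name": "Ada", "langs": ["python"]}'),
    json.loads('{"_id": 2, "name": "Linus", "langs": ["c"], "city": "Helsinki"}'),
]

def find(documents, **criteria):
    """Minimal query-by-example, in the spirit of a document store's find()."""
    return [d for d in documents
            if all(d.get(k) == v for k, v in criteria.items())]

print([d["_id"] for d in find(docs, city="Helsinki")])  # [2]
```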

MongoDB

One of the most flourishing databases of the present epoch, capable of working with both structured and unstructured data, and with excellent scaling and performance capabilities. It has a highly skilled and powerful coven of followers who can introduce the uninitiated into its ways. It works with key-value pairs, allowing access to different parts of the stored data.

MongoDB does not support multi-document operational atomicity but does guarantee eventual consistency: changes are reproduced throughout all the nodes, although it cannot be guaranteed that all nodes will receive the data at the same time.

Couchbase Server

An open-source database licensed under Apache. Despite its fame, it is unable to guarantee 100% data integrity. On the plus side, it features an intuitive administration console, via which it is possible to easily access ridiculous amounts of data.

MarkLogic Server

This small, furry, seemingly insignificant database has triumphed thanks to its data integrity and XML, JSON and RDF compatibility.

Supported systems: Windows, Solaris, Red Hat, Suse, CentOS, Amazon Linux and Mac OS.

Elasticsearch

A distributed search and analytics engine that stores and indexes JSON documents.

Also worthy of mention: RavenDB, Apache Jena and Pivotal GemFire.

Column-oriented NoSQL databases

This species represents data values as “columns” that map keys to values and group them into structures. They are used in environments where you need to access a few columns across many rows. They are most useful for event processing and analysis, content management and data analysis.
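
The row-versus-column layouts can be sketched in a few lines of Python (the event fields are invented); an aggregate over one field only has to touch that field's column:

```python
# Row layout: one record per event, all fields together.
rows = [
    {"ts": 1, "host": "web1", "latency_ms": 110},
    {"ts": 2, "host": "web2", "latency_ms": 95},
    {"ts": 3, "host": "web1", "latency_ms": 240},
]

# Column layout: one list per field, so a scan of "latency_ms"
# never reads timestamps or hostnames.
columns = {key: [row[key] for row in rows] for key in rows[0]}

print(max(columns["latency_ms"]))  # 240
```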

Apache Cassandra

Created at Facebook and now freely distributed as an Apache project. Recommended for databases handling unimaginable quantities of data. There is also an Enterprise version called DataStax Enterprise.

Supported data types: ASCII, bigint, BLOB, Boolean, counter, decimal, double, float, int, text, timestamp, UUID, VARCHAR and varint.


Apache HBase

Designed to support multiple read and write accesses to huge amounts of data in real time. Integration with Hadoop and its file system (HDFS) are two points in its favor.

NoSQL graph databases

These models focus on properties and the relations among them, making use of graph theory to connect data: each element is joined to its adjacent elements. Recommended if your data is highly inter-relational, as in social media networks, fraud detection, real-time updates, etc. The logic is a node for each element, and an edge for each relation.
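
A minimal Python sketch of that logic: an adjacency list joins each element to its neighbors, and a traversal answers the kind of reachability question a social network asks. The data and function name are illustrative.

```python
from collections import deque

# Each element is joined to its adjacent elements: an adjacency list.
follows = {
    "ada": ["grace"],
    "grace": ["linus", "ada"],
    "linus": [],
}

def reachable(graph, start):
    """Breadth-first traversal: everyone `start` can reach via edges."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen - {start}

print(sorted(reachable(follows, "ada")))  # ['grace', 'linus']
```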


Neo4j

Supports data integration, high availability and clustered scaling, and furthermore has a good administration panel.

InfiniteGraph

A commercially licensed database that supports Mac OS, Linux and Windows.
Benchmark: On demand from Objectivity.


Hybrid models

More and more companies are offering hybrid solutions that use various database engines, supporting several NoSQL setups alongside relational engines.

To name a few examples, CortexDB, FoundationDB and OrientDB all offer different NoSQL models.
IBM has extended its DB2 databases with the possibility of using BLU Acceleration with NoSQL, allowing data to be saved in XML, JSON and graph form.

Databases as a Service

Database as a Service differs from the other formats in this article in that it is Cloud-based: the user simply inputs the data to be saved and the service provider does the rest. The convenience of this model is sure to increase its popularity in the future.

Amazon SimpleDB

A database that offers a simple web service interface to store and query data sets. If you want easy access to simple databases, Amazon SimpleDB is an option to keep in mind.

Data is saved as text and structured as attribute-value pairs. Data is indexed automatically, making searches very convenient.

Benchmark not available.

List of Pandora FMS database monitoring modules:

  • Oracle: Oracle Monitoring
  • DB2: DB2 Monitoring
  • SQL Server: SQL Server Monitoring
  • Teradata: Teradata Pandora FMS Enterprise Monitoring
  • SAP Sybase: Sybase Monitoring
  • Informix: Informix Monitoring
  • MySQL: Active MySQL connections, MySQL Cluster, MySQL Monitoring, MySQL Performance, MySQL Plugin, MySQL Server Advanced Monitoring
  • PostgreSQL: Perl PostgreSQL, PostgreSQL Plugin Monitoring, PostgreSQL Plugin Agents
  • MongoDB: MongoDB plugin monitoring
  • Couchbase: Monitoring Couchbase with Pandora FMS Enterprise
  • MarkLogic Server: Monitoring MarkLogic with Pandora FMS Enterprise
  • Elasticsearch: Monitoring Elastic Search with Pandora FMS Enterprise
  • Redis: Monitoring Redis with Pandora FMS Enterprise
  • Riak: Monitoring Riak with Pandora FMS Enterprise
  • Microsoft Azure Table Storage: Monitoring Azure with Pandora FMS Enterprise
  • Apache Cassandra: Monitoring Apache Cassandra
  • Apache HBase: Monitoring Apache HBase
  • Neo4j: Monitoring Neo4j with Pandora FMS Enterprise
  • InfiniteGraph: Monitoring Infinite Graph with Pandora FMS Enterprise
  • Amazon SimpleDB: Monitoring Amazon SimpleDB with Pandora FMS Enterprise

This article is written as a quick introduction that hopefully points out the need for a prior study of your own company or organization’s requirements. It’s not necessary to invest in the most gargantuan, fastest, vertically scaling solution if your business does not demand those characteristics. Talk to your tech people and work out which is the best solution for the short to medium term.

Have we forgotten your favorite database? Let us know if your perfect solution didn’t make the list and we’ll include it in our end-of-year round up when we distribute our in-house awards for best tech products of the year, the Doras.

If you’d like to know more about Pandora FMS and its database monitoring potential, or about any of the other many, many things our flexible monitoring solution can offer, just follow the link.


Mainframe System: Control and Savings

August 7, 2017 — by steve0


mainframe system

Mainframe system: Making savings and improving control

The robustness and reliability of IBM’s Z-series are beyond doubt. Nevertheless, IBM’s invoicing protocol is a little peculiar, charging, as they do, for contracted processing capacity. If clients pass a specific threshold for a period of time, usually four hours, they are automatically billed at a higher rate.
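The billing rule can be pictured as a rolling average over the last few hourly consumption readings; here is a minimal sketch of that idea (the threshold value and consumption figures are hypothetical illustrations, not IBM’s actual metrics):

```python
from collections import deque

def rolling_average_alert(samples, window=4, threshold=100.0):
    """Yield True for each sample whose trailing `window`-sample
    average exceeds `threshold` (e.g. hourly consumption readings)."""
    recent = deque(maxlen=window)
    for s in samples:
        recent.append(s)
        yield sum(recent) / len(recent) > threshold

# Hypothetical hourly readings; the 4-hour average crosses the
# threshold only in the last two hours.
usage = [80, 90, 120, 130, 140]
alerts = list(rolling_average_alert(usage))
```

A monitoring check built this way warns as soon as the window average crosses the contracted capacity, rather than on a single spike.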

Due to mainframe systems’ own particular characteristics, they tend to become “islands of information” that slip below the CIO’s radar:

  • They have their own operations and IT development personnel, a class apart from the rest of the IT department. Any staff change implies an extra investment in training and a steep learning curve for new staff.
  • They can’t be integrated with other company control systems, due to their proprietary monitoring system.
  • They have to be budgeted for separately: being proprietary systems it’s practically impossible to negotiate with the supplier.

Due to the above conditions, it’s almost reasonable to treat data retrieved from mainframe systems as an article of faith, since getting a second opinion on the data is a non-starter.

Case study: our client, a road-haulage company, presented us with a challenge. Pandora FMS had to be capable of reporting and of alerting when the billing threshold was being exceeded. With this information the CIO would be able to consider all the open processes in execution at that moment and decide if any could be suspended, or, alternatively, reassigned to moments of low system activity.

After three months of collaboration, the results were the following:

  • Our technical staff worked out a system for extracting information on the system’s processing power consumption directly from the Mainframe, related to the processes that triggered said consumption, the users responsible for the processes, and the relation with the rest of the processes.
  • Using this data we were able to establish a new alert system that warns when processing consumption thresholds come under pressure.
  • Pandora FMS can generate reports in various formats that incorporate all the information necessary for different company profiles, delivering them to the relevant people in the relevant way (more or less technical, with specific information omitted or included, depending on the needs of the recipient). This grading of the information makes it easier for the systems chief to take a decision when billing thresholds are breached, for the financial and purchasing departments to prepare for renewal negotiations regarding the Mainframe, or for the CIO when it comes to taking strategic decisions about whether to continue with the technology currently in place or look for alternatives.

Based on these findings, and Pandora FMS’s capacities, it was simple to integrate the IBM “information island” into the company’s monitoring system and within the broader IT archipelago, achieving significant improvements in terms of resource-related results, basically by incorporating the team previously dedicated solely to the IBM Mainframe environment more fully into the company.

If you enjoyed this article, and would like to read more, why not take a look at our blog, where there are many more IT-related articles. There’s also a YouTube channel featuring tutorials and other Pandora FMS-related videos.


CPU temperature monitoring with Pandora FMS

August 3, 2017 — by steve1


cpu temperature

Monitoring CPU temperature with Pandora FMS

CPU temperature is one of the most important metrics to keep in mind when monitoring hardware. An overheated CPU can cause sudden system shutdowns, triggered as a self-protection mechanism, or even damage the processor beyond repair.

If you don’t want your production systems, databases, backups, web servers or hardware to go down as a result of CPU overheating, read on. We’re going to outline a few ways of implementing CPU temperature monitoring using Pandora FMS on Windows and Linux systems, and network devices. Get a heads up on values, different types of alerts and be pro-active when problems come up.


The wmic utility allows you to get all kinds of information through Windows WMI. To monitor CPU temperature, execute the following command from a cmd with admin privileges:

wmic /namespace:\\root\wmi PATH MSAcpi_ThermalZoneTemperature get CurrentTemperature

This will give you the CPU temperature in Kelvin. If you want the data in Celsius, use the operation:

Celsius_Result = (Kelvin_Result / 10) – 273
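Wrapped as a tiny helper, the conversion looks like this (the sample raw reading is a made-up illustration):

```python
def tenths_kelvin_to_celsius(raw):
    """Convert wmic's CurrentTemperature value (tenths of Kelvin)
    to degrees Celsius, following the formula above."""
    return raw / 10 - 273

# A hypothetical raw reading of 3010 tenths of Kelvin corresponds
# to 301 K, i.e. 28 degrees Celsius.
temp_c = tenths_kelvin_to_celsius(3010)
```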

We’ve created a module on our agent software with the following structure:

module_name temperature tenths kelvin
module_type generic_data
module_exec wmic /namespace:\\root\wmi PATH MSAcpi_ThermalZoneTemperature get CurrentTemperature | tail -2

Plus a synthetic module to make the conversion from Kelvin to Celsius.

Here’s the result:

cpu temperature


On Linux systems we can get the data differently, depending on the distribution in use. Keep in mind that this kind of check only applies to physical hardware equipped with heat sensors for reading CPU temperature; virtual devices are managed by software and have no such sensors.

In the case of Ubuntu/Debian systems this information is usually found under the /sys/class/hwmon/ directories.

On our example machine we can see the content of the following files:

$ cat temp1_input temp2_input temp3_input

In this case the overall CPU temperature plus the temperature of each core is displayed. The values are in millidegrees: the first two digits show degrees and the last three show decimals. Most systems don’t have sensors precise enough to display meaningful decimals, so, as a workaround to get correctly scaled metrics, you can apply post-processing. One module’s configuration would be as follows:

module_name Temperature CPU 1
module_type generic_data
module_exec cat /sys/class/hwmon/hwmon1/temp2_input
module_postprocess 0.001
module_unit ºC

And the result on the Pandora FMS console, would be:

cpu temperature

Remember! As mentioned, the location of this information may vary according to the Linux distribution in use, making a brief prior investigation mandatory before creating the checks.
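To close the Linux example: the 0.001 scaling that module_postprocess applies can be sketched in a few lines (the default hwmon path is the one from the example above and will differ per distribution, which is exactly why the prior investigation matters):

```python
def millidegrees_to_celsius(path="/sys/class/hwmon/hwmon1/temp2_input"):
    """Read a hwmon temperature file (an integer in millidegrees
    Celsius) and apply the same 0.001 factor module_postprocess uses."""
    with open(path) as f:
        return int(f.read().strip()) * 0.001
```

A reading of 42000, for instance, comes out as 42.0 ºC.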

Implementation on network devices

In this case use network checks, via SNMP, which will allow you to quickly and easily get feedback on the device in question.

Again, some prior investigation is necessary, as each network device is different, with its manufacturer’s own standards, and there isn’t a “universal value” that can be applied generically in all cases.

You’ll need the device’s IP address, its community, its SNMP version and the OID of the check. The OID will enable you to receive specific information relevant to the device in question.
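As an illustration of how those four pieces fit together, here is a helper that assembles a standard net-snmp snmpget call; the host, community and default OID below are placeholders, not real values (look up the actual temperature OID in your device’s MIB):

```python
def build_snmpget(host, community, version="2c", oid="1.3.6.1.4.1.9999.1.1"):
    """Assemble a net-snmp `snmpget` command line from the four pieces
    a temperature check needs: host, community, version and OID.
    The default OID is a placeholder, not a universal value."""
    return ["snmpget", "-v", version, "-c", community, host, oid]

# Hypothetical device address and the default public community:
cmd = build_snmpget("192.168.1.10", "public")
```

The resulting list can be passed to subprocess.run, or joined with spaces and pasted into a module_exec line.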

The module’s configuration on the Pandora FMS console will look like this:

cpu temperature


Pandora FMS: a new vision of monitoring

July 28, 2017 — by steve0


new monitoring vision featured
There are so many diverse and distinct IT infrastructures, different business needs, and varying levels of monitoring oversight requirements that today not every monitoring system can meet the demand. From the beginnings of monitoring until now there has been a certain stability in what was expected of a monitoring tool, but the explosion of new technologies and the push of new demands from business managers have changed the expectations placed on data, creating a space for a new vision of monitoring.

Pandora FMS represents a new approach, oriented toward these new data demands and aimed at large companies, without forgetting SMEs.

Below you can see a graph showing the evolution of monitoring systems from 2000 to 2014:

new monitoring vision eschema

Once a company reaches a certain size it’s necessary to formalize some departments and positions, and you’re going to find a CEO, a CIO and a CFO. The latter takes decisions regarding the company’s capital: how to invest it, where to spend it, where to get more, and the overall financial plan for the company’s future.

The CFO always works with two concepts in mind, “costs and investments”, and is almost exclusively interested in saving costs, increasing profits and making improvements when financially reasonable or necessary.

Within the monitoring sector there are tools for all budgets; the important initial criteria are the CFO’s objectives:

– Cost savings: do more with less. Monitoring isn’t free (except when it is, but keep in mind: buy cheap, buy twice), and analyzing the pros and cons of investing in a monitoring system, or any other IT infrastructure, is necessary, as there are alternatives, such as open, free-license monitoring tools, outsourcing to a third party, or a Cloud system.

On the other hand there is the CIO, who works under the financial obligation to justify every cent spent on IT. For this position, Pandora FMS can be an invaluable support when it comes to answering the following questions:

  • Are we getting all we can out of our IT resources?
  • Are we getting the service we pay for?
  • Are we complying with our SLAs?

– Improving your business operations: regarding the second point, how can you improve your business by getting the most out of your IT systems? Pandora FMS can help in the decision-making process by supplying monitoring feedback on your business processes, monitoring your KPIs, and presenting the data simply, clearly and in real time. All this makes managing your organization easier and can provide significant cost reductions.

To sum up, the only constant, as always, is change, and technology is on an exponential curve on its way to who knows where. As networks grow more complex, businesses increasingly demand a tool that can disentangle so many products, keep different protocols working together, avoid predictable upsets, reduce costs and improve operations. Prevention is always better than cure, and if you don’t think so, ask British Airways.


Integrating Pandora FMS alerts in JIRA

July 25, 2017 — by steve0


jira integration

JIRA integration? Doesn’t Pandora FMS already have a perfectly compatible ticketing tool in the form of Integria IMS? True, but JIRA, from Atlassian’s software stable, has really taken off after initially being used for software development. With the launch of JIRA Service Desk, deployed when the program is installed, JIRA came into its own as a ticketing tool. Using the same software it’s possible to manage software development, tickets and projects. These characteristics, in tandem with the add-on library, have turned JIRA into a welcome guest in many IT departments.

JIRA now has more than 11,500 clients worldwide, so, priding themselves on their flexibility and adaptability, the good people at Pandora FMS HQ have started working on a project to get JIRA and Pandora FMS communicating. Read on if you want to find out how to integrate your Pandora FMS instance with JIRA. It’s possible to automate issue generation with both on-premise and Cloud versions of JIRA.

Take as a starting point an IT installation monitored with Pandora FMS, with JIRA Service Desk as its ticketing tool. Pandora FMS provides the necessary oversight of the infrastructure, advising on possible errors and giving a global view of your environment, while JIRA and the technical staff manage the disparate hardware and software incidents; the result is that when Pandora FMS detects any anomalous activity, JIRA automatically generates a ticket.

This is a way of centralizing all the data referring to your organization’s infrastructural anomalies. Taking a look at the Atlassian documentation, which sounds like a tome of ancient lore dreamt up by the inhabitants of a long-lost ocean continent, you can hook up both environments – Pandora FMS and JIRA – via JIRA’s API, thereby getting the best out of both. From version 7 of JIRA the API is activated by default, so you don’t need to change anything on the tool.
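As a sketch of what talking to that API involves, JIRA’s REST API creates issues via a JSON body POSTed to /rest/api/2/issue; the project key, summary and issue type below are made-up example values, not anything mandated by the integration:

```python
import json

def jira_issue_payload(project_key, summary, description, issue_type="Incident"):
    """Build the JSON body JIRA's REST API expects when creating an
    issue via POST /rest/api/2/issue."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": issue_type},
        }
    }

# Hypothetical alert data collected by the monitoring tool:
body = json.dumps(
    jira_issue_payload("OPS", "Module X critical", "Triggered by Pandora FMS")
)
```

An HTTP client then POSTs this body, with authentication, to the JIRA instance; the integration described in this article wraps that exchange up for you.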

To carry out the integration the Pandora FMS dev team have developed a plugin (available at the Pandora FMS plugin library), which is easy to install and which automates issue creation on JIRA, generating a ticket the moment a module status changes on Pandora FMS and an alert is triggered.

The first thing you have to do is download the plugin and place it in a directory – e.g. /usr/share/pandora_server/util/ – with the other Pandora FMS server scripts, using WinSCP, for example.

jira integration

Once the file is uploaded, check that it has execute permission. If it doesn’t, the command chmod +x will put things right.
The plugin is now ready to be configured. Pandora FMS alerts are based on three components: commands, actions and templates. To begin, create a command based on the script by going to Alerts > Commands and introducing a command that will launch when an alert is triggered. Use the following command:

/usr/share/pandora_server/util/ -c _field1_ -u _field2_ -k _field3_ -t _field4_ -d "_field5_" -p "_field6_" -a _field7_ -g _field8_ -i _field9_

jira integration

Now it’s time to configure the action. Go to Alerts > Actions and introduce your instance’s data, assigning values to the macros established previously. This allows Pandora FMS to communicate with JIRA and create the issues. Use the Pandora FMS macros to define the values in each field so that the JIRA issue is created with the data collected by Pandora FMS.

To give an example, in our environment, it looks like this:

jira integration

As you fill in the parameters at the bottom, the Command preview shows the command Pandora FMS will execute when an alert is triggered.

We’ve opted to configure the fields with the following system macros. This is just an example, and you can configure them as you like. It’s worth pointing out that, in order to introduce line breaks in the ticket description, they should be indicated in the appropriate field with \n.

jira integration

With the plugin installed and configured it’s time to verify that everything is working by forcing an alert to trigger. Assign the alert to a module on a Pandora FMS agent and establish issue creation on JIRA as the action to execute.

jira integration

When the alert triggers check JIRA for the result:

jira integration

As you can see, Pandora FMS is a flexible monitoring tool that allows cross-platform integration quickly and easily, with its plugin library able to solve any issue you may have. Everyone has access to the Open Source library, plus anyone can develop their own plugins and upload them for communal perusal and deployment. Enterprise version users also have an Enterprise plugin library, focused on business applications.

Issues can be created for any project, with its corresponding roles, workflows, custom fields, etc. Alternatively, you can attach a new type of issue to an existing project and bring your Pandora FMS instances under control, for example. JIRA’s potential is almost infinite. Any alert triggered by Pandora FMS and configured with the issue-creation action will have a corresponding ticket on the tool. This is particularly useful for technical staff who find themselves sorting out infrastructure problems, as the issue is automatically assigned to their incident queue, keeping everything centralized. It will also be useful for report generation on JIRA, since the incident data feeds other add-ons like eazyBI or Power Report.


UX monitoring in a transit company: a Pandora FMS case history

July 20, 2017 — by steve0


ux monitoring

9:00 a.m. on a beautiful Monday morning. The scene: a board meeting of a transit company.

– I’ve been going over the numbers for the last quarter, gentlemen, and let me tell you we find ourselves up against the same problem as always – begins the CEO – a couple of our provincial offices are complaining again about how slow the reservation application system is to access. Six months, gentlemen, and the IT department is still working on it. Why I oughta…I’m just about running out of patience here boys, I really am. Why, if I told you that just those two offices were responsible for 10% of our passenger volume and 15% of product transit, well…

The CIO spoke next.

– Well, it’s complicated. There are so many factors that I don’t know quite where to begin. My guys tell me that there is just no reason why those branches should be making such a commotion about the application, when not a single other branch is saying anything. We already checked all the systems involved – two times! I even sent my best man down there to those offices, in person, and see if he could find anything, but no. Nothing. Charlie. You know Charlie? Well, he came back scratching his head, saying there was nothing strange going on.

Director of Customer Attention:

– We’re also getting some pretty negative feedback from the customers themselves via the website.

-Well, if “you” don’t want to be “ex” employees – the CEO joked, steelily – you need to improve our UX.

The meeting went on, coffee went cold. Sandwiches went stale and yellow at their edges. The CIO left the meeting tasked with coming back with answers. Finally the transit company decided to install a monitoring platform: Pandora FMS. And now, speeding forward in time we can see what Pandora FMS has done for them.

  • Firstly, the CIO will never have to pronounce the dread phrase “the guys in the department tell me…” Pandora FMS comes with a powerful report system, which can tailor the information it contains to the specific profile of the reader – whether they are technical or non-technical personnel. The CIO can now present detailed reports on the status of the systems in real time, or with historical data; as up-to-the-minute as you want them to be.
  • Secondly, Pandora FMS can create a service map showing all related elements, both internal and external: servers, routers, databases, communication lines, OSs, virtual systems, etc., plus indicate which elements are functioning outside their configured parameters.
  • Finally, Pandora FMS v. 7 comes with a UX simulator, which launches recurring procedures that mimic customer activity on your website (buying tickets, making reservations, looking up information…). The UX monitor measures response times during every phase of the operation, from outside your own website, recreating the user’s experience exactly.
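The per-phase timing idea behind a UX probe can be sketched as follows; the two steps are stand-ins for illustration, since a real probe would drive an actual browser session against the website:

```python
import time

def time_phases(phases):
    """Run each (name, callable) phase of a simulated transaction and
    record its elapsed wall-clock time, the way a UX probe measures
    every step of a user journey."""
    results = {}
    for name, action in phases:
        start = time.monotonic()
        action()
        results[name] = time.monotonic() - start
    return results

# Stand-in steps; each sleep fakes the latency of one user action.
timings = time_phases([
    ("open_home", lambda: time.sleep(0.01)),
    ("search_route", lambda: time.sleep(0.01)),
])
```

Comparing these per-phase timings over days and weeks is what lets you pinpoint which step of the transaction degraded.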

The CIO’s conclusions?

  • The internal complaints coincided with drops in bandwidth, incompatible with the contract with the provider. Immediate conclusion: review all contracts with the provider, show them the data, seek redress and find a new provider.
  • Some specific complaints were unfounded, subjective opinions rather than demonstrable facts. A quick cross-reference with other operators showed that conditions were the same for all staff, and any perceived decline in communication quality was purely subjective.
  • The question of the website was trickier. A bottleneck was found, affecting response times, located between the database that supported the application, and the application itself. Monitoring demonstrated that no components of the network were saturated, but that still some queries were taking a long time to resolve. The CIO handed the matter over to the internal developers for them to review the application with the supplier.

Pandora FMS is the CIO’s perfect backup for difficult meetings, a real-time tool for meeting any challenge raised by other departments. In the IT department itself, system errors are now quickly identified and solutions found in record time.

If you want to read more real case studies, please visit our website.


What’s new in Pandora FMS 7.0 NG 707

July 17, 2017 — by steve0


whats new pandorafms 707

Pandora FMS 7.0 NG, package 707, contains numerous functional improvements and visual upgrades. Here’s a selection of some of the most important changes on the latest version:

Visual improvements

  • Better group display options on different console views.
  • Better visuals on Netflow display and Metaconsole.

whats new pandorafms 707

  • The visual design of the new events sound console has been cleaned up, and it now also displays the agent that generated the event.
  • The service view design has been overhauled, and made easier to use.
  • The value of the “unit” field now updates correctly when policies are modified and/or the data on said module changes.
  • Text backgrounds have been shaded in to provide a better viewing experience.

whats new pandorafms 707

Other improvements

  • Agents’ detailed view now incorporates the previous 24 hours’ event history:

whats new pandorafms 707

  • Better authentication protocols to avoid XSS vulnerabilities.
  • Custom SQL queries can now be run against the historical database to produce reports.
  • The process that updates the agent cache on the Metaconsole has been optimized.
  • Improvements to log file size limit, when displayed on the console.
  • “Header” parameter option added when creating a webserver-type module.
  • Trap search results on the SNMP console are now more informative when no existing traps coincide with the search.
  • Agent fields can now be customized, adding and modifying fields such as encrypted credentials, and displaying the data on screen with hidden characters.
  • The events creation and validation plugin now includes the following upgrades:

whats new pandorafms 707

    • URLS included when adding critical/warning/unknown-type actions.
    • Data sending and JSON custom fields improved.
    • A new event can validate previous ones with the same ID Extra.
    • Agents associated with an event can now be selected by name, without using the numeric ID.
    • Automatic agent creation, if not specified otherwise on the event.
  • View modules with no assigned alerts and create alerts from the same display.

whats new pandorafms 707

Problems solved

  • “servername” now allows upper case characters.
  • A problem generating network maps on Windows servers running Pandora FMS has been fixed.
  • SNMP trap alert macros have been fixed.
  • Problems with the agent plugin “pandora_df_used” when detecting network volumes have been fixed.
  • The “pandora_db” plugin displayed a warning when the server name was not defined in the server’s config file. It now takes the machine’s default hostname to avoid the warning.
  • Data graphs over six months old were buggy, due to a modification on the X axis. Fixed.

whats new pandorafms 707

  • Synchronization of data server module status when modified through the console has been fixed.
  • Plugin server modules related to the macro that assigns the agent its alias have been fixed.
  • Agent config file plugins suffered write errors after applying an agent plugins policy. Now corrected.
  • Search filter and page change bugs have been fixed.
  • System time modules generated by the SNMP wizard now display correctly.

Download Pandora FMS

Download the latest, fully updated version of Pandora FMS from the download section of our website:


Network administration in IT companies: 5 steps to success

July 17, 2017 — by steve0


network administration

Network administration in the IT sector is a Sisyphean task, an uphill battle to deal with new technologies, ward off cyber attacks, keep abreast of updates, and keep the tubes clean.
Maintaining a network at optimum performance, keeping in mind its cost, its evolution, and daily monitoring, can add up to a steep bill for headache pills. To save you a bit of cash on Advil, we’ve come up with five guidelines to help with network administration:

1. Select your material and human resources
2. Know your network
3. Know your devices
4. Be client-facing
5. Constant revision and evaluation

1. Choose the best resources, human and technological

Competently overseeing a network requires training, and if it’s backed up by official certification, so much the better. When you find the right person, they need to be trained in the specific tool your organization uses.
A tool like Pandora FMS is going to simplify your network administration, and give you a heads-up when something goes wrong. Pandora FMS’s visual components enable you to see all its operations at a glance, and on a single screen.

2. Know your network

As Francis Bacon declared “Scientia est potentia”, knowledge is power. Network mapping gives you that power, and the power that we are dealing with here is immeasurable: The power to understand the capacity, needs and resources of your network, and to administrate the hell out of it.

Your network will scale with the size of your business, and, like precious, byte-based snowflakes, each one is different, but you still need to be aware of its operational protocols and capacities. And don’t forget the Internet of Things, which gives you a window on a wider world outside your immediate office network, allowing you to geographically track vehicles, devices, cell phones, and anything else related to your business activities.

network administration

Network maps are your best friend for so many reasons that it’s surprising they don’t have their own Hallmark card:

– Detect/correct bad network behavior.
– Allow you to streamline your resources, and wring the last drop of MBs from your network.
– Reduce costs by controlling expenses.
– Let you know the geographic location of resources in a data center/server farm, for example; an invaluable help for the on-site technician.
– Better network security.
– Maintain quality control, with graphs, reports and reams of figures, accurate to four decimal places.
– Control updates and network patches, and avoid service interruptions.

3. Know your devices

Network administration now spans more fields than ever before, mixing signals and images packaged together, voicemail with data services, and different types of network – LAN, WAN, MAN – all employing different OSs and protocols.
A good sysadmin should know the devices present on the network and adapt working practices to the environment, understanding how each component operates in order to monitor them all at maximum efficiency.

Technology is here to stay

Anyone who thought that the Internet was just a new kind of hula hoop or spacehopper – here today, gone tomorrow – probably feels pretty stupid right around now. Internet is here to stay, at least until The Big One hits, so get used to it. Furthermore, the Internet has migrated to a plethora of network-enabled devices, grouped together under the umbrella term, The Internet of Things, implying the need for new protocols, security, hardware, OSs, etc…

4. Client-facing. Always.

Real products for real people, real sysadmins overseeing real networks.
Pandora FMS is designed to be a multi-functional workhorse, with enough grunt to tame the wildest beasts on your IT setup. As a systems administrator, your task is to maintain the network and provide in-house and third-party support.

5. Review and evaluate processes

A network is a complicated organism; constantly changing, never the same, frustrating, full of data that must be sifted to arrive at a true understanding, and requiring a lot of care and attention.

Achieving successful network management is a non-linear process with a lot of ups and downs. But, with the help of a good monitoring system like Pandora FMS you’ll see quick and efficient changes in your network monitoring.

network administration

We hope you enjoyed reading about our tips for network mastery. If you’re interested in subjugating your network to your will, visit the Pandora FMS website.

Geek culture

Star Trek Enterprise Monitoring

July 14, 2017 — by steve0


star trek enterprise monitoring

I’ve always been a fan of science fiction, and, if pushed, a Trekkie. I’m fascinated as much by the technology as by the stories; by the philosophy as by Uhura. Those spaceships that seem to have been constructed not in a factory but by some kind of techno-alchemist; by the multicultural crew who always seem to know exactly what they have to do because they have the information they need to hand. I’m fascinated by the crisis suites, where colorful screens display deck plans of the ship indicating precisely and immediately where the problem is.

Is this wonderful future utopia the result of Big Data, the IoT or AI? Is the Starfleet crew merely a motley of interstellar Devops in colorful uniforms?

As the CEO of my own company, one of my dreams is for everything to work as well as it does in Star Trek, with the difference being that, instead of seeking out new life and new civilizations, I prefer to seek out new clients and new sectors, and beam my technology down to solve their problems. I also dream of those screens that display damage reports during the thick of space-battle, the maxed-out Warp drive, or the amount of oxygen remaining before the crew breathe their last in the dark abysm of outer space.

star trek enterprise monitoring

The mere idea of having one of those screens in front of me on the bridge of the Starship Enterprise…err, I mean, my office, and being able to see client orders in real time, incidents ordered according to their origin and the level of truth-bending in SLA agreements, fills me with joy. I’d also love to see a flashing green or red light next to the name of the redshirt responsible for each hull breach incident and be able to push a button that generates a PDF containing all the necessary information to give him before sending him down to fix it, laptop between teeth.

The perfection of the Starship Enterprise is its marvelous Warp drive and integrated IT systems. Who knows whether space ships in the 24th century will have Windows or Linux installed, but whichever OS Starfleet uses, it’s quick to alert when pressure builds up in the main cryo-pump and the Warp drive starts to overheat. Apparently, Java has no place in the future utopia of the United Federation of Planets.

However many zettabytes of data the Enterprise generates, it never seems to ruffle its captain, meditating silently in his chair on the bridge. Of course, he has all the information he needs at his fingertips (once again, those delicious screens!), and doesn’t need Scotty sending him reports full of technical jargon. Instead he gets information and updates from the ship itself. Only when a Klingon attack vessel leaps out of subspace, or an attractive, blue-skinned alien princess starts singing does he break a sweat.

Maybe this is the secret of the United Federation’s harmonious functioning: its leaders and commanders receive the information they need as and when they want it, with the minimum of fuss.

star trek enterprise monitoring

Only two score and ten years ago the Tricorder was the height of our technological fantasies, the future-dream of our younger selves (when it wasn’t our very own lightsaber), while nowadays we nonchalantly tap our iPhones, between blasé slurps of skinny frappes, inured to the powerful and sexy infinity rectangle in our palm. As captains of industry, if not of spaceships, we need to enable our handhelds to give us real-time feedback on our business process statuses. We have the technology, we have the capability (sorry, wrong TV show). What’s stopping us from realizing our dreams?

star trek enterprise monitoring

There’s no need to get into bed with Big Data, and we’re still recovering from our affair with the Business Intelligence heartbreakers. The DevOps crowd wants to party and we’re still hung over from the Cloud. What’s going to be next? Total assimilation by the Borg?

Star Trek has been monitoring since the 1960s and it hasn’t done too badly. Problems have always been identified, located and fixed before the Warp drive ever shut down completely. It turns out that the ship could take it, despite Scotty’s familiar protests. We all have a screen in front of us, but where are the magic panels of the Enterprise, replete with the information we desire?

star trek enterprise monitoring

Monitoring is as old as the steam engine or records of the Sumerian harvests. We don’t have to wait until the 24th century to have access to the information we need about our business operations.

Let’s seize this ripe technological moment of interconnectedness to extract information in real time about client incidents, leads, payments, delivery problems, delays, IT infrastructure incidents, anything that impedes the correct functioning of your clients’ business operations. Why wait for the managers of the various divisions to report? I want to see information flowing across my screens. I want my monitoring cup to runneth over. Let it rain data!

In my office there are a couple of 60¨ TV screens that display the flow of incoming leads, show incidents reported by my clients and all the critical webs and infrastructure of my business. Nothing escapes my all-seeing eye, and if you think that sounds like science fiction, you’re right. I love sci-fi; that’s why I created Pandora FMS.


Monitoring Veritas Backup Exec with Pandora FMS

July 11, 2017 — by steve0


monitorizacion veritas

We recently published an article about general backup monitoring with Pandora FMS, and a more specific one about monitoring Bacula, one of the best-known open-source backup administration platforms on the market.

There are, of course, commercial alternatives too, and this time we’re going to look at Veritas monitoring. Veritas Backup Exec is another well-known commercial backup product that can be used in conjunction with Pandora FMS to increase your monitoring options.

As with Bacula, Veritas Backup Exec is designed for enterprise environments, and is made up of various components, the key ones being:

  • Server: in charge of executing backup tasks.
  • Remote admin console: enables management tasks to be performed by connecting to the various servers.
  • Agents: the systems on which backup tasks are performed.
  • Database: the SQL Server instance where data is saved and Backup Exec activity is registered.

In the Bacula article we spoke about different ways to approach monitoring this kind of backup tool. Possibilities range from monitoring services separately using software agents or network checks, to executing specific database queries to get more detailed information.

With Veritas we are taking another approach, proof again of Pandora FMS’s flexibility and the free hand it gives its users to customize the tool for exactly the kind of monitoring required in each case. This time it’s monitoring updates or notifications automatically generated by Veritas Backup Exec, sent to provide feedback on tasks that have been successfully completed, error warnings, and so on.

Pandora FMS has a plugin designed to check email contents according to certain parameters, which can be found in the official module library.

To use it correctly you first have to configure the Backup Exec email notifications, setting up an SMTP server and a sender’s address from which the notifications will be sent automatically. This sender field will later be used to set up the email monitoring filters:

veritas monitoring

In this case let’s use Sender name.

Next, configure the email addresses or contacts as targets of the Backup Exec notifications you want to deliver. Monitor this inbox to control the status of the backup functions:

veritas monitoring

Once you’ve set this up, activate the email notifications for the required tasks:

veritas monitoring

It’s also worth mentioning that Backup Exec can be configured, as we’ve seen, to send notifications on the basis of internal alerts. These alerts vary according to priority: Attention required, Error, Warning or Informative.

With this in mind you can focus your monitoring on any of the notifications you like.

Let’s look at how to use the email inbox monitoring plugin pandora_imap.

This is a server plugin, meaning the checks run remotely from the Pandora FMS server, without the need to install software agents.

Once it’s downloaded and transferred to the Pandora FMS server, enable the plugin from the web console. The following screenshots show how to correctly load the plugin using minimal parameters:

veritas monitoring

veritas monitoring

veritas monitoring

Once you’ve registered the plugin on the console you can start to use it as a module to perform the checks you like.

In the example we’re monitoring an inbox, searching for emails that contain the text string “Backup error”; the plugin will inform us every time it locates a match.

Configure the module like this:

veritas monitoring

Our example inbox has received two emails matching the parameters (“Backup error”) and the module has executed, displaying the corresponding value:

veritas monitoring
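Conceptually, the plugin’s job boils down to counting the messages in the inbox that match the configured filter text. The sketch below is a hypothetical illustration of that matching logic, not the plugin’s actual code: a plain list of message subjects stands in for the real IMAP mailbox, and all names and sample subjects are invented.

```python
# Hypothetical sketch of the matching logic behind an email check like
# pandora_imap: count the messages whose subject contains the filter
# text. A plain list of subjects stands in for the real IMAP inbox.

def count_matches(subjects, needle="Backup error"):
    """Return how many message subjects contain the filter text."""
    return sum(1 for subject in subjects if needle in subject)

inbox = [
    "Backup Exec Alert: Job completed OK",
    "Backup Exec Alert: Backup error on SRV-FILES01",
    "Backup Exec Alert: Backup error on SRV-SQL02",
]
print(count_matches(inbox))  # two notifications match the filter
```

The plugin reports this kind of count as the module value, which is what the module then displays on the console.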

It is worth configuring the module so that its execution matches the characteristics of your environment as closely as possible. For example, if you run backup tasks at night, and that is when the notification emails with the results are sent out, it doesn’t make much sense to have the module executing every five minutes. In that scenario, a single execution each day at 06:00 is a good solution.

Pandora FMS provides cron-style scheduling for module execution, restricting it to run only at the times you specify:

veritas monitoring


Bacula monitoring: keep your backup system safe

July 6, 2017 — by steve0


bacula monitoring

We’ve spoken before on the blog about the importance not only of backing up your data (and the various Cloud platforms available to do so) but also of monitoring your backup systems. A rock-solid monitoring setup will tell you immediately if there is a problem in your backup generation and, furthermore, let you anticipate issues before they turn into data extinction level events.

The exponential growth of IT systems puts all companies, large or small, at risk of seeing their IT assets, services and, most crucially, data compromised. Growth like this outpaces attempts to secure your systems completely, leaving them vulnerable to cybercriminals, so-called “black hats”, capable of causing problems throughout your IT infrastructure.

Apart from targeted attacks by malicious hackers, there is also the daily struggle with viruses, trojan horses and worms, capable of automatic, self-propagating attacks. The recent WannaCry outbreak, a strain of ransomware that has been a thorn in the side of a number of global companies and individuals, is a reminder of what’s out there.

For all these reasons, IT security chiefs are obliged to stay on their toes, as are the systems they administer. To do this, two components are imperative: backups and monitoring. These two safeguards work best when combined, and this article looks at how to monitor a backup system with Pandora FMS.

We’re taking Bacula as our solution for this case. Bacula is one of the best-known and most powerful open-source backup products on the market. The following tutorial assumes that you are already using Pandora FMS and Bacula, or that you think they could be a good fit for your present backup needs.

Bacula components

Before starting to monitor Bacula, a few words about how it works. The basic concepts are:

  • Director: the central server or component that executes the jobs.
  • Jobs: tasks, whether creating or recovering backups.
  • Bacula-fd: the file daemon, or client, which runs on the systems to be backed up.
  • Bacula-sd: the storage daemon, or file server, which manages the physical location of the backups.
  • Database: stores metadata on all the tasks performed.

With this basic outline in mind, let’s take a look at how to monitor Bacula using Pandora FMS.

Local Monitoring

Firstly, check that all the Bacula components are running correctly by monitoring them individually using software agents, i.e. installing a service on each machine to make sure all the Bacula services are up.

The software agent uses commands like the one below:

bacula monitoring

The command will vary slightly, depending on the component being monitored. The local module looks something like this:

module_name Bacula director status
module_type generic_proc
module_exec service bacula-dir status 2> /dev/null | grep active | wc -l
module_description Check if bacula-dir service is up and running

Also included are basic checks to ascertain the machine’s overall status: CPU, memory and disk use. The Pandora FMS console displays the results like this:

bacula monitoring

You can monitor the most essential Bacula components like this, and get total control over anything that could affect your backups.

Once you know the status of your machines and services you’re ready to extend your monitoring with the Bacula monitoring plugin. This simple little agent plugin internally monitors the MySQL database, mining the metadata saved from the previous backup tasks for useful nuggets.

With the agent monitoring the MySQL database from inside it’s easy to extend the logic to include a higher number of checks and more granularity, as well as configuring the output to display timers showing hours, days, etc.
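As a sketch of the kind of catalog query involved: Bacula records one row per job in its catalog’s Job table, whose JobStatus column holds 'T' for jobs that terminated correctly. The example below is illustrative only; an in-memory SQLite table stands in for the real MySQL catalog, and the table is simplified to the columns the query needs.

```python
# Sketch of a catalog query such an agent plugin could run. An
# in-memory SQLite table stands in for Bacula's MySQL catalog;
# JobStatus 'T' marks a job that terminated OK.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Job (Name TEXT, JobStatus TEXT, EndTime TEXT)")
conn.executemany(
    "INSERT INTO Job VALUES (?, ?, ?)",
    [
        ("nightly-files", "T", "2017-07-06 02:10:00"),
        ("nightly-db",    "E", "2017-07-06 02:45:00"),  # job with errors
        ("weekly-full",   "T", "2017-07-02 03:00:00"),
    ],
)

# Module output: number of jobs that did NOT finish cleanly.
failed = conn.execute(
    "SELECT COUNT(*) FROM Job WHERE JobStatus <> 'T'"
).fetchone()[0]
print(failed)
```

A module built around a query like this would raise an alert whenever the count is non-zero.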

Remote monitoring

bacula monitoring

With a little more Bacula knowledge, it’s possible to set up remote network checks for certain components.

Analyzing how Bacula works you can see that the different components communicate among themselves via specific ports, which we can make use of. Using remote TCP checks launched from your Pandora FMS server you can monitor services and check they are running in each of their corresponding locations:

  • bacula-dir: port 9101.
  • bacula-fd: port 9102.
  • bacula-sd: port 9103.
  • MySQL database: port 3306.

Bacula and MySQL use these ports by default, and we’ll also use them in this case, although they can be modified in the corresponding config files.
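The same idea can be sketched outside Pandora FMS with a few lines of Python. This is an illustration of what a remote TCP check does, not Pandora FMS code; the hostname is a placeholder, and the ports are the defaults listed above.

```python
# Minimal sketch of the remote TCP checks described above: try to open
# a connection to each Bacula port and report up (1) or down (0), much
# as a TCP network module would. "bacula.example.local" is a
# placeholder hostname.
import socket

BACULA_PORTS = {
    "bacula-dir": 9101,
    "bacula-fd": 9102,
    "bacula-sd": 9103,
    "mysql": 3306,
}

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, port in BACULA_PORTS.items():
    status = 1 if port_open("bacula.example.local", port) else 0
    print(f"{name}: {status}")  # 1 = service reachable, 0 = down
```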

Pandora FMS’s remote checks come in handy here for verifying that all the ports are open and active. Once the modules have been created their configuration looks like this:

bacula monitoring

In the previous example we used port 9101, corresponding to the Bacula-director, and we continue in the same way for the other ports, maintaining remote oversight of all running services.

Remote checks look like this on the Pandora FMS console:

bacula monitoring

Following best practice, the module names stay close to the defaults, which makes filtering simpler and locating the remote checks quicker when verifying that the Bacula services are running correctly:

bacula monitoring

Another way to remotely monitor Bacula, without having to install agents, is through its web console, where statistics and metrics related to tasks performed by Bacula are displayed.

The management console enables you to get information on automatically generated Bacula reports, using Pandora FMS:

bacula monitoring

Viewing general backup task data:

bacula monitoring

These are just some examples of the many options that Pandora FMS web monitoring enables.


Beyond the methods described above, Pandora FMS is known for its flexibility, meaning you can adapt your monitoring to your system’s or company’s needs. Other options include monitoring logs for exit codes, error entries and warnings; scraping the Bacula web console for all kinds of metrics; watching how much space your backups occupy on the Bacula-sd server; or checking connections among Bacula services using netstat.

Pandora FMS gives you complete control and oversight of your backups, with graphs and reports to see whether any problems have occurred, how frequently, and the trend over time. Furthermore, with predictive modules you can monitor the filesystem and estimate how long it will take to fill up, giving you an extra heads-up when making decisions about whether to increase space or clean up old backups:

bacula monitoring


SLA reports: monitor with Pandora FMS and avoid problems

July 3, 2017 — by steve0


informes sla

With more companies outsourcing specialized, or simply tiresome and unrewarding, work to third-party providers, SLAs are more common and in demand than ever, principally in the world of IT service management, networking and telecommunications. These can be detailed legal documents covering diverse metrics that are impossible to monitor manually, giving rise to the need for reliable SLA reports. Pandora FMS has your back when it comes to producing reliable, automatically generated SLA reports.

An SLA (Service Level Agreement) is, as the name suggests, a commitment to provide a minimum level of service, as agreed between a client (who ultimately delivers the service to end users) and the third party who assumes responsibility for an aspect of that service. An IT network SLA will probably include metrics such as total availability time, availability during specific periods, response times, and more. Obviously, without technological help, these metrics are impossible to monitor, which is where Pandora FMS comes in.

How can Pandora FMS help? Whether you are a service provider or a client, Pandora FMS is designed to be flexible enough to set up any kind of check and provide real, up-to-the-moment data on the status of services received or offered, benefitting both SLA parties.

Using the data collected, Pandora FMS can produce many kinds of SLA report, and perform calculations automatically as well as display values precisely, graphically and in a manner that is easy for the reader to understand. That means you don’t need a technical background to look at a Pandora FMS SLA report and see clearly where your service is operating correctly or any areas where SLA metrics are not being met.

Here’s a practical example of Pandora FMS monitoring a server, checking for both availability and latency:

sla reports

Pandora FMS will automatically create an SLA report that shows availability as a percentage and also if the latency rates are returning the agreed values. It’s as easy as configuring the values agreed upon when drawing up the SLA.

sla reports

In the “SLA Min.” and “Max.” fields the accepted values for minimum service are indicated, and in the “SLA Limit” column the percentages agreed between the client and the provider are entered.

Using this configuration allows total flexibility when it comes to fine-tuning Pandora FMS’s availability reports, and permits all the requirements of your SLA agreements to be monitored for full compliance. Furthermore, any kind of check carried out by Pandora FMS can subsequently generate a report automatically, giving users the power to measure any SLA metric imaginable.
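The arithmetic behind an availability-style report is simple enough to sketch. The example below is illustrative only (the thresholds and latency values are invented, not Pandora FMS defaults): it computes the percentage of checks that fell inside the agreed range and compares it with the SLA limit.

```python
# Sketch of an availability-style SLA calculation: the share of checks
# whose value fell inside the agreed [min, max] range, compared with
# the agreed SLA limit percentage. All figures are illustrative.

def sla_compliance(samples, sla_min, sla_max):
    """Percentage of samples inside the agreed [sla_min, sla_max] range."""
    ok = sum(1 for v in samples if sla_min <= v <= sla_max)
    return 100.0 * ok / len(samples)

# Ten latency checks in ms; agreed range 0-100 ms, SLA limit 95%.
latency = [40, 55, 38, 120, 47, 62, 51, 45, 49, 58]
pct = sla_compliance(latency, 0, 100)
print(f"{pct:.1f}% in range; SLA limit 95% {'met' if pct >= 95 else 'NOT met'}")
```

Here one check out of ten (120 ms) misses the range, so compliance is 90% and the 95% limit is not met; this is exactly the kind of verdict the generated report shows at a glance.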

Once the report is configured the values are calculated and displayed in real time, providing granular detail in order to analyze if the terms of the service contract are being met:

sla reports

Apart from these custom reports Pandora FMS can also display data in detail in other formats, and is able to generate reports quickly for use in meetings, as supplementary proofs in quality control audits, or to send by email and according to a programmable schedule to inform relevant parties on the status of contracted services.

Different report formats:

    • Monthly: Allows the reader to see at a glance all the checks they want, according to the relevant month or months.

sla reports

    • Weekly: Displays weekly data within a calendar month, grouped by week, with each group containing as many days as that week actually has within the month. For example, the following screenshot shows the first week of April containing only two days, following the logic of the calendar.

sla reports

    • Hourly: Gives an hourly breakdown within the month in question, permitting fine granular detail and control over SLAs.

sla reports

Different types of SLA reports will give the user visual feedback on checks, and total time spent in each state, such as total time in incorrect status (SLA unfulfilled), total time in correct status (SLA fulfilled), the number of checks returning incorrect values, and SLA percentages, both total and individual, for each group of days, weeks, or months.

Furthermore, the Pandora FMS Enterprise version can export PDF reports, with a level of customization that lets them adopt any corporate colors or logos necessary to create a unified corporate image.


Manolo v1.0: single task monitoring systems

June 29, 2017 — by steve0


In a small provincial building society, the kind that were made extinct by the 2008 financial crash and by bad management, there worked a certain Manolo.

In reality, he was an IBM employee, and he was there for basically two reasons: he was a recent graduate, with good grades and eager to start gaining professional experience…at a rock-bottom salary point. Plus, he lived two doors down from the building society’s data center.

A match made in heaven.

He soon got the hang of working at the building society, where his tasks were many and diverse – checking that the x86 servers responsible for email were functioning correctly, registering or unsubscribing clients, and maintaining the servers that handled virtualization. There were, in fact, only two of these; the data center was small and there wasn’t much running on x86. He had some networking knowledge, so he ended up taking care of the network as well, registering new devices and routes.

Slowly but surely, and surrounded by country living technophobes, Manolo became more essential by the day.

He was also the chief go-between when it came to dealing with the city slickers at head office. Security, communications and the office network were all controlled from the Capital of the Kingdom, via the building society’s Mainframe. Whenever a technological intervention was required, there was our humble techno-Stakhanov, telephone hotly pinched between shoulder and ear, slaving deep in the data mines for the greater good.

Sadly, Manolo was not a Stakhanovite automaton and he soon began to dabble in a spot of benign hacking. It’s one thing to attend to an incident during working hours; it’s quite another to be awoken like a detainee at Guantanamo at three o’clock in the morning and dragged like a common refugee across the border of sleep, whenever the ATM network goes down. When it’s the director himself who summons you during the wee small hours to a crisis meeting because he CAN’T OPEN AN EMAIL ATTACHMENT, this, my friend, is the end. Thus began Manolo’s slippery slide down the security slope, and he started providing support from home using a remote connection unknown to his fellow workers. He also put in place an open-source monitoring system. Without authorization, sure, but it made his life a little easier.

Three years went by. Three harvests were collected in the sleepy Spanish town. Three vintages were ripened, matured and bottled in the strong Castilian sun. Births, deaths, marriages, harvests – the slow, regular rhythms of country life. During this time our hero Manolo attended planning meetings, met with suppliers, but kept no records. The network, services, applications, all were there on the servers, but only Manolo knew them, their location, their installations. No one else was interested in this arcane lore, no one asked and no one authorized.

As you can imagine, this story doesn’t have a happy ending, at least for the client. Like Macbeth, Manolo’s ambition grew. First he desired to formalize his relationship via a legal marriage contract, according to the custom of his people, and also to emancipate himself from his progenitors by becoming a property owner. So he opened negotiations with his boss in Madrid, and the answer always came back unaltered: the contract is non-negotiable, operational overheads are calculated to the last cent, when the next round of negotiations begins, then maybe… A tiny idea began to grow in Manolo’s mind. Not to be the next King of Scotland, but to find a position where his talents would be economically compensated. He started a discreet job hunt, began attending interviews during work hours, and updated his CV with his new skills.

Not long after a new job came his way: more rewarding, more challenging, better paid. Thus began what we might term “The Plumber Effect”. That is, what’s the first thing a plumber says when you call him to your house on Friday afternoon? “Who did this work? What a mess, that’s all going to have to come out”.

Don’t get me wrong, Manolo was a good professional, he just had his own way of doing things. Nor were the employees of the building society bad at their jobs. Why should they worry about putting out fires if there weren’t any? IBM had its best practices, but it relies on its employees and subcontractors to carry them out.

When the plumber starts talking like this, you know it’s time to ante up. Want to get to the other side? Pay the ferryman, and shut up.

The moral of this story is valid for most areas of IT, and that’s no different in our own area of expertise: monitoring.

1. If you develop a monitoring system ad hoc, without professional support from the manufacturer, then instead of having a configurable tool you’ve just signed a blank check. Only a single human in the whole wide world is going to be able to disentangle this particular Gordian knot, and they’ve just gone to another company.

2. If the monitoring system you’ve chosen makes it compulsory to install extra software on the servers you need to be 100% certain of what you’re installing. Just as it can extract data it can also input it. Or maybe you didn’t know that in order to extract information from an OS the easiest way is with the administrator’s password?

3. A professional monitoring tool shouldn’t only give you feedback on what’s happening with your network, but also on what has already happened, when it happened and who’s been affected.

4. A tool like Pandora FMS incorporates a knowledge base that gives you info on all the elements that make up your network.

5. As Dr. House says, everybody lies, and the first one to do so is the patient. Are you really going to trust the health and well-being of your network to a “Manolo”? A monitoring system must incorporate reports that let you know the status of all your systems, applications and services at any time, preferably at a glance, tailored to the profile of the user consulting them. The manager may not need to know the exact status of the servers, but the head of IT would like that information, and if it can be sent to a mobile device when the person in question is out of the office, what’s not to like?

Finally, by way of a happy ending, Manolo is now married, living in the capital, working for a large company and doing a job he enjoys. There’s even a baby on the way, to round out this cozy portrait. Let’s draw the curtain on this uplifting domestic scene, but not before remarking that the only grit in Manolo’s ointment is the occasional but regular phone call from his old colleagues at the building society asking for “a small favor” when they can’t find a contract for a license, or the support number for a certain piece of hardware, or how to get the director back into his email account the umpteenth time he forgets his password (hint: it’s “password123”!). Patience, Manolo…


Monitoring isolated networks: Sync server

June 26, 2017 — by steve0


sync server

One of globalization’s many effects has been the creation of IT outsourcing, and the rise of countries such as India as global players in IT services. Extending networks out into cyberspace has created both the need and the possibility for machines to be remotely controlled, whether that means domotics (home automation: remotely controlling the temperature or lighting of your home while you’re away, for example) or fixing a bug on a machine on another continent.

Companies now have their resources, IT and otherwise, geographically distributed: in different branches or offices, subcontracted in the Cloud, and using local or global support services. Pandora FMS is right there in this global mix of interconnected and isolated networks, contributing technological solutions to global business and tech issues with new features and functions, including the sync server introduced in Pandora FMS version 7 “Next Generation”.

The sync server has been designed in response to demands for monitoring secure isolated environments without outgoing connections. These restrictions mean that communication must be initiated from an external network. This is an important distinction as a bidirectional open network can employ a satellite server or a proxy without any problem.


The sync server system deploys monitoring across isolated networks that cannot communicate on their own with the main Pandora FMS server or its MySQL database.

This function allows software agents to be deployed, and network checks to be carried out, against remote networks that are isolated from the Pandora FMS central server. A connection point is installed, from which the Pandora FMS central server collects the information from the isolated network. This will be the only channel of communication; the rest of the network remains secure and isolated and never needs to initiate communications to the outside.

sync server

Imagine you have two datacenters in two different countries, one in Europe, the other in Asia. Both are secured and, due to the danger of cyber attack, communications cannot be initiated from the Asian office to the Pandora FMS server in Europe. Since the monitored servers have access to sensitive material, we’ll use this function so that all communications are initiated from the Pandora FMS server in Europe.

Another possibility is to maintain a DMZ under monitoring to make sure that no connections are made between this network and your internal network. Doing this will increase security, avoiding man in the middle and network poisoning attacks.

Extending monitoring in this way, regardless of location and without sacrificing security, offers a powerful new way to scale your monitoring.

Thanks to the sync server’s ability to combine with other functions, monitoring isolated networks can be performed together with a satellite server and the proxy and broker modes of the Pandora FMS agents.

sync server

The above outline shows an example of a distributed environment that could be monitored by combining various Pandora FMS functions. The possibilities are endless when it comes to adapting to different topologies.

Operational overview

On the remote network a Tentacle server is installed as a communication point, which also receives the information from the software agents installed on your network devices. You can install a satellite server on the same node to execute remote checks against any network device.

All data is transferred to the Pandora FMS central server via this communication point: the central server initiates a connection and retrieves the information collected since the last time a connection was made.

What distinguishes the sync server from the satellite server or a Tentacle proxy is that communications are always initiated from the Pandora FMS server; neither outgoing communications nor the sending of packets from the remote network is permitted.


Before getting on to configuration, make sure that both the Pandora FMS server and the Tentacle server installed on the network are updated to version 7 of Pandora FMS.

On the Pandora FMS server modify the pandora_server.conf file with the following parameters:

syncserver 1

And in the remote Tentacle server startup script add only the two parameters “-I -o”:

TENTACLE_EXT_OPTS="-i.*\.conf:conf;.*\.md5:md5;.*\.zip:collections -I -o"

The sync server environment also supports secure SSL communications; in order to configure this, add some additional parameters to the previously mentioned files.

In pandora_server.conf:

sync_ca /home/cacert.pem
sync_cert /home/tentaclecert.pem
sync_key /home/tentaclekey.pem

In the Tentacle startup script tentacle_serverd (on one line):

TENTACLE_EXT_OPTS="-i.*\.conf:conf;.*\.md5:md5;.*\.zip:collections -e /home/tentaclecert.pem -k /home/tentaclekey.pem -f /home/cacert.pem"


What’s new in Pandora FMS 7.0 NG 705

June 20, 2017 — by steve0


whats new 705

Presenting 705, the latest release package for Pandora FMS 7.0 NG, with numerous improvements, better-looking visuals and bug fixes. Here’s a list of the most important changes:

Improved functionality

  • Multi-origin macros. From now on you can use content from any agent module that generates alerts to include more information about the agent.
  • An additional tag filter in mass module operations to facilitate the operation in certain specific cases:

whats new 705

  • Improvements in mass Plugin operations.
  • ‘Strict ACL’ option eliminated from User menu.
  • New report item with Maximum, Average and Minimum values.
  • Improved free text search.
  • Configuration changes (event detection) collected in inventory data – such as installed software packages, network hardware configuration, routes, IPs or installed hardware – now include more detail on the change: a new item, a removed item, a modification of an existing item, etc.
  • Inventory list can now be ordered by agent.
  • Group view display is now in black and white when displaying total data (ALL) to avoid confusion.

whats new 705

  • A new search filter (by date) has been added to the SNMP console.

whats new 705


Better visuals

  • Better visuals on the events sound console. It now displays details of the event that triggered the alarm and the interface is cleaner and clearer.
  • Better Service Graphs displays.
  • Better inventory display.

whats new 705

  • There is a new option for displaying Module Graphs without compacting the image. You can now zoom in on full resolution for more detail.
  • Improved visualization of some Dashboard graphs that previously appeared flattened.

whats new 705

  • Improved Events filters: as well as including small improvements in predefined searches, it also allows custom event fields to be searched.

whats new 705



Bug fixes

  • Creating Modules in Policies sometimes gave problems: fixed.
  • Issues with User paging fixed.
  • Issues with Network Map auto-refresh fixed.
  • Exporting agents via CSV fixed.
  • The Target IP macro gave problems when creating policy modules: fixed.

whats new 705

  • Customizing the login page was problematic: fixed.
  • Agent names on Windows that included spaces gave problems. Use quotation marks to avoid the issue, e.g.: agent_name "Windows 2008 Server".
  • The thread_stack parameter in my.cnf (set by default on the Pandora FMS ISO) has been changed, as it affected MySQL behavior.
  • Content from an agent module can now be used in any alert to provide more information.

How to download Pandora FMS

You can download the latest version of Pandora FMS from the Download section of our website, or download the PDF with all the information by clicking here.

Cloud & VirtualizationTech

Virtualization and the Cloud: Round and round your data goes…

June 19, 2017 — by steve0


Virtualization and Cloud computing are revolutionizing the IT ecosphere and, like all revolutions, there are good and bad consequences and extra responsibility for the supposed beneficiaries. CEOs and CIOs are obligated to take decisions on the fly, in a protean environment where the technological foundation they stand upon changes more often than a teenager getting ready to go out on a Friday night. Burdened by too much information, and acting under pressure, strategic decisions taken in the techno-heat of the techno-moment can create unwanted techno-outcomes as a result of departmental decision-making.

A departmental decision means a decision taken adjacent to the core business, as pilot projects for road-testing new technologies. If you take your eye off one of these balls, the results for your infrastructure can be catastrophic, horrendous and definitely not good at all. Added to this is the wider working of your business: mergers, acquisitions, hirings and firings, restructurings, outsourcing and downsizing, refinancing and rebranding. The end result can be a multi-provider IT environment, with your Cloud supplier, SaaS provider, virtualization dealer, OSs and databases being a motley crew of incompatible head-bangers and princesses.

I’m a consultant for a number of start-ups and medium-size companies, plus a handful of blue chips, and one of my clients, an industrial group, is experiencing this authentic modern horror show. Their infrastructure is totally distributed, with four CPDs, virtualization provided by various suppliers, Cloud-based SaaS…and each system with its own monitoring tool providing oversight of critical functions. What they save on one hand, by employing these technologies, they are losing with the other, in terms of less control and overcomplicated administration.

The question is: where’s the little ball? Round and round and round she goes, where she stops nobody knows…Substitute the little ball for your IT services and/or data and you start to get an idea of the problem…If you don’t know where your applications are running or where your data is stored, how can you expect to be able to respond in case of an internal IT crisis?

  • Do you always know where your services are being executed? Or what hardware is supporting which virtual infrastructure? If it’s a single provider, like VMware, to name an example, then obviously you know. If it’s a single CPD, then it’s under control. But what happens when your infrastructure is complex, multi-provider and distributed?
  • A long-standing client of mine made the observation that they put a friendly Neckbeard in charge of their systems and networks. This individual was a rock star when it came to networks and system engineering; any question, at any time, and he had the answer. Unfortunately, this rock star was subcontracted, and when the contract expired, well, you can imagine the mess…

And what about the Cloud? I seem to hear you say. There are as many Clouds housed in Korean data centers as there are actual clouds in the skies of Montana, and they all make the same claims: keep your services isolated from those of other clients; better security; basic backup services; redundancy; high availability…But, try asking yourself these questions:

  • Are we getting the contractually guaranteed processing power we were promised?
  • Are we getting the necessary storage?
  • Are your files getting correctly backed up, and will they be available in case of your own systems going down?
  • Are the high-availability systems working correctly? Is the provider prepared for any collapse or attack on their systems? Where exactly are those backup systems anyway?

A year from now a new EU regulation, the GDPR, will be in place, placing some serious demands on data protection, including the information an organization is obligated to supply in case of total or temporary loss of data. Another obligation to keep in mind for when the regulation comes into effect.

Pandora FMS provides a unified solution, from a single administrative and information point, allowing users to identify inefficiencies, and overexploited or underused resources. The ability to take better decisions, in a nutshell. It allows its users to justify initiating new modernization projects, system integration projects, to make cost comparisons in function of the services they need to run in different IT environments. It minimizes operating costs by consolidating operators. To sum up, the keywords are: location, control and recovery. Know where your data is located, keep control of it and ensure you can recover it in case of need.

Network Maps with Pandora FMS: Creation, navigation and editing

June 12, 2017 — by steve0


The wait is finally over, and the seventh version of Pandora FMS, “Next Generation” has arrived to keep your networks in working order, and more. Now including UX monitoring, transaction monitoring, extra features and visual highlights, interactive network maps and events history. It’s difficult to imagine that there is a more powerful and complete monitoring software currently available. We’re excited to talk about the new network maps upgrades, but if you’d like to find out about the other new additions click here.

The glowing iridescent Pandora FMS Omni-Brain that directs the office hive-mind has instructed the developers to make changes to the network maps function, consolidating both Open and Enterprise versions into a single tool, all-in-one. It’s now possible to display network maps totally visually and dynamically, with greater interaction possibilities, and represent any kind of network topology, including manual L2 links. You can also view all and any sub-networks that your organization is running and/or maintaining, on- or off-site; create hierarchy relations allowing a greater level of topological detail than ever before.

Creating network maps

Network maps can be created from:

  • An agent group, if there are hierarchy relations between nodes in a group and these are going to be shown on the map.
  • A network mask, to define the boundaries of a sub-network.
  • Finally, one of the most usual ways: via self-discovery tasks. A reconnaissance task can be carried out to detect your network topology, respecting the connections and relations between nodes. Information at network interface level, including layer 2 relations, is presented automatically.

Keep in mind the relations between modules and agents to define the network topology you want to view.

In the following screenshot the available options for map-generating can be seen. You can select a group of agents (Group), a recon task (Recon task), or a network mask (CIDR IP mask).

network maps

If “Recon task” is selected the map design will show discovered nodes and any relations detected among them:

network maps

In a wider environment the perspective is going to be different: here you can see what a network map with more connected nodes would look like:

network maps

You can see how Pandora FMS connects to intermediary locations in the node diaspora. These locations usually correspond to routers, switches or access points.

Navigating network maps

Simpler than before, once a network map has been created you can move around it by simply dragging the mouse. Double-click or scroll to zoom in.

If you zoom in on a recon task-generated map you’ll see an image like the one below, allowing you to see relations between different map elements in more detail, including those at interface level.

network maps

It’s also easier to navigate map elements; simply drag and drop the elements, or scroll around the map at your leisure.

Editing maps

But there’s more; Pandora FMS 7’s maps are completely dynamic, meaning their default design can be modified, and elements displayed in the way that best suits the user. All intuitively and by simply using the mouse.

Double-click on any node on the map and you’ll see different edit options deployed, plus their relevant details. Likewise, you can now create, delete or modify relations between nodes and also their appearance.

network maps

To create a dependent relation between nodes or interfaces simply click on the node and create the relation by defining the parent and child element. You can also change the position of the node by dragging and dropping them on the map. If you need to move various nodes simultaneously, press “ctrl” and select the groups you want to move.

network maps

Right click on a node to deploy its options, see details or create a relation between two nodes at interface level, selecting the parent and child element respectively. Right click on a blank space to see the following options:

network maps

One of the most important labor-saving tools is the automatic generation of relations. This is possible thanks to self-discovery tasks, that allow relations between existing nodes to be automatically detected.

Last but not least, Pandora FMS 7 Next Generation includes the holding area. If you need to manually add new agents and relations to a pre-existing map, or if the recon task discovers new hosts, using the “refresh holding area” option will display nodes created or discovered subsequently in the “holding area”, and the original map will maintain its aesthetic, not being sullied with elements created a posteriori. Drag the new nodes out of the holding area and click refresh to see their corresponding relations.

In the Pandora FMS video “Network Maps” you can see everything we’ve explained in this article, and find out how to create, edit, and use a network map in a dynamic, graphic and easier way than in previous versions.

For more info, visit our website or our YouTube channel.



June 8, 2017 — by steve0


whats new 704

This package of improvements, while not incorporating major new functions, includes over 70 small changes and patches. The most relevant ones are listed below.

Functional improvements

  • Elements from the database history can now be included in reports, extending their functions and the capacity of any report element (in previous versions, they were limited to graphics).
  • CSV file fields are now customizable. Go to general setup to select this option.
  • New dashboard widget displays UX monitoring:

whats new 704

  • “Module templates” with blank spaces can no longer be created. This improves editing and facilitates policy maintenance.
  • New macro: _alert_unknown_instructions_ displays instructions for alerts triggered by unknown status, joining those triggered by critical and warning status.
  • The metaconsole sync button now forces license changes to the node.
  • MapQuest maps (an Open alternative to GoogleMaps) have been updated.
  • ACL predefined profiles (Standard user and Pandora Administrator) were unable to modify policy thresholds. This is now fixed.
  • You can reset forgotten passwords from the console. The system will send a link to the user to change their password. The console now has a specific configuration to manage mails sent from the console, which affects both planned PDF reports and the new password recovery system:

whats new 704

  • Administrators can now modify the default login page for all users.
  • A new config token, agent_alias_cmd, has been implemented on agents, enabling them to get an alias from a system command.
  • The Export Server now returns all module configuration parameters including thresholds, units, tags, etc..
  • Decimal places on module thresholds graphs can now be shown, if there are any.
  • When launching a Dashboard slideshow a paging list appears to simplify Dashboard selection.

whats new 704

  • Graphics containers: “Containers” can now be defined that allow combined or module graphics to be ordered and prioritized, adding optional dynamic rules that allow certain graphics to be incorporated automatically:
  • New macros included to allow use of additional agent IPs. 1) _all_address_ that displays all the agent’s IPs, and 2) _address_n_ where n represents the IP position you want to show.
  • It’s now possible to incorporate a predetermined filter in events view. Users can define the filter at the user detail editor page:

whats new 704

Visual improvements

  • Error popups during installation fixed:


  • Dashboard general view now includes paging.
  • Better menu display with strict ACL mode active.
  • The yellow traps console icon was invisible, so we changed the color.
  • Pandora FMS mobile login issue (special characters) fixed. Reported via GitHub.
  • Service Maps view on Dashboard now fits to screen.
  • Graph legends now take up less space, and display the information more compactly.

whats new 704

  • “Parent elements” on visual consoles had an issue with some map elements and text labels. This is now fixed.
  • Dashboards and visual consoles adjust better to fullscreen mode, avoiding showing scroll bars.

whats new 704

Bugs fixed

  • Agent/alias labeling confusion in duplicate agent configuration section fixed (previously the name of the agent appeared in the alias field).
  • On the SNMP traps console the internal agent name was displayed, instead of its alias.
  • Group dashboard assignation on user start screen fixed.
  • When unlinking from policy-applied modules the internal agent name appeared in the list of unlinked modules in policies. This is now fixed.
  • On the Open version the version being used wasn’t displayed at the foot of the page. Now fixed.
  • Logs viewer was displaying real agent name instead of alias. Fixed.
  • Fields on the export server related to massive operations now fixed.
  • The php5 packages for Debian are no longer a required dependency.
  • Small filtering issues with generating dynamic reports from templates now fixed:

whats new 704

  • AIX agent startup fixed.
  • Problems with agent broker mode containing asynchronous modules on a Windows agent fixed.

How to download Pandora FMS

You can download the latest version of Pandora FMS from the Download section of our website:

Backup monitoring: Ransomware and other malware mean monitoring is a necessity, not a luxury

June 6, 2017 — by steve2


backup monitoring

Why backup monitoring is fundamental

A backup, for those whose IT knowledge is a little rusty, is a copy of certain files stored in a safe place for reasons of security. This practice usually covers you in case of incidents or problems on your principal network, as the data is generally stored on an unassociated network or, more usually, completely offline. It should be a regular practice for any systems administrator, complemented by backup monitoring. Let’s look at some example situations.

Reason number one: security. With the example of Wannacry still fresh, alarm bells are ringing in every sector’s IT departments, making IT security a trending topic. This is one of the subjects that interests us in the present article, and its relation to backup monitoring. If we can’t avoid these frequent attacks against our infrastructure, at least we can take steps to mitigate the negative effects.

A little more context; what is ransomware? It’s malicious software designed to restrict users’ access to their own files, by blocking the OS, encrypting their data or locking their hard drives, and demanding a ransom for their safe return.

There is a multitude of viruses of this kind, propagated via trojan horses or computer ‘worms’, that invite the careless user to open an infected file or to click on a doubtful link. Wannacry is the virus on everyone’s minds right now, but this specimen was preceded by others, with names like Reveton, CryptoLocker, CryptoWall, TeslaCrypt, Mamba, TorrentLocker, etc.


Fundamental security recommendations are to maintain all your software and systems up to date, use a corporate antivirus, and obviously, not to trust any unsolicited URLs or suspicious emails.

Add to this the use of backups. These should be stored offline, on hard discs out of the reach of infectable hardware, so, in the case of attack, your data can be recovered and your business, organization or infrastructure affected as little as possible.

Despite backups being used ever more frequently, particularly at corporate and business level, it is still complicated to know if your data is being stored correctly, if there is enough space on the drives where the data is saved, if there has been a problem inside the anonymous black box, or if any one of a number of backup fundamentals has been omitted. Hence, backup monitoring.


Backups are created in diverse ways, depending on various factors; available technology, systems, capacity, requisites, company policy, etc. That’s why, far from being a trivial issue, every back up is a unique case, with its own complexities and inherent problems. Let’s take a look at some sample backup cases plus how to monitor them for maximum guarantees, ensuring that our precious data is available in times of virus crisis.

There are three basic types of backup:

  • Complete backup: a complete copy of all your files.
  • Differential backup: makes a copy of all new or modified files created since the last complete backup.
  • Incremental backup: a copy of all new or modified files created since the last backup of any kind. The optimal choice in terms of performance and disc space, as well as being the most widely used.
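The difference between the three types comes down to which reference point decides whether a file gets copied. A minimal Python sketch (illustrative function and parameter names, not part of any backup tool):

```python
import os

def files_to_back_up(root, last_backup_ts, mode):
    """Return the files a backup run would copy. 'mode' is 'complete',
    'differential' or 'incremental'; 'last_backup_ts' is the epoch
    timestamp of the reference backup (the last complete backup for a
    differential run, the last backup of any kind for an incremental)."""
    selected = []
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            # A complete backup copies everything; the other two copy
            # only files modified after the reference backup.
            if mode == "complete" or os.path.getmtime(path) > last_backup_ts:
                selected.append(path)
    return selected
```

The same selection function serves both differential and incremental runs; only the reference timestamp you pass in changes.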

Due to the particularities of how each backup is generated, a generic monitoring system applicable to all cases is currently unavailable. Despite this, Pandora FMS is flexible and customizable, and gives you options for covering most cases of backup monitoring, whatever the specifics of your case.

Search patterns in backup monitoring

Whatever kind of backup you use, or the method of its creation, it will be stored under certain directory and file naming patterns that can be used as references: name, date, time, version, etc.

This is due to the necessity of quick and intuitive access to backup files, which should be given easily identifiable labels, and which you can take advantage of to create the monitoring you need.
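For example, a labeling scheme along the lines of name_date_time_version can be captured with a single regular expression. The convention below is hypothetical, purely for illustration:

```python
import re

# Hypothetical naming convention: <name>_YYYY-MM-DD_HHMM_v<version>.tar.gz
BACKUP_PATTERN = re.compile(
    r"^(?P<name>\w+)_(?P<date>\d{4}-\d{2}-\d{2})_(?P<time>\d{4})_v(?P<version>\d+)\.tar\.gz$"
)

def parse_backup_name(filename):
    """Return the labeled fields of a backup file name, or None if the
    name does not follow the convention."""
    m = BACKUP_PATTERN.match(filename)
    return m.groupdict() if m else None
```

A name like crm_2017-05-16_0230_v3.tar.gz then yields its name, date, time and version fields directly, which is exactly the kind of reference the monitoring checks below rely on.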

The following is a practical case of monitoring backups that are saved on an FTP server with a system of specific labeling. We’ll use a remote backup monitoring plugin with the following parameters:

  • Host of the FTP server.
  • User of the FTP server.
  • Password of the user.
  • Path where the remote FTP files are located.
  • Maximum age in days for the checks. That is, Pandora FMS will check that backups have been stored correctly within the configured timeframe.
  • Minimum size that the backup files should have, keeping in mind the amount of data to be stored on a daily basis.
  • Regular expression to match backup file names.
  • Timeout, which cancels the execution if the connection cannot be completed.

In the screenshot below you can see how the parameters are used to perform backup file searches whose names match the regular expressions given on specific systems and paths. It also checks that the file size is correct and that the number of backup files created is also correct.

backup monitoring

Taking all these options into account you can set up very reliable backup monitoring thanks to Pandora FMS.
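As a rough illustration of how such a remote check fits together, here is a Python sketch using the same parameters as the plugin described above. The function names and logic are our own, not the actual Pandora FMS plugin:

```python
import re
from datetime import datetime, timedelta
from ftplib import FTP

def valid_backup(name, modify, size, pattern, max_days, min_size, now=None):
    """Decide whether one remote file counts as a valid backup: its name
    matches the regular expression, it is no older than max_days, and it
    is at least min_size bytes. 'modify' is an FTP MLSD-style timestamp
    (YYYYMMDDHHMMSS)."""
    now = now or datetime.now()
    if not re.search(pattern, name):
        return False
    modified = datetime.strptime(modify, "%Y%m%d%H%M%S")
    return modified >= now - timedelta(days=max_days) and size >= min_size

def check_backups(host, user, password, path, pattern, max_days, min_size,
                  timeout=30):
    """Count the valid backup files under 'path' on the FTP server;
    the result would be the monitoring module's value."""
    with FTP(host, timeout=timeout) as ftp:
        ftp.login(user, password)
        ftp.cwd(path)
        return sum(
            valid_backup(name, facts["modify"], int(facts.get("size", 0)),
                         pattern, max_days, min_size)
            for name, facts in ftp.mlsd()
        )
```

Separating the per-file decision from the FTP session keeps the check easy to test and to adapt: the same valid_backup logic would work unchanged with a local software agent instead of a remote FTP connection.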

backup monitoring

Examples like this show how simple it is to set up backup monitoring with Pandora FMS, employing different systems for collecting information. In this case a remote FTP server connection has been used, although installing a software agent to collect the data locally would be another option to achieve the same result.

Search terms will vary according to the specific case, and in the majority of cases will be based on the patterns mentioned above, using regular expressions and sifting the results.

Apart from monitoring the backups themselves, the hardware on which they are saved can also be monitored to check that everything is fine. Pandora FMS offers plenty of options for this kind of monitoring as well, whether you want to monitor using SNMP, an FTP plugin, or software agents to carry out regular local checks in more detail.

Thanks to Pandora FMS’s flexibility it’s possible to monitor backups in various ways, using different options to maintain control of your systems while keeping security tight.

Furthermore, in cases of security breaches and compromised systems Pandora FMS’s visual reports can give you essential feedback on where incidents have been produced. The following graph shows where a couple of incidents have occurred with the system of backup generation:

backup monitoring

There are many commercially available options for automated file backup. Most of them are powerful and can be complicated tools to use. In coming articles we’ll be looking in detail at how to use Pandora FMS to monitor dedicated backup systems such as Bacula or Veritas Backup Exec.


Conclusions that can be drawn from the British Airways crisis: the need for a monitoring system

June 1, 2017 — by steve1


crisis british airways

Another PR disaster for a company whose initials begin with ‘B’. After BP’s monumental “Deepwater Horizon” catastrophe, and Tony Hayward’s subsequent PR nightmare, BA boss Álex Cruz can apparently think of nothing better to do than follow in Tony and BP’s footsteps.

Of course, Álex probably has reason to maintain silence and the BA official line is that as simple a thing as a power surge was to blame for grounding 75,000 customers at Heathrow and Gatwick over the May Bank Holiday Weekend.

It’s difficult not to immediately think of a cyberattack, whether via ransomware or something more malicious, in these days when WannaCry is still affecting IT systems around the world. BA denies this strenuously, but then they would.

The fact is that, as a cost-saving measure, BA has outsourced many secondary IT functions (non-critical, according to BA). This raises the question of who is responsible for these services, and what quality control measures these third-party suppliers have in place. What regulations do they have to observe?

The answer to more outsourcing, externalization and downsizing is more vigilance and oversight. Monitoring consists in verifying that backup systems are working at all times, and that operations staff are meeting Service Level Agreements.

Sancho Lerena, Ártica ST CEO, manufacturer of Pandora FMS, explains how to avoid network problems which impact on your customers and your company’s brand by using a monitoring system. From Cinco Dias 01 June, 2017.

Cloud & Virtualization

Hybrid Cloud: where does your data go?

June 1, 2017 — by steve0


hybrid cloud

I wandered lonely as a hybrid Cloud, Wordsworth might have mused. But he would have been mistaken, as the hybrid Cloud has two close companions, the public and private Clouds. Nevertheless, hybrid Cloud is an evocative expression to conjure with, and the IT sector is a sector of accidental poetry; no one intended for code to be beautiful, but seen from the right angle it can have the same elegance as a line of Tennyson’s. It’s a world that’s based on metaphors (virus, streaming, link, search engine, ping, window…etc) that help people to understand the effects of zeroes and ones endlessly combining and recombining to create the digital spaces we now inhabit.

What is the Cloud?

The Cloud is a metaphor for multiple-floors of rows upon rows of banks upon banks of industrially air-conditioned servers housed inside enormous and anonymous data centers located in suburban industrial parks or impersonal office districts. Nothing very poetic about that, but it begs the question: Why the choice of ‘Cloud’ as a metaphor? “Data” in this metaphor is water, the element that nourishes our digital lives.

Still, we’re not here to talk about our confusing IT metaphors. The humdrum everyday reality is that the mind-boggling quantities of data we are producing every second of every minute of every day require more and more capacious servers on which to be stored, so that application and service processes can be executed more quickly, saving us time and latency. The Cloud is accessible from anywhere with an Internet connection, allowing you to transfer or locate data at any moment.

Cloud processing also frees up a device’s own resources, giving access to online programs without the need to install any software on your own hardware. Smartphones are a good example: millions of images are uploaded to the Cloud every day, keeping the devices free of cumbersome data such as photos or videos, and giving users immediate access to their material.

What is the public Cloud?

A Cloud service open to the general public, defined as such to differentiate it from the private Cloud, which is a later development. It’s the basic version of the Cloud, hosted by a third-party provider, usually free of charge, and currently used by a long list of companies. The downside of this model: what happens if your provider leaves the market, or decides to start charging for their services? What happens to your data then?

And the private Cloud?

Exclusive to the company, organization or business that makes use of it, composed of external or internal services, and always administered by the entity itself. It’s a more expensive option, but your data is more secure. The drawback, of course, is the cost, along with the danger that the provider can exit the market for whatever reason.

The solution to these possible situations is the hybrid Cloud.

So, what is the hybrid Cloud?

A combination of the best of both models described above, maintaining both internal and external providers. On one hand, it provides the same services as the public cloud, while on the other being essentially a private, and more secure, Cloud model, giving you the best of both worlds; the security of the private Cloud, the ability to share work loads among teams or individuals, more data implementation, and more flexibility, plus the power to migrate between the two models as necessary.

The drawbacks of hybridity? That the infrastructure presents an added layer of complexity. Private Cloud work loads have to be able to interact with the public Cloud, leading to questions of compatibility and connectivity, with the added requirement of a solid network on which to run. So, where is my data going?

Because it’s not literally a cloud, right? Ideally, critical services and sensitive data should be stored on the private Cloud, for maximum security, while non-essential services and less compromising data, (back-ups, etc) can safely be garrisoned on the public version.

What is the future for the Cloud?

An increase in the use of hybrid Cloud platforms and a consequent increase in the need for good monitoring practices to keep abreast of this exciting, innovative but potentially insecure data depository system.


Service monitoring: Another way of monitoring

May 29, 2017 — by steve0


In a world where technology is a means to an end, monitoring isn’t only checking availability, pinging, and performance; not everything is servers, databases and command lines…there’s another kind of monitoring. It’s subtle, but potent; service monitoring.


What is a service, in this context? In the IT world, a service is something you offer, whether a function or an outsourced task, to your clients or collaborators, who might be, for example, an online shop, a hotel search site, a delivery company, CRM, a support site, etc.

Here’s a company schematic including three typical services:

service monitoring

The three services that are offered to the clients are dependent on the technology underpinning them. All three services are critical for the company because if only one of them fails there will be serious repercussions for the company in terms of income and damage to their reputation.

We’ll be looking at both high level and low level monitoring of these services, the elements involved, dependent relations and the technological resources that maintain the service.


Before defining the service tree at high level the infrastructure in question should already be monitored and copacetic, collecting data from your systems. With this as your baseline, you ought to be able to know which of the components can be classed as critical. That’s right, they’re the ones on which your services depend directly. Having identified your critical services the next step is to consider what the impact of an outage in any of those services would be; would it imply a total interruption of service or merely a deterioration?


Going down to the next service level we find the elements that derive from the different services, for example the online shop:

service monitoring

These four logical elements need to be checked and that leads us to the following level of service monitoring, at low level:

service monitoring
service monitoring

In the diagrams above, both Content Updated and Communications constitute services. In order to define the service tree, each of the lower-level elements has to be monitored (Systems UP, Router UP, Database UP…) up to high level to get a picture of the status of your services, creating the structures from basic elements and nested services.

So what happens if there’s a problem with any of the monitored components? How will that be reflected on the service tree?


During the analysis phase, the service tree structure is defined, and the key components identified, as well as their relative importance in the overall functioning of the service.

Basing the findings on the prior analysis, we can determine which of the constituent elements of the service tree carry the most weight.

  • Case #1: one of the servers that ensure that the content is kept up-to-date is down – a critical blow at “Content Updated” level. Nevertheless, the online shop is still up, customers can place orders and generally everything is functioning adequately. The deterioration in the service is not yet critical, and will be shown on the service tree as:
    • Content updated: displaying red status, due to a critical component being down. Total interruption of service. Def Con 1.
    • Online shop: displaying warning status, due to problems in a non-critical component, meaning the service has suffered some deterioration.
      service monitoring
    • Chip company: displaying warning status due to non-critical problems in one of the components. Service deterioration.
      service monitoring
  • Case #2: communications have been interrupted because the principal router is down, a critical problem at “Communications” level. It’s not possible to access the online store, so the “Online shop” level suffers a critical service collapse, reflected like so:
    • Communications: critical status due to the router being down.
    • Online shop: critical status due to the total interruption of service.
      service monitoring
    • Chip company: displaying critical status due to the total collapse of one of the three principal services.
      service monitoring
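The propagation rule behind the two cases above can be sketched in a few lines. This is a simplified model; the real Pandora FMS service trees use configurable weights:

```python
# Status levels, from best to worst.
OK, WARNING, CRITICAL = 0, 1, 2

def service_status(children):
    """Compute a service's status from its children. Each child is a
    (status, is_critical) pair: a CRITICAL child that is marked critical
    for the service takes the whole service to CRITICAL (case #2); any
    other problem only degrades the service to WARNING (case #1)."""
    status = OK
    for child_status, is_critical in children:
        if child_status == CRITICAL and is_critical:
            return CRITICAL
        if child_status != OK:
            status = WARNING
    return status
```

Applied recursively from the low-level modules upward, this is what turns one downed redundant server into a warning at shop level, while one downed principal router turns the whole tree red.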

Final structure

Bigger and more complex service trees can be built, and the status of the services overseen, at both high and low level, and visual representations of the systems permit easy analysis.

service monitoring

In the following screenshot a real example of a service tree can be seen. Using this setup we control our own monitoring system here at Ártica HQ:

service monitoring

The screenshot below shows a cluster of services being monitored:

service monitoring

Service trees can be used on dashboards and visual consoles and are ideal for displaying on large screens in your crisis room, CPD or monitoring operator control suite.

Module applications

Apart from creating clear and legible structures to view, you can also apply this system of service monitoring in an autonomous and proactive manner, because all the elements you can see in a structure have their equivalents in the form of modules. When you create a service, one of the configuration options consists of selecting an agent to store the modules that are automatically generated while the service is configured.
For each service, three modules will appear:

  • SLA Value Service: current percentage value of SLA compliance.
  • SLA Service: whether the SLA is being currently fulfilled.
  • Service: total service weight.

The service weight is a way of viewing the service’s state of health. More weight usually means more elements are showing Warning or Critical status. Weights are configurable but we’ll leave those details for another time (watch this space).
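The weight mechanism described above can be sketched in a few lines. This is purely illustrative, assuming invented weights and thresholds; it is not how Pandora FMS implements services internally, only the idea: degraded elements contribute their configured weight, and the accumulated total decides the service status.

```python
# Hypothetical sketch of service weight: each element contributes a
# configurable weight when in warning or critical status, and the total
# decides the service status. Weights/thresholds are invented for illustration.

def service_weight(elements):
    """Sum the weights of all elements that are not OK."""
    total = 0
    for status, warn_weight, crit_weight in elements:
        if status == "warning":
            total += warn_weight
        elif status == "critical":
            total += crit_weight
    return total

def service_status(weight, warn_threshold=1, crit_threshold=2):
    """Map the accumulated weight to a service status."""
    if weight >= crit_threshold:
        return "critical"
    if weight >= warn_threshold:
        return "warning"
    return "ok"

# One element in warning status out of three:
elements = [("ok", 1, 2), ("warning", 1, 2), ("ok", 1, 2)]
w = service_weight(elements)
print(service_status(w))   # warning
```

With more elements degraded at once, the weight accumulates past the critical threshold, which is exactly the "more weight means more elements in Warning or Critical" behaviour described above.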

The following screenshot shows three “Web Services” modules, displaying the values in question: firstly, the total weight (0), telling you everything is A-OK; secondly, that the SLA is being met; and thirdly, the SLA percentage. As you can see, there’s a critical threshold when the SLA value drops to 95%, in which case you’ll be notified immediately:

[screenshot: service monitoring]

The advantage of these modules is that they allow thresholds to be defined (at the same time as you create the service, if necessary) and alerts to be assigned, meaning that when SLA values fall below the threshold, or the status of the services is not good, your support department gets an SMS or email informing them about it.

Furthermore, it’s possible to display graphs, which is very useful for showing the SLA percentage and seeing how it develops over time, whether there’s been an incident that has affected SLA compliance, or any other factor or anomaly that could affect the SLA numbers.


PDR UX Monitoring with Pandora FMS: Desktop Activity Monitoring.

May 16, 2017 — by steve2


E-commerce has grown into an almost two trillion dollar market since the first online purchase (of cannabis!) was made back in the heady, freewheeling days of ARPANET in the early 70s. Those pioneers of online consumer activity from Stanford and MIT were certainly interested in their particular user experience, and surely in gauging exactly how mellow it was, but PDR UX monitoring has little to do with monitoring chocolate deficiency levels or pizza delivery networks and everything to do with gaining oversight over an important business area that is facing questions and issues of the kind that every sector faces when it grows quickly and piecemeal.

The need to perform transactions electronically has led to the proliferation of online platforms, services, and suppliers; the need for new applications and their respective operating systems to work together on large networks; the density of network traffic and the availability of websites; all these challenges have brought about the need for a new area of monitoring. The latest upgrade of Pandora FMS – version 7 NG – covers all these issues with its new user experience monitoring function: Pandora FMS UX. It records browser and/or desktop activity, generating a series of modules that are then loaded on to the Pandora FMS server to be processed.

These modules contain data about the whole process, from start to finish, timings and, in case of error, screenshots showing where such errors have been produced.

Pandora PDR UX monitoring has two modes: web browser monitoring, and desktop activity monitoring.

  • Pandora Web Robot (PWR): web browser monitoring which replicates a fictitious user’s activity, checking sequences of actions and their constituent sections (or phases), and timings.
  • Pandora Desktop Robot (PDR): replicates Windows desktop activity; oriented primarily to monitoring heavy desktop applications.

This second function is the focus of this article. Replicating desktop activity might be something like moving and clicking the mouse, or using the keyboard, which the system will do automatically, replicating user activity.

Say you want to monitor your FTP server. To record the process:

  • Open the app to connect to FTP (e.g. FileZilla)
  • Copy a file to your desktop
  • Open the file and check the contents
  • Delete file

The whole of this process can be automated, as we’ll see shortly, checking that the sequence is valid from start to finish (i.e. that the recording replays successfully), and taking timings.

We’ll also take a look at how to divide the recorded sequence into different phases, with the objective of measuring the time taken by the whole sequence and by each individual phase, and how to subsequently analyze the data. Using this feedback you might find that connecting to the FTP server is the slowest part of the operation, indicating possible performance issues, or that opening the FTP client is the lengthiest part, meaning the PC lacks the necessary resources.

PDR: UX desktop monitoring

Setting up your environment

PDR monitoring only runs on Windows, and is recommended for virtual machines. Prerequisites:

  • Windows in desktop mode with auto start and auto login
  • Create directories: C:\PDR and C:\probes.
  • The provided file, decompressed into C:\PDR.

PDR session recording

Now you’re ready to start recording.

Just to simplify, the recording will consist of the following: open Windows calculator, calculate something and check the result is correct. This tells you if the calculator works and how long it takes to start up and perform the operation. With this as your baseline test you can now monitor any desktop application, by simply automatically replicating user activity such as mouse clicks and text input.

Start pdr.cmd, previously decompressed into C:\PDR, and after it loads you’ll see this:

[screenshot: PDR UX monitoring]

On the left hand menu are some typical actions. Select the action and the area where you want to apply it.

General actions:

[screenshot: PDR UX monitoring]

Flow controls:

[screenshot: PDR UX monitoring]

Let’s look at the calculator again, and see how to record a session:

1. Choose “click” and the location where you want to apply it and the screen will show “area selection” mode:

[screenshot: PDR UX monitoring]

You’ll see this:

[screenshot: PDR UX monitoring]

2. Introduce the “type” action with the text “calc”, then a “wait” action for the “calculadora” (calculator) entry, and click on it when it appears. When the calculator window opens, input the remaining actions.

3. Introduce the actions one by one, like so:

[screenshot: PDR UX monitoring]

4. Save the process, and replay by clicking “Run”.


  • To fine-tune any of the actions (e.g. selecting an exact point to click) double-click on any of the images in the control screen.
  • Insert a “wait” between each click to ensure the execution isn’t held up by any delay in the OS.
  • The recorder will look for the same area as in the control screen, so take care that nothing is selected (the cursor hovering over an element, for example).

The recording system has many possibilities, but also a steep learning curve. Play around with it, recording different sessions and replaying them, before making your own recordings and launching them with Pandora FMS.

When you save the project, a folder is created with a .py file containing the automated script code and the control images from the sequence.

Recording a transaction session

Let’s expand on the previous example, again using the calculator to perform an operation and then save the result in a text file. This is a longer process, so let’s divide it into three and get timings for each of the phases.

To record a PDR transaction session in three phases, just set up to record each phase individually. This means you don’t have to modify a recording and split it into three (as was necessary with PWR). This way, different scripts are recorded separately and Pandora UX reconstructs the whole transaction, based on the execution indicated.

Follow this order:

  • Open calculator and do the sum.
  • Copy the result, open a notepad, and paste.
  • Save the notepad file.

Now let’s look at how to execute different PDR scripts to make up a complete transaction.

Executing PDR sessions

Execution consists of a call to the binary pandora_ux_x64.exe, with the necessary arguments to replay the recordings. First, run it from the command line to check that the process completes successfully. Once you have the green light, add it to the agent configuration file with module_plugin, so that the agent takes control of automatic execution.

Execute as follows:

pandora_ux_x64.exe -exe C:\PDR\pdr.cmd -args -r -script C:\pandora_ux\calculadora.sikuli -folder C:\pandora_ux\ -ss_config active

Indicate the complete paths for all files and directories involved.

-exe: path to the pdr.cmd file.
-args -r: arguments for pdr.cmd.
-script: path to the directory containing the recorded session.
-folder: path where the control screenshots will be saved; include the trailing backslash (\).

Additional parameters are -checkpoint, which shows a screenshot of the last point of the process, and -post, which allows you to execute commands after replaying the whole sequence; very useful to check there are no processes or windows still open.

The modules are:

  • UX_Status_project_name.
  • UX_Time_project_name.
  • UX_Control_Snapshot_project_name (only on the first execution).

If any phase contains an error, the following module will also be created:

  • UX_Snapshot_project_name.

The complete configuration line to use in the agent’s configuration file should look something like this (one complete line):

module_plugin C:\Users\artica\Documents\Producto\UX-Trans\ux\pandora_ux_x64.exe -exe C:\PDR\pdr.cmd -args -r -script C:\PDR\calc.sikuli -folder C:\PDR\ -ss_config active -checkpoint -post "taskkill /F /IM calc.exe"

Once the line’s been added to the agent, remember that it has to run in process mode to interact correctly with the desktop. Execute the agent as follows (from a terminal with cmd.exe and permissions):

"C:\Program Files\pandora_agent\PandoraAgent.exe" --process

The desktop will start to move of its own accord, replaying the recording. Don’t interrupt the process.

When the process has finished and the data has been sent to the Pandora FMS server, you’ll see the modules on the console with their designated names:

[screenshot: PDR UX monitoring]

Executing PDR transaction sessions

To replay sequences as transactions it isn’t necessary to modify them, only to record them as different sessions and then specify the corresponding parameters so that Pandora UX interprets them as a transaction process containing various phases.

Make three different recordings:

  • Open calculator and perform simple operation (script calc.sikuli).
  • Present result as a plain text file (script savecalc.sikuli).
  • Save text file in a specific location, overwriting previous (script savefile.sikuli).

Now that there are three recordings, let’s see how to execute them so that each one represents one phase in a complete process.

Execute the following:

C:\Users\artica\Documents\Producto\UX-Trans\ux\pandora_ux_x64.exe -exe C:\PDR\pdr.cmd -args -r -t calculadora_fases -script C:\PDR\calc.sikuli,C:\PDR\savecalc.sikuli,C:\PDR\savefile.sikuli -folder C:\PDR\ -ss_config active -checkpoint -post "taskkill /F /IM calc.exe"

As can be seen, it’s slightly different from the individual execution.

The new parameters are:

-t: this argument indicates the name you want to give to the whole process, made up of all the transactions.

-t calculadora_fases

-script: the same as in the individual execution but now separated by commas.

-script C:\PDR\calc.sikuli,C:\PDR\savecalc.sikuli,C:\PDR\savefile.sikuli

Replay the scripts one after the other to make sure that they complete correctly. If everything’s A-OK, add the execution line below to the agent’s configuration file (as a single line):

module_plugin C:\Users\artica\Documents\Producto\UX-Trans\ux\pandora_ux_x64.exe -exe C:\PDR\pdr.cmd -args -r -t calculadora_fases -script C:\PDR\calc.sikuli,C:\PDR\savecalc.sikuli,C:\PDR\savefile.sikuli -folder C:\PDR\ -ss_config active -checkpoint -post "taskkill /F /IM calc.exe"

When the agent executes in process mode, you’ll see the modules on the Pandora console like so:

[screenshot: PDR UX monitoring]

As you can see, the three phases have been included in Calculadora_fases, and timings for each are shown. Using these modules you can generate alerts and view their respective graphs individually, or even combine them for comparative purposes and present them in reports.

Access the UX section of the agent on the console to view the same information:

[screenshot: PDR UX monitoring]


Network management: reduce alerts for better performance

May 11, 2017 — by steve2


When faced with the technological plenitude offered by almost any company’s IT infrastructure you might be tempted to think that installing a monitoring system to oversee each device, and alert your team when there’s an issue, is the best bet. Of course, here at Pandora FMS we love monitoring, but even we realize that less is often more.


“Man’s reach should exceed his grasp”, wrote Robert Browning, when he wanted to extol the human spirit and its insatiable ambition. However, when it comes to network monitoring, too much ambition will leave you trying to micromanage every node, which, if you try to do it manually, like the man in Browning’s verse, will leave little time for anything else.

What do you really need to monitor?

Basically, we’re talking about huge amounts of data, machines, devices, elements, components, gee-gaws, gadgets, and so on, so the best way to go about monitoring these elements is to set up automated alerts. Forego monitoring non-essential equipment and concentrate on business-critical hardware and software.

Network Management and Alerts

Webster’s dictionary doesn’t have a lot to say about alerts in the monitoring sense, but we can define them as configurable responses to network events. These responses are channeled through messaging services such as email, Twitter, Telegram, SMS, or even as command executions. Alerts can employ custom properties to identify relational systems and thereby be created intelligently. It’s possible to set up alerts to trigger when certain conditions are met, such as an agent being unresponsive for 10 minutes, or CPU or memory being overloaded.
Configuring alerts to this level of fine-tuning can be complicated in many monitoring tools, which is why Pandora FMS has a modular alerts system allowing the user to separate the triggering condition that launches the alert, the action to execute when the alert is triggered, and the command that action runs.

Modularity is the key to simplifying alert configuration, and will save you time in the long run: once a new alert is configured you don’t have to configure it again if you decide to add another agent. Pandora FMS simplifies the deployment of configured alerts, and makes network management much easier.

Deactivated or deleted alerts

Before cancelling or deleting an alert, remember you can also modify the alerts you’ve previously configured. Take a look at the trigger conditions and add new ones and, hey presto! You’ve just reduced the number of alerts that are going to be unnecessarily generated, saving time and money. Give yourself a pat on the back!

Before deleting any alert definitively, you can decide to deactivate it and put it on Standby (the difference between the two states is that alerts on Standby are visible in the alerts view). This is useful if you’re doing some network plumbing and you don’t want alerts triggering at a specific time, for example.

Click on “disable alert” to deactivate an alert from the agent side.

[screenshot: network management]

And lastly, if you want to eliminate an alert from the agent you just have to click the trashcan icon on the right.

[screenshot: network management]

Why monitor?

If your IT environment is composed of heavyweight machines and applications – real beasts – it’s almost impossible to know which machines are running smoothly, or what exactly has gone wrong and where. A monitoring tool gives you the necessary oversight, and a flexible monitoring system like Pandora FMS, with its custom options and module-based alerts, facilitates deployment and maintenance more than certain legacy systems that aren’t 100% integrated, or that don’t easily scale up when your organization does.

So, you’ve deployed your monitoring, configured your alerts, and installed your agents. But you don’t want to be disturbed by inconsequential alerts all the time. Hello, Cascade Protection!

Cascade protection

Cascade Protection is a Pandora FMS feature that allows you to avoid a ‘flooding’ of alerts if a group of agents can’t be reached due to a connection failure. These kinds of things tend to happen if an intermediate device such as a router or a switch is down and all the devices behind it simply cease to be reachable by Pandora FMS. It’s probable the devices are working as they’re supposed to, but if Pandora FMS can’t ping them, it considers them to be ‘down’. For those about to be saturated with alerts, we salute you. For the rest, Pandora FMS devised Cascade Protection.

With Cascade Protection activated, only one alert gets triggered, indicating that the router, for example, is down. You’ll still see the rest of the downed elements marked in red, you just won’t get swamped with alerts.

To get the most out of this function, configure an alert associated with a CRITICAL condition on all parents, thereby avoiding triggering alerts on the child agents. Check out the Pandora FMS Wiki for more on how to set up Cascade Protection.
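The cascade-protection idea can be sketched simply. This is a hedged illustration of the concept, not Pandora FMS code: before alerting on an agent, walk up its parent chain, and if any parent is already critical, suppress the child’s alert so only the root cause notifies.

```python
# Sketch of cascade protection: suppress a child agent's alert when any
# ancestor (e.g. the router in front of it) is already in critical status.
# Agent names and data structures are invented for illustration.

def should_alert(agent, status, parents):
    """status maps agent -> state; parents maps agent -> parent (or None)."""
    node = parents.get(agent)
    while node is not None:
        if status.get(node) == "critical":
            return False          # an ancestor is down: suppress this alert
        node = parents.get(node)
    return status.get(agent) == "critical"

parents = {"server1": "router", "server2": "router", "router": None}
status = {"router": "critical", "server1": "critical", "server2": "critical"}
print([a for a in status if should_alert(a, status, parents)])   # ['router']
```

Even though all three agents look critical (the servers are simply unreachable), only the router, the actual root cause, triggers an alert.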

Check out more ideas on how to get the most out of your monitoring tool by integrating Pandora FMS alerts in Twitter.


Bandwidth monitoring: Don’t get conned and get what you pay for

May 9, 2017 — by steve1



When was the last time you checked your phone bill? Itemized all its contents and checked all those boxes? How could you know if your ISP is giving you the service you pay for? Does your network get saturated even though you’re nowhere near your data limit?

In a business context this is a high-priority issue, since we’re talking about much bigger volumes of data, and other bandwidth problems related to latency, packet loss, and unfinished processes that result in failure. All of this affects the bottom line.

So, how do you monitor your domestic or company bandwidth with Pandora FMS? Let’s take a look.

Monitoring your company bandwidth

One of a system administrator’s tasks is to keep an eye on agreements related to Internet bandwidth, and check that the service contracted is fulfilled.

A standard network configuration in an average office might look something like this, with all the traffic passing through a switch, which is connected to a router, as in the diagram:

[diagram: bandwidth monitoring]

First, identify the key points of the network where the network traffic passes through, in this case one of the switch’s interfaces, just where it connects with the router.

[diagram: bandwidth monitoring]

As can be seen, by monitoring the critical point through which the traffic is passing it is possible to obtain real incoming and outgoing values, enabling you to determine whether your bandwidth is operating optimally.

You’ll most likely be using SNMP to get this information, as it’s present on practically all the hardware on the market, is widely-used in network monitoring, and furthermore, is completely integrated into Pandora FMS.

Bandwidth monitoring at home

Basically the same, only in this case we want to monitor the bandwidth on a single, specific piece of hardware, since not all domestic routers support SNMP.

On Windows systems you can use the default netstat utility to get incoming and outgoing network traffic statistics and determine your real bandwidth usage.

Use the ‘netstat -e’ command to get a readout of the bytes sent and received:

[screenshot: bandwidth monitoring]

Try downloading something heavy and see for yourself how the values start to shoot up. To monitor these values, calculate the difference between two executions, which gives you the total volume of information transferred and the transfer rate in bytes per second: in other words, your real bandwidth usage.
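The calculation is just a delta over time. A minimal sketch, assuming two byte-counter readings (e.g. the “Bytes received” column of ‘netstat -e’) taken a known number of seconds apart; the sample values are made up:

```python
# Average transfer rate between two interface byte-counter readings,
# e.g. from two runs of "netstat -e" on Windows. Sample values are invented.

def bandwidth_bps(bytes_t0, bytes_t1, interval_seconds):
    """Average transfer rate in bytes/second between two counter readings."""
    return (bytes_t1 - bytes_t0) / interval_seconds

# Two hypothetical "Bytes received" readings taken 60 seconds apart:
rx_first, rx_second = 1_000_000, 7_000_000
print(bandwidth_bps(rx_first, rx_second, 60))   # 100000.0 bytes/s
```

Run periodically, this gives exactly the kind of time series shown in the graphs below: a low, stable baseline punctuated by spikes during heavy downloads.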

Pandora FMS is set up to work with this kind of information, and runs totally integrated checks:

[screenshot: bandwidth monitoring]

In the graph below, you can see a low and stable volume of network traffic running through a system that corresponds with standard working routines, or with night shifts when the machines are practically on standby. Info-dense downloads, such as ISOs or updates, are represented by the spikes in the graph:

[graph: bandwidth monitoring]

Network monitoring with Netflow

Another alternative for obtaining data on network traffic is with Netflow, a network standard that allows the user to obtain ample data on network traffic, such as IP addresses, protocols used, open ports, etc.

If you find yourself asking which machine is taking up the most bandwidth, or which IP addresses and websites are the most visited from your network, you can find the answers in this article, where we talk about how to configure Netflow, a Raspberry Pi and, of course, Pandora FMS to monitor network traffic.

[screenshot: bandwidth monitoring]

Data correlation

Monitoring bandwidth with Pandora FMS not only provides details on your service but also supplies analytical data on other possible problems in your IT infrastructure: whether you need to contract lines with more capacity, whether your hardware is up to the job of handling the loads placed on it without occasioning bottlenecks, whether there is any network congestion, whether employees are overloading the network by downloading and uploading large data packets, and more things besides.

It can also help to locate the root cause of any issues with packet loss, as in this article.

All this information can be presented in reports that back up your arguments with concrete figures.

[screenshot: bandwidth monitoring]


Packet Loss: Problems, causes and solutions

May 3, 2017 — by steve3


One of the most important metrics related to network performance, packet loss is a monitoring fundamental. So what’s it all about?


The Oxford English Dictionary probably defines ‘loss’ as a feeling of lack, missing something that was once in one’s possession. Basically, loss has mainly negative connotations, except weight loss if you’re on a diet. A packet is a container used for sending contents; if you lose the packet, you also lose the contents.

At upper network layers, data travels in the form of packets, which deliver the information in a way that the receiver can order and use. Packet loss is when this information doesn’t arrive correctly.

Packet loss issues

  • Out-of-date information. Especially noticeable in real time situations, such as streaming services or online videogames. A few microseconds of delay can be the difference between capturing the flag in Counterstrike or being the ignominious recipient of a well-timed headshot; or, live-streaming the final of a sporting event and getting the result through your Twitter feed before witnessing it “live and direct”.
  • Slow loading times. Why is the webpage taking so long to load? Did I wake up this morning in 2005? Probably not, you’re just another silent victim of packet loss.
  • Loading interruptions. Wait, wait…still loading. Look, the progress bar has almost reached its destination at the top right of the screen. Just…two…more…seconds. If you add up all the time the Internet has cost humanity, waiting for pages to load, it adds up to over 25,000 years. Enough time for simple organisms to evolve new limbs or complex human civilizations to appear, peak and bottom out. Also, your email may not arrive.
  • Closed connections. Remote servers for websites, file downloads, online videos, and so on, may end up closing their connections if the channel is open for too long without a clean, uninterrupted connection. This is usually a security measure, if that makes it any better.
  • Missing information = websites that resemble a 90s Geocities page.

Why packets go missing

  • Damaged hardware. Take your pick: damaged network card; deteriorated ports or connections, a bad router, or bad wiring in your office or building.
  • Hardware capacity and bottlenecks. Sometimes, even though navigation speed is OK and data is transiting smoothly through the network, you still might find yourself dealing with hardware limitations. Imagine you contracted a higher-velocity Internet connection, going from 1 Gbps to 10 Gbps, but your monitoring reports inform you that one of your devices is operating at 100% capacity for prolonged periods. If a node such as a switch doesn’t have the capacity to correctly manage the volume of traffic it receives, you’re going to see a bottleneck.

[diagram: packet loss bottleneck]

  • Network congestion. Information travels through multiple devices and links. If any of those points is maxed out, a queue forms and the information passes through more slowly, or even gets discarded if a certain amount of time has passed. Unlike bottlenecks, this kind of issue isn’t restricted to a single node, but is a generalized problem.
  • Wi-Fi. It’s pretty normal for packets to be lost on Wi-Fi networks, as wireless networks are open to some unpredictable and/or uncontrollable elements, such as interference from other wireless networks, distance, thick medieval walls around Starbucks in Kraków, etc.
  • Bugs in network devices. The software on your network devices may be corrupted, or buggy, so update it when necessary.


Monitoring packet loss

If you suffer any of these situations, you should be monitoring for packet loss. Pandora FMS and its packet loss plugin should give you the feedback you need to identify when and where your packets are bleeding out.

It works by pinging a remote component or element, such as an IP address, hostname or website, and checking whether there has been any packet loss.
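The underlying check boils down to a simple ratio. A hedged sketch of the idea, with the ping exchange simulated (a real plugin would invoke the system ping or raw ICMP); the numbers are invented:

```python
# Minimal sketch of a packet-loss check: send N echo requests and derive the
# loss percentage from how many replies came back. The "ping" is simulated
# here; a real check would invoke the system ping command.

def packet_loss_pct(sent, received):
    """Percentage of echo requests that never got a reply."""
    return 100.0 * (sent - received) / sent

# Hypothetical run: 100 pings sent, 97 replies received.
print(packet_loss_pct(100, 97))   # 3.0
```

Sampling this value on an interval produces the time series plotted in the graphs below.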

When you deploy packet loss monitoring, you’ll see a single module on the Pandora FMS console that contains all the information the plugin has collected, allowing you to see at what time any packet loss occurred.

The graph below shows a loss of packets from an office’s Wi-Fi Access point. Everything is fine but for one moment when the network experiences a severe loss of data packets. Using this information you can analyze the potential cause:

[graph: packet loss]
Looking at the graph representing packet loss on the Internet side informs us that there is constant packet loss, but that the values are low, indicating that there probably isn’t any other kind of problem implicated:

[graph: packet loss]

Once we’ve established that there is a loss of data, we can start to comb through the feedback, eliminating improbables and unlikelies, until we find a coherent solution.

In order to be able to contrast data, it’s a good idea to monitor packet loss and latency times in parallel, to find out if there’s any correlation between slow latency times and loss of data.
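One simple way to quantify that relationship is a Pearson correlation between the two monitored series. A sketch with invented sample data; in practice you would export the module values from the console:

```python
# Pearson correlation between two monitored series (latency vs. packet loss).
# The sample data is invented; values near 1 indicate the two move together.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

latency = [0.05, 0.06, 0.05, 0.30, 0.32, 0.06]   # seconds
loss    = [0.0,  0.5,  0.0,  8.0,  9.0,  0.5]    # percent
print(round(pearson(latency, loss), 2))          # close to 1: strong correlation
```

A coefficient near 1 suggests the latency spikes and the loss of data share a cause; a value near 0 suggests two independent problems.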

The following graphs show the correlation between latency in seconds (graph 1) and packet loss (graph 2):

[graphs: latency and packet loss]

All this information can be presented in reports that combine graphs with data obtained through monitoring:

[screenshot: packet loss report]


Packet loss remedies

There is no universal solution to this problem yet, as the causes of packet loss are varied. Here are some of the basic checks you can run in order to find out what is and isn’t wrong.

  • Check connections. Check that there are no cables or ports badly installed, or deteriorated.
  • Restart routers and other hardware. A classic IT trouble-shooting technique.
  • Use a cable connection. When in doubt, plug it in.
  • Keep network device software up-to-date. In case of possible bugs in your OS or on your network devices, keep all software updated. It’s important to mention that if you’ve diagnosed packet loss across different pieces of hardware, just updating your OS probably won’t help, as the problem is probably not on a single machine.
  • Replace defective and inefficient hardware. If you’ve run diagnostics on your network and it’s still leaking packets you may just have to bite the bullet and head on down to the old computer store and upgrade your equipment.


Dynamic monitoring: a new functionality for Pandora FMS

April 27, 2017 — by steve2


Pandora FMS, version 7.0 NG has been updated to include new functions designed for complex network environments. One of these new additions is dynamic monitoring, a buzzword in the monitoring sector for a while now. So, what is it?



Dynamic monitoring consists of predictive analysis of, and adaptation to, your system’s warning parameters. It is an automated feature and is based on pre-existing data, harvested from the system’s history. Warning and critical thresholds are automatically and dynamically redefined according to information collected during a previously established time period.

Automatically configured thresholds are a big help when it comes to usability and setup of your monitoring tool, saving you the necessity of carrying out a prior systems study in order to fix your thresholds. Pandora will now handle this task automatically.

This obviously relies on pre-existing information, as otherwise it’s impossible to know what the systems’ normal values are. When the AI-enabled version appears this will be one of the functions it includes, but we’re not quite there yet.

Operational overview

The dynamic monitoring system uses existing data to calculate trend deviations and, based on those, automatically reconfigures the different modules’ thresholds.

Intelligent work mode analyzes information from a set time period (e.g. one week), establishing average values, trends and deviations from the data. Using this information it establishes warning thresholds that could be either over or under the values (dependent thresholds). The values can be modified manually once established.
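The core of this analysis can be sketched with basic statistics. A simplified illustration of the idea only, assuming invented sigma multipliers; Pandora FMS computes its thresholds internally: take a window of historical samples, compute mean and deviation, and place the warning/critical thresholds some number of deviations out.

```python
# Simplified sketch of dynamic thresholding: derive warning/critical
# thresholds from the mean and standard deviation of a historical window.
# The sigma multipliers are invented for illustration.
from statistics import mean, stdev

def dynamic_thresholds(samples, warn_sigma=2, crit_sigma=3):
    """Thresholds a few standard deviations above the historical mean."""
    m, s = mean(samples), stdev(samples)
    return {"warning": m + warn_sigma * s, "critical": m + crit_sigma * s}

history = [0.20, 0.22, 0.21, 0.25, 0.23, 0.22, 0.24]   # e.g. a week of latency
print(dynamic_thresholds(history))
```

Recomputing over a rolling window is what lets the thresholds adapt as the system’s “normal” drifts, instead of staying fixed at whatever was configured on day one.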


Dynamic monitoring is configured from the Pandora FMS console, but requires predictionserver to be enabled in the pandora_server.conf file.

predictionserver 1

Establish a range in each module’s individual configuration within which dynamic monitoring can take place, and indicate the time interval from which the samples are collected:

[screenshot: dynamic monitoring]

In the previous example all data from the last seven days has been collected in order to calculate the thresholds.

Use Dynamic Threshold Min. and Dynamic Threshold Max. for greater flexibility in automatically generated thresholds.

[screenshot: dynamic monitoring]

In the screenshot, the minimum value has been incremented by 5% and the maximum by 10%, creating higher thresholds.

These fields can be inverted, reducing the threshold intervals, as below:

[screenshot: dynamic monitoring]

There’s also another parameter, Dynamic Threshold Two Tailed, which creates thresholds that are not only above the average values (the default) but also below. This kind of operation is similar to using the inverse interval threshold function.

[screenshot: dynamic monitoring]
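The effect of the Dynamic Threshold Min./Max. percentages can be sketched as a band adjustment. This is purely illustrative of the behaviour described above (positive percentages widen the band, negative ones narrow it), not Pandora FMS internals:

```python
# Sketch of Dynamic Threshold Min./Max.: move the edges of an automatically
# computed threshold band outward (positive %) or inward (negative %).
# The band values and percentages are invented for illustration.

def adjust_band(low, high, min_pct=0, max_pct=0):
    """Shift the band edges by percentages of the band width."""
    width = high - low
    return (low - width * min_pct / 100.0, high + width * max_pct / 100.0)

low, high = 100.0, 200.0
print(adjust_band(low, high, min_pct=5, max_pct=10))   # (95.0, 210.0)
```

With 5% and 10% the band widens, as in the first screenshot; inverting the signs narrows it, matching the “inverted fields” case.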

In the graphs below there are two examples that both correspond to dynamic thresholds for a module on which the interval has been established as 24 hours.

In the first example the Dynamic Threshold Two Tailed parameter is not selected:

[graph: dynamic monitoring]

In the second, Dynamic Threshold Two Tailed is now selected:

[graph: dynamic monitoring]

Both configurations can, of course, be performed massively with the use of policies.


Some real-life examples will help to better see the configuration and the effect they have on your dynamic monitoring.

Case 1

Starting from a web latency module apply a basic configuration with a one-week interval:

[screenshot: dynamic monitoring]

Once applied, you’ll have the following thresholds:

[screenshot: dynamic monitoring]

So the module registers warning status when latency is above 0.33 secs, and critical status when it’s above 0.37 secs.

[screenshot: dynamic monitoring]

Keeping in mind that this is a relaxed threshold, you can reduce it by 20% so the alerts are triggered more easily. To achieve this, modify the values in the Dynamic Threshold Min. field, using a negative value to lower the threshold minimums. As there isn’t a maximum value (critical status is registered from a certain value upward), you don’t have to modify the Dynamic Threshold Max. field:

dynamic monitoring

Once the changes have been applied they show the following status:

dynamic monitoring

And the graph should look something like this:

dynamic monitoring
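The adjustment itself is simple arithmetic; assuming a generated warning value of 0.33 secs. as above, a Dynamic Threshold Min. of -20% works out as:

```python
# Hypothetical illustration of a -20% Dynamic Threshold Min. adjustment
warning = 0.33                    # auto-generated warning threshold (secs.)
adjusted = warning * (1 - 0.20)   # 20% tighter, so alerts trigger sooner
print(round(adjusted, 3))         # prints 0.264
```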

Case 2

This example represents monitoring the temperature of a control room. The graph below shows the values for the last week:

dynamic monitoring

In this case it is very important that the temperature remain stable, which can be monitored using the Dynamic Threshold Two Tailed parameter to define both upper and lower thresholds. The following configuration was used:

dynamic monitoring

And the automatically generated thresholds:

dynamic monitoring

The graph displays the following:

dynamic monitoring

As can be seen, anything between 23.10 and 26 is considered normal, this being the optimal temperature range for the location. Any deviation from the established norm will trigger an alert.

If you really need to dial them in, the Dynamic Threshold Min. and Dynamic Threshold Max. parameters are extremely flexible, tweakable to the percentages you need.

Network topology and distributed monitoring

April 24, 2017 — by steve2


network topology featured

Introduction to network topology

This time we’re dedicating an article to distributed monitoring, and we’re going to talk about the many possibilities Pandora FMS offers in the area of distributed environments and diverse network topology.

So what is a distributed environment? It refers to networks that are not centralized in one geographic location, such as those formed by local office branches of a national or international company.

Most companies' IT infrastructure is now split between physical hardware in the office, with its attendant OSs and apps, and resources hosted in the Cloud or outsourced to third parties.

This inevitably gives rise to very distinct network topologies in which not all the IT resources are under the same roof. That’s why Pandora FMS offers different features and functions in order to cover these kinds of networks.

First let’s take a quick look at the two basic kinds of monitoring and then how to adapt them to the kind of decentralized monitoring Pandora FMS offers.

Basic monitoring

Applicable to both centralized and distributed monitoring.

Remote monitoring

The first category of monitoring consists of launching checks across a network to collect data on hardware, software, latency, availability and so on. These checks are carried out via standard network protocols such as ICMP, SNMP, TCP/UDP, HTTP, etc. They are usually launched from a central monitoring server that initiates the checks and are intended to give immediate feedback.

network topology

Typical remote monitoring checks are:

  • Hardware checks (Host Alive)
  • Communications latency (Host Latency)
  • Monitoring a port to check that a service is online (HTTP port 80)
  • Network traffic (SNMP)
  • Web site monitoring
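As an illustration of this first category, a basic remote TCP check (service availability plus connection latency) can be sketched in a few lines. Real monitoring servers also use ICMP, SNMP and so on; a plain socket keeps the example self-contained:

```python
# Minimal remote check: is a TCP service reachable, and how long does
# the connection take? Analogous to a "port 80 online" style check.
import socket
import time

def check_port(host, port, timeout=2.0):
    """Return (reachable, latency_in_seconds) for a TCP service."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        # Connection refused, timed out, unreachable, etc.
        return False, None
```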

Agent monitoring

A small piece of software is installed which collects data on the OS. This kind of monitoring allows data to be harvested from deeper layers, to monitor apps from “inside” the server.

Communication is almost always initiated by the agent, but can also be initiated by the server itself. Data collected by Pandora FMS agents is sent in XML packets.

network topology

Typical data collected by agents concern:

  • CPU and memory use
  • Hard drive capacity
  • Active processes
  • Online/active services
  • Internal application monitoring
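Those data points travel to the server as XML data files. As a rough sketch of what an agent packet might look like (the element names below approximate the documented format and should be treated as illustrative, not authoritative):

```python
# Build a minimal agent data packet of the kind a Pandora FMS agent
# sends to the server. Element names are approximations of the
# documented XML format.
import xml.etree.ElementTree as ET

def build_packet(agent_name, modules):
    """modules: iterable of (name, module_type, value) tuples."""
    root = ET.Element("agent_data", agent_name=agent_name, version="7.0")
    for name, mtype, value in modules:
        mod = ET.SubElement(root, "module")
        ET.SubElement(mod, "name").text = name
        ET.SubElement(mod, "type").text = mtype
        ET.SubElement(mod, "data").text = str(value)
    return ET.tostring(root, encoding="unicode")

packet = build_packet("web01", [("cpu_load", "generic_data", 0.42),
                                ("disk_free_pct", "generic_data", 71)])
```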

Distributed monitoring

Let's see how to apply these two kinds of monitoring to distributed network topologies using Pandora FMS.

Agent remote checks – broker mode

Let's say you're monitoring a Windows machine with agent software installed and a few basic monitoring checks running. There's also a router you want to monitor, which provides the external connection for the Windows device. However, the Pandora FMS server can't reach this sub-network, so it can't execute remote checks against the router.

Since the Windows hardware is connected directly to the router, you can use the agent’s broker mode to monitor the remote router and send the data to Pandora FMS as if it were a separate agent.

network topology

Technical operation

A software agent carries out remote checks rather than the server.

The software agent uses the available network protocols to perform the remote checks. Once the information has been collected from the remote system the agent-broker sends it to the Pandora FMS server.

network topology
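Concretely, broker mode is enabled with a single directive in the agent's configuration file, which spawns a second, virtual agent. The directives below follow Pandora FMS's documented agent configuration, but the broker name and router IP are hypothetical; verify the syntax against your installed version:

```
# pandora_agent.conf on the Windows machine: declare a broker agent.
# This creates a router01.conf file in which the remote checks
# executed on the router's behalf are defined.
broker_agent router01
```

```
# router01.conf: a remote ICMP availability check run by the agent
# (192.168.70.1 is a hypothetical router address)
module_begin
module_name Host Alive
module_type generic_proc
module_exec ping -n 1 192.168.70.1 >NUL && echo 1 || echo 0
module_end
```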

Monitoring remote networks with proxy agents – proxy mode

A different network topology problem: you want to monitor a complete sub-network composed of various machines, but your Pandora FMS server is located in a different network segment, without access to the unmonitored sub-network. This time software agents can be installed on the machines themselves, so the broker-agent solution doesn't apply; instead you need proxy-agent mode. This gives you a point of contact between the Pandora FMS server and the sub-network: the agents send their XML packets to the proxy agent, which in turn forwards them, in the same format, to the Pandora FMS central server.

Technical operation

First, a word about Tentacle. This is a proprietary communications protocol used by Pandora FMS to transfer data files between agent and server, with various work modes, one of which is proxy mode.

Software agents can use Tentacle’s proxy mode to function as proxies for other agents. In this mode, a software agent receives the XML packets from other agents and resends them to the Pandora FMS central server. Note the operational difference between proxy mode and broker mode; the former allows data packets from other software agents to be resent, whereas broker mode doesn’t, as in the latter mode there are no agents installed on the remote network.

network topology

This is useful if you have a network from which only one server can communicate with the Pandora FMS server. The agents installed on machines without access to the server will send their XML files to the proxy agent, which in turn sends them to the server.
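As a sketch, proxy mode involves two configuration changes (the directive names follow the documented pandora_agent.conf syntax, but the hostnames and IPs here are hypothetical):

```
# pandora_agent.conf on the proxy agent, the only machine with
# access to the Pandora FMS server
server_ip pandora.example.com
proxy_mode 1
proxy_max_connection 10
proxy_timeout 1
```

```
# pandora_agent.conf on each agent inside the isolated sub-network:
# point server_ip at the proxy agent instead of the central server
server_ip 192.168.50.10
```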

Multi-server distributed monitoring

This time you want to monitor your HQ’s IT landscape. Enabling communications is simple, as you’re dealing with an internal corporate network, inaccessible from outside. However, the amount of hardware to monitor means that with just a single Pandora FMS server performance will suffer.

In this case the solution is to install various Pandora FMS servers in parallel, connected to the same database and capable of working independently. On one hand, the workload is divided among various servers, each of which takes care of a different office sub-network, and on the other, it permits easy viewing of the data from a single control point, as only one database is used.

Technical operation

Pandora FMS installation comprises three basic components: console, server and database.

If there are various Pandora FMS servers in a single installation it’s important to know whether all of them are connected to the same database. These kinds of installations are generally used when the number of devices is too high for a single server to handle, or if there’s an option to enable database communication from other sub-networks. Installing additional servers can also be an alternative to proxy mode.

network topology

The above schematic shows a total of three Pandora FMS servers, two of which are monitoring a single network, dividing the load, and a third monitors another network. All three are connected to a single database.

The user can access all the information from the console, without being preoccupied by the workings of the three servers.

Distributed delegated monitoring – Export server

Various clients use our monitoring services, meaning that there will be an independent Pandora FMS installation in each of their offices. In our head office we also install a Pandora FMS server and enable the export server. This lets us observe on our own console all information proceeding from our clients’ infrastructure.

This exact copy of our clients’ monitoring allows us to establish our own alerts, thresholds and events. This allows us to work in tandem and anticipate possible problems and issues on our clients’ behalf.

Technical operation

This configuration permits us to run various databases, as well as their corresponding servers and consoles. Each installation with its own database is one instance, and it handles monitoring and data storage of different environments.

One situation where it can be used is in monitoring various clients’ networks, each one with a distinct database containing different information.

network topology

Remote network monitoring with local and network checks – Satellite server

Imagine you need to bring an external DMZ type network topology under monitoring oversight, using both remote checks and software agents. In this case it’s not possible to use an additional Pandora FMS server, as we’re talking about a network from which direct communication to our database can’t be initiated. Furthermore, agent broker and proxy mode are unviable, so it’s time for the satellite server.

Install the satellite server in the DMZ, where it will not only run remote checks but also receive data from the software agents, forwarding everything to the Pandora FMS server on the corporate network.

Technical operation

A fast-evolving function, satellite server can be installed on a network and independently execute remote checks and redirect XML files from other proxy agents.

network topology

Unlike a regular server installation, the satellite server doesn't need a direct database connection. It sends all collected information to the central Pandora FMS server via Tentacle. This makes it one of the best options for deploying monitoring on networks that a Pandora FMS server can't reach, since it can operate in proxy mode and also launch remote checks by itself. It also includes specific functions for carrying out remote checks, making it a better option for remote monitoring than agent broker mode.

Monitoring isolated restricted networks

An organization has two datacenters, one in Europe and the other in Asia. Both environments are secure and restricted, but, given the increasing prevalence of cyber attacks and the sensitive nature of the data in use by Pandora FMS, there can be no direct communication between the European and Asian offices. In this case, enable the sync server in the European Pandora FMS installation and install a satellite server and various agents to monitor the Asian datacenter, where the satellite listens and waits for a connection from outside the network.

network topology

Communications are initiated by the sync server on the European side, without allowing any connection from the Asian datacenter, where a complete system is installed comprising a satellite server and Tentacle in listening mode.

Technical operation

One of the new functions of Pandora FMS version 7.0 “Next Generation”, for use on isolated and restricted networks from which it is not possible to initiate outside-network communications.

The Pandora FMS server itself, in sync server mode, initiates communications with the isolated environment, where a Tentacle server is installed in listening mode. This allows agent-based or remote monitoring, combining the sync server with the satellite, proxy or broker functions.

network topology

Who watches the watchmen?

April 12, 2017 — by steve0


SLA agreement featured

For the last half-century we have lived in a new, network-enabled global age, and for the last couple of decades under the new paradigm of globalism, which means your HQ could be in London, your accounts department in Dublin and your IT support in Mumbai. The age of inequality is creating opportunities for some as fast as it is eliminating them for others: we call this outsourcing, and it has triumphed on the back of several claims:

  • Cost-savings and economies of scale: your systems are installed in shared environments, but isolated from other clients’ systems by powerful security protocols, thereby achieving the best of both worlds: a secure environment, and a dedicated IT team, which costs you a fraction to employ because you’re sharing costs with other, globally-minded outfits.
  • Improved systems performance, with 24/7 oversight and a team of IT admins who eat, sleep and breathe shoulder-to-shoulder with your IT installation, and those of hundreds of others. These IT administrators are aces, since they're constantly dealing with incidents as they come down the pipe, rather than with the issues of a single IT ecosystem.
  • Universally accessible systems. The Cloud is all around, accessible from almost any network, any timezone, any OS, and it’s also backed up by a flexible and dedicated team of outsourcees.
  • Just to make sure there are no hiccups in the service, everything is signed, sealed and backed up by an SLA agreement to give you peace of mind.

So, who checks that those SLA agreements are adhered to? The same company that provides the outsourced service also turns in a weekly report on the status of the systems associated with your service.

If you’re thinking about outsourcing your IT administration and/or oversight, here are a few questions you can ask yourself:

  • The next time the subject comes up, will you be able to say to your CFO that the weekly outsourced operations report provides enough information and justification to outsource your IT services? Does that report contain all the answers to the questions other department heads put to you? Is it flexible enough to adapt to new technological developments?
  • Are the reports 100% accurate?
  • What about service outages? Do I get real time feedback, or am I going to be finding out at the same time as my irate clients?
  • What if your systems suffer gradual degradation? Is that going to be considered in your weekly SLA report? Or, as long as the systems are still on their feet, will it even figure in the report?
  • Can I be sure that my shared Cloud resources match in reality what they appear to be contractually? Disk speed, memory, processing power, bandwidth…
  • Are your agreed maintenance standards being applied? Details such as hardware or data backups, or a history of interventions on your outsourced systems, need to be covered in your reports.
  • Is my data secret? Is it safe? As long as there is third-party hardware involved, security is going to be the most important issue for the majority of outsourcing customers.

Of course, we don't actually think that our suppliers are scamming us, but at the same time, we're talking about a hefty sum when an SLA is not respected, and there is not only a financial cost to assume but also the damage to reputation, confidence and credibility (the whole matrix of capitalist fundamentals), plus technical, administrative and service problems on top of that. In other words, money: 0, ¥€$!

A reputable monitoring system helps to keep your SLAs all in a row, while allowing you to dig down into the data for extra feedback, even at the most fine-grained level. A good monitoring tool will also let you cross-reference third-party data and keep an eye open for divergences that can be corrected for optimal service. All that, plus the standard markers you expect from monitoring (when, where and why an error occurred).
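Monitoring the service yourself turns SLA verification into arithmetic over your own availability data, rather than trust in the supplier's weekly report. A minimal sketch of the core number behind any uptime SLA:

```python
# Availability as the percentage of monitoring intervals in which the
# service was up, computed from your own check history.
def sla_compliance(checks):
    """checks: one boolean per monitoring interval (True = service up)."""
    return 100.0 * sum(checks) / len(checks)

# 1 failed interval out of 1000 -> 99.9% availability
history = [True] * 999 + [False]
print(f"{sla_compliance(history):.2f}%")   # prints 99.90%
```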

A good console panel, service map and event manager let you quickly evaluate an incident's impact and which services are affected, and plan accordingly.

Pandora FMS can provide the answers to all these vital questions. It can help you scale correctly for the size of the environment you want to monitor, generate reports on a programmable schedule that help you explore your systems and verify their correct operational status, and of course ensure that your SLAs are being met.

Pandora FMS is the most flexible monitoring system on the market, adaptable to almost any IT environment, purpose, OS…, giving you the power to oversee servers, apps, communications, security, traffic, websites, services, UX, transactions from a single, centralized tool, whatever your resources, and however they may be distributed – around various offices and branches, or in the Cloud, or in different countries. It’s all the same to Pandora FMS.

Crisis management: How to manage an IT crisis without losing your head

March 31, 2017 — by steve2


crisis management featured

When ideas, products, services and capital come together, something wonderful happens; a business is born. But no-one can predict that business’s future: how it’s going to grow, if it’s going to grow, how the investors will react to future events, etc. All through the life of a business different kinds of intelligence will come into play: when you have to deal with a crisis is when you need to employ various kinds of intelligence simultaneously. Crises provoke extreme reactions, and the person, team, company who can keep their cool is always going to be better at crisis management than those who give in to panic.

5 keys to help you manage an IT crisis:

1. Acknowledge the situation, accept responsibility and apologize
The three As: Acknowledge, Accept, Apologize. These go a long way to restoring confidence in your brand, or product or service. They represent a good deployment of emotional intelligence. You’ve established a relationship of trust with your client or customer and now is the time to withdraw some of those emotional funds you have on deposit. Honesty is key.

To take a recent example, in March 2017 Amazon Web Services was disrupted by a typo in a command line. Amazon didn't delay in acknowledging the cause of the problem, or in accepting responsibility and apologizing. The emotional effect is to generate sympathy and even bring customers closer to the brand. Who's never slipped up, we ask ourselves? If Amazon can admit they made an error, so can you.

2. Explain your fix in layperson’s terms
After acknowledging that there is a problem, communication remains paramount. None of your customers should be out of the loop; keep everyone informed, explain that a fix is being worked on, and keep your language non-technical (if it’s a technical problem you’re dealing with). Not everyone in your organization is an engineer, and blinding people with jargon can seem very close to being evasive. Again, emotional intelligence is as valuable in this case as technical know-how. If possible, give your customers pointers on how they can continue working even if the network, for example, is down. An IT crisis isn’t an automatic pass to be non-productive. Those pencils and notepads don’t rely on software updates.

3. Facts are your friends
If it’s a serious issue you’re dealing with, don’t beat around the bush. In the case of Amazon, they acknowledged the reality of the situation, without pointing fingers or naming names. They explained the situation and followed the steps already touched upon.

4. Communication
We already saw the three As. This one we might sum up as the three Cs: Communication, Communication, Communication. The objective behind all these steps is to regain trust and good communication is a sign of trust. Peter Drucker, the business philosopher, affirmed that 60% of all management problems are due to bad communication. A constant back and forth, fluid communication, are a watchword for success. They might not guarantee it, but they provide useful feedback on both the good and the bad, what’s working and what’s not.

Keep it simple, and honest, as noted, and free of technical language.

5. Regain trust
The work doesn’t finish when the crisis is solved. Whether within your organization or your clients’ people may be worried that the situation could repeat itself. This requires more of the steps we’ve already seen, especially communication.

Finally, don't forget to thank your team. It may seem obvious, but when is there a better time for a pat on the back and the acknowledgement of a job well done than when a crisis has been averted?


What’s new for Pandora FMS 7.0 Next Generation

March 29, 2017 — by steve2


pandora fms ng 7 featured

Twenty-four hectic months after Pandora FMS 6.0's launch, we are proud to present the latest version of our proprietary monitoring software: Pandora FMS 7.0 Next Generation. It comes stacked with new features and fixes, destined to simplify your day-to-day monitoring tasks.

Added functions

New interactive network maps

All previous types of maps have been consolidated, and their functions integrated into automatic network topology detection. It also allows users to link to L2 manuals.


Business transaction monitoring

Distributed business transaction monitoring provides oversight and feedback on each phase of any level of business transaction (online sales, security certification systems).

pandora ng release notes

Visual console upgrades

We’ve worked on improving current functions, fine-tuning here and there, and improving the look of the final product. New icons, true type fonts and a considerable overhaul of the editor in terms of usability.

pandora ng release notes

Console now includes event history.

Storing events in the database enables long-term event reporting.

Dashboard upgrades

New Dashboard widgets: histogram module, agent/module status grid with improved filters, module data (icon, value), SLA percentages with histogram and data charts. Share your dashboard with other users who don’t have access to the console (public link).

pandora ng release notes

All services at a glance

New global vision lets you see all current services and their status.

pandora ng release notes

Rename agents and matching names.

Agents can be freely relabelled, with the new name subsequently counting as an alias. This allows host names to be duplicated on a single installation.

Rolling Release

From the current release (Pandora FMS NG: Next Generation) onward, patches and improvements will be applied incrementally, directly from the console, without the need for migrations or updates.

UX Monitoring

UX monitoring for complete transactions, start to finish, including flash, java and complex actions. Test heavyweight desktop applications remotely, through various transaction phases, checking each is successfully completed, and timings related to each phase.

pandora ng release notes

Dynamic monitoring: automatic threshold calculation

An intelligent and predictive system that allows different module thresholds to be automatically established, based on data collected during a specific timeframe.

Visual help to select thresholds

Dynamic graphs displaying approximate representations of established module thresholds, helping the user to more correctly establish the thresholds required.

pandora ng release notes

Mobile version for Metaconsole.

pandora ng release notes

Update Manager Online on Metaconsole

If you're online it's now possible to update the Metaconsole without having to manually download packages.

New: Pandora FMS Sync-Server

Extend your monitoring to isolated and/or restricted networks. Communication is initiated by the Pandora FMS server, instead of from the remote network to the server.

pandora ng release notes

Satellite server upgrades

– The satellite server allows block SSH checks, reusing a single remote connection to carry out different checks against remote machines.
– Safe credentials storage allows passwords (WMI, SSH) to be securely saved, via encryption, and reused on different devices, with no need to define them for each host.
– L2+L3 network recognition, adopting the ReconServer model.
– Includes disabled networks for Satellite checks, for faster and more efficient scanning.

New ISO installation based on CentOS7

The OS used on the official ISO is now updated, from CentOS 6 to CentOS 7, as well as containing the latest version of Pandora FMS.

Major and minor upgrades

  • Agent IP searches now also allow for secondary IPs.
  • Improved SLA algorithm.
  • New report types: weekly and monthly SLA.
  • New checkbox added for massive agent deletion. Allows user to select only disabled agents.
  • Real time report execution has been limited. If a report is too dense a warning will show with an option to send the report by email.
  • Now possible to change dashboard groups.
  • Custom reports can now be even more fine-tuned.
  • Upgrades on mobile console.
  • Added: a list of any agents in collections.
  • Programmed tasks improvements. New task added to ‘Create reports from template and send by email’.
  • Fixed: a problem with agent installer permissions on tar.gz. packets.
  • Warning added to explain that perl-Sys-Syslog dependencies are necessary.
  • Updated: pandora_agent.conf for FreeBSD.
  • Fixed: errors in exporting event reports to PDF and CSV.
  • Public Dashboards link.
  • Improvements in Active Directory integration and authentication.
  • Fixed: problems when importing policies that included plugin server modules.
  • Added: customizable percentiles on graphs.
  • Order IPAM lists according to IP, network, interval or latest update.
  • New programmed tasks extension on the Metaconsole.
  • Fixed: a problem with the zoom on the service maps.

Do you want to download Pandora FMS 7.0 Next Generation?
Click on the link below, and in the download area of the Pandora FMS website select 7.0 package.


Top 16 best network monitoring tools for 2016

January 2, 2017 — by steve63



Towards the end of 2016 we published a short introduction to network monitoring, covering the main characteristics to keep in mind when selecting a network monitoring tool. It was aimed at users whose installations don't fit standard syslog or bandwidth monitoring.

To review those characteristics and make a smart choice, you can refer to that article on network monitoring, and read this article for a better understanding of what a network monitor is.

IoT Monitoring and the Cyber Monday Blues

November 28, 2016 — by steve0



Last year, before IoT monitoring became a thing, experts were worrying about zombie computers. While they worried about powerful desktop PCs, a fifth column of helpful little home devices has crept under the radar and connected to the world wide web. Like sleeper cells waiting for the order to attack they lived among us; recording our favorite shows, regulating the temperature or light in our homes, watering our plants. Then, on October 21st, they were hijacked to send millions of requests to a bunch of service providers’ servers, laying low Internet giants such as Twitter or PayPal, and disrupting Internet services across the USA.

Today is Cyber Monday, and new regiments of these bots are marching off the shelves. Although, logically, consumer confidence in these devices is down, demand has hardly been affected, with Black Friday and Cyber Monday about to kick off our annual orgy of consumerism, lasting through to the hangover of the New Year. How can we ensure that these bots are safe? In the present rush to market, devices are designed with functionality rather than security in mind; the focus is all on what they can do for us and very little on what might be done to us through them.

This last attack came through household consumer goods, but what about pacemakers, automated saline or insulin drips in hospitals, or driverless cars? These devices also belong to the Internet of Things, a catchall term for any device with an Internet connection, however diverse its function may be; programming a DVR doesn't seem to have much in common with integrating cardiac monitoring into a hospital's IT infrastructure, or hunting Pokémon in the park with sending a delivery truck on a preprogrammed run down rural backroads.

Certainly the introduction of legislation could be a start, if we had years to address the problem, which we don't. It seems the solution is going to have to come from inside the industry, as is so often the case (for good and bad), and it is clear that IoT monitoring is going to have a part to play. The industry could take some responsibility by introducing default protocols for anomalous behavior in its devices, but failing that, IoT monitoring will inevitably step up to the plate. DDoS attacks aside, where else can monitoring play a role in helping to administer this proliferation of interconnected devices?

Monitoring household devices, and making damage-control provisions for similar DDoS attacks, would seem to be a given, and hardly a technological challenge; the tool should be able to tell you where the attack originates and which components and elements of your network are affected. This in turn lets you know how your business or organization could be impacted and allows you to take action.

In other areas we will see more positive, proactive benefits of IoT monitoring. Hospitals don't like to acknowledge it, but mistakes happen; late nights, long shifts, high patient turnover, even illegible handwriting can play a part in a medical mix-up. Automating routine hospital tasks, such as administering medicines via drip or regulating insulin delivery, is becoming the future standard, and monitoring those tasks, making them less prone to human error, is highly achievable. You establish your parameters (the amount of medicine to be delivered, the frequency, etc.) and the automated system carries out its function faultlessly. The monitoring runs in the background, ensuring the system is working correctly.

What about when the subject is up and about? Now we have wearables (smart watches, heart-rate monitors) and implantables (pacemakers), which can connect to the Internet, share data, collate it, analyze it, and generally provide a lot of health-related numbers to crunch. Doctors and patients will soon be looking at these figures, represented graphically through an IoT monitoring platform.

We're dealing with a problem of nomenclature as well as a security problem. Security is a question of corporate responsibility: diversifying default passwords and anticipating how devices will be integrated into a larger network. Once they're in that larger system, monitoring can also play its part. DDoS attacks are almost impossible to predict, given the suddenness with which they happen, although it may be possible to identify anomalous network usage in terms of traffic spikes or bad requests. If our tool collates enough data, it could be used to identify the circumstances leading up to an attack and give us a little wiggle room before the spam hits the fan.
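That closing idea, spotting anomalous traffic before an attack fully lands, can be sketched as a baseline-plus-deviation check, the same principle behind the dynamic thresholds many monitoring tools offer. A hypothetical illustration:

```python
# Flag a traffic sample as anomalous when it exceeds the historical
# mean by a configurable number of standard deviations.
from statistics import mean, stdev

def is_spike(history, current, factor=3.0):
    """history: past traffic samples; current: the latest sample."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    avg, dev = mean(history), stdev(history)
    return dev > 0 and current > avg + factor * dev
```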