

How to monitor an Apache web server with Pandora FMS

July 13, 2018 — by Alberto Dominguez



Monitoring an Apache Web Server with Pandora FMS

What is an Apache Web server?

In today’s article, you will learn how to monitor in depth an Apache web server with Pandora FMS. But first, let’s find out what Apache is.

It is the most widely used open source HTTP web server on the market: it is multiplatform, free, high-performance, and one of the most secure and powerful options available.

The project began in 1995 in the United States, created by a group of eight developers who formed the Apache Group, which would later give rise to the Apache Software Foundation, founded in 1999.

Among its many advantages: it is free and open source, it is compatible with Linux, macOS and Windows, it supports SSL and TLS security, it has a global and active support community, and it performs well (around one million visits per day).

The Apache Software Foundation logo

Monitoring an Apache web server is not as simple as checking the status of the process or making a web request to see whether it returns anything. That would be basic monitoring that anyone could set up with Pandora FMS, since there are some examples in the documentation.
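
For reference, such a basic check could be a single agent module that makes an HTTP request and reports whether the server answers. This is only a minimal sketch (the URL is a placeholder), not the plugin we describe below:

module_begin
module_name Apache HTTP basic check
module_type generic_proc
# Prints 1 if the server answers with HTTP 200, 0 otherwise (URL is a placeholder)
module_exec curl -s -o /dev/null -w "%{http_code}" http://XX.XX.XX.XX/ | grep -c "^200$"
module_end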

Apache web server performance monitoring

There is a plugin in the Pandora FMS library that allows us, along with the Apache server status module, to obtain detailed information about the server performance.

In addition, we can configure the server to obtain detailed information about each instance or web domain that we are serving on the server.

The first step is, obviously, to have Pandora FMS installed. Then, we will install a Pandora FMS agent on the Linux server where Apache is running.

Once the agent is installed, we will install the Apache plugin from the module library:

https://pandorafms.com/library/apache-performance-plugin/

We will download it and copy it to the plugins directory of the Linux agent, which is /etc/pandora/plugins.

In order to use the plugin, we need to configure the Apache server to enable the server-status handler, which provides detailed server information. To do this, edit the file /etc/httpd/conf/httpd.conf and add the following configuration:


ExtendedStatus on

<Location /server-status>
SetHandler server-status
Order deny,allow
Deny from all
Allow from XX.XX.XX.XX
</Location>

Where it says XX.XX.XX.XX we will put the main IP of our web server, so that it will only accept requests from itself, for security.
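
Note that Order, Deny and Allow are Apache 2.2 directives. If your server runs Apache 2.4, the equivalent restriction would look roughly like this (same placeholder IP), although you should check the syntax against your own installation:

<Location /server-status>
SetHandler server-status
Require ip XX.XX.XX.XX
</Location>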

Once these changes are made, we will restart the web server and launch the plugin manually to verify that it returns data:

/etc/pandora/plugins/apache_plugin http://46.105.97.91/server-status

It has to return an XML with data, since it is an agent plugin that returns several modules. This is an extract of the entire XML:

<module>
<name><![CDATA[Apache: Uptime]]></name>
<description><![CDATA[Uptime since reboot (sec)]]></description>
<type>generic_data</type>
<min>0</min>
<disabled>0</disabled>
<data><![CDATA[248008]]></data>
</module>

Once we have verified that it works, we will add the plugin to the Pandora FMS agent with the following line:

module_plugin apache_plugin http://XX.XX.XX.XX/server-status

Once again, we replace XX.XX.XX.XX with the Apache server IP, the same machine where the Pandora FMS agent is running.
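
As a reference, the line is usually appended to the agent configuration file and the agent is then restarted. The paths and service script below are the usual defaults of the Linux agent and may differ on your system:

# Append the plugin line to the agent configuration (default path on Linux)
echo 'module_plugin apache_plugin http://XX.XX.XX.XX/server-status' >> /etc/pandora/pandora_agent.conf

# Restart the agent so it reads the new configuration
/etc/init.d/pandora_agent_daemon restart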

Once this is done and the agent has been restarted to load the new configuration, the agent view should look similar to this one:

screenshot of the Pandora FMS agent

Server status monitoring

In addition to performance monitoring, we should also do basic monitoring of the Apache process itself; a single module is enough to verify that the daemon is running:

module_begin
module_name Apache Status
module_type generic_proc
module_exec ps aux | grep httpd | grep -v grep | wc -l
module_end

Being a Boolean module, it will only be set to CRITICAL when its value is 0, but its data will also tell us how many httpd processes are active on the server.
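
If you also want the thread count as its own numeric series, a companion module along the same lines could be added; this is just a sketch based on the module above:

module_begin
module_name Apache HTTPD Processes
module_type generic_data
# Number of httpd processes currently running
module_exec ps aux | grep httpd | grep -v grep | wc -l
module_end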

Load monitoring of a specific instance

In Apache we can configure an instance (which in its terminology is a virtual host) to use a specific log, only for itself, in this way:


<VirtualHost *:80>
ServerAdmin [email protected]
DocumentRoot /var/www/mydomain
ServerName mydomain.com
CustomLog logs/access_log_mydomain common

</VirtualHost>

Now we only have to monitor the number of new lines in this log to find out how many requests per second our server receives, using an incremental module:


module_begin
module_name MyDomain Request/sec
module_type generic_data_inc
module_exec wc -l /var/log/httpd/access_log_mydomain | awk '{ print $1 }'
module_end



How does SDN change our vision of networks?

June 28, 2018 — by Alexander La Rosa




SDN: Challenges for Network Administrators and Monitoring

Last December, Acumen Research and Consulting, a global provider of market research, published a report titled “Software Defined Network (SDN) Market” where they estimated a compound annual growth rate (CAGR) of 47% for SDN in the period of 2016 – 2022.

In 2016, Cisco launched its DNA (Digital Network Architecture), which is more based on software than hardware.

In 2017, Cisco acquired Viptela to complete its SD-WAN (Software Defined WAN) offer. Also, in 2017, IDC (International Data Corporation) estimated for SD-WAN infrastructure and services revenues a CAGR of 69.6% reaching $8 billion in 2021.

All those statistics show us that the business around networks is changing, but apart from new offers from our ISP or cloud services provider, does SDN really imply a change in the way of understanding, designing, managing and monitoring networks?

We have to start by clarifying that SDN is an architectural approach, not a specific product. Actually, SDN is the result of applying the virtualization paradigm to the world of networks.

In general, virtualization seeks to separate the logical part from the physical part in any process. In server virtualization for example, we can create a fully functional server without having any particular physical equipment for it.

Let’s translate this paradigm to a basic function of a switch:

When a packet arrives at a switch, the rules built into its firmware tell the switch where to put the packet, so all the packets that share the same conditions are treated in the same way.

In a more advanced switch, we can define rules in a configuration environment through a command line interface (CLI) but we have to configure each one of the switches in our platform.

When applying virtualization, we have all the rules for all the switches (logical part) separated from the switches themselves (physical part). SDN applies this principle to all networking equipment.

Therefore SDN proposes the separation of:

  • Control Level: at this level, an application called the SDN Controller decides how packets have to flow through the network, and it also performs configuration and management activities.
  • Data Level: this level actually moves the packets from one point to another. Here we find the network nodes (any physical or virtual networking equipment). In SDN we say traffic moves through the network nodes rather than towards or from them.

With those two levels defined the idea is that network administrators can change any network rules when necessary interacting with a centralized control console without touching individual network nodes one by one.

This interaction defines a third level in the architecture:

  • Application Level: at this level we find programs that build an abstract view of the network for decision-making purposes. These applications have to address users' needs, service requirements, and management.

In the following image we can see a basic model of SDN architecture:

basic model of SDN architecture

Finally there are two elements to mention:

  • Northbound API: these APIs allow communication between the SDN Controller and the applications running over the network. By using a northbound API, an application can program the network and request services from it. They enable basic network functions like routing, loop avoidance, security, and modifying or customizing network control, among others (see the sketch after this list).

    Northbound APIs are also used to integrate SDN Controller with external automation stacks and cloud operating systems like OpenStack, VCloud Director and CloudStack.

  • Southbound API: these APIs enable the communication between the SDN Controller and the network nodes. The SDN Controller uses this communication to identify the network topology, determine traffic flows, define the behavior of network nodes and implement the requests generated through a Northbound API.
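
As an illustration of the northbound idea, many controllers expose it as a REST interface. The sketch below is purely hypothetical (the controller address, path and JSON fields are invented for the example); it only shows the kind of call an application would make to program the network:

# Hypothetical northbound call: an application asks the SDN controller to steer
# traffic matching a given port onto a low-latency path. Host, port, path and
# payload are illustrative only, not a real controller API.
curl -X POST http://sdn-controller.example.local:8181/api/flows \
     -H "Content-Type: application/json" \
     -d '{"match": {"dst_port": 5060}, "action": "forward", "path": "low-latency"}'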

SDN was originally just about this separation of functions; however, the architecture has evolved to embrace the automation and virtualization of network services as well, in order to give network administrators the power to deliver network services wherever they are needed, without regard to what specific equipment is required.

This automation implies that SDN-based networks have to detect changes in traffic flow patterns and select the best path based on parameters like application type, quality of service and security rules.

That is our brief introduction to SDN. If you want to go deeper, we recommend visiting the websites of the Open Networking Foundation and SDxCentral.

So, let’s go back to the original question: does SDN really imply a change in the way of understanding, designing, managing and monitoring networks?

Traditionally, network administrators have a very strong connection to the hardware; we usually configure every switch, router and firewall using a command line interface.

This "usual way of doing things" gives us deep knowledge of the platform; however, we have always agreed that this way of working is laborious, error-prone and slows down changes. With SDN, we may have to think less about commands and configurations and more about rules and services.

On the other hand, virtualization has taken a long time to impact the world of networks, and even longer to make an impact on companies that are not Internet service providers or mega-corporations.

This change may therefore be less hard for those IT teams that have experience with server virtualization and containers and that have faced the challenges of the DevOps methodology (a topic we discussed previously on this blog).

In terms of monitoring, the fundamental challenge is how to monitor networks considering the complexity and transience that SDN implies. For example, how do you do application performance monitoring if the network topology can change several times a day?

There are some monitoring tools designed to sit at the Application Level as part of Network Management Systems. Those tools tackle the problem of complexity by monitoring the controller as well as doing regular network monitoring on the devices at the Data Level.

The real challenge with an agile structure is to identify the entry of new devices and automatically adjust the monitoring scheme.

Furthermore, troubleshooting on SDN-based networks requires an important effort in interactivity and contextual analysis. In practice, it will not be enough to see the network as it is in a certain moment, but we will need to move forward and backward in the topology in order to identify the performance problems associated with routes to optimize the whole process.

Therefore, we can foresee a large amount of data extracted from the platform that must be stored and then filtered under a flexible visualization scheme.

Finally, we must say that many of the challenges mentioned here have already been assumed by some monitoring tools. Those tools with flexible architectures and extensive experience in virtual environment monitoring can be successful. We invite you to know the full scope of Pandora FMS in virtualized environments by visiting our website.

Technical writer with over ten years of experience managing monitoring projects. A true yoga and meditation enthusiast.


Catch up, meet IOT and 5G

June 15, 2018 — by Alberto Dominguez



IOT and 5G, learn all about the new priorities of technology

I'm pretty sure that every morning, before leaving home, when you look at the bright oval of your bathroom mirror, you look at your freshly washed face and say, "Wow, how can I be so handsome and modern at the same time?" You don't know how, but you manage to keep up with everything. I know, you're not just doing it to show off. It's a lifestyle; it's YOUR lifestyle. But, I have to say, this modus vivendi is a bit risky and frenetic. You always have to be aware of everything, so that you don't let any innovation or trend slip by. You know that it is up to you to continue looking so good among your colleagues and, of course, in front of the mirror.

In order to help you remain the king/queen of the new technologies, here's an article about IoT and 5G. In case you were linking 5G (the 5th generation of mobile technology) only to faster downloads, I have to tell you that there is much more to this technology than you can imagine. Here it goes:

As you know, by now we use an increasing amount of data across smartphones, smart TVs, smartwatches, virtual reality, drones, autonomous vehicles, silent washing machines, refrigerators with defrost, voice assistants… In short, anything you can imagine. What does this mean? IoT and 5G are already among us: IoT, the so-called Internet of Things, and 5G, the 5th generation of mobile telephony.

All this use and interconnectivity involves the transmission of large amounts of data and, of course, a significant number of simultaneous connections. In order to do this, the future, or in other words the present, requires greater efficiency and lower energy consumption when it comes to enjoying technology.

5G is essential for us to continue communicating with each other and offers a path for all kinds of innovations. More secure means of transport, instant communication, virtual reality, intelligent cities… Billions of devices will be permanently connected creating a network that unites us all.

IoT and 5G go hand in hand, holding each other romantically and technologically. In fact, we could say that without 5G the Internet of Things could not exist. This statement may seem exaggerated today, but it certainly will not be in the near future, when we will presumably have more than 20 billion IoT devices around the world. Having so many "things" (IoT) sharing the network at the same time is only possible thanks to the capabilities of 5G.

Today, 3G and 4G networks still do not respond in real time. But, with the giant steps humanity is taking, we will clearly see first hand how the services that now suffer from delays will soon need a new, more immediate access technology.

Surely it doesn’t take any effort to imagine a scenario, in the future, where cars themselves communicate with their drivers, with the pedestrians themselves, with traffic signs, with other cars, with the boring people who work in the tollbooth and with anything that surrounds them.

It will be much more difficult to have an accident. Companies like Volvo and Tesla are well aware of this and are working to reduce road accidents thanks to this communication between "things".

You don’t really need to imagine yourself in these pulp magazine or science fiction stories to truly understand what the IOT and 5G are, since right now many devices and programs operate on the basis of this hyper-connected world. Companies like Huawei have already launched projects that use real user data and use IoT technology.

The previous 4G system considerably improved the delay and efficiency problems, but it is being superseded by this 5th generation, which is designed from the start for total versatility, scalability and energy savings. This means that the devices, and the networks created between them using IoT technology, use only what is required to operate, consuming just what is needed.

In order to understand the importance and the progress made thanks to the characteristics of 5G, we should point out and define the term "latency", which basically means "delay". We talk about latency as a measurement that estimates the time it takes a data packet to go from one chosen point to another. Imagine, for example, from the nearest telephone antenna to your latest-generation mobile phone. What 5G network technology promises is that the latency ("delay"), which currently sits at around 10 milliseconds, will be reduced to 1 millisecond. You'll have the fastest cell phone in the West, buddy. Apart from the speed, which seems enough to me, the use of this technology will, in general, be more efficient, and you will be able to connect and disconnect your devices according to your needs. Think of it as a hive mind or a neural network. IoT and 5G will make everything work as one.

By interconnecting all our devices at almost instantaneous speeds, 5G will allow us to live in an intelligent future, in which machines and humans are able to establish real and efficient communication. For example, your bathroom mirror may be able to analyze your breath to send a signal to your refrigerator so that it can recommend a snack for you. Dude, that’s the future. The ability to move large data packets across countless networks is fundamental to transforming the promised land of the Internet of Things into a feasible reality within our reach.

If you’re still more interested in knowing what this 5G technology means, and in finding out where it comes from and where it started, we recommend What is 5G technology? If you want to go one step further in the field of interconnectivity of your devices and platforms, you can choose to start with 8 social network monitoring tools.

See you soon!


What would our life without Internet be like?

June 11, 2018 — by Alberto Dominguez



Life without Internet; could this be possible in our world?

Wait a minute! Wait a minute! Read on before you end your life.

As far as we know, there are no efforts to eliminate the Internet, so you have no reason to worry. But it’s interesting to know what our life would be like without the Internet, right?

Although you can't even imagine it, that time existed. Those were ancient times, when everything was rural. Kind of like the '90s, sort of.

At that time there were already such malevolent machines, certainly created by some demon, called computers. But, although it may seem crazy, they were all disconnected from each other, as if they were living in isolation, and they were only used for nondescript tasks, such as writing texts or working (although video games already existed).

Thankfully, a few years later everything changed. The internet came into our lives, cities grew, water irrigation watered our fields and the world was filled with light and colour.

But what would it be like to live without the Internet? Would you dare to live such a terrible scenario? Let’s have a look at this terrible scenario.

– We’d have to go out on the street to do our shopping

Do you know those places they call shops, where some people (especially older people) do their shopping? Well, if you didn't have the Internet, you'd have to shop there!

Imagine that scenario. You want to buy a USB drive (for example) and you have to get off the couch, get dressed and go out into the cold street. And on top of that, you have to do it only during opening hours, because outside these hours the shops are closed. Would it be worth living like this?

– We would need to look at maps to get to the places in a life without Internet

You may be lucky enough to live in a time when maps are no longer needed. But those of us who have used them know their horrors.

These were the days when it was necessary to look in a book (hundreds of pages) or at a map (which was always creased) in order to find a way to reach another city or, even worse, a specific street. And there was nothing to help you, no voice to guide you on your way. Some people, who were desperate, were even reckless enough to ask other passers-by about the best way to reach their destination. As we said, it was horrifying.

– We’d be much worse informed in a life without Internet

Today, Google saves our lives several times a day. But imagine if it didn’t exist. You’d probably try to take your own life instantly. Luckily, you couldn’t do it because, without Google, you wouldn’t know how to do it.

Luckily, we have access to all the information provided by the Internet, so we can easily read Immanuel Kant’s complete works, learn more about Planck’s constant or find out what our cousin Segismundo, who has a coffee shop in Cuenca, had for breakfast yesterday. Oh, thank goodness!

– We’d be so very bored in a life without Internet

Imagine that you have to take a train ride and you don't have a smartphone with signal coverage. Okay, don't panic just yet. We already know how badly you cope every time you lose signal, so imagine that scenario 24 hours a day.

Have you passed out? Well, I hope you haven’t, so keep reading. There was a time when things were like this and to cover up the idle times we had to amuse ourselves with strange and unthinkable tasks like reading books on paper or looking at pedestrians. And people were happy! The survival capacity of human beings is almost limitless…

– We'd have to talk to people face to face in a life without Internet

Now, imagine if social networks or Whatsapp didn’t exist.

Before you have a nervous breakdown, let me remind you of that time. Dark times in which we had to use instruments such as the landline telephone (there were no mobile phones), letters, carrier pigeons or even personal contact (argh!) to communicate. Luckily, God created Tim Berners-Lee.

– Everything would be much more difficult

Work, communications, leisure… everything would be harder, more uncomfortable, more difficult and even dirtier.

There are, however, those who argue that some things would improve. We would enjoy more time with our friends and family in person, or we would spend more time in the countryside, enjoying nature. But who wants those things?

The Internet is undoubtedly an absolute must nowadays. It has changed our lives to the point of turning the previous era into a fuzzy and terrible historical period, similar to the Middle Ages (or even worse). Luckily, all that was left behind and happiness entered our lives for good.

And thanks to the Internet, as we were saying, you have access to universal knowledge, as if it were a story by Jorge Luis Borges (if you want to know who this man is, you can look him up on the Internet). And since universal knowledge is at your disposal, how about taking a few moments to get to know Pandora FMS?

Pandora FMS is flexible monitoring software, which is capable of monitoring devices, infrastructures, applications, services and business processes.

Do you want to know more about what Pandora FMS can do for you? Click here: https://pandorafms.com

Currently, many companies and organizations around the world already have Pandora FMS. Do you want to meet some of our clients and read some of our success stories? Take a look: https://pandorafms.com/customers/

Or maybe you have some questions about Pandora FMS and you want to send a message. You can do that too! This is quite easy to do, thanks to the contact form which can be found at the following address: https://pandorafms.com/company/contact/

The Pandora FMS team will be happy to help you. Go ahead, send a message before the Internet is gone!

And don’t forget to leave a comment in the comment section down below!


Is It Time to Say Goodbye to Your Desktop PC?

June 8, 2018 — by Alberto Dominguez



The Desktop PC is dead, this is another step forward

Is it time to say goodbye to your PC?

Do you think that there’s no point in changing? It takes too much effort. What is new seems so difficult and not very user-friendly… and what is old, is so well known and – as soon as you get away from it – is so nostalgic that it is very difficult to evolve and progress. I mean, if life was about crossing a river through a path of slightly separated rocks, I would stay on the first one. A bit out of fear, but also because I have found it to be stable, and it keeps me dry and I have become fond of it.

However, I am sorry to say that life is about evolution and is not just an analogy of rivers and rocks. And do you know what evolves the fastest? Probably technology. So, little by little, we must get used to the fact that our technical devices, from mobile phones to laptops, are getting faster and better all the time. In fact, for these very reasons, the desktop PC as we know it is dead, so we must be prepared for the future.

Many of us have already grown up with them. I still remember those dust-collecting things next to the oversized white monitors. Heavy and bulky machines that needed several reliable friends in order to be moved. Ugh…! That maze of cables, of all sizes and lengths, that used to connect the speakers and the inseparable printer and scanner set… Over time, the PC gradually changed from being a switchboard out of a science fiction film to becoming, from the second half of the 1990s, a common household appliance. With the arrival of the year 2000 and the following decade, the prices of these machines, which had been inaccessible to many families, dropped. You could pay in endless instalments, and computers even replaced the cutlery set as the typical gift for opening a bank account. This was the beginning of the desktop PC era for the whole family.

But the story goes on. Even if it is still too soon to say that the desktop PC is completely dead, in practice the desktop PC we loved so much, and which decorated a corner of our room, has already started to disappear. Back then, in the late 1990s and early 2000s, laptops were of course still too expensive. We only remember them on the knees of a certain type of person: those who needed to take their work everywhere. Architects, lawyers, designers, those people who shout on Wall Street… But, again, magic happened, and in just a couple of years laptops made their way into our daily lives. They ended up being the favourite gift for teenagers, far better than the latest video game or the latest scooter.

It wasn’t all good news, though. At first laptops weren’t as “upgradeable” as desktop PCs, and when something broke, sometimes it was much better and faster to buy a new one. This was due to the lack of specialized workshops and the fact that laptops were not made up of as many handy modules as their older brothers, the desktop computers. In addition, the level of diagnostics required for a laptop PC was much higher at the time.

We have recently overcome many of these problems. In fact, we are beginning to move beyond the laptop itself, getting closer and closer to the disappearance of the PC. In our decade we are witnessing the rise of a new wave of information technology. This time, mobility and personalization will be its strong points.

The mobile phone started out as something to take anywhere, and for your parents to keep an eye on you, and now it threatens to lead to the disappearance of the PC. And this whole rise of mobile devices, with increasing accessibility, has enabled a broad democratization of technologies to which more and more people have access. Do you know someone without a phone? If so, respect them. They are like pandas or monarch butterflies, endangered beings.

It is not only phones that are to blame for the disappearance of the PC; it is also what we call, in an ethereal tone, "The Cloud": massive processing and storage of data on servers that house the user's information and give you instant access to your data at all times, wherever you are, and through any device, probably a mobile phone, which is lighter and can always be carried in your pocket.

The Cloud represents an era in which individuals are offered an incredible level of malleability and manageability in devices, shifting the focus from personal computers to smartphones, tablets and other breakthrough devices such as the smartwatch, the smart fridge or, without exaggerating, the smart house.

In the last decade we as users have learned a lot. There is no longer a clueless father buying a giant computer for his oldest son and showing off the most modern multimedia encyclopedia installed on it. Now we all know about technological breakthroughs and have very different expectations of them. Virtualization and streaming require you to always stay connected to your platform, and so do the new social, work and leisure environments. This is achieved with a much better and lighter mobile device, accessible and close at hand at all times, with the same applications and programs used on a PC.

It's a pity, and I know that many of you, like myself, have a hard time getting rid of those PCs with stickers that have already fossilized on their casings. But that's how fickle and fast technology and computers are, folks. To quote the man from the apple company, Steve Jobs: "When we were an agricultural country, all the cars were vans, because that's all you needed on the farms, but when the cities grew, cars with power steering and automatic transmissions began to arrive. The PCs will be like these vans. They will still be around us, but very few people will use them."

If the future is your present and the computer field is your thing, perhaps you want to read something like What is Web 4.0?, or you can immerse yourself in DevOps Architecture: Monitoring Challenges. Now that you know that the desktop PC is dead, what are your thoughts on this? We want to hear from you, so don't forget to leave a comment in the comment section down below! Did you enjoy this "the Desktop PC is dead" article?


How can I use php IPAM as an auxiliary tool in Pandora FMS?

June 7, 2018 — by Jimmy Olano



php IPAM: Check IP addresses and all their changes

php IPAM, as its compound name indicates, is software for Internet Protocol Address Management (IPAM) written in PHP, which makes it quite unique. In this article we will look at how we can use php IPAM as an auxiliary tool, although we will find that it also has certain monitoring features. Let's go!

php IPAM and Pandora FMS

php IPAM Open-source IP address management logo

In a previous article we explained what an IP calculator is and how it works when implemented in Pandora FMS. The truth is that since May 2014 (formally released in version 5.1), Pandora FMS has included IPAM as a very useful extension for managing and discovering addresses, associating them with existing agents and registering comments on each IP address, and it can also establish exceptions from the monitoring point of view. IP addresses can be detected through ping or through the addresses reported by previously registered agents: Pandora FMS will work out the equivalences and correspondences for us.

Addresses view PHP IPAM

When the detections are controlled by the Pandora FMS agents, it is easy to know their operating systems, which are represented graphically in the IPAM extension. We can set alerts when an IP address changes, and we can also see an overview of all the addresses or a segmented view: active devices, managed devices, etc., and their possible combinations.

So, what is the advantage of using php IPAM?


There is a great variety of software available for IPAM, and in this article we will look at php IPAM in detail as an auxiliary tool. But first we will review the concepts behind IP addresses: without this abstraction, neither the Internet nor monitoring would exist at all.

Introduction

Today, in almost every business and home there is a local area network. At the start of the Internet, the IP address system was planned in such a way that three ranges were set aside for private use on local area networks. These addresses are called private addresses. Each organization can use these private addresses to connect its devices, and each device can also have one or more IP addresses. (For example, a laptop can be connected by Ethernet cable, by its integrated "wifi" antenna and also by a "wifi" antenna connected to a USB port; web servers are generally redundant and have two Ethernet connections for data traffic and at least one Ethernet connection for monitoring.)

The addresses are repeated in each organization, but there is no problem because the private networks are not connected to each other, and this is where the Internet comes in: the interconnection of private networks through network address translation (NAT), because on the Internet each public IP address is unique, and routers get a public IP address which is "shared" within each organization.

Numerical practice (IP calculators)

How many IP addresses do we need in our organization?
There are three types for private use:

  • Class A: 10.0.0.0 to 10.255.255.255 (8 network bits, 24 host bits).
  • Class B: 172.16.0.0 to 172.31.255.255 (16 network bits, 16 host bits).
  • Class C: 192.168.0.0 to 192.168.255.255 (24 network bits, 8 host bits).

In short, class A allows a single network with millions of IPv4 addresses, class B allows 16 networks, each with thousands of IP addresses, and class C allows 256 networks, each with a couple of hundred IPv4 addresses (exactly 254 usable, because 0 and 255 have other uses that we will discuss later on). There are other kinds of networks that are not relevant for this article, such as Class D for multicasting and Class E for research and development.

In practice, Class C networks are well suited to our needs at home and in small businesses: keeping track of one or two dozen devices on the router that shares a public IP address for the entire private network is easy and convenient.

But in the other cases, thousands and even millions of IP addresses are a real nightmare, a very time-consuming undertaking. For these cases we have to follow exactly the same steps as the Internet Corporation for Assigned Names and Numbers (ICANN): to assign each of the departments a subnetwork and to delegate the authority to the department heads. But writing down and recording everything easily exceeds the capabilities of a spreadsheet and that’s why the php IPAM software was developed.

Brief history of php IPAM

The first public version was 0.1; by November 2014 php IPAM, the work of Miha Petkovsek, had released version 1.1, followed by version 1.2 in January 2016 and version 1.3 in July 2017.

It has been developed as free software, under the GPL version 3 license, since June 2015, when the repositories were moved to GitHub, and apparently its funding comes from donations of both money and web hosting. In fact, the site runs on a virtual machine provided by a web hosting company. This free software model has brought together a small, select community, so it can be difficult to find information about it.

Installation of php IPAM

Unlike its history, there are many tutorials about php IPAM on the Internet, some better than others; we tried it on Ubuntu 16 and took the screenshots that you can see here. The first thing we did at the end of the installation was to upload our own logo for evaluation purposes and fill in the data that identifies our test server, including the email settings, which are important for receiving alerts, although this is not so common given the nature of the tool.

Administration panel php IPAM

In short, you need a MySQL database server, an Apache web server with PHP, and the proper firewall permissions for scanning your network or local area networks… and even the subnetworks, which we will get to in the next section.
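
As a rough orientation only, on a Debian/Ubuntu-style system the prerequisites and the application itself can be obtained along these lines; package names, PHP modules and paths vary by distribution and php IPAM version, so treat this as a sketch:

# Install the usual LAMP prerequisites (package names may differ on your distribution)
apt-get install apache2 mysql-server php php-mysql php-gmp php-ldap php-snmp git

# Fetch php IPAM into the web root (path is an example)
git clone https://github.com/phpipam/phpipam.git /var/www/html/phpipam

# Copy the sample configuration and adjust the database credentials in it
cp /var/www/html/phpipam/config.dist.php /var/www/html/phpipam/config.php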

Subnetworks (or in other words: how php IPAM works)

Some of us do not work for hosting companies or Internet access providers, so the examples discussed here are based on private addresses. We can assure you that everything described here also applies to the Internet, and there are organizations "that make money" with the help of php IPAM, which in turn donates to the FreeBSD Foundation, so the cycle keeps going.

The tasks of php IPAM are the tracking and storage, in a MySQL database, of information about:

  • Our devices and their IP addresses.
  • Our computer cabinets or racks.
  • Our circuits (with the help of the Google Maps APIs).
  • LDAP authentication, which saves us a lot of work.
  • Virtual private networks.
  • The NATs we have created for our computers to connect to the outside world.
  • You can even make notes in a sort of micro Wikipedia! (although we do not recommend it for issue management; perhaps in chronological order of cause/effect/solution, to avoid making the same mistakes twice).

And also the subnets that we have created.

In order to talk about subnetworks, which are a routine task and difficult to keep synchronized between computers, we first have to explain how they are managed and why they exist. In 1980, the devices in charge of connecting to the Internet were very basic. The need to create working subnets arose, but the computers did not have the necessary technology. The solution was to create subnet masks as a way for devices to quickly detect whether a received message or datagram belonged to the subnet they had been assigned to work on.

That is why the mask is used to distinguish, within the IP address, the part that identifies the network from the part that identifies the device, and it does so with binary numbers. Network masks have the same format as IPv4 addresses, but the bits of the network part are all set to one; for example, the network mask for a private class A network that we will create in php IPAM would be the following:

11111111.00000000.00000000.00000000

That is, in decimal notation, 255.0.0.0, since our private network can contain millions of IPv4 addresses (roughly the product of the last three bytes, about 16.7 million, because the all-zeros host address identifies the network and the all-ones address, 255, is used for broadcast). Now let's imagine that we have at least one million devices in our organization. How would the devices keep transporting and delivering packets or datagrams with such a volume of users?

The practical solution at the time was to apply an AND operation (a bitwise AND is, bit by bit, a simple multiplication) to each packet address and the network mask, in order to determine whether the packet was for the network or the subnet. Let's assume that a packet is routed to 10.0.7.23 on our Class A private network:

00001010.00000000.00000111.00010111  <- packet addressed to 10.0.7.23
11111111.00000000.00000000.00000000  <- net mask 255.0.0.0
00001010.00000000.00000000.00000000  <- AND result: 10.0.0.0 (our network)

There is something implicit here: our network mask, "the one we chose", is implicit because in a class A network the first octet identifies the network, that is, 8 bits. So, in a notation called CIDR (which expresses our network address and the network mask together), it is represented simply as follows: 10.0.0.0/8

Creating a private subnet

Now imagine that in our private class A network we need a subnet where we will place the machines for our programming tests. For this purpose, 254 IPv4 addresses are more than enough, so by mutual agreement we are assigned 10.0.0.0/24; this means that within the private class A network 10.0.0.0 we are assigned the net mask 255.255.255.0. Going back to the last numerical example, but now with our net mask:

00001010.00000000.00000111.00010111  <- packet addressed to 10.0.7.23
11111111.11111111.11111111.00000000  <- net mask 255.255.255.0
00001010.00000000.00000111.00000000  <- AND result: 10.0.7.0 (it is not our subnet; our subnet is identified as 10.0.0.0)

Public subnet

At the end of the last century we realized that IPv4 addresses were running out due to the exponential growth of the Internet. ICANN saw that many Internet Service Providers (ISPs) had been assigned very large blocks, which left many IPv4 addresses idle. The decision was Solomonic: to allocate blocks using network masks sized to fit each provider. Believe it or not, their computers do these calculations millions of times a day: they compare addresses against network masks to see whether they are in their subnet (today, in fact, there are other mechanisms implemented in modern modems, routers, hubs and repeaters, but the basic concept remains the same). That is why we said that many organizations benefit from using php IPAM to register and plan their assigned addresses.

Subnets with variable length mask

If you have followed what we have explained up to this point (even if some academics or professionals might be surprised by the simplification), we can move on to the concept of variable-length network masks: quite simply, they are not limited to whole-byte values (8, 16 and 24 bits) but can take any other value, such as 10.0.0.0/28 for our example (we will leave this one for you to practice with your IP calculator).
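
If you want to check your manual calculation, many Linux distributions ship a small command-line IP calculator; the exact tool and its output format vary between systems, so take this merely as an example:

# ipcalc is available on many distributions (output differs between versions)
ipcalc 10.0.0.0/28

# sipcalc is an alternative that prints similar information
sipcalc 10.0.0.0/28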

Creation of users and groups in php IPAM

After this conceptual basis, we recommend starting with the creation of groups and then users when working with php IPAM. php IPAM includes two groups by default: an administrator group and a guest group. Groups simply allow us to group our users for searching or visualization, or to keep some kind of order; that is, they do not grant any write or inheritance rights by themselves.

Add user view php IPAM

In the user creation section we get the complete picture: we can create a user as a normal user or as an administrator, and at the same time it shows us the groups we have created (remember that the description of a group is not binding for its users' rights; a group described as "Remote Administrators" does not mean that its users have administrator rights). Simplicity is often welcome: the two default groups are enough unless the organization is extremely large.

When creating a user we are shown how our users will authenticate themselves, and at first we will only see two options: the local database and Apache, which is an outdated, file-based method that we do not recommend. Our choice will be LDAP; however, we must first enable the php-ldap library on our Apache server and restart the service. In total there are seven possible forms of authentication:

  • Local.
  • Apache.
  • Active Directory (AD).
  • LDAP.
  • NetIQ.
  • Radius.
  • SAMLv2.

IP calculator

Once we have paved the way, we can create our networks and subnetworks. The easiest way is to go to the integrated IP calculator and enter your assigned CIDR, or the one you create yourself in the case of private networks. php IPAM will calculate and display all the values and will offer to create a network with that result, and even its subnets.

Important note: if some subnetworks overlap, php IPAM will indicate it with a warning in red letters.

IPv4v6 calculator php IPAM view

Once the CIDR values have been set correctly, we can assign the user groups the corresponding rights over that network or sub-network (note that the rights are given here as an option). Here we would like to highlight once again the peculiar way of managing read and write rights in php IPAM!

Edit Section view php IPAM

Exploration of devices in php IPAM

To make it easier to add devices, php IPAM can use the ping program to contact the devices on the network or subnetwork previously created, but the interesting thing is that there are two more ways: one for the devices we already have and another for new devices (a minimal sketch of the self-registration flow is shown after this list):

  • Through agents: this is similar to Pandora FMS, that is, software installed on each machine and suited to its operating system. In a Windows environment, Active Directory allows us to roll this out massively, and in GNU/Linux, with SSH and well-designed scripts, we can manage hundreds or even thousands of machines as desired.
  • Here's what's new and very useful:
    • When installing new devices we can configure them with the name of the network or sub-network.
    • With its respective script it connects to the API of php IPAM.
    • Check the last free IP address in that range.
    • Self-assigning this value.
    • It confirms to php IPAM and is registered in the MySQL database.
    • This way php IPAM becomes a kind of DHCP with the difference that it is only used once: when the virtual machine or device is installed (it can also be real, it is the same but it is obvious that a human being is in charge of the process, this would happen when mounting the server farm racks, which in turn contain the virtual machines).
    • Currently there are virtual machines that can even be built by our own clients, as they wish, on our server farms (each one chooses his or her operating system for the hypervisor we have configured with php IPAM).
    • Of course it also includes deletion routines, in case the virtual devices are deleted or the real devices are moved or deactivated.
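
A very reduced sketch of that self-registration step might look like the following. The application id ("myapp"), the subnet id, the endpoints and the token header are assumptions based on the php IPAM REST API and would need to be adapted to your installation:

# Hypothetical self-registration flow against the php IPAM REST API (adapt to your setup)

# 1. Authenticate and obtain a session token
TOKEN=$(curl -s -X POST -u admin:password \
        http://phpipam.example.local/api/myapp/user/ | sed 's/.*"token":"\([^"]*\)".*/\1/')

# 2. Register this host on the first free address of subnet 7
curl -s -X POST -H "token: $TOKEN" \
     -d "hostname=$(hostname)" \
     http://phpipam.example.local/api/myapp/addresses/first_free/7/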

php IPAM: auxiliary tool

php IPAM can also register our DNS servers and monitor idle IP addresses and devices with a simple ping, fping or PEAR ping (an fping example is shown below the screenshot).

Ping check result - IP address details php IPAM
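
For a quick manual check outside php IPAM, the same kind of sweep can be reproduced with fping; the address range below is just an example:

# Ping-sweep a /24 and list the addresses that respond
fping -a -g 192.168.1.0/24 2>/dev/null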

It uses agents, registers and manages NATs, and offers basic monitoring with SNMP, but first we must configure our Apache server with the php-snmp module. Then we can configure the devices to monitor. It will even allow us to create SNMP traps, which are useful for creating event alerts. For a better understanding, we have a complete article on the basics of querying via SNMP.
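
Enabling that support is usually just a matter of installing the PHP SNMP extension and restarting the web server; the package and service names below are common defaults and may differ on your system:

# Debian/Ubuntu
apt-get install php-snmp && systemctl restart apache2

# CentOS/RHEL
yum install php-snmp && systemctl restart httpd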

php IPAM SNMP management view

php IPAM: exporting and importing data

To finish, php IPAM has the option to import or export data in spreadsheet format for the sections we might need, and we can also export (but not import) the complete database in SQL format or as one large spreadsheet!

Select sections and subnets fields to export view php IPAM

When importing data we must follow a detailed protocol in order to keep the data exactly compatible:

Select IP addresses file and fields to import php IPAM

Automation in data export

Although the spreadsheet option is available, we suggest developing, through the API, a few scripts that extract up-to-date data and tell us whether there have been changes in the network and what those changes are: whether devices have been added or removed, either by ourselves or by the clients to whom we have delegated the creation of virtual machines. That way we will be able to monitor the complete infrastructure with Pandora FMS.
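
As an illustration of that idea, a small script could periodically pull the address list from the php IPAM API and report how much it changed since the previous run, a value that could feed a Pandora FMS module. The endpoint, the token and the subnet id are assumptions to adapt:

# Hypothetical change-detection sketch against the php IPAM API (adapt app id, token and subnet)
touch /tmp/ipam_addresses.old
curl -s -H "token: $TOKEN" \
     http://phpipam.example.local/api/myapp/subnets/7/addresses/ > /tmp/ipam_addresses.new

# Number of entries that changed since the last run (could be used as a generic_data module)
diff /tmp/ipam_addresses.old /tmp/ipam_addresses.new | grep -c '^[<>]'
mv /tmp/ipam_addresses.new /tmp/ipam_addresses.old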

If we want to go one step further, we could also create a plugin for Pandora FMS that queries the database directly looking for changes, but that is an advanced topic and it would not be a normal plugin (obviously with read-only rights, always from php IPAM to Pandora FMS).

Conclusion

We predict a great future: with php IPAM, which is free software whose source code we can access, we can obtain data on which devices we should monitor in an almost completely automated way, keeping the infrastructure planning and deployment machines completely separate from the monitoring and maintenance ones. We see, then, an excellent symbiotic relationship between both software products!

Please leave us your comments or opinions and we will be happy to answer all of them.

Technical writer. He began studying engineering in 1987 and programming with proprietary software. He now runs a blog where he spreads knowledge about free software.


Smart city; imagining the cities of the future

June 4, 2018 — by Robin Izquierdo



What’s a Smart city; a glimpse into the cities of the future

The definition of "Smart City" is constantly changing. As the term (which is of recent creation) is explored in depth and technology develops, the concept of the Smart City is transformed, advancing with the objective of better optimizing resources and improving the quality of life of its inhabitants.

We could define the Smart City as a planned city that takes advantage of the use of technology in such a way that it becomes an ecological, connected, sustainable and rational space for the enjoyment of its inhabitants. As can be deduced from this definition, in order to achieve the development of a Smart City, multiple aspects come into play, encompassing the use of urban planning, engineering, information technology and whatever human beings may know to improve the efficiency and habitability of a city.

In fact, the more practical aspects of improvement a city contemplates, the more "intelligent" it is. In a Smart City it is not only a matter of providing hundreds of Wi-Fi connection points or using LED bulbs for street lighting; using planning and technology to achieve improvements such as better use of time through efficient urban planning and public transport, top-quality health services, or improvements in the functioning of local government is also part of the development of a more intelligent city.

What will the cities of the future be like?

Thinking about what a big city could be like in just a few years’ time is a fascinating exercise. Let your imagination run wild for a few moments…

As soon as we leave the intelligent building in which we work, we will be surprised by the purity of the air, achieved thanks to greener traffic and the presence of gardens, both vertical and horizontal, extending through streets and buildings.

The roads and streets will be clean and in excellent condition, thanks to cleaning services and automated repairs. Other urban services, such as street lighting and sewerage, will also be in perfect condition, thanks to robots and drones that will come to perform maintenance as soon as necessary.

The city itself will provide us with all kinds of useful information. Combined with augmented reality devices, it will let us know in real time things such as which park looks best at sunset, which shopping street is less congested, or information of tourist interest, such as the history of buildings and monuments.

Solar panels and intelligent materials (for example, tiles capable of generating electricity through kinetic energy generated by footprints already exist) will come together to provide the city with enough clean energy. Water harvesting systems will maximise their use, for example by using rainwater, and it will even be possible to generate part of the city’s food needs on vertical crops, which could even be located inside buildings, illuminated by LED lights.

If we need to move around the city, we will only have to ask, through our mobile phone or any other device, for the city's electric car service to pick us up. The use of autonomous public vehicles will bring great benefits to traffic in the city. On the one hand, by eliminating private cars, the total number of vehicles will be drastically reduced. In addition, being in continuous use, they will free up the space traditionally occupied by parked vehicles. These improvements, together with intelligent traffic management, will make city travel more environmentally friendly, faster and safer.

Of course, self-propelled electric vehicles will easily find charging stations or even be able to dispense with them thanks to mobile recharging systems. It will also be very easy to charge all of our devices, perhaps wirelessly.

The streets will also be safer. Thousands of cameras will monitor public spaces and sensors such as those capable of detecting gunshots, fires or smoke will be continuously communicated with public services such as the police or firefighters. The City Councils will have control centres – already existing in some cities – from which they will be able to make all kinds of decisions based on the information they receive, in order to improve the functioning of the city, perhaps aided (or even directly managed) by Artificial Intelligence systems.

In addition to all these improvements, the transformation into intelligent cities will have a major impact on populations in areas where living conditions are particularly harsh. Thus, new refrigeration technologies applied in cities located in desert areas will be of great importance for people living in them, improving their habitability.

The picture of the intelligent cities of the future is certainly hopeful. Will they really be as we imagine, or will we have to settle for dark and polluted cities? Some studies state that in the year 2050, 75% of the world’s population will live in urban areas. Will we manage to create cities capable of giving a good quality of life to such a large number of people, or on the contrary will we resign ourselves to living in oppressive cities like those that science fiction has so often shown us?

While we are waiting for cities to become better and smarter, let us introduce you to Pandora FMS. Pandora FMS is flexible monitoring software, capable of monitoring devices, infrastructures, applications, services and business processes.

You may be one of the people responsible for implementing improvements in your city to make it smarter. Find out how Pandora FMS can help you here: https://pandorafms.com/

Or ask us anything you want about Pandora FMS, in the contact form that you can find in the following address: https://pandorafms.com/company/contact/

The Pandora FMS team will be happy to assist you!

Technical writer. Although his name might suggest otherwise, Robin is from Madrid. He loves travelling, cinema, and thinking and writing about the future of technology.


What’s New in Pandora FMS 7.0 NG 723

June 1, 2018 — by Irene Carrasco


This latest update package of Pandora FMS 7.0 NG contains improvements as well as visual changes, and includes fixes for several problems. A list of the most important changes can be found below.

New features and improvements

Real-time graphics of SNMP modules

These graphs allow you to see, on demand, real-time data from an SNMP interface or any other module that can be polled in real time from the Pandora FMS console:


Rebranding

It allows you to create an OEM version of your Pandora FMS installation, changing the name of both the product and the developer or manufacturer in all sections of the console, as well as on the main console screens of the server. It is possible to replace all logos, icons and references throughout the application.


External Tentacle server configuration file

From this version on, the behaviour of the Tentacle server is much more flexible and easier to configure from the outside, thanks to the new (optional) external configuration file.

Events in “progress”

It is now easier to use this intermediate event status directly from Training and Event Management, which is especially useful for operation teams with several people working in different shifts.



New GIS maps to work locally

With this new version it is possible to add a new data source for GIS maps. From version 7.0 Build 723 it is possible to add connections to WMS (Web Map Service) servers, such as GeoServer, which allows you to have GIS information in your own installation, without depending on external connections.


Other small improvements

  • The export of events to CSV format has been improved, adding, among other fields, the internal (and unique) ID of the event.
  • Pandora FMS's internal audit log, which provides full traceability by user and operation, can now be exported to CSV format.
  • Improvements in graph visualization (tick separators, scale and detailed zoom in the graphs).
  • Improvements in the Oracle plugin, which now optionally allows generating an agent for each database instance instead of grouping them all in the same agent.
  • Small visual improvements in the visualization of logs.

Troubleshooting

  • Three security vulnerabilities that affected previous versions have been fixed (CVE-2018-11221, CVE-2018-11222 and CVE-2018-11223).
  • Reported problems in group synchronization between nodes and meta console have been fixed.
  • Fixed a problem in the RSS view that prevented it from working.
  • Fixed a serious bug in the Tree view that sometimes prevented information from being shown to the user.
  • Fixed problems in the network module editor in policies.
  • Fixed a minor problem with network maps for /32 network masks.
  • Fixed a problem in the visualization of SLAs in dashboards.
  • Fixed a problem that prevented the correct editing of custom graphs in the visual console.
  • Fixed a bug when importing policies.
  • Fixed the incorrect display of percentile graphs in custom graphs.

Download Pandora FMS

The latest version of Pandora FMS can be downloaded from the downloads section of our website:
https://pandorafms.org/en/features/free-download-monitoring-software/

Geek, Monitoring, Monitorización

Self-driving cars: the Ultimate Driving Machine?

May 28, 2018 — by Alberto Dominguez

Self-driving cars; Will these change our lives for the better?

Perhaps on your last extended outing you may have felt the fear of having your mobile phone in the final 5% of battery life. And you might have thought, “How useless we humans are without technology”.

You can take comfort in the fact that human beings’ relationship with technology goes back a long way. So it is very likely that the first hominid who forgot his battle stick at home, when he went hunting, also felt very useless against the enemy without that futuristic breakthrough that was once an ergonomic and strong stick.

Today, we are witnessing the growth of all kinds of inventions. It's overwhelming to live in the future; it's so exciting and fun. But we have also seen how our parents, who used to be quite wise when talking about streets and shortcuts, now depend on GPS in order to get anywhere.

Where is the problem, then? Probably in that technology has allowed us to go one step further each time, with less understanding of what we do. This considerably increases our dependency.

So, perhaps we grant too many abilities to technological tools, thus avoiding the challenge of learning large amounts of knowledge. It is reasonable to think, therefore, that with the arrival of new autonomous technologies in the world of transport, we will also lose some of our confidence on the road. Although if we stick to the data, even today 90% of traffic accidents are still the result of human error. Perhaps it is time to take the plunge and make a serious commitment to the future.

The science fiction technology behind self-driving cars is taking so much time and so many resources precisely because it is complex and demanding. These are some of the key factors that must be considered: the speed of the vehicle, its behaviour and feedback with other cars, the longitudes, latitudes and distances all around it, and even its exact location in the world, with the greatest possible accuracy. There is no room for any kind of error; the functionality must be flawless.

These self-driving cars promise the unthinkable. The idea has even been raised that traffic jams could disappear, on the grounds that congestion is merely a problem of lack of coordination between the vehicles on the road. Continuous contact between vehicles may well bring greater harmony, reducing congestion and accidents, but we also need to bear in mind the sheer number of vehicles currently clogging the roads. So this sounds quite good, but I'm afraid we will still have traffic jams even with self-driving cars.

What about parking? That was the part where we all suffered the most in our driving school practice test, and now these self-driving cars promise to do it on their own. It doesn't matter if you live in a tiny village in La Mancha or in the capital's largest residential area, it is becoming increasingly difficult to find a place to park. With this type of vehicle the problem would no longer be yours. Dear friend, self-driving cars are autonomous. You will be able to get out whenever you want and let the car move on its own until it finds its own place in the shade. You get off at your destination and then it takes a little spin around the neighbourhood.

You're likely to be taken aback by something at this point in the article. So now you've put yourself into conspiracy mode, in the very essence of Black Mirror, and you've already noticed: "If my car constantly needs its satellites in real time in order to work… does that mean that someone will be able to know where I am at any time?" The short answer is "yes". The long one: "Yes, bro, it's a little scary, so you can start trembling now". We can put ourselves in the worst possible situation and say that this could be dangerous, as we will have no privacy or place to hide while on board. Obviously, the big car companies are also prepared for the worst and are striving to prevent this kind of invasion of privacy, but your data will still be there. Unless, of course, you choose to live the simple life of Amish society.

In any case, and although the project of Level 5 vehicles on the European scale of autonomy (in which the driver is no longer needed) is getting closer and closer, there are still certain obstacles to overcome before this scenario becomes a reality. Technology already makes it possible to manufacture these self-driving cars, such as the new Audi A8, which is already a Level 3 car, but it is impossible to use because the legislation doesn’t allow it, and we are not anywhere near having definitive laws for this unknown situation. Of course, you also have to think of all the necessary elements, both on the road and in the vehicles themselves, so that they can operate properly. We are talking about ultrasound sensors, radars, cameras, 5G telephony, high-definition mapping, automated electric engines, powerful processors… Besides the immense network of computer resources that would be necessary for the connectivity and security of all the vehicles on the road.

There is still a long way to go, but at this ever-increasing pace we will get there soon.

If you want to dig deeper into this world of science fiction in the purest Minority Report style, you might want to take a look at The Dangers of Artificial Stupidity, or you might want to continue your research on autonomy and monitoring in the article If fetal monitoring is a real thing, then why don’t you monitor your business?

Don’t forget to leave a comment in the comment section. Do you think self-driving cars will change our lives for the better? Let us know down below! We are looking forward to hearing from you.

Geek, Monitoring, Monitorización

12 Google facts that you may not know.

May 25, 2018 — by Alberto Dominguez

Google facts: 12 things that will most definitely surprise you

Everybody knows Google, right? It is the most widely used search engine in the world. Even though it is only in its twenties (it started in 1998), it seems to have been around forever.

Google's history begins in 1995, when two Stanford University students, Larry Page and Sergey Brin, came together to create a search engine, which they first called "BackRub". They had no idea where it would all end up…

It’s hard to estimate how many people use Google every day, but the figure is certainly in the billions. That’s why Google can be considered one of the most popular sites in the world. But, although millions of users use it almost every day, there are many peculiar aspects about this popular search engine that most of its users are unaware of. Do you want to discover some Google facts? You’re in the right place.

Its name comes from the word Googol

Yep, that's right! A googol is an extremely large number (a 1 followed by 100 zeros), so the name makes perfect sense.

The creators of Google, Page and Brin, chose this name to represent the enormous amount of information that the search engine was going to process. Although at the time, they didn’t know how far they would go….

The colours of the letters of the logo are not chosen randomly

Its colours (yellow, red, blue and green) come from Lego pieces. Why? When the founders of Google were testing their search engine in 1996, they used pieces of the popular construction toy to build part of the casing of the server that ran it. Today, it can be seen at the Stanford Computer Museum in the United States.

Another interesting thing of these Google facts: there are many ways to get to Google

In addition to typing “google.com”, you can do this through “gogle.com” or “gooogle.com”, among many other combinations. Its founders were so forward-thinking that they took into account even the typographical errors that Internet users might make when searching for their website. Yep, the founders were that clever….

Their page was too simple for the era

At a time when web pages were visually overloaded and loading speeds were very limited, it was usually necessary to wait a long time before a page was fully loaded. Google's page, by contrast, was so simple and loaded so fast that many users thought it hadn't loaded correctly. To avoid this, and to let visitors know that "that was it", copyright information was added at the bottom of the page.

The first doodle. (Another great one of these Google Facts)

It was a simple representation of the "Burning Man" figure, the symbol of a well-known festival held annually in Black Rock (USA). Its purpose was to announce that Page and Brin were attending the festival in 1998.

The company was close to being sold

At the end of the 20th century, Google came very close to being sold to some of the major Internet companies of the time. Its price? A million dollars. Today, its market value is vastly higher than that.

The “I’m feeling lucky” button is very expensive for Google

If you use Google frequently, you might know about this button located just below its search box. It takes you directly to the first result obtained without showing you the remaining results, which means that no advertising is shown to users.

It is estimated that this button is used by approximately 1% of Google users, so some people have made calculations to estimate the company's lost revenue. Considering the profits Google usually generates, the losses are estimated at hundreds of millions of dollars annually. However, this button is not expected to be removed in the short term since, according to the company itself, it is part of the company's image and philosophy.

Google has “a few” servers

Although only the company knows the exact number, it is estimated that these exceed one million, and that they represent around 2% of those that exist worldwide. Not bad for a company that started out building them with Lego parts….

Google is really important on the Internet

In 2013, Google's services (including YouTube, Gmail, etc.) went down for a few minutes. Global Internet traffic decreased by around 40% during the outage.

Millions of people would like to work at Google

Its main work centre, "Googleplex", located in Mountain View, California (USA), is probably the most well-known in the world. Inside, its employees enjoy many perks, such as a constant supply of free, organic food and all kinds of places dedicated to rest and leisure.

Its algorithms are among the most influential in the world

Its algorithms, which determine the search results and the order in which each page appears within them, have an enormous influence and can even determine the fate of some companies. It is estimated that they change about 500 times a year, and no one knows exactly what the criteria are.

There is a version of Google in Klingon

So, if any visitor from the Star Trek alien race needs to search for something, they will have no problem doing so in their own language.

As you’ve seen, there are many Google facts that you probably didn’t know about. And now, why don’t you take a moment to discover Pandora FMS?

Pandora FMS is flexible monitoring software, which is able to monitor devices, infrastructures, applications, services and business processes.

Do you want to see what Pandora FMS can do for you? Click here: https://pandorafms.com

Or you can also send us any question you may have about Pandora FMS. You can do it in a very simple way, using the contact form at the following address: https://pandorafms.com/company/contact/

The Pandora FMS team will be happy to help you!

Monitoring, System Monitoring

Observability and Monitoring, same thing?

May 24, 2018 — by Alexander La Rosa

Observability: a systems’ attribute and its possible influence in Monitoring

We have been hearing the term Observability for a while now, always associated with Monitoring, and even though its meaning changes depending on the article we are reading or the conference we are attending, it seems that Observability is here to stay.

Now, what is Observability and what does it have to do with Monitoring?

There are those who consider that Observability is nothing more than a modern word for Monitoring, but if we review the concept of Observability, we can see that this idea is not well supported.

«In control theory, observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs.»

This states that Observability is an attribute of systems and not an activity we execute. That is the basic difference between Observability and Monitoring.

The concept of Observability refers to a "system", but since we are looking at Observability in relation to Monitoring, we propose to read "system" as any element to be monitored, i.e. a server, network, service or application.

Based on the concept, we understand that any system can be more or less Observable, but the concept does not say anything about:

  1. How should we measure the outputs of the system?
  2. How, given the evaluation of the outputs, can or must we infer the state of the system?

Some authors explain Observability as an umbrella concept that includes monitoring activities plus alerts and alerts management, visualization, trace analysis for distributed systems and log analysis.

This concept is also difficult to accept, especially for those like myself who understand that it is precisely monitoring that has to include those activities in order to fulfil its main objective: translating IT metrics into business meaning and addressing the challenges imposed by emerging technologies such as container-based application development, cloud infrastructure and DevOps.

However, Observability does not seem like an attribute we can dismiss; on the contrary, it looks important enough to be included in the same group as efficiency, usability, testability, auditability, reliability, etc.

So, let's assume that Observability is a desirable attribute of our systems, and that monitoring is the activity of observing and evaluating the behavior and performance of those systems.

In this case, it is important to review which characteristics the system must have to be observed and which monitoring system will be used to observe them.

Regarding the architecture of the elements to be monitored, the concept of Observability leads us to consider, among other things:

Observability and monitoring as key elements in design

It is desirable that Observability and monitoring are not an afterthought when designing; instead, they must be considered from the beginning, thus avoiding problems during the implementation of monitoring systems.

In fact, in its SRE guide (Site Reliability Engineering), Google explains how they use a reformulated Maslow pyramid to implement distributed systems where Monitoring is included as the base of the pyramid.

Originally, Abraham Maslow proposed a pyramid organizing human needs in a hierarchy, with the most essential needs at the bottom; bearing this in mind, Google engineers took that model and adapted it to the key elements for developing and running distributed systems.

Without monitoring, you have no way to tell whether the service is even working; absent a thoughtfully designed monitoring infrastructure, you’re flying blind.
-Google SRE guide, chapter 3

A reduced cost for implementing a monitoring scheme

Highly observable systems will mean a lower cost for the implementation of a monitoring scheme.

Let's consider for a moment that we want to integrate a specific system into our platform, but we would need our own custom-developed monitoring solution, either because none of the commercial monitoring tools covers this kind of monitoring or because the system's nature makes it incompatible with them. Then perhaps the best option is to dismiss that system and choose another one more compatible with the idea of observability.

Observability as a cultural value

In 2013, Twitter published the first of two documents about how its engineering group faced the need to evaluate the performance of its services.

In this first document they report that they have a group of people called the Observability team.

In the second document (2016) they established the mission of the Observability team:

"… provide full-stack libraries and multiple services to our internal engineering teams to monitor service health, alert on issues, support root cause investigation by providing distributed systems call traces, and support diagnosis by creating a searchable index of aggregated application/system logs."

In this way, Twitter lets us know how important the Observability culture is within the company.

Nowadays, authors like Theo Schlossnagle (@postwait) and Baron Schwartz (@xaprb) have pointed out the importance of a solid Observability culture.

Well known failure possibilities

Designing and developing observable systems necessarily implies a solid knowledge of the possible failure modes of the system's key elements.

That knowledge can be the basis for a later choice of the right metrics, proper alert customization, and the definition of an appropriate process for fault recovery and performance maintenance.

Regarding the architecture of monitoring systems, we have to revisit the classic classification below and evaluate which approach is more compatible with the idea of Observability:

Blackbox monitoring

Blackbox monitoring refers to monitoring a system from the outside, based on its externally visible behavior and treating the system as a black box.

Blackbox monitoring is based on a centralized process of data collection through queries to elements to be monitored.

So, monitored elements assume a passive role, responding only about their behavior and performance when they are queried by a central collector or active element. Blackbox monitoring implies vertical scalability.

A very rudimentary example would be a central system pinging the monitored elements in order to know whether they are active.

Traditionally, blackbox monitoring focuses on measuring availability and has the reduction of downtime as its priority.
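
To make the idea more tangible, here is a minimal sketch in Python of a blackbox poller (the IP addresses are purely illustrative): a central process that only sees hosts from the outside and simply checks whether they respond, with no knowledge of their internal state.

import subprocess

# Hypothetical monitored elements; the poller knows nothing about their internals.
HOSTS = ["192.168.1.10", "192.168.1.20"]

def is_alive(host):
    # Send a single ICMP echo request with a 2-second timeout (Linux ping syntax).
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

for host in HOSTS:
    print(host, "UP" if is_alive(host) else "DOWN")

Everything flows outwards from the central poller, which is why this model tends to scale vertically: the collector has to grow as the number of monitored elements grows.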

Whitebox monitoring

In a whitebox architecture, the monitored elements take an active role, sending data about their behavior and performance to a monitoring system that is able to listen to them.

The emitters report data whenever they are able, generally as soon as the information is generated. The transmission uses a scheme and communication format appropriate for both the monitored element and the collection system. Whitebox monitoring lends itself to horizontal scalability.

This kind of architecture is focused on evaluating the behavior and the quality of the services exposed by the internal elements of the system.
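
By contrast, a minimal whitebox sketch (again in Python, with a hypothetical collector URL) would live on the monitored element itself, which pushes its own internal metrics to a listening collector; real agents, Pandora FMS agents included, use their own transport and data formats, so this is only an illustration of the pattern.

import json
import os
import time
import urllib.request

# Hypothetical collector endpoint; replace with whatever your monitoring system listens on.
COLLECTOR_URL = "http://collector.example.com/metrics"

def report():
    payload = {
        "host": os.uname().nodename,
        "timestamp": int(time.time()),
        # An internal state that the element itself knows best (Unix-only call).
        "load_avg_1m": os.getloadavg()[0],
    }
    request = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request, timeout=5)

if __name__ == "__main__":
    report()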

Observability points us more towards whitebox monitoring. The main reason is that having internal elements report on themselves is a big advantage when it comes to inferring their internal status. However, we still think that the best approach is a mixed scheme.

For our readers interested in Pandora’s architecture, we recommend visiting this page for more detail.

Technical writer with more than ten years of experience managing monitoring projects. A true lover of yoga and meditation.

Artificial Intelligence, Geek, Monitoring

Supercomputers: Several reasons to love them

May 21, 2018 — by Alberto Dominguez

Supercomputers: What are they for? What is the Top500?

Supercomputers are not well known to the general public, but they influence our lives more than you might think.

Supercomputers are those huge machines that occupy large, quiet and somewhat unsettling rooms; they seem to come from another planet.

The world’s leading supercomputers cost millions of dollars and can consume the same amount of energy as an entire village.

Most people have only seen one supercomputer in films, usually in science fiction. They seem to be part of a world of their own, which only very few people have access to, and in many cases they are surrounded by a certain mystery. They are considered a matter of state for governments around the world. But what do we use them for?

What is the use of a supercomputer?

Supercomputers are unknown to the average citizen, and many people might wonder what a supercomputer is actually for.

The truth is that supercomputers are unique machines, very different from the ones we use in our daily lives. Their processing speed is orders of magnitude higher than that of personal computers, and they are made up of thousands of powerful computers that work together to serve hundreds of highly specialized users.

Their most common uses are in scientific research. Their enormous computing capabilities are very useful for solving complex problems and performing simulations that would take human beings years to carry out, or that could never be reproduced directly.

For example, a supercomputer can be used to make calculations about aircraft aerodynamics; or to model protein folding (far-reaching research that could help in the fight against diseases such as Alzheimer’s, cystic fibrosis or various types of cancer); or to develop all kinds of drugs. It can also simulate the evolution of a star, which is of great interest for cosmological research, or nuclear explosions, which, while not a very desirable goal, actually contributes to reducing the number of nuclear tests on the ground.

In addition to this, supercomputers play an important role in climate prediction, becoming highly relevant in such important tasks as the prevention and management of natural disasters.

What is the Top 500?

Top500 is the name of a project that is responsible for developing a ranking of the 500 most powerful supercomputers in the world.

The project started in 1993 at the University of Mannheim, and its data is updated every six months.

One of the most interesting parts of the Top500 is that it is very useful to check whether or not the technology meets the expectations of Moore’s Law.

The truth is that, according to the data we have, this has been the case so far. According to Wikipedia, the evolution since 1993 shows that the performance of supercomputers has doubled roughly every 14 months, which means that the prediction Gordon Moore made public in 1965 has been met at a very good pace.

The Top500 includes both peak and aggregate data. Thus, we can follow the evolution of the computing power of the leading machines on the planet, of the sum of the 500 main ones, and of the average of all of them.

Today (as of the November 2017 list), the honour of topping the list of the fastest supercomputers in the world belongs to Sunway TaihuLight, a system developed in China that reaches a peak performance of 93.01 petaflops, a speed around one million times faster than the most powerful machine of 1993.
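
As a quick sanity check of those figures, here is a rough back-of-the-envelope calculation in Python, assuming roughly 24 years between the first Top500 list (1993) and the November 2017 list mentioned above:

# Doubling every ~14 months over ~24 years.
months = 24 * 12              # about 288 months between 1993 and 2017
doublings = months / 14       # about 20.6 doublings
factor = 2 ** doublings       # about 1.6 million
print(f"{doublings:.1f} doublings -> roughly {factor:,.0f}x more performance")

That is the same order of magnitude as the "around one million times faster" figure above.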

Why is the evolution of supercomputing so interesting?

Given the predictions of Moore's Law and the unstoppable evolution of technology, supercomputing is bound to reach milestones that our ancestors could hardly have dreamed of.

For example, the Tianhe-3 supercomputer is already under construction in China, and it is expected to reach the milestone of 1 exaflop, which is considered to be roughly the processing speed of a human brain (indeed, the human brain has an enormous capacity for calculation, although it cannot focus it on a single use; instead, it is used for all kinds of tasks, such as movement coordination or the management of body functions). Tianhe-3 is expected to be up and running in 2020.

One of the reasons why it's so interesting to follow the evolution of supercomputing is that it largely represents the evolution of IT, at least in terms of hardware. Thus, although we may not have a Tianhe-3 in our room, it helps us gauge the pace of development of computers and their constant improvement, including the devices we use in our daily lives. And it's always interesting to know what the most powerful computers on the planet are capable of.

The evolution of supercomputers makes us confident that we will have increasingly intelligent devices to help us solve the many problems that affect humanity. Although there are always those who suspect that a supercomputer could lead us to the end of the human species, right?

In this blog we hope that this will not be the fate of the future and that, instead, supercomputers will contribute to making life a little more enjoyable on this blue planet that we love so much. Something that even Pandora FMS also tries to do.

Pandora FMS is flexible monitoring software. It is capable of monitoring devices, infrastructures, applications, services and business processes.

Do you want to know what Pandora FMS can do for you? Enter here:
https://pandorafms.com

Or you can also send us any question you may have about Pandora FMS. You can easily do this by using the contact form at the following address:
https://pandorafms.com/company/contact/

The Pandora FMS team will be happy to help you!

Monitoring

Computing Monitoring. Pandora FMS + eHorus

May 18, 2018 — by Alberto Dominguez

Computer monitoring, inventory and remote control; a natural relationship

What do computer monitoring tools, inventory tools and remote control tools have to do with computer systems?

This is a question that human beings have been asking themselves since the beginning of time…

Well, that might not be exactly true, because at the dawn of time none of these tools existed, but it is still an interesting question. Depending on what you do, you may not see a connection. But if you work with this kind of tool in your professional life, you will understand that they are closely linked.

The relationship between computing monitoring, inventory and remote access is a natural one. If you work in the Information Technology sector and one of your duties is to monitor computer systems, you will know the advantages of having an inventory or a remote management system.

In this article we are not going to see each of them in detail. We will not go into what a computer monitoring system is, what an inventory is or what a remote control system is. In order to learn more about what they are and how they work, you have many articles on this blog that will reveal many of their secrets, especially in terms of monitoring.

But we are going to look at some of the relationships that exist between these three types of tools, through the possibilities offered by Pandora FMS and another tool that you may not know yet, called eHorus. Let’s do this!

Pandora FMS: monitoring and inventory

Do you know Pandora FMS? If you don’t, what are you waiting for? It is one of the most flexible monitoring tools on the market.

If you are a regular reader of this blog, you probably already know Pandora FMS pretty well. But one of the things you might not know (although you might guess it) is that, besides its multiple features, you can also use Pandora FMS to create an inventory of your computer resources.

It makes perfect sense, doesn't it? Since you are already monitoring a number of different elements, for example a company's servers, you can build an inventory with the information that the monitoring system provides. Through its monitoring, the Pandora FMS inventory allows you to obtain very useful data about your systems, like firmware versions, installed hardware, the operating system version, etc.

In addition to this, monitoring offers periodically updated information, so that you can learn about any changes that may occur in the inventory.

Pandora FMS collects the necessary data to form the inventory both remotely and locally, through agents. Do you want to know more details, or see how the inventory data is displayed? Check out the Pandora FMS Documentation.

Or you can also get more information at the following address:
https://pandorafms.com/monitoring-solutions/inventory/

Pandora FMS and eHorus: computing monitoring and remote access

If you are into system monitoring you will understand this. You often need access to the computers that are being monitored, and this can mean spending a lot of time traveling if they are in different locations.

Think about the following scenario: Pandora FMS detects that one of the devices being monitored is having a problem, so it generates an alert. The problem is that this device is not in the city where you work; it's quite a few kilometres away. And the situation is critical, so you need to take immediate action. How do you plan to do that? You don't currently have the ability to teleport. You don't even have a tiny helicopter…

However, luckily, for this type of situation we have remote access systems. And one of them, developed by Ártica Soluciones Tecnológicas (the creator of Pandora FMS), is eHorus.

eHorus can be used together with Pandora FMS to access Windows, Linux or Mac computers on demand.

eHorus has many advantages. It can be used from a tablet, mobile phone or any available computer. All you need is an Internet connection.

eHorus is a very useful solution when used together with Pandora FMS, don't you think?

But this is not the only thing eHorus can do. Do you want to learn a little bit more about it? You can find more information here: https://pandorafms.com/remote-control-software/

These are just some very basic examples. If you are an IT professional, you can certainly imagine many situations in which these tools relate to each other. The relationship between computing monitoring, inventory and remote access is very meaningful and occurs naturally in the daily work of an IT professional. And if you have this kind of tool and can use them together, like Pandora FMS, its inventory and eHorus, they will make your work a lot easier.

And now, we would like to tell you very briefly some things about Pandora FMS. Pandora FMS is a flexible monitoring system, capable of monitoring devices, infrastructures, applications, services and business processes.

Pandora FMS already has many clients around the world; some of them are top level companies and organizations. Do you want to know some of them and some of our success stories? Take a look, here: https://pandorafms.com/customers/

Or maybe you want to learn more about what Pandora FMS has to offer. Then enter here: https://pandorafms.com

Or maybe you want to get in touch directly with the Pandora FMS team to make a specific question. You can send any query that you may have about Pandora FMS in a very simple way, using the contact form that can be found at the following address: https://pandorafms.com/company/contact/

There is a great team behind Pandora FMS that will be happy to help you!

Artificial Intelligence, Geek, Monitoring

Pandora FMS vs AI Skynet. I have no mouth and I must monitor

May 17, 2018 — by Alberto Dominguez

I have no mouth and I must scream. Are we afraid of AI?

We crave it and we are afraid of it in equal parts. Artificial Intelligence is on our horizon. A closer and nearer horizon. We can almost touch it with our fingers….

Will it solve mankind’s greatest problems or will it end the history of our species? It is not easy to give an answer to this question. In general, and perhaps because the apocalyptic scenarios are much more playful than the utopian ones, science fiction has almost always chosen to portray the relationship between humans and artificial intelligences in a very gloomy way…

The examples in film and literature are countless. From timeless classics like Blade Runner to modern myths like Matrix. And of course, without ignoring an icon in the field, such as the Terminator. Have you ever seen any of the films in the saga? If this is the case, then you will probably know the name of the great enemy of Humanity in this group of films: Skynet.

What you probably do not know is its most immediate precedent, which is found in a story from the distant year of 1967, called "I have no mouth and I must scream".

I have no mouth and I must scream.

This unforgettable title is the name of one of the most terrifying stories of the great Harlan Ellison, one of the most important science fiction writers of the 20th century.

It tells the terrible story of a group of five people, four men and one woman, who happen to be the last survivors of a nuclear holocaust and are under the rule of an all-powerful entity called AM.

AM is an evil being with unlimited power. It hates the human species beyond belief, and has fun torturing its “guests”, and it artificially extends their lives for decades just to get the pleasure of continuing to torment them.

As if that were not enough, the desperate humans have no options whatsoever to get rid of AM. Its actions are so terrible that, at a certain point in the story, one of the characters begs Jesus Christ for help, and then realises that it makes no sense to ask God for help, since if there were something with a power similar to God's, it would undoubtedly be AM.

But where did this horrible and mind-blowing entity come from? If I tell you that AM is an Artificial Intelligence, and it was originally a military system, then you will connect the dots, and if I tell you that, at a certain point, it becomes aware of itself and decides to exterminate the human species, you will surely think that you have already heard this story…

Indeed, the similarities between AM and Skynet are obvious. So obvious that, according to Wikipedia, Ellison himself has repeatedly stated that Skynet is based on AM, and that the Terminator saga itself took ideas from two episodes of the series "The Outer Limits", scripted by Ellison.

In fact, and after some controversy, the beginning of the credits for the Terminator films included an express recognition of Harlan Ellison’s work. The connections seem to be clear.

Should we be afraid of Artificial Intelligence?

Entities like AM and Skynet send chills down our spines, but that does not mean we should abandon the development of Artificial Intelligence; rather, it is a path of such importance that we must walk it with every possible precaution.

Over the past few years, multiple voices have warned us of this. From Elon Musk to Bill Gates, from the Swedish philosopher Nick Bostrom to the recently deceased Stephen Hawking, many people from the world of science, technology and philosophy have expressed opinions that range from one extreme to the other; from those who believe that AI will be the end of humanity to those who think it will be the greatest achievement in history and will raise quality of life above anything ever imagined.

Who is right and who is wrong?

It is too early to give an answer to this question. It seems certain that, above all other considerations, nothing will stop human curiosity and the desire to improve, so sooner or later, technology will move forward to give us an answer… or a conclusion.

In any case, we must bear in mind the difference between an Artificial General Intelligence, which would be self-aware and capable of handling any kind of situation the real world might present, and Weak Artificial Intelligence, like some of the systems we already have today, which can be very helpful for highly specialized tasks.

In fact, many experts believe that we will never be able to develop a self-aware Artificial General Intelligence, or that it will take hundreds of years to do so, since we are not yet able to understand how the human brain works. But there are also those who think that its creation is only a matter of time, and that we will see it among us much sooner than most people expect (around 2030, to be specific).

Could Pandora FMS stop Skynet or AM?

Hopefully, we'll never have to answer a question like this! The truth is that Pandora FMS is a powerful and flexible monitoring tool, but we have not yet thought about how it could stop a self-aware and all-powerful Artificial Intelligence. We'll have to give it some thought…

Pandora FMS can monitor devices, infrastructures, applications, services or business processes. And it does it so well that many large companies around the world already have it.

And have you ever read "I have no mouth and I must scream"? What are your thoughts on it? Let us know down below in the comment section.

Do you want to know more about Pandora FMS? Click here: https://pandorafms.com

Or maybe you want to send us a query about Pandora FMS. You can easily do this by using the contact form at the following address: https://pandorafms.com/company/contact/

The Pandora FMS team will be happy to help you!

Artificial Intelligence, Monitoring

What is an algorithm? Learn about the most famous algorithms.

May 14, 2018 — by Alberto Dominguez

What is an algorithm? A simple description and some famous examples

The term "algorithm" comes from the name of the mathematician Mohammed Ibn Musa al-Khwarizmi, and it has become a buzzword thanks to the rise of artificial intelligence.

Algorithms to suggest a possible match, algorithms to invest in stock exchanges, algorithms to predict crime, algorithms to organize our searches on the Internet… Algorithms are everywhere, even if they go unnoticed. They guide our economy, our purchases, and even our way of thinking.

But what is an algorithm?

If you’re reading this article and have a technical background, you may know what an algorithm is, and you may even write algorithms frequently.

But there are millions of people who are unsure of what that term means and how it affects our lives. In fact, “what is an algorithm?” is a frequent search on major search engines.

Regardless of your background, stay with us! In this article we are going to answer this question: what is an algorithm? and we will discover some of today’s most influential algorithms.

What is an algorithm?

If we refer to mathematics, which is the field where the term originates, we can say that an algorithm is an ordered and finite set of operations that must be followed in order to solve a problem.

And what exactly does this mean? Let’s break it down into two parts.

  • It is an ordered set of operations, which means that it is a chain of precise instructions that must be followed in an orderly manner.
  • A good way to picture it is a cooking recipe, which is, after all, a simple algorithm. A recipe describes a specific, ordered procedure ("First you heat half a pot of water. Then add a pinch of salt. Then cut the pepper into pieces, removing the seeds and the nerves…"); each of these operations is what makes up the algorithm.

    Thus, the algorithm will take the form of a flowchart.

  • Its purpose is to solve a problem, which means that it has a defined objective.
  • This is the part that makes things a little more complicated. When we write an algorithm, we do it to produce a result. It's not just a matter of writing a nice set of commands that lead nowhere; it's done rationally and with a specific objective.

What happens is that reality always complicates things. If, for example, we create an algorithm designed to work in real life, the orders in the algorithm must include instructions that account for the different situations we may face.

Thus, the flowchart that forms the algorithm becomes an enormous "tree" of instructions that, depending on its complexity, may even give us surprising results. A minimal sketch of this idea follows.
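
As a purely illustrative example (the recipe and the quantities are made up), this tiny Python function shows the two ingredients just described: an ordered, finite set of steps with a defined goal, plus a branch for one of those real-life situations.

def boil_pasta(water_litres, salt_pinches):
    # Branch: handle an unexpected real-world situation.
    if water_litres < 0.5:
        return "Not enough water: refill the pot first"
    # Ordered, finite set of steps with a defined objective.
    steps = [
        "Heat the water",
        f"Add {salt_pinches} pinch(es) of salt",
        "Add the pasta and stir",
        "Drain after 10 minutes",
    ]
    return " -> ".join(steps)

print(boil_pasta(1.0, 1))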

What are algorithms used for?

Now that we have established what an algorithm is, you may wonder how algorithms are used in our daily lives.

Since this is a technological blog and deeply related to information technology, we are going to focus on this field which is also making algorithms trendy.

When a developer creates a program, he is essentially creating a set of algorithms. A computer program is a set of commands given to the machine, written in a specific language, to perform a series of specific operations in order to obtain a result.

As you can imagine, a raw computer does not understand human language. That's why, to communicate with the computer, the programmer uses programming languages.

The programming language is therefore the tool that serves as a bridge between human language and the language that the machine can understand. Thanks to it, the programmer can develop algorithms and create a series of instructions that the computer can "understand" (and, since computers have no will of their own, they have no choice but to follow them).

Learn about some algorithms that are more famous (and influential) than a rock star

While all this may sound formal, and even boring, programmers around the world have made some algorithms as famous as movie stars and more influential than any politician. Let’s meet some of them.

  • PageRank, by Google.
  • It is one of the most widely used in the world. This is the set of algorithms that Google uses to determine the importance of documents indexed by its search engine.

    In other words, when you do a Google search, this is one of the elements to decide the order in which the results will be displayed.

  • The Facebook Timeline algorithm.
  • This is another algorithm that influences our life much more than we might think.

    The set of algorithms that feed the Facebook Timeline determines the contents that will be displayed in the most visited space of the social network. Thus, based on a series of parameters (personal tastes, response to previous content, etc.), the algorithms decide which content the social network will show us and in which order it will do so.

  • High Frequency Trading Algorithms.
  • They move billions of dollars through the markets every day. These are algorithms used by many of the world's most important financial institutions, which place orders on the market based on the profit they expect to obtain under the market conditions at any given time.

    They are so relevant that these algorithms are now considered dominant in the markets and far more influential than human traders.

  • The Round Robin algorithm.
  • Okay, this algorithm is probably much less well known than the previous ones, but it is widely used in computing. Have you ever wondered how a computer sets its priorities when it has to perform several tasks at once? Imagine, for example, that you have a word processor, a spreadsheet and a web browser open at the same time. Broadly speaking, this algorithm determines how much CPU time the computer will devote to each of the processes in progress; a minimal sketch follows this list.
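
Here is that minimal, purely illustrative sketch in Python of the round-robin idea: a fixed time slice (quantum) is handed out in turns to three hypothetical processes until each one finishes. Real operating system schedulers are far more sophisticated.

from collections import deque

QUANTUM = 2  # time units the CPU grants each process per turn

# (process name, remaining CPU time needed); illustrative values
ready_queue = deque([("word processor", 5), ("spreadsheet", 3), ("web browser", 4)])

clock = 0
while ready_queue:
    name, remaining = ready_queue.popleft()
    used = min(QUANTUM, remaining)
    clock += used
    remaining -= used
    if remaining > 0:
        ready_queue.append((name, remaining))  # back to the end of the queue
        print(f"t={clock}: {name} ran for {used}, {remaining} left")
    else:
        print(f"t={clock}: {name} finished")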

What is the future of algorithms?

Rather than thinking about the future of algorithms, some people would claim that the future belongs to them.

The algorithm is, in fact, at the heart of such potentially powerful technologies as artificial intelligence. Algorithms are already the basis of automatic learning technologies, or “machine learning”, thus surprising us every day with new features. If you are particularly interested in the subject of artificial intelligence, you can consult other previous articles on this subject in our blog.

Today, algorithms are behind technologies such as virtual assistants or autonomous vehicles. But what about tomorrow…?

And what do you think? Will algorithms ever take over the Earth? You can take part in this post by leaving your opinion in the comment section at the end of this article.

And may the algorithms be good to us…

Geek, Interoperability, Monitoring

What does IoT need to thrive? Interoperability comes into play

May 11, 2018 — by Alberto Dominguez

Interoperability in IoT; a key factor for its development

The transformative potential of the Internet of Things is obvious, and many companies are aware of it.
With the development of the new 5G networks, the Internet of Things will keep growing, perhaps to the point of reaching figures as significant as the $11 trillion a year that McKinsey expects for this market in 2025.

But it won’t be easy. The Internet of Things is able to develop in so many areas and it has so many uses that its own diversity can be the main obstacle to its growth.
In an environment where countless devices of different types and technical profiles will operate (from household appliances to wearables, from autonomous vehicles to drones, among many others), manufactured by thousands of different brands (each with their own standards), developing the ability for them to communicate with each other will not only be a technical challenge, but also a matter of mutual consensus.

This is why the interoperability in IoT emerges as a major need for the development of the Internet of things.
In order to understand its relevance, let’s dig a little deeper into the concept. Interoperability is basically the ability for systems or components of systems to communicate with each other, regardless of their manufacturer or technical specifications.

For example, imagine that two IoT devices need to send each other any kind of information but are unable to do so because they “speak a different language”.
Imagine that the system that regulates the air conditioning of your home “speaks” in a language provided by its manufacturer and the one that controls the system for opening and closing the windows of your home only “understands” its own language because it has been created by a different company. They would be unable to communicate with each other and they would be unable to take action in a coordinated manner.

Or even more serious. Imagine that you are travelling in an autonomous vehicle and you need to communicate with other vehicles on the road to coordinate your movements and to be able to drive safely. What if they could not do so because of the incompatibility of brands which would make the exchange of information impossible? In this type of situation, even people’s lives could be put at risk.

This is the reason why interoperability is essential for the correct development of IoT. This is a problem that compromises the future of this technology and must be solved to allow its expansion.

And since this is a key aspect in evaluating the growth possibilities of the Internet of Things, we can ask ourselves: What is the current state of things when we talk about the interoperability in IoT?

The current state of interoperability in IoT and some attempts to improve the situation

If we look at the current state of things, we can say that the issue of interoperability can clearly be improved. The market is very fragmented, especially due to incompatibilities between brands, and a common effort is needed to reach common standards for communication.

The IoT philosophy is not exactly to create local, closed and limited environments. Exactly the opposite. The philosophy of IoT is to create a world in which millions of devices are able to communicate with each other in the best and widest possible way, without technical or commercial limitations, in order to make our lives a little better.

However, this is not an impossible task. It is by no means the first time that the leading technology developers and manufacturers have agreed to set generally accepted standards. Let’s think, for example, about the Internet, and how the homogenization of communication protocols has led to the growth of the network.

There are some initiatives to support interoperability in IoT.
One of them is, for example, IEEE P2413 – Standard for an Architectural Infrastructure for the Internet of Things, a standardisation project aimed at identifying similarities in IoT environments as diverse as intelligent buildings, intelligent transport systems or healthcare.

Another is an EU project called IoT-A, created to develop architectures that can be applied across different domains.
We also have the open source initiative called IoTivity, with over 300 members, including leading companies in the sector, which is aimed at guiding and promoting cooperation between companies and developers.

Or the so-called Industrial Internet Reference Architecture (IIRA), created in 2014 by some of the main operators in the market and focused on industrial IoT applications.

However, these are not the only initiatives. In such a diverse and fast-moving area, there have been multiple attempts at unification, and it is not yet possible to determine which standardization criteria will finally prevail.

Final conclusions and a little IoT monitoring

The future development of IoT will depend to a large extent on improved interoperability, as we have already mentioned. And in order to achieve this, values such as cooperation and flexibility will be essential.

Likewise, when monitoring IoT devices, the flexibility of the monitoring systems will be something to be considered.

That’s why it is necessary to get to know Pandora FMS. Pandora is flexible monitoring software, which is capable of monitoring devices, infrastructures, applications, services and business processes. And it can also carry out IoT monitoring.

Do you want to know what Pandora FMS can do when it comes to monitoring IoT? Well, it’s easy for you. Just send us a message with all your questions. You can easily do this using the contact form at the following address: https://pandorafms.com/company/contact/

But before you do this and if you want to know more about Pandora FMS IoT monitoring, you can check out this link: https://pandorafms.com/monitoring-solutions/monitoring-iot/

What do you think about interoperability in IoT? Let us know down below by leaving a comment in the comment section. We will read all your comments and we are sure they will be helpful. Remember to share this article on your social networks like Facebook or Twitter. Thank you very much!

Do not hesitate to contact the Pandora FMS team. They’ll be happy to help you!

Monitoring, Server Monitoring, Servers

Learn how to monitor Zimbra with this comprehensive tutorial

May 10, 2018 — by Javier García

Monitoring Zimbra: with this tutorial you’ll find it quite easy to do

1. Context

1.1. What is Zimbra?

Loyal to our style, let's get started by having a look at what Zimbra is. Zimbra is a product from Synacor that offers us a fairly complete collaboration platform, which includes email, file sharing, calendar, chat and video chat, and which powers around 500 million email inboxes. Additionally, it's worth mentioning that Zimbra Collaboration was built as an easy-to-deploy platform with a great messaging and collaboration system.

Zimbra can be deployed on-premises, in the cloud or, if you prefer, as a hybrid solution, and even as a hosted service through any of Zimbra's commercial offerings or the services provided by Synacor. Zimbra's solutions give users control over the physical location where their collaboration data is stored.

This last aspect responds to the growing interest in data location shown by government (or, more broadly, state) authorities, and also by heavily regulated industries, as is the case of medical organizations or financial companies.

A file manager is included in Zimbra Collaboration Server, which allows the user to:

  • Save attached files
  • Share files with other users
  • Upload documents

1.2. Other important features of Zimbra Collaboration:

Zimbra Collaboration Server includes Zimbra Mobile, which offers clients Microsoft Exchange ActiveSync support. The information is always available, without the need to install any clients or middleware applications. To make it clear: it is a complete communication solution that allows clients to send and receive emails, add and edit contacts in Zimbra Mobile's address book using a global address list (GAL), create appointments and meetings, and manage the task list.

Another relevant Zimbra feature is that Zimlets and its API give clients the possibility to download and integrate new functionality in order to customize the Zimbra experience and, therefore, extend its capabilities. Zimlets, in particular, include integrations with Salesforce.com and WebEx.

Let's have a look at another important characteristic: Zimbra Collaboration offers Zimbra Talk, which provides users with collaborative text, voice and video capabilities integrated into the Zimbra user interface. In this regard, we could say that all of Zimbra's functionality (except Zimbra Suite Plus and Zimbra Talk) is included in the main product, so clients don't need to keep buying additional products.

2. How to monitor Zimbra Collaboration Server?

To understand how to monitor this feature-rich and practical server, we have to consider the following:

2.1. Statistics and server’s status

To capture and display the server statistics, we rely on the Zimbra Logger package, which is useful for:

  • Keeping track of mailbox capacity.
  • Tracking messages and creating nightly reports.
  • Collecting log files.
  • Overseeing the MTA mail queue.
  • Monitoring, through an SNMP tool, selected error messages that generate SNMP traps.

It's worth mentioning, continuing with Zimbra Logger, that in the Pandora FMS "Module Library" we can find valuable information about Zimbra Collaboration, specifically concerning Zimbra Mail. Zimbra Logger provides a useful and much-needed set of tools aimed at report creation and message tracking.

Although installing the Logger package is optional, we recommend doing so. Otherwise, the server status information and statistics won't be captured, and message tracking won't be available.

2.1.1. Environments with more than one server.

Logger is enabled on only one mailbox server. That host is therefore the one responsible for checking the status of each and every server in the Zimbra deployment, and it is also in charge of displaying that information in the Zimbra administration console. The information is updated every 10 minutes.

However, in a multi-server installation, we have to adjust the syslog configuration files on each server so that the logger can display each server's statistics in its console, and the logger host must also be enabled. So, if this configuration was not set up when Zimbra Collaboration Server was installed, we recommend doing it as soon as possible.

2.1.2. The statistics of the server

Keep in mind that, to enable statistics, we have to run /opt/zimbra/bin/zmsyslogsetup as root on each server; this allows the server to display its statistics. In addition, to log the remote machines' statistics on the Logger host, syslog must be enabled.

To achieve this, edit the configuration file /etc/sysconfig/syslog and add -r to SYSLOGD_OPTIONS, like this: SYSLOGD_OPTIONS="-r -m 0". Then stop the syslog daemon with /etc/init.d/syslogd stop and start it again with /etc/init.d/syslogd start. Note that none of these steps are necessary for a single-node installation.
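Put together, and assuming a classic sysconfig-style syslog daemon (distributions using rsyslog or systemd-journald differ), the steps above look roughly like this:

# On each server, as root: let Zimbra adjust the local syslog configuration
/opt/zimbra/bin/zmsyslogsetup

# On the Logger host, allow syslogd to accept remote messages (-r)
# by editing /etc/sysconfig/syslog so that it contains:
#   SYSLOGD_OPTIONS="-r -m 0"

# Restart the syslog daemon so the new options take effect
# (the init script may be named syslog instead of syslogd on some systems)
/etc/init.d/syslogd stop
/etc/init.d/syslogd start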

2.1.3. Server’s status check-up

The "Server Status" section lists all servers and services, along with their status and, importantly, when that status was last checked. The servers include the LDAP server, the MTA and the mailbox server.

The services include LDAP, MTA, SNMP, mailbox, anti-virus, anti-spam, Logger and the spell checker. To start a server that is not already running, we can use the zmcontrol command-line utility. Services can also be started and stopped from the Zimbra administration console, under Servers, in the "Services" tab.
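As a quick sketch of the command-line route, run on the server in question (zmcontrol is Zimbra's standard control utility):

# Switch to the zimbra user
su - zimbra

# Show the status of every Zimbra service on this server
zmcontrol status

# Start the services if any of them are stopped
zmcontrol start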

monitoring zimbra

2.1.4. The server's performance statistics

Something important to bear in mind when monitoring Zimbra is that the "Server Statistics" section shows several bar graphs covering message volume, message count, anti-virus activity and spam-related information. This graphical information covers the last 48 hours as well as periods of 30, 60 and 365 days. In more detail:

  • The message count shows the number of messages received and sent, per hour and per day.
  • The message volume shows the size, in bytes, of received and sent messages, likewise per hour and per day.
  • The anti-virus and anti-spam activity shows the number of messages that Zimbra checked for viruses and spam, as well as the number of messages discarded as spam and the number considered a threat.
  • The disk graph shows storage usage and the space still available on individual servers. This information can be viewed for the last hour, day, month and year.

Important: the anti-virus and anti-spam activity graphs and the message count graphs may report different totals, for several reasons. One is that sent messages may not pass through the Amavisd filter, since the system architecture does not necessarily require them to be checked. Another is that messages are checked by Amavisd for viruses and spam once, before being delivered to all of their recipients.

A brief explanation of what Amavisd is seems worthwhile. It is an open source content filter for email: it handles the transfer of email messages in order to decode them, and it interacts with external content filters to provide protection against viruses, malware and spam. It can be thought of as an interface between mail software, such as an MTA, and one or more content filters.

Remember when we said that Zimbra's services include LDAP, MTA, SNMP, mailbox, anti-virus and so on? Amavisd can additionally be used to detect banned content or to catch syntax errors in email messages. It can also quarantine messages and later release them, or store them in mailboxes or in an SQL database. At the time of writing, the latest version of Amavisd is 2.11.0, released in April 2016.

2.1.5. Message tracking

It is possible to trace any message received or sent during the last 30 days. Each email has a header showing the route it has taken from its origin to its destination, and this information is used to trace the email when there is a problem with it. In that case, Zimbra's zmmsgtrace utility can be run to search for emails using the following options (an example invocation is sketched after this list):

  • Message ID: -i [msg_id]
  • Sender address ("From"): -s [sender_addr]
  • Recipient address ("To"): -r [rcpt_addr]
  • IP address the message was sent from: -f [ip_address]
  • Date and time: -t yyyymmdd (hhmmss)
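A hypothetical trace combining some of these options might look like the following; the addresses and the message ID are placeholders, and the command is run as the zimbra user:

su - zimbra

# Trace messages sent from one address to another
zmmsgtrace -s sender@example.com -r recipient@example.com

# Trace a single message by its message ID
zmmsgtrace -i "<message-id@example.com>"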

To finish with this subsection: the email header can be viewed from the Zimbra Collaboration web client by right-clicking on a message and selecting "Show original". If messages are displayed in the Conversation view, first open the conversation to see its messages and then select the one you want to inspect.

2.1.6. Creating daily mail reports

When the Logger package is installed, a daily mail report is automatically scheduled in crontab. The report contains the following information:

  • The total number of messages handled by the Zimbra MTA.
  • Errors from the Zimbra MTA Postfix logs.
  • The delivery delay for messages, in seconds.
  • Message size information: total bytes and average bytes per message.
  • The number of bounced deliveries.
  • The most active recipient accounts and the number of messages they received.
  • The most active sender accounts and the number of messages they sent.

The report containing the data listed above is generated every morning and sent to the administrator's email address.

2.2. Monitoring mail queues

If there are problems with mail delivery, the mail queues can be inspected from the administration console. To do so, open the mail queue monitoring section and analyze whether the issue can be resolved there; when the queues are opened, the content shown belongs to the deferred, active, incoming, corrupt and hold queues, and we can also see the number of messages, their origin and their destination. For a description of each queue type, see Zimbra's documentation on monitoring mail queues.
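If you prefer the command line to the administration console, the underlying Postfix queue can also be listed directly. This is only a sketch: the exact binary path varies between Zimbra releases (for example, /opt/zimbra/common/sbin/postqueue on newer ones):

# List deferred, active and incoming messages in the Postfix queue
/opt/zimbra/postfix/sbin/postqueue -p

# Show only the summary line (number of requests and total size, when the queue is not empty)
/opt/zimbra/postfix/sbin/postqueue -p | tail -n 1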

2.3. Monitoring mail storage

We can access mail storage information for all accounts through the administration console, under Supervision > Server Statistics > Mail Storage. This tab shows the following information for each account:

  • Assigned mail storage.
  • Used storage.
  • Percentage of the assigned storage used.

Be careful: when the assigned quota is completely used, all incoming messages will be rejected. Users will then need to free up space (by deleting emails) in order to receive mail again, or the assigned mail quota will have to be increased.

2.4. Log files

Zimbra-related processes create log files for most Zimbra Collaboration Suite activities. It is not necessary to check most of them, because the most relevant entries also appear in a few main log files, for example Zimbra's syslog, which records the activity of the Zimbra Collaboration Suite MTA, Logger, authentication and directory services.
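As a rough illustration, assuming a default installation layout, the most commonly consulted logs can be followed like this:

# MTA, Logger, authentication and directory activity routed through syslog
tail -f /var/log/zimbra.log

# Mailbox server activity (run as the zimbra user)
tail -f /opt/zimbra/log/mailbox.log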

3. Concluding

Monitoring Zimbra is a relatively easy task, as long as you follow the recommendations in this tutorial closely. For those who want to go further, we suggest taking a look at Pandora FMS for additional solutions.

Data BasesMonitoring

How to effectively monitor MarkLogic Server?

May 7, 2018 — by Rodrigo Giraldo Valencia0

marklogic-monitoring-featured.png

marklogic monitoring

MarkLogic monitoring is now possible with this great tutorial

1. Context

When we talk about MarkLogic monitoring, we first need to know that there are several types of NoSQL databases. We will not cover them all, but let's start with key/value stores: data is organized in indexed structures where each key has an associated value, hence the name. That value may itself be made up of several simple values; in programming languages these structures are known as "maps" or "symbol tables".

We do not intend to turn this into a basic programming course, but we first need some context. So let's note that key/value databases are one of the sub-types associated with NoSQL.

The most popular ones are Amazon Dynamo, Google BigTable, Azure Tables and Apache Cassandra. In addition to the key/value type, we have tabular databases, including BigTable and HBase/Hadoop, which Apache created to compete with the former (with BigTable).

In addition, we have document stores, where the information is kept in a document format, that is, a format that encloses both the information and its structure. Examples include XML, JSON and even binary formats such as PDF or Microsoft Office files. In these cases, each document is associated with a key that allows it to be indexed. Some well-known databases of this type are:

  • MongoDB, the best known, used by newspapers such as The New York Times.
  • Apache CouchDB, a document-oriented database.
  • Amazon SimpleDB.

In addition to key/value, tabular and document stores, there is another type of database: native XML. This is where MarkLogic monitoring becomes valuable.

Currently, the XML document format is supported by almost every commercial database. The term "native" is used when the system uses XML, at all times and without exception, to store and manage data. In native databases, information is stored as XML documents and handled with the XML-related query languages, such as XPath and XQuery. More advanced management is carried out through connectors that allow XML to be handled from languages such as Java. Among the native databases we have MarkLogic Server, eXist and BaseX.

2. How can I monitor MarkLogic Server?

We are going to show our readers how to monitor MarkLogic Server in the clearest and easiest way possible. Let's start by pointing out that MarkLogic Server provides a rich set of monitoring features: first, a pre-configured monitoring dashboard, and second, a Management API that allows the server to be integrated with existing monitoring applications or used to build custom monitoring applications.

2.1. First, let’s take a look at an overall view of the monitoring tool we are going to refer to in this publication.

Those who intend to use these monitoring tools should know that they are typically used for the following purposes:

  • To track the day-to-day operations of the MarkLogic Server environment.
  • To support the initial capacity planning and tuning of that environment. For more details, you can check the MarkLogic Availability, Scalability and Failover Guide.
  • To troubleshoot application performance problems, for which readers can consult the Performance and Query Tuning Guide; we will not go into every detail here, since that process is very extensive.
  • To troubleshoot application problems and other bugs, bearing in mind that the monitoring metrics and thresholds of interest will vary depending on the specifics of the hardware/software environment and on the configuration of the MarkLogic Server cluster.
    • Notwithstanding the above, MarkLogic Server is only one part of your overall environment. The health of your cluster also depends on the state of the underlying infrastructure: network bandwidth, disk I/O, memory and CPU.

      2.2. Choosing a specific tool to monitor MarkLogic

Although this tutorial focuses on the monitoring tools available in MarkLogic Server itself, we recommend readers broaden their monitoring horizons with a tool like Pandora FMS; for basic monitoring, MarkLogic's own methods may be enough. By complementing them with Pandora FMS, however, we can monitor the whole IT environment, gathering application, network and operating system metrics alongside the MarkLogic metrics themselves.

To be more specific, there are many monitoring tools on the market with very useful features, such as trending, alerting and, importantly, log analysis, that help monitor the entire environment. MarkLogic Server itself ships with the following monitoring tools:

• A monitoring dashboard for MarkLogic Server, pre-configured to monitor server-specific metrics. For more details, please refer to the "Use of the MarkLogic Server Monitoring Dashboard" documentation.
• A monitoring history dashboard, used to capture and work with historical performance data for a MarkLogic cluster. For more information, we encourage you to review the "MarkLogic Server Monitoring History" documentation.
• A RESTful Management API, which can be used to integrate MarkLogic Server with an existing monitoring application or to create our own custom monitoring applications; a minimal request sketch follows this list. For more details, we suggest checking the "Use of the MarkLogic Management API" documentation.
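As a minimal sketch of the RESTful approach, assuming the default Management API port 8002 and a user holding the management role (host name and credentials below are placeholders):

# Ask the Management API for a cluster status summary in JSON
curl --anyauth --user monitor-user:password \
  "http://marklogic-host:8002/manage/v2?view=status&format=json"

# The same request over HTTPS if SSL is enabled on the Manage App Server
curl -k --anyauth --user monitor-user:password \
  "https://marklogic-host:8002/manage/v2?view=status&format=json"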

marklogic monitoring

      2.3. The architecture of monitoring. A short but deep analysis

All the monitoring tools use the RESTful Management API to communicate with MarkLogic Server. (For a short but accurate explanation of REST APIs, see the Pandora FMS blog, specifically the section "Mongo Query Language".) In practice, the monitoring tool sends HTTP requests to a monitor host in the MarkLogic cluster.

The monitor host gathers the requested information from the cluster and returns it to the monitoring tool as an HTTP response. For more information on this topic, see the "Use of the Management API" documentation.

      3. Monitoring and security tools

To access the monitoring features described in this tutorial, you must set up a user with the management ("user administrator") role, and monitoring tools must authenticate as a user with this role. That role carries the http://marklogic.com/xdmp/privileges/manage execute privilege, which also grants access to the Management API.

It also gives access to the Manage App Server and to the user interface used for administrator and dashboard configuration. The management role additionally provides read-only access to configuration information and cluster status, with the exception of security settings.

Continuing with the process: if SSL is enabled on the Manage App Server, its URLs must start with HTTPS instead of HTTP. We also need the MarkLogic certificate in our browser, as described in the Security Guide, specifically in the section on accessing an SSL-enabled server from a WebDAV browser or client.

3.1. MarkLogic cluster (HTTP requests/responses)

      This cluster includes:

      • Monitoring tool
      • User
      • Administration API
      • Applications
      • Operating System
      • Host monitor
      • Network

      4. What are the guidelines for setting up our own monitoring tools?

The MarkLogic monitoring tools allow us to set thresholds on specific metrics so that an alert is sent when a metric exceeds a preset value. It is necessary to establish a performance baseline here, because many metrics only become useful for alerting and troubleshooting once we understand our normal performance patterns. For example, when monitoring an application server for slow queries, an application that naturally generates many long-running queries will require a very different threshold from an HTTP application server whose queries are usually in the 100 ms range.

Commercial monitoring tools can store data to support this kind of trend analysis. Developing an initial baseline, and adjusting it if the profile of our application changes, will produce better results when designing a monitoring strategy. It is also necessary to balance completeness against performance: collecting and storing monitoring metrics has a performance cost, so the completeness of the desired performance metrics must be weighed against that cost.

The cost of collecting monitoring metrics therefore varies: the more resources you monitor, the higher the cost. For example, if you have many hosts, obtaining server status becomes more expensive; likewise, if you have many forests, obtaining database status becomes more expensive. In some situations you may monitor certain metrics only temporarily and stop monitoring them once the problem has been fixed.

5. A balancing technique that we recommend in this tutorial

This technique consists of measuring system performance under a heavy load, then enabling the monitoring tool and calculating the overhead it adds. We can then reduce that overhead by lowering the collection frequency, by reducing the number of metrics collected, or by writing a Management API plug-in that creates a customized view of only the metrics we are interested in. Each underlying Management API response includes an elapsed-time value to help calculate the relative cost of each request.

Those who wish to learn more about this topic can see "Use of the MarkLogic Management API", and those who want details on writing a plug-in for the Management API can check out "Extending the Management API with Plug-ins".

      6. Monitoring metrics of interest to the MarkLogic server

      Environments and workloads have their variations. Each environment has a unique set of requirements that are based on variables including:

      • The configuration of the cluster
      • The hardware
      • The operating system
      • Query patterns and updates
      • Feature sets
      • Other system components
So, if replication is not configured in your environment, you can remove the templates or policies that monitor that feature. With this, we think we have answered the question "Does MarkLogic Server have adequate resources for monitoring purposes?"; the answer is "Yes", as long as basic monitoring is enough.

To monitor MarkLogic properly, bear in mind that it is a server designed to make very full use of system resources. For more complete monitoring, however, we would need custom applications or monitoring software such as Pandora FMS.

Rodrigo Giraldo, freelance technical writer. A lawyer and astrobiology student, he is passionate about computing, reading and scientific research.

DevelopmentFeaturesMonitoringPandora FMSPluginsRelease

    What’s New in Pandora FMS 7.0 NG 722

    May 4, 2018 — by Irene Carrasco1

    whatsnew-722-featured.png

    whatsnew 722

This latest update package for Pandora FMS 7.0 NG contains improvements and visual changes, and resolves several problems. A list of the most important changes can be found below.

    New features and improvements

• Implementation of SNMP v3 in the Satellite Server and the Enterprise Network Server. This brings the polling speed Pandora FMS already had for SNMP v1 and v2c to SNMP v3 devices as well.
    • Improvements in GIS views, extending the information shown by each node. In the next update, we’ll have more GIS feature enhancements!

    whatsnew 722

• Bulk operations when editing the modules of a policy. This lets us modify common parameters across dozens of components of an existing policy to implement large-scale policy changes. Many users had asked us for this functionality.

    whatsnew 722

• New Events extension for Chrome. The Chrome extension that lets us view Pandora FMS events in real time had not been updated for more than a year; the new version is already available in the official extension store.

whatsnew 722

    • Visual Consoles. The status inheritance system has been expanded to take weights into account.

    whatsnew 722

• VMware plugin. The new version can monitor standard networks, grouping virtual switches and indicating traffic and the number of connected virtual machines, grouped by VLAN. Extra user-configurable counters were added per VM: the vcpuAllocation, memoryMBAllocation and guestOS modules. It also adds the option to encrypt datacenter passwords, adds combo boxes to the configuration form, and fixes some errors in the graphs.

    whatsnew 722

• SLA calculation and state prioritization. Colors and criteria have been unified across reports that previously displayed them in different ways. Planned downtime is now displayed in purple.

    whatsnew 722

    • Improved cluster monitoring, adding alerts from the wizard itself, improving API/CLI calls and simplifying the interface.

    whatsnew 722

    Troubleshooting

• When deleting a parent group that contains child groups, the children are now attached to the root group, to avoid orphaned "invisible" groups.
• Fixed a problem with advanced LDAP authentication and user auto-creation; the "=" character is now allowed as part of the data.
• Fixed a bug that affected the processing of inventory XML files in some very specific circumstances.
• The performance of some Dashboards in Firefox browsers has been improved.
• Solved some problems with the interface and the way relationships between Network Map nodes are drawn.

    whatsnew 722

• Fixed some issues with the Event View pagination.
• Fixed SQL errors that occurred when users without an assigned profile logged in.
• Fixed a small problem with the group filter in the Monitor View.
• Fixed a problem in the Metaconsole Services interface.
• Improved the status updating of the elements that make up Visual Consoles in the Metaconsole.
• Fixed ACL bugs in Graph Containers.
• Fixed a problem with exporting events to CSV.

    Download Pandora FMS

The latest version of Pandora FMS can be downloaded from the downloads section of our website:
    https://pandorafms.org/en/features/free-download-monitoring-software/

    GeekInternetMonitoringNetwork

    What is Internet of things? Learn all about its monitoring

    May 3, 2018 — by Alberto Dominguez3

    what-is-internet-of-things-featured.png

    what is internet of things

    What is Internet of things? Discover how it’ll change our world

Hey! Do you know what the Internet of Things is? The Internet of Things, or IoT, is the network of physical devices, vehicles, home appliances and other items embedded with electronics, software, sensors, actuators and connectivity, which enables these objects to connect and exchange data.

The Internet of Things is a buzzword in IT and new technologies; we all know that. We are pretty sure you have heard about it. This term is everywhere. But what is the Internet of Things? Let's answer that question now.

    The Internet of Things will be a massive part of our lives and we won’t even notice it.

So what is the Internet of Things? The concept first appeared in 1999 at MIT's Auto-ID Center, coined by one of its co-founders, Kevin Ashton, who expanded and refined it over the following years.

But what is the Internet of Things? Do you actually know? To better understand the Internet of Things and its scope, let's turn to Wikipedia. According to it:
    The Internet of things would encode 50 to 100 trillion objects, and be able to follow the movement of those objects. Human beings in surveyed urban environments are each surrounded by 1000 to 5000 trackable objects.

In 2015 there were already 83 million smart devices in people's homes. This number is expected to grow to 193 million devices by 2020 and will surely keep growing in the near future.

    In order to understand what this would mean, we will make the following observation: throughout history, most of the generated information has been in the hands of human beings. During the last few years, an important fraction of that information generated has fallen into the hands of computers. The expansion of the Internet of Things would cause a huge increase in both the volume of information generated and the amount of information shared, at levels never before seen in history.

    How can we see the real consequences of all this? Let’s have a look at some examples:

• Domestic use, especially in the field of home automation, is one of the first uses that comes to mind when we talk about the IoT. Some IoT applications are already quite popular, while many others are still in development. You can already, for example, turn on the heating from a mobile phone before you get home. But not only that: the IoT will allow the automation of new tasks. For example, your fridge will be able to work out your needs according to your tastes and order online, so that you receive what you need as soon as it detects that your stocks are about to run out.
    • Public services will reach an incredible dimension thanks to the IoT. The immense amount of data that is generated (coming, for example, from sensors distributed by the city) will improve the safety, transport or even health of citizens. For example, these will be used to measure the level of environmental pollution in a given area, or to detect accidents caused by a flood.
    • At a personal level, and through the use of wearables, the devices that we will wear will acquire all kinds of functions. For example, the smart watch that you will carry on your wrist will be able to guide you inside a department store and lead you directly to the product you are looking for.
    • At a business level, the IoT will be quite useful. From the field of marketing to industrial production, the number of IoT uses for the company will be endless, and this is something that we are currently witnessing.

    And these are just a few examples. As the Internet of Things develops, new applications will be created, based on the idea behind the IoT: the exchange of information without human intervention (or with minimal intervention). In addition to this, technological improvements such as 5G and the next generation of phones will allow the operation of the IoT to be faster and more effective over time.

    But, as you can imagine, an infrastructure that involves billions of devices running simultaneously will always be exposed to failures. And this is when good monitoring is needed. With good IoT monitoring it will be possible to control specific aspects, such as the status of the devices, their firmware version or their battery levels. In addition to this, one of the key factors when monitoring IoT will be flexibility, considering the diversity of devices and environments in which the Internet of Things will be developed.

    And now, do you know what Pandora FMS is? Pandora FMS is a flexible monitoring software, which is capable of monitoring devices, infrastructures, applications, services and business processes. And, it is also capable of monitoring IoT devices.

    The best way to learn all about the IoT monitoring of Pandora FMS is to ask our team who created it, don’t you think? For example, do you want to know what Pandora FMS can monitor? You can do that using the contact form that can be found at the following address: https://pandorafms.com/company/contact/

    Our Pandora FMS team will be happy to help you!

So then, what is the Internet of Things? We hope we have answered that question with this article. Do you want to find out more? Then check out any of our other articles about monitoring, all published on our blog, and don't forget to leave a comment in the comment section: what are your thoughts on the Internet of Things? Do you think it will be useful? Do you think it will be worth it? And if you have read any of our other articles, which one was your favourite? We want to know!

    Let us know by leaving a comment in the comment section down below, we look forward to hearing from you! Thank you very much.

MonitoringSystem MonitoringVulnerabilities

    Watch out for ghosts in your computer! Learn all about Meltdown and Spectre, two vulnerabilities for CPU’s!

    April 30, 2018 — by Jimmy Olano0

    Meltdown-and-Spectre-featured.png

    Meltdown and Spectre

    The importance of monitoring your CPU: Meltdown and Spectre

    Introduction

At the end of the 20th century we witnessed something wonderful. Since 1977, Dr. Dieter Seitzer and his team at the University of Erlangen-Nuremberg had been thinking about digitizing sound, but there was a small problem: there was no hardware that could encode it apart from the computers of the time, which were out of reach for a large part of the population. Ten years later they were joined by Karlheinz Brandenburg, from the Fraunhofer-Gesellschaft, who, along with a group of talented developers, programmed the LC-ATC ("Low Complexity Adaptive Transform Coding") algorithm; but due to the tremendous processing time required, it could only be tested on very limited audio material.

But what does this have to do with monitoring CPUs, Meltdown and Spectre? Calm down, you will see. Computers kept growing, not only in their Central Processing Units (CPUs) but in all their components. In 1981 the Compact Disc (CD) was invented and there was innovation everywhere. In 1989 the MP3 format was patented in Germany, and in 1996 the Internet became widespread. The 1980s were a wonderful decade for computer science (you may have witnessed it), but the 90s were going to blow our minds. By 1997 processors were able to play the MP3 music we had dreamed about for so long; soon the software was lagging behind the new processors, and the manufacturers noticed a detail: systems were wasting clock cycles (computing power per second) even while we listened to Britney Spears or U2 from our 700-megabyte hard drives. In 1999 portable MP3 players appeared, and power and miniaturization began to be part of our lives.

Another great addition was integrating the mathematical coprocessor into the CPU. These chips, which used to sit in a separate socket on the motherboard, are responsible for calculating (they do not make decisions or comparisons) faster than the CPU and then handing the results back to it. This meant faster, more efficient computers, although we should point out that in 1994 Intel had problems with these coprocessors, which returned errors from the ten-thousandths range onwards. The media gave the case huge publicity, and Intel offered to replace the defective CPUs free of charge. This serves as a precedent for the issue that concerns us here, both for specialized users and for everyone else.

    Meltdown and Spectre

    The arrival of the 21st century

After the so-called computer chaos of the year 2000, two companies dominated CPU production: AMD® and Intel®.

    Meltdown and Spectre

Both manufacturers had experience in making processors, so their chips were faster than the existing software of the time could use. To take advantage of the wasted CPU cycles, they sold them with the following integrated features:

• Speculative execution: when the processor reaches a conditional, it quickly calculates both outcomes before the conditional is evaluated, then keeps the correct result and discards the incorrect one.
• Privilege levels: this has nothing to do with processor speed, but with reserving memory zones to be used only and exclusively by the kernels of current operating systems. This is all about security: user applications are closely supervised by the operating system.
• Out-of-order execution: although it was a milestone at the time, it is less relevant in this case.
• CPU cache: with the rise of fast but expensive memory chips, CPU manufacturers changed the architecture so that small amounts of this memory, in layers between the CPU and RAM, were (and still are) used to store frequently used data and code. This means greater speed but less safety.
• Parallelization of processes: having two or more processors and/or cores in a computer enhances all of the above at only a small extra cost. After the appearance of the Pentium Pro in 1995, all these conditions became a reality.

    CPU monitoring

Pandora FMS emerged in 2004, and one of its first concerns was CPU workload, among other important metrics. CPU temperature has always been an important metric as well, especially these days when any website may be "infected" with JavaScript that mines cryptocurrency without our consent, pushing CPUs to almost 100% usage. This causes overheating that can even shut computers down through the alarms configured in their motherboards (Pandora FMS will have warned us in advance, before this happens). As we can see, there is a lot to CPU monitoring.

    The second decade of the 21st century

In 2016 another problem was discovered in Intel processors, this time while searching for Mersenne prime numbers (the study of prime numbers is important for cryptography and our privacy). It could be reproduced with Prime95, a legendary piece of software designed to "stress" servers: computers would freeze upon reaching the exponent 14,942,209. Fortunately, Intel was able to distribute a BIOS update. But nothing would prepare us for January 2018, when Meltdown and Spectre were disclosed one after the other.

    January 4, 2018

This was the day Intel officially admitted the Spectre and Meltdown flaws found by Google Project Zero, and the media began to report on them. On this blog we published an article that serves as a guide to this kind of information technology crisis, for those responsible for handling it in a company.

How does Meltdown work?

Meltdown, as the name indicates, aims to melt down the barriers that protect memory from unauthorized access. In a very simplified way, its mechanism of action goes like this:

• The attacking program requests access to an area of memory that is forbidden to it (1st query).
• It immediately makes a second request conditioned on the value it expects to obtain from the first query.
• Here the speculative execution we described at the beginning of the article comes into play: the CPU resolves both queries immediately.
• The processor correctly detects that neither query should proceed (the 1st reads a value outside the area allowed for the program, and the 2nd operates on the value read by the 1st) and denies both results.
• Although both results are denied to the program, the two queries leave their footprint in the processor cache, since they were calculated before privileges were checked. As we said, modern CPUs are so fast that they use all the available time to gain speed. AMD processors are not affected by Meltdown, since they check privileges before speculative execution; Itanium® processors (all of them) and Atom® processors (prior to 2013) are the only Intel® ones that are not affected.
• Now the attacking program makes a third request with the same scheme as the first, but this time against a valid memory area where it has read privileges: if the answer comes back immediately, it means the first two queries were executed and their results are in the cache. In other words, the third query is similar to the first one, and the CPU "already knows" the answer because it has it in cache memory, "resolved" beforehand.
• By repeating this again and again, the attacking program methodically and systematically builds a map of RAM against the cache: knowing which areas of RAM belong to each running program, it obtains a correlation with the cache.
• Once the map is assembled, the data collection stage begins: the attacker can focus on a particular program and its "dumpster".
• The term "dumpster" illustrates the operations that remain in the cache: when the data is delivered to the program that legitimately requested it, the processor simply leaves it there.

    Meltdown and Spectre

How does Spectre work?

Spectre works in a very similar way, but with two variants, one harder to execute than the other. It takes advantage of the branch predictor, a special case of speculative execution. Because of its nature it also affects AMD processors, since it relies on breaking the isolation between applications; applications that follow good programming practices are, ironically, even more vulnerable to Spectre.

    Meltdown and Spectre

    Meltdown and Spectre: its impact on the industry

Imagine a data centre with a thousand computers in which these features are restricted: in practice we would be left with the equivalent of 700 computers for the same workload, and to return to the previous level of productivity we would need to buy 300 more. Although it will take years for a solution to arrive in the CPU architecture itself, there are many ideas for getting out of this quagmire.

    How can we be protected?

We can protect ourselves from Meltdown by updating the kernel of our operating system. In the case of Ubuntu it can be updated without rebooting, but for other operating systems a reboot is inevitable. For a normal user this is not a problem, but for large companies, restarting their servers and then monitoring the workload after "patching" can mean a great deal of time and money.
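As a quick, hedged check on a reasonably recent Linux kernel (older kernels do not expose this interface), you can see what the running kernel reports about these vulnerabilities:

# Kernel version currently running
uname -r

# Mitigation status exposed by patched kernels
grep . /sys/devices/system/cpu/vulnerabilities/*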

Unfortunately, in the case of Spectre it will take time until we have good protection; as soon as a solution to this problem appears, we will write an article about it. Thank you very much for your attention, and don't forget to leave your questions and comments!

Technical writer. He began studying engineering in 1987 and programming with proprietary software. He now runs a blog where he spreads knowledge about free software.

    artificial intelligenceGeekMonitoring

    What is a smart building?

    April 27, 2018 — by Alberto Dominguez0

    smart-building-featured.png

    smart building

    Smart building; you will see this in the not-too-distant future

    Have you ever thought about the buildings of the future? Right now, I am pretty sure that you are thinking about those sci-fi films with huge buildings with strange shapes in futuristic cities (surrounded by flying vehicles, of course).

    We are going to have to wait some time in order to witness those flying cars, but the buildings of the future are beginning to take shape and there is even a term for it, a smart building.

    According to the Intelligent Building Institute in Washington (United States), an intelligent building “provides a productive and cost-effective environment through optimization of four basic elements: structure, systems, services and management, and the interrelationship between them.”

A smart building is a new kind of building that uses technology to improve quality of life and energy efficiency. Would you like to live in a smart building? Well, you will be living in one very soon, and I am pretty sure you are going to enjoy it.

    How will a smart building work?

    Entering a smart building will be very different. Let’s take a look.

    In the parking area, there will be sensors that will communicate with our vehicle to indicate those free spaces available for parking; these sensors will be distributed around the parking area. If our vehicle is a self-driving car then the parking process will be completely automatic.

As soon as you get out of your car, you will see that the gardens surrounding the building are in perfect condition. Robots will take care of the garden, watering it with rainwater gathered by collection systems; that water will also be used for toilets, which means savings.

    From the outside, you will be able to see that the shape of the building will be different from the traditional buildings. You’ll be amazed since the building will be completely clean, due to the fact that it will be covered by nanomaterials that will decompose and repel pollution.

The walls made of brick and concrete will no longer exist; instead there will be large windows, and on the rooftop and facade you will see solar panels and small wind turbines supplying the whole building with clean energy. Some buildings may even modify their facades to adapt to the sunlight and the environment, and some might even have vertical gardens.

    The rooftop will be able to receive drones that will be in charge of mail delivery, parcel service, providing all kinds of supplies required by the building, and then, these supplies will be distributed to the recipients in an automated way.

    At the reception area, you will be surprised by how pleasant the atmosphere is. Climatic conditions will be monitored (light, humidity, temperature, etc.) so that the air-conditioning systems of the building will be able to provide optimal conditions, regardless of the time of the year. In addition to this, plants and filtering systems that are located in the hall will be able to clean the air. A nice and pleasant light will light up the reception thanks to the windows, which will change their colour depending on the lighting needs, which will mean significant savings in electricity.

    The reception of the building will be equipped with interactive screens that will offer all kinds of useful information, such as schedules and scheduled meetings but it will also be able to tell if a specific person is inside the building and where this person is (with the appropriate security measures).

When taking the lift, we will be carried to our desired floor thanks to a facial recognition system, which will be able to identify us and also memorize our habits. According to security needs, this same system will limit access to certain areas.

    In the corridors of the building, the sensors will be able to adapt themselves to the light and temperature according to factors such as the presence or absence of human beings or the natural luminosity according to the time of day.

    All these sensors will provide greater security within the building. In the case of fire, in addition to activating a fire-fighting system, the smart building will also notify the fire service quickly. In the case of unauthorized access, then it will be automatically connected to the Police or private security services.

    When it comes to an office building, the building automation will help you work more efficiently. You will be able to access scheduled meetings from your mobile phone, or to book rooms and also you will be able to check if the lights of your office have been switched off, and you will also be able to adjust the temperature of the room before arriving.

    Some buildings will even produce their own food, thanks to vertical agriculture powered by LED lights, which will provide fresh and quality food without the need for farming areas.

    These are just some ideas, and although it might be hard to believe, some of these are already being implemented in some exceptional buildings. Over time, these advances will be very common and they will help us make our lives more pleasant, in a more ecological environment.

    So now you know what a smart building is. These buildings will be cleaner, safer, and more efficient. These buildings will be focused on the human being and they will take care of the environment.

    By the way, in order to control the proper functioning of these buildings, monitoring will be needed.

    Pandora FMS is a great and flexible monitoring software for your company or organization. If you want to know how Pandora FMS can help you, click here: https://pandorafms.com

    Or if you want to ask us directly what Pandora FMS can offer you, you can get in touch with us here: https://pandorafms.com/company/contact/

    We will be happy to assist you.

    DatabaseMonitoring

Learn all about Oracle GoldenGate monitoring

    April 26, 2018 — by Rodrigo Giraldo Valencia2

    oracle-goldengate-monitoring-featured.png

    oracle goldengate monitoring

    Oracle GoldenGate Monitoring: Learn how to monitor this

    What is Oracle GoldenGate?

Oracle GoldenGate allows data to be exchanged and manipulated across a company, enabling decisions to be made in real time. It works across multiple platforms, moving committed transactions while preserving transactional integrity and adding minimal overhead to the infrastructure. Its modular architecture provides flexibility, and it can extract and replicate selected data records, DDL ("Data Definition Language") changes and transactional changes.

Regarding DDL support, capture and delivery configurations and topologies vary according to the database type. From Pandora FMS we recommend that readers consult the Oracle GoldenGate installation and configuration documentation for their particular database to get detailed information about supported configurations and features. It is important to know all this before we analyse GoldenGate monitoring.
GoldenGate supports several use cases, such as:

    • Initial load and database migration
    • High availability and business continuity
    • Data storage and decision support
    • Data integration

For complete information about the processing methodology, features, configuration requirements and supported topologies, check the Oracle GoldenGate documentation for your particular database.
Along with this architecture, and before getting into GoldenGate monitoring, note that it can be configured for several purposes:
• For the extraction and replication of DML (data manipulation language) transactional operations and of changes to the data definition language, or DDL (for compatible databases), in order to keep source and target data consistent.

    • For a static extraction of data records from a database and the loading of those records into a different database.
    • For the extraction of a database and the replication to a file outside of that database.

    Oracle GoldenGate components

GoldenGate's components are the following: Extract, Data Pump, Checkpoints, Extract files or Trails, Replicat, Collector and Manager.

The Extract process is Oracle GoldenGate's capture mechanism. Extract runs on the source system, on a downstream database or, if preferred, on both.
We can configure Extract in different ways:

  • Change synchronization: Extract captures DML and DDL operations once the initial synchronization has been carried out.
  • Initial loads: for initial data loads, Extract captures a current, static set of data straight from the source objects.
    • Method 1. Extract can capture from a data source using the source tables (when the run is an initial load) or from the database recovery logs or transaction logs (such as the redo logs of Oracle Database or the audit trails of SQL/MX). The actual method of capturing records varies according to the type of database involved.
      For example, Oracle GoldenGate for Oracle offers an integrated capture mode, in which Extract interacts directly with a database log mining server that, in turn, extracts the stream of Oracle transactions. From Pandora FMS, we recommend that readers who want more detailed information about integrated capture click here.

Method 2. Another method relies on a third-party capture module, which provides a communication layer that passes data and metadata from an external API to the Extract API. The database vendor provides the components that extract the data operations and pass them to Extract.

      Oracle GoldenGate Monitoring

      Data Pumps

What is a Data Pump? It is a secondary Extract group within the Oracle GoldenGate source configuration. When a Data Pump is not used, Extract must send the captured data operations to a remote trail on the target. In a typical Data Pump configuration, however, the primary Extract group writes to a trail on the source system; the Data Pump reads this trail and sends the data operations over the network to a remote trail on the target. The Pump adds storage flexibility and also serves to isolate the primary extraction process from TCP/IP activity.

In general terms, a Data Pump can perform data filtering, conversion and mapping, but it can also be configured in pass-through mode, in which data is transferred without any manipulation. Pass-through mode increases the Pump's performance, since all the functionality that looks up object definitions is skipped.
For readers who want to expand on the information we have provided about Oracle GoldenGate, from Pandora FMS we recommend clicking on this link.

      But how do we carry out Oracle GoldenGate Monitoring?

      By using the information commands in GGSCI:

To view and analyse processing information, use GGSCI. The following commands show process information (an example session is sketched after this list):

• INFO {EXTRACT | REPLICAT} group [DETAIL] shows us: run status, checkpoints, approximate lag and environment information.
• INFO MANAGER shows us: run status and port number.
• INFO ALL shows us the INFO output for all Oracle GoldenGate processes on the system.
• STATS {EXTRACT | REPLICAT} group shows us statistics on processing volume and the number of operations carried out.
• STATUS {EXTRACT | REPLICAT} group shows us the run status, that is, starting, running, stopped or abended.
• STATUS MANAGER shows us the run status.
• LAG {EXTRACT | REPLICAT} group shows us the latency between the last record processed and the timestamp in the data source.
• INFO {EXTTRAIL | RMTTRAIL} trail shows us the name of the associated process, the position of the last processed data and the maximum file size.
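By way of illustration, a short GGSCI session using the commands above might look like this (EXT1 and REP1 are placeholder group names, and the prompt is abbreviated):

./ggsci

GGSCI> INFO EXTRACT EXT1, DETAIL
GGSCI> STATS REPLICAT REP1
GGSCI> LAG EXTRACT EXT1
GGSCI> STATUS MANAGER
GGSCI> INFO ALL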

Important note: there are many other commands for Oracle GoldenGate monitoring. We recommend that readers check this link.

      Oracle GoldenGate monitoring through the analysis of an extract recovery:

Bounded Recovery is exclusive to Oracle. If Extract shuts down abnormally while a long-running transaction is open, it may appear to take a long time to recover when it starts again.
To recover its processing state, Extract must search back through the online (and archived) logs to find the first log record for that long-running transaction. The further back in time the transaction started, the longer the recovery takes, and Extract may appear to be frozen. To avoid confusion and confirm that Extract is recovering properly, we can use the SEND EXTRACT command with the STATUS option.
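For example, with the same placeholder group name used earlier:

GGSCI> SEND EXTRACT EXT1, STATUS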

One of the following status annotations will appear, and we can follow the process as Extract changes its log read position during the recovery:

• In recovery [1]: indicates that Extract is recovering to its checkpoint in the transaction log.
• In recovery [2]: indicates that Extract is recovering from its checkpoint to the end of the trail.
• Recovery complete: the recovery has finished and normal processing will resume.

      Oracle GoldenGate Monitoring

Lag monitoring:

Lag statistics show us how well the Oracle GoldenGate processes are keeping up with the amount of data generated by the business applications. With this information we can diagnose suspected problems and tune the performance of those processes in order to minimize the latency between the source and target databases.

For Extract, lag is the difference, in seconds, between the time a record was processed by Extract (according to the system clock) and the timestamp of that record in the data source. For Replicat, lag is the difference, also in seconds, between the time Replicat processed the last record (according to the system clock) and the timestamp of the record in the trail. To view lag statistics, use the LAG or SEND command in GGSCI.

Keep in mind that the INFO command also returns lag statistics; in that case, however, the statistics are taken from the last record that was checkpointed, not from the current record being processed, so this command is less accurate than LAG or SEND.

For Oracle GoldenGate monitoring, you can also control how lag is reported:

To specify the interval at which Manager checks Extract and Replicat lag, use the LAGREPORTMINUTES or LAGREPORTHOURS parameter. To set a critical lag threshold that forces a warning message into the error log when it is reached, use the LAGCRITICALSECONDS, LAGCRITICALMINUTES or LAGCRITICALHOURS parameter. Note that these parameters affect both Extract and Replicat processes.

To set an informational lag threshold, use the LAGINFOSECONDS, LAGINFOMINUTES or LAGINFOHOURS parameter. If the lag exceeds the specified value, Oracle GoldenGate reports the lag information in the error log; if it exceeds the value specified with the LAGCRITICAL parameter, Manager reports the lag as critical, otherwise it reports it as an informational message. A value of zero (0) forces a message at the frequency specified with the LAGREPORTMINUTES or LAGREPORTHOURS parameters.
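As a hedged sketch, these parameters might be combined in the Manager parameter file (dirprm/mgr.prm); the values below are purely illustrative:

-- Check Extract and Replicat lag every 5 minutes
LAGREPORTMINUTES 5

-- Report lag over 30 seconds as an informational message in the error log
LAGINFOSECONDS 30

-- Report lag over 60 seconds as critical
LAGCRITICALSECONDS 60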

We have covered the most important tools for GoldenGate monitoring, but if you want to look at other methods, such as monitoring processing volume, using the error log, the process report, discard files and their maintenance, the system logs, or reconciling time differences, then click here to go to the official Oracle page. In addition to this, on the Pandora FMS site you will find valuable information on server monitoring.

Rodrigo Giraldo, freelance technical writer. A lawyer and astrobiology student, he is passionate about computing, reading and scientific research.

Monitoring, Server Monitoring, Servers, Virtual machines

    Do you use virtual machines to increase security?

April 23, 2018 — by Rodrigo Giraldo Valencia

    virtual-machines-featured.png

    virtual machines

    Virtual machines: Do you want greater security for your computer?

Do you know what virtual machines are? Are you aware of how important they are, and of the benefits that can be obtained from them? Also known as “virtualization software”, these machines are essentially software with another operating system inside, so your computer and other devices treat them as a real computer. Essentially, these machines are a bit like a computer inside your computer.

There are currently two types of virtualization software, which differ in their functionality: process virtual machines and system virtual machines (the latter are the ones usually used in the computing world). System virtual machines are those that emulate a complete computer. From these concepts we can say that these machines (which are software) have their own hard drive, their own graphics card and all the conventional hardware components of “physical” computers, but virtually.

    All the components of these machines are virtual, but this does not mean that they don’t exist

Let’s have a look at that with a clear example: any of these machines can have reserved resources of, say, 20 GB of hard disk and 2 GB of RAM, which have to come from somewhere, right? Well, they come from the “physical” PC on which we have installed the virtual machine or virtualization software, also called the “host”, which runs it through a “hypervisor”.

Some of you may be asking yourselves a rather interesting question: “Is it possible to install a virtual machine inside another virtual machine?” The answer is “Yes, it is possible”, and because of this a user can have many computers inside their “physical” PC. Another important point for the security of our PC (with the growing threat of malware) is that, by default, virtual machines cannot access data on the host computer. Although the conventional computer and the virtualization software work within a single physical device, they are isolated.

This does not necessarily mean that the most important and traditional virtualization software on the market, such as VMware and VirtualBox, lacks the tools to access the physical PC. They are able to do it, but it depends on the user. A virtual machine is a virtual computer system: a container of isolated software, with an operating system and an application inside it. In addition to this, each virtual machine is autonomous and absolutely independent.

    virtual machines

We need to mention that a thin layer of software, called a “hypervisor”, separates the virtual machines from the “physical” host PC, while dynamically allocating computing resources to each virtual machine according to the needs of the user.

    Among the most important features of virtual machines, we know that:

    • Multiple operating systems can be executed within a physical machine.
    • System resources are divided between different virtualization software.
    • They provide fault isolation and security at the hardware level.
    • They preserve performance through controls, with advanced resources.
    • The entire state of a virtual machine is saved in files.
    • It is possible to move or copy the virtualization software in a simple way, by copying files.
    • It is possible to migrate any virtual machine to any physical server.

    Operation of the virtual machine

Imagine mapping the virtual devices to the “real” devices present in the physical machine. For example, a virtual machine can emulate a 16-bit Sound Blaster sound card while actually being connected to the internal sound card of the computer motherboard, which may be a Realtek. Virtualization can be carried out by software (this is quite common) or by hardware, in which case you can get better performance. Since 2005 it has been common for processors to include hardware virtualization technology, although it is not always activated by default.

    A virtual machine can be either system or process

The second one works differently and is less ambitious than the system one: instead of emulating a PC completely, it executes a specific process, such as an application, within its execution environment. So every time a user runs an application based on the .NET Framework or Java, they are using a process virtual machine.

In addition to this, a process virtual machine gives us the possibility to enjoy applications that behave in the same way on platforms as different as Windows, Linux or Mac. If you are a programmer you will have noticed that not much attention is paid to these, so when people talk about virtual machines they usually mean system virtual machines.

    virtual machines

    Why is virtualization software so important?

    1. We can deliberately run malware

    Due to the isolated space of a virtual machine, some users could be somehow reckless with the security factor and do some things that should be avoided. For example, we should never open attachments of emails that we have not requested, since they could be hiding malware.

So, in addition to being able to use the virtual machine to run possible viruses and to watch and explore how they behave, virtualization software helps us test suspicious files and then discard them.

However, we must bear in mind that these behaviours also have their risks, given that the most recent and sophisticated malware may be able to detect that the environment is virtualized and thus try to escape from the “guest” operating system to the “host” OS.

    2. It is possible to create instant backups or “Snapshots”

    We talk about their ability to create snapshots at the system level that can be restored instantly. Imagine that you intend to install a new application that is in its trial version and that may be unstable. Or you might want to uninstall a significant amount of software accumulated in recent months.

Well, if you are a bit hesitant in these situations, you can take a snapshot first: if something goes wrong, you will be able to restore it and continue as if nothing had happened.
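
With VirtualBox, for instance, this can also be done from the command line; in this hedged sketch, the machine name “test-vm” and the snapshot name are just placeholders:

# Take a snapshot before a risky change ("test-vm" is a placeholder VM name)
VBoxManage snapshot "test-vm" take "before-install" --description "clean state"
# List the existing snapshots
VBoxManage snapshot "test-vm" list
# If something goes wrong, power the VM off and roll back
VBoxManage controlvm "test-vm" poweroff
VBoxManage snapshot "test-vm" restore "before-install"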

    3. We can run old or incompatible software

    Sometimes, we have to use some important program, which is not updated, so it may become incompatible with our system. Sometimes we may need an application, which might only be compatible with a specific operating system. In this case, a virtual machine is the only solution!

    4. We can test a new operating system

    If we are Windows users and we are exploring Linux, there are several options, among these a dual boot configuration. However, it would be better to do it through the virtualization offered by virtual machines. Thus, for the Windows operating system (as “host”) it will only be necessary to install VirtualBox and create a virtual machine. Then, we can take any Linux installation ISO (we recommend Linux Mint or a recent version of Ubuntu) and then install it in the created virtual machine.
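
If you prefer the command line to the VirtualBox graphical interface, a rough sketch of the same steps could look like this (the machine name, the ISO path and the memory and disk sizes are placeholders to adjust to your own case):

# Create and register the VM ("mint-vm" is a placeholder name)
VBoxManage createvm --name "mint-vm" --ostype Ubuntu_64 --register
VBoxManage modifyvm "mint-vm" --memory 2048 --cpus 2
# Create a 20 GB virtual disk and attach it, together with the installation ISO
mkdir -p "$HOME/vms"
VBoxManage createmedium disk --filename "$HOME/vms/mint-vm.vdi" --size 20480
VBoxManage storagectl "mint-vm" --name "SATA" --add sata
VBoxManage storageattach "mint-vm" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "$HOME/vms/mint-vm.vdi"
VBoxManage storageattach "mint-vm" --storagectl "SATA" --port 1 --device 0 --type dvddrive --medium "$HOME/Downloads/linuxmint.iso"
# Boot the VM and follow the installer
VBoxManage startvm "mint-vm"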

This way, we can run Linux, the “guest” operating system, in a window within Windows, the “host” OS. A virtual machine can be used to test any operating system, since the machine acts as a sandbox: if something goes wrong in the “guest” operating system, it cannot affect the “host” OS.

    5. We can explore our operating system

You don’t have to be scared about possible repercussions. For example, we can virtualize Windows 10 within Windows 10 and, that way, play with the registry. If we are curious about the System32 directory, we can use the “guest” OS to open files, delete them and even edit them. This way we can see how far we can go without causing damage to the “host” operating system. Other reasons why virtual machines are important: it is possible to clone an operating system onto another machine, and we can develop software for other platforms.

Finally, we can say that other important virtualization solutions are QEMU and Parallels, while Microsoft has launched several tools for Windows, such as Windows XP Mode, Virtual PC and the newer Hyper-V. Now, for those people who might be interested in monitoring this type of applications or a different one, we recommend Pandora FMS, the most flexible monitoring software on the market. If you have any questions you can use this contact form, thank you very much.

Rodrigo Giraldo, freelance technical writer. A lawyer and astrobiology student, he is passionate about computing, reading and scientific research.

artificial intelligence, Integrations, Monitoring

    Avoid doing the work that can be done by a machine, automate software development!

April 20, 2018 — by Jimmy Olano

    continuous-integration-featured.png

    continuous integration

    Continuous Integration Software: Learn all about Jenkins

In a previous post we explained all about Continuous Release of software, and today we will see the previous step: the Continuous Integration of software, but from the point of view of automated tools. We chose Jenkins for didactic purposes, since it is released under a free software license and it is very flexible, just like Pandora FMS.
First of all, we will go over some practical tips and contrast them with Jenkins; they will also work for any other similar software, such as Buildbot or GitLab CI, or even more advanced tools such as Concourse or Drone (in fact, there is a Jenkins plug-in for Drone), thus combining Continuous Integration and Continuous Release of software.

    Continuous Integration (Change is the only constant)

    Continuous Integration is hard to implement but once it is ready, programmers always wonder: how did we manage to live without this in the past? In order to explain its operation, we must briefly look at what a software repository and a version control system are.

    How does a software repository work?

    A software repository gathers all the necessary files so that a program or system can be compiled or installed on a computer. If a version control software is responsible for its administration, then we will have the perfect combination for a new programmer to start working on the code development (this is known as process chain).

Nowadays, the most widespread version control system written as free software is Git, which allows us to have a distributed copy of all the code used in an application or project. So a new employee can obtain a copy of the whole project, make modifications in that copy and, once finished, run compilation tests (converting high-level language into low-level or machine language) as well as database or data-processing tests.
Once everything looks fine, the programmer has to check whether the project has been modified in the meantime; in that case the modified files are downloaded (automatically) and a summary shows whether those updates affect the work just done. If so, the new code is corrected, assimilated and merged (this usually happens when two programmers work on the same code at the same time).
After all this, it is compiled and tested again, and finally the contribution is “uploaded” to the repository. But wait, there's more: now let's say hi to Jenkins, the software that will be responsible for carrying out the tests automatically; once they are approved, its work is finished (and then the cycle is repeated).
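
As a sketch of that day-to-day cycle with Git (the repository URL and the branch name below are placeholders):

# Get a local copy of the whole project (placeholder URL)
git clone https://example.com/acme/project.git && cd project
# Work on a short-lived branch
git checkout -b fix-login-bug
# ... edit, compile and test locally ...
git add . && git commit -m "Fix login bug"
# Bring in what other programmers pushed in the meantime and merge it
git pull --rebase origin master
# Compile and test again, then publish the contribution for Jenkins to verify
git push origin fix-login-bug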

    Tips for the Continuous Integration

    We have collected the best tips in order to obtain the best results in the Continuous Integration of software; don’t forget to leave a comment in the comment section if you think there’s something missing.

• Place everything in the main repository: except for credentials, passwords and private codes. Everything else must be in the repository: the source code and the test scripts, property files, database structures, installation scripts and third-party libraries. Programmers should be warned that temporary files or artifacts produced by programming environments (which are easily recognizable by their file extensions) are not to be “uploaded” to the main repository.
• A main trunk with very few branches: going back to the example of the new employee, it is good to create a development branch to assign a job, perhaps to correct a bug. This branch must be short-lived and then be merged back into the main trunk, where it will even be tested twice by Jenkins.
• Discipline: the work of programmers must be carried out accurately, and they must contribute to the repository as often as possible.
• The golden rule: the longer the time spent without compiling, the harder it will be to find the bugs and, in the worst possible scenario, they could overlap with each other, making them difficult to detect.
• The compilation tests must be quick: around 10 to 20 minutes after the programmer uploads a contribution, an approval or rejection must be issued, graphically on a web page or by email. In order to achieve this, we must take some special considerations into account when we look into Jenkins.

    Automating processes with Jenkins

Our Continuous Integration of software, when applied to our chain of processes, must free us from the work of pre-compilation, compilation and even installation (creation of databases, configuration of default values, etc.). For this we have Jenkins, whose logo shows an English butler ready to help us in our daily work; it is free software under the permissive MIT license and it runs on GNU/Linux.

    continuous integration

It was formerly known as the Hudson project, written in the Java language and created at Sun Microsystems in 2005; it eventually became the property of Oracle and, out of that change of ownership, the Jenkins fork was born in 2011.

It’s quite easy to install: you need to download the public key from the Jenkins website, add the project repository to the list of repositories and then install it from the command line. Jenkins offers a web interface, so we may also want Apache or Nginx in front of it as a web server, as well as an email engine for notifications; once enabled, it will let us install the main plugins in order to work.
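
On Debian or Ubuntu, the procedure looked roughly like this at the time of writing (the key and repository URLs may have changed since then):

# Add the Jenkins signing key and stable repository, then install
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
echo "deb https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list
sudo apt-get update
sudo apt-get install jenkins
# The web interface listens on port 8080; the initial admin password is stored here:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword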

    continuous integration

For Jenkins, most things are plugins, including the chain of processes. Due to the number of plugins available, it is difficult to replicate Jenkins environments on other computers. We can configure our Continuous Integration through the Apache Groovy language, which is derived and simplified from Java, or we can create what we need through the web interface.

    continuous integration

Here we get to see the versatility of Jenkins: we simply create, using a plugin, a folder or directory called Pandora FMS to store the customized files related to the Continuous Integration process, but not the source code itself. Then we put it to practical use: if our project is hosted on GitHub, by using a plugin we can specify our user ID (and password, if the code is private) and in one or two hours Jenkins will take care of analysing our repositories through the GitHub API (in fact this API limits usage over time, which is why Jenkins takes so long when downloading, respecting the “rules of etiquette” for the use of resources).

    Declaring our identifiers in our source code
Once Jenkins is connected to our repository and the analysis is done, we will see the following:

    continuous integration

For example, if we use the Python language we must include the following pipeline file, with the extension “.jenkins”:
/* Requires the Docker Pipeline plugin */
node('docker') {
    checkout scm
    stage('Build') {
        docker.image('python:3.5.1').inside {
            sh 'python --version'
        }
    }
}

Jenkins needs this in order to create an environment contained in Docker, a piece of software that consumes fewer resources than a virtual machine while offering the same advantages; this way Jenkins can create a test scenario for the specified language and run python --version in it.

    Specifying our tests for continuous integration

Now we will add our scripts and, although it will take a while to create them, this will save a lot of time later since it automates the compilation and testing tasks. Here is an example of how to instruct Jenkins on GNU/Linux with sh:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'echo "Hello World"'
                sh ''' echo "Multiple lines allow the work to be staged"
                       ls -lah
                '''
            }
        }
    }
}

    continuous integration

Proprietary environments are also supported; for Microsoft Windows we use the bat step:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                bat 'set'
            }
        }
    }
}

Likewise, we will write the corresponding scripts so that each programming language runs its respective compiler, whether free or proprietary.

    Hardware for our battery of tests

These days physical hardware has taken a back seat to virtual machines: we can run several of them on a single computer. Jenkins needs an infrastructure in order to do its work in a few minutes, so we recommend the following:

• A local area network: which must be well managed and planned (automated) with phpIPAM.
    • A repository of Operating Systems (OS): with ISO images of the OS needed, as well as a repository and/or proxy server with updates for them.
    • We must have a manual integration machine: a human being must corroborate Jenkins‘ daily work.
    • We must have one or several machines with accumulated compilations: especially if our project works on several platforms or operating systems, we will then make the necessary scripts to partially compile the integrated differences throughout the day, at a first level, this guarantees us fast approvals of Jenkins.
    • Several machines for nocturnal executions: on “clean” and also “updated” OS but these must not have received our project (we will make scripts for these cases too) obtaining approvals at a second level.
    • Machines with heavy software, which is already installed, and stable: such as database, a Java environment or even web servers, in order to be used by the rest of the computers that carry out tests.
    • Model integration machines: which are configured with the minimum, recommended and maximum hardware that we officially recommend for our project.
    • Identical machines (as much as possible) to the servers in production: same amount of RAM, cores, video memory, IP address, etc.

    In all these scenarios, we can draw on virtual computers, in order to test in real machines through the Continuous Implementation of software.

At this point we recommend Pandora FMS for monitoring the Continuous Integration of software, as a way to measure the stress that our newly installed project puts on the systems. The fact that it compiles and installs correctly does not mean that it is doing what it should be doing; in fact, in Pandora FMS we offer the Pandora Web Robot (PWR) for web applications and the Pandora Desktop Robot (PDR) for desktop applications, in order to test those scenarios with detailed and accurate reports.

    Conclusion

    As you know, software development in the 21st century is in the pre-stage of software creating software, but that already enters the field of artificial intelligence.
Meanwhile, in Pandora FMS, from version 7.0 (Next Generation) onwards we use rolling releases and adapt to new technologies thanks to our flexibility. Do you want to find out more? Click on this link: https://pandorafms.com

    Do you have any questions or comments about Pandora FMS?
    Get in touch with us for more information: https://pandorafms.com/company/contact/

Technical writer. He began studying engineering in 1987 and programming with proprietary software. He now has a blog where he spreads knowledge about free software.

Geek, Monitoring, Network

    What is 5G and how will it change our lives

April 18, 2018 — by Alberto Dominguez

    5g-featured.png

    5g

    5G. What is it? When will it arrive and why do we need this now?

The term 5G stands for “Fifth Generation”. As you may guess from the number 5, there have been previous generations; we find ourselves surrounded by the fourth generation at this very moment, but things are about to change.

Okay, we are pretty sure that you have realised that in the phone world things have changed quite a lot over the past 10-15 years. Not long ago, mobile phones were slow and heavy and did not have Internet access. If we look back in time, we, as humans, used to use horse-drawn carriages and we used to wear clothes that were not very good-looking but quite useful and practical, like robes. If we keep going back in time, we will find that time when we used to paint with charcoal in caverns and that roasted mammoth used to be a delight for the most refined palates. Oh wait; let’s get back to mobile phones.

    In times of 1G (first generation), back in the 80s, the technology was mainly analogue and there was no international standard, but it depended on each country.

    The first standardization that reached more than one country – called GSM – emerged in the European environment in the 90s, and it gave rise to the Second Generation (2G). A few years later, it was clearly insufficient due to its slowness – it was only intended for the use of voice and SMS – and it was followed by the Third Generation (3G), also called UMTS, at the end of the 20th century, which introduced the use of the internet on mobile devices (at low speed, of course).

    About ten years later, around 2009, a new evolutionary leap took place in response to the demands of consumers; therefore the Fourth Generation (4G) was created, also called LTE. This is, the one we use today, which represents a substantial improvement in the speed of data transmission.

    And now, you might be wondering, why do I want a higher transmission speed if I am already able to watch videos of dancing dogs on YouTube? And more important, when will 5G be around us?

    Why do we actually need 5G?

    Although the current technology already allows us to watch videos of dancing dogs at a great speed, the technologies that we will see in the upcoming years, will demand a new standard that will turn into a significant increase in the speed of data transmission, in the amplification of frequency bands and in the reduction of latency. What does this mean?

Bandwidth amplification is very important in relation to the reduction of interference. In the coming years, thanks to the Internet of Things, the number of devices connected to the network is expected to be multiplied by more than 10 (Huawei estimates that around 100 billion devices will be connected to the Internet because of the IoT by 2025). 5G will operate in a wider bandwidth than 4G, which will enable a greater number of connections without interference and will make it possible for cars, appliances or wearables to be connected to the network without interruptions.

Equally important, or maybe more so, is the matter of decreasing latency. When we talk about latency we refer, roughly, to the time that a device takes to request information and receive it. This is probably not that important if we talk about watching videos online, but for some technologies, such as autonomous vehicles, the response time will be vital. In this regard, 5G is expected to reduce the current latency by approximately 10 to 50 times (down to about a millisecond), which can be the difference between life and death when talking about a vehicle moving at 100 km/h.

For some people, 5G will be the foundation of the long-awaited Fourth Industrial Revolution, which will transform human life in a way never seen before.

Today we can only glimpse the dimension of 5G's importance. We can already imagine what will happen when telephones, computers, wearables, vehicles, appliances, buildings and millions of sensors of all kinds connect with each other to share information but, until then, we will not really know how much this will change our lives.

    When will 5G arrive?

    In order to answer this question, we need to bear in mind that nowadays there is still no 5G standard, so an exact date cannot be established. However, the deployment is expected to begin around 2020, which will probably coincide with the celebration of some sporting events, such as the Olympic Games that will be held in that year in the Japanese city of Tokyo.

    However, the development of the 5G standard faces many difficulties. For example, it is necessary for all countries to agree on the bandwidth that will be used.

In addition to this, just like with previous generations of mobile telephony, the deployment of 5G technology is expected to be uneven between countries: it will probably start in the most developed countries of Asia and in the United States, then move to Europe and later to the rest of the world.

    Conclusion

    We can say that even though previous generations of mobile technology were aimed at people, 5G will mainly be used by machines. Millions of devices will connect with each other to receive all kinds of information which will make our lives easier. By the way, someone will need to have control for all those devices to work properly, don’t you think?

    While we await the arrival of 5G, we have to remind you that monitoring is already very useful today in order to control the proper functioning of devices and infrastructures, applications, services or business processes.

    Do you know Pandora FMS? It is a flexible monitoring software that will adapt to your needs.

    Learn more about Pandora FMS by clicking here: https://pandorafms.com

    Or perhaps you want to find out what exactly Pandora FMS can monitor. In order to find out all about it you can ask our Pandora FMS team. How can you do that? Well that’s quite easy, you can do that by using the contact form that can be found at the following address:

    https://pandorafms.com/company/contact/

Monitoring, System Monitoring

    Dynamic thresholds in monitoring. Do you know what they are used for?

April 16, 2018 — by Alberto Dominguez

    dynamic-thresholds-featured.png

    dynamic thresholds

    Dynamic thresholds: some characteristics of dynamic monitoring

    A threshold is a value used to change from one state to another in a check and dynamic monitoring is used to automatically adjust the thresholds of module states in an intelligent and predictive way. When the thresholds are defined by dynamic monitoring then we are talking about dynamic thresholds.

Depending on the threshold, a different state is defined; this way we will be able to find out the state of our check. The state of the different checks will therefore depend on these thresholds and, from this information, we will be able to find out whether the server, process, application or network element is working properly and whether there is any anomaly or incident, so that we can launch the corresponding alerts in order to solve it.

Internally, at a low level, dynamic thresholds work by collecting the values of a given period and calculating their mean and standard deviation. It is necessary to establish a learning period so that the monitoring system can learn from the data collected and analyse which values are below or above the average; this helps us assess whether there are possible incidents in our IT infrastructure. If we give it a short learning time, such as five or ten minutes, the system will only have recent data, so it is advisable to set a period of days or weeks so that more values are considered in these calculations and the dynamic thresholds are more solid.

Once the mean and the standard deviation have been calculated from the collected data, they are used to set the corresponding dynamic thresholds in the modules automatically. These thresholds change: as new data is collected the recalculation is performed, so the thresholds vary and adapt to the new reality by applying the intelligence mentioned above to the data.
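
As a toy illustration of the idea (this is not Pandora FMS's actual internal algorithm, and the warning/critical cut-offs below are arbitrary examples), the mean, the standard deviation and a pair of thresholds could be derived from a file of collected values like this:

# Toy illustration only: read one numeric sample per line from values.txt and
# print the mean, the standard deviation and example warning/critical cut-offs.
awk '{ sum += $1; sumsq += $1 * $1; n++ }
     END {
         mean = sum / n
         sd   = sqrt(sumsq / n - mean * mean)
         printf "mean=%.3f stddev=%.3f warning>=%.3f critical>=%.3f\n",
                mean, sd, mean + 2 * sd, mean + 3 * sd
     }' values.txt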

    Under these premises, we obtain the following advantages in our monitoring tool when applying dynamic monitoring:

• It applies the thresholds automatically. The main feature is that once it has learned from the data collected in the defined time, thresholds are automatically applied in those modules we want; it is not necessary to define the thresholds manually. This task is automated and it is also customized according to the values of each module, which leads us to think about intelligent monitoring.
• It recalculates the thresholds. The system recalculates thresholds from time to time based on the information obtained, therefore these are updated with the values received.
• It provides flexibility when defining thresholds. Although dynamic monitoring is focused on automation, it is also possible to manually adjust a dynamic range in order to give greater flexibility to those automatically generated thresholds.

    Once we have seen its operation and the advantages of monitoring our infrastructure using dynamic thresholds, we will put it into practice by looking at some examples.

    Example 1

    In this first example we are monitoring the web latency. We define a learning time of 7 days (a week) in the module:
    dynamic thresholds

    Once the indicated configuration has been applied, the following thresholds have been defined:
    dynamic thresholds

    This module will change its status to warning as soon as the latency is higher than 0.33 seconds and to critical state as soon as it is higher than 0.37 seconds. We represent these changes on the graph in order to see the action that will be carried out:
    dynamic thresholds

We can refine this further, since the threshold has been set high due to the peaks included in the monitoring. It is possible to reduce it by 20% so that alerts are triggered with a lower threshold. We will modify the value of the Dynamic Threshold Min. field using a negative value so that the minimum thresholds fall. As there is no maximum value, since anything above a certain time is considered critical, we do not have to modify the Dynamic Threshold Max:
    dynamic thresholds

    After applying the changes, the thresholds have been recalculated, in this state:
    dynamic thresholds

    Now, the graph looks something like this:
    dynamic thresholds

    By applying the dynamic thresholds we have managed to define the thresholds, in addition to that, we have verified that apart from the calculation that is made to obtain the thresholds depending on the time, we can dig deeper and manage to adjust the changes of state even more according to our monitoring needs.

    Example 2

In this second example we are monitoring the temperature in a data centre (CPD). The temperature in the data centre must be controlled and stable, so if you consult the monitoring graph, you will see the following information:
    dynamic thresholds

As you can see in the image, it is a flat graph that always stays between the same values; it shouldn't have sudden changes, which means that it shouldn't rise or fall too much.

    In this case we wanted to adjust the thresholds as much as possible, so we have manually defined a dynamic range with the following information:
    dynamic thresholds

    We have enabled the “Dynamic Threshold Two Tailed” parameter to define thresholds both above and below. These are the dynamic thresholds generated:
    dynamic thresholds

    These are shown in the graph:
    dynamic thresholds

This way, we are adjusting as much as possible to the temperature desired for the data centre, between 23 and 26 degrees. By looking at the graph, we can see that everything in the range between 23.10 and 26 is considered normal. Everything that goes beyond these thresholds will trigger the alerts.

The Pandora FMS monitoring software includes dynamic monitoring: it establishes dynamic thresholds in the modules, so it is not necessary to worry about defining the thresholds for alerts and events yourself. Let Pandora FMS do it for you.

    You can find out more information about dynamic monitoring in Pandora FMS in the following link.

    And don’t forget to leave a comment in the comment section. Our Pandora FMS team will be happy to answer your questions.

Monitoring, Usability, Windows

    Do you know what the best antivirus for Windows 10 is?

April 13, 2018 — by Rodrigo Giraldo Valencia

    antivirus-for-windows-10-featured.png

    antivirus for Windows 10

Antivirus for Windows 10: Find the most suitable one for 2018

    From Pandora FMS we don’t recommend our readers to download the most popular antivirus on the Internet, because its popularity or good reputation might not translate into effectiveness. Malware, spyware and adware are becoming increasingly sophisticated and many of those antivirus that you see online, might not protect your computer as they should. Just like biological viruses, computer viruses can mutate and might become resistant and powerful. Therefore, we will look at the best antivirus for Windows 10 in 2018.

When it comes to antivirus, the best thing to do is to check with independent antivirus laboratories, such as the German AV-Test Institute, which is known for working 24 hours a day and hunting more than 6 billion viruses. Other independent laboratories whose opinions have been considered are MRG-Effitas and the Simon Edwards Labs (the successor of Dennis Technology Labs), in addition to West Coast Labs and ICSA Labs.

It is important to know that, when it comes to viruses and antivirus, the reality is that malware in general is more dangerous and can become devastating. But the truth is that there is no separate “anti-malware” category, only antivirus software, which is used to fight malware, Trojans, adware, spyware and the dreaded ransomware, among many others.

    It is very important to know that Windows Defender, the antivirus by default of Windows 10, has not shown the best results according to the investigations carried out by the most important independent laboratories, although it has been improved in recent months.

On the other hand, many of the aforementioned laboratories use around 100 URLs hosting malware to test each product, with the most recent URLs being located specifically by the MRG-Effitas laboratory. This laboratory and some of the others test the products by detecting which antivirus programs manage to block access to the malicious URLs and which are able to remove malware during download.

    We must keep in mind that spam filtering and firewalls, which are not common features in Antivirus, may be present in some of the products that we will mention later on. There are other additional features, such as secure browsers for financial transactions, the removal of traces left by your computer and mobile devices, the removal of browsing history, the safe removal of confidential files and the virtual keyboard to counteract the so-called “keyloggers“, in addition to the multiplatform protection.

The vulnerability analysis offered by antivirus programs is able, in most cases, to verify that all the necessary patches are present and can even apply missing patches. But malicious spyware (which stays hidden) can record every move you make with your keyboard. But wait! There's more… there are Trojans that have the incredible ability to pass themselves off as valid programs while stealing all your personal data.

Other features are necessary for an antivirus product to be described as “good” or “very good” by the aforementioned laboratories: it must know how to handle spyware, given the extremely harmful nature of this “bug”. Also, many programs show dodgy behaviour while being perfectly legitimate: is the antivirus able to avoid “false positives”? This is another factor that laboratories must consider in order to assess antivirus software.

    antivirus for windows 10

    Okay, so let’s make a list of the best antivirus for Windows 10 in 2018, we will begin with the most qualified ones, by the laboratories that we have already mentioned and others that we have discovered along the way:

    • Sophos Home Premium, which is quite good to fight malware, and it’s also suitable for Windows 8.x and Windows 10.
    • Kaspersky Anti-Virus
    • Bitdefender Anti-Virus Plus
    • Norton Anti-Virus Basic
    • McAfee Antivirus Plus, which allows you to install protection on all your Windows, Android, Mac OS and iOS devices, with a single subscription.
    • Webroot SecureAnywhere Anti-Virus, which is an antivirus program that has a technology based on unusual behaviour.

    Let’s analyse separately the best programs in order to protect Windows 10, according to the criterion of independent laboratories, bear in mind that we will not consider their hype or their popularity on the Internet.

    1. Bitdefender Antivirus Plus:

The laboratories give this antivirus program the best reviews. In addition, Bitdefender Internet Security 2018 is like the big brother of Bitdefender Antivirus Plus and the strongest product in the package. It provides security against intimidating viruses, and it also includes a bi-directional firewall that prevents those annoying viruses already installed on your computer from getting in touch with the Internet.

    In addition to this, it offers an Internet browser that is independent and is designed, especially for your banking security. In addition to this, it has a real “crusher” that removes all traces of your PC, while also protecting your webcam, so that you cannot be spied on.

It has an anti-spam tool, file encryption, a parental adviser, anti-ransomware and anti-phishing features, and a rescue mode that will ensure that your computer boots safely, free from rootkits. Its algorithms measure the data to discover unknown threats, as well as the new threats that might have appeared on the Internet.

    2. Kaspersky Total Security Antivirus:

    The different laboratories place Kaspersky at the top of the list, and they clarify that Kaspersky has other versions, but the best one is Total Security. It includes safe mode for children, while also protecting them from inappropriate content and messages. It also has a built-in password manager so you can track the ever-growing list of them. It also has an online backup so you can keep your files safe.

    It has traditional viral scans and backup software, and you will also have a firewall to protect you from unknown connections, as well as a very advanced anti-malware security to detect “bugs” before they infect your computer. It gives you anti-phishing protection so that your personal information is secure. It is a great antivirus for Windows 10.

    antivirus for windows 10

    3. McAfee Antivirus Plus:

    This is quite important: with a single one-year subscription, all your devices will be protected, regardless of the different operating systems such as Windows, Android, iOS or MacOS, as long as they are all located in the same place. Just like Norton, it has been around for a while.

You should know that not all these features are available for all devices. But what kind of security do you want for your PC? Great protection against malware, great technical support, a firewall and a simple user interface. In addition to this, it has protection against common viruses. We know that Bitdefender and Kaspersky perform better in terms of protection; McAfee is the best for people who have a large number of devices in one place.

    4. Webroot SecureAnywhere Antivirus:

    It is a small but very fast program, since it barely uses the resources of your system. It can delay the harmful activity of Ransomware. It comes with a package of services, which is quite interesting. The signature database is stored in the cloud, so it only occupies about 4 MB of RAM, when the system is inactive. In addition to this, it does not require permanent updates (which are annoying for many users); therefore, this is one of the best and most comfortable antivirus for Windows 10.

    Now, for those computer professionals and webmasters of websites or blogs, who might be interested in antivirus programs, but also in monitoring software, we recommend Pandora FMS. In addition to having extensive information on viruses and antivirus, Pandora FMS is flexible and capable of monitoring devices, infrastructures, applications, business processes and much more. If you still don’t know about us, you can ask us any question you might have in this contact form.

Rodrigo Giraldo, freelance technical writer. A lawyer and astrobiology student, he is passionate about computing, reading and scientific research.

Extensions, Features, Monitoring, Pandora FMS, Release

    What’s New Pandora FMS 7.0 NG 721

April 12, 2018 — by Irene Carrasco

    whatsnew-721-featured.png

    whatsnew 721
    This last update package of Pandora FMS 7.0 NG contains improvements as well as visual changes and includes the resolution of some problems. A list of the most important changes can be found below.

    Visual enhancements

    • Changes in the audit log.
    • Tree view for non-initiated agents or modules.

    whatsnew 721

    • Digital clocks.
    • Graph values.
    • Elements of each of the specific profiles.
    • SQL graphics of custom reports.
    • Network Map Relationships.

    whatsnew 721

    • Dashboards in Firefox.
    • Description of the MIBs in the SNMP queries from the Pandora FMS console.
    • Full scale reports of network interface type in the metaconsole.
    • Non-initiated agents in the tree view of the metaconsole.

    New features

    • It is now possible to apply templates to an agent using API/CLI.
    • Assigning tags to events once they have been created is now possible.
    • Implementation of secondary groups.
    • Agent migration from the metaconsole via API/CLI.

    Improvements in Cluster display

    whatsnew 721

    • A group filter with recursion has been added.
    • An agent search engine has been incorporated.
    • ACL permissions have been optimized in both the cluster view and the creator.

    Additional improvements

    • Permission management in the scheduled tasks.
    • Agent’s Help.
    • Login with SAML.
    • Event pagination after applying filters.
    • SNMP modules with communities with entities.
    • Exporting logs to CSV on Windows computers.
    • Tree view policy section.
    • The access to file collections.
    • Alert validation.

    Solved problems

• The problem with monitoring the Windows security log has been solved.
    • The bug in policies that do not adopt modules has been fixed.
    • The remote server configuration log problem for NMS licenses has been fixed.
    • The bug in editing parameters in VMware plugin has been fixed.
    • The SLA report error has been fixed.
    • The problem with special characters in the passwords to access Pandora FMS has been solved.
• The SQL bug in the report view for reports created through the wizard has been fixed.
    • The problem with creating modules with CLI has been solved.

    Download Pandora FMS

    The last updated version of Pandora FMS can be downloaded from the downloads section of our website:
    https://pandorafms.org/en/features/free-download-monitoring-software/

Data Bases, Monitoring

    PostgreSQL10: logical replication and monitoring

April 9, 2018 — by Jimmy Olano

    PostgreSQL10-featured.png

    PostgreSQL10

    PostgreSQL10 logical replication. Find out all about it here

    Some time ago, we published a study on PostgreSQL monitoring in a very detailed way. A few days ago, a specialized magazine announced the good news about the launch of a plugin to monitor PostgreSQL10 with Pandora FMS, our monitoring tool. Today we will enrich our knowledge with the new version of PostgreSQL: version 10, let’s go!

    PostgreSQL10

    Introduction to PostgreSQL10

PostgreSQL is a powerful database which has been described in our other articles as one of the best free software relational databases, and nowadays this is still true. There you will find a summary of the articles which have been published in this blog or in the Pandora FMS forum, so you will be able to broaden your knowledge about PostgreSQL 10 and also about its previous versions. We need to mention that these articles are still fully valid, since new versions always maintain compatibility, so software that uses PostgreSQL can be updated to version 10. PostgreSQL is completely open source, with a license similar to BSD and MIT; essentially the license indicates that we can do almost anything we want with the source code, as long as we do not hold the University of California liable for its use.

With this article we want to continue the article mentioned earlier, where we explained how to query the locks in PostgreSQL with the pg_locks view:

    SELECT COUNT(*) FROM pg_locks;

When several users access the same record (or several of them), a lock is produced to avoid version collisions in the data, and this is an important parameter to monitor. In another study, on a different popular database, we introduced and briefly explained what ACID is (Atomicity, Consistency, Isolation and Durability); the registration of new data and the transactions that group it together are what create locks in a relational database. And yes, this also happens in other storage engines, hence the importance of the new feature in PostgreSQL10!
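
By the way, that same query can be run from the shell, which is handy if you want to wrap it in a Pandora FMS module; the user and database names below are placeholders:

# Count the current locks from the shell (user and database are placeholders)
psql -U pandora -d mydb -t -A -c "SELECT COUNT(*) FROM pg_locks;"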

    Locks, locks, locks: the nightmare of programmers

We need to explain why locks get in the way of our database monitoring tasks. Among those tasks is backup monitoring: backups must run correctly and be stored in a safe place, and they are not exempt from these locks either. In addition to this, we must follow the basic advice for optimizing database performance, so that everything works in harmony with a monitoring system.

    This can be carried out with specialized care (for the “Enterprise” version of Pandora FMS, and there are user training plans), in addition to this, the network administrators keep replicas of the databases on other servers in different physical locations.
    These replicas represent several advantages:

    • These may be physically located outside the company or in their branches, as well as always considering the encrypted communications in order to pass data from one side to another.
    • Conceptually it can be considered as “backup copies” which are updated essentially in real time.
    • We will be able to perform audit tasks and statistics on the replicas without affecting the performance of the main database.

    What is a database replica in PostgreSQL10?

A database replica replicates the data in a very simple way: it copies the files of the master database bit by bit, byte by byte. PostgreSQL10 stores each element of a database in specific files and keeps a binary record, a kind of log that summarizes the changes made to those files when records are added or modified. This way of replicating is also used by other database engines, since it is a well-known scheme.

A side effect of database replicas is that the slave will not be available while the replicated information is being written. This is not something to worry about; the most important thing is what happens in the master database when there is a lock on records or, even worse, on a set of records that belong to a transaction that must be rolled back. These should not be copied to the replica, because that information will not be permanently registered and will be deleted (only a summary of it will be kept).

    Let’s look at these previous things with a simple example: two clients of a bank keep accounts and client A wants to transfer money to a client B (in real life this is much more complex, this is only a simplified example). At least two writings must be made in the database: one debiting the amount to customer A and another one crediting the amount to customer B: when both events have been verified the transaction can be completed and the changes become permanent.

    What would happen if, client A made this transfer of money to customer B, and then the automatic payment of his credit card were deducted, ending up with no balance for the transfer? Well, the accreditation that would have been made to customer B and the debit made to customer A would not be permanently registered and would be discarded: this is how ACID works, guaranteeing the integrity of the data and thus complicating the replication of the information.

    The replication process does not know anything about registers or users that want to record or modify data, the replication process only knows that the files must be the same in both machines and if the origin writes data in any of those files, then it must wait until it finishes so that the file is then available to be read and copied.

    What is a logical replication of a database in PostgreSQL10?

The approach in PostgreSQL10 is different and consists of the following: rather than copying files the way a normal replica does, it consults the binary record, which keeps track of the changes made since the last successful replication, and translates those changes into information about the records that are already permanently committed in the database; these records are then read and applied to the replica. This way, locks are ignored, because records whose fate is still unknown (permanent or discarded) are simply not replicated yet, which is a very practical and ingenious solution that also gives us additional benefits.

    How are logical replications possible in PostgreSQL10?

Thanks to this new version 10, we can install a PostgreSQL extension called pglogical, from the software house 2ndQuadrant, which has been adding logical replication features to PostgreSQL since version 9.4.

pglogical is available as free software under the same licensing conditions as PostgreSQL10, and in order to install it we must follow the steps below, which we will explain in a practical way for GNU/Linux Debian and its derivatives (a command sketch follows the list):

• First we must add the PostgreSQL repository to our computer from the PostgreSQL website; in the case of Ubuntu the distribution ships version 9.5 and we need version 10.
• We must import the repository key, which guarantees that whatever we download is legitimate according to what is published on the PostgreSQL page (all the details in this link).
• We will do the same with pglogical: we will add the repository from the 2ndQuadrant website in order to get the latest version available.
• We must also add the respective key of the 2ndQuadrant repository (all the details in this link).
• Once we have configured the repositories, we run apt-get update and then apt-get upgrade.
• Finally, we install PostgreSQL10 with apt-get install postgresql-10 pgadmin3 and pglogical with apt-get install postgresql-10-pglogical.
• We tested this process on a 64-bit Ubuntu 16.04 machine (in fact, the database offered during the installation of Ubuntu Server is PostgreSQL) and the only problem we had was with the Russian-language dictionary of the Hunspell spell checker.
PostgreSQL10
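
Summarizing those steps as a rough sketch for Ubuntu 16.04 (xenial); the repository URLs and keys may have changed since this post was written, and the 2ndQuadrant repository setup is left as a comment because its exact commands depend on their website:

# Rough sketch for Ubuntu 16.04 (xenial); URLs and keys may have changed since publication
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ xenial-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
# ... add the 2ndQuadrant repository and its key as described on their website ...
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install postgresql-10 pgadmin3 postgresql-10-pglogical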

If we want to try the latest development version of pglogical, we can take the source code straight from its repository on GitHub. On the 2ndQuadrant website they state that the version of pglogical must always match the version of PostgreSQL installed; different machines can work with different versions (and replicate between them) as long as this is respected, so it is up to each database administrator to keep it in mind.

      Extending the utility of logical replication in PostgreSQL10

Logical replication overcomes certain technical limitations of normal replication, thus fulfilling our goal of data backup, but we can go further: we already know that the logical replica is NOT a true and faithful byte-by-byte copy of the master database. The information that we copy from the master database arrives at the replica as if it were an “independent” database and is written into its own files, depending on the type of hardware, the operating system installed, etc.
A record X copied to the replica is identical, byte for byte, to its original, but the way in which it is written on the hard disk will be different on both machines. Finally, we must point out that with a logical copy we can extract statistical or audit information from it without having to wait for the replication to be written (for example, every five minutes or every gigabyte, whichever comes first).

Extracting statistical or audit information from a replica database implies writing queries (or even adding indexes) which, of course, do not exist in the master database but which we need in order to obtain that information: once we write them (even if they are temporary), the database is no longer a faithful and exact copy of the master database, which causes problems when replicating file by file.

      With the logical replication we will not have that problem since it is guaranteed that all the original records are copied in the replica, which guarantees (because it is a machine for replicating) that these cannot be modified or deleted but they can be read and consulted.

      Extending the utility of logical replication leads us to practical examples, for example, the Credit Card Department of a bank must keep record of clients in real time without impacting the main database with their work: we will be able to install a logical replication server that will only copy the data of the clients who have credit cards. These data could be personal data, bank accounts and of course data of credit cards. It is not necessary to replicate all the customers of the bank, only a part of them; Likewise, the Credit Card Department can even create additional tables to analyze bank movements thus being able to approve an increase in the customer credit limit and thus other things which mean income of money for the company.

      Configuring PostgreSQL10 for logical replications

When PostgreSQL10 is installed, it comes with a default WAL configuration. This configuration allows recovering from unexpected shutdowns or failures that prevented the data from being written to the hard disk.

In the case of replication, logical replication is a new feature that many people do not use yet because it is disabled by default. First, we need to have exclusive access to the database (connected with the proper credentials) and, second, we need to change the value of the wal_level parameter to 'logical'. To find the location of the postgresql.conf file, we just need to execute the command show config_file in a psql console, edit the file to set wal_level = logical and save it. Then we will need to restart the service, which is not a problem since we are the only ones connected.
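
A minimal sketch of those steps from the shell (the connecting user and the service unit name may differ on your system):

# Locate the configuration file (run as a user allowed to connect, e.g. postgres)
psql -U postgres -c "SHOW config_file;"
# Edit postgresql.conf and set:  wal_level = logical
# Then restart the service; the unit name may differ on your distribution
sudo systemctl restart postgresql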

This change tells PostgreSQL 10 that it will have to write the additional information needed to decode the binary write-ahead log into logical, row-level changes, hence the need to briefly stop the database and restart it. PostgreSQL 10 can also host scripts in the Python language, so how this is scheduled will depend on each database administrator, working together with the network administrator(s) to take advantage of the night or early morning hours and do the work without impacting the company's normal daily operations.

      Creating publications in the master database

PostgreSQL 10 works with publications that we must define in the master database. Connected with the appropriate credentials in a terminal window, we will create a publication for our example of the Credit Card Department of our imaginary bank; it will look something like this:

      CREATE PUBLICATION dpto_tc FOR TABLE clients, bank_accounts, credit_card;

This will create a publication called dpto_tc for logical replication of the tables clients, bank_accounts and credit_card.
If we need to add all the tables to a single publication, we would write the following:

      CREATE PUBLICATION all_the_tables FOR ALL TABLES;

We must emphasize that, by default, the data of the tables added to a publication are copied to the logical replica in their entirety; however, there is an option to copy only the data added after the replication is set up (see the subscription example further below).

      Preparing the logical replication

Once we have defined the publications, we must carry out the work that may require the most thought and decision-making from us: creating the data structure of each and every table that each publication includes. If we used the " FOR ALL TABLES " clause in at least one of the publications, we will have to make an identical copy of the entire structure of the database.

That is why we recommend getting ahead of the work and always creating a complete copy of the entire database structure, since pglogical will never do this work for us; when replicating, it will simply return a 'table not found' error (which leads us to monitoring the logical replication job, so Pandora FMS, get ready to monitor!).

      Creating subscriptions

Once the data structure is ready to receive the logical replica, we must create a subscription using the same names as the publications we created. Once we are properly connected to the machine that will hold the logical replica, the syntax will be the following (using the same bank example):

      CREATE SUBSCRIPTION dpto_tc CONNECTION 'host=bd_maestra dbname=mi_credenciales ...' PUBLICATION dpto_tc;

To make it easier, the subscription will have the same name as the publication. Regarding the connection data, we must fill in the values according to our network structure and configuration: connection attempt timeout, port, etc., everything following the RFC 3986 standard.
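
As a hedged illustration (the host, database, user, password and port are placeholders, not real values), a more complete subscription command could look like this; the copy_data option is what controls whether existing rows are copied or only the changes made afterwards:

CREATE SUBSCRIPTION dpto_tc
  CONNECTION 'host=bd_maestra port=5432 dbname=banco user=replicador password=secreto connect_timeout=10'
  PUBLICATION dpto_tc
  WITH (copy_data = true);   -- use copy_data = false to skip the initial copy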

      Modifying the publications

With the ALTER PUBLICATION command in the master database we can add new tables, drop tables, change the owner or even rename the publication, among other options.
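
A few illustrative commands (the table name movements and the role analista are hypothetical, used only as examples):

ALTER PUBLICATION dpto_tc ADD TABLE movements;        -- add a table
ALTER PUBLICATION dpto_tc DROP TABLE bank_accounts;   -- stop publishing a table
ALTER PUBLICATION dpto_tc OWNER TO analista;          -- change the owner
ALTER PUBLICATION dpto_tc RENAME TO dpto_tarjetas;    -- rename the publication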

      Keeping subscriptions up to date

We can automate the maintenance of subscriptions in the slave database with the following command:

      ALTER SUBSCRIPTION dpto_tc REFRESH PUBLICATION;

This will pick up the tables that we have added, which is why we talked about copying the complete structure of all the tables in the database. But we need to dwell on this: if we create a new table at the origin and add it to the publication, we must also create that table's structure at the destination and then refresh the subscription, as in the sketch below.
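
For example, if we added a hypothetical movements table, the full sequence could be sketched like this (run each statement on the side indicated in the comment):

-- On the master: create the table and publish it
CREATE TABLE movements (id serial PRIMARY KEY, client_id integer, amount numeric);
ALTER PUBLICATION dpto_tc ADD TABLE movements;

-- On the replica: create the same structure, then refresh the subscription
CREATE TABLE movements (id serial PRIMARY KEY, client_id integer, amount numeric);
ALTER SUBSCRIPTION dpto_tc REFRESH PUBLICATION;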

      Monitoring logical replication in PostgreSQL10

Just as we can check the state of normal replication with pg_stat_replication, for logical replicas we will use pg_stat_subscription as follows:

      SELECT * FROM pg_stat_subscription;

We can also look at some specific fields (these come from the pg_stat_replication view on the master, where each subscription shows up as a replication connection); see the example queries after this list:

• application_name: the name of the subscription.
• backend_start: the date and time when the logical replication started.
• state: if it is working we will see "streaming", i.e. transmitting.
• sent_location: hexadecimal value, useful for binary audit purposes.
• write_location: same as above.
• flush_location: same as above.
• sync_state: returns "async", meaning the replication runs independently, in the background.
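
A minimal pair of queries to check both sides (as a note, in PostgreSQL 10 the older *_location columns of pg_stat_replication were renamed to *_lsn):

-- On the replica: one row per subscription worker
SELECT subname, pid, received_lsn, last_msg_receipt_time
  FROM pg_stat_subscription;

-- On the master: the subscription shows up as a replication connection
SELECT application_name, state, sent_lsn, write_lsn, flush_lsn, sync_state
  FROM pg_stat_replication;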

To finish this article: programmers will need to create a script that connects to both databases in read-only mode and compares them record by record to check that the information matches at origin and destination. This process could be run in the early hours or on weekends, and the results should be stored in a third database or in log files so that they can be monitored with Pandora FMS and the corresponding alerts configured appropriately.
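
As a starting point, and only as a hedged sketch (the connection string is a placeholder), a row-count comparison could be run on the replica with the dblink extension; a full record-by-record comparison would extend this idea with per-table checksums or joins:

CREATE EXTENSION IF NOT EXISTS dblink;

-- Compare the number of rows in "clients" on the master against the local replica
SELECT r.remote_count,
       (SELECT count(*) FROM clients) AS local_count
  FROM dblink('host=bd_maestra dbname=banco user=lector password=secreto',
              'SELECT count(*) FROM clients') AS r(remote_count bigint);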

      Conclusion

We have barely scratched the surface of logical replication, since there are still many other features, such as:

• Row-level (record) filtering: much like a CHECK constraint, we can replicate only the rows that meet a certain rule.
• Column-level (field) filtering: if a table contains many fields that are not relevant to the Credit Card Department (as in our practical example), we will replicate only the ones we want.
• pglogical has a parameter, unique to this add-on, that delays replication by whatever period of time we need: we might want replication to start at night, when the employees are gone. This feature is not "embedded" in PostgreSQL 10.

We think these concepts will become quite common in other data management environments in the future. If you have any question or comment, don't forget to write it down here. Thank you!

Technical writer. He began studying engineering in 1987 and programming with proprietary software. He now keeps a blog where he spreads knowledge about free software.

Monitoring, Network Monitoring

    8 Tools for social media monitoring

April 6, 2018 — by Alberto Dominguez

    social-media-monitoring-featured.png

    social media monitoring

    Social media monitoring. Find out 8 tools for great monitoring

From your cousin Paco to your grandmother Maria. From the pharmacist around the corner to the president of your country. Some studies claim that about 3 billion people around the world already use social networks. Don't you think that's a lot of people?

Since their emergence some 10 years ago, social networks have quickly become one of the great things about the Internet. Their success has led millions of people to spend hours each day immersed in endless content, and for many of these people this is the main reason they go online. Given that the number of Internet users is estimated at around 4 billion, it could be said that about 75% of people who use the Internet use it for social networks.

Nowadays, companies know how important social networks are, and most businesses have profiles on them. On the other hand, thousands of professionals also use these networks to make themselves known or to share ideas and experiences.

But direct presence is not the only concern for brands when it comes to social networks. Users post all kinds of comments about companies, brands or products on their own profiles. Thus, there are basic concepts for online survival, such as digital reputation, which is in the hands of users through networks like Twitter, Facebook, etc.

In addition, it is not just a matter of digital reputation for companies on social networks. The good thing about social networks is that they can also be used to advertise a company's products or to generate some buzz about a product.

However, it's quite difficult to keep track of the volume of content that is generated there. All this volume and complexity, combined with the interest of professionals and companies, has given rise to all kinds of applications for social media monitoring. From those that specialize in specific networks to those capable of monitoring dozens of them at once, there are plenty to suit the needs of each user.

    When it comes to companies, these applications are very useful to answer a question that is quite important nowadays: What do my clients think of my brand?

    In this article we will briefly discover some of the tools for social media monitoring which can be found on the Internet. Most of these have some cost-free option, but some of them also have paid versions, which provide additional features. Let’s start!

    Hootsuite

This is one of the most popular social network monitoring tools and, according to its own website, it has more than 15 million users. It can be used with a number of social networks, especially the most common ones; it is easy to use and intuitive. It allows you to monitor your brand and whatever your customers say about it on social networks.

    Klout

It is one of the most popular tools and is especially focused on content. It has features that suggest content that may be of interest to followers, and it can monitor their reaction. Its "Klout score" is quite popular: an index between 1 and 100 that represents each user's influence, or ability to generate actions.

    Social Mention

It is able to monitor the mentions received by a brand in more than 100 social networks. It classifies their influence through 4 different categories: "reach", "passion", "sentiment" and "strength".

    Howsociable

It's very useful for measuring the presence of a brand on social networks. One of its distinguishing features is that it scores each platform separately, which allows us to see which platforms are performing best for our brand and which ones need improvement.

    Twitter Analytics

This is one of the best tools when it comes to monitoring Twitter. It is able to measure interaction and improve the success of tweets, but it is also able to explore the interests, locations and demographics of followers.

    Tweetdeck

Another relevant tool for Twitter monitoring. It has several features related to this social network, which makes it a very complete option for monitoring it.

    Google Trends

It is one of the classics in this field. It monitors the most common searches on the most used search engine in the world: Google. It also allows you to compare results by country and to see graphically how interest (search terms) in a brand evolves.

    Google Alerts

This is another Google service that sends you by email the new content that has been generated, found by the search engine, and that contains the search terms selected by the user. It has configuration options that allow you to select variables such as the type of alert or its frequency.

    And so far we have seen some of the best known social media monitoring tools, but there are many more, according to the taste and needs of users. As in so many other fields, the offer you can find on the Internet is limitless.

And you, what do you think of social networks? Do you think they will remain popular for the next few years or, perhaps, will they be overtaken by some new trend?

We are looking forward to hearing your opinion; let us know in the comment section below. Have you checked out any of our other articles? We have plenty, on many different topics.
If you have read any of them, which one is your favourite?
We want to know!

And remember, if you have any question, don't forget to get in touch with us! We will be quite happy to help you!

Don't forget to check out the products on our website! We have many different technological tools that might help your company: Pandora FMS, eHorus and Integria IMS. Do you already use one of them? Have a look at our website to check them out! And let us know your thoughts on these social media monitoring tools.

    Thank you very much for participating! We look forward to hearing from you!

Monitoring, System Monitoring

    Is your box black or white? Monitor with a different approach

April 5, 2018 — by Jimmy Olano

    blackbox-and-whitebox-testing-featured.png

    blackbox and whitebox testing

    Blackbox and whitebox testing for a better monitoring

In the automated field of computing, certain terms or trends come into fashion from time to time. Today we will look at the concepts of blackbox and whitebox testing, applying them to the science and art of monitoring.


    Introduction

Today we will show a new approach to monitoring, from installing Pandora FMS (or any other software you may have selected, since we will mostly talk about concepts) to introducing the ideas of blackbox and whitebox testing. We will keep the explanation brief, but we will include links so that anyone who wants to delve into each aspect can do so.

    Collecting metrics

Every administrator of a local area network must be aware that monitoring is unavoidable and that Pandora FMS makes the task much easier. For this, Pandora FMS takes into account the most important metrics but, since for now Pandora FMS can't do magic, it uses dynamic monitoring: a feature present since the release of version 7.0 NG that makes installation easier.
The metrics we can collect are grouped into four categories:

    All these metrics are collected in very different ways and forms, depending on the network topology, which leads us to distributed monitoring, where we explain thoroughly the flexibility that characterizes Pandora FMS.

    Alert management

After a certain amount of time collecting data (let's say a week), our lives will be filled with alerts (whether by email or by messaging services such as Telegram or Twitter), which is completely normal and nothing to be frightened of or to push back against:

• If it is something really important, we will correct it and leave the alert as it is, starting to harvest the fruits of our work with Pandora FMS.
• If it doesn't deserve more attention, we can modify the alert (yes, we know that Pandora FMS's dynamic monitoring was what created it, but in the end we are the ones who decide), adjusting its values to avoid excessive repetition. Each alert in Pandora FMS has a comment tab where we can justify and/or explain why we modified the alert values. This way we can go on vacation and our substitutes will have a human-written guide to advise them.
• It is worth knowing that an alert can be suspended so that it is not shown on the Pandora FMS console, either because we are going to do something one-off and urgent, or even on a schedule (for example, while backing up a database server the network will naturally be congested and trigger an alert, since Pandora FMS is not aware of our backup policy or procedure).
• The next step would be to deactivate a given alert, which is more advisable than deleting it because, in the monitoring field, we have one environment today and don't know what will happen tomorrow; we may need it again, and we'll spare ourselves the work of creating it from scratch. This also explains why monitoring tasks cannot be fully automated.
• When we modify an alert, one of the values we can set is the maximum number of times Pandora FMS will notify us (much like our skin feeling the first raindrops: after a while, once we are already wet, it stops informing us about it). However, there are other events that occur in cascade and trigger a large number of alerts at once: if the modem used to access the Internet in a certain branch office is damaged or disconnected, for example, all devices in that local area network will trigger their alarms (assuming we do not have a satellite server). For this, Pandora FMS has Cascade Protection under a "parent-child" model: we activate the corresponding checkbox and then associate the agent with its parent agent. If the parent agent has any alert in critical status, the child agents will not trigger their alarms.


• We cannot end this section without mentioning that, once all these alerts have been fine-tuned, we will be able to go further and create generic alerts for groups of agents (in order to reuse them on new devices added to our networks), and even create alerts based on event correlation to identify and act in cases where no classic alert is triggered. Imagine that we have several load-balanced web servers configured to alert if any one of them exceeds 90% CPU usage, but it turns out that each and every one of them is reaching 60% or even 70% of capacity: this is a good time for an alert so we can first review what is causing the load and then decide whether more servers need to be added, if the cause is the natural growth of the company and its web clients. It is even useful for detecting hardware and/or software changes that lead to investigating and/or adding more monitoring agents (or at least modifying them).

    Whitebox

We can now define the whitebox model: we know our system, how it works and what the processes are, and with the help of Pandora FMS we can place agents (and even satellite servers) to collect the data. We are in the whitebox category because we have the complete map; we know every process and the whole mechanism in detail, and nothing is hidden or closed to us. Obviously, collecting metrics under the whitebox scheme saves time and effort, since we know in advance where the key points are and how they work, and we can monitor vertically. However, some unknown or unexpected aspect may, under certain conditions, slip through our fingers, and this is where whitebox testing comes in.

    Whitebox testing

Whitebox testing is also known as transparent-box testing (among other names) and is actually somewhat out of our hands, since it belongs to the development and operations teams (to which we, as a monitoring team, are attached); it takes advantage of our knowledge of the software and the system to make it part of a test process. Under certain circumstances that we have detected through our alerts, based on metrics properly collected under the whitebox model, we can indicate the exact conditions to reproduce a given exception. The advantages are clear:

• We obtain a better overview of the situation.
• It helps optimize the code.
• It encourages introspection among programmers and awareness of their actions.
• It allows hidden errors to be found.
• All of this leads to efficiency in finding errors and problems.

    Disadvantages of whitebox testing:

• We need access to the source code of the software involved.
• It requires a high level of understanding of, and experience with, the affected program.

    Monitoring software using the whitebox model

Apart from Pandora FMS itself, many other programs use this model. Some time ago we published an article about Zabbix (where you can see how it operates in detail, even though the article is a comparison); we also have PRTG Network Monitor (which we evaluated and which is of a similar weight and size to Pandora FMS, but proprietary software).

When users report that "the system is going slow"

Although we already have our Swiss army knife (Pandora FMS) and we are better prepared than a boy scout, at some point the dreaded qualitative report from one or more end users will arrive: "the system is slow".
With reports, our patience must be that of a saint. Whether our users are employees or clients (the latter will have no shame in complaining, with or without reason), this is where we must sharpen our wits. For the users who work for us, we should indicate the most appropriate method for reporting any problems.


    In the case of the company’s customers, we must rely on the customer service department. This doesn’t mean that the battle is lost, but that it’s time to use the artillery we have in Pandora FMS.

    Applications that interact with users

Today, our world of information is divided into two types of applications. On one hand there is the usual kind, installed in an operating system through a suitable process and properly configured for that particular environment, which we have always known as plain desktop applications (compiled specifically for a particular environment).
On the other hand there is, ironically, also a desktop application, but one which over the last ten years has gained tremendous prominence thanks to the add-ons developed for it: our web browsers. A web browser built on free software, such as Mozilla Firefox, offers developers the usual HTML (CSS included) and JavaScript languages, giving us a well-known and secure environment regardless of the operating system installed or the hardware used. Even more, it allows us to incorporate plug-ins for a wide variety of tasks, from games to emulating operating systems or simply running virtual terminals. Programming is moving more and more towards this area, given the obvious advantages.

    Monitoring desktop applications

For the Windows operating system, Pandora FMS has the Pandora Desktop Robot (PDR), which allows us to record actions on any installed software and obtain the results (whether it ran correctly or not, processing time) that can later be sent to our monitoring server for evaluation.
It is recommended to install the PDR and the probe in virtual machines with auto-start and automatic user logon in order to record the actions, save them and be ready to go. We can keep these virtual machines running continuously to run periodic tests, or launch them when a user reports a problem to us. To do this, we reproduce the situation only once, because we will save it and schedule it to run autonomously many times, then analyze the results and confirm either the error or that the report itself was mistaken.

A more daring option is to install it through Active Directory on the end users' machines and run our tests from there, from the real environment. This option must be handled very tactfully, even restricted to specific users who repeat the same report over and over again.

    Monitoring web applications

There are two components we can monitor: the server side and the client side. On the server side we will use the whitebox model, because we know how it works, which database it connects to, the languages used, etc. On the client side we will use the Pandora Web Robot (PWR), which navigates through websites simulating user actions and collects the results as if it were just normal monitoring. You can watch this explanatory video on YouTube.

Among the possibilities of this kind of monitoring is the ability to install virtual or real machines in different geographical locations; take the case of different cities and even continents pointing to the same web server. This gives us a real panorama and, most importantly, a quantitative report that can be compared against what users report (corrective mode) or against tasks we schedule ourselves (preventive mode).

    Blackbox

The last thing we'll look at is the blackbox model, because we can apply it even if we don't have the data of a whitebox model at hand (in fact, it's an independent model). The blackbox model can be used with any desktop or web application: we start collecting data for a while as if they were normal metrics, averaging the values (again, let's say a week per subject), and generate alerts from there.

It's called the blackbox model because we don't know how the applications work, where they go or where they come from; what we know is what the end user sees, and the end user will tell us which processes are critical so that we can analyze and monitor them in search of deterioration (or perhaps improvement) in software performance. We sit at the very end, where we only know the result and have no idea what causes it.

We can see this blackbox model as an audit process: a company hires us to review its software in different environments but does not provide the source code or allow us to access its servers, only its API or web services, as the case may be. Its operation is hidden, but we can still send quantitative reports wherever values are exceeded or have changed significantly (a 10% variation is always a good reference, for better or for worse) and under different conditions (time, geographical location, connection method, different computers and/or operating systems, etc.).

In proprietary software development, the blackbox model comes in very handy: Pandora FMS is the tool to test and/or push to the limit future applications that will reach users, without compromising the source code or touching the servers or infrastructure. In other words, we monitor even before the production stage begins, like a kind of beta user but with very specialized tools.
Following the last example, our client may be interested in finding out where the problem happens or what causes it, but we will not be able to give that answer, since we were hired to work under the blackbox model and that model only evaluates results, not causes (unless we have Pandora FMS in our arsenal).

    Blackbox testing

Blackbox testing (also known as functional testing) treats the software under test as a whole, without knowledge of its internal components. The tests use the software's interfaces and try to ensure that they work as expected. As long as the functionality of the interfaces remains unchanged, the tests should pass even if internal functions are changed. A blackbox test is "aware" of what the program should do but has no knowledge of how it does it. Blackbox testing is the type most commonly used in traditional organizations that have beta users as a separate department, especially when they are not coding experts and have difficulty understanding the code. It provides an external perspective, like an audit, of the software under test.

    Software that uses blackbox model

Nagios (although its agents are a bit cumbersome to set up) is, in the end, a piece of software that actually gets its metrics this way.

    Monitoring: whitebox and blackbox

Pandora FMS is designed to adapt to many of the challenges that arise; each company has its particularities, but that does not mean we are unprepared. Service monitoring is something quite different from the usual: services are the set of functions we offer to our clients or collaborators. These services, in short, will be low level (whitebox model) or high level (blackbox model), so it is a mixed working model. We even dare to say it is the most appropriate one, but also the one that involves the most work, because both sets of results must be combined and delivered to the development and operations teams in order to find the solution to errors or possible performance improvements.

    Conclusions

We have covered at least 80% of the monitoring material in the most enjoyable way possible. If you wish, you can bookmark this article so that, over several days, you can read and discover why we are so passionate about our work: it involves programming, network administration and assistance to end users, programmers and even robots. What a titanic task!

Technical writer. He began studying engineering in 1987 and programming with proprietary software. He now keeps a blog where he spreads knowledge about free software.

Geek, Monitoring

    What is Google Pay?

April 2, 2018 — by Rodrigo Giraldo Valencia

    gpay-featured.png

    gpay

    Gpay, what is it? Quick, easy and safe online payments

On January 8, Pali Bhat, Google's vice president of product management for payments, announced that from that date on, "everything would be brought together in Google Pay." Bhat was the promoter of this idea, which is meant to make our lives easier. Let's have a quick look at it.

People who have used Android Pay to pay for food, used Chrome to store their payment information, or purchased apps on Google Play have already experienced how Google helps them pay. While these processes were being developed, Google worked hard to make these experiences easier and quicker, as well as safer and more consistent.

This is how Google Pay came about. It brings all forms of payment together, including Android Pay and Google Wallet. It can now be found in Google applications, online and in stores, and it is available in different applications and websites for different types of customers.


As for developers, they can visit the "Payment Solutions" website to explore how to implement Google Pay, although in this tutorial we will give you the important information so that you don't waste time. Developers can also work with one of the processor partners of this new Google application to carry out an even simpler integration. You will discover how to do this in this post.

    Let’s have a look at some of the most outstanding features of this Google innovation:

• The entire payment process will be easier and faster.
• The customer experience will be improved.
• It will reach billions of Google users all over the world.
• It is implemented only once and can be used everywhere.
• It is a very simple process for suppliers of goods and services and for their clients.
• Conversions are improved, with simple, on-time payments.
• Customers will be able to pay very quickly, making it easier to purchase in apps and in Chrome with just a phone.
• Global customer reach at Google scale.

    Why has the shopping experience significantly improved?

Since the payment process is very simple, fast and agile, shopping basket abandonment is considerably reduced, while customer satisfaction increases. In addition, there are no extra fees, since Google does not charge sellers or buyers. Customers/buyers can also use any of their stored credit or debit cards, so they keep control and pay as they wish.

Regarding security, additional protection is provided: Google stores customers'/buyers' payment information securely and only shares what is required for each transaction. Now let's have a look at one of the most interesting parts: the implementation of this new Google application.

    How is Google Pay implemented and operated?

    Well, it works with several payment processors:

Adyen

It turns out that hundreds of millions of people around the world add payment cards to their Google accounts which, in other words, can be used to check out from any Google product, so there's no need to enter payment details every single time. With this new Google payment API, we can enable the same payment experience for our own products and/or services without effort. Customers, in turn, can pay with any credit or debit card stored in their Google accounts, so sellers and service providers can accept payment online, without interruptions, in Android applications or in Chrome on a mobile phone.

    Google Pay is a unique solution that accepts payments from anywhere and from any device, this is due to:

• Complete functionality for mobile phones.
• Global reach with a single solution.
• Targeted risk management.
• Greater authorization for subscriptions.
• Flexible and easy integrations.

As we mentioned earlier, Adyen (one of the several payment processors for this Google API) is an excellent solution for accepting payments anywhere and from any device. In addition, Adyen offers frictionless payment flows, Adyen MarketPay (a payment solution for marketplaces with different payment methods) and a "RevenueAccelerate" feature that helps optimize authorization; it also supports payments in the store.

Additionally, Adyen has RevenueProtect (integrated risk management), global reach with a local focus, a unified cross-channel report, and a whole associated ecosystem that offers the option of connecting existing platforms.

    Braintree

Currently, Google Pay is available in beta with the latest Android and JavaScript SDKs. It offers a mobile shopping experience, both in the application itself and on the mobile web, for customers with compatible Android devices. It also allows customers to pay with cards stored in their Google accounts, in addition to those stored in Android Pay. In other words, this new Google API extends the existing functionality of Google's Android Pay to include access to the Google card vault, which allows customers to make purchases quickly and safely on their Android mobile devices.

When selecting this new Google API in the Android application or on the mobile web, customers can pay using the cards associated with their Google accounts. Regarding compatibility, the place where a seller's business is domiciled determines whether it can accept Google Pay or not. Most merchants located in the regions mentioned below, depending on their processing configuration, will be able to accept "G-Pay" transactions from eligible customers with these types of cards:

    • United States: MasterCard, Visa, American Express and Discover.
    • Europe: MasterCard, Visa and American Express.
    • Australia: MasterCard, Visa and American Express.
    • APAC (Asia-Pacific): MasterCard, Visa and American Express.

When it comes to Europe, APAC and Australia, bear in mind that, in order to be eligible to accept the new Google API with American Express, you must process through Amex accounts. If you are not sure which configuration you need, we recommend visiting this Braintree page to contact them and resolve the issue.

Entrepreneurs and merchants domiciled in the United States and integrated with Google Pay will be able to accept Discover cards from their customers without any additional integration work. But if you use split shipments or recurring billing with this new Google API, you should contact Discover to set up the necessary permissions and avoid annoying rejections.

Regarding customer compatibility, right now businessmen and merchants can accept "G-Pay" from customers with Android devices in the following countries:

    • Australia
    • Belgium
    • Spain
    • United States
    • Hong Kong
    • Ireland
    • Poland
    • United Kingdom
    • Singapore
    • Ukraine

    When it comes to Ukraine, we must clarify that customers in this country can make purchases through this new Google API. However, merchants domiciled in this Eastern European country are not eligible to work through Braintree.

"G-Pay" transactions are processed and settled just like credit card transactions, but they can be identified in the Control Panel by their payment-method logo. As for fees, there are no standardized (or additional) rates for processing "G-Pay" transactions, since the price of this Google API is the same as for other credit card transactions.


    EBANX

Through this processor, business owners/merchants let their customers pay with local cards stored in their Google accounts. It is a good, uninterrupted payment experience: with a few clicks, customers who have stored credit card data in their Gmail, Android Pay, Google Play or YouTube accounts can safely pay for their purchases.

In Latin America, businesses can already offer local payment options to their customers by integrating EBANX with Google Pay. In general terms, the integration between "G-Pay" and the application or mobile website through EBANX is quick and easy. As for how it works, it is enough for the client to click the "Buy with Google Pay" button during checkout, then choose between using a payment method already stored or adding a new one to their Google account. The client then grants the merchant permission to access "G-Pay" and clicks "Send order", finishing the transaction.

    Paysafe

Through this processor, it is possible to simplify the payment process for customers by offering "G-Pay" on the mobile web and also in the Android application. It provides improved security and reduced PCI responsibility for greater peace of mind. The integration is simple and unified through the Paysafe SDK, while providing the full capabilities of Google Pay.

With this processor, cards from users' Google accounts are accepted. Additionally, the entrepreneur can offer an uninterrupted payment experience within the mobile application and also on the mobile web, with secure transactions using tokenization and fingerprint authentication where available.

    With Paysafe, customer data is protected, given that card data are not stored on the devices, while information is encrypted by Google and is only available to its payment providers, who are certified. For this new and better experience, “G-Pay” unlocks a global audience of hundreds of millions of new customers.

    And how can we configure Google Pay?

Using the JavaScript v3 SDK

The new Google API is available in beta with the latest versions of the JavaScript v3 SDK and the Android v2 SDK. Bear in mind that the JavaScript integration requires an Android device with Chrome v61 or higher. As for preparation, in order to accept payments through "G-Pay" in Sandbox or in Production, it must be enabled in the Control Panel. To do so, log in to the Sandbox Control Panel or the Production Control Panel and go to Settings > Processing > Google Pay.

If "G-Pay" is not yet enabled, click the corresponding button. If this new Google API is already enabled in our Control Panel but we need to enable this payment method, we will have to get in touch with the support team.

Through the Android v2 SDK

The Android SDK requires Google Play Services Wallet 11.4.0 or later. Regarding configuration, in order to accept payments through "G-Pay" in Sandbox or in Production, it must be enabled in the Control Panel; to do this, we must log in to the Sandbox Control Panel or the Production Control Panel.

Then we go to Settings > Processing > Google Pay and, if "G-Pay" is not yet enabled, click the corresponding button. If "G-Pay" is already enabled in our Control Panel but we need to enable this payment method for a commercial account, we will have to contact the support team. In any case, in Pandora FMS you will find valuable additional information for entrepreneurs.

Rodrigo Giraldo, freelance technical writer. A lawyer and astrobiology student, he is passionate about computing, reading and scientific research.

Geek, Monitoring

    Some reasons why you need a webcam cover

March 29, 2018 — by Alberto Dominguez

    webcam-cover-featured.png

    webcam cover

    Webcam cover; discover some reasons why you need to protect your PC

Perhaps you like to use your computer in summer, that special season when you walk around your house in your underwear thinking that nobody can see you. Or perhaps every member of your family shows up out of the blue on the screen of your phone. Or maybe someone can detect whether you are at home, at work or travelling. Or they might be able to spy on you and find out what you do when you work with your computer. This is not a joke: if you have a webcam, you are at risk, and everyone should respect your privacy.

    The truth is that there are many good reasons why you shouldn’t allow anyone to access the webcam of your laptop or smartphone.

The problem is that there are malicious people out there who actually think this way and are able to attack our privacy: from "voyeurs" to people interested in accessing details of our private lives or other sensitive data, such as passwords. What can we do if we come across someone like that?

Some people are already aware of the problem and take steps to protect themselves. We are pretty sure that at some point you've seen people using laptops on the street or in coffee shops who protect the webcams of their laptops with stickers, a small piece of plastic or even a post-it note. There are many ways to cover a webcam. In fact, some companies have realized how essential this is and have launched all kinds of small, nicely designed items to cover a webcam in a more elegant way. In any case, what matters is choosing something reliable to cover your webcam so that nobody can see what you do.

When it comes to mobile phones, we are talking about a really serious issue. We take smartphones with us everywhere, they are part of important moments in our lives, and most of them have cameras that can send any image, so these cameras should also be protected. However, in this case not many people actually do it.

Whether on a laptop, a smartphone, a tablet or any other device with a camera that can be reached via the Internet, the truth is that you should use a webcam cover so that nobody can access it and spy on you.

Yes, despite all the potential risks, there will always be people who consider all this a bit "too much". Maybe you're one of them. Why do you think so? Let's have a look at some additional factors so that you consider this type of protection:

• It's quite easy to access the webcam of your computer. Your device can be infected with malware in different ways, such as when downloading software, visiting a website or opening an email attachment, among other things. In addition, some apps available for free download might access our webcam, and sometimes we authorize that without even realising it…
• This type of malicious software is more popular than you might think. It is known as "creepware" and it's quite common. In fact, there are communities of webcam spies, so-called "voyeurs", and ill-intentioned people who exchange pictures, videos and malicious programs. Although it is impossible to know an exact figure, it is estimated that there are thousands of "infected" devices around the world. Be careful, yours might be one of them.
• You might think that the LED on your camera will warn you if the camera is on, but sometimes it won't work like that. In fact, this is not a reliable way to know whether the camera is being used. There may be unauthorized access to the camera without the LED lighting up, so someone might sneak in and spy on you without you even noticing.
• This type of "malware" (malicious software) is quite easy to avoid and protect against. Unlike other types of infection, where you might actually need the help of computer professionals, when it comes to this kind of spyware the most effective remedy is usually a physical cover that blocks the camera, giving you more security while still letting you use the camera when you need it.
• As you can see, these are some good reasons to cover the webcams found on your electronic devices. And if at some point you want to use one of your cameras, remember that you might be vulnerable to potential spies…

So this is it; we have seen some reasons why you need to be very careful with the cameras on your devices. Do you want to find out more?

      It would be great if you could take a few minutes to discover Pandora FMS. Pandora FMS is highly flexible monitoring software that suits the needs of your business or organization. Pandora FMS is able to monitor devices, infrastructure, applications, services and business processes.

      You already know that monitoring can be very important for a company. Do you want to know what a good monitoring software can do for you? Click here to learn more about Pandora FMS : https://pandorafms.com

Many businesses and organizations around the world, including universities, corporations, insurance companies, hospitals, transport companies and public entities, already use Pandora FMS. Do you want to discover some of our clients? Click here : https://pandorafms.com/customers/

      Or you can ask us any question you might have about Pandora FMS. You can do that using the contact form, which is found at the following address:
      https://pandorafms.com/company/contact/

And don't forget to leave a comment in the comment section down below. Let us know if you have found a different webcam cover; we look forward to hearing from you!
      And remember that…

      Our Pandora FMS team is a great team and they will be happy to help you!

Geek, Monitoring

    Shops of the future: What is smart retail?

March 22, 2018 — by Alberto Dominguez

    smart-retail-fetaured.png

    smart retail

    Smart retail. What is it? Discover how this will change everything

    What is smart retail? How will this change the future of retail?

    Some people think that stores will disappear over time. We will buy everything online, and if we want to “take a look” at the products that we want to buy, we will do that while sitting on the couch at home, via virtual reality.

However, it's quite difficult to imagine that the habit of shopping in person will disappear. A lot of people enjoy going to a physical store and searching for the product; it's an undeniable pleasure for them, even when they can easily buy anything from their computer.

However, the stores of the future will change; in fact, some of them have already changed for the better.

When we talk about "smart retail", we are talking about techniques and technologies used to make the customer experience more satisfying and more personalized.

The idea of smart retail is to close a sale, but also to give the customer a personalized experience so that they want to return to the establishment in the future. In order to achieve this, concepts such as the in-store "customer journey" are quite relevant.

Looking at it more closely, the customer journey is the route the client takes, from looking at the shop window until they leave after purchasing a product.

What is that route? What are the specific moments that make it up and create the customer experience? How can we improve each of those moments and join them together to create a complete, personalized shopping experience that satisfies the client enough to come back to the store? These are the kinds of questions smart retail answers.

    But, how will the customer experience change in the future? Okay let’s say that…

The interaction with the customer will start a few metres away from the establishment. Based on the client's proximity, personalized messages will be sent to the customer's phone, letting them know about products, prices or discounts.

Once the customer is in front of the store, they will be impressed by the shop window. Different screens and three-dimensional effects, designed to attract their attention, will appear before their eyes, prompting them to enter the establishment and start their own journey. In addition, these stores will be able to detect the gender or age of passers-by and offer personalized advertising accordingly.

    Once inside, we will see different things from what we find in a store nowadays. These will have become “exhibition halls” (like “museums”) in a way that the products will be there for us to observe, touch or test.

But wait, there's more: the products on display will be part of an environment and a story, in which smells and sounds are integrated so that customers become part of it, appealing to their emotions.

    And what will we do if we actually want something? It will be as easy as ordering the purchase through our phones or screens which will be enabled for that purpose, and our products will then be sent automatically to our house.

Bear in mind that some products will also be available immediately if we wish, but it won't be that common.

Customer service will also be quite different. Although some establishments will keep human employees, in others they will be replaced by robots and screens equipped with artificial intelligence. This will probably also determine opening hours and days, which in many cases will be extended, reaching 24 hours a day, 365 days a year.

    The fitting rooms will also be very different. These will also be equipped with artificial intelligence, and these will know your tastes and your measurements, and will be able to advise you and recommend products that might be interesting for you.

    Finally, the payment method will also be very different. The cashiers and the queues will disappear for good. The customers will pay for their purchases through their phones. In some cases, this will be done automatically, so that when the customers leave the establishment, they will get charged straight away.

    But, in addition to more “standard” stores, other highly specialized establishments will offer personalised experiences for customers who are keen to live unique moments. Do you want to buy Star Wars products? How about doing that inside the Death Star while a big space battle is held outside? Do you want to buy products for diving? What if you could do that in the environment of an impressive seabed? The possibilities will be endless…

    Now that you know what smart retail is and the changes that we will see in the stores of the future, you need to know that technology will be essential when developing these. And by technology, we mean monitoring. And when it comes to monitoring, the best thing is Pandora FMS.

    Wait; don’t you know what Pandora FMS is?

Pandora FMS is very flexible monitoring software, capable of monitoring devices, infrastructures, applications, services and business processes.

    Discover what Pandora FMS can do for you.
    Click here: https://pandorafms.com

    Companies and organizations all over the world already use Pandora FMS. Do you want to discover some of our clients? Click here: https://pandorafms.com/customers/

    Or you can also send us any question you might have about Pandora FMS. You can do that through the contact form that can be found at the following address: https://pandorafms.com/company/contact/

    Contact us.
    The Pandora FMS team will be willing to help you.

    And don’t forget to leave your comments in the comment section down below; your comments might help other readers of this blog. Let us know. We want to hear your thoughts!

    Thank you very much!

Databases, Monitoring

    Tutorial: guide to Tomcat monitoring

March 19, 2018 — by Rodrigo Giraldo Valencia

    tomcat-monitoring-featured.png

    tomcat monitoring

    Tomcat monitoring: learn how to monitor this application server

    1. Context

    1.1. What is Tomcat?

Before putting together a complete guide on Tomcat monitoring, let's see what it is and why you should consider it as an option. Apache Tomcat, or simply Tomcat, is a Servlet container used to compile and run web applications in Java. It implements and supports JSP pages ("Java Server Pages") as well as Servlets, and it is also compatible with technologies from the Java ecosystem such as Java WebSocket and Expression Language.

But what is a Java Servlet? It is a Java object that extends javax.servlet.http.HttpServlet and gives us the possibility of creating dynamic web applications, which means it allows us to run queries as well as insert or delete data.

These are small server-side programs written in Java that handle requests made through the HTTP protocol. Servlets receive requests from a web browser, process them and then send a response back to the browser, usually in HTML. To do this, they use the tools of the Java language.

Let's see what a Servlet container is. Remember that at the beginning of this article we mentioned that Tomcat is a Servlet container. Well, this container is a program capable of receiving requests from web pages and redirecting them to a specific Servlet object. One of the most popular containers is Apache Tomcat or, simply, Tomcat.

In the Pandora FMS module library we have Tomcat as an application server, as well as JMX Generic, which lists and reports the status of all the Servlets deployed in the application server, which for this tutorial is Tomcat itself. This clarification is necessary, since we are going to learn all about Tomcat monitoring through Pandora FMS.

To find out all about Tomcat, how Servlet containers work, what Apache Ant is, and the steps to create a Servlet with Java, we will explain it in this guide.

    tomcat monitoring

    1.2. Why should you use Tomcat on Cloud Servers?

Installing Tomcat in the Cloud (on Cloud servers) can be done quickly: in a few minutes we can have the latest versions of the required programs and libraries. Additionally, when a new version or security update of those programs appears, upgrading is quite simple.
If we also have an Apache web server working together with Tomcat, we will have an interesting range of possibilities offered by this powerful server. The database usually installed is MySQL, although it is also possible to install any other management system, such as PostgreSQL or Oracle.

    2. But how can I achieve Tomcat monitoring?

If we want to ensure the proper functioning of the applications running on the Tomcat server, it is essential to have specific control over the metrics and the characteristics of the runtime. This control is likewise essential to prevent or resolve problems in a timely manner. Tomcat performance monitoring can be carried out through JMX beans or using a specific monitoring tool; for that we could choose between JavaMelody, Tomcat Manager or even MoSKito.

But, as with any monitoring, it is important to know which things matter and what the acceptable values are for each metric. So let's see how we should configure Tomcat monitoring and review those metrics in order to keep its performance under control.

    “Tomcat Performance Metrics”. Whenever we want to verify the performance of an application deployed on a Tomcat server, there are several areas that can give us clues or concrete data about whether everything is working as expected, within the ideal parameters. These are the critical areas to be monitored:

    • Memory usage: this is a critical reading, because if the application is running out of heap memory it will slow down and could even throw an “Out of Memory” error. On the other hand, if very little memory is being used, the memory allocated to the server (and therefore its cost) could be reduced.
    • Thread usage: if there are too many active threads running at the same time, this can slow down the application or even the entire server.
    • Garbage collection: this process usually consumes a large amount of resources, so it is necessary to determine the right frequency at which it should run and to verify that each run frees enough memory.
    • Number of sessions: the number of sessions that the server can support, which, like the request rate, is an indicator of load.
    • Requests per unit of time: the number of requests the server can handle in a given unit of time. This metric helps us determine our hardware needs.
    • Database connection pool: monitoring it lets us adjust the number of pooled connections that the application actually requires.
    • Response time: if the system takes too long to respond to requests, users may abandon the service, so it is essential to keep response time under control and to investigate the possible causes of delays.
    • Uptime: a simple measure that shows how long the server has been running.
    • Error rates: a useful metric to identify potential problems in the code base.

    To finish this subtopic, bear in mind that the Tomcat server itself exposes JMX beans for most of these metrics, and that their values can be checked with a tool such as Tomcat Manager or JavaMelody. In our case, we are going to show readers how to monitor Tomcat from Pandora FMS, looking at it from two angles:

    • Monitoring and Managing Tomcat, in other words through Tomcat Manager, and
    • Tomcat performance monitoring with JavaMelody.

    2.1. Tomcat monitoring using Pandora FMS: Monitoring and Managing Tomcat, or Tomcat Manager

    We are not going to bore our readers with endless language details, since they can analyse the content of the links shown below. Looking inside a server, obtaining the necessary statistics or reconfiguring certain parts of an application are daily tasks in administration and monitoring.


    If we are going to monitor Tomcat remotely, it is essential to enable JMX Remote. If we are going to monitor it locally, that is, running as the same user as Tomcat, this configuration is not needed. The Oracle website lists the available options and contains the instructions to configure JMX Remote in Java:

    http://docs.oracle.com/javase/6/docs/technotes/guides/management/agent.html

    Quick configuration guide for Java 6: the parameters shown below must be added to the setenv.bat script of the corresponding Tomcat instance; see RUNNING.txt if you want more details.

    This syntax is for Microsoft Windows, and note that the command must be on a single line. If Tomcat is running as a Windows service, its configuration dialog must be used instead to set the Java options for the service. To do the same on *nix systems, remove “set ” from the beginning of the line (and place it in setenv.sh instead of setenv.bat). Let’s take a look:

    set CATALINA_OPTS=-Dcom.sun.management.jmxremote
    -Dcom.sun.management.jmxremote.port=%my.jmx.port%
    -Dcom.sun.management.jmxremote.ssl=false
    -Dcom.sun.management.jmxremote.authenticate=false

    When we need authentication, we make the following changes instead:

    -Dcom.sun.management.jmxremote.authenticate=true
    -Dcom.sun.management.jmxremote.password.file=../conf/jmxremote.password
    -Dcom.sun.management.jmxremote.access.file=../conf/jmxremote.access

    Subsequently, we edit the access authorization file $CATALINA_BASE/conf/jmxremote.access so that it looks like this:

    monitorRole readonly
    controlRole readwrite

    Then, we edit the password file $CATALINA_BASE/conf/jmxremote.password, like this:

    monitorRole tomcat
    controlRole tomcat

    Important tip: the password file must be read-only and accessible only by the operating system user that Tomcat runs as (for example, chmod 600 on Linux). In addition, the JSR-160 JMX adapter opens a second data channel on a random port, which can be a problem if a local firewall is installed. To solve it, we suggest configuring the JmxRemoteLifecycleListener, which lets you fix both ports.

    Now, from Monitoring and Managing Tomcat, it is also possible to:

    • Manage Tomcat with JMX remote Ant tasks
    • JMXAccessorOpenTask, to open a JMX connection
    • JMXAccessorGetTask, to get an MBean attribute value
    • JMXAccessorSetTask, to set an MBean attribute value
    • JMXAccessorInvokeTask, to invoke an MBean operation
    • JMXAccessorQueryTask, to query MBeans
    • JMXAccessorCreateTask, to create an MBean remotely
    • JMXAccessorUnregisterTask, to unregister an MBean remotely
    • JMXAccessorCondition, to express a condition
    • JMXAccessorEqualsCondition, an equals condition on MBean values
    • It is also possible to use the JMXProxyServlet bundled with the Manager application (see the sketch after the next paragraph)

    Bear in mind that each of the JMX tasks we have just listed has its own set of attributes and certain caveats that we must consider.
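    Regarding the last item on the list, the JMXProxyServlet that ships with the Manager application lets us read MBeans over plain HTTP. The following is only a sketch, assuming a local Tomcat on port 8080 and a hypothetical tomcat/tomcat user with the manager-jmx role; adapt the host, port and credentials to your own installation:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class JmxProxyQuery {
        public static void main(String[] args) throws Exception {
            // Query all ThreadPool MBeans through the Manager's jmxproxy endpoint.
            String query = "http://localhost:8080/manager/jmxproxy/?qry=Catalina:type=ThreadPool,*";
            HttpURLConnection conn = (HttpURLConnection) new URL(query).openConnection();

            // Basic authentication with the (hypothetical) manager-jmx user.
            String auth = Base64.getEncoder()
                    .encodeToString("tomcat:tomcat".getBytes(StandardCharsets.UTF_8));
            conn.setRequestProperty("Authorization", "Basic " + auth);

            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // Plain-text dump of the matching MBeans
                }
            }
        }
    }

    The response is a plain-text dump of the matching MBeans and their attributes, which makes it easy to parse from a script or a Pandora FMS plugin.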

    2.2. Tomcat monitoring using JavaMelody

    JavaMelody is the other monitoring tool we propose from Pandora FMS. To monitor Tomcat performance with JavaMelody in a project that uses Maven, we just need to add the javamelody-core dependency to the pom.xml, like this:


    <dependency>
        <groupId>net.bull.javamelody</groupId>
        <artifactId>javamelody-core</artifactId>
        <version>1.69.0</version>
    </dependency>

    That is the simple way to enable monitoring for our web application; once it is deployed, the monitoring screens are available at the /monitoring URL. JavaMelody provides quite useful graphs for the different metrics and performance measures, and it also offers a way to inspect the values of the Tomcat JMX beans, although most of those beans are specific to the JVM.

    Let’s come back to the metrics or “critical areas” to be monitored. Regarding memory usage specifically, monitoring the available and used memory is useful both to guarantee correct operation and for statistical purposes: when the system is no longer able to create new objects due to lack of memory, the JVM throws an exception.

    On garbage collection, remember that it is the process by which unused objects are discarded so that their memory can be released and reused. If the system spends more than 98% of the CPU time in garbage collection and recovers less than 2% of the heap, the JVM throws an OutOfMemoryError with the message “GC overhead limit exceeded”, which may indicate a memory leak, so we should watch these values and investigate the code.

    As for thread usage, Tomcat itself provides the ThreadPool MBean to find out the status of the threads in use. Its attributes currentThreadsBusy, currentThreadCount and maxThreads report, respectively, the number of threads currently busy, the number of threads currently in the pool, and the maximum number of threads that can be created. Bear in mind that Tomcat sets maxThreads to 200 by default.
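    To tie these metrics together, here is a minimal sketch of a JMX client that reads the heap usage and the ThreadPool attributes remotely. It assumes JMX Remote was enabled as in section 2.1 on a hypothetical port 9010, without authentication, and that the connector is called http-nio-8080 (the exact name varies between Tomcat versions and connectors):

    import java.lang.management.MemoryUsage;

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.openmbean.CompositeData;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class TomcatJmxCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical host/port; must match the jmxremote.port configured in setenv.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();

                // Heap memory usage, exposed by the standard java.lang:type=Memory MBean.
                CompositeData heap = (CompositeData) mbs.getAttribute(
                        new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
                MemoryUsage usage = MemoryUsage.from(heap);
                System.out.println("Heap used: " + usage.getUsed() + " / " + usage.getMax());

                // Tomcat ThreadPool MBean for the default HTTP connector (name may differ).
                ObjectName pool = new ObjectName(
                        "Catalina:type=ThreadPool,name=\"http-nio-8080\"");
                System.out.println("Busy threads: " + mbs.getAttribute(pool, "currentThreadsBusy"));
                System.out.println("Current threads: " + mbs.getAttribute(pool, "currentThreadCount"));
                System.out.println("Max threads: " + mbs.getAttribute(pool, "maxThreads"));
            } finally {
                connector.close();
            }
        }
    }

    The same values can be fed into Pandora FMS modules, for example through the JMX Generic plugin mentioned earlier.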

    Request throughput, response time, database connections and error rates are the other metrics that should be considered for Tomcat monitoring. In any case, you can analyse other aspects of this monitoring in this Pandora FMS document.

    Rodrigo Giraldo, freelance technical writer. A lawyer and astrobiology student, he is passionate about computing, reading and scientific research.