Why end-to-end monitoring? The DNS filtering example

Let’s imagine the following situation.

Let’s suppose that a government (in any country) intends to block what it considers illegal content on the Internet. After many deliberations, but without any judicial review, this government authorizes the filtering of the Internet under the supervision of its police services. I am not even getting into the ethical issues this solution raises – I have already written a lot about them (on my blog, for instance).

Let’s shift our focus to the technical side of this proposal and its obvious consequences. As I already mentioned in a previous article: “Centralizing again what should not be centralized will undeniably weaken the Internet’s resistance, which will thus be more vulnerable in the event of an attack or of side-effects.”

The challenge now is to find a way to be proactive about these risks, rather than waiting to endure, sooner or later, the malfunctions that will inevitably occur.

 

Filtering by server name

The solution chosen by the “government” we are talking about is to filter access to websites through the server name resolution mechanism (DNS). The “resolver”, often provided by the Internet service provider, is what maps the name of the website you want to reach (www.witbe.net, for instance) to the physical address of the server that is supposed to answer the request (81.88.96.84).
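To make the mechanism concrete, here is a minimal sketch in Python, using only the standard library, that asks the operating system to resolve a name – which, on a typical home connection, ends up querying the ISP’s resolver. The host name and the address in the comment simply come from the example above.

    import socket

    # Ask the system's configured resolver (typically the ISP's on a home
    # connection) to translate a host name into the IP address of the server
    # that should answer.
    hostname = "www.witbe.net"
    address = socket.gethostbyname(hostname)
    print(f"{hostname} resolves to {address}")
    # With a healthy resolver, the printed address matches the one the site's
    # operator actually published (81.88.96.84 in the example above).

Every connection to the site starts from the answer returned by this single lookup, which is precisely what makes the resolver such a sensitive control point.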

But this method is risky on several levels, because of:

  • Over-blocking risks: the filter applies to an entire server, not just the specific web page hosting the illegal content
  • Ineffectiveness of the solution: this type of control is easy to bypass
  • Escalation risks: once it becomes clear how easy this measure is to circumvent, Internet service providers may be forced to accept name resolution requests only through their own servers, “under the control” of the government – or even to deploy DPI (Deep Packet Inspection) mechanisms to go even further in controlling who does what, where and when
  • Political risks: for now, we live in a democracy. What happens the day we decide, for security reasons, to re-centralize the operation of Internet networks, and this control falls into the wrong hands?
  • Side-effect risks: in particular, the introduction of technologies that alter the normal functioning of Internet mechanisms (resolvers, routing…) by introducing exceptions

" We learned a few days ago that websites such as Google.fr and Wikipedia found themselves censored after a technical issue. "

Is all of this possible?

Yes, and this is exactly what just happened in France. We learned a few days ago that websites such as Google.fr and Wikipedia found themselves censored after a technical incident. Users were redirected to a French government page stating that they had tried to connect to a site whose content incites terrorism or publicly condones terrorism.

With plenty of unhappy customers on its hands, Orange did not take long to respond:

“As a result of a human error during a technical operation on a server, our customers may have had trouble reaching google.fr and wikipedia.fr and found themselves redirected to a notification page of the French Ministry. The incident lasted approximately an hour, and access to these pages is being restored.”

The point here is not to start finger-pointing. Mistakes happen, and this one was predictable. It can happen to any operator, just as it WILL happen to any operator. The problem, as usual, is not checking after the fact but keeping a sensitive situation under control.

 

These mistakes are inevitable

Intentional or not, mistakes can happen, and they are easy to correct… provided the issue is detected before it is too late, that is, before too many customers suffer from it.
When they are technical, these mistakes can be much harder to control and correct because of the technologies involved. Indeed, because of the caching mechanisms at work, there isn’t ONE name resolution server but many, which have to synchronize with a common source and then “live their own lives”. “Root pollution” situations, or worse, outright hijacking, can always occur and send a request to the wrong website.
This is what happens when a website identified as “terrorist” is redirected to the French Interior Ministry’s “parking” page. We could just as easily imagine that, for a given geographic area served by a resolver, a “hacker” redirects a bank’s website, Google or e-commerce sites to servers of his own. Imagine the damage… and on top of that, it is hard to detect and even to perceive.
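One practical way to spot this kind of divergence is to ask the same question to several resolvers and compare their answers. Here is a minimal sketch of that idea in Python; it assumes the third-party dnspython library (version 2.0 or later) is installed, and the resolver addresses and monitored name are illustrative placeholders, not a recommendation.

    import dns.resolver  # third-party "dnspython" package, assumed installed

    # Hypothetical resolvers to cross-check; in practice, use the resolvers
    # actually deployed in the network you want to watch.
    RESOLVERS = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]
    NAME = "www.witbe.net"  # name used as an example earlier

    answers = {}
    for ip in RESOLVERS:
        res = dns.resolver.Resolver(configure=False)
        res.nameservers = [ip]
        try:
            rrset = res.resolve(NAME, "A", lifetime=2.0)
            answers[ip] = tuple(sorted(r.address for r in rrset))
        except Exception as exc:  # timeout, NXDOMAIN, SERVFAIL...
            answers[ip] = ("error", str(exc))

    # A resolver that disagrees with the others may have a polluted cache or a
    # tampered answer; either way, it deserves immediate attention.
    if len(set(answers.values())) > 1:
        print("Resolvers disagree:", answers)
    else:
        print("All resolvers agree:", answers)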

Classic monitoring solutions fall short because they tend to focus more on the availability of a service (or of a piece of equipment) than on the quality of its response.

" It is fundamental to be warned as soon as possible of any malfunction, before the user gets to experience it, to experience it too often, and starts complaining for real. In general, when a user complains, it is already too late. "

“We can’t solve problems by using the same kind of thinking we used when we created them.”

… said Albert Einstein. We can always try to control everything, but nothing can replace practice. And this is where Remote Monitoring, as we recommend it at Witbe, can be of great help, by providing crucial information, gathered from where the operational risk actually lies, regarding:

  • the availability of the resolvers
  • their performance: how long do they take to respond, are they sensitive to load, and when?
  • their integrity: … and here we are. Do they respond correctly? When someone wants to reach www.bnp.fr, is the connection actually made to 213.56.75.132?

The name resolution we want to control must indeed be queried where it is actually served. One must “supervise” and closely watch the proper functioning of every resolver deployed in the network.
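As an illustration of these three checks, here is a minimal sketch in Python, again assuming the dnspython library; the resolver address is a placeholder, and the monitored name and expected address are simply the www.bnp.fr example quoted above, not values to rely on.

    import time

    import dns.exception
    import dns.resolver  # third-party "dnspython" package, assumed installed

    RESOLVER_IP = "192.0.2.53"    # hypothetical resolver deployed in the network
    NAME = "www.bnp.fr"           # name taken from the example above
    EXPECTED = {"213.56.75.132"}  # address we expect the resolver to return

    res = dns.resolver.Resolver(configure=False)
    res.nameservers = [RESOLVER_IP]

    start = time.monotonic()
    try:
        rrset = res.resolve(NAME, "A", lifetime=2.0)
        elapsed_ms = (time.monotonic() - start) * 1000
        got = {r.address for r in rrset}
        print(f"availability: OK, performance: {elapsed_ms:.1f} ms")
        print("integrity:", "OK" if got == EXPECTED else f"MISMATCH: {got}")
    except dns.exception.DNSException as exc:
        print(f"availability: FAILED ({exc})")

Run from inside the network, against each deployed resolver and on a regular schedule, this kind of probe answers exactly the three questions above.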

 

But there are solutions: Witbe SLM Robots can perform this type of monitoring

They can check that a resolver is responding correctly.
They can also check that a website that is supposed to be blocked is still being censored. In other words, they can bypass the local resolver mechanism and replay part of the OCLCTIC list (the French Central Office for combating crime linked to information and communication technologies) to make sure that those websites are still correctly blocked as the list is updated in the resolvers.
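To give an idea of what such a check might look like, here is a minimal sketch in Python, again assuming dnspython; this is not Witbe’s implementation, and the ISP resolver address, the blocking-page address, and the domain are placeholders, since the actual OCLCTIC list is not public.

    import dns.exception
    import dns.resolver  # third-party "dnspython" package, assumed installed

    ISP_RESOLVER = "192.0.2.53"            # hypothetical ISP resolver to audit
    BLOCK_PAGE = {"192.0.2.80"}            # hypothetical address of the notification page
    BLOCKED_DOMAINS = ["blocked.example"]  # placeholder; the real list is not public

    res = dns.resolver.Resolver(configure=False)
    res.nameservers = [ISP_RESOLVER]

    for domain in BLOCKED_DOMAINS:
        try:
            got = {r.address for r in res.resolve(domain, "A", lifetime=2.0)}
            status = "still blocked" if got <= BLOCK_PAGE else f"NOT blocked: {got}"
        except dns.exception.DNSException as exc:
            status = f"no answer ({exc})"
        print(f"{domain}: {status}")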

We see it every day in the Internet world: practice overrides theory. It is essential to be warned of any malfunction as soon as possible, before the user experiences it, experiences it too often, and starts complaining for real. In general, when a user complains, it is already too late. QoE is then affected over the long term, and all the excellent work done before is wasted.