Welcome to another blog in the category of ‘Oooh! What does this button do?’. This time we will take a look at the URL/FQDN filtering feature of NSX-T. With this feature you can allow or deny traffic to specific URLs/FQDNs. The feature was introduced in NSX-T 2.4, but at that point it could only be used with a predefined list of domains. In the current version, NSX-T 3.1, it is also possible to add your own URLs or FQDNs. There are two limitations: an FQDN can be at most 64 characters long, and a custom FQDN must end with a registered top-level domain. A *.local FQDN, for example, is not possible.
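To make those two limitations concrete, here is a minimal sketch of the checks in Python. This is purely illustrative and not NSX-T’s actual validation logic; the TLD list is a tiny subset I picked for the demo.

```python
# Illustrative sketch of the custom-FQDN constraints described above.
# NOTE: this is NOT NSX-T's validation code; the TLD set below is a
# small subset used only for demonstration.
REGISTERED_TLDS = {"com", "net", "org", "nl", "nu"}  # illustrative subset

def is_valid_custom_fqdn(fqdn: str) -> bool:
    """Apply the documented limits: max 64 characters, registered TLD."""
    if len(fqdn) > 64:
        return False
    tld = fqdn.rsplit(".", 1)[-1].lower()
    return tld in REGISTERED_TLDS

print(is_valid_custom_fqdn("*.redlogic.nl"))  # True
print(is_valid_custom_fqdn("*.local"))        # False: .local is not registered
```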
Because this feature is only available in NSX-T, it is a nice bonus when you migrate from NSX-V to NSX-T. The only option you have in NSX-V is to create an IPSet for the URL/FQDN, which becomes troublesome when the FQDN resolves to many changing IP addresses. A work-around is a scheduled task that updates the IPSet, but that is not a very clean way of working and is asking for errors.
I will now show you how the URL filtering feature works with some examples. The feature uses DNS snooping to obtain a mapping between the IP address and the FQDN. The first step to allow us to use this feature is to create a firewall rule that enables DNS filtering on Layer 7. I’ve set up the following rules in our lab:
- Rule 6123 – Allows traffic towards DNS, including the Layer-7 context profile for DNS.
- Rule 6120 – Allows traffic towards the internet for a tag-based source group. In our lab we define the internet as everything outside RFC 1918.
- Rule 6121 – Denies all traffic towards the internet.
As expected, I’m not able to browse the internet from my test VM. If I look at the firewall rules applied to my test VM, I can see that only rules 6123 and 6120 are applied.
From the command line we can also view the configuration of the RFC_1918 group, which is negated in the firewall rule.
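The negated RFC_1918 group boils down to “any address outside the three private ranges”. That definition is easy to express with Python’s standard `ipaddress` module; this is just a lab illustration of the logic, not how NSX-T evaluates the negated group:

```python
import ipaddress

# "Internet" in our lab = any address outside the RFC 1918 private ranges.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_internet(ip: str) -> bool:
    """True when the address falls outside all RFC 1918 ranges."""
    addr = ipaddress.ip_address(ip)
    return not any(addr in net for net in RFC1918)

print(is_internet("192.168.10.5"))  # False: private, so not "internet"
print(is_internet("145.220.1.1"))   # True: public address
```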
For the first test I will open our own website, redlogic.nl. Adding a custom URL or FQDN can be done from Inventory > Context Profiles, where you can also find all the predefined FQDNs.
Custom FQDNs can be created with or without the *. prefix to include subdomains. In our lab I’m also adding *.nu.nl and vexpert.vmware.com to show some examples later in this blog. The next step is to add the FQDN to a context profile. This can also be done during the creation of the firewall rule, but now you also know where to find the configuration.
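The difference between the two forms can be sketched in a few lines. This mimics the wildcard behaviour as I understand it, not NSX-T’s matching code; in particular, whether `*.nu.nl` also matches the bare `nu.nl` is my assumption here.

```python
# Illustration of the wildcard behaviour described above: an entry with
# the "*." prefix also matches subdomains, an entry without it matches
# only the exact name. Not NSX-T's matching code; whether the bare
# domain itself matches the wildcard is an assumption.
def fqdn_matches(entry: str, hostname: str) -> bool:
    if entry.startswith("*."):
        base = entry[2:]
        return hostname == base or hostname.endswith("." + base)
    return hostname == entry

print(fqdn_matches("*.nu.nl", "www.nu.nl"))              # True
print(fqdn_matches("vexpert.vmware.com", "vmware.com"))  # False: exact only
```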
The next step is to add a firewall rule with the newly created context profile to allow traffic towards our website.
Because we only allow traffic towards *.redlogic.nl, and like almost every website it tries to connect to other URLs, the page loads a bit slower. But we can now load the page that timed out before, as shown for the URL nu.nl.
But it is not always that straightforward, because as stated before many websites use redirects, pull in external content, or are hosted in clouds. My first idea for this blog was to use the vExpert website as the example, but that did not work with the initial configuration.
With this configuration I was unable to access the vExpert website from my test VM. This is probably caused by the vExpert website being hosted in AWS behind an Elastic Load Balancer. This can be found either by resolving the website with an nslookup command or by viewing the FQDN entries of the distributed firewall on the filter for my test VM, using the CLI command vsipioctl getfqdnentries -f <DFW-filter>.
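What happens here can be illustrated with a resolution chain: the browser asks for vexpert.vmware.com, but the DNS answer is a CNAME pointing into AWS, so the final name the snooping sees belongs to the ELB, not the name you allowed. The chain below is invented for illustration; run nslookup yourself to see the real records.

```python
# Hypothetical CNAME chain, invented for illustration only. Real records
# will differ; resolve the site with nslookup to see the actual chain.
CNAME_CHAIN = {
    "vexpert.vmware.com": "example-lb.us-east-1.elb.amazonaws.com",
}

def resolve_final_name(hostname: str) -> str:
    """Follow CNAMEs until we reach the final name DNS snooping records."""
    while hostname in CNAME_CHAIN:
        hostname = CNAME_CHAIN[hostname]
    return hostname

final = resolve_final_name("vexpert.vmware.com")
# The final name sits under elb.amazonaws.com, which is why allowing
# only the vExpert FQDN is not enough and *.elb.amazonaws.com is needed.
print(final)
```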
So, after adding *.elb.amazonaws.com to the list of FQDNs for the vExpert website, I was able to access it. But again, accessing the website took a lot more patience than before.
As a last test I created a firewall rule before the DNS firewall rule to see if that made any difference, but at least in my lab setup I was still able to access the website nu.nl.
So, how useful is this feature in NSX-T?
As shown in the examples above, it is not always straightforward to allow access to ‘customer’ websites using this method, but I don’t think that is the main use case. To me this is more useful for allowing servers to retrieve updates from specific URLs without opening up complete internet access. Besides that, I haven’t looked at deploying this on a large scale, so I can’t make any predictions about the performance impact or additional limitations.
Again, it’s always fun and educational to test these features in a lab environment. Hopefully this information is useful to you. Thanks for reading the blog, and if you have any questions or remarks, please send them to me.
Questions, Remarks & Comments
If you have any questions and need more clarification, we are more than happy to dig deeper. Any comments are also appreciated. You can either post them online or send them directly to the author; it’s your choice.