Much ado about a broadcast

I write this to remind us admins of the little things that sometimes cause us big headaches, and of the importance of monitoring, which is all too easy to neglect.

While I was onsite last month visiting a BWC member institution, the network admin happened to log into his router and noticed one of his local interfaces receiving up to 40Mbps of traffic. This is a network that rarely sees 10Mbps on that interface. On closer inspection using the Torch tool in MikroTik RouterOS, we found that a handful of local computers were sending unusual broadcast traffic dressed up to look like NetBIOS and DHCP. As first aid, we filtered the offending ports and the load on the router subsided… only to resurface on different ports a few minutes later.
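For reference, the first aid we applied looked roughly like the RouterOS commands below. The interface name and port numbers here are illustrative, not the actual ones from the incident; the real ports kept changing, which is exactly why this was only a stopgap.

```
# Inspect live traffic on the suspect interface (ether2 is an assumed name)
/tool torch interface=ether2

# First aid: drop the bogus traffic on the ports it was using at the time
/ip firewall filter add chain=input in-interface=ether2 protocol=udp \
    dst-port=137,138 action=drop comment="worm broadcast first-aid"
```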

Although the attacking machines could hop ports, the traffic always originated as broadcast, so that was the common denominator. You can’t just block all broadcast traffic, can you? If you did, legitimate DHCP, NetBIOS and everything else that relies on broadcast announcements would stop working. With managed switches, however, you can limit the percentage of broadcast traffic that each port is allowed to send.

As a better fix, we added broadcast rate limits on all the switches (Cisco and HP) within the campus that allowed us to do so. Yes, one more reason why you should have managed switches. The good news for institutions that consider themselves financially challenged is that you can now get managed 5-port Gigabit switches for less than N20,000. Be creative and use these on building-to-building links within your network so you can easily isolate/contain segments with anomalies. The ones running MikroTik SwOS include broadcast limiting as well as a number of other high-end features.
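The per-port limits were along these lines. The percentage, port ranges and interface names are illustrative, not the exact values we used; tune them against your own traffic baseline, and check your switch model’s documentation since syntax varies.

```
! Cisco IOS: cap broadcast at 1% of port bandwidth on the access ports
interface range GigabitEthernet1/0/1 - 24
 storm-control broadcast level 1.00
 storm-control action trap

; HP ProCurve equivalent (syntax varies by model and firmware)
interface 1-24 broadcast-limit 1
```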

Traffic-spewing worms are very common on pirated Windows systems that do not have up-to-date patches. For those who may not understand the impact of a few unpatched, worm-infested Windows computers, consider this:
You have a 24-port switch with 100Mbps ports. Computers on a number of ports start spewing out traffic at the speed of their network cards. Because it is broadcast, each one is effectively sending, say, 10Mbps to every other computer in the same subnet. All of a sudden, the switch finds itself replicating 10Mbps times the number of computers in the subnet. If the switch does not keel over while processing this spurious traffic, it passes it on to the gateway, which presumably has at least one leg in the problematic subnet. If your router is not well configured, it wastes CPU cycles handling the rubbish. I also think routing your networks (rather than keeping them flat-switched) will help contain the impact of broadcast storms, though some people say the overhead of multiple routers is a disadvantage.
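The arithmetic above can be sketched out quickly. The port speed, port count and per-host rates below are the same illustrative figures as in the scenario, not measurements from the incident:

```python
# Back-of-envelope: broadcast flooding load on a 24-port switch.
# A frame broadcast by one host is replicated out every other port,
# so each infected host's send rate is multiplied across the switch.

PORT_SPEED_MBPS = 100   # Fast Ethernet access ports
PORTS = 24

def per_port_broadcast_load(infected_hosts: int, rate_mbps: float) -> float:
    """Broadcast Mbps delivered out of each (non-sending) port."""
    return infected_hosts * rate_mbps

def switch_fabric_load(infected_hosts: int, rate_mbps: float,
                       ports: int = PORTS) -> float:
    """Total Mbps of replication work across all the other ports."""
    return infected_hosts * rate_mbps * (ports - 1)

# Three worm-infected hosts, each spewing 10 Mbps of broadcast:
print(per_port_broadcast_load(3, 10))   # 30 Mbps hitting every host's NIC
print(switch_fabric_load(3, 10))        # 690 Mbps of replication work

# Ten such hosts saturate every 100 Mbps port with junk alone:
print(per_port_broadcast_load(10, 10))  # 100 Mbps: the ports are full
```

Note that the junk scales with the number of infected hosts, not with how much legitimate traffic there is, which is why a quiet network can drown so quickly.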

In the final analysis, monitoring is good, and long-term monitoring is even better. Without traffic-utilization graphs, nobody would have noticed that Internet bandwidth was not being utilized to the maximum. Without interface-monitoring tools, we wouldn’t have known which interface to investigate further. Without network segmentation, the building in question wouldn’t have been located quickly.

However, without looking into your network often enough, you won’t be able to tell the difference between the “normal” and the “abnormal”. A step further would be to collect long-term data from NetFlow or traffic-flow capable devices for better visibility into what’s going on within your network. Think NfSen, Cacti and company: free and easy to install (sometimes even as virtual-machine appliances for the busy/lazy/smart).
