Security Myths?

I find it very interesting when an article debunks itself while talking about debunking myths. If you have not read the recent Network World article titled “13 Security Myths You’ll Hear – But Should You Believe?”, you can read it here:

http://www.networkworld.com/news/2012/021412-security-myths-256109.html?page=1

While most of the “myths” are very obvious to anyone who has worked in computer support for very long, one of them I found quite interesting. The third “myth” referenced in the article is, “Regular expiration (typically every 90 days) strengthens password systems.” First, while I completely disagree that this is a myth taken within the context of a complete security system including proper user training, it appears that the article itself debunks the debunking of this “myth”. Note the following from myth number 6, “He adds that while 30-day expiration might be good advice for some high-risk environments, it often is not the best policy because such a short period of time tends to induce users to develop predictable patterns or otherwise decrease the effectiveness of their passwords. A length of between 90 to 120 days is more realistic, he says.”

Now here’s the reality of it from my perspective. If you never change passwords, an internal employee can brute-force passwords for months or even years until he gains access to sensitive accounts. If you change passwords every 90+ days while using strong passwords that are easy to remember, you accomplish the best security. Strong passwords that are easy to remember can take weeks or months to crack with brute force. For example, the password S0L34r43ms3r is VERY easy to remember (well, it’s easy for me to remember, but you have no idea why). Brute forcing this password would take months with most systems. Therefore, I have a strong password. If I change it every 90-120 days, I will have a good balance of security and usability.
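
To put rough numbers behind this, here is a minimal back-of-the-envelope sketch in Python. The guesses-per-second figure is purely an assumption for illustration; real cracking speeds vary enormously with the hashing algorithm and hardware, so depending on what you assume the answer ranges from months to millennia. The point is not the exact figure but that a large keyspace, combined with rotation every 90-120 days, keeps the password ahead of the search.

  # Back-of-the-envelope brute-force estimate for a 12-character password
  # drawn from upper/lowercase letters and digits (like S0L34r43ms3r).
  # The guess rate is an assumed figure for illustration only.

  ALPHABET_SIZE = 62          # a-z, A-Z, 0-9
  PASSWORD_LENGTH = 12
  GUESSES_PER_SECOND = 1e10   # hypothetical attacker speed

  keyspace = ALPHABET_SIZE ** PASSWORD_LENGTH
  worst_case_seconds = keyspace / GUESSES_PER_SECOND
  worst_case_years = worst_case_seconds / (60 * 60 * 24 * 365)

  print(f"Keyspace: {keyspace:.2e} candidates")
  print(f"Worst-case search time: {worst_case_years:,.0f} years")
  # On average an attacker finds the password after searching half the
  # keyspace, but either way the search far exceeds a 90-120 day window.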

Does every employee need to change his or her password every 90-120 days? No, certainly not. Some employees have access to absolutely no sensitive information. We can allow them to change their passwords either every 6-12 months or never, depending on our security policies. The point is that different levels of access demand different levels of security.

While I felt the article was very good and it did reference some research to defend the “myth” suggested in relation to password resets, the reality is that the article and the research (which I’ve read) do not properly consider a full security system based on effective policies and training. Granted, few organizations implement such a system, but, hey, we’re only talking theory in this context anyway, right? It sure would be nice if security could move from theory to practical implementation in every organization, but it hasn’t. The reason? By and large, because most organizations (most are small companies) never experience a security incident beyond viruses, worms and DoS attacks. That’s just life.

IEEE 802.1X Authentication – Device Roles

The IEEE 802.1X (802.1X-2004) standard defines three roles involved in an authentication system that provides port-based access control:

  • Supplicant
  • Authenticator
  • Authentication Server

The supplicant is the entity containing the port that wishes to gain access to services offered by the system or network to which the authenticator is connected. Stated in common Wi-Fi terminology, the IEEE 802.1X supplicant is the device desiring to gain access to the WLAN.

The authenticator is the entity containing the port that wishes to enforce authentication before granting access to services offered by the system or network to which it is connected. Again, stated in common Wi-Fi terminology, the IEEE 802.1X authenticator is the access point (AP) through which the wireless clients connect to the network. In controller-based systems, it can also be the controller that acts as the authenticator.

The authentication server is the system that performs the authentication processes required to verify the credentials provided by the supplicant through the authenticator. RADIUS servers are commonly used as the authentication server in an IEEE 802.1X implementation for WLANs.

This is the first portion you must grasp to properly understand 802.1X authentication systems. You must know about these three roles and why they exist. It is important to remember that, in a WLAN, the authentication server is not likely a wireless device at all, but rather is a server computer or a network appliance that provides the service to the APs and controllers (as well as other devices requiring authentication to the network).

Finally, before leaving this introductory post, remember that IEEE 802.1X is not a wireless standard, it is an authentication framework that can be used by wired and wireless systems. Also, the actual authentication used will vary depending on the Extensible Authentication Protocol (EAP) type used. IEEE 802.1X provides the framework on which the actual EAP authentication protocols operate.
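
To make the three roles concrete, here is a minimal sketch modeling the framework’s division of labor. All class and method names are illustrative inventions, not a real 802.1X library, and the simple credential check stands in for whatever EAP method would actually be in use.

  # Minimal sketch of the three IEEE 802.1X roles and how an exchange
  # flows through them. This models the framework only; the actual
  # authentication depends on the EAP type in use.

  class AuthenticationServer:
      """e.g., a RADIUS server holding the credential store."""
      def __init__(self, credentials):
          self.credentials = credentials  # identity -> secret (illustrative)

      def verify(self, identity, secret):
          return self.credentials.get(identity) == secret

  class Authenticator:
      """e.g., the AP or WLAN controller; relays EAP, enforces the port."""
      def __init__(self, server):
          self.server = server
          self.port_open = False

      def handle_eapol(self, identity, secret):
          # The authenticator never checks credentials itself; it relays
          # the exchange to the authentication server (e.g., over RADIUS).
          self.port_open = self.server.verify(identity, secret)
          return self.port_open

  class Supplicant:
      """e.g., the wireless client wanting access to the WLAN."""
      def __init__(self, identity, secret):
          self.identity, self.secret = identity, secret

      def connect(self, authenticator):
          return authenticator.handle_eapol(self.identity, self.secret)

  server = AuthenticationServer({"alice": "s3cret"})
  ap = Authenticator(server)
  client = Supplicant("alice", "s3cret")
  print("Port authorized:", client.connect(ap))  # Port authorized: True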

Defining Wi-Fi: Noise Floor

This series of blogs (Defining Wi-Fi) will likely stretch to infinity. The blogs will focus on defining terms related to Wi-Fi at a level between the dictionary and a concise encyclopedia, but not quite matching either. Hopefully, the community finds them helpful over time.

NOTE: Entry created August 26, 2016.

The noise floor, in wireless networking, is the RF energy present in the receiver space, generated by other intentional and unintentional radiators (nearby or at a distance) as well as natural phenomena, that results in measurable electromagnetic energy. Defined differently, it is the sum of all the signals or energy sources that you aren’t trying to receive. It is a moment-by-moment factor in RF signal reception. The following capture from AirMagnet Spectrum XT shows the noise floor related to channels 1 and 6 in 2.4 GHz.

Noise Floor in Spectrum XT

Two myths are commonly believed about the noise floor.

  1. The noise floor is the same on all channels in a band.
  2. The noise floor can be measured at one moment and treated as a constant thereafter.

The first myth is very important as the noise floor may well be several dB higher on some channels than on others (remember, -95 dBm is higher than -100 dBm when measuring RF energy). This will impact SNR (read my definition of SNR here) and cause variance in the data rates available on those channels if not considered. While the noise floor may be constant across channels in what we sometimes call a “clean” environment, it is not uncommon to see channel 1 with a noise floor of, say, -97 dBm and channel 6 with a noise floor of, say, -95 dBm (these numbers are just for example purposes). This 2 dB variance is a difference of roughly 60% in signal strength. Depending on the received signal strength, it can easily result in a data rate 2-3 levels (or more) lower on the channel with the higher noise floor.
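
For the curious, here is how that roughly 60% figure falls out of the dB arithmetic; the noise floor values are the same example numbers used above.

  # How a small dB difference becomes a large percentage difference in
  # energy. The noise floor values are the example numbers from above.

  def db_to_power_ratio(delta_db):
      return 10 ** (delta_db / 10)

  noise_ch1 = -97   # dBm, example only
  noise_ch6 = -95   # dBm, example only
  delta = noise_ch6 - noise_ch1                  # 2 dB
  print(f"{db_to_power_ratio(delta):.2f}x")      # ~1.58x, roughly 60% more energy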

The second myth assumes that there are no intermittent radiators (a term used instead of transmitters to include unintentional radiators) present. Such radiators may only generate RF energy periodically and can be missed with a quick measurement. Additionally, such devices may cause reception problems after the WLAN is operational because they are used manually: a human turns them on when needed, and they cause no interference while at rest. Microwave ovens are a classic example.

We usually use the term interference (instead of noise floor), which I will define in detail in a later post, to reference nearby radiators that cause significant RF energy in a channel at levels greater than what the noise floor would be without them, such as the previously mentioned microwave oven. This differentiation is important because we can often do something about such components (remove them, change the channels, shield them, etc.). However, when considering the noise floor on a moment-by-moment basis, one could argue that these devices raise the noise floor. Why? Because even when they are present, a lower data rate Wi-Fi signal may be able to get through, if sufficient SNR can still be achieved.

However, if the other device is a transmitting device and not simply a radiating device, such a design decision may result in interference caused by the Wi-Fi device against the non-Wi-Fi device. Additionally, the Wi-Fi device is not likely to change its data rate based on one or even two frame retries. Therefore, the raised noise floor (interference in this case) results in higher retries instead of data rate shifts when the interference has a low duty cycle (does not communicate a large percentage of the time). Yes, it can get complicated.

Here’s a great analogy when considering the noise floor. Many people like to sleep with a fan on. Why do they do this? They are raising the noise floor (related to sound waves, of course, instead of RF electromagnetic waves). When the noise floor is raised around them, distant noises have less signal-to-noise ratio and are less likely to alert the sleeper. They are intentionally making it more difficult to receive audible signals by raising the noise floor.

The RF/electromagnetic noise floor is an important consideration in design. In an environment with a higher noise floor, the APs must be placed and configured with this in mind. Many vendor recommendations for AP placement and hardware specifications assume a particular noise floor (that they seldom communicate). If your environment presents a very different noise floor, their recommendations and receiver sensitivity ratings may not prove true.

Defining Wi-Fi: CCI (Co-Channel Interference) also called CCC (Co-Channel Contention)

This series of blogs (Defining Wi-Fi) will likely stretch to infinity. The blogs will focus on defining terms related to Wi-Fi at a level between the dictionary and a concise encyclopedia, but not quite matching either. Hopefully, the community finds them helpful over time.

NOTE: Entry created August 24, 2016.

Co-Channel Interference (CCI), also called Co-Channel Contention (CCC), which is the more apt name though not the one in the standard, is an important factor to consider in WLAN design. Co-Channel Interference occurs when a device (station) participating in one Basic Service Set (BSS) must defer access to the medium while a device from a different service set (either an infrastructure or independent BSS) is using the medium. This behavior is normal and is the intentional result of the 802.11 specifications. It is driven by the standard Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) algorithms defined in the 802.11 protocol.

For further understanding, consider the scenario where a laptop (STA1) is connected to an AP on channel 1 (AP1). Another AP (AP2) is on channel 1 at some distance, and another laptop (STA2) is connected to that remote AP. Even if the two APs are not required to defer to each other’s frames (because the signal level is too low), the two laptops must defer to each other’s frames if they can hear each other at a sufficient signal level.

CCI

That is, the two laptops are transmitting on channel 1 and they are within sufficient range of each other and, therefore, they must contend with each other for access to the medium, resulting in CCI. Additionally, both laptops may transmit a strong enough signal to cause both APs to defer even though they have chosen to associate to only one of the APs based on superior signal strength. Also, both APs may transmit a strong enough signal to cause both laptops to defer even though they are associated to only one of the APs.

To be clear, it is common for APs to create CCI with each other. The point of using this example is to eradicate, from the start, the common myth that CCI is just about APs. CCI is created by any 802.11 device operating on the same channel with sufficient received signal strength at another device on the same channel.

Now, because CCI is not like other RF interference, a modern movement to call it Co-Channel Contention (CCC) has started. In my opinion, this is not a bad thing. CCC brings more clarity to the picture. CCI is about contention and not traditional interference.

What we commonly call interference is a non-Wi-Fi signal, or a Wi-Fi signal from another channel, that corrupts the frames on the channel on which a device is operating. That is, with other types of interference, unlike contention, the Wi-Fi client may gain access to the medium and begin transmitting a frame while the non-Wi-Fi (or other-channel Wi-Fi) device is not communicating, such that the transmitting Wi-Fi device sees a clear channel. During the frame transmission, the other transmitter may begin transmitting as well, without acknowledgement of the current energy on the channel, and corrupt the Wi-Fi frame. This is not the same as CCI.

Excessive CCI results in very poor performance on the WLAN. With too many devices on a given channel, whether in a single BSS or from multiple service sets, the capacity of the channel is quickly consumed and performance per device is greatly diminished. For this reason, CCI must be carefully considered during WLAN design.
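
A crude way to see the capacity problem: every contender on the channel shares one collision domain, so even in the idealized case airtime divides evenly among all of them, whatever BSS they belong to. The capacity figure below is an assumption for illustration, and real contention overhead makes the picture worse.

  # Idealized airtime split on one channel: every 802.11 device that can
  # hear the others at sufficient signal strength shares a single
  # collision domain, regardless of which BSS it belongs to. The capacity
  # figure is an assumption; real contention overhead makes this worse.

  channel_capacity_mbps = 100

  for contenders in (2, 5, 10, 25, 50):
      per_device = channel_capacity_mbps / contenders
      print(f"{contenders:>3} contenders -> ~{per_device:5.1f} Mbps each")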

Defining Wi-Fi: SNR (Signal-to-Noise Ratio)

This series of blogs (Defining Wi-Fi) will likely stretch to infinity. The blogs will focus on defining terms related to Wi-Fi at a level between the dictionary and a concise encyclopedia, but not quite matching either. Hopefully, the community finds them helpful over time.

NOTE: Entry created August 20, 2016.

Signal-to-noise ratio (SNR) is a measurement used to define the quality of an RF signal at a given location. It is the primary determiner of data rate for a given device, as the SNR must be sufficient to achieve particular data rates. Simply stated, more complex modulation methods require higher SNR values to be demodulated, and low SNR values force the use of less complex modulation methods. More complex modulation methods result in higher data rates and less complex methods result in lower data rates.

NOTE: Whether you choose the word complex or detailed, the end meaning is the same. A higher SNR is required for higher data rate modulation methods.

The SNR is defined, in Wi-Fi, as the difference between the desired received signal and the noise floor. The noise floor may vary for each channel within the monitored band such that the noise floor may be greater for one channel than for another. Additionally, intermittent, non-Wi-Fi, interfering devices that use the same frequency range as the Wi-Fi device may reduce the SNR available at any moment.

SNR can be calculated with the following formula:

SNR = received signal strength – noise floor

For example, if the received signal strength is -75 dBm and the noise floor is -90 dBm, the SNR is 15 dB. An SNR of 25 dB or greater is desired for improved data rates and, therefore, improved throughput.

NOTE: SNR is defined in dB and not dBm as SNR is relative.
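
Here is the arithmetic as a small sketch, including the inverse question that matters for design: given a noise floor and a target SNR, what received signal strength do you need? All values are the dBm/dB examples from above, not vendor figures.

  # The SNR arithmetic above, plus the inverse question that matters for
  # design: what received signal strength does a target SNR require?

  def snr_db(rssi_dbm, noise_floor_dbm):
      return rssi_dbm - noise_floor_dbm

  def required_rssi_dbm(noise_floor_dbm, target_snr_db):
      return noise_floor_dbm + target_snr_db

  print(snr_db(-75, -90))             # 15 dB, as in the example above
  print(required_rssi_dbm(-90, 25))   # -65 dBm for 25 dB SNR
  print(required_rssi_dbm(-85, 25))   # -60 dBm if the noise floor rises 5 dB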

Finally, many vendor specification (spec) sheets list receive sensitivity values for specific data rates. They indicate that you can accomplish a particular data rate with a specified signal strength (or greater). The following is an example of such a spec sheet from the Orinoco USB-9100 802.11ac adapter.

Orinoco USB-9100 Spec Sheet

Remember that these spec sheets assume a noise floor value (which is never communicated in the spec sheets) and that a different noise floor than the one assumed would require a higher signal strength than the one listed, because SNR is what you actually need to achieve a given data rate. Also, remember that a higher signal strength is a smaller number in absolute value because we are referencing negative values; therefore, -65 dBm is higher than -70 dBm. This can sometimes be confusing to those new to Wi-Fi.

Defining Wi-Fi: Radio Frequency

This series of blogs (Defining Wi-Fi) will likely stretch to infinity. The blogs will focus on defining terms related to Wi-Fi at a level between the dictionary and a concise encyclopedia, but not quite matching either. Hopefully, the community finds them helpful over time.

NOTE: Entry created August 5, 2016.

Radio Frequency (RF) is a term used to reference a portion of the electromagnetic spectrum that is used for 802.11 (and other) network communications. Wi-Fi networks use RF waves in the microwave frequency range. Frequencies used in 802.11 networks range from 700 MHz to 60 GHz. The vast majority of Wi-Fi networks use the 2.4 GHz and 5 GHz frequency bands.

The Radio Spectrum – Image credit: CEPL

RF is the carrier used to modulate data. The RF waves are manipulated to represent digital data bits; the amplitude and phase of the wave can be changed to indicate binary data. Several modulation techniques are used in 802.11 networks, including Binary Phase Shift Keying (BPSK), Quadrature Phase Shift Keying (QPSK), 16-Quadrature Amplitude Modulation (16-QAM), 64-QAM and 256-QAM. The modulation used, in addition to coding rates and a few other factors, determines the data rate for a given channel width (range of frequencies used to define the channel).
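
As a quick illustration of the relationship, each 2^k-point constellation carries k bits per symbol, which is why 256-QAM moves eight times as many bits per symbol as BPSK. The sketch below simply tabulates this for the modulation methods named above.

  # Each 2^k-point constellation carries k bits per symbol; combined with
  # coding rate and channel width, this drives the data rate.
  import math

  modulations = {"BPSK": 2, "QPSK": 4, "16-QAM": 16, "64-QAM": 64, "256-QAM": 256}

  for name, points in modulations.items():
      bits = int(math.log2(points))
      print(f"{name:>7}: {bits} bit(s) per symbol")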