Old Skills – Still Valuable

I have been working with various Linux distributions much more these days than in the past. Spending all that time in the shell has flooded my mind with memories of days gone by. Back when we had to know our systems well just to configure something as simple as booting (config.sys and autoexec.bat), we mastered many technical skills along the way. I am amazed, nearly every day, at how often those old skills still prove valuable to me.

Remember screens like this?

Config.sys in Edit
The mem Command

If so, you worked with DOS. If not, don't despair: you can learn the skills you need to get by in the Windows Command Prompt, PowerShell, or the shell of a Linux distribution.

In this post, I'm going to focus on three skills we had to master in the DOS days that are still valuable today:

  1. Getting Help
  2. System Diagnostics with Commands
  3. Automating Work

Getting Help

At the DOS prompt (and still in the Command Prompt or PowerShell on Windows and the shell on Linux), help was always just a simple switch away. For nearly every command or program, you could simply add /? to the command to find out exactly what it could do. Those who learned (and still learn) commands this way tend to be far more capable users and administrators than those who only learn specific parameters for specific tasks from books, blogs, and articles.

The reason is simple: when you use the built-in help to see everything a command can do, you often learn of uses that others have not demonstrated or used themselves.

Consider the mem command shown earlier from DOS. If you simply typed mem and pressed ENTER, you saw a screen like the following.

Simple mem Command Output

Now look at everything you could learn about the mem command by using the /? parameter.

Getting Help for the mem Command

I can already hear someone saying, “Wait, Tom. The mem command is not in the Windows Command Prompt anymore. How does this help?” That’s a great question. The answer is that, once you learn how to get help, you can find other memory-related commands and use them to their full potential. Consider the tasklist command in Windows.

The following screen shows the output of a basic tasklist command with no parameters:

Raw tasklist Command Output

It shows every process, regardless of how much memory each one consumes. Now look at the help for the tasklist command using the /? parameter.

tasklist Help

Notice that you can do several things to refine the list, particularly in relation to memory usage.

Armed with this information, I can now use the /FI filter parameter to see only tasks consuming more than 15,000 kilobytes of memory with the tasklist /FI "MEMUSAGE gt 15000" command.

Filtering for High Memory Usage Tasks with tasklist
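
By the way, if you prefer to script this kind of check, a rough cross-platform equivalent in Python looks like the following. This is purely my illustration (it is not in the tasklist help), it assumes the third-party psutil package is installed, and it applies the same 15,000 KB threshold.

    # Rough Python equivalent of: tasklist /FI "MEMUSAGE gt 15000"
    # Assumes the third-party psutil package is installed (pip install psutil).
    import psutil

    THRESHOLD_KB = 15000

    for proc in psutil.process_iter(['pid', 'name', 'memory_info']):
        mem = proc.info['memory_info']
        if mem is None:                      # access denied for some processes
            continue
        mem_kb = mem.rss // 1024             # resident/working set memory in KB
        if mem_kb > THRESHOLD_KB:
            print(f"{proc.info['name']:<30} PID {proc.info['pid']:>6}  {mem_kb:,} KB")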

As you can see, getting help is key to learning Command Prompt or shell commands. In Linux, you typically use the --help parameter for this. In PowerShell, use the Get-Help cmdlet.

System Diagnostics with Commands

The old DOS prompt gave us several tools for performing system diagnostics. In addition to the mem command, you had commands like chkdsk and ver (both still in the Command Prompt) and undelete (sadly, no longer with us). The Command Prompt is actually far more powerful today in Windows than it ever was in DOS. Dozens of additional commands are available for diagnostics. In addition to tasklist, important commands include:

  • sc – service management
  • ipconfig – IP configuration viewing and management
  • netsh – a plethora of networking functions
  • systeminfo – viewing information about hardware and software
  • ftype – working with file associations

This is a very brief starter list. Type help at the Command Prompt (just like in DOS by the way) to see a list of common commands as shown in the following image. Remember to use the /? parameter with them to learn all the details of how they work.

Partial Output from the Command Prompt help Command

Automating Work

Finally, you can automate the Command Prompt with batch files, Windows more broadly with PowerShell scripts, and the Linux shell with bash scripts. Batch files work almost exactly the same way in the Windows Command Prompt today as they did in DOS 25+ years ago when I first used them. Of course, some of the old commands are gone, but the logic and concepts are still the same.
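
As a trivial illustration of the idea, the following short sketch runs a few of the diagnostic commands from the previous section and saves their output to a dated report file. I have written it in Python purely for illustration (a batch or PowerShell script could do the same job), and it assumes a Windows machine since it calls Windows commands.

    # A minimal automation sketch: run some diagnostic commands and save a report.
    # Assumes Windows, since it calls Windows commands; substitute the equivalent
    # commands on Linux.
    import subprocess
    from datetime import date
    from pathlib import Path

    commands = ["systeminfo", "ipconfig /all", "tasklist"]
    report = Path(f"diagnostics-{date.today()}.txt")

    with report.open("w", encoding="utf-8", errors="replace") as out:
        for cmd in commands:
            out.write(f"===== {cmd} =====\n")
            result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            out.write(result.stdout + "\n")

    print(f"Report written to {report}")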

The point of this post is simple: never discount old knowledge. It continues to benefit you today. In fact, I can say plainly that I passed a certification exam a couple of years ago almost entirely because I knew DOS all those years ago. And, yes, I still have my old DOS books, including great books on batch files. Here's a picture of just one.

Inside MS-DOS 6.22

And, yes, the disk is still included after all these years 🙂

Happy shelling!

The 802.11ac 1 Gbps Uplink Myth

The following info-graphic illustrates as succinctly as possible why it is a myth that 802.11ac APs require more than a 1 Gbps uplink to the switch. I will be presenting a CWNP webinar on this on January 19th, but was thinking through some things this weekend and decided to share a graphic with the world.

NOTE: You are free to use this info-graphic in any way you desire. In print, online, in free distribution or paid distribution. It is yours to use. Please just give credit to the source or leave the Copyright reference in the image if used.

This information reveals why it is a myth that 802.11ac APs require more than a 1 Gbps uplink to the switch.
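
If you want one back-of-the-envelope version of the argument before the webinar, here it is. The numbers below are my illustrative assumptions (they are not taken from the info-graphic), but they show the shape of the math: even a 3-stream, 80 MHz 802.11ac AP falls short of a single 1 Gbps link once real-world MAC overhead is considered.

    # Illustrative only: rough real-world throughput of a wave 1 802.11ac AP.
    # The PHY rate is the published maximum for 3 spatial streams at 80 MHz;
    # the MAC efficiency figure is an assumed, generous value.
    PHY_RATE_MBPS = 1300      # 3x3:3, 80 MHz, VHT MCS 9
    MAC_EFFICIENCY = 0.60     # assumed share of airtime that becomes user data

    effective_mbps = PHY_RATE_MBPS * MAC_EFFICIENCY
    print(f"~{effective_mbps:.0f} Mbps of throughput vs. a 1,000 Mbps uplink")
    # -> roughly 780 Mbps, comfortably inside a single 1 Gbps link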

Of Inches and Feet – Or the Origin of a Poor Measuring System

Have you ever wondered why we use inches and feet in the United States, or where they came from, when the metric system seems to make so much more sense? I mean, really, 12 inches in a foot, 3 feet in a yard. How is this easier than 100 centimeters in a meter? Additionally, with the centimeter being smaller than the inch, the metric system, without even addressing millimeters, allows for greater precision. More precision, simpler conversion from one unit to another; so where in the world did inches and feet come from?

Well, the inch, according to some, was originally the width of a man’s thumb. Therefore, as he was working he could simply measure out three thumb widths, or four, or five, or whatever length he desired, and he would have consistency in his measurements – within some measure of variance. The important thing to remember is that HE would have consistency in HIS measurements. If another man measured out the same three thumb widths, the actual length, width or height would vary. But, since every item created in the days of yore was a one-off item, this was not a real problem for many craftsmen.

Eventually, around the 14th century, the inch was defined as three barleycorns placed end-to-end. Of course, whether you use the human body or a plant to define a measurement, you are going to end up with inconsistency.

The yard was originally the length of a man’s belt or his girth, according to some sources. Again, depending on your dietary practices, your measurement would be different from another man’s. And your measurement would differ throughout life – at least mine would.

Interestingly, over the years, consistency was developed not for a pure desire for standardization, but out of governmental desire for more taxes. According to The Weights and Measures of England, by R. D. Connor, standardizing on yards and inches (instead of yards and handfuls) was implemented to prevent cloth merchants from avoiding taxes. We can always count on the greed of rulers to provide a standard if nothing else will do.

Thankfully, the modern world is moving more and more to the metric system (most of the world outside the U.S. already uses it), and we can get away from what is now a consistent but confusing system and use one that is both consistent and simple. No longer will I have to teach my small children or grandchildren creative techniques for remembering that 12 inches make a foot and 3 feet make a yard.

Now we just have to get rid of miles so we don’t have to talk about 5280 feet in a mile anymore. 1000 meters in a kilometer is so much easier, don’t you think?

What If?

Have you ever wondered what the world would be like if we always listened to the experts?

You walk down the front walk of your summer home and approach the moving truck parked in the driveway. It is a normal thing in today’s advanced society to work for the same company, but roam from place to place. You think to yourself, “How did they ever live without these portable computers all those years ago?” The ability to take a computer on the road with you wasn’t even a dream in the beginning, but now it’s a reality.

You open the rear door of the moving truck and step up through the narrow opening. The area is very crowded, but there is just enough room for your office chair and a few spare square feet of desk space. “Ahhh… advancements in technology,” you say to the bare walls of the truck. “Now then, let’s kick start this baby.”

After a long day of work in these tight quarters, you step out of the twenty-six foot moving truck and walk back into the house.

You are probably thinking, “C’mon, Tom. What are you talking about? There are no portable computers that require a moving truck to haul them.” Right you are, but this is – WHAT IF?

Here is what was reported in 1949:

Where a calculator on the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 vacuum tubes and perhaps only weigh 1 1/2 tons.
-Popular Mechanics, March 1949

Didn’t know someone said that? Just hold on because there is much more to follow.

The truth of it all is that we’ve all been deceived. They have tried to convince us that these computers are worth something. The things we’re doing with them aren’t really productive; they are worthless. You don’t agree? Why not? A couple of really important people did:

Worthless.
-Sir George Biddell Airy (Astronomer Royal of Great Britain), in reference to the potential value of Charles Babbage’s analytical engine. Babbage is considered the inventor of the computer today.

“What the ____ is it good for?”
-Robert Lloyd (an engineer at IBM) in response to colleagues who insisted that the microprocessor was the future of the computer industry in 1968.

Do you agree? Neither do I, but what if we had listened to them?

How many new computers are shipped every year? Not too many; as a matter of fact, it has averaged 0.09 computers every year for the last forty-five years. Wow! How did we ever do it? Since 1943 we have developed and marketed five whole computers.

I realize that this sounds a little far-fetched, but I can prove it:

“I think there is a world market for about five computers.”
-Thomas Watson, Chairman of the Board of IBM, 1943

In reality, however, we have produced and marketed, not only millions of systems, but literally hundreds of different kinds of computer systems as well. He said five computers, but look at this list of computer types over the last thirty to forty years:

  • ATARI 800
  • IBM
  • Apple II
  • Apple IIe
  • Apple IIgs
  • Apple Macintosh
  • Commodore VIC-20 (I loved this one.)
  • Commodore 64 (Another personal favorite.)
  • Commodore 128
  • Commodore Amiga
  • TRS-80
  • Tandy 1000
  • Adam

I could go on, but I believe that my point is made – but what if?

We don’t need you. You haven’t got through college yet.
-HP Executive, 1976

Get your feet off my desk, get out of here, you stink, and we’re not going to buy your product.
-Joe Keenan, President of Atari, 1976

There is no reason for any individual to have a computer in their home.
-Ken Olsen, President of DEC, 1977

It is a really good thing that we didn’t listen to these wise people. If we had, we would have undoubtedly been greatly delayed in the development of the personal computer system. The first one, the HP executive, was responding to Steve Jobs when he offered HP his Apple II computer design. The second quote was Atari’s response to Steve, and the third quote is just a bit of wisdom from the era.

Steve ignored these people, and financed the development of the Apple II himself. This act launched the personal computer revolution as we know it today – for the most part anyway. But – what if?

Well, we could go on talking about people like Bill Gates and others. But I’ll be nice and simply end by saying I am really glad I have over 8 GB of RAM in all of my computers. Aren’t you, Bill? Or maybe he still has 640k.

Aren’t you glad that you have never said anything that turned out to be wrong in your life? I know I am!

Oh yeah… now for my prediction of the future. I predict that we will all evolve into computers and then the computers will rule the world. It will be like a matrix kind of thing. I wonder if anyone else has ever thought of anything like that?

The reality is that experts can often get caught up in their knowledge of the present and lose sight of the possible.

The point of it all is this: as we enter the new year of 2017, dream. Dream and don’t let those around you say your dreams are impossible. Just dream, plan and act and see what your future may hold.

Happy holidays!

Security Myths?

I find it very interesting when an article debunks itself while talking about debunking myths. If you have not read the recent Network World article titled “13 Security Myths You’ll Hear – But Should You Believe?” you can read it here:

http://www.networkworld.com/news/2012/021412-security-myths-256109.html?page=1

While most of the “myths” are very obvious to anyone who has worked in computer support for very long, one of them I found quite interesting. The third “myth” referenced in the article is, “Regular expiration (typically every 90 days) strengthens password systems.” First, while I completely disagree that this is a myth taken within the context of a complete security system including proper user training, it appears that the article itself debunks the debunking of this “myth”. Note the following from myth number 6, “He adds that while 30-day expiration might be good advice for some high-risk environments, it often is not the best policy because such a short period of time tends to induce users to develop predictable patterns or otherwise decrease the effectiveness of their passwords. A length of between 90 to 120 days is more realistic, he says.”

Now here’s the reality of it from my perspective. If you never change passwords, an internal employee can brute-force passwords for months or even years until he gains access to sensitive accounts. If you change passwords every 90+ days while using strong passwords that are easy to remember, you strike the best balance. Strong passwords that are easy to remember can take weeks or months to crack with brute force. For example, the password S0L34r43ms3r is VERY easy to remember (well, it’s easy for me to remember, but you have no idea why). Brute forcing this password would take months with most systems. Therefore, I have a strong password. If I change it every 90-120 days, I will have a good balance of security and usability.
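
If you want rough numbers behind that claim, here is a back-of-the-envelope sketch. The guess rate is an assumption for illustration only (real cracking speeds vary enormously with the hashing in use and the attacker's hardware), but even generous assumptions put an exhaustive search far beyond a 90-120 day rotation window.

    # Back-of-the-envelope brute-force estimate. The guess rate is an assumed,
    # aggressive value for illustration; real attack speeds vary widely.
    ALPHABET = 26 + 26 + 10          # upper case + lower case + digits
    LENGTH = 12                      # e.g., S0L34r43ms3r
    GUESSES_PER_SECOND = 1e10        # assumed offline cracking rate

    keyspace = ALPHABET ** LENGTH
    seconds = keyspace / GUESSES_PER_SECOND
    years = seconds / (60 * 60 * 24 * 365)

    print(f"Keyspace: {keyspace:.2e} possible passwords")
    print(f"Worst case at {GUESSES_PER_SECOND:.0e} guesses/sec: {years:,.0f} years")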

Does every employee need to change his or her password every 90-120 days? No, certainly not. Some employees have access to absolutely no sensitive information. We can allow them to change their passwords either every 6-12 months or never, depending on our security policies. The point is that different levels of access demand different levels of security.

While I felt the article was very good, and it did reference some research to defend the "myth" suggested in relation to password resets, the reality is that the article and the research (which I've read) do not properly consider a full security system based on effective policies and training. Granted, few organizations implement such a system, but, hey, we're only talking theory in this context anyway, right? It sure would be nice if security could move from theory to practical implementation in every organization, but it hasn't. The reason? By and large, because most organizations (most are small companies) never experience a security incident beyond viruses, worms and DoS attacks. That's just life.

IEEE 802.1X Authentication – Device Roles

The IEEE 802.1X (802.1X-2004) standard defines three roles involved in an authentication system that provides port-based access control:

  • Supplicant
  • Authenticator
  • Authentication Server

The supplicant is the entity containing the port that wishes to gain access to services offered by the system or network to which the authenticator is connected. Stated in common Wi-Fi terminology, the IEEE 802.1X supplicant is the device desiring to gain access to the WLAN.

The authenticator is the entity containing the port that wishes to enforce authentication before granting access to services offered by the system or network to which it is connected. Again, stated in common Wi-Fi terminology, the IEEE 802.1X authenticator is the access point (AP) through which the wireless clients connect to the network. In controller-based systems, it can also be the controller that acts as the authenticator.

The authentication server is the system that performs the authentication processes required to verify the credentials provided by the supplicant through the authenticator. RADIUS servers are commonly used as the authentication server in an IEEE 802.1X implementation for WLANs.

This is the first portion you must grasp to properly understand 802.1X authentication systems. You must know about these three roles and why they exist. It is important to remember that, in a WLAN, the authentication server is not likely a wireless device at all, but rather is a server computer or a network appliance that provides the service to the APs and controllers (as well as other devices requiring authentication to the network).
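
If it helps to see the relationship between the roles in another form, here is a toy sketch. It is purely my illustration and not a real EAP implementation, and the names and credentials in it are made up, but it shows the key point: the authenticator never validates credentials itself; it relays the exchange to the authentication server and opens (or keeps closed) its controlled port based on the result.

    # Toy model of the three 802.1X roles (not a real EAP implementation).
    class AuthenticationServer:
        """Verifies credentials (commonly a RADIUS server in WLAN deployments)."""
        def __init__(self, user_db):
            self.user_db = user_db

        def verify(self, identity, credential):
            return self.user_db.get(identity) == credential

    class Authenticator:
        """The AP or controller: relays the exchange and controls the port."""
        def __init__(self, auth_server):
            self.auth_server = auth_server
            self.port_open = False               # the controlled port starts blocked

        def handle_supplicant(self, identity, credential):
            # The authenticator does not check credentials itself; it passes them
            # to the authentication server and acts on the result.
            if self.auth_server.verify(identity, credential):
                self.port_open = True            # grant access to network services
            return self.port_open

    class Supplicant:
        """The client device that wants access to the WLAN."""
        def __init__(self, identity, credential):
            self.identity = identity
            self.credential = credential

        def connect(self, authenticator):
            return authenticator.handle_supplicant(self.identity, self.credential)

    radius = AuthenticationServer({"alice": "s3cret"})   # made-up credentials
    ap = Authenticator(radius)
    laptop = Supplicant("alice", "s3cret")
    print(laptop.connect(ap))                            # True: port is now open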

Finally, before leaving this introductory post, remember that IEEE 802.1X is not a wireless standard; it is an authentication framework that can be used by wired and wireless systems. Also, the actual authentication method will vary depending on the Extensible Authentication Protocol (EAP) type in use. IEEE 802.1X provides the framework on which the actual EAP authentication protocols operate.

Defining Wi-Fi: Noise Floor

This series of blogs (Defining Wi-Fi) will likely stretch to infinity. The blogs will focus on defining terms related to Wi-Fi at a level between the dictionary and a concise encyclopedia, but not quite matching either. Hopefully, the community finds them helpful over time.

NOTE: Entry created August 26, 2016.

The noise floor, in wireless networking, is the RF energy present in the receiver space from other intentional and unintentional radiators, nearby or distant, as well as from natural phenomena, all of which results in electromagnetic energy at some measurable level. Defined differently, it is the sum of all the signals and energy sources you are not trying to receive. It is a moment-by-moment factor in RF signal reception. The following capture from AirMagnet Spectrum XT shows the noise floor related to channels 1 and 6 in 2.4 GHz.

Noise Floor in Spectrum XT

Two common myths persist about the noise floor:

  1. The noise floor is the same on all channels in a band.
  2. The noise floor can be measured once, at a single moment, and treated as a constant.

The first myth is very important, as the noise floor may well be several dB higher on some channels than on others (remember, -95 dBm is higher than -100 dBm when measuring RF energy). This will impact SNR (read my definition of SNR here) and cause variance in the data rates available on those channels if not considered. While the noise floor may be constant across channels in what we sometimes call a "clean" environment, it is not uncommon to see channel 1 with a noise floor of, say, -97 dBm and channel 6 with a noise floor of, say, -95 dBm (these numbers are just for example purposes). This 2 dB variance is a difference of roughly 60% in power. Depending on the received signal strength, it can easily result in a data rate 2-3 levels (or more) lower in the channel with the higher noise floor.
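
If you want to see where the roughly 60% figure comes from, it is simply the decibel math. Here is a quick sketch of the conversion (Python used only for illustration):

    # Convert a dB difference to a linear power ratio: ratio = 10 ** (dB / 10)
    def db_to_ratio(db_difference):
        return 10 ** (db_difference / 10)

    # A noise floor of -95 dBm vs. -97 dBm is a 2 dB difference.
    ratio = db_to_ratio(2)
    print(f"{ratio:.2f}x power")                    # ~1.58x
    print(f"{(ratio - 1) * 100:.0f}% more power")   # ~58%, roughly the 60% cited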

The second myth assumes that there are no intermittent radiators (a term used instead of transmitters to include unintentional radiators) present. Such radiators may only generate RF energy periodically and can be missed with a quick measurement. Additionally, such devices may cause reception problems after the WLAN is operational because they are used manually. That is, a human turns them on only when needed, and they cause no interference the rest of the time. Microwave ovens are a classic example.

We usually use the term interference (instead of noise floor), which I will define in detail in a later post, to reference nearby radiators that cause significant RF energy in a channel at levels greater than what the noise floor would be without them, such as the previously mentioned microwave oven. This differentiation is important because we can often do something about such components (remove them, change the channels, shield them, etc.). However, when considering the noise floor on a moment-by-moment basis, one could argue that these devices raise the noise floor. Why? Because even when they are present, a lower data rate Wi-Fi signal may be able to get through, if sufficient SNR can still be achieved.

However, if the other device is a transmitting device and not simply a radiating device, such a design decision may result in interference caused by the Wi-Fi device against the non-Wi-Fi device. Additionally, the Wi-Fi device is not likely to change its data rate based on one or even two frame retries. Therefore, the raised noise floor (interference in this case) results in higher retries instead of data rate shifts when the interference is on a low duty cycle (does not communicate a large percentage of the time). Yes, it can get complicated.

Here’s a great analogy when considering the noise floor. Many people like to sleep with a fan on. Why do they do this? They are raising the noise floor (related to sound waves, of course, rather than RF electromagnetic waves). When the noise floor is raised around them, distant noises do not stand out as far above it, so they are less likely to alert the sleeper. They are intentionally making it more difficult to receive audible signals by raising the noise floor.

The RF/electromagnetic noise floor is an important consideration in design. In an environment with a higher noise floor, the APs must be placed and configured with this in mind. Many vendor recommendations for AP placement and hardware specifications assume a particular noise floor (that they seldom communicate). If your environment presents a very different noise floor, their recommendations and receiver sensitivity ratings may not prove true.

Defining Wi-Fi: CCI (Co-Channel Interference) also called CCC (Co-Channel Contention)

This series of blogs (Defining Wi-Fi) will likely stretch to infinity. The blogs will focus on defining terms related to Wi-Fi at a level between the dictionary and a concise encyclopedia, but not quite matching either. Hopefully, the community finds them helpful over time.

NOTE: Entry created August 24, 2016.

Co-Channel Interference (CCI), or Co-Channel Contention (CCC), which is the more apt name though it does not appear in the standard, is an important factor to consider in WLAN design. Co-Channel Interference occurs when a device (station) participating in one Basic Service Set (BSS) must defer access to the medium while a device from a different service set (either an infrastructure or independent BSS) is using the medium. This behavior is normal and is the intentional result of the 802.11 specifications; it is driven by the standard Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) algorithms defined in the 802.11 protocol.

For further understanding, consider the scenario where a laptop (STA1) is connected to an AP on channel 1 (AP1). Another AP (AP2) is on channel 1 at some distance, and another laptop (STA2) is connected to that remote AP. Even if the two APs are not required to defer to each other's frames (because the signal level is too low), the two laptops must defer to each other's frames if they can hear each other at a sufficient signal level.

CCI

That is, the two laptops are transmitting on channel 1 and are within sufficient range of each other; therefore, they must contend with each other for access to the medium, resulting in CCI. Additionally, both laptops may transmit a strong enough signal to cause both APs to defer, even though each laptop is associated with only one of the APs (chosen based on superior signal strength). Likewise, both APs may transmit a strong enough signal to cause both laptops to defer, even though each laptop is associated with only one of them.

To be clear, it is common for APs to create CCI with each other. The point of using this example is to eradicate, from the start, the common myth that CCI is just about APs. CCI is created by any 802.11 device operating on the same channel with sufficient received signal strength at another device on the same channel.

Now, because CCI is not like other RF interference, a modern movement to call it Co-Channel Contention (CCC) has started. In my opinion, this is not a bad thing. CCC brings more clarity to the picture. CCI is about contention and not traditional interference.

What we commonly call interference is a non-Wi-Fi signal, or a Wi-Fi signal from another channel, that corrupts the frames on the channel on which a device is operating. That is, with other types of interference, unlike contention, the Wi-Fi client may gain access to the medium and begin transmitting a frame while the non-Wi-Fi (or other-channel Wi-Fi) device is not communicating, so the transmitting Wi-Fi device sees a clear channel. During the frame transmission, the other transmitter may begin transmitting as well, without regard for the energy already on the channel, and corrupt the Wi-Fi frame. This is not the same as CCI.

Excessive CCI results in very poor performance on the WLAN. With too many devices on a given channel, whether in a single BSS or from multiple service sets, the capacity of the channel is quickly consumed and performance per device is greatly diminished. For this reason, CCI must be carefully considered during WLAN design.
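
To see why excessive contention hurts so much, consider a deliberately simplified sketch of airtime sharing. This is my illustration only; the capacity figure is an assumption, and real contention overhead makes the results worse than a clean division.

    # Simplified view of airtime sharing on one channel. Every 802.11 device on
    # the channel that can hear the others shares the same airtime, regardless
    # of which BSS it belongs to. The capacity number is illustrative only.
    CHANNEL_THROUGHPUT_MBPS = 300       # assumed usable capacity of the channel

    def per_device_share(contending_devices):
        # Ignores the backoff and collision overhead that grows with device
        # count, so real results are worse than this simple division.
        return CHANNEL_THROUGHPUT_MBPS / contending_devices

    for n in (5, 20, 60):
        print(f"{n:>3} contending devices -> ~{per_device_share(n):.0f} Mbps each")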

Defining Wi-Fi: SNR (Signal-to-Noise Ratio)

This series of blogs (Defining Wi-Fi) will likely stretch to infinity. The blogs will focus on defining terms related to Wi-Fi at a level between the dictionary and a concise encyclopedia, but not quite matching either. Hopefully, the community finds them helpful over time.

NOTE: Entry created August 20, 2016.

Signal-to-noise ratio (SNR) is a measurement used to define the quality of an RF signal at a given location. It is the primary determiner of the data rate for a given device, as the SNR must be sufficient to achieve particular data rates. Simply stated, more complex modulation methods can only be demodulated at higher SNR values, while lower SNR values require a less complex modulation method. More complex modulation methods result in higher data rates, and less complex methods result in lower data rates.

NOTE: Whether you choose the word complex or detailed, the end meaning is the same. A higher SNR is required for higher data rate modulation methods.

The SNR is defined, in Wi-Fi, as the difference between the desired received signal and the noise floor. The noise floor may vary for each channel within the monitored band such that the noise floor may be greater for one channel than for another. Additionally, intermittent, non-Wi-Fi, interfering devices that use the same frequency range as the Wi-Fi device may reduce the SNR available at any moment.

SNR can be calculated with the following formula:

SNR = received signal strength – noise floor

For example, if the received signal strength is -75 dBm and the noise floor is -90 dBm, the SNR is 15 dB. An SNR of 25 dB or greater is desired for improved data rates and, therefore, improved throughput.

NOTE: SNR is defined in dB and not dBm as SNR is relative.
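
As a tiny worked example of the calculation (reusing the 25 dB guideline mentioned above; the code itself is only an illustration):

    def snr_db(received_signal_dbm, noise_floor_dbm):
        # SNR = received signal strength - noise floor (inputs in dBm, result in dB)
        return received_signal_dbm - noise_floor_dbm

    # The example from the text: a -75 dBm signal over a -90 dBm noise floor.
    snr = snr_db(-75, -90)
    print(snr)                                    # 15 dB

    if snr >= 25:
        print("SNR supports the higher data rates")
    else:
        print("Expect reduced data rates at this SNR")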

Finally, many vendor specification (spec) sheets list receive sensitivity values for specific data rates. They will indicate that you can accomplish a particular data rate with a specified signal strength (or greater signal strength). The following is an example of such a spec sheet from the Orinoco USB-9100 802.11ac adapter.

Orinoco USB-9100 Spec Sheet

Remember that these spec sheets assume a noise floor value (which is never communicated in the spec sheets), and that a noise floor different from what they assume would require a higher signal strength than the one listed, because SNR is what you actually need to achieve a given data rate. Also, remember that a higher signal strength is a smaller number in magnitude (excusing the negative sign) because we are referencing negative values; therefore, -65 dBm is higher than -70 dBm. This can sometimes get confusing for those new to Wi-Fi.
