Wireless Access Point (AP) Internals for Cisco WAP371

So I took an AP apart today because I needed some pics of internals. Thought I’d share with all a little nugget or two.

AP Internals (WAP371)

Most WLAN APs have three common components:

  • Radio chipset
  • Antennas
  • Ethernet port

In addition, they will have a CPU, memory, and filters. Batteries are often used to retain configuration settings when disconnected from power. The image above shows the internals of the WAP371.

The WAP371 uses the Broadcom BCM43460, a 3×3:3 802.11ac chipset. The AP vendor may not implement the full capabilities of a chipset. Implementing reduced capabilities may be driven by:

  • Reduced manufacturing costs.
  • Reduction in power requirements.
  • Use of early chipsets and rapid entry to market.

The BCM43460 was an early release based on the 802.11ac draft. It is a three-stream chipset and supports up to 80 MHz channels in 5 GHz and 802.11n capabilities in 2.4 GHz. The chipset supports airtime fairness and transmit beamforming. Without specifying exact details, Broadcom states that it has “full IEEE 802.11a/b/g/n legacy compatibility with enhanced performance.”

By the way, the chip in the image labeled LDT0579 from LinkCom is the PoE power transformer chip.

HP Buying Aruba Networks – What Would It Mean?

Here’s a little-known fact: HP wireless is one of the biggest players in Wi-Fi. They have been for years, in part because their switches were already in place as the switching infrastructure in many organizations. It just made sense to add on HP wireless for the 802.11 solution.

Now, to be clear, HP wireless products have, frankly, been very good Wi-Fi products. They work. They get the job done. However, they have garnered little to no buzz in the Wi-Fi experts community because they have not been the innovators. In most cases, they have been a “Yeah, we do that too” vendor, which means they do most of the good stuff other vendors do, but they are seldom the innovators. Make no mistake, this is a valid industry strategy. Many tech companies have been consistently successful by simply implementing the best of what others do. There is nothing wrong with that strategy, and it typically becomes a familiarity or price sell for them. Now, if the rumors come true and HP acquires Aruba Networks, this is a big move and could actually lead to some very interesting innovative scenarios.

This would be massively bigger than Cisco acquiring Meraki. This is more like GM acquiring Nissan. This is a top five player seeking to acquire a top five player. As of early 2014, the top five WLAN vendors by revenue, per IDC, were:

  • Cisco
  • Aruba
  • Ruckus
  • HP
  • Motorola

When Cisco acquired Meraki, they were acquiring a strong player who was best known for their management interface and certainly not for their hardware. As HP looks at Aruba Networks, they are getting a hardware and software powerhouse that could position the HP/Aruba brand for growth and real competition with Cisco. Right now, Cisco still outsells the other four top five combined. In fact, they outsell all other enterprise vendors combined by some measurements.

Of course, only time will tell if this merger will happen, but it could bring some exciting and interesting new things to the Wi-Fi arena if it does. We’ll keep our eyes on it for sure!

IEEE 802.1X Authentication – Device Roles

The IEEE 802.1X (802.1X-2004) standard defines three roles involved in an authentication system that provides port-based access control:

  • Supplicant
  • Authenticator
  • Authentication Server

The supplicant is the entity containing the port that wishes to gain access to services offered by the system or network to which the authenticator is connected. Stated in common Wi-Fi terminology, the IEEE 802.1X supplicant is the device desiring to gain access to the WLAN.

The authenticator is the entity containing the port that wishes to enforce authentication before granting access to services offered by the system or network to which it is connected. Again, stated in common Wi-Fi terminology, the IEEE 802.1X authenticator is the access point (AP) through which the wireless clients connect to the network. In controller-based systems, it can also be the controller that acts as the authenticator.

The authentication server is the system that performs the authentication processes required to verify the credentials provided by the supplicant through the authenticator. RADIUS servers are commonly used as the authentication server in an IEEE 802.1X implementation for WLANs.
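These three roles can be modeled in a few lines of code. The sketch below is a teaching model only, with invented class names and a plain password check; real 802.1X carries EAP over EAPOL between the supplicant and authenticator and typically RADIUS between the authenticator and the authentication server:

```python
# Minimal illustration of the three 802.1X roles. Not a real EAPOL/RADIUS
# implementation; all names and credentials here are made up.

class AuthenticationServer:
    """Holds credentials and answers verification requests (the RADIUS-like role)."""
    def __init__(self, credentials):
        self._credentials = credentials  # e.g. {"alice": "correct-horse"}

    def verify(self, identity, password):
        return self._credentials.get(identity) == password


class Authenticator:
    """The AP/controller role: it enforces the controlled port but never
    checks credentials itself; it relays them to the authentication server."""
    def __init__(self, auth_server):
        self._auth_server = auth_server
        self.port_open = {}  # per-supplicant port state

    def request_access(self, identity, password):
        allowed = self._auth_server.verify(identity, password)
        self.port_open[identity] = allowed  # open the port only on success
        return allowed


class Supplicant:
    """The client device role: it supplies credentials to the authenticator."""
    def __init__(self, identity, password):
        self.identity, self.password = identity, password

    def connect(self, authenticator):
        return authenticator.request_access(self.identity, self.password)


server = AuthenticationServer({"alice": "correct-horse"})
ap = Authenticator(server)
print(Supplicant("alice", "correct-horse").connect(ap))  # True
print(Supplicant("mallory", "guess").connect(ap))        # False
```

Note that the authenticator never sees a "valid/invalid" decision of its own making; it only enforces what the authentication server tells it, which is exactly the division of labor the standard defines.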

This is the first portion you must grasp to properly understand 802.1X authentication systems. You must know about these three roles and why they exist. It is important to remember that, in a WLAN, the authentication server is not likely a wireless device at all, but rather is a server computer or a network appliance that provides the service to the APs and controllers (as well as other devices requiring authentication to the network).

Finally, before leaving this introductory post, remember that IEEE 802.1X is not a wireless standard, it is an authentication framework that can be used by wired and wireless systems. Also, the actual authentication used will vary depending on the Extensible Authentication Protocol (EAP) type used. IEEE 802.1X provides the framework on which the actual EAP authentication protocols operate.

Foods We Shouldn’t Eat?

Now, I’m the first to suspect anything that is created by a company that has a product to sell; however, after a year and a half of nutrition study while recovering from cancer treatment, I have to say that I feel there is far more “truth in advertising” in this advertising than in most. Make up your own mind (and, yes, I know this is not related to technology [smile]):


Earning Dividends on Your Mistakes

We often think of mistakes as horrible things. We label them bad, negative or as a failed action. However, it’s possible to learn a lesson that brings value from our mistakes. In reading a book published in 1911, titled How to Systematize the Day’s Work, I came across the following excerpt:

Dividends on Mistakes

A mistake may be made the keystone of system – the foundation of success. The secret is simple: Don’t make the same mistake twice.

The misspelling of a customer’s name – an error in your accounting methods – an unfulfilled promise; these are valuable assets if they teach you exactness.

Let your mistakes shape your system and your system will prevent such mistakes. When you discover a mistake, sit down then and there, and arrange the system to prevent its repetition.

Paint it on your walls; emblazon it on your door; frame it over your desk; say it to your stenographer; think it to yourself; burn it into your brain; this one secret of system, this one essential to success: DON’T MAKE THE SAME MISTAKE TWICE. (emphasis original)

As I read through this section, I couldn’t help but think about the years of teaching I’ve delivered on documentation and its importance to effective troubleshooting and operations and also the process of becoming an expert. This concept of learning from your mistakes is a big part of becoming an expert and it is a significant factor in becoming an effective technician. Ineffectiveness is often born out of the ignoring of our mistakes, which results in their repeated occurrence. What an excellent insight to begin the new year!

WLAN (Wireless LAN) Administration Guidelines

Best practices provide a foundation on which to build specific policies and procedures for unique organizations. Wireless networks do not necessarily require the reinvention of administration best practices. Several best practices can be borrowed from wired network administration including:

  • Configure devices offline
  • Backup configurations
  • Document changes
  • Update devices periodically
  • Perform analysis occasionally

Configuring devices offline provides two major benefits: improved security and greater network stability. Security is improved because the new device is not connected to the network until it is configured according to organizational security policies. Stability is improved because devices are added to the network only after they are configured to operate properly within the network. This best practice should be part of any IT organization’s operational procedures.

Initial device configuration can take anywhere from a few minutes to a few days. As a wireless technology professional, you will want to avoid unnecessary manual reconfigurations. The best way to avoid this extra work is to back up the configuration settings for any device that provides a backup facility. Many devices allow you to save the backup to a file that is stored separately from the device, while some allow only internal backups that are stored in the memory of the device. While the external backup is preferred, the internal backup should be utilized if it is the only method supported. Even with modern “centralized” WLAN technologies, something has to be backed up (for example, the controller or the cloud) by somebody (for example, you or your service provider).

Device configurations are often modified several times over their lifecycle. It is not uncommon for a device to be modified more than a dozen times a year. These configuration changes should also be saved to a backup. If the device supports it, I usually back up the initial configuration and then back up the modified configuration to a separate backup file. However the backup is performed, it is important to back up the changes as well as the initial configuration. As much as we talk about the importance of documentation, IT professionals seldom document minor changes they make to device configurations. These minor changes add up to a big difference over time, and the easiest way to document them is to back them up.
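The habit described above, keeping the initial export plus a separate file for every later change, can be sketched in a short script. This is a minimal illustration, assuming the device can export its configuration as a file; the paths and naming scheme are my own invention, not any vendor’s tool:

```python
# Keep a baseline copy of a device's exported config, then a timestamped
# copy for every subsequent change. Paths/naming are illustrative only.
import shutil
from datetime import datetime
from pathlib import Path

def backup_config(exported_config: Path, backup_dir: Path) -> Path:
    backup_dir.mkdir(parents=True, exist_ok=True)
    initial = backup_dir / f"{exported_config.stem}-initial{exported_config.suffix}"
    if not initial.exists():
        # First backup for this device: keep it as the untouched baseline.
        shutil.copy2(exported_config, initial)
        return initial
    # Later changes each land in their own timestamped file, so the
    # backup directory doubles as documentation of every modification.
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = backup_dir / f"{exported_config.stem}-{stamp}{exported_config.suffix}"
    shutil.copy2(exported_config, dest)
    return dest
```

The baseline copy never changes, and each modification gets its own file, which is exactly the informal change log this section argues most IT shops never write down.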

Finally, occasional analysis of the network will allow you to determine if it is still performing acceptably. On wired networks, administrators spend most of their time analyzing the performance of the network from a strict data throughput perspective (though security monitoring and occasional troubleshooting are also performed). On wireless networks, the issue of coverage must also be considered. Are the needed areas still receiving coverage at the required data rates? If you look only at the throughput at the APs, you may miss the problems occurring in coverage patterns. If you look only at the coverage, you may miss problems related to throughput. Both are important.

In addition to these practices borrowed from the wired networking world, wireless networks introduce new guidelines. These wireless-specific guidelines include:

  • Test the RF behavior after environmental changes
  • Update security solutions as needed
  • Remove configurations from decommissioned devices

The first wireless-specific guideline is really a subset of the wired best practice of occasionally performing analysis. As I stated previously, wireless networks introduce the need to look at more than throughput metrics at the port level. We must analyze the RF behavior and ensure that coverage is still provided where it is needed. This extra requirement is driven by the nature of RF communications. Aside from implementing enterprise-class monitoring systems, the small business or home office will require occasional analysis and adjustments based on the results.

Wired and wireless networks require updated security solutions, but if history is our teacher, wireless networks may require such updates more frequently (though the last five plus years have honestly been mostly silent in this area as WPA2 has proven very worthy so far). The nature of wireless communications allows for attacks to be made without physical access to the premises. This fact may be the reason behind the more rapid discovery of vulnerabilities. WEP was shown to be flawed in less than three years. WPA and 802.11i have a backward compatibility weakness when using TKIP that may allow for ARP poisoning or Denial of Service attacks and this weakness was discovered within five years of ratification. The problem is that these solutions (WEP and 802.11i) are intended to provide wireless with security at or greater than the level of a wired network (WEP stands for Wired Equivalent Privacy) and yet they do not always achieve it. Since new exploits are discovered periodically, we may be forced to change the security solution we’re using every three to five years (though the past several years have proven greater general stability). I am using a wired Ethernet port right now that was installed more than ten years ago – no security changes have been needed to meet the level of a physical port because it is, well, a physical port.

However, this issue of meeting wired equivalence may be less of an issue than the level at which it is often presented. Do we really need to ensure that our wireless links are equivalent to our wired links? Not if they are used for different things or if we can provide effective security at higher layers. For example, some organizations require IPSec VPN tunnels for any wireless links that connect to sensitive data, though this has become far less common today with the strength of WPA2.

Finally, since the security settings of the wireless network are often stored in the APs and client devices, it is crucial that you remove the configuration settings before decommissioning the hardware. If you leave the WPA passphrase (used with WPA-PSK) in the device’s configuration settings, the next person to acquire the equipment may be able to retrieve the information and use it to gain access to your network. The likelihood of this occurring is slim (very slim), but it doesn’t take long to remove the configuration and it is common for machines to be wiped before decommissioning them anyway.

These guidelines give you a good starting point. Do you have additional recommendations?

You Ate Your Cheese

“Who Moved My Cheese?” is one of the most popular books ever written about change and how to cope with it in your life; however, I would suggest that, for IT professionals and many others, we need an eye-opening, honest book with a title more like, “You Ate Your Cheese.”

You see, the point is that most of the career challenging and life altering work-related changes that occur can be predicted in the technology sector. For example:

  • If you still desire to be supporting Windows 3.1 computers, you ate your cheese.
  • If you still think modems are the best way to connect to the Internet, you ate your cheese.
  • If you think dBase is a modern database, you ate your cheese.
  • If you think Apple is the winner in the mobile phone space, you ate your cheese.
  • If you think InfoSeek is the best search engine, you ate your cheese.
  • If you think Colorado Jumbo 250 tape drives are still a good backup solution, you ate your cheese.
  • If you think Zip drives are the greatest external storage solution ever made, you ate your cheese.
  • If you think 802.11b wireless is fast enough, you ate your cheese.
  • If you think you can control every device users bring into your environment, you ate your cheese.
  • If you think Commodore will make a comeback, you ate your cheese.
  • If you think Windows XP is here to stay, you ate your cheese.
  • If you think Mac OS X will win the OS wars, hmmm, let’s wait and see.

OK. This should be enough to make the point. You eat your cheese when you stick with the knowledge you have and do not grow and learn with the industry. If you think you can master a technology and then just work with that for 20 years, you’re in the wrong industry. I suggest that you consider returning shopping carts to their storage locations at the local department store. It’s one of the few jobs I know of that is still pretty much like it was 20 years ago. Even in that job, many facilities now have motorized cart pushers to ease the strain on the staff.

Do you see the point? You must continue learning in practically all jobs these days and this is particularly true in IT. If you find yourself in a situation where your skills are no longer in demand, no one moved your cheese, you ate your cheese. It’s time to become cheesemakers and not just cheese eaters. When you use up all the skill you have, it’s often too late to develop new skills. Cheesemakers develop skills continually. Certifications are a great way to do this, but simply learning new skills that you can apply for your current employer or customers can be a great way to evolve over time so that you never get into a situation where you’ve eaten your cheese.

So, the next time someone tells you that someone else moved their cheese, just look them in the eye and kindly say, no, you ate your cheese.

NOTICE: This post is not intended to cover all scenarios in life and is likely to have missed many situations where cheese is indeed moved by a third party. In such situations, advice from books like Who Moved My Cheese? may indeed be helpful. Individuals should consider this post to be advice only and not a medical, physical, emotional or psychological solution to the trauma induced by the moving of said cheese.

802.11 in the Search Cloud

An excellent keyword research service is offered by KeyWordEye.com and one of its features, even with a free account, is the creation of a search volume word cloud. The larger words are the more commonly searched words in relation to the keyword on which you build the word cloud. The following image is the resulting word cloud based on the keyword of “802.11.”

802.11 Word Cloud

The lessons we can learn from search volume are tremendous. Over time, we can discover trends and at any moment we can see what people are interested in. Now, keep in mind, people search for things for many reasons, including:

  • Purchasing products
  • Learning how stuff works
  • Looking for definitions
  • Clicking a pre-built search link

Whatever the reason for searching, it is interesting what made it onto the search cloud list. Here are a few that really got my attention:

  • 802.11ac – I expected this to be there, but was happy when it was confirmed.
  • 802.11b – I was amazed how often this is still used as a search word. Alone and included in other phrases, it totals more than 3,000 searches per month on Google US, and that does not include other search engines. Interesting!
  • 802.11n frequency – search phrases like this, being used in large amounts, reveal the technical proficiency of the audience. Certainly, many people out there are looking for more technical information.
  • 802.11g vs 802.11n/802.11n vs 802.11g – People still want to learn about 802.11n compared to their older 802.11g hardware. Tell them what they want to know.

These are just a few insights; I’m sure you can locate more. The most important thing we techies need to learn from this: keyword research can be useful in helping us focus on learning and explaining in-demand technologies (instead of the ones we THINK people should be using).

Three Steps to Becoming an Expert

NOTE: This is an article I wrote several years ago. I hope you enjoy!

Have you ever noticed that experts make more money than generalists? That’s because they specialize and generalists generalize. Or, as Zig Ziglar says, they are a wandering generality.

How did I become an expert in certain areas? How have others always done it? It’s really simpler than you may think and I’m going to reveal it to you in this brief article.

There are three easy steps to becoming an expert:

  1. Choose the Expertise
  2. Make Your Knowledge 90/99
  3. Tell Them What You Know

Let’s look at these three steps individually.

Choose the Expertise

The first step is really the hardest. You wouldn’t think so, would you?

The reason this step is the hardest is because it is the step that the other two are built on. If you ever decide to change your mind about your expertise, it means learning all over again. Therefore, you should put lots of energy into this step.

So, how do you decide on your expertise? Look at what you love and enjoy.

Do you like fishing? Become an expert at bass fishing in the lakes of northern Ohio.

Do you like gardening? Become an expert in growing African flowers in American soil.

Do you like politics? Become an expert in inaugural addresses and their impact on the presidential term.

Notice that I took a generality and made it a specific. You should do this too. I am not just an expert in the field of computers, I also specialize in technical communication skills. I am a general expert in computers/networking and a specialized expert in technical communication skills.

Make Your Knowledge 90/99

I state this when teaching classes on personal growth and I am often asked what I mean. Well, that’s the intention of the statement – to get you to ask.

Here is the answer: You should know more than 90% of the people about your general area of expertise and more than 99% of the people about your specific area of expertise.

Remember that I am a computer expert specializing in technical communication skills. I know more than 90% of the people when it comes to computers, but I know more than 99% of the people when it comes to technical communication skills.

How do you accomplish this level of knowledge? Read, read and then read some more. Go to training classes. Read at least 5 books on the topic. Subscribe to and read 2 or 3 magazines on the topic. Attend 2 training classes per year on the topic. Get experience with the topic.

If you do these things, you will definitely be a 90/99!

Tell Them What You Know

You have to tell people what you know or they won’t know you know… ya know?

The easiest way to tell your peers and managers (or anyone else) what you know is to put it in writing. Write tips and articles for the company employees (like the one you’re reading and enjoying now).

Depending on your desired goal, you may consider writing magazine articles and offering them for free to various publications. Start a blog on the area you’ve chosen. These days, it’s one of the most powerful ways to become known as an authoritative expert. You may even decide to go for the gold and write that book!


If people look at you as an expert, they will respect your opinion much more. As a matter of fact, if they don’t look at you as an expert, they probably won’t even listen to what you have to say.

In order to become an expert you must first determine the area of expertise you desire. Then come up with a specific area of that expertise to become even more knowledgeable in.

Focus on the 90/99 rule. Make sure you know more than 90% of the people in your general expertise and more than 99% of the people in your specialized area of expertise.

Tell people what you know through articles and tips. Go for the big one and write a book. Do what it takes to get your name out as an expert.

Yes! You can be an expert!

-Tom Carpenter

The Importance of Data Classification (Information Classification)

The importance of security varies by organization. The variations exist because of the differing values placed on information and networks within organizations. For example, organizations involved in banking and healthcare will likely place a greater priority on information security than organizations involved in selling greeting cards. However, in every organization there exists a need to classify data so that it can be protected appropriately. The greeting card company will likely place a greater value on its customer database than it will on the log files for the Internet firewall. Both of these data files have value, but one is more valuable than the other and should be classified accordingly so that it can be protected properly.

Data classification is the process used to identify the value of data and the cost of data loss or theft. Consider that the cost of data loss is different from the cost of data theft. When data is lost, it means that you no longer have access to the data; however, it does not follow automatically that someone else does have access to the data. For example, an attacker may simply delete your data. This action results in lost data. Data theft indicates that the attacker stole the data. With the data in the attacker’s possession, the attacker can sell it or otherwise use it in a way that can damage the organization’s value. The worst-case scenario is data theft with loss. In this case, the attacker steals the data and destroys the copies. Now the attacker can use the data, but the organization cannot.

When classifying data, then, you are attempting to answer the following questions:

  • How valuable is the data to the organization?
  • How valuable is the data to competitors or outside individuals?
  • Who should have access to the data?
  • Who should not have access to the data?

It might seem odd to ask both of the latter two questions, but it can be very important. For example, you may identify a group who should have access to the data with the exception of one individual in that group. In this case, the group should have access to the data, but the individual in that group should not, and the resulting permission set should be built accordingly. In a Microsoft environment, you would create a group for the individuals needing access and grant that group access to the resource. Next, you would explicitly deny access to the individual who should not have access. The denial overrides the grant and you accomplish the access required.

Many organizations will classify data so that they can easily implement and maintain permissions. For example, if data is classified as internal only, it’s a simple process to configure permissions for that data. Simply create a group named All Employees and add each internal employee to this group. Now, you can assign permissions to the All Employees group for any data classified as internal only. If data is classified as unclassified or public, you can provide access to the Everyone group in a Windows environment and achieve the needed permissions. The point is that data classification leads to simpler permission (authorization) management.
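The grant-through-groups, deny-overrides logic described above can be sketched as a toy model. This is illustrative only, with invented group and user names; it mimics the Windows behavior where an explicit deny wins over a group grant, but it is not the actual NTFS evaluation algorithm:

```python
# Toy access-control model: access flows from group membership, but an
# explicit per-user deny overrides any group grant.

def can_access(user, groups, grants, denies):
    """groups: {group_name: set_of_users}; grants/denies: sets of group or user names."""
    if user in denies:
        return False  # explicit deny always wins, as in Windows permissions
    memberships = {g for g, members in groups.items() if user in members}
    # Granted either directly or via any group the user belongs to.
    return user in grants or bool(memberships & grants)

groups = {"AllEmployees": {"alice", "bob", "carol"}}
grants = {"AllEmployees"}   # internal-only data: granted to the whole group
denies = {"carol"}          # the one individual who should not have access

print(can_access("alice", groups, grants, denies))  # True  (via group grant)
print(can_access("carol", groups, grants, denies))  # False (deny overrides)
print(can_access("eve",   groups, grants, denies))  # False (no grant at all)
```

The point of the sketch is the ordering: the deny check runs before any grant is even considered, which is why adding one user to a deny list is enough to carve an exception out of a broad group grant.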

From what I’ve said so far, you can see that data classification can be defined as the process of labeling or organizing data in order to indicate the level of protection required for that data. You may define data classification levels of private, sensitive, and public. Private data would be data that should only be seen by the organization’s employees and may only be seen by a select group of the organization’s employees. Sensitive data would be data that should only be seen by the organization’s employees and approved external individuals. Public data would be data that can be viewed by anyone.

Consider the following applications of this data classification model:

  • The information on the organization’s Internet web site should fall in the classification of public data.
  • The contracts that exist between the organization and service providers or customers should fall in the classification of sensitive data.
  • Trade secrets or internal competitive processes should be classified as private data.

The private, sensitive, and public model is just one example of data classification, but it helps you to determine which data users should be allowed to store offline and which data should only be accessed while authenticated to the network. By keeping private data off of laptops, you help reduce the severity of a peer-to-peer attack that is launched solely to steal information.

This data classification process is at the core of information security, and it can be outlined as follows:

  1. Determine the value of the information in question.
  2. Apply an appropriate classification based on that value.
  3. Implement the proper security solutions for that classification of information.
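Those three steps can be sketched as a single function. The value scale, thresholds, and handling rules below are assumptions made up for illustration, not part of any standard:

```python
# Map an assessed data value (step 1) to a classification label (step 2)
# and the handling rule that label implies (step 3). Scale: 0-10, invented.

RULES = {
    "private":   "select employees only; never stored offline",
    "sensitive": "employees and approved external parties",
    "public":    "anyone",
}

def classify(value_to_org: int, value_to_outsiders: int):
    # Step 1 happens before this call: assessing both values.
    # Step 2: apply a classification based on those values.
    if value_to_org >= 8 or value_to_outsiders >= 8:
        label = "private"
    elif value_to_org >= 4:
        label = "sensitive"
    else:
        label = "public"
    # Step 3: the label selects the security controls to implement.
    return label, RULES[label]

print(classify(9, 9))  # e.g. trade secrets -> private
print(classify(5, 2))  # e.g. customer contracts -> sensitive
print(classify(1, 1))  # e.g. web site content -> public
```

Notice that the controls hang off the label, not off the individual data set; that indirection is what makes classification simplify permission management, as described earlier.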

From this very brief overview of information classification and security measures, you can see why different organizations have different security priorities and needs. It is also true, however, that every organization is at risk for certain threats. Threats such as denial of service (DoS), worms, and others are often promiscuous in nature. The attacker does not care what networks or systems are damaged or made less effective in a promiscuous attack. The intention of such an attack is often only to express the attacker’s ability or to serve some other motivation for the attacker, such as curiosity or need for recognition. Because many attacks are promiscuous in nature, it is very important that every organization place some level of priority on security regardless of the intrinsic value of the information or networks they employ.

Thoughts on IT for those who think about IT