Category Archives: Technical

Security Myths?

I find it very interesting when an article debunks itself while talking about debunking myths. If you have not read the recent Network World article titled “13 Security Myths You’ll Hear – But Should You Believe?”, you can read it here:

http://www.networkworld.com/news/2012/021412-security-myths-256109.html?page=1

While most of the “myths” are very obvious to anyone who has worked in computer support for very long, I found one of them quite interesting. The third “myth” referenced in the article is, “Regular expiration (typically every 90 days) strengthens password systems.” First, within the context of a complete security system that includes proper user training, I completely disagree that this is a myth. Second, the article itself appears to debunk the debunking of this “myth”. Note the following from myth number 6: “He adds that while 30-day expiration might be good advice for some high-risk environments, it often is not the best policy because such a short period of time tends to induce users to develop predictable patterns or otherwise decrease the effectiveness of their passwords. A length of between 90 to 120 days is more realistic, he says.”

Now here’s the reality of it from my perspective. If you never change passwords, an internal employee can brute-force passwords for months or even years until he gains access to sensitive accounts. If you change passwords every 90+ days while using strong passwords that are easy to remember, you strike the best balance. Strong passwords that are easy to remember can take weeks or months to crack with brute force. For example, the password S0L34r43ms3r is VERY easy to remember (well, it’s easy for me to remember, but you have no idea why). Brute forcing this password would take months with most systems. Therefore, I have a strong password. If I change it every 90-120 days, I will have a good balance of security and usability.
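To put some rough numbers on that claim, here is a back-of-the-napkin sketch. The guess rate is an assumed figure for illustration only; real cracking speeds vary enormously with the hashing algorithm and hardware involved.

```python
# Rough worst-case brute-force time for a password drawn from a given
# character set. The guess rate is an assumption, not a measured figure.

def brute_force_years(charset_size: int, length: int, guesses_per_second: float) -> float:
    """Time, in years, to exhaust the full keyspace."""
    keyspace = charset_size ** length
    seconds = keyspace / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

# Example: 12 characters drawn from upper/lower/digits (62 symbols),
# attacked at an assumed one billion guesses per second.
print(f"{brute_force_years(62, 12, 1e9):,.0f} years to exhaust the keyspace")
```

Even if an attacker only needs to search a fraction of the keyspace, the point stands: a long, memorable password combined with a 90-120 day rotation leaves very little useful attack window.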

Does every employee need to change his or her password every 90-120 days? No, certainly not. Some employees have access to absolutely no sensitive information. We can allow them to change their passwords either every 6-12 months or never, depending on our security policies. The point is that different levels of access demand different levels of security.

While I felt the article was very good, and it did reference some research to defend the “myth” suggested in relation to password resets, the reality is that the article and the research (which I’ve read) do not properly consider a full security system based on effective policies and training. Granted, few organizations implement such a system, but, hey, we’re only talking theory in this context anyway, right? It sure would be nice if security could move from theory to practical implementation in every organization, but it hasn’t. The reason? By and large, because most organizations (most are small companies) never experience a security incident beyond viruses, worms and DoS attacks. That’s just life.

IEEE 802.1X Authentication – Device Roles

The IEEE 802.1X (802.1X-2004) standard defines three roles involved in an authentication system that provides port-based access control:

  • Supplicant
  • Authenticator
  • Authentication Server

The supplicant is the entity containing the port that wishes to gain access to services offered by the system or network to which the authenticator is connected. Stated in common Wi-Fi terminology, the IEEE 802.1X supplicant is the device desiring to gain access to the WLAN.

The authenticator is the entity containing the port that wishes to enforce authentication before granting access to services offered by the system or network to which it is connected. Again, stated in common Wi-Fi terminology, the IEEE 802.1X authenticator is the access point (AP) through which the wireless clients connect to the network. In controller-based systems, it can also be the controller that acts as the authenticator.

The authentication server is the system that performs the authentication processes required to verify the credentials provided by the supplicant through the authenticator. RADIUS servers are commonly used as the authentication server in an IEEE 802.1X implementation for WLANs.
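To tie the three roles together, here is a minimal, illustrative sketch of the relay relationship. This is not a protocol implementation; the names and decision logic are invented purely for illustration.

```python
# Illustrative sketch of the three 802.1X roles: the supplicant supplies
# credentials, the authenticator relays and enforces, the server decides.

def authentication_server(credentials: dict) -> bool:
    """The authentication server (commonly RADIUS) makes the actual decision."""
    return credentials == {"identity": "alice", "proof": "valid-eap-exchange"}

def authenticator(eap_from_supplicant: dict) -> str:
    """The authenticator (AP or controller) never validates credentials itself;
    it relays the EAP conversation to the server and enforces the result."""
    if authentication_server(eap_from_supplicant):
        return "controlled port authorized"    # network access granted
    return "controlled port unauthorized"      # only EAPOL traffic allowed

def supplicant() -> dict:
    """The supplicant (the wireless client) supplies its credentials."""
    return {"identity": "alice", "proof": "valid-eap-exchange"}

print(authenticator(supplicant()))
```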

This is the first portion you must grasp to properly understand 802.1X authentication systems. You must know about these three roles and why they exist. It is important to remember that, in a WLAN, the authentication server is not likely a wireless device at all, but rather is a server computer or a network appliance that provides the service to the APs and controllers (as well as other devices requiring authentication to the network).

Finally, before leaving this introductory post, remember that IEEE 802.1X is not a wireless standard; it is an authentication framework that can be used by both wired and wireless systems. Also, the actual authentication method will vary depending on the Extensible Authentication Protocol (EAP) type in use. IEEE 802.1X provides the framework on which the actual EAP authentication protocols operate.

Defining Wi-Fi: Noise Floor

This series of blogs (Defining Wi-Fi) will likely stretch to infinity. The blogs will focus on defining terms related to Wi-Fi at a level between the dictionary and a concise encyclopedia, but not quite matching either. Hopefully, the community finds them helpful over time.

NOTE: Entry created August 26, 2016.

The noise floor, in wireless networking, is the RF energy present at the receiver from other intentional and unintentional radiators, nearby or at a distance, as well as from natural phenomena, resulting in electromagnetic energy at some measurable level. Defined differently, it is the sum of all the signals and energy sources you are not trying to receive. It is a moment-by-moment factor in RF signal reception. The following capture from AirMagnet Spectrum XT shows the noise floor related to channels 1 and 6 in 2.4 GHz.

Noise Floor in Spectrum XT

Two common myths are believed about the noise floor.

  1. The noise floor is the same on all channels in a band.
  2. The noise floor can be measured once and that single measurement represents a constant noise floor.

The first myth is important because the noise floor may well be several dB higher on some channels than on others (remember, -95 dBm is higher than -100 dBm when measuring RF energy). This will impact SNR (read my definition of SNR here) and cause variance in the data rates available on those channels if not considered. While the noise floor may be constant across channels in what we sometimes call a “clean” environment, it is not uncommon to see channel 1 with a noise floor of, say, -97 dBm and channel 6 with a noise floor of, say, -95 dBm (these numbers are just for example purposes). That 2 dB variance is a difference of roughly 60% in noise power. Depending on the received signal strength, it can easily result in a data rate 2-3 levels (or more) lower on the channel with the higher noise floor.
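For those who want the arithmetic behind that percentage, here is a quick sketch of how a dB difference converts to a linear power ratio; the dBm values are simply the example figures above.

```python
# Converting a dB difference to a linear power ratio. A 2 dB difference in
# noise floor (e.g., -97 dBm vs. -95 dBm) is roughly 58-60% more power.

def db_to_ratio(db: float) -> float:
    return 10 ** (db / 10)

delta_db = (-95) - (-97)   # channel 6 noise floor minus channel 1 noise floor
print(f"{(db_to_ratio(delta_db) - 1) * 100:.0f}% more noise power")  # ~58%
```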

The second myth assumes that there are no intermittent radiators (a term used instead of transmitters to include unintentional radiators) present. Such radiators may only generate RF energy periodically and can be missed with a quick measurement. Additionally, such devices may cause reception problems after the WLAN is operational because of their manual use. That is, a human turns them on only when needed, and they cause no interference the rest of the time. Microwave ovens are a classic example.

We usually use the term interference (instead of noise floor), which I will define in detail in a later post, to reference nearby radiators that cause significant RF energy in a channel at levels greater than what the noise floor would be without them, such as the previously mentioned microwave oven. This differentiation is important because we can often do something about such components (remove them, change the channels, shield them, etc.). However, when considering the noise floor on a moment-by-moment basis, one could argue that these devices raise the noise floor. Why? Because even when they are present, a lower data rate Wi-Fi signal may be able to get through, if sufficient SNR can still be achieved.

However, if the other device is a transmitting device and not simply a radiating device, such a design decision may result in interference caused by the Wi-Fi device against the non-Wi-Fi device. Additionally, the Wi-Fi device is not likely to change its data rate based on one or even two frame retries. Therefore, the raised noise floor (interference in this case) results in higher retries instead of data rate shifts when the interference is on a low duty cycle (it does not communicate a large percentage of the time). Yes, it can get complicated.

Here’s a great analogy for the noise floor. Many people like to sleep with a fan on. Why do they do this? They are raising the noise floor (related to sound waves, of course, instead of RF electromagnetic waves). When the noise floor around them is raised, distant noises do not have as much signal-to-noise ratio and are less likely to alert the sleeper. They are intentionally making it more difficult to receive audible signals by raising the noise floor.

The RF/electromagnetic noise floor is an important consideration in design. In an environment with a higher noise floor, the APs must be placed and configured with this in mind. Many vendor recommendations for AP placement and hardware specifications assume a particular noise floor (that they seldom communicate). If your environment presents a very different noise floor, their recommendations and receiver sensitivity ratings may not prove true.

Defining Wi-Fi: CCI (Co-Channel Interference) also called CCC (Co-Channel Contention)

This series of blogs (Defining Wi-Fi) will likely stretch to infinity. The blogs will focus on defining terms related to Wi-Fi at a level between the dictionary and a concise encyclopedia, but not quite matching either. Hopefully, the community finds them helpful over time.

NOTE: Entry created August 24, 2016.

Co-Channel Interference (CCI), also called Co-Channel Contention (CCC), which is the more apt name though not the one used in the standard, is an important factor to consider in WLAN design. Co-Channel Interference occurs when a device (station) participating in one Basic Service Set (BSS) must defer access to the medium while a device from a different service set (either an infrastructure or independent BSS) is using the medium. This behavior is normal and is the intentional result of the 802.11 specifications. It is driven by the standard Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) algorithms defined in the 802.11 protocol.

For further understanding, consider a scenario where a laptop (STA1) is connected to an AP on channel 1 (AP1). Another AP (AP2) is on channel 1 at some distance, and another laptop (STA2) is connected to that remote AP. Even if the two APs are not required to defer to each other's frames (because the signal level is too low), the two laptops must defer to each other's frames if they can hear each other at a sufficient signal level.

CCI

That is, the two laptops are transmitting on channel 1 within sufficient range of each other; therefore, they must contend with each other for access to the medium, resulting in CCI. Additionally, both laptops may transmit a strong enough signal to cause both APs to defer, even though each laptop has chosen to associate to only one of the APs based on superior signal strength. Likewise, both APs may transmit a strong enough signal to cause both laptops to defer, even though each laptop is associated to only one of the APs.

To be clear, it is common for APs to create CCI with each other. The point of using this example is to eradicate, from the start, the common myth that CCI is just about APs. CCI is created by any 802.11 device operating on the same channel with sufficient received signal strength at another device on the same channel.

Now, because CCI is not like other RF interference, a modern movement to call it Co-Channel Contention (CCC) has started. In my opinion, this is not a bad thing. CCC brings more clarity to the picture. CCI is about contention and not traditional interference.

What we commonly call interference is a non-Wi-Fi signal, or a Wi-Fi signal from another channel, that corrupts the frames on the channel on which a device is operating. That is, with other types of interference, unlike contention, the Wi-Fi client may gain access to the medium and begin transmitting a frame while the non-Wi-Fi (or other-channel Wi-Fi) device is not communicating, such that the transmitting Wi-Fi device sees a clear channel. During the frame transmission, the other transmitter may begin transmitting as well, without detecting the energy already on the channel, and corrupt the Wi-Fi frame. This is not the same as CCI.

Excessive CCI results in very poor performance on the WLAN. With too many devices on a given channel, whether in a single BSS or from multiple service sets, the capacity of the channel is quickly consumed and performance per device is greatly diminished. For this reason, CCI must be carefully considered during WLAN design.
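To illustrate the capacity point, here is a simplified, back-of-the-napkin sketch. The channel capacity and protocol efficiency figures are assumed round numbers for illustration, not measurements of any particular network.

```python
# Simplified illustration of co-channel contention: every 802.11 device that
# can hear the channel shares its airtime, regardless of which BSS it joined.

def per_device_share(channel_capacity_mbps: float, contending_devices: int,
                     protocol_efficiency: float = 0.6) -> float:
    """Rough best-case per-device throughput when airtime is shared equally."""
    return channel_capacity_mbps * protocol_efficiency / contending_devices

for devices in (5, 20, 50):
    print(devices, "devices ->", round(per_device_share(100, devices), 1), "Mbps each")
```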

Defining Wi-Fi: SNR (Signal-to-Noise Ratio)

This series of blogs (Defining Wi-Fi) will likely stretch to infinity. The blogs will focus on defining terms related to Wi-Fi at a level between the dictionary and a concise encyclopedia, but not quite matching either. Hopefully, the community finds them helpful over time.

NOTE: Entry created August 20, 2016.

Signal-to-noise ratio (SNR) is a measurement used to define the quality of an RF signal at a given location. It is the primary determiner of data rate for a given device, as the SNR must be sufficient to achieve a particular data rate. Simply stated, more complex modulation methods require higher SNR values to be demodulated, and lower SNR values require that the modulation method become less complex. More complex modulation methods result in higher data rates and less complex methods result in lower data rates.

NOTE: Whether you choose the word complex or detailed, the end meaning is the same. A higher SNR is required for higher data rate modulation methods.

The SNR is defined, in Wi-Fi, as the difference between the desired received signal and the noise floor. The noise floor may vary for each channel within the monitored band such that the noise floor may be greater for one channel than for another. Additionally, intermittent, non-Wi-Fi, interfering devices that use the same frequency range as the Wi-Fi device may reduce the SNR available at any moment.

SNR can be calculated with the following formula:

SNR = received signal strength – noise floor

For example, if the received signal strength is -75 dBm and the noise floor is -90 dBm, the SNR is 15 dB. An SNR of 25 dB or greater is desired for improved data rates and, therefore, improved throughput.

NOTE: SNR is defined in dB and not dBm as SNR is relative.
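Expressed as code, the formula and the example above look like this. It is a trivial sketch: the 25 dB target is the figure mentioned above, and the noise floor values are examples only.

```python
# SNR arithmetic from the formula above (all values in dBm except SNR, in dB).

def snr(received_dbm: float, noise_floor_dbm: float) -> float:
    return received_dbm - noise_floor_dbm

def required_signal(noise_floor_dbm: float, target_snr_db: float) -> float:
    """Signal strength needed to reach a target SNR over a given noise floor."""
    return noise_floor_dbm + target_snr_db

print(snr(-75, -90))             # 15 dB, as in the example above
print(required_signal(-90, 25))  # need -65 dBm or stronger for 25 dB SNR
```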

Finally, many vendor specification (spec) sheets list receive sensitivity values for specific data rates. They indicate that you can accomplish a particular data rate with a specified signal strength (or greater). The following is an example of such a spec sheet from the Orinoco USB-9100 802.11ac adapter.

Orinoco USB-9100 Spec Sheet

Remember that these spec sheets assume a noise floor value (which is never communicated in the spec sheets), and a noise floor different from what they assume would require a higher signal strength than the one listed, because SNR is what you actually need to achieve a given data rate. Also, remember that a higher signal strength is a number closer to zero (a smaller absolute value) because we are referencing negative dBm values; therefore, -65 dBm is higher than -70 dBm. This can sometimes be confusing to those new to Wi-Fi.

Defining Wi-Fi: Radio Frequency

This series of blogs (Defining Wi-Fi) will likely stretch to infinity. The blogs will focus on defining terms related to Wi-Fi at a level between the dictionary and a concise encyclopedia, but not quite matching either. Hopefully, the community finds them helpful over time.

NOTE: Entry created August 5, 2016.

Radio Frequency (RF) is a term used to reference a portion of the electromagnetic spectrum that is used for 802.11 (and other) network communications. Wi-Fi networks use RF waves in the microwave frequency range. Frequencies used in 802.11 networks range from 700 MHz to 60 GHz. The vast majority of Wi-Fi networks use the 2.4 GHz and 5 GHz frequency bands.

The Radio Spectrum – Image credit: CEPL

RF is the carrier onto which data is modulated. The RF waves are manipulated to represent digital data bits; the amplitude and phase of the wave can be changed to indicate binary data. Several modulation techniques are used in 802.11 networks, including Binary Phase Shift Keying (BPSK), Quadrature Phase Shift Keying (QPSK), 16-Quadrature Amplitude Modulation (16-QAM), 64-QAM and 256-QAM. The modulation used, in addition to coding rates and a few other factors, determines the data rate for a given channel width (the range of frequencies used to define the channel).
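As a small illustration of why higher-order modulation yields higher data rates, the following sketch lists the bits carried per symbol by each of the schemes named above. Keep in mind that actual 802.11 rates also depend on coding rate, channel width, guard interval and spatial streams.

```python
# Bits carried per symbol for the modulation schemes named above.
BITS_PER_SYMBOL = {
    "BPSK": 1,      # 2 constellation points
    "QPSK": 2,      # 4 points
    "16-QAM": 4,    # 16 points
    "64-QAM": 6,    # 64 points
    "256-QAM": 8,   # 256 points
}

for name, bits in BITS_PER_SYMBOL.items():
    print(f"{name:8s} {2**bits:4d} constellation points, {bits} bits/symbol")
```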

Wireless Access Point (AP) Internals for Cisco WAP371

So I took an AP apart today because I needed some pics of internals. Thought I’d share with all a little nugget or two.

AP Internals (WAP371)
Most WLAN APs have three common components:

  • Radio chipset
  • Antennas
  • Ethernet Port

In addition, they will have a CPU, memory and filters. Batteries are often used to retain configuration settings when the device is disconnected from power. The image above shows the internals of the WAP371.

The WAP371 uses the Broadcom BCM43460, which is a 3×3:3 802.11ac chipset. The AP vendor may not implement the full capabilities of a chipset. Implementing reduced capabilities may be driven by:

  • Reduced manufacturing costs.
  • Reduction in power requirements.
  • Use of early chipsets and rapid entry to market.

The BCM43460 was an early release based on the 802.11ac draft. It is a three-stream chipset and supports up to 80 MHz channels in 5 GHz and 802.11n capabilities in 2.4 GHz. The chipset supports airtime fairness and transmit beamforming. Without specifying exact details, Broadcom states that it has “full IEEE 802.11a/b/g/n legacy compatibility with enhanced performance.”

By the way, the chip in the image labeled LDT0579 from LinkCom is the PoE power transformer chip.

WLAN (Wireless LAN) Administration Guidelines

Best practices provide a foundation on which to build specific policies and procedures for unique organizations. Wireless networks do not necessarily require the reinvention of administration best practices. Several best practices can be borrowed from wired network administration including:

  • Configure devices offline
  • Backup configurations
  • Document changes
  • Update devices periodically
  • Perform analysis occasionally

Configuring devices offline provides two major benefits: improved security and greater network stability. Security is improved because the new device is not connected to the network until it is configured according to organizational security policies. Stability is improved because devices are added to the network only after they are configured to operate properly within the network. This best practice should be part of any IT organization’s operational procedures.

Initial device configuration can take anywhere from a few minutes to a few days. As a wireless technology professional, you will want to avoid unnecessary manual reconfigurations. The best way to avoid this extra work is to back up the configuration settings for any device that provides a backup facility. Many devices allow you to save the backup to a file that is stored separately from the device, while some devices allow only internal backups stored in the memory of the device. While an external backup is preferred, the internal backup should be utilized if it is the only method supported. Even with modern “centralized” WLAN technologies, something has to be backed up (for example, the controller or the cloud) by somebody (for example, you or your service provider).

Device configurations are often modified several times over their lifecycle. It is not uncommon for a device to be modified more than a dozen times a year. These configuration changes should also be saved to a backup. If the device supports it, I usually back up the initial configuration and then back up each modified configuration to a separate backup file. However the backup is performed, it is important to back up the changes as well as the initial configuration. As much as we talk about the importance of documentation, IT professionals seldom document minor changes they make to device configurations. These minor changes add up to a big difference over time, and the easiest way to document them is to back them up.
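For those who want to automate the habit, here is a minimal sketch of versioned, timestamped configuration backups. The fetch_running_config() function is a placeholder standing in for whatever export mechanism your device or controller actually provides (SSH, API, web download, etc.).

```python
# Generic sketch of versioned configuration backups; fetch_running_config()
# is a placeholder, not a real vendor API.

from datetime import datetime
from pathlib import Path

def fetch_running_config(device_name: str) -> str:
    """Placeholder for a device-specific configuration export."""
    return f"! running configuration for {device_name}\n"

def backup_config(device_name: str, backup_dir: str = "config-backups") -> Path:
    """Save the current configuration to a timestamped file so every change
    is documented alongside the initial configuration."""
    Path(backup_dir).mkdir(exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = Path(backup_dir) / f"{device_name}-{stamp}.cfg"
    target.write_text(fetch_running_config(device_name))
    return target

print(backup_config("wap371-lobby"))
```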

Finally, occasional analysis of the network will allow you to determine whether it is still performing acceptably. On wired networks, administrators spend most of their time analyzing the performance of the network from a strict data throughput perspective (though security monitoring and occasional troubleshooting tasks are also performed). On wireless networks, the issue of coverage must also be considered. Are the needed areas still receiving coverage at the required data rates? If you look only at the throughput at the APs, you may miss problems occurring in coverage patterns. If you look only at the coverage, you may miss problems related to throughput. Both are important.

In addition to these practices borrowed from the wired networking world, wireless networks introduce new guidelines. These wireless-specific guidelines include:

  • Test the RF behavior after environmental changes
  • Update security solutions as needed
  • Remove configurations from decommissioned devices

The first wireless-specific guideline is really a subset of the wired best practice of occasional analysis. As I stated previously, wireless networks introduce the need to look at more than throughput metrics at the port level. We must analyze the RF behavior and ensure that coverage is still provided where it is needed. This extra requirement is driven by the nature of RF communications. Aside from implementing enterprise-class monitoring systems, the small business or home office will require occasional analysis and adjustments based on the results.

Wired and wireless networks both require updated security solutions, but if history is our teacher, wireless networks may require such updates more frequently (though the last five-plus years have honestly been mostly quiet in this area, as WPA2 has proven very worthy so far). The nature of wireless communications allows attacks to be made without physical access to the premises. This fact may be the reason behind the more rapid discovery of vulnerabilities. WEP was shown to be flawed in less than three years. WPA and 802.11i have a backward compatibility weakness when using TKIP that may allow for ARP poisoning or Denial of Service attacks, and this weakness was discovered within five years of ratification. The problem is that these solutions (WEP and 802.11i) are intended to provide wireless with security at or above the level of a wired network (WEP stands for Wired Equivalent Privacy), and yet they do not always achieve it. Since new exploits are discovered periodically, we may be forced to change the security solution we’re using every three to five years (though the past several years have shown greater general stability). I am using a wired Ethernet port right now that was installed more than ten years ago – no security changes have been needed to meet the level of a physical port because it is, well, a physical port.

However, this issue of meeting wired equivalence may be less of an issue than the level at which it is often presented. Do we really need to ensure that our wireless links are equivalent to our wired links? Not if they are used for different things or if we can provide effective security at higher layers. For example, some organizations require IPSec VPN tunnels for any wireless links that connect to sensitive data, though this has become far less common today with the strength of WPA2.

Finally, since the security settings of the wireless network are often stored in the APs and client devices, it is crucial that you remove the configuration settings before decommissioning the hardware. If you leave the WPA passphrase (used with WPA-PSK) in the device’s configuration settings, the next person to acquire the equipment may be able to retrieve the information and use it to gain access to your network. The likelihood of this occurring is slim (very slim), but it doesn’t take long to remove the configuration and it is common for machines to be wiped before decommissioning them anyway.

These guidelines give you a good starting point. Do you have additional recommendations?

The Importance of Data Classification (Information Classification)

The importance of security varies by organization. The variations exist because of the differing values placed on information and networks within organizations. For example, organizations involved in banking and healthcare will likely place a greater priority on information security than organizations involved in selling greeting cards. However, in every organization there exists a need to classify data so that it can be protected appropriately. The greeting card company will likely place a greater value on its customer database than it will on the log files for the Internet firewall. Both of these data files have value, but one is more valuable than the other and should be classified accordingly so that it can be protected properly.

Data classification is the process used to identify the value of data and the cost of data loss or theft. Consider that the cost of data loss is different from the cost of data theft. When data is lost, it means that you no longer have access to the data; however, it does not follow automatically that someone else does have access to the data. For example, an attacker may simply delete your data. This action results in lost data. Data theft indicates that the attacker stole the data. With the data in the attacker’s possession, the attacker can sell it or otherwise use it in a way that can damage the organization’s value. The worst-case scenario is data theft with loss. In this case, the attacker steals the data and destroys the organization's copies. Now the attacker can use the data, but the organization cannot.

When classifying data, then, you are attempting to answer the following questions:

  • How valuable is the data to the organization?
  • How valuable is the data to competitors or outside individuals?
  • Who should have access to the data?
  • Who should not have access to the data?

It might seem odd to ask both of the latter two questions, but it can be very important. For example, you may identify a group who should have access to the data with the exception of one individual in that group. In this case, the group should have access to the data, but the individual in that group should not, and the resulting permission set should be built accordingly. In a Microsoft environment, you would create a group for the individuals needing access and grant that group access to the resource. Next, you would explicitly deny access to the individual who should not have access. The denial overrides the grant and you accomplish the access required.

Many organizations will classify data so that they can easily implement and maintain permissions. For example, if data is classified as internal only, it’s a simple process to configure permissions for that data. Simply create a group named All Employees and add each internal employee to this group. Now, you can assign permissions to the All Employees group for any data classified as internal only. If data is classified as unclassified or public, you can provide access to the Everyone group in a Windows environment and achieve the needed permissions. The point is that data classification leads to simpler permission (authorization) management.
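A simplified model of that evaluation logic, where an explicit deny overrides a group grant, might look like the following sketch. This illustrates the concept only; it is not how Windows ACLs are implemented internally.

```python
# Concept sketch: grant access to a group, explicitly deny one member,
# and let the deny override the allow.

def can_access(user: str, user_groups: set[str],
               allowed: set[str], denied: set[str]) -> bool:
    if user in denied:
        return False                      # explicit deny always wins
    return user in allowed or bool(user_groups & allowed)

all_employees = {"All Employees"}
print(can_access("bob", {"All Employees"}, allowed=all_employees, denied=set()))    # True
print(can_access("eve", {"All Employees"}, allowed=all_employees, denied={"eve"}))  # False
```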

From what I’ve said so far, you can see that data classification can be defined as the process of labeling or organizing data in order to indicate the level of protection required for that data. You may define data classification levels of private, sensitive, and public. Private data would be data that should only be seen by the organization’s employees and may only be seen by a select group of the organization’s employees. Sensitive data would be data that should only be seen by the organization’s employees and approved external individuals. Public data would be data that can be viewed by anyone.

Consider the following applications of this data classification model:

  • The information on the organization’s Internet web site should fall in the classification of public data.
  • The contracts that exist between the organization and service providers or customers should fall in the classification of sensitive data.
  • Trade secrets or internal competitive processes should be classified as private data.

The private, sensitive, and public model is just one example of data classification, but it helps you to determine which data users should be allowed to store offline and which data should only be accessed while authenticated to the network. By keeping private data off of laptops, you help reduce the severity of a peer-to-peer attack that is launched solely to steal information.
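Expressed as a simple lookup, that model might look like the following sketch; the handling rules shown are examples for illustration, not policy recommendations.

```python
# The private/sensitive/public model from the text as a simple lookup table.
CLASSIFICATION_POLICY = {
    "public":    {"audience": "anyone",                         "offline_copies": True},
    "sensitive": {"audience": "employees + approved externals", "offline_copies": True},
    "private":   {"audience": "selected employees only",        "offline_copies": False},
}

def may_store_offline(classification: str) -> bool:
    return CLASSIFICATION_POLICY[classification]["offline_copies"]

print(may_store_offline("private"))  # False: keep private data off laptops
```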

This data classification process is at the core of information security, and it can be outlined as follows:

  1. Determine the value of the information in question.
  2. Apply an appropriate classification based on that value.
  3. Implement the proper security solutions for that classification of information.

From this very brief overview of information classification and security measures, you can see why different organizations have different security priorities and needs. It is also true, however, that every organization is at risk for certain threats. Threats such as denial of service (DoS), worms, and others are often promiscuous in nature. The attacker does not care what networks or systems are damaged or made less effective in a promiscuous attack. The intention of such an attack is often only to express the attacker’s ability or to serve some other motivation for the attacker, such as curiosity or need for recognition. Because many attacks are promiscuous in nature, it is very important that every organization place some level of priority on security regardless of the intrinsic value of the information or networks they employ.

Three Reasons Why My Surface Pro Is A Beast Compared To Your Non-Windows Tablet

1) Running Windows Apps
…and I mean all Windows apps. I can run a Windows XP VM, using VMware Player or other tools, and then run most any application I desire – even those not directly compatible with Windows 8. Yes, it is a bit clunky sometimes trying to “click” in the right place with my fat finger, but pulling out the pen typically resolves this issue. The point is that I can run very important software apps for an IT geek like me, such as protocol analyzers, spectrum analyzers and programming tools, and I can run them all with their full-blown power – not in some limited, nearly useless, tablet version.

 
2) It’s A Computer
…a real computer. Running with 4 GB RAM and a lickety-split fast processor, I can do anything other basic laptops can do. With a small USB 3 hub, I can connect multiple USB devices at the same time. The Surface Pro, along with the sister Windows 8 Pro tablets now coming out, is the only tablet that can “really” be used as a tablet and then as a desktop computer. When I go into my office, I can plug it into a USB cable (attached to a powered hub) and have full access to external storage, keyboard and mouse. Then I plug in the video cable and I have a large-screen monitor. The performance is as good as my 2-year-old desktop sitting across the room.

 
3) It’s A Tablet
…in spite of what many have said (mostly those who have not used it), the Surface Pro is a tablet. Granted, it’s a bit heavier than an iPad, but, then again, it can do a few thousand things the iPad can never do (because of its limited interface options and applications – that’s right, I just said the iPad has limited applications compared to the Surface Pro because it cannot run all of the Windows apps released over the past decade or more [see reason number 1]). The touch sensitivity is equal to that of my iPad and my best Android-based devices. No problems there. The pen is very accurate and makes for excellent diagramming – far superior to that available on either the iPad or the Android-based tablets.

 
As a side note – I have used iDevices off and on for more than three years and Android-based devices during that time – I have lots of experience with all three device types. I have waited a couple of months to write this post because I was initially blown away by the Surface Pro and I thought, “surely this is going to wear off and I will see the flaws in this device that make it less appealing than the Apple or Android devices.” Based on the reviews I had seen to that point, I thought I must be confused about how great it is. Now, after more than two months of use, I am more convinced than ever that, for an IT geek, the other tablets can’t even come close (though this may not be true for the general user). Going back and exploring those reviews again, it became obvious to me that most negative reviews fell into one of the following two categories:

  • Reviews by people who had not used the Surface Pro but commented only on its features.
  • Reviews by people who had used Apple devices for nearly all their work (laptops and tablets) for several years.

Certainly, people in the first category should not be taken seriously. People in the second category should be taken very seriously because they do present an issue for Microsoft. Microsoft has to address the learning curve for that group (and it includes many, many younger buyers today). But I don’t work for Microsoft marketing, so that’s their problem, and this adaptivity is not in any way a reflection of usefulness or value for those who are willing to adapt. Stated another way, if a device is harder to use for someone who has been using another device, this is not an important factor in the measurement of either the usability or the functional usefulness of that device. It is simply proof that they know how to use the other device better. Simple as that. From a functional perspective, no one can argue with sincerity that the iPad or Android tablets offer more than the Surface Pro (with the possible exception of access to memory cards, but that is easily solved with a USB memory card reader – though it is, admittedly, not a pretty solution).

 
The reality is that I could go on with another thirty reasons that the Surface Pro is far better for the average IT geek than the other non-Windows tablets, but I simply lack the energy to persuade you. My goal is not really to persuade anyone anyway – just to be a voice that is not influenced by the anti-Microsoft bias that is so common out there. Here’s the way I would summarize it. Do you want a device that can do all the following in equal capability to a laptop while being a true tablet?

  • Run advanced IT software
  • Access custom USB hardware
  • Run virtual machines
  • Run Office – real Office or Office-like applications with all capabilities
  • Access hundreds of thousands (millions ?) of full-featured applications
  • Current access to tens of thousands of custom Windows 8 UI apps (with a growth rate surpassing 100,000 by the end of summer) – think of these as the “tablet” apps for Windows 8
  • The best Internet browsing experience of any tablet (remember, you can install Firefox or Chrome on here – and I mean the real ones, not the lame tablet releases [smile])

Then the Surface Pro (or one of its sister Windows 8 tablets coming out from other vendors) is right for you. Certainly, it’s not for everyone, but I cannot even fathom thinking the competing OS-based tablets are better tablet tools for the standard IT pro. However, many will disagree with me and just keep complaining to software vendors about the fact that their needed IT tools are just not available for the iPad that they use.

 

Just sayin’