
WLAN (Wireless LAN) Administration Guidelines

Best practices provide a foundation on which to build specific policies and procedures for an individual organization. Wireless networks do not necessarily require the reinvention of administration best practices. Several best practices can be borrowed from wired network administration, including:

  • Configure devices offline
  • Backup configurations
  • Document changes
  • Update devices periodically
  • Perform analysis occasionally

Configuring devices offline provides two major benefits: improved security and greater network stability. Security is improved because the new device is not connected to the network until it is configured according to organizational security policies. Stability is improved because devices are added to the network only after they are configured to operate properly within the network. This best practice should be part of any IT organization’s operational procedures.

Initial device configuration can take anywhere from a few minutes to a few days. As a wireless technology professional, you will want to avoid unnecessary manual reconfigurations. The best way to avoid this extra work is to back up the configuration settings for any device that provides a backup facility. Many devices allow you to save the backup to a file that is stored separately from the device, while some devices allow only internal backups that are stored in the memory of the device. While the external backup is preferred, the internal backup should be used if it is the only method supported. Even with modern “centralized” WLAN technologies, something still has to be backed up (for example, the controller or the cloud) by somebody (for example, you or your service provider).

Device configurations are often modified several times over their lifecycle. It is not uncommon for a device to be modified more than a dozen times a year. These configuration changes should also be saved to a backup. If the device supports it, I usually back up the initial configuration and then back up each modified configuration to a separate backup file. However the backup is performed, it is important to back up the changes as well as the initial configuration. As much as we talk about the importance of documentation, IT professionals seldom document minor changes they make to device configurations. These minor changes add up to a big difference over time, and the easiest way to document them is to back them up.
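
For devices that expose a command-line interface over SSH, even a small script can keep each configuration in its own timestamped file. The sketch below is only an illustration: it assumes a hypothetical device that answers a "show running-config" command, it uses the third-party paramiko library (pip install paramiko), and the host, credentials, and command are placeholders you would swap for whatever your AP or controller actually supports.

    # Minimal sketch: pull a device's configuration over SSH and save it to a
    # timestamped file, so initial and modified configurations stay separate.
    # Host, credentials, and the "show running-config" command are placeholders.
    import datetime
    import pathlib

    import paramiko  # third-party SSH library (pip install paramiko)

    def backup_config(host, username, password, backup_dir="backups"):
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=username, password=password)
        try:
            _, stdout, _ = client.exec_command("show running-config")
            config_text = stdout.read().decode("utf-8", errors="replace")
        finally:
            client.close()

        timestamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        out_dir = pathlib.Path(backup_dir)
        out_dir.mkdir(parents=True, exist_ok=True)
        out_file = out_dir / "{0}-{1}.cfg".format(host, timestamp)
        out_file.write_text(config_text)
        return out_file

    if __name__ == "__main__":
        print(backup_config("192.0.2.10", "admin", "changeme"))

Run something like this after the initial configuration and again after every change, and you get the change documentation described above almost for free.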

Finally, occasional analysis of the network will allow you to determine whether it is still performing acceptably. On wired networks, administrators spend most of their time analyzing the performance of the network from a strict data throughput perspective (though security monitoring is also performed frequently, along with occasional troubleshooting). On wireless networks, the issue of coverage must also be considered. Are the needed areas still receiving coverage at the required data rates? If you look only at the throughput at the APs, you may miss problems occurring in coverage patterns. If you look only at the coverage, you may miss problems related to throughput. Both are important.

In addition to these practices borrowed from the wired networking world, wireless networks introduce new guidelines. These wireless-specific guidelines include:

  • Test the RF behavior after environmental changes
  • Update security solutions as needed
  • Remove configurations from decommissioned devices

The first wireless-specific guideline is really a subset of the wired best practice of performing occasional analysis. As I stated previously, wireless networks introduce the need to look at more than throughput metrics at the port level. We must analyze the RF behavior and ensure that coverage is still provided where it is needed. This extra requirement is driven by the nature of RF communications. Short of implementing an enterprise-class monitoring system, the small business or home office will require occasional manual analysis and adjustments based on the results.

Wired and wireless networks both require updated security solutions, but if history is our teacher, wireless networks may require such updates more frequently (though the last five-plus years have honestly been mostly quiet in this area, as WPA2 has proven very worthy so far). The nature of wireless communications allows attacks to be made without physical access to the premises. This fact may be the reason behind the more rapid discovery of vulnerabilities. WEP was shown to be flawed in less than three years. WPA and 802.11i have a backward compatibility weakness when using TKIP that may allow for ARP poisoning or denial of service attacks, and this weakness was discovered within five years of ratification. The problem is that these solutions (WEP and 802.11i) are intended to provide wireless with security at or above the level of a wired network (WEP stands for Wired Equivalent Privacy), and yet they do not always achieve it. Since new exploits are discovered periodically, we may be forced to change the security solution we’re using every three to five years (though the past several years have shown greater general stability). I am using a wired Ethernet port right now that was installed more than ten years ago – no security changes have been needed to meet the level of a physical port because it is, well, a physical port.

However, meeting wired equivalence may be a smaller concern than it is often presented to be. Do we really need to ensure that our wireless links are equivalent to our wired links? Not if they are used for different things or if we can provide effective security at higher layers. For example, some organizations require IPsec VPN tunnels for any wireless links that connect to sensitive data, though this has become far less common today given the strength of WPA2.

Finally, since the security settings of the wireless network are often stored in the APs and client devices, it is crucial that you remove the configuration settings before decommissioning the hardware. If you leave the WPA passphrase (used with WPA-PSK) in the device’s configuration settings, the next person to acquire the equipment may be able to retrieve the information and use it to gain access to your network. The likelihood of this occurring is slim (very slim), but it doesn’t take long to remove the configuration and it is common for machines to be wiped before decommissioning them anyway.

These guidelines give you a good starting point. Do you have additional recommendations?

The Importance of Data Classification (Information Classification)

The importance of security varies by organization. The variations exist because of the differing values placed on information and networks within organizations. For example, organizations involved in banking and healthcare will likely place a greater priority on information security than organizations involved in selling greeting cards. However, in every organization there exists a need to classify data so that it can be protected appropriately. The greeting card company will likely place a greater value on its customer database than it will on the log files for the Internet firewall. Both of these data files have value, but one is more valuable than the other and should be classified accordingly so that it can be protected properly.

Data classification is the process used to identify the value of data and the cost of data loss or theft. Consider that the cost of data loss is different from the cost of data theft. When data is lost, it means that you no longer have access to the data; however, it does not follow automatically that someone else does have access to it. For example, an attacker may simply delete your data. This action results in lost data. Data theft indicates that the attacker stole the data. With the data in the attacker’s possession, the attacker can sell it or otherwise use it in a way that can damage the organization’s value. The worst-case scenario is data theft with loss. In this case, the attacker steals the data and destroys the copies. Now the attacker can use the data, but the organization cannot.

When classifying data, then, you are attempting to answer the following questions:

  • How valuable is the data to the organization?
  • How valuable is the data to competitors or outside individuals?
  • Who should have access to the data?
  • Who should not have access to the data?

It might seem odd to ask both of the latter two questions, but it can be very important. For example, you may identify a group that should have access to the data with the exception of one individual in that group. In this case, the group should have access to the data, but the individual in that group should not, and the resulting permission set should be built accordingly. In a Microsoft environment, you would create a group for the individuals needing access and grant that group access to the resource. Next, you would explicitly deny access to the individual who should not have access. The denial overrides the grant, and you achieve the required access.
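
A minimal sketch of that "deny overrides grant" evaluation is shown below. This is purely illustrative (it is not the actual Windows ACL algorithm), and the group and user names are hypothetical.

    # Illustrative sketch of "explicit deny overrides grant" access evaluation.
    # Group and user names are hypothetical; this is not the real Windows ACL logic.

    def is_access_allowed(user, user_groups, allowed, denied):
        principals = {user} | set(user_groups)
        if principals & denied:            # any explicit deny wins...
            return False
        return bool(principals & allowed)  # ...otherwise an allow grants access

    # The Sales group is granted access, but one member is explicitly denied.
    allowed = {"Sales"}
    denied = {"jsmith"}

    print(is_access_allowed("mjones", ["Sales"], allowed, denied))  # True
    print(is_access_allowed("jsmith", ["Sales"], allowed, denied))  # False - deny wins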

Many organizations will classify data so that they can easily implement and maintain permissions. For example, if data is classified as internal only, it’s a simple process to configure permissions for that data. Simply create a group named All Employees and add each internal employee to this group. Now, you can assign permissions to the All Employees group for any data classified as internal only. If data is classified as unclassified or public, you can provide access to the Everyone group in a Windows environment and achieve the needed permissions. The point is that data classification leads to simpler permission (authorization) management.

From what I’ve said so far, you can see that data classification can be defined as the process of labeling or organizing data in order to indicate the level of protection required for that data. You may define data classification levels of private, sensitive, and public. Private data would be data that should only be seen by the organization’s employees and may only be seen by a select group of the organization’s employees. Sensitive data would be data that should only be seen by the organization’s employees and approved external individuals. Public data would be data that can be viewed by anyone.

Consider the following applications of this data classification model:

  • The information on the organization’s Internet web site should fall in the classification of public data.
  • The contracts that exist between the organization and service providers or customers should fall in the classification of sensitive data.
  • Trade secrets or internal competitive processes should be classified as private data.

The private, sensitive, and public model is just one example of data classification, but it helps you to determine which data users should be allowed to store offline and which data should only be accessed while authenticated to the network. By keeping private data off of laptops, you help reduce the severity of a peer-to-peer attack that is launched solely to steal information.

This data classification process is at the core of information security, and it can be outlined as follows (a rough sketch in code follows the list):

  1. Determine the value of the information in question.
  2. Apply an appropriate classification based on that value.
  3. Implement the proper security solutions for that classification of information.
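
As a rough illustration of that outline, the sketch below maps an estimated data value to a classification label and then to a set of protections. The thresholds and control names are entirely hypothetical; real values come from your own risk analysis.

    # Hypothetical mapping from estimated data value to a classification and controls.
    # Thresholds and control names are illustrative only.

    def classify(estimated_value):
        if estimated_value >= 1_000_000:
            return "private"
        if estimated_value >= 50_000:
            return "sensitive"
        return "public"

    CONTROLS = {
        "private":   ["internal groups only", "encryption at rest", "no offline copies"],
        "sensitive": ["employees and approved external parties", "encryption in transit"],
        "public":    ["Everyone group / anonymous read"],
    }

    for asset, value in [("customer database", 2_000_000),
                         ("vendor contracts", 100_000),
                         ("web site content", 0)]:
        label = classify(value)
        print(asset, "->", label, CONTROLS[label])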

From this very brief overview of information classification and security measures, you can see why different organizations have different security priorities and needs. It is also true, however, that every organization is at risk for certain threats. Threats such as denial of service (DoS), worms, and others are often promiscuous in nature. The attacker does not care what networks or systems are damaged or made less effective in a promiscuous attack. The intention of such an attack is often only to express the attacker’s ability or to serve some other motivation for the attacker, such as curiosity or need for recognition. Because many attacks are promiscuous in nature, it is very important that every organization place some level of priority on security regardless of the intrinsic value of the information or networks they employ.

Three Reasons Why My Surface Pro Is A Beast Compared To Your Non-Windows Tablet

1) Running Windows Apps
…and I mean all Windows Apps. I can run a Windows XP VM, using VMware Player or other tools, and then run most any application I desire – even those not directly compatible with Windows 8. Yes, it is a bit clunky sometimes trying to “click” in the right place with my fat finger, but pulling out the pen typically resolves this issue. The point is that I can run very important software apps for an IT geek like me, such as protocol analyzers, spectrum analyzers and programming tools and I can run them all in their full-blown power – not in some limited, nearly useless, tablet version.

2) It’s A Computer
…a real computer. Running with 4 GB of RAM and a lickety-split fast processor, I can do anything a basic laptop can do. With a small USB 3 hub, I can connect multiple USB devices at the same time. The Surface Pro, and its sister Windows 8 Pro tablets now coming out, is the only tablet that can “really” be used as a tablet and then as a desktop computer. When I go into my office, I can plug it into a USB cable (attached to a powered hub) and have full access to external storage, keyboard and mouse. Then I plug in the video cable and I have a large-screen monitor. The performance is as good as my two-year-old desktop sitting across the room.

3) It’s A Tablet
…in spite of what many have said (mostly those who have not used it), the Surface Pro is a tablet. Granted, it’s a bit heavier than an iPad, but, then again, it can do a few thousand things the iPad can never do (because of its limited interface options and applications – that’s right, I just said the iPad has limited applications compared to the Surface Pro because it cannot run all of the Windows apps released over the past decade or more [see reason number 1]). The touch sensitivity is equal to that of my iPad and my best Android-based devices. No problems there. The pen is very accurate and makes for excellent diagramming – far superior to that available on either the iPad or the Android-based tablets.

As a side note – I have used iDevices off and on for more than three years and Android-based devices during that time – I have lots of experience with all three device types. I have waited a couple of months to write this post because I was initially blown away by the Surface Pro and I thought, “surely this is going to wear off and I will see the flaws in this device that make it less appealing than the Apple or Android devices.” Based on the reviews I had seen to that point, I thought I must be confused about how great it is. Now, after more than two months of use, I am more convinced than ever that, for an IT geek, the other tablets can’t even come close (though this may not be true for the general user). Going back and exploring those reviews again, it became obvious to me that most negative reviews fell into one of the following two categories:

  • Reviews by people who had not used the Surface Pro but commented only on its features.
  • Reviews by people who had used Apple devices for nearly all their work (laptops and tablets) for several years.

Certainly, people in the first category should not be taken seriously. People in the second category should be taken very seriously because they do present an issue for Microsoft. Microsoft has to address the learning curve for that group (and it includes many, many younger buyers today). But I don’t work for Microsoft marketing, so that’s their problem, and this learning curve is not in any way a reflection of usefulness or value for those who are willing to adapt. Stated another way, if a device is harder to use for someone who has been using another device, this is not an important factor in the measurement of either the usability or the functional usefulness of that device. It is simply proof that they know how to use the other device better. Simple as that. From a functional perspective, no one can argue with sincerity that the iPad or Android tablets offer more than the Surface Pro (with the possible exception of access to memory cards, but that is easily solved with a USB memory card reader – though it is, admittedly, not a pretty solution).

The reality is that I could go on with another thirty reasons that the Surface Pro is far better for the average IT geek than the other non-Windows tablets, but I simply lack the energy to persuade you. My goal is not really to persuade anyone anyway – just to be a voice that is not influenced by the anti-Microsoft bias that is so common out there. Here’s the way I would summarize it. Do you want a device that can do all of the following with capability equal to a laptop’s while being a true tablet?

  • Run advanced IT software
  • Access custom USB hardware
  • Run virtual machines
  • Run Office – real Office or Office-like applications with all capabilities
  • Access hundreds of thousands (millions ?) of full-featured applications
  • Current access to tens of thousands of custom Windows 8 UI apps (with a growth rate surpassing 100,000 by the end of summer) – think of these as the “tablet” apps for Windows 8
  • The best Internet browsing experience of any tablet (remember, you can install Firefox or Chrome on here – and I mean the real ones, not the lame tablet releases [smile])

Then the Surface Pro (or one of its sister Windows 8 tablets coming out from other vendors) is right for you. Certainly, it’s not for everyone, but I cannot even fathom thinking the competing OS-based tablets are better tablet tools for the standard IT pro. However, many will disagree with me and just keep complaining to software vendors about the fact that the IT tools they need are simply not available for the iPad they use.

Just sayin’

New Group Policy Settings in Windows 8 and Windows Server 2012

With each new edition of Windows, Microsoft adds new Group Policy capabilities. Group Policy has been with us since the release of Windows 2000, but has roots going back to Windows 95 and Windows NT in the older System Policies. Group Policy allows you to centrally configure settings for Windows clients and servers using Active Directory for deployment and application of these settings.

Windows 8 and Windows Server 2012 introduce new Group Policy settings that may be important to network and system admins. Most of the new settings are related to new features, but many of them are related to existing features from previous editions of Windows as well. In total, Windows Server 2012 now supports more than 3,400 policy settings. Some apply only to older versions and some apply only to newer versions, but with this many policy settings you clearly have a lot of flexibility in centralized management and configuration of the Windows OS.

In all, 169 new policies have been created that require Windows 8 or Windows Server 2012 to function. This does not include the policies that require Internet Explorer 10, which typically means you’re running Windows 8 or Windows Server 2012 as well.

Examples of important policies for Enterprise deployments include:

  • Turn off the Store application (can be applied to users or computers)
  • Turn off tile notifications (the rectangles on the Start screen)
  • Turn off toast notifications (the popup notifications in the upper right corner)
  • Location where all default Library definition files for users/machines reside (allows you to point to a single location for consistent Libraries on all computers)
  • Prevent users from uninstalling applications from Start (normally, you can right-click a tile and simply click uninstall – not good in the Enterprise!)
  • Turn off app notifications on the lock screen (may be required for privacy or to reduce network bandwidth consumption)

In addition to the Windows 8 and Windows Server 2012 policies, another 69 require Internet Explorer 10 or above. You can learn more about all the policies (old and new) by downloading the Group Policy Settings Reference for Windows and Windows Server located here.

Headed to Wireless Field Day 3 (WFD3)

I have been selected as a delegate for WFD3 and will be attending in September. You can learn more about it here: http://techfieldday.com/2012/wfd3. This event is just one more thing that shows how great it is to be an IT professional. We understand community. We breathe community. We are not the nerds in the basement anymore. Now, we’re the nerds on Twitter :-)

Seriously though, the WFD3 event is going to be exciting. We get to visit with vendors and learn about the latest enhancements to their technologies for WLANs. I’m looking forward to learning new information and hanging out with the greatest techs in the whole industry: Wi-Fi geeks.

During the event, I will blog, tweet and post videos so that we can all learn together. If you don’t follow me yet, I’m at twitter.com/carpentertom. As I blog during the event, the blog posts will be duplicated here and at CWNP.com so you can check either location to see all that’s happening.

Currently, the delegates for the event are as follows (name – blog – Twitter handle):

Ryan Adzima – A Boring Look – @radzima
Tom Carpenter – CWNP – @carpentertom
Sam Clements – SC-WiFi – @samuel_clements
Daniel Cybulskie – Simply WiFi – @SimplyWiFi
Rocky Gregory – Intensified – @bionicrocky
Jennifer Huber – I ♥ WiFi – @JenniferLucille
Blake Krone – Digital Lifestyle NSA Show – @blakekrone
Chris Lyttle – WiFi Kiwi’s Blog – @wifikiwi
Sean Rynearson – WiFiGeeks – @Srynearson
Scott Stapleton – Not your fathers WiFi – @scottpstapleton
George Stefanick – my802.11 – @wirelesssguru
Gregor Vučajnk – 802dot11 – @gregorvucajnk

Security Myths?

I find it very interesting when an article debunks itself while talking about debunking myths. If you have not read the recent Network World article titled "13 Security Myths You'll Hear – But Should You Believe?" you can read it here:

http://www.networkworld.com/news/2012/021412-security-myths-256109.html?page=1

While most of the "myths" are very obvious to anyone who has worked in computer support for very long, one of them I found quite interesting. The third "myth" referenced in the article is, "Regular expiration (typically every 90 days) strengthens password systems." First, while I completely disagree that this is a myth when taken within the context of a complete security system that includes proper user training, it appears that the article itself debunks the debunking of this "myth." Note the following from myth number 6: "He adds that while 30-day expiration might be good advice for some high-risk environments, it often is not the best policy because such a short period of time tends to induce users to develop predictable patterns or otherwise decrease the effectiveness of their passwords. A length of between 90 to 120 days is more realistic, he says."

Now here's the reality of it from my perspective. If you never change passwords, an internal employee can brute-force passwords for months and even years until he gains access to sensitive accounts. If you change passwords every 90+ days while using strong passwords that are easy to remember, you accomplish the best security. Strong passwords that are easy to remember can take weeks or months to crack with brute force. For example, the password S0L34r43ms3r is VERY easy to remember – well, it's easy for me to remember, but you have no idea why. Brute forcing this password would take months with most systems. Therefore, I have a strong password. If I change it every 90-120 days, I will have a good balance of security and usability.
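
To put some rough arithmetic behind that, the sketch below compares the worst-case exhaustive-search time for a short lowercase password against a longer mixed-character one. The guess rate is purely an assumption for illustration; real cracking speeds vary enormously with the attack and the hardware.

    # Illustrative keyspace arithmetic: how length and character set affect
    # worst-case brute-force time. The guess rate is an assumption only.

    def worst_case_days(length, charset_size, guesses_per_second):
        keyspace = charset_size ** length
        return keyspace / guesses_per_second / 86400

    RATE = 1e9  # assumed guesses per second

    for label, length, charset in [("8 lowercase letters", 8, 26),
                                   ("12 mixed letters and digits", 12, 62)]:
        days = worst_case_days(length, charset, RATE)
        print("{0}: about {1:,.0f} days worst case".format(label, days))

Whatever the exact rate, the gap between the two keyspaces is the point: length and variety buy you time between required password changes.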

Does every employee need to change his or her password every 90-120 days? No, certainly not. Some employees have access to absolutely no sensitive information. We can allow them to change their passwords either every 6-12 months or never, depending on our security policies. The point is that different levels of access demand different levels of security.

While I felt the article was very good and it did reference some research to defend the "myth" suggested in relation to password resets, the reality is that the article and the research (which I've read) do not properly consider a full security system based on effective policies and training. Granted, few organizations implement such a system, but, hey, we're only talking theory in this context anyway, right? It sure would be nice if security could move from theory to practical implementation in every organization, but it hasn't. The reason? By and large, because most organizations (most are small companies) never experience a security incident beyond viruses, worms and DoS attacks. That's just life.

Reliability Monitor and Windows 7 (How it saved my life!)

Ok, so maybe it didn't save my life, but it sure does help me discover what's really happening on my users' computers. No longer do I have to rely on answers from the users. I can simply look at the history of their computer and see new installs, crashes and other valuable information in the Reliability Monitor.

To access the Windows 7 Reliability Monitor the fast way:

  1. Simply click Start, type Reliability and click the View reliability history link that is displayed with the blue flag.
  2. Once in the interface, you can scroll through the history viewing errors, warnings and information entries by clicking on them.

The information displayed in the Reliability Monitor will include device driver installations, software installations, system crashes, application crashes, failed installations and more. You can export the data to an XML file, which could then be analyzed by other reporting applications; for example, Crystal Reports supports XML data sources.

Interestingly, Microsoft removed the ability to view remote computers' reliability data through the GUI of the Windows 7 Reliability Monitor. With the new tools, to view the reliability data on remote computers, you must use PowerShell, which, quite frankly, sucks in comparison to the graphical view in my opinion. However, there is a nice article at the TechNet Magazine website that gives you the basics of PowerShell and reliability data here: http://technet.microsoft.com/en-us/magazine/dd535685.aspx.
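
If you want the same data from a script rather than the PowerShell console, the reliability records are exposed through WMI as the Win32_ReliabilityRecords class. The sketch below reads them from Python using the third-party wmi package (pip install wmi); the remote-computer parameter is shown only as an assumption of how you might extend it, and remote access would also require appropriate credentials and firewall rules.

    # Minimal sketch: read Reliability Monitor records through WMI from Python.
    # Requires the third-party "wmi" package (pip install wmi) on Windows.
    import wmi

    def print_reliability_records(computer=None, limit=20):
        # Remote access is an assumption here; locally, wmi.WMI() is enough.
        conn = wmi.WMI(computer=computer) if computer else wmi.WMI()
        for record in conn.Win32_ReliabilityRecords()[:limit]:
            print(record.TimeGenerated, record.SourceName, record.Message)

    if __name__ == "__main__":
        print_reliability_records()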

SharePoint Lists Clearly Explained

SharePoint is an excellent product for collaboration and content management; however, it is also a great database front end. In this post, I will explain how a list is really nothing more than a data entry and data reporting interface for a back end database.

When you create a list in SharePoint, you are actually creating a set of database tables in the back end SQL Server. Because SharePoint allows you to create your own custom lists, the back end database tables are not as simple as the traditional tables one might create in a database-driven application, but they are tables nonetheless.

In order to support the list, SharePoint will store two important sets of information. The first is the description of the SharePoint lists and the second is the data stored in the lists. The description of the lists will contain the columns included and the requirements of those columns, as well as the list properties, such as the name and description. The data stored in the SharePoint lists will be in a different table. This table contains all of the column values for all of your lists, with list IDs used to map them to the appropriate visible list in the SharePoint interface.

You can verify all of this by directly querying the back end SQL Server database. I don't recommend that you play around in this back end database very much as your actions could quickly lead to disaster (accidentally deleting or improperly modifying data), but you can see the structure SharePoint uses to store the list.
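
For the curious, here is a read-only sketch of what such a peek might look like from Python with pyodbc. The connection string is a placeholder, and table and column names such as dbo.AllLists, tp_ID, and tp_Title are assumptions that vary by SharePoint version, so treat this purely as an illustration – and run it against a copy, never your production content database.

    # Read-only illustration: peek at SharePoint list metadata in the content database.
    # The connection string is a placeholder; table and column names are assumptions
    # that vary by SharePoint version. Do not modify anything through this connection.
    import pyodbc  # third-party (pip install pyodbc)

    CONN_STR = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=sqlserver01;DATABASE=WSS_Content;Trusted_Connection=yes;"
    )

    with pyodbc.connect(CONN_STR) as conn:
        cursor = conn.cursor()
        cursor.execute("SELECT TOP 10 tp_ID, tp_Title FROM dbo.AllLists")
        for list_id, title in cursor.fetchall():
            print(list_id, title)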

The next time you need a simple tracking table, consider using a SharePoint list. The SharePoint lists will have automatically generated forms. The data will be backed up automatically with your, hopefully, already scheduled SharePoint backups. And the interface will be familiar to your users.

Custom lists provide yet another way that SharePoint shows its power. I'll provide a demonstration video soon showing you just how to create such a custom list. Until then, happy computing!

Convergence+ Becomes CTP+

CompTIA has decided to merge Convergence+ with the CTP exam, and Convergence+ is becoming CTP+. The good news is that there really isn't a massive difference between the two exams. My Convergence+ book covers 70-80 percent of the material needed to pass the CTP+ exam, and there is more good news: we are in the process of writing a CTP+ book that should be out in the late part of the first quarter of 2011.

The book will include more coverage of networking technologies (such as cables, devices and protocols) as the CTP+ exam addresses these technologies with more emphasis than the Convergence+ exam did. Additionally, wireless networking will be covered in more detail and extensive coverage of VoIP protocols will be added (such as H.323, SIP, MGCP, RTP and RTCP). The VoIP protocols were covered in the Convergence+, but more depth is required for the CTP+ exam.

In the end, the CTP+ book will be larger than the Convergence+ book by around 125 to 150 pages. The exam itself will be a more thorough evaluation of your VoIP and video over IP knowledge. Look for the book in early 2011 and, in the meantime, if you have questions feel free to ask.

Thanks,
Tom

Disabling System Restore in Windows 7

At times and for many reasons, you may want to disable System Restore in Windows 7 systems. By default, Windows 7 creates restore points (also called recovery points) on a scheduled basis and when you install software or updates. You can change when and how it does this and even completely disable System Restore, if you desire. I'll explain more about System Restore in this post.

Just to make us as confused as possible, Microsoft refers to two different things in Windows 7 systems. First, we have System Restore and second we have System Recovery. System Recovery is best thought of as the umbrella that covers System Restore and the process used to schedule and create restore points. Technically, you use System Restore when you want to restore to a restore point created by System Recovery.

When you perform a system restore, by default, several items are restored including the following:

  • Windows system files
  • The registry
  • Applications

Always use caution when performing a restore. It is possible that the end state will be worse than the existing problem.

So, why would disabling System Restore in Windows 7 be a good thing? Well, the simple answer is that it consumes space. If it is set to use up to 10% of your drive space, on a 100 GB drive it could be consuming 10 GB of your space. If your system is currently stable and you simply need to free up some space, disabling System Restore will delete all restore points immediately. You can then enable it again and Windows 7 will begin creating new restore points in the regular manner.
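
If you want to see how much space restore points are actually consuming before you decide, the built-in vssadmin tool reports shadow copy storage usage. The short sketch below simply wraps that command from Python; run it from an elevated prompt on the Windows 7 machine.

    # Show how much disk space restore points (shadow copies) are using by
    # wrapping the built-in "vssadmin list shadowstorage" command. Run elevated.
    import subprocess

    result = subprocess.run(
        ["vssadmin", "list", "shadowstorage"],
        capture_output=True, text=True, check=False
    )
    print(result.stdout or result.stderr)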

If you are an advanced user and are willing to take the risk, you can simply turn System Restore off permanently. In either case, disabling System Restore in Windows 7 is a very easy process. Simply follow these steps:

  1. Click Start.
  2. Right-click on Computer and select Properties.
  3. In the left pane, select System Protection.
  4. Click the Configure button.
  5. Select Turn off system protection.
  6. Click OK.