Anthony Ricigliano - Latest Anthony Ricigliano News and Advice:
With an economy burdened by a slow recovery from the Great Recession and a government hamstrung by skyrocketing deficits, suggested solutions for getting us back on track seem to be coming from all directions. For hi-tech, though, the answer lies in the same model the industry has operated on for the last few decades, not in a new plan based on theoretical economics.
This model is based purely on building the best mousetrap possible, and if there is intense competition, so be it. Under it, U.S. companies kept high-value talent in-house and outsourced the lower-value skill sets: product designers stayed on the payroll while tasks like assembly were contracted out.
The environment in hi-tech has always been one of high risk and high reward, with promising companies attracting funding from venture capital firms and the like. Successful companies reaped huge rewards as they either went public or were acquired by other hi-tech companies. This bred an environment that encouraged risk-taking, with rewards that reached into the billions of dollars.
The highly competitive nature of the field meant that there were losers in the process as well. To the uninitiated, the industry felt like equal parts Wild West and healthy serving of anarchy. The system worked, however, enabling start-ups to get to market and then compete and win against slower-moving competitors.
The foundation of this model is still basically intact, but the recession and credit crunch have tamed the industry to an extent. With capital more difficult to come by, the appetite for risk has been muted as well. The financial crisis has also changed the political winds, with a seeming preference to focus resources on past industries rather than on advancing tomorrow’s technology winners.
At this point, the best thing that can happen is for small, innovative companies with great products to rack up a few “wins” to start rebuilding that appetite for risk, which will in turn start bringing capital back to market.
It’s quite possible that the environment will remain somewhat muted while confidence in the industry is rebuilt, but once momentum starts building, money will surely start flowing back in. America has the talent, the capital, and the guts to innovate our way back in hi-tech. As soon as the industry is being compared to the Wild West again, we’ll know we’re back in full swing.
By Anthony Ricigliano
Friday, February 11, 2011
Virtual Storage - By Anthony Ricigliano
Author Anthony Ricigliano - News and Articles by Anthony Ricigliano:
While it’s true that information is king, he’s definitely a greedy ruler! As the business world continues to demand the storage of more and more data for longer periods of time, the need for disk space grows dramatically each year. To compound the issue, the low price of storage means that many software developers no longer feel the need to make their products space-efficient, and government regulations add new retention requirements for critical information each year. As business units see servers and disk space become more affordable, they can’t understand why adding just one more should be a problem. They fail to recognize that the cost of a growing computer room includes more than just the initial cost of the storage units.
The Shocking Cost of Maintaining Storage Units
Most non-IT workers would be shocked to find out that the cost of managing each storage unit can run four to ten times the original purchase price. In addition to putting a big dent in the IT budget, an ever-increasing number of storage units leads to server sprawl and steadily declining operating efficiency. The extra maintenance can also be disruptive, expensive, and burdensome to the entire enterprise. To solve this problem, system engineers have been working on file virtualization methods that eliminate these issues. Their goal is to reduce storage and server inefficiencies while still permitting virtually unlimited growth. Let’s take a look at exactly how they intend to accomplish this lofty goal.
Breaking the Tight Connection between Clients, Servers, and Storage
The old strategy of tightly coupling storage space with clients and servers is a big reason that each new storage unit becomes expensive to maintain. When machines from a variety of vendors are added to the network, they may not all integrate seamlessly, creating individual islands of storage to manage. When applications are physically mapped to a specific server for storage, any change, including an addition, requires modifications to this complex mapping. In some cases, adding a new device or moving a system to a storage unit with more space requires expensive and annoying downtime. This often leads to under-utilization of the available storage space, an expensive proposition, because system administrators over-allocate space to minimize the need to take an outage. To break free from this outdated methodology, file virtualization removes the static mapping so that storage resources can move freely between applications as needed without restricting access to the data.
Adding a Layer of Intelligent Design to the Network
File virtualization adds a layer of intelligence to the network that decouples logical data access from the physical retrieval of the actual files. This separates the application and the client from the physical storage devices so that static mapping is no longer needed. With this change, the existing bank of servers can be maintained without disrupting the core system or the user’s access to valuable information. After implementing a file virtualization strategy, many IT shops find that they can consolidate storage units and increase their overall utilization. They may be able to simplify the system configuration by decommissioning older storage devices that are no longer needed, or find that they can go much longer than anticipated without adding disk space.
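The idea is easiest to see in miniature. The sketch below is a hypothetical, highly simplified Python model of such a "global namespace" layer: clients always ask for a logical path, and the layer looks up (and can silently change) the physical location behind it. The class and method names are invented for illustration and do not correspond to any particular product.

```python
# A minimal, hypothetical sketch of a file-virtualization ("global namespace") layer.
# Clients use stable logical paths; the layer maps them to physical back-end
# locations and can migrate data between devices without clients noticing.

class GlobalNamespace:
    def __init__(self):
        # logical path -> (storage device, physical path)
        self._mapping = {}

    def publish(self, logical_path, device, physical_path):
        """Expose a physical file under a stable logical path."""
        self._mapping[logical_path] = (device, physical_path)

    def resolve(self, logical_path):
        """What clients call instead of hard-coding a server or share."""
        return self._mapping[logical_path]

    def migrate(self, logical_path, new_device, new_physical_path):
        """Move data to a new storage unit; the logical path never changes,
        so applications keep working and no remapping outage is needed."""
        self._mapping[logical_path] = (new_device, new_physical_path)


ns = GlobalNamespace()
ns.publish("/finance/reports/q4.xlsx", "filer-a", "/vol1/fin/q4.xlsx")
print(ns.resolve("/finance/reports/q4.xlsx"))   # ('filer-a', '/vol1/fin/q4.xlsx')

# Later, the file is consolidated onto a newer array; clients are unaffected.
ns.migrate("/finance/reports/q4.xlsx", "filer-b", "/pool2/fin/q4.xlsx")
print(ns.resolve("/finance/reports/q4.xlsx"))   # ('filer-b', '/pool2/fin/q4.xlsx')
```

Real products add caching, replication, and access control on top of this lookup, but the decoupling principle is the same.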
In today’s IT world, most shops are finding that file virtualization is not only a “best practice,” it’s a must-do to continue operating. IT shops whose budgets rose each year just a short time ago are now seeing their available funds shrink year after year. With increasing pressure to reduce costs or at least keep them flat, file virtualization is fast becoming a virtual requirement.
Anthony Ricigliano
Thursday, February 10, 2011
Virtualization for the Dynamic Enterprise
Anthony Ricigliano News - Business Advice by Anthony Ricigliano:
What does Server Virtualization Mean?
Server virtualization is the use of technology to separate software, including the operating system, from the hardware. This means that you can run several environments on the same physical server. In some installations, this could mean that several identical operating systems run on the same machine. Other shops could decide to run a Windows platform, a Linux system, and a UNIX environment on a single server.
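On a Linux host, one quick way to see whether a machine can serve as a virtualization host is to check the CPU flags for Intel VT-x ("vmx") or AMD-V ("svm"). The short Python sketch below is illustrative and Linux-specific; it simply reads /proc/cpuinfo.

```python
# Illustrative, Linux-only check for hardware virtualization support.
# Intel CPUs advertise the "vmx" flag, AMD CPUs the "svm" flag.

def has_virtualization_support(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU advertises Intel VT-x ("vmx") or AMD-V ("svm")."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    if "vmx" in flags or "svm" in flags:
                        return True
    except OSError:
        pass  # not a Linux host, or /proc is unavailable
    return False

if __name__ == "__main__":
    if has_virtualization_support():
        print("CPU supports hardware virtualization (VT-x or AMD-V).")
    else:
        print("No hardware virtualization extensions detected.")
```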
Advantages of Server Virtualization
In today’s demanding business environment, server virtualization offers many different advantages. Not only does virtualization allow servers and data to be more mobile than ever, it also provides a cost-effective way to balance flat or shrinking budgets. The following list details the major benefits:
• Consolidation – Most large servers run applications that only use a small percentage of their processing power. Even busy software packages usually have only brief peak periods that push CPU utilization above 50%. The rest of the time, the capacity sits idle. By virtualizing the server so that additional systems can take advantage of under-utilized resources, IT shops can increase their return on investment (ROI). Although some companies have reported a consolidation ratio as high as 12:1, most shops can easily show a 3:1 to 4:1 ratio (a rough sizing sketch follows this list).
• Decreased Footprint – By decreasing the number of physical servers, the size of the computer room can be reduced and utility costs should decrease.
• Lower Hardware Costs – The utilization of a higher percentage of existing hardware resources will reduce the total number of physical servers that are needed. This will save money on the upfront expense of purchasing hardware and the long-term cost of maintenance.
• Flexibility – Server virtualization allows an IT shop to be much more flexible. Instead of waiting for new hardware to arrive before implementing a new system, a new virtual server can be created on an existing machine. This also provides a more flexible method for migration and disaster recovery.
• Easier Testing and Development – Historically, IT installations have used separate physical servers for their development, acceptance testing, and production environments. With virtualization, it is an easy process to create either different or identical operating environments on the same server. This allows developers to compare performance on several different environments without impacting the stability of the production system.
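To make the consolidation point concrete, here is a hypothetical back-of-the-envelope calculation in Python. The peak-utilization figures are invented purely for illustration; the point is that packing workloads by peak CPU demand, while holding back some headroom, is what produces ratios in the 3:1 to 4:1 range mentioned above.

```python
# Hypothetical consolidation estimate: how many existing workloads could share
# one virtualization host if we pack them by peak CPU demand?

# Peak CPU utilization of each current physical server (percent of one host).
# These numbers are invented purely for illustration.
peak_cpu = [12, 18, 9, 25, 15, 7, 22, 10, 14, 8, 30, 11]

host_capacity = 100          # one host's CPU, in percent
headroom = 20                # keep 20% free for spikes and the hypervisor
usable = host_capacity - headroom

hosts_needed = 1
used = 0
for load in sorted(peak_cpu, reverse=True):   # simple greedy packing, largest first
    if used + load > usable:
        hosts_needed += 1
        used = 0
    used += load

ratio = len(peak_cpu) / hosts_needed
print(f"{len(peak_cpu)} servers -> {hosts_needed} hosts "
      f"(consolidation ratio about {ratio:.0f}:1)")
```

With these sample numbers, twelve lightly loaded servers fit on three hosts, a 4:1 ratio; a real sizing exercise would also account for memory, storage I/O, and licensing.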
Virtualization and Disaster Recovery
The growth in both international business and large-scale natural disasters has many organizations closely analyzing their disaster recovery plans and general hardware malfunction procedures. In either event, it is critical to be back up and running in a very short period of time. Most modern IT shops require consistent uptime 24 hours a day to maintain their core operations, or their business will be severely impacted. Both reliability and accessibility are greatly improved when server virtualization is used to its fullest potential.
By reducing the total number of servers needed to duplicate the production environment, it is much less expensive to create and test an off-site disaster recovery environment. Hardware, space, and backup expenses are dramatically reduced. It’s easy to see how setting up 30 or 40 pieces of hardware would be both easier and cheaper than configuring 100 items.
Along the same lines, a hardware malfunction is less of an issue with server virtualization. While many more systems will run on the same piece of hardware, most shops find that once they virtualize, they can easily duplicate physical servers for automatic failover in the event of a hardware failure.
Major Virtualization Products
While there are always smaller players in any new technology, VMware and Microsoft are the biggest providers of server virtualization products.
• VMware offers the free VMware Server package as well as the more robust VMware ESX and ESXi products. Systems virtualized with VMware products are extremely portable and can be installed on virtually any new piece of hardware with a low incidence of complications. A system can be suspended on one machine, moved to another, and immediately resume operations at the point of suspension when restarted.
• Microsoft Virtual Server is a virtualization product that works best with Windows operating systems, but it can also run other systems such as the popular Linux OS.
Anthony Ricigliano
Wednesday, February 9, 2011
Anthony Ricigliano - Successful Technology Project Management
Anthony Ricigliano - Business News and Advice by Anthony J Ricigliano:
Managing a technology project involves managing both the new system components and the programmers and analysts who create them. In many ways, managing the people involved can be a more daunting task than tracking each new piece of code or hardware item. If each person on the team is not kept up-to-date and on the same page, the process can quickly break down and mistakes will be made.
The Right Approach Can Increase the Chances of Success
While the exact approach may depend on the organization and the project details, there are a few methods that should always be used. Many project managers like to detail their projects in software packages like Microsoft Project or SharePoint, but that alone is not very effective without communication that goes beyond recording tasks and deadlines. The project manager should realize that while some people work well from a list, most people need more direction. In addition, the team will probably be made up of an assortment of people with different learning styles, so the material should be presented both verbally and visually for the best results. At a minimum, the project manager should create a project plan, schedule a launch meeting to explain the project in detail, and then plan on weekly meetings for progress reports and problem resolution.
Improved Human Interaction Can Prevent Project Failures
If a project manager only informs, and doesn't communicate, there is a high chance that the project will fail. They should be open to all questions, feedback, and suggestions to ensure that everyone understands both their role in the project and the potential cost of a failure. Excellent suggestions about better methods for implementing new technology can sometimes come from surprising sources. If an open-door approach is not maintained, a team member with a great idea could decide to keep it to themselves rather than risk ridicule or rejection. While it is important to go over the minute details of system changes that must be implemented, it is just as important that everyone understands the big picture. If the entire team understands that their next raise is dependent on the revenue increase that a successful project outcome will bring and that a failure could mean layoffs, they will be more likely to put in their best effort. The project manager should also make sure that they are aware of each team member's vacation plans and personal issues that could result in an absence during a critical phase of the project. While unforeseen events will always happen during a project, asking a few questions can minimize the surprises.
Is Over-Communication Possible?
While anything is possible, it's very hard to over-communicate during a project. Always ask for elaboration on any answer to make sure that each party understands both the question and the answer. Yes and no questions rarely give the full picture. Frequently, team members will think they have the same technical definition of a business term, but actually bring a slightly different viewpoint to the table. Neither is wrong, just from different perspectives. For example, one person may think that a payment timetable begins when they place an order, while someone in a different area may think that the clock doesn't start ticking until the product actually arrives.
Communicate at all Levels within the Organization
Effective communication is required within and between all levels of the organization. While executives have very different perspectives than middle management and the technical staff, they will need frequent updates about each project. The executive level should expect weekly updates that let them know whether or not the project is on target to meet the deadline or if the project manager requires additional resources to achieve the ultimate goal. Middle management will also require a weekly update, but will want more details about each task and the testing results. The team will require the most information so that they know if their part is causing a delay in any other area or if they will have to wait on another component before they can complete their part. Communication should go both ways. Projects that involve inter-company partnerships require even more back and forth communication. As the project approaches its target launch dates, meetings may be escalated from weekly to daily when necessary.
Effective Communication Leads to Improved Support
When everyone feels like a valuable part of the project, they are more likely to provide the support required for success. Everyone involved, from management to staff with minimal roles, should be included in all communications and feel that they are providing useful input so that they stay engaged and buy into the importance of success. An executive who believes in the value the project will bring to the organization will be more likely to pull a few strings to add resources when they are desperately needed. Along the same lines, a technician who feels that their input is heard will be more likely to fit your needs into their busy schedule than one who thinks their ideas are given only a token amount of consideration.
by Anthony Ricigliano
IT's A Wireless World - By Anthony Ricigliano
Anthony Ricigliano - Business Advice by Anthony Ricigliano:
In today's business world, the use of Wireless Local Area Networks (WLANs) continues to grow. As users perform more and more of their day-to-day responsibilities through wireless connections, a reliable, secure WLAN is mission critical for the modern mobile business. Although the implementation cost for robust WLANs continues to drop, the operational expenses for maintenance, security, and troubleshooting are on the rise.
Operational Challenges
Because WLANs use a license-free radio signal for connectivity, the operational challenges of keeping the network running issue-free are very different from supporting a traditional wired network. The following list details the key wireless performance issues that affect WLAN deployments:
• Coverage and Capacity - Because signal strength weakens as the distance from the transmitting device increases, many buildings experience coverage holes and fading signals (a simple free-space path-loss calculation follows this list). Poor connections, or the inability to connect at all, can be frustrating and hurt productivity. Bottlenecks can also affect throughput as Access Points (APs) become overloaded or specific users consume excessive network resources.
• Noise and Interference - Because many other devices, from microwave ovens to Bluetooth gadgets, use the same frequency bands as WLAN radio signals, ambient noise and interference can create intermittent problems that are hard to detect. Although equipment exists to pinpoint these issues, its price tag is usually cost-prohibitive, leaving many IT departments to guess at the actual source of their WLAN problems.
• Connectivity Problems - When a user reports they are having problems connecting to the network, the list of potential problems is long. On the user's side, it could be user error, an incorrect security key, or a bad driver. The AP could be having hardware or configuration problems, or the gateway on the wired network could be having a problem.
• Roaming Issues - As a wireless client moves, or roams, it switches from one AP to the next. If the switch doesn't go smoothly, the user may experience latency or jittery connections. Instead of using a laptop analyzer that makes troubleshooting a connection to a single AP easy, a distributed monitoring system is required to find roaming problems.
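The coverage point above can be quantified with the standard free-space path loss formula, FSPL(dB) = 20·log10(d_km) + 20·log10(f_MHz) + 32.44. The Python sketch below uses it to show how quickly a 2.4 GHz signal weakens with distance; real buildings add wall and interference losses on top of this, so treat it as an idealized lower bound, not a site-survey tool. The +20 dBm transmit power is an illustrative figure.

```python
import math

def free_space_path_loss_db(distance_m, freq_mhz=2400):
    """Free-space path loss in dB (distance in metres, frequency in MHz).
    Ignores walls, people, and interference: an idealized lower bound."""
    distance_km = distance_m / 1000.0
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Example: received signal if an AP transmits at +20 dBm (illustrative figure).
tx_power_dbm = 20
for d in (5, 10, 25, 50, 100):
    loss = free_space_path_loss_db(d)
    rssi = tx_power_dbm - loss
    print(f"{d:>4} m: path loss {loss:5.1f} dB, ideal RSSI about {rssi:6.1f} dBm")
```

Even in free space the signal drops by 6 dB every time the distance doubles, which is why AP placement and density matter so much.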
Security Risks
The same radio waves that make WLANs convenient and easy to implement create a way for hackers to attack the system. With the growth of identity theft rings, malware attacks, and other internet threats, it's critical that businesses address the security issues related to WLAN use. There are three primary ways that hackers take advantage of WLANs:
• Denial of Service - The hacker floods the network with signals that impact the availability of resources.
• Spoofing - The hacker assumes the identity of a valid user to steal sensitive information. An attacker may even disguise their connection as an AP.
• Eavesdropping - Because WLANs radiate network traffic into the open air, it is possible to collect this information from a remote location. Hackers are sometimes able to intercept confidential data in this way. Because the information also reaches its original destination, unprotected businesses are often unaware that this has occurred until it is too late.
Best Practices
Every IT department should research the industry's recommended best practices to manage and mitigate both the operational challenges and the security risks that come with WLANs. Some of these methods include:
• Use APs as network monitors. Within special AP firmware, promiscuous mode can be set so that specific APs serve as sensors to continuously monitor the network for performance issues and security violations. This allows network administrators to research wireless issues from anywhere with access to the WLAN.
• Take advantage of automated tools. Because WLAN use is increasingly prevalent, software firms are releasing new WLAN monitoring tools all the time. Evaluate several to find the one that best fits your IT department’s needs and reduces the time needed to troubleshoot operational problems (a minimal sketch of this kind of automated check follows this list).
• Encrypt wireless traffic. By using encryption standards such as 802.11i (WPA2), data transmitted across the WLAN is encrypted; unless the receiver has the correct key, the information is useless. The older Wired Equivalent Privacy (WEP) protocol is easily broken and should not be relied on.
• Change the default SSID. The Service Set Identifier (SSID) identifies the WLAN to connecting devices; leaving the vendor-supplied default in place advertises the make of the equipment and suggests that the rest of the configuration may be at its defaults too, so it should be changed as part of basic hardening.
• Use Virtual Private Networks (VPN). A VPN provides a secure, encrypted connection to the WLAN from a remote location so that hackers can't use intercepted information.
• Minimize WLAN radio waves in non-user areas. By restricting radio transmissions to the inside of the physical building as much as possible, hackers will be less likely to attack the system from the parking lot or street.
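As a taste of the automated-tools idea above, the following hypothetical Python sketch walks a list of AP readings and flags weak signals, overloaded APs, and unknown SSIDs. In practice the readings would come from sensor APs or a monitoring product; the data structure, SSID names, and thresholds here are invented to keep the example self-contained.

```python
# Hypothetical WLAN health/security sweep over AP telemetry.
# In practice the readings would come from sensor APs or a monitoring tool;
# here they are hard-coded so the example runs on its own.

AUTHORIZED_SSIDS = {"corp-wifi", "corp-guest"}   # assumption: the approved networks
MIN_RSSI_DBM = -70          # weaker than this usually means a coverage hole
MAX_CLIENTS_PER_AP = 30     # rough capacity threshold before users notice

readings = [
    {"ap": "ap-lobby",   "ssid": "corp-wifi", "rssi_dbm": -52, "clients": 18},
    {"ap": "ap-floor3",  "ssid": "corp-wifi", "rssi_dbm": -78, "clients": 41},
    {"ap": "unknown-01", "ssid": "free-wifi", "rssi_dbm": -60, "clients": 3},
]

for r in readings:
    if r["ssid"] not in AUTHORIZED_SSIDS:
        print(f"ALERT  {r['ap']}: possible rogue AP broadcasting '{r['ssid']}'")
    if r["rssi_dbm"] < MIN_RSSI_DBM:
        print(f"WARN   {r['ap']}: weak signal ({r['rssi_dbm']} dBm) - coverage hole?")
    if r["clients"] > MAX_CLIENTS_PER_AP:
        print(f"WARN   {r['ap']}: {r['clients']} clients - capacity bottleneck")
```

Commercial tools add trending, alerting, and location mapping, but the underlying checks are of this simple threshold-and-whitelist kind.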
Author Anthony Ricigliano