
comScore confirms continuing upward trend in Yahoo’s fortunes

New measurements show that Yahoo continued to improve its share of the U.S. search market after striking a deal last year with Mozilla, the maker of Firefox.

According to comScore, which publishes monthly stats on search share, Yahoo gained 1.2 percentage points in January, climbing to 13%. It was the second straight month of increases for Yahoo.

Yahoo’s growth came largely at the expense of Google, which dropped 1 percentage point during January. Google accounted for 64.4% of the U.S. search share last month.

Meanwhile, Microsoft’s Bing remained flat at 19.7%.

Since November 2014, when Yahoo partnered with Mozilla to make its search engine the default for U.S. Firefox users, Yahoo’s share has grown by 2.8 percentage points, representing a 28% increase.
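
The two figures describe the same change in different units, and the arithmetic is easy to check. Here is a quick sketch, assuming the roughly 10.2% November 2014 starting share implied by the numbers above:

```python
# Percentage points vs. percent: an absolute change of 2.8 points on a
# ~10.2% base is a relative increase of roughly 28%.
start, end = 10.2, 13.0          # implied Nov. 2014 share -> Jan. share
points = end - start             # change in percentage points
relative = points / start * 100  # relative change, in percent
print(f"{points:.1f} points, {relative:.1f}% relative")  # 2.8 points, 27.5% (~28%)
```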

The continued upward trend in Yahoo’s share identified by comScore was similar to the one drawn by Irish analytics firm StatCounter, which earlier in February pointed to a second-consecutive month of gains by the Sunnyvale, Calif. company.

Mozilla changed the default search from Google to Yahoo for U.S. users when it released Firefox 34 on Dec. 1, 2014. The Mozilla-Yahoo deal was a result of the former not renewing its long-standing partnership with Google, which in 2013 generated approximately $275 million in revenue for the open-source developer of Firefox.

But Yahoo’s growth, smaller in January than the month before by both comScore’s and StatCounter’s measurements, may have reached its limit: For the month thus far — through Feb. 25 — StatCounter pegged Yahoo’s usage share in the U.S. at 10.5%, down from January’s 10.9%.



New alert appears before users reach sites likely to serve up software that silently changes the browser’s home page

Google has added an early warning alert to Chrome that pops up when users try to access a website that the search giant suspects will try to dupe users into downloading underhanded software.

The new alert pops up in Chrome when a user aims the browser at a suspect site but before the domain is displayed. “The site ahead contains harmful programs,” the warning states.

In the warning’s text, Google emphasized software that will “harm your browsing experience,” citing programs that silently change the home page or drop unwanted ads onto pages.

The company has long focused on those categories, for obvious if unstated reasons. It would prefer that no one — much less shifty software — alter the Chrome home page, which features the Google search engine, the Mountain View, Calif. firm’s primary revenue generator. Likewise, the last thing Google wants is for adware, especially the most irritating kind, to sour everyone on all online advertising.

The new alert is only the latest in a line of warnings and more draconian moves Google has made since mid-2011, when the browser began blocking malware downloads. Google has gradually enhanced Chrome’s alert feature by expanding the download warnings to detect a wider range of malicious or deceitful programs, and using more assertive language in the alerts.

In January 2014, for example, Chrome 32 added to the unwanted list threats that pose as legitimate software or that monkey with the browser’s settings.

The browser’s malware blocking and suspect site warnings come from Google’s Safe Browsing API (application programming interface) and service; Apple’s Safari and Mozilla’s Firefox also access parts of the API to warn their users of potentially dangerous websites.
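For a sense of what accessing the API looks like in practice, here is a minimal sketch of a Safe Browsing lookup. It uses the v4 Lookup endpoint Google documents today, which postdates this article; the API key, client fields and the URL being checked are placeholders.

```python
# Minimal sketch of a Safe Browsing Lookup query (v4-era endpoint shown;
# the API key and client fields are placeholders, not working credentials).
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder
ENDPOINT = f"https://safebrowsing.googleapis.com/v4/threatMatches:find?key={API_KEY}"

body = {
    "client": {"clientId": "example-client", "clientVersion": "1.0"},
    "threatInfo": {
        "threatTypes": ["MALWARE", "UNWANTED_SOFTWARE"],
        "platformTypes": ["ANY_PLATFORM"],
        "threatEntryTypes": ["URL"],
        "threatEntries": [{"url": "http://example.test/suspect-page"}],
    },
}

req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# An empty response means no match; otherwise "matches" lists the threats found.
print(result.get("matches", "no threats found"))
```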

Google’s malware blocking typically tests much better than Safari’s or Firefox’s, however, because Google also relies on other technologies, including reputation ranking, to bolster Chrome’s Safe Browsing.

Like the Microsoft application reputation ranking used in Internet Explorer, Google’s technology combines whitelists, blacklists and algorithms to create a ranking of the probability that a download is legitimate software. Files that don’t meet a set legitimacy bar trigger a warning.
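The real weights and signals are proprietary, but the general shape of such a reputation check is easy to sketch. Everything below (the lists, the signals, the weights, and the 0.5 bar) is an illustrative invention, not Google’s or Microsoft’s actual scheme.

```python
# Toy download-reputation check: whitelist, blacklist, then a score.
# All lists, weights, and the threshold are made up for illustration.
KNOWN_GOOD = {"7a9f..."}   # hashes of files known to be legitimate
KNOWN_BAD = {"d41d..."}    # hashes of known malware

def download_warning_needed(file_hash, publisher_signed, download_count, host_age_days):
    if file_hash in KNOWN_GOOD:
        return False                    # whitelisted: no warning
    if file_hash in KNOWN_BAD:
        return True                     # blacklisted: always warn
    # Otherwise, estimate legitimacy from reputation signals.
    score = 0.0
    score += 0.4 if publisher_signed else 0.0     # signed by a known publisher
    score += min(download_count / 100_000, 0.4)   # popularity of the file
    score += min(host_age_days / 3650, 0.2)       # longevity of the hosting site
    return score < 0.5                  # below the legitimacy bar -> warn

# A rarely downloaded, unsigned file from a brand-new site triggers a warning.
print(download_warning_needed("ffff...", publisher_signed=False,
                              download_count=12, host_age_days=3))   # True
```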

Google uses other signals, the details of which it has not disclosed, to identify websites that will likely serve up unwanted software like home page changers. Google search uses similar signals to keep such sites out of its results list. “This change reduces the chances you’ll visit these sites via our search results,” wrote Lucas Ballard, a software engineer, in a Monday blog post.

Chrome 40, the current stable version of the browser, can be downloaded for Windows, OS X and Linux from Google’s website.



Gmail represents a dying class of products that, like Google Reader, puts control in the hands of users, not signal-harvesting algorithms.

I’m predicting that Google will end Gmail within the next five years. The company hasn’t announced such a move — nor would it.

But whether we like it or not, and whether even Google knows it or not, Gmail is doomed.

What is email, actually?
Email was created to serve as a “dumb pipe.” In mobile network parlance, a “dumb pipe” is when a carrier exists to simply transfer bits to and from the user, without the ability to add services and applications or serve as a “smart” gatekeeper between what the user sees and doesn’t see.

Carriers resist becoming “dumb pipes” because there’s no money in it. A pipe is a faceless commodity, valued only by reliability and speed. In such a market, margins sink to zero or below zero, and it becomes a horrible business to be in.

“Dumb pipes” are exactly what users want. They want the carriers to provide fast, reliable, cheap mobile data connectivity. Then, they want to get their apps, services and social products from, you know, the Internet.

Email is the “dumb pipe” version of communication technology, which is why it remains popular. The idea behind email is that it’s an unmediated communications medium. You send a message to someone. They get the message.

When people send you messages, they stack up in your in-box in reverse-chronological order, with the most recent ones on top.

Compare this with, say, Facebook, where you post a status update to your friends, and some tiny minority of them get it. Or, you send a message to someone on Facebook and the social network drops it into their “Other” folder, which hardly anyone ever checks.

Of course, email isn’t entirely unmediated. Spammers ruined that. We rely on Google’s “mediation” in determining what’s spam and what isn’t.

But still, at its core, email is by its very nature an unmediated communications medium, a “dumb pipe.” And that’s why people like email.
Why email is a problem for Google

You’ll notice that Google has made repeated attempts to replace “dumb pipe” Gmail with something smarter. They tried Google Wave. That didn’t work out.

They hoped people would use Google+ as a replacement for email. That didn’t work, either.

They added prioritization. Then they added tabs, separating important messages from less important ones via separate containers labeled by default “Primary,” “Social,” “Promotions,” “Updates” and “Forums.” That was vaguely popular with some users and ignored by others. Plus, it was a weak form of mediation — merely reshuffling what’s already there without inviting a fundamentally different way to use email.

This week, Google introduced an invitation-only service called Inbox. Another attempt by the company to mediate your dumb email pipe, Inbox is an alternative interface to your Gmail account, rather than something that requires starting over with a new account.

Instead of tabs, Inbox groups together and labels and color-codes messages according to categories.

One key feature of Inbox is that it performs searches based on the content of your messages and augments your in-box with that additional information. One way to look at this is that, instead of pulling relevant data out of your Gmail messages and slotting it into Google Now, it shows you those Google Now cards immediately, right there in your in-box.

Inbox identifies addresses, phone numbers and items (such as purchases and flights) that have additional information on the other side of a link, then makes those links live so you can take quick action on them.
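A crude approximation of that link-making step can be built with pattern matching; the regular expressions and link formats below are simplistic stand-ins for whatever extractors Google actually uses.

```python
# Toy version of turning message text into actionable links.
# The regexes are deliberately simple stand-ins, not Google's extractors.
import re

PHONE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")
FLIGHT = re.compile(r"\b[A-Z]{2}\s?\d{2,4}\b")      # e.g. "UA 1234"

def annotate(message):
    """Wrap recognized entities in tel:/status links so they become tappable."""
    message = PHONE.sub(lambda m: f'<a href="tel:{m.group()}">{m.group()}</a>', message)
    message = FLIGHT.sub(lambda m: f'<a href="/flightstatus?q={m.group()}">{m.group()}</a>', message)
    return message

print(annotate("Your flight UA 1234 is confirmed. Questions? Call 800-555-0199."))
```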

You can also do Mailbox-style “snoozing” to have messages go away and return at some future time.

You can also “pin” messages so they stick around, rather than being buried in the in-box avalanche.
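Under the hood, snoozing and pinning amount to simple bookkeeping on top of the reverse-chronological stream. Here is a minimal sketch; the field names and data structures are invented for illustration.

```python
# Minimal model of snooze/pin on top of a reverse-chronological in-box.
# Field names and structures are invented for illustration.
import heapq
import time

inbox = []       # visible messages
snoozed = []     # heap of (wake_time, id, message)
pinned = set()   # ids of messages that stay at the top

def snooze(msg, until):
    inbox.remove(msg)
    heapq.heappush(snoozed, (until, msg["id"], msg))

def wake_due_messages(now=None):
    now = time.time() if now is None else now
    while snoozed and snoozed[0][0] <= now:
        _, _, msg = heapq.heappop(snoozed)
        inbox.append(msg)               # returns to the in-box at wake time

def render():
    # Pinned messages first, everything else newest-first.
    return sorted(inbox, key=lambda m: (m["id"] not in pinned, -m["ts"]))

msg = {"id": 1, "ts": 100}
inbox.append(msg)
snooze(msg, until=time.time() - 1)      # snooze into the past for the demo
wake_due_messages()
print(render())                         # the message is back
```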

Inbox has many other features.

The bottom line is that it’s a more radical mediation between the communication you have with other people and with the companies that provide goods, services and content to you.

The positive spin on this is that it brings way more power and intelligence to your email in-box.

The negative spin is that it takes something user-controlled, predictable, clear and linear, and wrests control away from the user, making email unpredictable, unclear and nonlinear.

That users will judge this and future mediated alternatives to email and label them either good or bad is irrelevant.

The fact is that Google, and companies like Google, hate unmediated anything.

The reason is that Google is in the algorithm business, using user-activity “signals” to customize and personalize the online experience and the ads that are served up as a result of those signals.

Google exists to mediate the unmediated. That’s what it does.

That’s what the company’s search tool does: It mediates our relationship with the Internet.

That’s why Google killed Google Reader, for example. Subscribing to an RSS feed and having an RSS reader deliver 100% of what the user signed up for in an orderly, linear, predictable and reliable fashion is a pointless business for Google.

It’s also why I believe Google will kill Gmail as soon as it comes up with a mediated alternative everyone loves. Of course, Google may offer an antiquated “Gmail view” as a semi-obscure alternative to the default “Inbox”-like mediated experience.

But the bottom line is that dumb-pipe email is unmediated, and therefore it’s a business that Google wants to get out of as soon as it can.

Say goodbye to the unmediated world of RSS, email and manual Web surfing. It was nice while it lasted. But there’s just no money in it.



As ‘organizers of information distribution’ they must store data about users’ communications on servers in Russia

Russia’s communications regulator has ordered Facebook, Twitter and Google to join a register of social networks or face being blocked in Russia, according to a report in the newspaper Izvestia.


By registering as “organizers of information distribution,” companies agree to store data about their users’ communications on servers in Russia or face a fine of 500,000 Russian roubles ($13,000), the report said. Companies that fail to register within 15 days of a second order from the regulator can be blocked in Russia.

A number of Russian Internet companies have already registered, said the newspaper. These include search engine Yandex, social networking service VKontakte, and webmail service Mail.ru, it said, citing Maxim Ksenzov, deputy head of the Russian Federal Service for Supervision of Communications, Information Technology, and Mass Media (Roscomnadzor).

The regulator’s move against the three U.S. Internet companies was no surprise: Western monitoring organizations including the New York-based Committee to Protect Journalists have been predicting it since Russia passed its so-called Social Media Law in May.

It’s not just Internet services that must register with Roscomnadzor, however: Bloggers too must register as mass media outlets if they have more than 3,000 visitors per day, and must comply with the same restrictions on their output as television stations and newspapers. These include obeying the election law, avoiding profanity, and publishing age-restriction warnings on adult content, according to the CPJ.

Roscomnadzor maintains an extensive list of blogs and other sites that it says contain “incitements to illegal activity”, and requires Russian ISPs to block them.

Organizations including the CPJ expect the registration requirement to have a significant effect on freedom of expression in Russia, not through blocking but through self-censorship, as bloggers limit what they say to avoid the risk of administrative sanctions.

 


Microsoft and Apple will ship fewer devices in 2014-2015 than earlier estimates, while Android will ship more

Gartner today scaled back its forecast of Windows’ near future, saying that while Microsoft’s operating system will power an increasing number of devices this year and next, the gains will be smaller than it projected six months ago.

For 2013, Windows’ share of the operating systems on all devices — smartphones, tablets, PCs, ultra-light form factors, and PC-tablet hybrids — dropped 5.8% compared to the year before, a half-percentage-point steeper decline than the 5.3% the research company had pegged in January 2014.

This year, Windows device shipments will grow 2.3% to 333.4 million, the bulk of them traditional PCs and what Gartner dubs “ultramobiles, premium,” or top-tier notebooks. Windows’ growth, however, will come from smaller systems — smartphones in particular.

“Windows phones will exhibit strong growth from a low base in 2014, and are projected to reach a 10% market share by 2018, up from 4% in 2014,” said Annette Zimmermann, a research director at Gartner, in a statement Monday.

In 2015, said Gartner today, Windows will power 373.7 million shipped devices, a year-on-year increase of 12.1%.
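That growth rate follows directly from the shipment counts, as a quick check shows:

```python
# Sanity-checking Gartner's growth figure from the shipment counts above.
windows_2014 = 333.4   # million devices
windows_2015 = 373.7
growth = (windows_2015 / windows_2014 - 1) * 100
print(f"{growth:.1f}%")   # 12.1%, matching the forecast cited
```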

Gartner’s numbers today were different from those in January, when it was much more bullish about Windows. Then, analysts projected that Windows device shipments would grow 9.7% in 2014, with another 17.5% increase in 2015, a year in which 422.7 million Windows devices of all kinds were to ship.

Although Windows will continue to grow, Gartner’s estimates today were significantly down from those it made six months ago. Most striking was the downgrade of Windows’ 2014 gains to about one-third of the earlier forecast.

The revised estimates also mean that Windows will account for a smaller share in both 2014 and 2015 than projected previously. In January, Gartner said that Windows would capture 14.3% and 16.1% of all device shipments this year and next, respectively. Today’s numbers put Windows’ share at 13.7% (2014) and 14.4% (2015) instead.

The reason Windows forecasts were downgraded, said Gartner analyst Mika Kitagawa, was twofold: a softening of tablet shipment growth and the continued reliance of Microsoft on traditional PCs for the bulk of its licensing sales.

“Microsoft will stay in the traditional PC market,” said Kitagawa.

Those systems will continue to struggle, with downturns in 2014 and 2015 of 6.7% and 5.3%; in January, Gartner said that the category would be down 7.2% this year and 3.4% next. Adding in its “ultramobile, premium” numbers, the total personal computer market is now forecast to shrink 2.9% in 2014 and grow by 2.7% in 2015.

Previously, Gartner had pegged ultramobiles to grow much faster, with the total personal computer market believed to be flat this year (0.3% growth), with a more robust 4.6% increase in 2015.

The expected increase in Windows phone shipments will not be enough to make up the difference.

Windows wasn’t the only platform that Gartner said would grow slower than it had believed before: Apple’s combined iOS and OS X numbers were also downgraded.

Gartner now forecasts that iOS/OS X will power 271.1 million devices in 2014 — most of them iPhones — and 301.3 million in 2015, for year-over-year growth rates of 14.8% and 11.2%.

Six months ago, Apple’s estimated shipments were more optimistic: 344.2 million and 397.7 million for this year and next, respectively, representing increases of 29% and 15.4%.

Not surprisingly, Android will take up the slack, said Gartner, which predicted Google’s mobile operating system will become even more dominant. Where six months ago Gartner projected that Android device shipments would grow by 25.6% and 13.8% in 2014 and 2015, today it modified those estimates to 30% and 17.3%, respectively.

This year and next, Android will account for 48% and 52.9% of all device shipments, Gartner forecast today, upgrades from January’s numbers of 44.6% and 44.7%.

Gartner is now pegging total Android device shipments for 2014 and 2015 at 1.17 billion and 1.37 billion, up from previous bets of 1.1 billion and 1.25 billion.

 



A diverse set of real-world Java benchmarks shows that Google is fastest, Azure is slowest, and Amazon is priciest

If the cartoonists are right, heaven is located in a cloud where everyone wears white robes, every machine is lightning quick, everything you do works perfectly, and every action is accompanied by angels playing lyres. The current sales pitch for the enterprise cloud isn’t much different, except for the robes and the music. The cloud providers have an infinite number of machines, and they’re just waiting to run your code perfectly.

The sales pitch is seductive because the cloud offers many advantages. There are no utility bills to pay, no server room staff who want the night off, and no crazy tax issues for amortizing the cost of the machines over N years. You give them your credit card, and you get root on a machine, often within minutes.


To test out the options available to anyone looking for a server, I rented some machines on Amazon EC2, Google Compute Engine, and Microsoft Windows Azure and took them out for a spin. The good news is that many of the promises have been fulfilled. If you click the right buttons and fill out the right Web forms, you can have root on a machine in a few minutes, sometimes even faster. All of them make it dead simple to get the basic goods: a Linux distro running what you need.

At first glance, the options seem close to identical. You can choose from many of the same distributions, and from a wide range of machine configuration options. But if you start poking around, you’ll find differences — including differences in performance and cost. The machines may seem like commodities, but they’re not. This became more and more evident once the machines started churning through my benchmarks.

Fast cloud, slow cloud
I tested small, medium, and large machine instances on Amazon EC2, Google Compute Engine, and Microsoft Windows Azure using the open source DaCapo benchmarks, a collection of 14 common Java programs bundled into one easy-to-start JAR. It’s a diverse set of real-world applications that will exercise a machine in a variety of different ways. Some of the tests will stress the CPU, others will stress RAM, and still others will stress both. Some of the tests will take advantage of multiple threads. No machine configuration will be ideal for all of them.

Some of the benchmarks in the collection will be very familiar to server users. The Tomcat test, for instance, starts up the popular Web server and asks it to assemble some Web pages. The Luindex and Lusearch tests will put Lucene, the common indexing and search tool, through its paces. Another test, Avrora, will simulate some microcontrollers. Although this task may be useful only for chip designers, it still tests the raw CPU capacity of the machine.

I ran the 14 DaCapo tests on three different Linux machine configurations on each cloud, using the default JVM. The instances aren’t perfect “apples to apples” matches, but they are roughly comparable in terms of size and price. The configurations and cost per hour are broken out in the table below.

I gathered two sets of numbers for each machine. The first set shows the amount of time the instance took to run the benchmark from a dead stop. It fired up the JVM, loaded the code, and started to work. This isn’t a bad simulation because many servers start up Java code from command lines in scripts.

To add another dimension, the second set reports the times using the “converge” option. This runs the benchmark repeatedly until consistent results appear. This sometimes happens after just a few runs, but in a few cases, the results failed to converge after 20 iterations. This option often resulted in dramatically faster times, but sometimes it only produced marginally faster times.
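For the curious, the procedure is easy to reproduce. Here is a rough sketch of the harness, assuming a local copy of the DaCapo 9.12 “bach” JAR and its --converge flag; adjust the file name and flags for your release.

```python
# Rough sketch of the timing harness described above. Assumes the DaCapo
# 9.12 "bach" JAR is in the working directory; file name and flag spelling
# may differ by release.
import subprocess
import time

JAR = "dacapo-9.12-bach.jar"
BENCHMARKS = ["avrora", "luindex", "lusearch", "tomcat"]  # subset of the 14

def run(bench, converge=False):
    """Time one benchmark from a cold JVM start, or until results converge."""
    flags = ["--converge"] if converge else []
    start = time.monotonic()
    subprocess.run(["java", "-jar", JAR] + flags + [bench], check=True)
    return time.monotonic() - start

for b in BENCHMARKS:
    print(f"{b}: cold {run(b):.1f}s, converged {run(b, converge=True):.1f}s")
```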

The results (see charts and tables below) will look like a mind-numbing sea of numbers to anyone, but a few patterns stood out:

Google was the fastest overall. The three Google instances completed the benchmarks in a total of 575 seconds, compared with 719 seconds for Amazon and 834 seconds for Windows Azure. A Google machine had the fastest time in 13 of the 14 tests. A Windows Azure machine had the fastest time in only one of the benchmarks. Amazon was never the fastest.
Google was also the cheapest overall, though Windows Azure was close behind. Executing the DaCapo suite on the trio of machines cost 3.78 cents on Google, 3.8 cents on Windows Azure, and 5 cents on Amazon. A Google machine was the cheapest option in eight of the 14 tests. A Windows Azure instance was cheapest in five tests. An Amazon machine was the cheapest in only one of the tests.

The best option for misers was Windows Azure’s Small VM (one CPU, 6 cents per hour), which completed the benchmarks at a cost of 0.67 cents. However, this was also one of the slowest options, taking 404 seconds to complete the suite. The next cheapest option, Google’s n1-highcpu-2 instance (two CPUs, 13.1 cents per hour), completed the benchmarks in half the time (193 seconds) at a cost of 0.70 cents.

If you cared more about speed than money, Google’s n1-standard-8 machine (eight CPUs, 82.9 cents per hour) was the best option. It turned in the fastest time in 11 of the 14 benchmarks, completing the entire DaCapo suite in 101 seconds at a cost of 2.32 cents. The closest rival, Amazon’s m3.2xlarge instance (eight CPUs, $0.90 per hour), completed the suite in 118 seconds at a cost of 2.96 cents.
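Those per-run costs are simply elapsed time multiplied by the hourly rate; small rounding differences aside, the cited figures check out:

```python
# Cost = (seconds / 3600) * cents_per_hour, using the figures cited above.
runs = {
    "Azure Small VM":       (404, 6.0),    # seconds, cents/hour
    "Google n1-highcpu-2":  (193, 13.1),
    "Google n1-standard-8": (101, 82.9),
    "Amazon m3.2xlarge":    (118, 90.0),
}
for name, (secs, rate) in runs.items():
    print(f"{name}: {secs / 3600 * rate:.2f} cents")
```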

Amazon was rarely a bargain. Amazon’s m1.medium (one CPU, 10.4 cents per hour) was both the slowest and the most expensive of the one CPU instances. Amazon’s m3.2xlarge (eight CPUs, 90 cents per hour) was the second fastest instance overall, but also the most expensive. However, Amazon’s c3.large (two CPUs, 15 cents per hour) was truly competitive — nearly as fast overall as Google’s two-CPU instance, and faster and cheaper than Windows Azure’s two CPU machine.

These general observations, which I drew from the “standing start” tests, are also borne out by the results of the “converged” runs. But a close look at the individual numbers will leave you wondering about consistency.

Some of this may be due to the randomness hidden in the cloud. While the companies make it seem like you’re renting a real machine that sits in a box in some secret, undisclosed bunker, the reality is that you’re probably getting assigned a thin slice of a box. You’re sharing the machine, and that means the other users may or may not affect you. Or maybe it’s the hypervisor that’s behaving differently. It’s hard to know. Your speed can change from minute to minute and from machine to machine, something that usually doesn’t happen with the server boxes rolling off the assembly line.

So while there seem to be clear performance differences among the cloud machines, your results could vary. These patterns also emerged:

Bigger, more expensive machines can be slower. You can pay more and get worse performance. The three Windows Azure machines started with one, two, and eight CPUs and cost 6, 12, and 48 cents per hour, but the more expensive they were, the slower they ran the Avrora test. The same pattern appeared with Google’s one CPU and two CPU machines.
Sometimes bigger pays off. The same Windows Azure machines that ran the Avrora jobs slower sped through the Eclipse benchmark. On the first runs, the eight-CPU machine was more than twice as fast as the one-CPU machine.

Comparisons can be troublesome. The results table has some holes produced when a particular test failed, some of which are easy to explain. The Windows Azure machines didn’t have the right codec for the Batik tests; it doesn’t come installed with the default version of Java there. I probably could have fixed that with a bit of work, but the machines from Amazon and Google didn’t need it. (Note: Because Azure balked at the Batik test, the comparative times and costs cited above omit the Batik results for Amazon and Google.)
Other failures seemed odd. The Tradesoap routine would generate an exception occasionally. This was probably caused by some network failure deep in the OS layer. Or maybe it was something else. The same test would run successfully in different circumstances.

Adding more CPUs often isn’t worth the cost. While Windows Azure’s eight-CPU machine was often dramatically faster than its one-CPU machine, it was rarely ever eight times faster — disappointing given that it costs eight times as much. This was even true on the tests that are able to recognize the multiple CPUs and set up multiple threads. In most of the tests the eight CPU machine was just two to four times faster. The one test that stood out was the Sunflow raytracing test, which was able to use all of the compute power given to it.
The CPU numbers don’t always tell the story. While the companies usually double the price when you get a machine with two CPUs and multiply it by eight when you get eight CPUs, you can often save money by not increasing the RAM along with the cores. Just don’t expect performance to double anyway. The Google two-CPU machine in these tests was a so-called “highcpu” machine with less RAM than the standard machine. It was often slower than the one-CPU machine. When it was faster, it was often only about 30 percent faster.

Thread count can also be misleading. While the performance of the Windows Azure machines on the Sunflow benchmark tracks the number of threads, the same can’t be said for the Amazon and Google machines. Amazon’s two-CPU instance often went more than twice as fast as the one-CPU machine; on one test, it was almost three times faster. Google’s two-CPU machine, on the other hand, went only 20 to 25 percent faster on Sunflow.

The pricing table can be a good indicator of performance. Google’s n1-highcpu-2 machine is about 30 percent more expensive than the n1-standard-1 machine even though it offers twice as much theoretical CPU power. Google probably used performance benchmarks to come up with the prices.

Burst effects can distort behavior. Some of the cloud machines will speed up for short “bursts.” This is sort of a free gift of the extra cycles lying around. If the cloud providers can offer you a temporary speed up, they often do. But beware that the gift will appear and disappear in odd ways. Thus, some of these results may be faster because the machine was bursting.
The bursting behavior varies. On the Amazon and Google machines, the Eclipse benchmark would speed up by a factor of more than three when using the “converge” option of the benchmark. Windows Azure’s eight-CPU machine, on the other hand, wouldn’t even double.

If all of these factors leave you confused, you’re not alone. I tested only a small fraction of the configurations available from each cloud and found that performance was only partially related to the amount of compute power I was renting. The big differences in performance on the different benchmarks means that the different platforms could run your code at radically different speeds. In the past, my tests have shown that cloud performance can vary at different times or days of the week.

This test matrix may be large, but it doesn’t even come close to exploring the different variations that the different platforms can offer. All of the companies are offering multiple combinations of CPUs and RAM and storage. These can have subtle and not-so-subtle effects on performance. At best, these tests can only expose some of the ways that performance varies.

This means that if you’re interested in getting the best performance for the lowest price, your only solution is to create your own benchmarks and test out the platforms. You’ll need to decide which options are delivering the computation you need at the best price.

Calculating cloud costs
Working with the matrix of prices for the cloud machines is surprisingly complex given that one of the selling points of the clouds is the ease of purchase. You’re not buying machines, real estate, air conditioners, and whatnot. You’re just renting a machine by the hour. But even when you look at the price lists, you can’t simply choose the cheapest machine and feel secure in your decision.

The tricky issue for the bean counters is that the performance observed in the benchmarks rarely increased with the price. If you’re intent upon getting the most computation cycles for your dollar, you’ll need to do the math yourself.

The simplest option is Windows Azure, which sells machines in sizes that range from extra small to extra large. The amount of CPU power and RAM generally increase in lockstep, roughly doubling at each step up the size chart. Microsoft also offers a few loaded machines with an extra large amount of RAM included. The smallest machines with 768MB of RAM start at 2 cents per hour, and the biggest machines with 56GB of RAM can top off at $1.60 per hour. The Windows Azure pricing calculator makes it straightforward.

One of the interesting details is that Microsoft charges more for a machine running Microsoft’s operating system. While Windows Azure sometimes sold Linux instances for the same price, at this writing, it’s charging exactly 50 percent more if the machine runs Windows. The marketing department probably went back and forth trying to decide whether to price Windows as if it’s an equal or a premium product before deciding that, duh, of course Windows is a premium. 

Google also follows the same basic mechanism of doubling the size of the machine and then doubling the price. The standard machines start at 10.4 cents per hour for one CPU and 3.75GB of RAM and then double in capacity and price until they reach $1.66 per hour for 16 CPUs and 60GB of RAM. Google also offers options with higher and lower amounts of RAM per CPU, and the prices move along a different scale.

The most interesting options come from Amazon, which has an even larger number of machines and a larger set of complex pricing options. Amazon charges roughly double for twice as much RAM and CPU capacity, but it also varies the price based upon the amount of disk storage. The newest machines include SSD options, but the older instances without flash storage are still available.

Amazon also offers the chance to create “reserved instances” by pre-purchasing some of the CPU capacity for one or three years. If you do this, the machines sport lower per-hour prices. You’re locking in some of the capacity but maintaining the freedom to turn the machines on and off as you need them. All of this means it pays to estimate how heavily you’ll use Amazon’s cloud over the next few years, because committing up front can save you real money.

In an effort to simplify things, Google created the GCEU (Google Compute Engine Unit) to measure CPU power and “chose 2.75 GCEUs to represent the minimum power of one logical core (a hardware hyper-thread) on our Sandy Bridge platform.” Similarly, Amazon measures its machines with Elastic Compute Units, or ECUs. Its big fat eight-CPU machine, known as the m3.2xlarge, is rated at 26 ECUs while the basic one-core version, the m3.medium, is rated at three ECUs. That’s a difference of more than a factor of eight.

This is a laudable effort to bring some light to the subject, but the benchmark performance doesn’t track the GCEUs or ECUs too closely. RAM is often a big part of the equation that’s overlooked, and the algorithms can’t always use all of the CPU cores they’re given. Amazon’s m3.2xlarge machine, for instance, was often only two to four times faster than the m3.medium, although it did get close to being eight times faster on a few of the benchmarks.
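The mismatch is easy to quantify from Amazon’s own ratings:

```python
# Amazon's ECU ratings imply an ~8.7x capacity gap between the m3.2xlarge
# (26 ECUs) and the m3.medium (3 ECUs), yet the observed benchmark speedup
# was mostly two to four times.
rated = 26 / 3
print(f"rated gap: {rated:.1f}x")                    # 8.7x
for observed in (2, 4):
    print(f"{observed}x observed = {observed / rated:.0%} of the rated gap")
```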

Caveat cloudster
The good news is that the cloud computing business is competitive and efficient. You put in your credit card number, and a server pops out. If you’re just looking for a machine and don’t have hard and fast performance numbers in mind, you can’t go wrong with any of these providers.

Is one cheaper or faster? The accompanying tables show the fastest and cheapest results in green and the slowest and priciest results in red. There’s plenty of green in Google’s table and plenty of red in Amazon’s. Depending on how much you emphasize cost, the winners shift. Microsoft’s Windows Azure machines start running green when you take the cost into account.

The freaky thing is that these results are far from consistent, even across the same architecture. Some of Microsoft’s machines have green numbers and red numbers for the same machine. Google’s one-CPU machine is full of green but runs red with the Tradesoap test. Is this a problem with the test or Google’s handling of it? Who knows? Google’s two-CPU machine is slowest on the Fop test — and Google’s one-CPU machine is fastest. Go figure.

All of these results mean that doing your own testing is crucial. If you’re intent on squeezing the most performance out of your nickel, you’ll have to do some comparison testing and be ready to churn some numbers. The performance varies, and the price is only roughly correlated with usable power. There are a number of tasks where it would just be a waste of money to buy a fancier machine with extra cores because your algorithm can’t use them. If you don’t test these things, you can be wasting your budget.

It’s also important to recognize that there can be quite a bit of markup hidden in these prices. For comparison, I also ran the benchmarks on a basic eight-core (AMD FX-8350) machine with 16GB of RAM on my desk. It was generally faster than Windows Azure’s eight-core machine, just a bit slower than Google’s eight-core machine, and about the same speed as Amazon’s eight-core box. Yet the price was markedly different. The desktop machine cost about $600, and you should be able to put together a server in the same ballpark. The Google machine costs 82 cents per hour or about $610 for a 31-day month. You could start saving money after the first month if you build the machine yourself.
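The break-even arithmetic is straightforward:

```python
# Rent-vs-buy break-even for an always-on machine, per the figures above.
rate_per_hour = 0.82          # Google eight-core instance, dollars
hours_per_month = 24 * 31
monthly = rate_per_hour * hours_per_month
print(f"${monthly:.0f} per 31-day month")             # ~$610
print(f"desktop pays for itself in {600 / monthly:.1f} months")
```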

The price of the machine, though, is just part of the equation. Hosting the computer costs money, or more to the point, hosting lots of computers costs lots of money. The cloud services will be most attractive to companies that need big blocks of compute power for short sessions. If they pay by the hour and run the machines for only a short block of time, they can cut the costs dramatically. If your workload appears in short bursts, the markup isn’t a problem because any machine you own will just sit there most of the day waiting, wasting cycles and driving up the air conditioning bills.

All of these facts make choosing a cloud service dramatically more complicated and difficult than it might appear. The marketing is glossy and the imagery makes it all look comfy, but hidden underneath is plenty of complexity. The only way you can tell if you’re getting what you’re paying for is to test and test some more. Only then can you make a decision about whether the light, airy simplicity of a cloud machine is for you.



Unless you’re lucky enough to live in Kansas City, Provo or Austin

When Google announced plans in 2010 to jump into the broadband business, the company received more than 1,000 applications from communities hoping to be selected for Google Fiber, which promised gigabit-speed Internet at low prices or even free Internet for seven years if you chose a slower speed.

As we head into 2014, Google has delivered super-fast Internet to exactly one place, greater Kansas City; it’s just now rolling out the service to Provo, Utah — where it purchased a pre-existing municipal network for $1; and has announced plans for Austin, Texas, in 2014.

After that, who knows? Google has not released any further scheduling information.

But if you’re Verizon, Comcast or AT&T, you might be breathing a little easier these days, knowing that Google apparently is not planning to buy up all that unused dark fiber and compete in the residential broadband market on a nationwide scale — at least for now.

There has always been speculation about Google’s motives, and, Google being Google, answers have been hard to come by. Is this just an experiment? Another attention-grabbing sideshow, like those mysterious barges floating in San Francisco Bay and Portland, Maine? Is Google trying to compete head-to-head against the incumbents? Or is Google trying to nudge the incumbents to step up their broadband game by introducing the specter of competition? After all, faster Internet means Google can deliver more ads to more end users, which is how the company makes its money.

As Google spokesperson Jenna Wandres puts it: “The simple answer to ‘why’ is this: it’s for Google users. They keep telling us that they’re tired of waiting for incredibly slow upload and download speeds that often take hours to just transfer an album of photos from one location to another.”

According to Wandres, it’s all about speed. She pointed out that Google developed the Chrome browser to make the Internet experience faster, but the browser can only be as fast as the Internet connections and the hardware and networks that support that infrastructure. So now the company is installing Google Fiber to make the connections faster.

“For the next big leap,” says Wandres, “Gigabit speeds will bring new apps and talented developers to the table, who can and will take advantage of these remarkable speeds.” She explains that organizations such as Kansas City Startup Village (KCSV) — an ecosystem of grassroots individuals working together to create an entrepreneur community — thrive in this type of environment; that is, an area where high-speed Internet allows developers to collaborate and share ideas.

Competition is good news
According to Forrester analyst Dan Bieler, Google Fiber “is good news because competition increases the pressure on carriers and cable providers to bring true broadband service to more households and businesses, if they want to compete effectively with Google. In my view, it is unlikely that Google fiber will target rural areas, but it’s clearly an interesting option for Google to target higher-income urban areas as well as central business districts.”

“Competition is the main driver for improved services, and this will continue to be the case,” adds Ian Keene, research vice president at Gartner. “But Google has discovered that rolling out its services is taking longer than they first thought. If they carry on at this pace, they will not be a threat beyond a handful of cities; not for the foreseeable future, anyway. However, where they are active, we will and have seen the competition fight back with improved subscriber offers.”

For example, after Google announced plans to deliver gigabit Internet to Austin, AT&T announced plans to up its game in Austin. AT&T has promised to provide ultra high-speed gigabit Internet (called GigaPower) to its Austin users in December, with initial symmetrical speeds up to 300Mbps and an upgrade to 1Gbps by mid-2014 (at no extra cost, of course).

But it’s still too early to tell whether Google’s efforts will prove to be economically feasible, or whether Google will continue to expand beyond the three locations already identified. “Google, like many others, has learned that the enormity of the costs involved in building broadband infrastructure creates a dilemma,” says telecom analyst Craig Moffett. “It is extraordinarily difficult to earn a reasonable return on building an infrastructure to compete with cable. Verizon tried with Verizon FiOS and, after reaching only 14 percent of the country, eventually conceded that further expansion was just not economically justified.”

Moffett explains that at least Google is giving it the old college try; but the markets they have chosen, so far, are all unique cases. “For example,” he says, “In Provo, they’re building on a network that was already there. In Austin, we’ll get a better sense of what the economics might actually look like. At this point, I think it is reasonable to conclude that fiber-to-the-home deployments like these will remain the exception rather than the rule.”

How it works
With more than 1,100 applicants, Google could choose the communities that offered the most advantageous terms and conditions. These installations require access to utility poles, roads, and even substations in order to lay their fiber networks, so applicants had to be willing to expedite that process.

In the case of Kansas City, Google only extends fiber to neighborhoods with a certain number of pre-registered customers.

According to Wandres, locations must be fiber friendly, technological leaders, and residents must show a genuine willingness to work with Google; that is, to be flexible, move quickly, and cut through the red tape.

“It’s a long process and requires a lot of work,” says Wandres. “There must be a strong demand for fiber among the user base (for those who are excited about a technological hub) and for entrepreneurs who can advance the technology. In Kansas City, the Mayors’ Bi-state Innovation Team came up with a playbook for how Kansas City could benefit from fiber. And there’s another group now tasked with following through on those plans.’’

In Kansas City, subscribers can get gigabit Internet for $70 a month or the gigabit service plus TV (200 channels, HD included) bundle for $120 a month. Both of these options provide free installation plus all the equipment necessary for the service to function, such as the network gear, the storage device, and the TV box. Additional benefits include 1TB of storage across Gmail, Google Drive, and Google+ Photos and, for the bundle, a Nexus 7 tablet.

Kansas City residents who want Internet access, but may not classify themselves as power users, can get Google’s free Internet service, which runs at 5Mbps. The free service does require a one-time installation fee of $300 (or $25 a month for 12 months), then the service is free for at least seven years.
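The two ways of paying the installation fee cost the same in the end, and the effective monthly cost of the free tier over its guaranteed life is easy to work out:

```python
# Kansas City free-tier arithmetic: $300 up front vs. $25/month for 12 months,
# then $0/month for at least seven years of 5Mbps service.
upfront = 300
installments = 25 * 12
assert upfront == installments          # both routes total $300
months_guaranteed = 7 * 12
print(f"effective cost: ${upfront / months_guaranteed:.2f}/month over 7 years")  # ~$3.57
```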

Wandres adds, “At the end of seven years, we will begin charging the market price for comparable speeds — which should be $0, as long as Internet speeds increase as much as we hope over the next few years. In other words, we think that in seven years, Internet speeds should be ubiquitously faster in America and, by that point, nobody should have to pay for a connection speed that is 5Mbps download/1Mbps upload.”

Brittain Kovač, co-leader and communications pilot at KCSV, says, “With regards to speed, nobody has been able to break the gig. We’ve tried. Downloading tons of files while gaming and running multiple videos simultaneously and we still barely see a dent. What companies are experiencing is an extreme amount of time savings; for example, www.sportsphotos.com, a company that moved to the KCSV from Springfield, Mo., is now able to upload thousands of high resolution photos in a matter of hours; a project that in the past, took days, if not weeks to accomplish.”
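The time savings Kovač describes follow directly from the line rates. As a rough illustration, assume a batch of 5,000 photos at 10MB apiece (made-up but plausible figures):

```python
# Upload time for 5,000 ten-megabyte photos at gigabit vs. 5Mbps rates.
# The photo count and file size are illustrative assumptions.
total_bits = 5_000 * 10 * 8 * 10**6      # 5,000 photos x 10 MB x 8 bits/byte
for label, bps in [("1 Gbps", 10**9), ("5 Mbps", 5 * 10**6)]:
    hours = total_bits / bps / 3600
    print(f"{label}: {hours:.1f} hours")  # ~0.1 hours vs. ~22 hours
```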

“In addition,” says Kovač, “Google fiber has been the catalyst that’s brought the community together in ways that may have never happened, or certainly would have taken years to see the outcomes. It’s bringing like-minded people who want to innovate and collaborate, who know we (KC) have a short window of time to do something big, and we’re really leveraging this opportunity to do great things for the community as a whole. From households to startups, corporate and civic, we’re all working together for the first time in years and it’s exciting.”

Based on the Google fiber city map, the Kansas City project is still in progress. Thirteen more cities in Kansas and six additional cities in Missouri are scheduled next for this service.

Next up, Provo, Utah
The situation in Provo is somewhat different, because Google purchased the existing iProvo city network for $1. So, Google didn’t have to start from scratch, it just needed to upgrade the existing network, which was built in 2006.

In a recent blog post, Provo Mayor John Curtis said, “Unfortunately, while we’ve had the desire, we haven’t had the technical know-how to operate a viable high-speed fiber optic network for Provo residents. So, I started looking for a private buyer for the iProvo network. We issued a Request for Qualifications and a Request for Proposal and even hired a private consultant to guide our efforts. [And now] under the agreement, Google Fiber is committed to helping Provo realize the original vision.”

Provo’s monthly prices for gigabit Internet and the Internet/TV bundle are the same as Kansas City’s ($70 and $120, respectively), except that everyone in Provo pays a $30 installation fee, not just the users who sign up for the free 5Mbps/1Mbps service. And, as in Kansas City, the free service is only free for seven years (or longer, based on the market price for comparable speeds after seven years).



As the first device designed after Google’s acquisition of Motorola, the Moto X is a good combination of both companies’ services.

Moto X is the first completely new smartphone project that was launched after Google acquired Motorola Mobility. As such, it fully integrates the technology assets of both companies. It is a carefully designed, customizable mass-market consumer device with much embedded Google technology: speech recognition, contextual awareness, and personalized search. It’s available in 18 colors with 7 accent colors. The specifications are adequate for a high-end smartphone and meet or exceed most of the iPhone 5 specifications.

At the announcement in New York yesterday, Motorola Senior VP of Product Management Rick Osterloh introduced the Moto X with a personal demonstration. Rather than one big Apple or Samsung-like announcement with hundreds of people, Motorola held four personalized sessions for approximately 50 journalists at a time, allowing interactive questions.

Osterloh led with “Touchless Control.” Motorola adapted Google Now to utilize a proprietary always-on speech recognition function. It’s based on the Motorola X8 Computing System, which combines a standard Qualcomm Snapdragon S4 Pro dual-core CPU and quad-core GPU with two proprietary cores, one for natural language and the other for contextual computing.

The Moto X uses the natural language processor to monitor local sound sources at low power for the phrase “OK Google Now,” which, when detected, takes the smartphone out of a low-power state and turns the speech stream over to Google Now for recognition and a response through Google services, such as search and navigation. Osterloh said the Moto X is not listening to every word – it’s just listening for the signature of “OK Google Now” to awaken the smartphone. If Google Now’s speech recognition were constantly monitoring for this cue on ordinary hardware, the battery would quickly drain.
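The division of labor described here, a tiny always-on matcher that gates a far more expensive recognizer, can be sketched in a few lines. Everything below is a toy stand-in for the proprietary low-power core and the Google Now pipeline.

```python
# Sketch of hotword gating: a cheap always-on matcher gates the expensive
# recognizer. All of this is a toy stand-in for the proprietary pipeline.
HOTWORD = "ok google now"

def low_power_match(frame: str) -> bool:
    """Stand-in for the fixed-phrase detector on the low-power core."""
    return HOTWORD in frame.lower()

def full_recognizer(frame: str) -> str:
    """Stand-in for full speech recognition, which is far more expensive."""
    return frame.lower().replace(HOTWORD, "").strip()

def listen(frames):
    for frame in frames:                  # audio arrives frame by frame
        if low_power_match(frame):        # only the cheap check runs here
            yield full_recognizer(frame)  # wake up and hand off the command

for cmd in listen(["background chatter", "OK Google Now navigate home"]):
    print(cmd)                            # "navigate home"
```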

The user can train the Moto X to recognize his or her voice. It’s not completely foolproof, as someone with a similar voice can prompt the Moto X to awaken. This was shown when an attendee at the event shouted “OK Google Now” and briefly took control of the device. The user can choose to add a password or PIN code to protect the device from unauthorized access, and a Bluetooth device, such as an in-car hands-free system, can be configured as a trusted command device, eliminating the need for password or code entry. Touchless Control was demonstrated to work at cocktail-party levels of ambient noise, and at a distance of up to eight or 10 feet.

Motorola’s researchers learned that the average person activates his or her smartphone 60 times a day, to check the time or respond to notifications. The Moto X uses the contextual processor to operate its “Active Display” to present time of day, missed calls, and notifications at low power without taking the smartphone out of sleep mode. Only a minimum number of pixels are illuminated, saving power by leaving the rest of the OLED display dark. The contextual processor recognizes if the smartphone is face down or in a pocket and does not illuminate the Active Display.

The 10-megapixel camera has three improvements. A twist of the wrist launches the camera without entering a password or PIN. The UI is simplified, moving most camera controls to a panel that can be exposed with a left-to-right swipe. This UI makes it possible to take a photo by touching any part of the screen, replacing the small blue icon that requires concentrated fine motor control to press. The camera is easier to focus and produces better images with an RGBC camera sensor that captures up to 75% more light when the picture is taken.

Most interesting is the user customization, which gives the consumer many ways to personalize the Moto X with a color scheme. The consumer can choose from two bezel colors, 18 back-plate covers, and seven accent colors, for a total of 252 unique combinations. The user can also add personalized text to the back of the Moto X, such as a name or email address that a good Samaritan might use to contact the owner if the smartphone is lost.
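The 252 figure is just the product of the option counts:

```python
# 2 bezel colors x 18 back plates x 7 accent colors = 252 combinations.
from itertools import product

bezels, backs, accents = 2, 18, 7
print(bezels * backs * accents)                          # 252
combos = list(product(range(bezels), range(backs), range(accents)))
assert len(combos) == 252                                # enumerates the same set
```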

Motorola has created a web service called “Moto Maker” for consumers to use in visually sampling and choosing colors, accent colors and personalized text inscriptions. The suggested price is $199 with a carrier contract. Those interested in buying one can visit a carrier and purchase the Moto X at the contract price; they will be given a voucher with a PIN to enter into the Moto Maker web service to order the phone. Motorola said that it has organized its supply chain to assemble the Moto X in Fort Worth, Texas, with a four-day turnaround from order to shipping. Consumers can also use Moto Maker to purchase directly from Motorola online.

Recognizing speech, understanding its meaning and executing specific commands are priorities for Google. To that end, Google recently hired artificial intelligence expert Ray Kurzweil to lead engineering advances in speech technologies. Motorola may be pushing present-day speech technology to its limits. The Moto X’s Touchless Control appears to have made at least an incremental improvement over Google Now and Apple’s Siri. Even if the improvement in speech is not large, the combination of Touchless Control, Active Display, colorful customizability, and the buying experience will drive consumer adoption. Google takes risks and innovates at a scale of many millions and billions. Whether the Moto X achieves Google scale remains to be seen.



Analysts say there’s more to the story, contend that users blame browser makers — not advertisers — for over-zealous data collection

An online advertising group this week attacked Mozilla, the maker of Firefox, for being anti-business, hiding behind a veneer of populism and harboring “techno-libertarians and academic elites who believe in liberty and freedom … as long as they get to decide the definitions of liberty and freedom.”

In a long — almost 4,000 words — and often-rambling blog post, Randall Rothenberg, the CEO of the Interactive Advertising Bureau (IAB), took Mozilla to task over the open-source company’s revamped third-party cookie blocking scheme, a point of contention between the online ad industry and the browser builder since the latter unveiled plans to block some of the cookies advertisers use to track users’ Web movements and then deliver targeted ads.

Without ads, specifically targeted ads, the free content on the Web risks vanishing, argued Rothenberg. At best, the elimination of targeted ads means more advertisements, a claim the IAB has made before.

Although Mozilla ditched its original concept of third-party cookie blocking, acknowledging that the mechanism was generating too many erroneous results, the company instead announced last month that it was partnering with Stanford University’s Center for Internet and Society to create the “Cookie Clearinghouse,” or CCH.

The CCH’s main job will be to create and maintain a centrally-managed set of lists that will finger sites whose cookies will be blocked and those awarded exemptions.
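Functionally, the Clearinghouse lists reduce to a per-domain lookup when a third-party cookie arrives. Here is a minimal sketch; the list contents are invented, and the fallback rule reflects Mozilla’s earlier visited-site heuristic.

```python
# Toy model of centrally managed block/allow lists for third-party cookies.
# The list contents and default rule are invented for illustration.
BLOCKED = {"tracker.example"}
ALLOWED = {"cdn-login.example"}

def accept_cookie(cookie_domain, page_domain, visited_before):
    if cookie_domain == page_domain:
        return True                   # first-party cookies are unaffected
    if cookie_domain in ALLOWED:
        return True                   # exemption granted by the clearinghouse
    if cookie_domain in BLOCKED:
        return False                  # explicitly blocked
    # Fallback modeled on Mozilla's earlier patch: allow third-party
    # cookies only from sites the user has visited directly.
    return visited_before

print(accept_cookie("tracker.example", "news.example", visited_before=True))  # False
```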

While the most provocative of Rothenberg’s criticisms were aimed at what he called Mozilla’s values, his biggest beef with the Firefox-CCH plan seemed to be that Mozilla had set itself up as an unelected “gatekeeper” with the power to decide the fate of online businesses.

“The company’s own statements and explanations indicate that Mozilla is making extreme value judgments with extraordinary impact on the digital supply chain, securing for itself a significant gatekeeper position in which it and its handpicked minions will be able to determine which voices gain distribution and which do not on the Internet,” charged Rothenberg.

“The browser is certainly the gatekeeper and the gateway to the broad landscape of the Internet,” agreed Ray Valdes, an analyst with Gartner, acknowledging the realities of the Web. “But most users are not aware of privacy, or simply don’t care, whether it’s in the browser or on Facebook. It certainly doesn’t loom large in the minds of the average consumer [although] it is a hot-button issue for a small part of the user population.”

Al Hilwa, a researcher with IDC, concurred. “The browser makers are definitely in charge and are indeed the gatekeepers,” he said.

Much of the problem that online advertisers have with Mozilla — and Microsoft — ultimately stems from that gatekeeper role, which the ad industry believes has been abused through unilateral decisions to, for example, block third-party cookies by default (Firefox) and switch on the “Do Not Track” privacy signal (Internet Explorer).

The browser makers’ response is that users have expressed a desire for more online privacy.

But Hilwa sees more at play than a Manichaean view of business versus anti-business, as Rothenberg contended.

Saying Mozilla was “caught in the middle,” Hilwa argued that the company was reacting to pressure — perhaps, as Valdes said, to a vocal minority — because its users blame the browser, not necessarily advertisers, for privacy failures. “There is no doubt users will hold browsers accountable for any breaches of privacy or excesses of the advertising industry in siphoning data,” said Hilwa. “[Browser makers] feel under pressure to control the type of data that can seep through their browsers.”

The recent disclosures of widespread government surveillance have added fuel to that fire, Hilwa noted.

For its part, Mozilla declined to directly rebut Rothenberg’s denunciations, and instead issued a statement that walked a line similar to its earlier responses when it has butted heads with advertisers.

“Mozilla feels advertising is an important component to a healthy Internet ecosystem, and over the coming months we’ll be working to address valid commercial concerns in our third-party cookie patch before advancing it to the general Firefox release,” said a company spokesman, again intimating that the cookie-blocking plan was far in the future. “We’ll continue gathering input while keeping the dialogue open with the hope that advertising industry groups will respect the choices users make to form the Web experience they want.”

Mozilla, ironically, relies indirectly on advertising for the vast bulk of its revenue. In 2011, the last year for which it reported financials, Mozilla earned $162 million, or 99% of all revenue, from deals with search engines, which pay the firm to make their services available to Firefox users.

Those deals are predicated on Firefox users clicking on ads within the ensuing search results.

Mozilla has been aggressively moving on to other projects, however, including Firefox OS, as a hedge against the decline of desktop browsing and a concurrent reduction in search-based revenue. But the desktop versions of Firefox, which until late 2009 were consistently gaining browser user share, have budged little over the last 12 months.

According to Web analytics firm Net Applications, Firefox on the desktop accounted for 19% of the browsers used worldwide during June. In mobile browsing, where Mozilla has devoted significant resources, not only to Firefox OS but also to an Android browser, Firefox held an almost-invisible 0.03% user share.



Microsoft spoofs Google’s minimalist search site, Google knocks Outlook.com with ‘Gmail Blue’

Microsoft today took another shot at rival Google, the target of its “Scroggled” campaign, with an April Fools’ Day prank that turned its Bing search engine into a Google look-alike.

Dubbed “Bing Basic” in an April 1 blog post, and claiming it was a special test, the prank kicks off “if you visit bing.com and enter a certain telltale query” that then results in “something a little more bland.”

From Bing.com, users simply enter “Google” to see a temporary home page that looks very much like Google’s noted minimalist design.

“We decided to go back to basics, to the dawn of the Internet, to reimagine Bing with more of a 1997, dial-up sensibility in mind,” wrote Michael Kroll, principal UX (user experience) manager for Bing, on the blog. “We may see some uptick in our numbers based on this test, but the main goal here is just to learn more about how our world would look if we hadn’t evolved.”
Microsoft’s bogus “Bing Basic” takes a shot at rival Google’s stark search engine UI.

Search Engine Land first reported on the “Google” trigger for the Bing Basic hoax.

The revamped Bing Basic screen sports a few differences from Google’s real home page, including a renaming of the latter’s “I’m Feeling Lucky” button to “I’m Feeling Confused.” Clicking on that button in Bing’s imitation leads to Kroll’s blog post.

Microsoft has retained Bing’s hover-links, however, and used them to take additional shots at the competition. Hovering the mouse over one such link displays a pop-up that states, “When there’s nothing else to look at … You may take drastic measures.” Clicking directs the user to a search for “watching paint dry.”

Google’s counter — launched earlier in the day — was both more elaborate and more subtle as it spoofed Microsoft’s Outlook.com email service, the rebrand of Hotmail.com that debuted last July.

Called “Gmail Blue,” the phony is purportedly a major refresh of Google’s own email service that “Richard Pargo,” supposedly a project manager, says was based on the question, “How do we completely redesign and recreate something while keeping it exactly the same?”

The result? Gmail Blue, with blue fonts, blue lines, blue theme, blue everything.

“It’s Gmail, only bluer,” said Pargo with a straight face in a production-quality video that included a cameo by Blue Man Group.

“We tried orange, brown … brown was a disaster,” said “Dana Popliger,” a faux lead designer. “We tried yellow.”

While some have interpreted Google’s gag as a shot fired at Windows 8 — both directly at the summer’s upcoming upgrade, code named “Blue,” as well as critics’ take on the new OS, which makes a radical change of user interfaces (UIs) in one part, while retaining the traditional desktop in the other — it could also be seen as a bashing of Outlook.com, which by default features a blue theme.

“I think the first thought that’s going to come to the end-user’s mind is, ‘I can’t believe I waited this long for this,'” concluded “Carl Branch,” labeled as lead engineer.

Not coincidentally, today was Gmail’s ninth anniversary. Google launched its invitation-only beta of the service on April 1, 2004.

 

