
MCTS KEY

MCTS Training and MCTS Certification exam training at MCTSKEY.com


We put the screws to all five modern browsers, testing them in all manner of scenarios. If you’re looking for a fast, efficient, convenient browser, we’ve found two that we think you’ll like.

The best browsers go beyond benchmarks, racing through real-world webpages as well as canned routines. They’re easy to set up, flexible and extensible, and connect other devices and services into an ecosystem.

Look, throwing a few benchmarks at a browser just doesn’t cut it any more. Just as you expect us to test graphics cards against the latest games, we think your browsers should be tested against a collection of live sites. Can they handle dozens of tabs at once? Or do they shudder, struggle, and crash, chewing through your PC’s processor and memory?

To pick a winner, we put Google Chrome, Microsoft’s Edge and Internet Explorer, Mozilla Firefox, and Opera to the test, excluding Apple’s abandoned Safari for Windows. We used the latest available version of each browser, except for Firefox, which upgraded to Firefox 40 late in our testing. We also tried to look at each browser holistically: How easy was each to install and set up? Does Opera make it simple to switch from Chrome, for example?

For 2015, we have a newcomer: Microsoft’s Edge browser, which has been integrated into Windows 10.

You’ve already seen part of our tests, where we showed you how much of an impact enabling Adobe Flash can have on your system. Disabling or refusing to load Flash can seriously improve performance—some sites, like YouTube, have begun to transition to less CPU-intensive HTML5 streams. Still, other readers pointed out that they simply need to run Flash on their favorite sites. That’s fine—we tested with and without Flash, so you’ll have a sense for which browser performs best, in either case.

Oh, and Microsoft: We found that your new Edge browser isn’t quite as fast as you make it out to be. (Sorry!) But it still demonstrated definite improvement over Internet Explorer.

The benchmark numbers favor Chrome and Firefox

We do consider benchmarks to be a valuable indicator of performance, just not a wholly defining one. Still, they’re the numbers that users want to see, so we’ll oblige. We used a Lenovo Yoga 12 notebook with a 2.6GHz Intel Core i7-5600U inside, running a 64-bit copy of Windows 10 Pro on 8GB of memory as our test bed.

We tested Chrome 44, Windows 10’s Edge 12, Firefox 39, Internet Explorer 11, and Opera 31 against two popular (though unsupported) benchmarks—SunSpider 1.0.2 and Peacekeeper—just for reference purposes. But we’d encourage you to pay attention to the more modern benchmarks, including JetStream, Octane 2.0, Speedometer, and WebXPRT. The latter two are especially useful, as they try to mirror actual interaction with web apps. We also tested using Oort Online’s graphics benchmark as well as the standardized HTML5test—which is not so much a benchmark as an evaluation of how compatible a browser is with the HTML5 standard for Web development.

From our testing, Chrome and Firefox topped the Speedometer and WebXPRT tests, respectively. Perhaps unsurprisingly, Chrome was the fastest browser under the Google-authored Octane 2.0 benchmark. But Microsoft’s Edge led the pack in the JetStream benchmark—which includes the SunSpider tests, which Edge led as well. (For all of the benchmarks, a higher number is better; the one exception is SunSpider, which records its score as the time taken to run, so lower is better.)

Google Chrome and Mozilla Firefox do well here. (A higher result is better, except for the SunSpider benchmark.)

What’s surprising about Edge is that it led the pack in the JetStream benchmark, but fell way behind on Speedometer, only to record a quite reasonable score in WebXPRT. (Microsoft claims that Edge is faster than Chrome in the Google-authored Octane 2.0 benchmark as well, but our results don’t indicate that.)

Chrome flopped on the SunSpider test; the only test in which Firefox failed equally miserably was the Oort Online benchmark, which draws a Minecraft-like landscape using the browser.

For whatever reason, I noticed some graphical glitches as Edge rendered the Oort landscape, including problems drawing a shadow that slid across the bay in the night scene. But Oort proved even more problematic for Firefox, rendering “snow” as flashing lights and rain as a series of lines. (We’ve included the test result, but take it with a grain of salt.) Internet Explorer 11 simply couldn’t run the Oort benchmark at all.

We also included the HTML5test compatibility test, which measures how compatible each browser is with the latest HTML5 Web standards. Although some developers focus extensively on each browser’s score, even the test developer isn’t too concerned:

HTML5test scores are less interesting to me than people think. Any browser above 400 points is a perfectly fine choice for todays web.
— HTML5test (@html5test) August 2, 2015

And the only one that fails that test, of course, is the semi-retired Internet Explorer 11.

What does all this mean? It doesn’t indicate a clear win for any specific browser, including Chrome. Based on our benchmark tests, many of the browsers will handle the modern web just fine.

Next page: Real-world testing and “the convenience factor.”

Real-world testing: Opera makes its case

Opera Software has always lived on the periphery, with what NetApplications says is just 1.34 percent of the worldwide browser market. With Opera considering putting itself up for sale, the browser may not be long for this world. But in terms of real-world performance, Opera is worth a long, hard look while you can still take one.

Why? Because in real-world browser tests, Chrome and Opera performed very well.

It’s important to know how each browser will actually perform while surfing the live web. Testing this is a challenge—some canny websites constantly tweak their content, and ads vary from one visit to the next. But we compressed the window over which we visited each site to reduce that variation.

We used a selection of 30 live sites, from Amazon to CNN to iMore to PCWorld, as well as a three-tab subset of each, to see how performance scaled. Our tests included adding each site to a new tab, one after another, to roughly approximate how a user might keep adding new tabs—but quickly, so as to stress-test the browser itself. Finally, we evaluated them with Adobe Flash turned on and off. (Neither Opera nor Firefox ships with Flash natively, so we first tested without it, then downloaded the Flash plugin.)

After loading all 30 tabs, we waited 30 seconds, then totaled the CPU and memory consumption of the app itself, its background processes, and the separate Flash process, if applicable.
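The aggregation we performed can be sketched in a few lines. This is a hypothetical illustration: the process names and numbers are invented, and the article doesn’t say what tooling was used to sample CPU and memory.

```python
# Sum CPU and memory across every process belonging to one browser,
# including helper/background processes and a separate Flash process.
# The sample data below is invented for illustration only.
samples = [
    {"name": "chrome.exe",      "cpu_pct": 4.2, "mem_mb": 310.0},
    {"name": "chrome.exe",      "cpu_pct": 1.1, "mem_mb": 145.5},
    {"name": "chrome.exe",      "cpu_pct": 0.3, "mem_mb":  98.2},
    {"name": "flashplayer.exe", "cpu_pct": 2.7, "mem_mb": 120.0},
    {"name": "explorer.exe",    "cpu_pct": 0.5, "mem_mb":  60.0},  # unrelated process
]

def browser_totals(samples, browser_procs):
    """Total CPU percentage and memory (MB) over the given process names."""
    cpu = sum(s["cpu_pct"] for s in samples if s["name"] in browser_procs)
    mem = sum(s["mem_mb"] for s in samples if s["name"] in browser_procs)
    return cpu, mem

cpu, mem = browser_totals(samples, {"chrome.exe", "flashplayer.exe"})
print(f"CPU: {cpu:.1f}%  RAM: {mem:.1f} MB")
```

The same loop works for any browser: swap in the set of process names the browser spawns, and sample just before totaling.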

So what does all this mean? If you own a mid-range or low-end PC, it may not have much memory or a particularly powerful CPU. In that case, you might consider switching to a more efficient browser.

This chart contains a lot of information; you can click it to enlarge it. But what you should focus on are the differences in memory consumption (the yellow bars) and the differences in CPU consumption. We’ve included the raw data in a table at the bottom of the chart. In each case, a lower number indicates a more efficient browser, with the one exception being Firefox (with Flash)’s zero scores, which we’ll cover below.

Oddly enough, we noted an actual decrease in CPU consumption when Flash was enabled on the three-tab test, specifically within Edge, Firefox, and Opera—perhaps because the Flash plugin was more efficient at lighter workloads. As our previous report indicated, however, CPU and memory consumption soared when we started throwing tab after tab at each browser.

The other discrepancy that you may note is that Chrome, with Flash enabled, consumes nearly as much memory as Edge does without Flash enabled. We double-checked this, but we did so on another day, when Edge’s memory consumption was even higher than what we recorded. (That’s probably due to differences in the ads and video the sites displayed.)

Chrome has a reputation for sucking up all the memory you can throw at it, and these numbers bear that out. But it also consumes relatively little of your CPU—which, if you scale down your tab use, makes its impact on your PC manageable. Opera, however, really shines. In fact, without Flash, Opera consumed just 6.6 percent of the CPU and 1.83GB of RAM during our stress test. With Flash on, Opera consumed 3.47GB of memory and 81.2 percent of the CPU.

And Mozilla was getting on so well—but with Flash on, tabs essentially descended into suspended animation until they were clicked on, then began slowly loading. It was awful. “Tombstoning” tabs that aren’t being used is acceptable, but please, load them first, Mozilla!

Finally, we tried loading pages, then timing how long it took before each page became “navigable”—in other words, how soon one could scroll down. Fortunately, all the browsers we tested did well, although some were faster than others; Chrome and Opera did exceedingly well, especially with Flash turned off. In all, however, we’d say that any browser that can load pages in three seconds or less will suit your needs. (Keep in mind that page-load time depends in part on your Internet connection and the content of the page itself.)

The convenience factor

Since all of these browsers are free, ideally you should be able to download every one and evaluate it for yourself. And each browser makes it quite easy to pluck bookmarks and settings from its rivals, especially from Chrome and Internet Explorer. But manually exporting bookmarks is another story. It’s almost like telling the browser that you’re fed up with it—and Firefox, for example, passive-aggressively buries the export-bookmarks command a few menus deep. Even stranger, Opera claims that you can export bookmarks from its Settings menu, but only the import option appears to remain in Opera 31.

More and more, however, browsers are using a single sign-on password to identify you, store your bookmarks online, and make shifting from PC to PC a snap—provided that you keep the same browser, of course.

Chrome, for example, makes setting itself up on a new PC literally as simple as downloading the browser, installing it, and entering your username and password. You may have to double-check that the bookmark bar is enabled, for example, but after that your bookmarks and stored passwords will load automatically. (As always, make sure that “master” passwords like these are complex.)

Chrome isn’t alone in this, either. Firefox’s Sync syncs your tabs, bookmarks, preferences and passwords, while Opera syncs your bookmarks, tabs, the “Speed Dial” homepage, and preferences and settings.

That’s an area where Edge needs improvement. Edge can import favorites/bookmarks from other browsers, manually, but doesn’t keep a persistent list of favorites across machines—at least not yet. By contrast, if you save a new favorite in IE11, it’s instantly available across your other PCs. Other browsers—not Edge—also allow you to access your desktop bookmarks within their corresponding mobile apps.

You can configure the Microsoft Edge homepage to show you information that allows you to start your day. (iGoogle did this too, years ago.)

It’s also interesting that, more and more, browsers are moving away from the concept of a “homepage” in favor of the approach Edge and Opera take, where the browser opens to an index page with news and information curated by the browser company itself. But you still have the option to set your own homepage in Chrome, Edge, and Firefox.

Honestly, all of the browsers we tested were relatively easy to set up and install, with features to import bookmarks and settings either from other browsers or other installations. You may have your own preferences, but it’s a relative dead heat.

Final page: Little extras and PCWorld names the best browser of 2015

Going beyond the web

Modern browsers, however, go beyond merely surfing the web. Most come with a number of extra benefits that you might not know about.

Perhaps you’d like your browser to serve as a BitTorrent client, for example. In the early days, you’d need to download a separate, specific program for that. Today, those capabilities can be added via plugins or addons—which most browsers offer, but not Edge, yet. (This can be more than a convenience; Edge will store your passwords, but not in an encrypted password manager like LastPass.)

If there’s one reason to use Firefox, it’s because of the plugin capability. Mozilla has a site entirely dedicated to plugins, and they’re organized by type and popularity. Installing a plugin is as easy as clicking through a couple of notifications, then restarting your browser. And given the market share of Chrome—and the plugin popularity of Firefox—you’ll find developers who will focus on those two first. A good example is OneTab, which transforms all of your open tabs into a text-based list, dramatically cutting your browser’s memory consumption. Note that the more plugins you add and enable, the more memory and CPU power your browser will consume.

Opera doesn’t appear to have nearly the number of available plugins that Firefox does, but it does have a unique twist: a “sidebar” along the left-hand side that can be used for widgets, like a calculator or even your Twitter feed. Opera is also extensible via wallpaper-like themes, but they’re far less impressive.

Chrome hides a wealth of options to manage what you see on the Web, but only if you want to explore.

But you’ll also notice browsers adding more and more functionality right in the app itself. Firefox includes a Firefox-to-Firefox videoconferencing service called Firefox Hello that works right in your browser, and you can save webpages to a Pocket service for later reading. And this is where Edge shines—its digital assistant, Cortana, is built right in, and there are Reading View options and a service to mark up webpages, called Web Notes. Cortana does an excellent job supplying context, and it’s certainly one of the reasons to give Edge a try.

Over time, we expect that this will be one area where Edge and Chrome will attempt to “pull away,” as it were. In a way, it’s similar to the race in office suites: a number of apps mimic functionality that Microsoft Office had a few years ago. But Microsoft has begun building intelligence into Office, and Edge, elevating them over their competition. Given that Chrome is also the front door to Google Now on the PC, we may eventually see Google try to out-Cortana Cortana on her home turf.

So who wins? Here’s the way we see it.

Give credit where credit is due: Edge’s performance has improved to the point that it’s competitive, though perhaps not as much as Microsoft would make it seem. Still, its lack of extensibility and proper syncing drag it down, at least until they’re added later this year. Firefox also performed admirably, until it bogged down under our real-world stress test. We also believe Opera would be a terrific choice for you, since it zips through benchmarks and real-world tests alike. Sure, it lacks the tight OS and service integration of Chrome, IE, and Edge—but some may see that as a bonus, too.

All that said, we still think Google’s Chrome is the best of the bunch.

Chrome has a well-deserved reputation for glomming on to and gobbling up any available memory, and our benchmarks prove it. But it’s stable, extensible, performs well, integrates into other services, and reveals its depths and complexity only if you actively seek it out. For that reason, Google Chrome remains our browser of choice, with Opera just behind.


MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft MCITP Training at certkingdom.com

 

Lawsuit alleges age discrimination in Google hiring

There was something about Cheryl Fillekes that Google really liked. Over a seven-year period, Fillekes was contacted by Google recruiters four different times for jobs. In each case, she did well enough in the phone interviews to get an invitation for an in-person interview.

Despite all these interviews, Fillekes never got a job offer, and Google is now facing an age discrimination lawsuit.

Fillekes joined a lawsuit filed in April by Robert Heath, who was 60 in 2011 when he applied for a job at Google. The age discrimination complaint was amended recently to include Fillekes.

The amended lawsuit also alleges that the U.S. Equal Employment Opportunity Commission (EEOC) received “multiple complaints of age discrimination by Google, and is currently conducting an extensive investigation.” An EEOC spokesman said the agency can’t, by law, discuss whether any investigation is taking place.

Google was not immediately available for comment.

According to the lawsuit, Fillekes started programming as a high school student in 1976. She earned a bachelor of science in engineering from Cornell University in 1982, and in 1990 earned a Ph.D. in geophysics from the University of Chicago. She was also a postdoctoral fellow at Harvard. She specializes in Unix and Linux system programming.

Today, Fillekes’ LinkedIn profile describes her career as a “cheese maker at Mohawk Drumlin Creamery.” In 2014, “I bought a dairy farm in upstate NY. I designed and built an on-farm creamery to produce farmstead sheep’s milk cheese and yogurt,” she wrote.

Fillekes could not be reached for comment at deadline.

According to the lawsuit, a Google recruiter contacted Fillekes in 2007 for possible employment in either Google’s engineering and testing group or its software development group. There were a series of phone interviews and an in-person interview at Google’s headquarters in Mountain View, California. In 2010, a different Google recruiter contacted her and said that from her previous interview scores, she was an ideal candidate.

This happened again in 2011 and late 2013. In each case, a Google recruiter contacted her and there were a series of phone interviews, concluding with in-person interviews, but no job offer.

“Despite being very well qualified for each of the positions she interviewed for, Google did not hire her for any position after she attended her in-person interviews,” the lawsuit states. The lawsuit also alleges that Google favors workers who are under the age of 40 and hires them “in significantly greater numbers.”

In April, in response to Heath’s complaint, Google said that it “believes that the facts will show that this case is without merit and we intend to defend ourselves vigorously.”



New alert appears before users reach sites likely to serve up software that silently changes the browser’s home page

Google has added an early warning alert to Chrome that pops up when users try to access a website that the search giant suspects will try to dupe users into downloading underhanded software.

The new alert pops up in Chrome when a user aims the browser at a suspect site but before the domain is displayed. “The site ahead contains harmful programs,” the warning states.

In the warning’s text, Google emphasized tricksters that “harm your browsing experience,” citing those that silently change the home page or drop unwanted ads onto pages.

The company has long focused on those categories, for obvious, if unstated, reasons. It would prefer that people, and especially shifty software, not alter the Chrome home page, which features the Google search engine, the Mountain View, Calif., firm’s primary revenue generator. Likewise, the last thing Google wants is for adware, especially the most irritating kind, to turn everyone off online advertising altogether.

The new alert is only the latest in a line of warnings and more draconian moves Google has made since mid-2011, when the browser began blocking malware downloads. Google has gradually enhanced Chrome’s alert feature by expanding the download warnings to detect a wider range of malicious or deceitful programs, and using more assertive language in the alerts.

In January 2014, for example, Chrome 32 added to the unwanted list threats that posed as legitimate software or monkeyed with the browser’s settings.

The browser’s malware blocking and suspect site warnings come from Google’s Safe Browsing API (application programming interface) and service; Apple’s Safari and Mozilla’s Firefox also access parts of the API to warn their users of potentially dangerous websites.

Google’s malware blocking typically tests much better than Safari’s or Firefox’s, however, because Google also relies on other technologies, including reputation ranking, to bolster Chrome’s Safe Browsing.

Like the Microsoft application reputation ranking used in Internet Explorer, Google’s technology combines whitelists, blacklists and algorithms to create a ranking of the probability that a download is legitimate software. Files that don’t meet a set legitimacy bar trigger a warning.
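A rough sketch of how such a reputation pipeline might fit together. Everything here is an assumption: neither Google nor Microsoft has disclosed its actual lists, signals, scoring, or thresholds, so the names, weights, and the legitimacy bar below are invented for illustration.

```python
# Hypothetical download-reputation check: consult the whitelist and
# blacklist first; everything else gets a probability-of-legitimacy
# score, and files below the bar trigger a warning.
WHITELIST = {"a1b2"}      # hashes of known-good files (invented)
BLACKLIST = {"dead"}      # hashes of known malware (invented)
LEGITIMACY_BAR = 0.5      # invented threshold

def score(file_meta):
    """Toy stand-in for the undisclosed ranking algorithm."""
    s = 0.5
    if file_meta.get("signed"):
        s += 0.3          # signed binaries look more legitimate
    if file_meta.get("downloads", 0) < 100:
        s -= 0.3          # rarely-seen files look riskier
    return s

def verdict(file_hash, file_meta):
    if file_hash in WHITELIST:
        return "allow"
    if file_hash in BLACKLIST:
        return "block"
    return "allow" if score(file_meta) >= LEGITIMACY_BAR else "warn"

print(verdict("dead", {}))                    # known malware
print(verdict("eeee", {"downloads": 3}))      # unknown, rare, unsigned
```

The design point is that the algorithmic score only decides the gray area between the two lists, which is why adding reputation signals on top of plain blacklisting tends to test better.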

Google uses other signals, the details of which it has not disclosed, to identify websites that will likely serve up unwanted software like home page changers. Google search uses similar signals to ward off entries in the results list. “This change reduces the chances you’ll visit these sites via our search results,” wrote Lucas Ballard, a software engineer, in a Monday blog post.

Chrome 40, the current production version of the browser, can be downloaded for Windows, OS X and Linux from Google’s website.



Gmail represents a dying class of products that, like Google Reader, put control in the hands of users, not signal-harvesting algorithms.

I’m predicting that Google will end Gmail within the next five years. The company hasn’t announced such a move — nor would it.

But whether we like it or not, and whether even Google knows it or not, Gmail is doomed.

What is email, actually?

Email was created to serve as a “dumb pipe.” In mobile network parlance, a “dumb pipe” is when a carrier exists to simply transfer bits to and from the user, without the ability to add services and applications or serve as a “smart” gatekeeper between what the user sees and doesn’t see.

Carriers resist becoming “dumb pipes” because there’s no money in it. A pipe is a faceless commodity, valued only for reliability and speed. In such a market, margins sink to zero or below, and it becomes a horrible business to be in.

“Dumb pipes” are exactly what users want. They want the carriers to provide fast, reliable, cheap mobile data connectivity. Then, they want to get their apps, services and social products from, you know, the Internet.

Email is the “dumb pipe” version of communication technology, which is why it remains popular. The idea behind email is that it’s an unmediated communications medium. You send a message to someone. They get the message.

When people send you messages, they stack up in your in-box in reverse-chronological order, with the most recent ones on top.
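That unmediated ordering is simple enough to express in a couple of lines, which is exactly the point; a minimal sketch with invented messages:

```python
from datetime import datetime

# A "dumb pipe" in-box: every message arrives, nothing is filtered or
# reshuffled, and display order is simply newest-first.
inbox = [
    {"from": "alice", "subject": "Lunch?",  "ts": datetime(2014, 10, 20, 9, 15)},
    {"from": "store", "subject": "Receipt", "ts": datetime(2014, 10, 22, 14, 2)},
    {"from": "bob",   "subject": "Re: Q3",  "ts": datetime(2014, 10, 21, 17, 40)},
]

# Reverse-chronological: sort by timestamp, most recent on top.
for msg in sorted(inbox, key=lambda m: m["ts"], reverse=True):
    print(msg["ts"].date(), msg["from"], "-", msg["subject"])
```

Everything a mediated service does, by contrast, happens between the arrival of the message and this sort.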

Compare this with, say, Facebook, where you post a status update to your friends, and some tiny minority of them get it. Or, you send a message to someone on Facebook and the social network drops it into their “Other” folder, which hardly anyone ever checks.

Of course, email isn’t entirely unmediated. Spammers ruined that. We rely on Google’s “mediation” in determining what’s spam and what isn’t.

But still, at its core, email is by its very nature an unmediated communications medium, a “dumb pipe.” And that’s why people like email.

Why email is a problem for Google

You’ll notice that Google has made repeated attempts to replace “dumb pipe” Gmail with something smarter. They tried Google Wave. That didn’t work out.

They hoped people would use Google+ as a replacement for email. That didn’t work, either.

They added prioritization. Then they added tabs, separating important messages from less important ones via separate containers labeled by default “Primary,” “Promotions,” “Social Messages,” “Updates” and “Forums.” That was vaguely popular with some users and ignored by others. Plus, it was a weak form of mediation — merely reshuffling what’s already there, but not inviting a fundamentally different way to use email.

This week, Google introduced an invitation-only service called Inbox. Another attempt by the company to mediate your dumb email pipe, Inbox is an alternative interface to your Gmail account, rather than something that requires starting over with a new account.

Instead of tabs, Inbox groups together and labels and color-codes messages according to categories.
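Conceptually, that grouping is a thin mediation layer over the same message stream. A toy sketch follows; the category labels and keyword rules are invented for illustration and are not Inbox’s real, undisclosed classifier:

```python
from collections import defaultdict

# Bundle messages under category labels instead of one linear list.
# The keyword rules are a toy stand-in for server-side classification.
RULES = {"receipt": "Purchases", "flight": "Travel", "newsletter": "Updates"}

def categorize(subject):
    for keyword, label in RULES.items():
        if keyword in subject.lower():
            return label
    return "Other"

messages = ["Your receipt from Example Store",
            "Flight confirmation ABC123",
            "Weekly newsletter",
            "Lunch tomorrow?"]

bundles = defaultdict(list)
for subject in messages:
    bundles[categorize(subject)].append(subject)

for label, items in bundles.items():
    print(label, "->", items)
```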

One key feature of Inbox is that it performs searches based on the content of your messages and augments your in-box with that additional information. One way to look at this: instead of pulling relevant extra data out of your Gmail messages and slotting it into Google Now, Google shows you those Google Now cards immediately, right there in your in-box.

Inbox identifies addresses, phone numbers and items (such as purchases and flights) that have additional information on the other side of a link, then makes those links live so you can take quick action on them.
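That kind of extraction can be approximated with pattern matching. The sketch below uses deliberately naive regular expressions as stand-ins; Google’s actual parsers are not public:

```python
import re

# Pull phone numbers and flight numbers out of a message body so a UI
# could turn them into tappable links.  Both regexes are simplified
# illustrations (US-style phone numbers, two-letter airline codes).
PHONE = re.compile(r"\b\d{3}-\d{3}-\d{4}\b")
FLIGHT = re.compile(r"\b[A-Z]{2}\s?\d{2,4}\b")

def actionable_items(body):
    """Return the spans a mail client could make 'live'."""
    return {"phones": PHONE.findall(body),
            "flights": FLIGHT.findall(body)}

body = "Call me at 555-867-5309 about flight UA 1234 on Friday."
print(actionable_items(body))
```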

You can also do Mailbox-like “snoozing” to have messages go away and return at some future time.

You can also “pin” messages so they stick around, rather than being buried in the in-box avalanche.

Inbox has many other features.

The bottom line is that it’s a more radical mediation between the communication you have with other people and with the companies that provide goods, services and content to you.

The positive spin on this is that it brings way more power and intelligence to your email in-box.

The negative spin is that it takes something user-controlled, predictable, clear and linear, and makes it unpredictable, unclear and nonlinear, shifting control away from the user.

That users will judge this and future mediated alternatives to email and label them either good or bad is irrelevant.

The fact is that Google, and companies like Google, hate unmediated anything.

The reason is that Google is in the algorithm business, using user-activity “signals” to customize and personalize the online experience and the ads that are served up as a result of those signals.

Google exists to mediate the unmediated. That’s what it does.

That’s what the company’s search tool does: It mediates our relationship with the Internet.

That’s why Google killed Google Reader, for example. Subscribing to an RSS feed and having an RSS reader deliver 100% of what the user signed up for in an orderly, linear, predictable and reliable fashion is a pointless business for Google.

It’s also why I believe Google will kill Gmail as soon as it comes up with a mediated alternative everyone loves. Of course, Google may offer an antiquated “Gmail view” as a semi-obscure alternative to the default “Inbox”-like mediated experience.

But the bottom line is that dumb-pipe email is unmediated, and therefore it’s a business that Google wants to get out of as soon as it can.

Say goodbye to the unmediated world of RSS, email and manual Web surfing. It was nice while it lasted. But there’s just no money in it.



As ‘organizers of information distribution’ they must store data about users’ communications on servers in Russia

Russia’s communications regulator has ordered Facebook, Twitter and Google to join a register of social networks or face being blocked in Russia, according to a report in the newspaper Izvestia.


By registering as “organizers of information distribution,” companies agree to store data about their users’ communications on servers in Russia or face a fine of 500,000 Russian roubles ($13,000), the report said. Companies that fail to register within 15 days of a second order from the regulator can be blocked in Russia.

A number of Russian Internet companies have already registered, said the newspaper. These include search engine Yandex, social networking service VKontakte, and webmail service Mail.ru, it said, citing Maxim Ksenzov, deputy head of the Russian Federal Service for Supervision of Communications, Information Technology, and Mass Media (Roscomnadzor).

The regulator’s move against the three U.S. Internet companies was no surprise: Western monitoring organizations including the New York-based Committee to Protect Journalists have been predicting it since Russia passed its so-called Social Media Law in May.

It’s not just Internet services that must register with Roscomnadzor, however: Bloggers too must register as mass media outlets if they have more than 3,000 visitors per day, and must comply with the same restrictions on their output as television stations and newspapers. These include obeying the election law, avoiding profanity, and publishing age-restriction warnings on adult content, according to the CPJ.

Roscomnadzor maintains an extensive list of blogs and other sites that it says contain “incitements to illegal activity”, and requires Russian ISPs to block them.

Organizations including the CPJ expect the registration requirement to have a significant effect on freedom of expression in Russia, not through blocking but through self-censorship, as bloggers limit what they say to avoid the risk of administrative sanctions.


A diverse set of real-world Java benchmarks shows that Google is fastest, Azure is slowest, and Amazon is priciest

If the cartoonists are right, heaven is located in a cloud where everyone wears white robes, every machine is lightning quick, everything you do works perfectly, and every action is accompanied by angels playing lyres. The current sales pitch for the enterprise cloud isn’t much different, except for the robes and the music. The cloud providers have an infinite number of machines, and they’re just waiting to run your code perfectly.

The sales pitch is seductive because the cloud offers many advantages. There are no utility bills to pay, no server room staff who want the night off, and no crazy tax issues for amortizing the cost of the machines over N years. You give them your credit card, and you get root on a machine, often within minutes.


To test out the options available to anyone looking for a server, I rented some machines on Amazon EC2, Google Compute Engine, and Microsoft Windows Azure and took them out for a spin. The good news is that many of the promises have been fulfilled. If you click the right buttons and fill out the right Web forms, you can have root on a machine in a few minutes, sometimes even faster. All of them make it dead simple to get the basic goods: a Linux distro running what you need.

At first glance, the options seem close to identical. You can choose from many of the same distributions, and from a wide range of machine configuration options. But if you start poking around, you’ll find differences — including differences in performance and cost. The machines may seem like commodities, but they’re not. This became more and more evident once the machines started churning through my benchmarks.

Fast cloud, slow cloud
I tested small, medium, and large machine instances on Amazon EC2, Google Compute Engine, and Microsoft Windows Azure using the open source DaCapo benchmarks, a collection of 14 common Java programs bundled into one easy-to-start JAR. It’s a diverse set of real-world applications that exercises a machine in a variety of ways. Some of the tests stress the CPU, others stress RAM, and still others stress both. Some take advantage of multiple threads. No machine configuration will be ideal for all of them.

Some of the benchmarks in the collection will be very familiar to server users. The Tomcat test, for instance, starts up the popular Web server and asks it to assemble some Web pages. The Luindex and Lusearch tests will put Lucene, the common indexing and search tool, through its paces. Another test, Avrora, will simulate some microcontrollers. Although this task may be useful only for chip designers, it still tests the raw CPU capacity of the machine.

I ran the 14 DaCapo tests on three different Linux machine configurations on each cloud, using the default JVM. The instances aren’t perfect “apples to apples” matches, but they are roughly comparable in terms of size and price. The configurations and cost per hour are broken out in the table below.

I gathered two sets of numbers for each machine. The first set shows the amount of time the instance took to run the benchmark from a dead stop. It fired up the JVM, loaded the code, and started to work. This isn’t a bad simulation because many servers start up Java code from command lines in scripts.

To add another dimension, the second set reports the times using the “converge” option. This runs the benchmark repeatedly until consistent results appear. This sometimes happens after just a few runs, but in a few cases, the results failed to converge after 20 iterations. This option often resulted in dramatically faster times, but sometimes it only produced marginally faster times.
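
The “converge” procedure amounts to a simple warm-up loop: run the workload repeatedly until back-to-back timings agree, or give up after 20 iterations. Here is a minimal sketch of that logic; the function names and the 3 percent agreement threshold are my own assumptions, not DaCapo’s actual harness code.

```python
def converge(measure, threshold=0.03, max_iters=20):
    """Call measure() -- which runs the workload once and returns its
    wall-clock time -- until two successive timings agree to within
    `threshold` (relative), or give up after `max_iters` runs."""
    timings = []
    for _ in range(max_iters):
        timings.append(measure())
        if len(timings) >= 2:
            prev, curr = timings[-2], timings[-1]
            if abs(curr - prev) / prev < threshold:
                return True, timings    # settled: JIT and caches warmed up
    return False, timings               # never settled, as some cloud runs didn't

# Simulated timings for a JVM-style warm-up that settles on the fourth run
samples = iter([10.0, 8.0, 6.0, 5.9])
converged, history = converge(lambda: next(samples))
```

This mirrors why converged times are usually much faster than standing starts: the JVM’s just-in-time compiler and the machine’s caches only pay off after a few repetitions.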

The results (see charts and tables below) may look like a mind-numbing sea of numbers, but a few patterns stood out:

Google was the fastest overall. The three Google instances completed the benchmarks in a total of 575 seconds, compared with 719 seconds for Amazon and 834 seconds for Windows Azure. A Google machine had the fastest time in 13 of the 14 tests. A Windows Azure machine had the fastest time in only one of the benchmarks. Amazon was never the fastest.
Google was also the cheapest overall, though Windows Azure was close behind. Executing the DaCapo suite on the trio of machines cost 3.78 cents on Google, 3.8 cents on Windows Azure, and 5 cents on Amazon. A Google machine was the cheapest option in eight of the 14 tests. A Windows Azure instance was cheapest in five tests. An Amazon machine was the cheapest in only one of the tests.

The best option for misers was Windows Azure’s Small VM (one CPU, 6 cents per hour), which completed the benchmarks at a cost of 0.67 cents. However, this was also one of the slowest options, taking 404 seconds to complete the suite. The next cheapest option, Google’s n1-highcpu-2 instance (two CPUs, 13.1 cents per hour), completed the benchmarks in half the time (193 seconds) at a cost of 0.70 cents.

If you cared more about speed than money, Google’s n1-standard-8 machine (eight CPUs, 82.9 cents per hour) was the best option. It turned in the fastest time in 11 of the 14 benchmarks, completing the entire DaCapo suite in 101 seconds at a cost of 2.32 cents. The closest rival, Amazon’s m3.2xlarge instance (eight CPUs, $0.90 per hour), completed the suite in 118 seconds at a cost of 2.96 cents.
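
These cost figures are just the hourly rate pro-rated over the suite’s runtime. The arithmetic, using the prices and times quoted above (this pro-rated figure is a comparison metric only; actual billing granularity varies by provider):

```python
def run_cost_cents(cents_per_hour, runtime_seconds):
    """Pro-rated cost of one benchmark run at an hourly rate."""
    return cents_per_hour * runtime_seconds / 3600.0

azure_small  = run_cost_cents(6.0, 404)    # Azure Small VM:       ~0.67 cents
google_hcpu2 = run_cost_cents(13.1, 193)   # Google n1-highcpu-2:  ~0.70 cents
google_std8  = run_cost_cents(82.9, 101)   # Google n1-standard-8: ~2.3 cents
```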

Amazon was rarely a bargain. Amazon’s m1.medium (one CPU, 10.4 cents per hour) was both the slowest and the most expensive of the one-CPU instances. Amazon’s m3.2xlarge (eight CPUs, 90 cents per hour) was the second fastest instance overall, but also the most expensive. However, Amazon’s c3.large (two CPUs, 15 cents per hour) was truly competitive — nearly as fast overall as Google’s two-CPU instance, and faster and cheaper than Windows Azure’s two-CPU machine.

These general observations, which I drew from the “standing start” tests, are also borne out by the results of the “converged” runs. But a close look at the individual numbers will leave you wondering about consistency.

Some of this may be due to the randomness hidden in the cloud. While the companies make it seem like you’re renting a real machine that sits in a box in some secret, undisclosed bunker, the reality is that you’re probably getting assigned a thin slice of a box. You’re sharing the machine, and that means the other users may or may not affect you. Or maybe it’s the hypervisor that’s behaving differently. It’s hard to know. Your speed can change from minute to minute and from machine to machine, something that usually doesn’t happen with the server boxes rolling off the assembly line.

So while there seem to be clear performance differences among the cloud machines, your results could vary. These patterns also emerged:

Bigger, more expensive machines can be slower. You can pay more and get worse performance. The three Windows Azure machines started with one, two, and eight CPUs and cost 6, 12, and 48 cents per hour, but the more expensive they were, the slower they ran the Avrora test. The same pattern appeared with Google’s one-CPU and two-CPU machines.
Sometimes bigger pays off. The same Windows Azure machines that ran the Avrora jobs slower sped through the Eclipse benchmark. On the first runs, the eight-CPU machine was more than twice as fast as the one-CPU machine.

Comparisons can be troublesome. The results table has some holes produced when a particular test failed, and some of these are easy to explain. The Windows Azure machines lacked a codec the Batik tests require; it isn’t included with the default version of Java there. I probably could have fixed that with a bit of work, but the machines from Amazon and Google didn’t need it. (Note: Because Azure balked at the Batik test, the comparative times and costs cited above omit the Batik results for Amazon and Google.)
Other failures seemed odd. The Tradesoap routine would occasionally generate an exception, probably caused by some network failure deep in the OS layer. Or maybe it was something else; the same test would run successfully under different circumstances.

Adding more CPUs often isn’t worth the cost. While Windows Azure’s eight-CPU machine was often dramatically faster than its one-CPU machine, it was rarely anything close to eight times faster — disappointing given that it costs eight times as much. This was true even on the tests that recognize multiple CPUs and set up multiple threads. In most of the tests, the eight-CPU machine was just two to four times faster. The one test that stood out was the Sunflow raytracing test, which was able to use all of the compute power given to it.
The CPU numbers don’t always tell the story. While the companies usually double the price when you get a machine with two CPUs and multiply by eight when you get eight, you can often save money by choosing a machine with less RAM per CPU. But if you skimp on RAM, don’t expect performance to double along with the CPU count. The Google two-CPU machine in these tests was a so-called “highcpu” machine with less RAM than the standard machine. It was often slower than the one-CPU machine, and when it was faster, it was often only about 30 percent faster.
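
One way to put numbers on both observations is parallel efficiency: the observed speedup divided by the factor increase in CPUs, which is usually also the factor increase in price. A small sketch, with hypothetical runtimes:

```python
def parallel_efficiency(t_small, t_big, cpu_ratio):
    """Observed speedup from the bigger machine, normalized by how many
    times more CPUs (and, typically, dollars) it has.
    1.0 means perfect scaling; these benchmarks rarely got close."""
    return (t_small / t_big) / cpu_ratio

# Hypothetical: an 8x-price machine finishing a test "only" 3x faster
eff = parallel_efficiency(t_small=240.0, t_big=80.0, cpu_ratio=8)  # 0.375
```

Anything much below 1.0 means you are paying for cores your workload can’t use.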

Thread count can also be misleading. While the performance of the Windows Azure machines on the Sunflow benchmark tracks the number of threads, the same can’t be said for the Amazon and Google machines. Amazon’s two-CPU instance often went more than twice as fast as the one-CPU machine; on one test, it was almost three times faster. Google’s two-CPU machine, on the other hand, went only 20 to 25 percent faster on Sunflow.

The pricing table can be a good indicator of performance. Google’s n1-highcpu-2 machine is only about 30 percent more expensive than the n1-standard-1 even though it offers twice the theoretical CPU power. Google probably used performance benchmarks of its own to arrive at these prices.

Burst effects can distort behavior. Some of the cloud machines will speed up for short “bursts,” essentially a free gift of extra cycles that happen to be lying around. If the cloud providers can offer you a temporary speed-up, they often do. But beware: the gift will appear and disappear in odd ways, so some of these results may be faster because the machine happened to be bursting.
The bursting behavior varies. On the Amazon and Google machines, the Eclipse benchmark would speed up by a factor of more than three when using the “converge” option of the benchmark. Windows Azure’s eight-CPU machine, on the other hand, wouldn’t even double.

If all of these factors leave you confused, you’re not alone. I tested only a small fraction of the configurations available from each cloud and found that performance was only partially related to the amount of compute power I was renting. The big differences in performance on the different benchmarks means that the different platforms could run your code at radically different speeds. In the past, my tests have shown that cloud performance can vary at different times or days of the week.

This test matrix may be large, but it doesn’t even come close to exploring the different variations that the different platforms can offer. All of the companies are offering multiple combinations of CPUs and RAM and storage. These can have subtle and not-so-subtle effects on performance. At best, these tests can only expose some of the ways that performance varies.

This means that if you’re interested in getting the best performance for the lowest price, your only solution is to create your own benchmarks and test out the platforms. You’ll need to decide which options are delivering the computation you need at the best price.

Calculating cloud costs
Working with the matrix of prices for the cloud machines is surprisingly complex given that one of the selling points of the clouds is the ease of purchase. You’re not buying machines, real estate, air conditioners, and whatnot. You’re just renting a machine by the hour. But even when you look at the price lists, you can’t simply choose the cheapest machine and feel secure in your decision.

The tricky issue for the bean counters is that the performance observed in the benchmarks rarely increased with the price. If you’re intent upon getting the most computation cycles for your dollar, you’ll need to do the math yourself.

The simplest option is Windows Azure, which sells machines in sizes that range from extra small to extra large. The amount of CPU power and RAM generally increase in lockstep, roughly doubling at each step up the size chart. Microsoft also offers a few loaded machines with an extra large amount of RAM included. The smallest machines with 768MB of RAM start at 2 cents per hour, and the biggest machines with 56GB of RAM can top off at $1.60 per hour. The Windows Azure pricing calculator makes it straightforward.

One of the interesting details is that Microsoft charges more for a machine running Microsoft’s operating system. While Windows Azure sometimes sold Linux instances for the same price, at this writing, it’s charging exactly 50 percent more if the machine runs Windows. The marketing department probably went back and forth trying to decide whether to price Windows as if it’s an equal or a premium product before deciding that, duh, of course Windows is a premium. 

Google also follows the same basic mechanism of doubling the size of the machine and then doubling the price. The standard machines start at 10.4 cents per hour for one CPU and 3.75GB of RAM and then double in capacity and price until they reach $1.66 per hour for 16 CPUs and 60GB of RAM. Google also offers options with higher and lower amounts of RAM per CPU, and the prices move along a different scale.
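
That doubling pattern makes the standard tier easy to sanity-check: the price scales linearly with CPU count, from the quoted 10.4 cents per hour for one CPU up to the quoted $1.66 per hour for 16.

```python
one_cpu = 0.104              # $/hour, n1-standard-1 (1 CPU, 3.75GB RAM)
sixteen_cpu = one_cpu * 16   # four doublings up the size chart
# 0.104 * 16 = 1.664, matching the quoted $1.66/hour for 16 CPUs and 60GB
```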

The most interesting options come from Amazon, which has an even larger number of machines and a larger set of complex pricing options. Amazon charges roughly double for twice as much RAM and CPU capacity, but it also varies the price based upon the amount of disk storage. The newest machines include SSD options, but the older instances without flash storage are still available.

Amazon also offers the chance to create “reserved instances” by pre-purchasing some of the CPU capacity for one or three years. If you do, the machines sport lower per-hour prices: you’re locking in some of the capacity but keeping the freedom to turn the machines on and off as you need them. It pays to estimate how heavily you’ll use Amazon’s cloud over the next few years, because committing in advance is how you save the money.
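
Whether a reserved instance pays off is a break-even calculation on the upfront fee. The rates below are hypothetical, chosen only to show the shape of the math; consult Amazon’s current price sheet for real numbers.

```python
def breakeven_hours(upfront, ondemand_hourly, reserved_hourly):
    """Hours of usage at which the reserved instance's upfront fee is
    repaid by its lower hourly rate."""
    return upfront / (ondemand_hourly - reserved_hourly)

# Hypothetical: $300 upfront buys a $0.10/hr rate vs. $0.20/hr on demand
hours = breakeven_hours(300.0, 0.20, 0.10)  # 3000 hours, ~4 months of 24/7 use
```

Below the break-even point, on-demand is cheaper; above it, the reservation wins.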

In an effort to simplify things, Google created the GCEU (Google Compute Engine Unit) to measure CPU power and “chose 2.75 GCEUs to represent the minimum power of one logical core (a hardware hyper-thread) on our Sandy Bridge platform.” Similarly, Amazon measures its machines with Elastic Compute Units, or ECUs. Its big fat eight-CPU machine, known as the m3.2xlarge, is rated at 26 ECUs while the basic one-core version, the m3.medium, is rated at three ECUs. That’s a difference of more than a factor of eight.

This is a laudable effort to bring some light to the subject, but the benchmark performance doesn’t track the GCEUs or ECUs too closely. RAM is often a big part of the equation that’s overlooked, and the algorithms can’t always use all of the CPU cores they’re given. Amazon’s m3.2xlarge machine, for instance, was often only two to four times faster than the m3.medium, although it did get close to being eight times faster on a few of the benchmarks.
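
The mismatch is easy to quantify: Amazon’s own ratings imply nearly a factor-of-nine gap between the two machines, while the benchmarks mostly saw two to four times.

```python
ecu_ratio = 26 / 3          # m3.2xlarge (26 ECUs) vs. m3.medium (3 ECUs)
# Amazon's ratings suggest roughly an 8.7x gap ...
observed_low, observed_high = 2, 4   # ... but most tests saw only 2x-4x
```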

Caveat cloudster
The good news is that the cloud computing business is competitive and efficient. You put in your credit card number, and a server pops out. If you’re just looking for a machine and don’t have hard and fast performance numbers in mind, you can’t go wrong with any of these providers.

Is one cheaper or faster? The accompanying tables show the fastest and cheapest results in green and the slowest and priciest results in red. There’s plenty of green in Google’s table and plenty of red in Amazon’s. Depending on how much you emphasize cost, the winners shift. Microsoft’s Windows Azure machines start running green when you take the cost into account.

The freaky thing is that these results are far from consistent, even within the same architecture. Some of Microsoft’s machines post green numbers and red numbers for the same instance. Google’s one-CPU machine is full of green but runs red on the Tradesoap test. Is this a problem with the test or with Google’s handling of it? Who knows? Google’s two-CPU machine is slowest on the Fop test — and Google’s one-CPU machine is fastest. Go figure.

All of these results mean that doing your own testing is crucial. If you’re intent on squeezing the most performance out of your nickel, you’ll have to do some comparison testing and be ready to crunch some numbers. The performance varies, and the price is only roughly correlated with usable power. For a number of tasks, it would simply be a waste of money to buy a fancier machine with extra cores, because your algorithm can’t use them. If you don’t test these things, you could be wasting your budget.

It’s also important to recognize that there can be quite a bit of markup hidden in these prices. For comparison, I also ran the benchmarks on a basic eight-core (AMD FX-8350) machine with 16GB of RAM on my desk. It was generally faster than Windows Azure’s eight-core machine, just a bit slower than Google’s eight-core machine, and about the same speed as Amazon’s eight-core box. Yet the price was markedly different. The desktop machine cost about $600, and you should be able to put together a server in the same ballpark. The Google machine costs 82 cents per hour or about $610 for a 31-day month. You could start saving money after the first month if you build the machine yourself.
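
The break-even arithmetic behind that claim is straightforward:

```python
cloud_hourly = 0.82                   # Google 8-core machine, $/hour, as cited
month_24x7 = cloud_hourly * 24 * 31   # ~$610 for a 31-day month of 24/7 use
desktop_price = 600.0                 # comparable 8-core desktop, one-time cost
hours_to_match = desktop_price / cloud_hourly   # ~732 hours, about 30.5 days
```

Run the instance around the clock and the desktop pays for itself in roughly a month; run it a few hours a day and the cloud’s markup takes far longer to bite.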

The price of the machine, though, is just part of the equation. Hosting the computer costs money, or more to the point, hosting lots of computers costs lots of money. The cloud services will be most attractive to companies that need big blocks of compute power for short sessions. If they pay by the hour and run the machines for only a short block of time, they can cut the costs dramatically. If your workload appears in short bursts, the markup isn’t a problem because any machine you own will just sit there most of the day waiting, wasting cycles and driving up the air conditioning bills.

All of these facts make choosing a cloud service dramatically more complicated and difficult than it might appear. The marketing is glossy and the imagery makes it all look comfy, but hidden underneath is plenty of complexity. The only way you can tell if you’re getting what you’re paying for is to test and test some more. Only then can you make a decision about whether the light, airy simplicity of a cloud machine is for you.


MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft MCITP Training at certkingdom.com

 

Unless you’re lucky enough to live in Kansas City, Provo or Austin

When Google announced plans in 2010 to jump into the broadband business, the company received more than 1,000 applications from communities hoping to be selected for Google Fiber, which promised gigabit-speed Internet at low prices or even free Internet for seven years if you chose a slower speed.

As we head into 2014, Google has delivered super-fast Internet to exactly one place, greater Kansas City; it’s just now rolling out the service to Provo, Utah — where it purchased a pre-existing municipal network for $1; and has announced plans for Austin, Texas, in 2014.

After that, who knows? Google has not released any further scheduling information.

But if you’re Verizon, Comcast or AT&T, you might be breathing a little easier these days, knowing that Google apparently is not planning to buy up all that unused dark fiber and compete in the residential broadband market on a nationwide scale — at least for now.

There has always been speculation about Google’s motives, and, Google being Google, answers have been hard to come by. Is this just an experiment? Another attention-grabbing sideshow, like those mysterious barges floating in San Francisco Bay and Portland, Maine? Is Google trying to compete head-to-head against the incumbents? Or is Google trying to nudge the incumbents to step up their broadband game by introducing the specter of competition? After all, faster Internet means Google can deliver more ads to more end users, which is how the company makes its money.

As Google spokesperson Jenna Wandres puts it: “The simple answer to ‘why’ is this: it’s for Google users. They keep telling us that they’re tired of waiting for incredibly slow upload and download speeds that often take hours to just transfer an album of photos from one location to another.”

According to Wandres, it’s all about speed. She pointed out that Google developed the Chrome browser to make the Internet experience faster, but a browser can only be as fast as the Internet connection and the hardware and networks behind it. So now Google is installing its own fiber to remove that bottleneck.

“For the next big leap,” says Wandres, “Gigabit speeds will bring new apps and talented developers to the table, who can and will take advantage of these remarkable speeds.” She explains that organizations such as Kansas City Startup Village (KCSV) — an ecosystem of grassroots individuals working together to create an entrepreneur community — thrive in this type of environment; that is, an area where high-speed Internet allows developers to collaborate and share ideas.

Competition is good news
According to Forrester analyst Dan Bieler, Google Fiber “is good news because competition increases the pressure on carriers and cable providers to bring true broadband service to more households and businesses, if they want to compete effectively with Google. In my view, it is unlikely that Google Fiber will target rural areas, but it’s clearly an interesting option for Google to target higher-income urban areas as well as central business districts.”

“Competition is the main driver for improved services, and this will continue to be the case,” adds Ian Keene, research vice president at Gartner. “But Google has discovered that rolling out its services is taking longer than they first thought. If they carry on at this pace, they will not be a threat beyond a handful of cities; not for the foreseeable future, anyway. However, where they are active, we will and have seen the competition fight back with improved subscriber offers.”

For example, after Google announced plans to deliver gigabit Internet to Austin, AT&T announced plans to up its game there. AT&T has promised to provide ultra high-speed gigabit Internet (called GigaPower) to its Austin users in December, with initial symmetrical speeds of up to 300Mbps and an upgrade to 1Gbps by mid-2014 (at no extra cost, of course).

But it’s still too early to tell whether Google’s efforts will prove to be economically feasible, or whether Google will continue to expand beyond the three locations already identified. “Google, like many others, has learned that the enormity of the costs involved in building broadband infrastructure creates a dilemma,” says telecom analyst Craig Moffett. “It is extraordinarily difficult to earn a reasonable return on building an infrastructure to compete with cable. Verizon tried with Verizon FiOS and, after reaching only 14 percent of the country, eventually conceded that further expansion was just not economically justified.”

Moffett explains that at least Google is giving it the old college try; but the markets they have chosen, so far, are all unique cases. “For example,” he says, “In Provo, they’re building on a network that was already there. In Austin, we’ll get a better sense of what the economics might actually look like. At this point, I think it is reasonable to conclude that fiber-to-the-home deployments like these will remain the exception rather than the rule.”

How it works
With more than 1,100 applicants, Google could choose the communities that offered the most advantageous terms and conditions. These installations require access to utility poles, roads, and even substations in order to lay the fiber networks, so applicants had to be willing to expedite that process.

In the case of Kansas City, Google only extends fiber to neighborhoods with a certain number of pre-registered customers.

According to Wandres, locations must be fiber friendly, technological leaders, and residents must show a genuine willingness to work with Google; that is, to be flexible, move quickly, and cut through the red tape.

“It’s a long process and requires a lot of work,” says Wandres. “There must be a strong demand for fiber among the user base (for those who are excited about a technological hub) and for entrepreneurs who can advance the technology. In Kansas City, the Mayors’ Bi-state Innovation Team came up with a playbook for how Kansas City could benefit from fiber. And there’s another group now tasked with following through on those plans.”

In Kansas City, subscribers can get gigabit Internet for $70 a month or a bundle of gigabit service plus TV (200 channels, HD included) for $120 a month. Both options include free installation plus all the equipment necessary for the service to function, such as the network gear, the storage device, and the TV box. Additional benefits include 1TB of storage shared across Gmail, Google Drive, and Google+ Photos and, with the bundle, a Nexus 7 tablet.

Kansas City residents who want Internet access, but may not classify themselves as power users, can get Google’s free Internet service, which runs at 5Mbps. The free service does require a one-time installation fee of $300 (or $25 a month for 12 months), then the service is free for at least seven years.
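
The installment option is simply the same fee divided into twelve payments, with no financing markup:

```python
one_time_fee = 300      # dollars, paid up front for the free 5Mbps tier
monthly_plan = 25 * 12  # $25/month for 12 months
# identical totals: the monthly plan just spreads the fee, no interest charged
```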

Wandres adds, “At the end of seven years, we will begin charging the market price for comparable speeds — which should be $0, as long as Internet speeds increase as much as we hope over the next few years. In other words, we think that in seven years, Internet speeds should be ubiquitously faster in America and, by that point, nobody should have to pay for a connection speed that is 5Mbps download/1Mbps upload.”

Brittain Kovač, co-leader and communications pilot at KCSV, says, “With regards to speed, nobody has been able to break the gig. We’ve tried. Downloading tons of files while gaming and running multiple videos simultaneously and we still barely see a dent. What companies are experiencing is an extreme amount of time savings; for example, www.sportsphotos.com, a company that moved to the KCSV from Springfield, Mo., is now able to upload thousands of high resolution photos in a matter of hours; a project that in the past, took days, if not weeks to accomplish.”

“In addition,” says Kovač, “Google fiber has been the catalyst that’s brought the community together in ways that may have never happened, or certainly would have taken years to see the outcomes. It’s bringing like-minded people who want to innovate and collaborate, who know we (KC) have a short window of time to do something big, and we’re really leveraging this opportunity to do great things for the community as a whole. From households to startups, corporate and civic, we’re all working together for the first time in years and it’s exciting.”

Based on the Google Fiber city map, the Kansas City project is still in progress. Thirteen more cities in Kansas and six more in Missouri are scheduled next for the service.

Next up, Provo, Utah
The situation in Provo is somewhat different because Google purchased the existing iProvo city network for $1. Google didn’t have to start from scratch; it just needed to upgrade the existing network, which was built in 2006.

In a recent blog post, Provo Mayor John Curtis said, “Unfortunately, while we’ve had the desire, we haven’t had the technical know-how to operate a viable high-speed fiber optic network for Provo residents. So, I started looking for a private buyer for the iProvo network. We issued a Request for Qualifications and a Request for Proposal and even hired a private consultant to guide our efforts. [And now] under the agreement, Google Fiber is committed to helping Provo realize the original vision.”

Provo’s pricing is the same as Kansas City’s: $70 a month for gigabit Internet or $120 for the Internet/TV bundle. The difference is that everyone in Provo pays a $30 installation fee, not just the users who sign up for the free 5Mbps/1Mbps service. And, as in Kansas City, the free service is guaranteed free for seven years (or longer, depending on the market price for comparable speeds after that).



Three of the four demonstrators in the wearable technology session employed Google Glass, even though no one knows when it will be commercially available.

Despite the uncertainty regarding when Google Glass will be made available to the public, entrepreneurs are betting their livelihoods on the head-worn device.

Three of the four demonstrators that made up the wearable technology session at the DEMO Fall 2013 conference showed technology built on Glass.

Pristine joined the growing market for Glass in the healthcare field, introducing a streaming video solution that allowed remote users to watch and interact via audio during healthcare procedures. Surgery was the example given on-stage, and the technology allowed for a remote user to call in, watch the surgery in real time, and consult the surgeon as he conducted it.

The company claimed the technology is HIPAA-compliant, and it aims to replace expensive, unwieldy solutions that enable streaming video for medical procedures. The cost for current technology can run into the tens of thousands of dollars, whereas Google has suggested a $300 to $500 price range for Glass when it becomes available to consumers.

Aside from the uncertainty around Glass’ release date, Pristine CEO Kyle Samani told panelists that the company has had trouble finding real-life settings in which it can test the technology. Testing in a hospital setting would require collaboration from across the facility – from physicians to patients to IT.

Fortunately, Pristine may have plenty of opportunities to find testing environments as Google bides its time developing the technology.

Another demonstrator, GlassPay, may also benefit from extra development time. Its Glass-based app allows shoppers to make purchases by scanning barcodes on products.

The only catch, so far, is that GlassPay accepts payment only in the digital currency Bitcoin. Although the demonstration functioned as intended, one noticeable flaw was that the display showed retail prices only in Bitcoin. So a set of towels was shown to retail for 0.1 BTC, leaving the user to calculate the equivalent in USD. Not only will users need a Bitcoin wallet to use GlassPay, they’ll need to keep up with the exchange rate on their own if they want to know the dollar value of their purchases.
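
The conversion GlassPay’s display leaves to the user is a one-line calculation once you have an exchange rate. The rate below is hypothetical; a real app would fetch the current rate from an exchange API.

```python
def btc_to_usd(amount_btc, usd_per_btc):
    """Convert a Bitcoin-denominated price to dollars at a given rate."""
    return amount_btc * usd_per_btc

# At a hypothetical $500/BTC, the 0.1 BTC towels would display as $50
price_usd = btc_to_usd(0.1, 500.0)
```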

Later in the GlassPay demonstration, though, GlassPay CEO Guy Paddock explained that the app is currently limited to Bitcoin because it’s much easier to make quick online payments with Bitcoin than with cash. He expressed interest in integrating Google Wallet later on, which would open up a larger market.

In retrospect, the Bitcoin integration worked to GlassPay’s advantage, for the purposes of DEMO at least. The company was given only a four-minute window in which to demonstrate its product, and Bitcoin involved much less risk of a payment-verification hiccup on-stage. Paddock also noted that a GlassPay app is already available for Android devices, but Google Glass attracts far more publicity than smartphone payment apps, landing GlassPay in the highly publicized wearable technology category.

Another Glass-based app, People+, relies more heavily on the Glass hardware itself. Calling the product a combination of LinkedIn and Wikipedia, the demonstrators showed how People+ can browse through information on a given person, drawn from multiple online sources, while the Glass camera is focused on him.

It’s an early iteration of an app that seems perfect for facial recognition technology, depending on how well that might work on Glass in the future. When that might happen, and how such technology will be received by the general public, remains to be seen.

The one demonstrator in the wearable technology session that didn’t employ Google Glass may have had the most impressive demonstration. Skully showed off its high-tech motorcycle helmet, which is equipped with a heads-up display that shows GPS navigation and playback from a rear-view camera, plus voice commands for phone calls and music control.

The rear-view camera may have been the most impressive feature, giving a panoramic view of everything behind the motorcyclist and eliminating the need to check blind spots. The camera’s software recognizes the horizon behind the driver, so the road is always in view, and flattens the video playback to preserve a sense of depth.

Unlike Google Glass, the Skully helmet has an estimated release date of spring 2014, and the company is preparing an SDK on which developers can build their own apps for the device.

The only potential competition may come when Glass finally ships and users can simply wear it underneath a traditional motorcycle helmet. Fortunately for Skully, that likely won’t be an issue for a few years.


MCTS Certification, MCITP Certification

Microsoft MCTS Certification, MCITP Certification and over 3000+
Exams with Life Time Access Membership at http://www.actualkey.com

As the first device designed after Google’s acquisition of Motorola, the Moto X is a good combination of both companies’ services.

Moto X is the first completely new smartphone project launched after Google acquired Motorola Mobility, and as such it fully integrates the technology assets of both companies. It is a carefully designed, customizable mass-market consumer device with a great deal of embedded Google technology: speech recognition, contextual awareness, and personalized search. It’s available in 18 colors with seven accent colors, and its specifications are adequate for a high-end smartphone, meeting or exceeding most of the iPhone 5’s.

At the announcement in New York yesterday, Motorola Senior VP of Product Management Rick Osterloh introduced the Moto X with a personal demonstration. Rather than one big Apple or Samsung-like announcement with hundreds of people, Motorola held four personalized sessions for approximately 50 journalists at a time, allowing interactive questions.

Osterloh led with “Touchless Control.” Motorola adapted Google Now to use a proprietary always-on speech recognition function. It’s based on the Motorola X8 Computing System, which combines a standard Qualcomm Snapdragon S4 Pro dual-core CPU and quad-core GPU with two proprietary cores, one for natural language processing and the other for contextual computing.

The Moto X uses the natural language processor to monitor nearby sound sources at low power for the phrase “OK Google Now.” When the phrase is detected, the smartphone wakes from its low-power state and turns the speech stream over to Google Now for recognition and a response through Google services, such as search and navigation. Osterloh said the Moto X is not listening to every word; it’s listening only for the signature of “OK Google Now” to awaken the smartphone. If Google Now’s speech recognition constantly monitored for this cue on ordinary hardware, the battery would quickly drain.

The user can train the Moto X to recognize his or her voice. It’s not completely foolproof: someone with a similar voice can prompt the Moto X to awaken, as an attendee at the event showed by shouting “OK Google Now” and briefly taking control of the device. The user can add a password or PIN to protect the device from unauthorized access, and a Bluetooth device, such as an in-car hands-free system, can be configured as a trusted command device, eliminating the need for password or code entry. Touchless Control was demonstrated working at cocktail-party levels of ambient noise and at distances of eight to ten feet.
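The two-stage design Osterloh describes, a cheap always-on check that gates the expensive recognizer, can be sketched in a few lines. Everything below is illustrative: the function names and the simple string match stand in for the proprietary X8 hardware pipeline and Google Now’s actual recognizer.

```python
# Toy sketch of two-stage wake-word gating: a low-cost check runs on
# every audio frame, and only a detected trigger hands the stream to
# the expensive recognizer. All names here are hypothetical.

TRIGGER = "ok google now"

def cheap_trigger_check(frame: str) -> bool:
    """Stand-in for the low-power core: match only the wake phrase."""
    return frame.strip().lower() == TRIGGER

def full_recognizer(frame: str) -> str:
    """Stand-in for handing the speech stream to the full pipeline."""
    return f"recognized: {frame}"

def listen(frames):
    """Ignore everything until the trigger, then recognize what follows."""
    awake = False
    results = []
    for frame in frames:
        if not awake:
            awake = cheap_trigger_check(frame)   # low-power path
        else:
            results.append(full_recognizer(frame))  # expensive path
            awake = False                        # back to low power
    return results

print(listen(["chatter", "OK Google Now", "navigate home", "chatter"]))
# ['recognized: navigate home']
```

The point of the split is that the per-frame check is trivially cheap, so the battery cost of always-on listening stays near zero until the trigger actually fires.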

Motorola’s researchers learned that the average person activates his or her smartphone 60 times a day, to check the time or respond to notifications. The Moto X uses the contextual processor to operate its “Active Display” to present time of day, missed calls, and notifications at low power without taking the smartphone out of sleep mode. Only a minimum number of pixels are illuminated, saving power by leaving the rest of the OLED display dark. The contextual processor recognizes if the smartphone is face down or in a pocket and does not illuminate the Active Display.

The 10-megapixel camera has three improvements. A twist of the wrist launches the camera without entering a password or PIN. The UI is simplified, moving most camera controls to a panel exposed with a left-to-right swipe; a photo can be taken by touching any part of the screen, replacing the small blue shutter icon that demanded concentrated fine motor control. And the camera is easier to focus and produces better images, thanks to an RGBC sensor that captures up to 75 percent more light.

Most interesting is the user customization. The image at the beginning of this report gives one a sense of the many choices the consumer has to personalize the Moto X with a color scheme. The consumer can choose from two bezel colors, 18 back-plate covers, and seven accent colors, for a total of 252 unique combinations. The user can also add personalized text to the back of the Moto X, such as a name or email address that a good Samaritan might use to contact the owner if the smartphone is lost.
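The 252 figure is simple combinatorics: each choice is independent, so the counts multiply.

```python
# Motorola's customization options are independent choices,
# so the total number of combinations is their product.
bezel_colors = 2
back_plate_covers = 18
accent_colors = 7

combinations = bezel_colors * back_plate_covers * accent_colors
print(combinations)  # 252
```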

Motorola has created a web service called “Moto Maker” that consumers use to visually sample and choose colors, accents, and personalized text inscriptions. The suggested price is $199 with a carrier contract. Buyers can visit a carrier, purchase the Moto X at the contract price, and receive a voucher with a PIN to enter into the Moto Maker web service to order their customized phone. Motorola said it has organized its supply chain to assemble the Moto X in Fort Worth, Texas, with a four-day turnaround from order to shipment. Consumers can also use Moto Maker to purchase directly from Motorola online.

Recognizing speech, understanding its meaning, and executing specific commands are priorities for Google. To that end, Google recently hired artificial intelligence expert Ray Kurzweil to lead engineering advances in speech technologies. Motorola may be pushing present-day speech technology to its limits: Moto X’s Touchless Control appears to be at least an incremental improvement over Google Now and Apple’s Siri. Even if the improvement in speech is not large, the combination of Touchless Control, Active Display, colorful customizability, and the buying experience should drive consumer adoption. Google takes risks and innovates at a scale of millions and billions of users. Whether the Moto X achieves Google scale remains to be seen.



Microsoft spoofs Google’s minimalist search site, Google knocks Outlook.com with ‘Gmail Blue’

Microsoft today took another shot at rival Google, the target of its “Scroogled” campaign, with an April Fools’ Day prank that turned its Bing search engine into a Google look-alike.

Dubbed “Bing Basic” in an April 1 blog post that claimed it was a special test, the prank kicks off “if you visit bing.com and enter a certain telltale query,” which then results in “something a little more bland.”

From Bing.com, users simply enter “Google” to see a temporary home page that looks very much like Google’s noted minimalist design.

“We decided to go back to basics, to the dawn of the Internet, to reimagine Bing with more of a 1997, dial-up sensibility in mind,” wrote Michael Kroll, principal UX (user experience) manager for Bing, on the blog. “We may see some uptick in our numbers based on this test, but the main goal here is just to learn more about how our world would look if we hadn’t evolved.”
Microsoft’s bogus “Bing Basic” takes a shot at rival Google’s stark search engine UI.

Search Engine Land first reported the “Google” trigger for the Bing Basic hoax.

The revamped Bing Basic screen sports a few differences from Google’s real home page, including a renaming of the latter’s “I’m Feeling Lucky” button to “I’m Feeling Confused.” Clicking on that button in Bing’s imitation leads to Kroll’s blog post.

Microsoft has retained Bing’s hover-links, however, and used them to take additional shots at the competition. Hovering the mouse over one such link displays a pop-up that states, “When there’s nothing else to look at … You may take drastic measures.” Clicking directs the user to a search for “watching paint dry.”

Google’s counter — launched earlier in the day — was both more elaborate and more subtle as it spoofed Microsoft’s Outlook.com email service, the rebrand of Hotmail.com that debuted last July.

Called “Gmail Blue,” the phony product is purportedly a major refresh of Google’s own email service that “Richard Pargo,” supposedly a project manager, says was based on the question, “How do we completely redesign and recreate something while keeping it exactly the same?”

The result? Gmail Blue, with blue fonts, blue lines, blue theme, blue everything.

“It’s Gmail, only bluer,” said Pargo with a straight face in a production-quality video that included a cameo by Blue Man Group.

“We tried orange, brown … brown was a disaster,” said “Dana Popliger,” a faux lead designer. “We tried yellow.”

Some have interpreted Google’s gag as a shot at Windows 8, aimed both at this summer’s upcoming upgrade, code-named “Blue,” and at critics’ take on the new OS, which makes a radical change of user interface in one part while retaining the traditional desktop in the other. But it could also be seen as a jab at Outlook.com, which features a blue theme by default.

“I think the first thought that’s going to come to the end-user’s mind is, ‘I can’t believe I waited this long for this,'” concluded “Carl Branch,” labeled as lead engineer.

Not coincidentally, today was Gmail’s ninth anniversary. Google launched its invitation-only beta of the service on April 1, 2004.

 

