
Failure may lead to success, but unthinking complacency is a certain dev career killer

You’ll find no shortage of motivational phrases surrounding failure: Fail fast, failure builds character, the key to success is failure, mistakes make you grow, never be afraid to fail. But the idea of mistaking your way to the top of the software industry is probably unsound. Every developer will have their share of missteps in a career, but why not learn from others’ experience and avoid the costliest errors?

That’s what we did: We talked with a number of tech pros who helped us identify areas where mistakes are easily avoided. Not surprisingly, the key to a solid dev career involves balance: not staying with one stack or job too long, for example, but also not switching languages and employers so often that you raise red flags.

Here are some of the most notable career traps for engineers — a minefield you can easily avoid while you navigate a tech market that’s constantly changing.

Mistake No. 1: Staying too long

These days it’s rare to have a decades-long run as a developer at one firm. In many ways, it’s a badge of honor, showing your importance to the business or at least your ability to survive and thrive. But those who have built a career at only one company may suddenly find themselves on the wrong end of downsizing or “rightsizing,” depending on the buzzword favored at the time.

“The longer you stay in one position, the more your skills and pay stagnate, and you will get bored and restless.” — Praveen Puri, management consultant

Opinions vary on how long you should stay in one place. Praveen Puri, a management consultant who spent 25 years as a developer and project manager before starting his own firm, isn’t afraid to throw out some numbers.

“The longer you stay in one position, the more your skills and pay stagnate, and you will get bored and restless,” Puri says. “On the other hand, if you switch multiple jobs after less than two years, it sends a red flag. In my own experience, I stayed too long on one job where I worked for 14 years — I should have left after six. I left other positions after an average of four years, which is probably about right.”

Michael Henderson, CTO of Talent Inc., sees two major drawbacks of staying in one place too long. “First, you run the risk of limiting your exposure to new approaches and techniques,” he says, “and secondly, your professional network won’t be as deep or as varied as someone who changes teams or companies.”

Focusing too heavily on the one stack used by your current employer is obviously great for the firm, but maybe not for you.

“It’s a benefit to other employers looking for a very specialized skill set, and every business is different,” says Mehul Amin, senior software engineer at Advanced Systems Concepts. “But this can limit your growth and knowledge in other areas. Obviously staying a few months at each job isn’t a great look for your résumé, but employee turnover is pretty high these days and employers expect younger workers like recent college graduates to move around a bit before staying long-term at a company.”

Mistake No. 2: Job jumping

Let’s look at the flip side: Are you moving around too much? If that’s a concern, you might ask whether you’re really getting what you need from your time at a firm.

“Constant job hopping can be seen as a red flag.” — Hilary Craft, IT branch manager, Addison Group

Charles Edge, director of professional services at Apple device management company JAMF Software, says hiring managers may balk if they’re looking to place someone for a long time: “Conversely, if an organization burns through developers annually, bringing on an employee who has been at one company for 10 years might represent a challenging cultural fit. I spend a lot of time developing my staff, so I want them with me for a long time. Switching jobs can provide exposure to a lot of different techniques and technologies, though.”

Those who move on too quickly may not get to see the entire lifecycle of the project, warns Ben Donohue, VP of engineering at MediaMath.

“The danger is becoming a mercenary, a hired gun, and you miss out on the opportunity to get a sense of ownership over a product and build lasting relationships with people,” Donohue says. “No matter how talented and knowledgeable you are as a technologist, you still need the ability to see things from the perspective of a user, and it takes time in a position to get to know user needs that your software addresses and how they are using your product.”

Hilary Craft, IT branch manager at Addison Group, makes herself plain: “Constant job hopping can be seen as a red flag. Employers hire based on technical skill, dependability, and more often than not, culture fit. Stability and project completion often complement these hiring needs. For contractors, it’s a good rule to complete each project before moving to the next role. Some professionals tend to ‘rate shop’ to earn the highest hourly rate possible, but in turn burn bridges, which won’t pay off in the long run.”

Mistake No. 3: Passing on a promotion

There’s a point in every developer’s life where you wonder: Is this it? If you enjoy coding more than running the show, you might wonder if staying put could stall your career.

“Moving into management should be a cautious, thoughtful decision,” says Talent Inc.’s Henderson. “Management is a career change — not the logical progression of the technical track — and requires a different set of skills. Also, I’ve seen many companies push good technical talent into management because the company thinks it’s a reward for the employee, but it turns out to be a mistake for both the manager and the company.”

“Everyone should be in management at least once in their career if for nothing else than to gain insight into why and how management and companies operate.” — Scott Wilson, product marketing director, Automic

Get to know your own work environment, says management consultant Puri, adding that there’s no one-size-fits-all answer to this one.

“I’ve worked at some places where unhappy managers had no real power, were overloaded with paperwork and meetings, and had to play politics,” Puri says. “In those environments, it would be better to stay in development. Long term, I would recommend that everyone gets into management, because development careers stall out after 20 years, and you will not receive much more compensation.”

Another way of looking at this might be self-preservation. Scott Wilson, product marketing director at Automic, asks the question: “Who will they put in your place? If not you, they may promote the most incompetent or obnoxious employee simply because losing their productivity from the trenches will not be as consequential as losing more qualified employees. Sometimes accepting a promotion can put you — and your colleagues/friends — in control of your workday happiness. Everyone should be in management at least once in their career if for nothing else than to gain insight into why and how management and companies operate.”

Mistake No. 4: Not paying it forward

A less obvious mistake might be staying too focused on your own career track without consideration of the junior developers in your office. Those who pair with young programmers are frequently tapped when a team needs leadership.

“I’ve found that mentoring junior developers has made me better at my job because you learn any subject deeper by teaching it than you do by any other method,” says Automic’s Wilson. “Also, as developers often struggle with interpersonal skills, mentoring provides great opportunities to brush up on those people skills.”

If experience is the best teacher, teaching others will only deepen your knowledge, says JAMF Software’s Edge. That said, he doesn’t hold it against a busy developer if it hasn’t yet happened.

“When senior developers don’t have the time to mentor younger developers, I fully understand. Just don’t say it’s because ‘I’m not good with people.’” — Charles Edge, director of professional services, JAMF Software

“Let’s face it — no development team ever had enough resources to deliver what product management wants them to,” Edge says. “When senior developers don’t have the time to mentor younger developers, I fully understand. Just don’t say it’s because ‘I’m not good with people.’”

Mistake No. 5: Sticking to your stack

Your expertise in one stack may make you invaluable to your current workplace — but is it helping your career? Can it hurt to be too focused on only one stack?

MediaMath’s Donohue doesn’t pull any punches on this one: “Of course it is — there’s no modern software engineering role in which you will use only one technology for the length of your career. If you take a Java developer that has been working in Java for 10 years, and all of a sudden they start working on a JavaScript application, they’ll write it differently than someone with similar years of experience as a Python developer. Each technology that you learn influences your decisions. Some would argue that isn’t a good thing — if you take a Java object-oriented approach to a loosely typed language like JavaScript, you’ll try to make it do things that it isn’t supposed to do.”

It can hurt your trajectory to be too focused on one stack, says Talent Inc.’s Henderson, but maybe for different reasons than you think.

“Every stack will have a different culture and perspective, which ultimately will broaden and expedite your career growth,” Henderson says. “For instance, I find that many C# developers are only aware of the Microsoft ecosystem, when there is a far larger world out there. Java has, arguably, the best ecosystem, and I often find that Java developers make the best C# developers because they have a wider perspective.”

Automic’s Wilson says proficiency — but not mastery — with one stack should be the benchmark before moving on to another.

“It’s time to move on when you are good at the skill, but not necessarily great,” says Wilson. “I’m not advocating mediocrity, just the opposite. I am saying that before you head off to learn a new skill make sure you are good, competent, or above average at that skill before you consider moving on.”

Finally, Talent Inc.’s Henderson offers this warning: “Avoid the expectation trap that each new language is simply the old one with a different syntax. Developers of C# and Java who try to force JavaScript into a classical object-oriented approach have caused much pain.”
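That expectation trap is easier to see in code. As a purely illustrative sketch (the names here are invented, not from any of the interviews), the first half mimics the Java or C# habit of reaching for an inheritance hierarchy, while the second achieves the same behavior with a plain object — the lightweight, duck-typed style JavaScript favors:

```javascript
// Java-style habit: model everything as a class hierarchy, complete with
// an "abstract" base class, even when nothing demands shared state.
class Exporter {
  export(data) { throw new Error("abstract method"); }
}
class JsonExporter extends Exporter {
  export(data) { return JSON.stringify(data); }
}

// Idiomatic JavaScript: any value with an `export` function will do.
// A plain object (or a closure) replaces the whole hierarchy.
const jsonExporter = {
  export: (data) => JSON.stringify(data),
};

console.log(new JsonExporter().export({ a: 1 })); // {"a":1}
console.log(jsonExporter.export({ a: 1 }));       // {"a":1}
```

Neither version is wrong, but carrying the first pattern everywhere — as Henderson warns — adds ceremony the language never asked for.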

Mistake No. 6: Neglecting soft skills

Programmers are typically less outgoing than, say, salespeople. No secret there. But soft skills can be picked up over time, and some of the nuances of building a successful career — like learning from mentors and developing relationships — can go missing until it’s too late.

“Soft skills and conversations with customers can also give a great sense of compassion that will improve how you build. You begin to think about what the customers really need instead of overengineering.” — Ben Donohue, VP of engineering, MediaMath

“It makes for better software when people talk,” says MediaMath’s Donohue. “Soft skills and conversations with customers can also give a great sense of compassion that will improve how you build. You begin to think about what the customers really need instead of overengineering.”

Talent Inc.’s Henderson says your work with other people is a crucial part of developing a successful dev career.

“All human activities are social, and development is no exception,” Henderson says. “I once witnessed an exchange on the Angular mailing list where a novice developer posted some code with questions. Within an hour — and through the help of five people — he had rock-solid idiomatic Angular code, a richer understanding of Angular nuance and pitfalls, and several new contacts. Although the trolls can sometimes cause us to lose faith, the world is full of amazing people who want to help one another.”

Automic’s Wilson says a lack of soft skills is a career killer. When less proficient programmers move ahead, developers who don’t have people skills — or simply aren’t exercising them — are left wondering why. Yet everyone loves bosses, he says, “who demonstrate tact and proficient communication.”

“To improve your soft skills, the Internet, e-courses, friends, and mentors are invaluable resources if … you are humble and remain coachable,” Wilson says. “Besides, we will all reach a point in our career when we will need to lean on relationships for help. If no one is willing to stand in your corner, then you, not they, have a problem, and you need to address it. In my career, I have valued coachable people over uncoachable when I have had to make tough personnel decisions.”

Programming is only one aspect of development, says management consultant Puri. “The big part is being able to communicate and understand business objectives and ideas, between groups of people with varying levels of technical skills. I’ve seen too many IT people who try to communicate too much technical detail when talking with management.”

Mistake No. 7: Failing to develop a career road map

Developing goals and returning to them over time — or, conversely, taking an agile-like, go-with-the-flow approach — both have their proponents.

“I recommend making a list of experiences and skills that you’d like to acquire and use it as a map, updating it at least annually.” –Michael Henderson, CTO, Talent Inc.

“I engineer less for goals and more for systems that allow me to improve rapidly and seize opportunities as they arise,” says Henderson. “That said, I recommend making a list of experiences and skills that you’d like to acquire and use it as a map, updating it at least annually. Knowing where you’ve been is as useful as knowing where you want to go.”

And of course, it may be equally important to know where you don’t want to go.

“Early in my career, I hadn’t learned to say no yet,” says Edge, of JAMF Software. “So I agreed to a project plan that there was no way could be successfully delivered. And I knew it couldn’t. If I had been more assertive, I could have influenced the plan that a bunch of nontechnical people made and saved my then-employer time and money, my co-workers a substantial amount of pain, and ultimately the relationship we had with the customer.”

Automic’s Wilson gives a pep talk straight out of the playbook of University of Alabama’s head football coach Nick Saban, who preaches having faith in your process: “The focus is in following a process of success and using that process as a benchmark to hold yourself accountable. To develop your process, you need to find mentors who have obtained what you wish to obtain. Learn what they did and why they did it, then personalize, tweak, and follow.”


Data science is one of the fastest-growing careers today, and there aren’t enough qualified workers to meet the demand. As a result, boot camps are cropping up to help get workers up to speed quickly on the latest data skills.

Data Scientist is the best job in America, according to data from Glassdoor, which found that the role has a significant number of job openings and that data scientists earn an average salary of more than $116,000. According to its data, the job of data scientist rated a 4.1 out of 5 for career opportunity and a 4.7 for job satisfaction. But as demand for data scientists grows, traditional schools aren’t churning out qualified candidates fast enough to fill the open positions. There’s also no clear path for those who have been in the tech industry for years and want to take advantage of this lucrative job opportunity.

Enter the boot camp, a trend that has quickly grown in popularity as a way to train workers for in-demand tech skills. Here are 10 data science boot camps designed to help you brush up on your data skills, with courses for anyone from beginners to experienced data scientists.

Bit Bootcamp

Located in New Jersey, Bit Bootcamp offers both part-time and full-time courses in data analytics that last four weeks. It has a rolling start date, and courses cost between $1,500 and $6,500, according to data from Course Report. It’s a great option for students who already have a background in SQL as well as in object-oriented languages such as Java, C# or C++. Attendees can expect to work on real problems they might face in the workplace, whether at a startup or a large corporation. The course concludes with a Hadoop certification exam that draws on the skills learned over the four weeks.
Price: $1,500 – $6,500

NYC Data Science Academy
The NYC Data Science Academy offers 12-week courses in data science that combine “intensive lectures and real world project work,” according to Course Report. It’s aimed at more experienced data scientists who have a master’s or Ph.D. degree. Courses include training in R, Python, Hadoop, GitHub and SQL, with a focus on real-world application. Participants walk away with a portfolio of five projects to show potential employers, as well as a capstone project that spans the last two weeks of the course. The NYC Data Science Academy also helps students garner interest from recruiters and hiring managers through partnerships with businesses. In the last week of the course, students participate in mock interviews and job-search prep; many will also have the opportunity to interview with hiring tech companies in the New York and Tri-State area.
Price: $16,000

The Data Incubator
The Data Incubator is another program aimed at more experienced tech workers who have a master’s or Ph.D., but it’s unique in that it offers fellowships, which means students who qualify can attend for free. Fellowships, which must be completed in person, are available in New York City, Washington, D.C. and the Bay Area. The program also offers students mentorship directly from hiring companies, including LinkedIn, Microsoft and The New York Times, all while they work on building a portfolio to showcase their skills. The boot camp runs for eight weeks, and students need a background in engineering and science. Attendees can expect to leave the program with data skills that are applicable at real-world companies.
Price: Free for those accepted

Galvanize
Galvanize has campuses in Seattle; San Francisco; Denver, Fort Collins and Boulder, Colo.; Austin, Texas; and London. The focus of Galvanize is to develop entrepreneurs through a diverse community of students that includes programmers, data scientists and Web developers. Galvanize boasts a 94 percent placement rate for its data science program since 2014, and students can apply for partial scholarships of up to $10,500. According to Galvanize, students have gone on to work for companies such as Twitter, Facebook, Airbnb, Tesla and Accenture. This boot camp is intended to combine real-life skills with education so that graduates walk away ready to start a new career or advance at their current company through formal courses, workshops and events.
Price: $16,000

The Data Science Dojo
With campuses in Seattle, Silicon Valley, Barcelona, Toronto, Washington and Paris, the Data Science Dojo brings quick and affordable data science education to professionals around the world. It’s one of the shortest programs on this list — lasting only five days — and it covers data science and data engineering. Before you even attend the program, you get access to online courses and tutorials covering the basics of data science. Then you start the in-person program, which consists of 10-hour days over the course of five days. Finally, after the boot camp is complete, you’ll be invited to exclusive events, tutorials and networking groups that will help you continue your education. Due to the short nature of the course, it’s tailored to those already in the industry who want to learn more about data science or brush up on the latest skills. However, unlike some of the other courses on this list, you don’t need a master’s degree or Ph.D. to enroll; it’s aimed at anyone at any skill level who simply wants to throw themselves into the trenches of data science and become part of a global network of companies and students who have attended the same program.
Price: Free for those accepted

Metis
Metis has campuses in New York and San Francisco, where students can attend intensive in-person data science workshops. Programs take 12 weeks to complete and include on-site instruction, career coaching and job placement support to help students make the best of their newly acquired skills. Similar to other boot camps, Metis’ programs are project-based and focus on real-world skills that graduates can take with them to a career in data science. Those who complete the program can expect to walk away with in-depth knowledge of modern big data tools, access to an extensive network of professionals in the industry and ongoing career support.
Price: $14,000

Data Science for Social Good
This Chicago-based boot camp has specific goals: it focuses on turning out data scientists who want to work in fields such as education, health and energy to help make a difference in the world. Data Science for Social Good is a fellowship program offered through the University of Chicago, and it allows students to work closely with both professors and professionals in the industry. Attendees are put into small teams alongside full-time mentors who help them through the course of the fellowship to develop projects and solve problems facing specific industries. The program lasts 14 weeks, and students complete 12 projects in partnership with nonprofits and government agencies to help tackle problems currently facing those industries.
Price: Free for those accepted

Level
Offered through Northeastern University, Level is a two-month program that aims to turn you into a hirable data analyst. Each day of the course focuses on a real-world problem that a business might face, and students develop projects to solve these issues. Students can expect to learn more about SQL, R, Excel, Tableau and PowerPoint, and to walk away with experience in preparing data, regression analysis, business intelligence, visualization and storytelling. You can choose between a full-time eight-week course that meets five days a week, eight hours a day, and a hybrid 20-week program that meets online and in person one night a week.
Price: $7,995

Microsoft Research Data Science Summer School
The Microsoft Research Data Science Summer School — or DS3 — runs for eight weeks during the summer. It’s an intensive program intended for upper-level undergraduates and graduating seniors, with the aim of growing diversity in the data science industry. Attendees get a $5,000 stipend as well as a laptop they keep at the end of the program. Classes accommodate only eight people, so the process is selective, and it’s open only to students who already reside in, or can arrange their own accommodations in, the New York City area.
Price: Free for those accepted

Silicon Valley Data Academy
The Silicon Valley Data Academy, or SVDA, hosts eight-week training programs in enterprise-level data science skills. Those who already have an extensive background in data science or engineering can apply to be a fellow and have the tuition waived. You can expect to learn more about data visualization, data mining, statistics, machine learning and natural language processing, as well as tools such as Hadoop, Spark, Hive, Kafka and NoSQL. Programs consist of a more traditional curriculum including homework, but they also include guest lectures, field trips to the headquarters of collaborating companies and projects that offer real-world experience.
Price: Free for those accepted


Back in 1991
There was quite a collection of new technology and plain old interesting geeky stuff in 1991, including the public debut of the World Wide Web, the introduction of Linux and the discovery of Otzi the Iceman. There was the lithium-ion battery, PGP encryption, Apple’s PowerBook, Terminator 2 and more. When you’re through, if you’d like to catch up on the first nine installments of this series, check out 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008 and 2007.

‘All your base are belong to us’
Really? It’s been 25 years since everyone was scratching their heads saying, “What the hell does ‘All your base are belong to us’ mean?” No. It’s been 25 years since the release of a Japanese video game called Zero Wing, from which sprang the broken English phrase that became an Internet meme about a decade later.

The first Internet cafe
Since virtually every coffee shop, restaurant, pizza joint and dentist’s office offers Internet access today – for free – it may be difficult for the younger set to imagine a time when that wasn’t the case. It wasn’t until 1991, when Wayne Gregori built the SFnet Coffeehouse Network and installed 25 terminals in coffee shops in and around San Francisco. Even then, the service wasn’t free, as the machines were coin-operated.

Linux debuts
Linus Torvalds released the first Linux operating system kernel on Oct. 5, 1991. On Oct. 6, 1991, Torvalds began arguing with volunteer developers who would go on to make Linux an open-source powerhouse and eventually a household name. On Oct. 7, 1991, he gave a vendor the finger.

Charge of the lithium-ion battery
This was the year that Sony began selling the first commercial rechargeable lithium-ion battery, which would go on to become ubiquitous in consumer electronics. They would also sometimes catch fire, a problem that has plagued the technology to some degree until this day, as the makers of the Boeing 787 have learned.

PGP better than pretty good
The encryption software called PGP – for Pretty Good Privacy – was developed and first distributed by Phil Zimmermann in 1991. In the mid-1990s, Zimmermann faced a three-year criminal investigation by the U.S. Customs Service for allegedly violating the Arms Export Control Act (encryption was considered a munition). Twenty-five years later, computer scientists face no such concerns because law enforcement and politicians have come to recognize that the benefits of strong encryption outweigh any risks. … Wait, what?

Apple introduces PowerBook
Though Apple had already produced a machine called the Mac Portable, the PowerBook – released in three flavors in October of 1991 – was the first worthy of being called portable. From Wikipedia: “These machines caused a stir in the industry with their compact dark grey cases, built-in trackball, and the innovative positioning of the keyboard which left room for palmrests on either side of the pointing device.” They weren’t cheap: $2,500.

Say hello, World Wide Web
There are myriad milestones marking the development of the Internet and the World Wide Web, with one occurring on Aug. 6, 1991 when Tim Berners-Lee published a summary of his pet project on the newsgroup alt.hypertext. Trolls had to wait a bit more though because the World Wide Web was not open to new users for another couple of weeks.

Microsoft splits with OS/2
On May 16, 1991, Bill Gates informed Microsoft employees via a memo that the company’s OS/2 partnership was over. From a story in the New York Times: “Reflecting their widening split with I.B.M., Microsoft executives said they would no longer call a new operating system they are working on OS/2 3.0. Rather, the new operating system will be named Windows NT, standing for New Technology. And Windows NT will not be able to run programs written for OS/2, as had previously been planned.”

Norton AntiVirus arrives
Having acquired Peter Norton Computing from Peter Norton the year before, Symantec released Norton AntiVirus 1.0 in 1991 for a suggested retail price of $129. Early advertising featured Norton himself, arms folded, wearing a surgical mask.

Arnold’s back in Terminator 2
Starring Arnold Schwarzenegger and Linda Hamilton, Terminator 2: Judgment Day was released on July 3, 1991. From IMDb: “A cyborg, identical to the one who failed to kill Sarah Connor, must now protect her young son, John Connor, from a more advanced cyborg, made out of liquid metal.”
Cessna CitationJet takes off

One of seven families of corporate jets built by the Wichita, Kan.-based aircraft maker, the CitationJet made its first flight on April 29, 1991. It could be configured to carry between three and nine passengers. The first production model was delivered two years later.

Galileo buzzes asteroid
Launched in 1989, NASA’s Galileo probe was foremost concerned with the planet Jupiter, but in October of 1991 it traveled past the asteroid Gaspra and took the first close-up images of such a space rock.

Co-inventor of transistor dies
John Bardeen, a physicist and electrical engineer, shared the 1956 Nobel Prize in Physics with William Shockley and Walter Brattain for their invention of the transistor. Bardeen won the physics prize again in 1972, making him the only person to have won it twice. He died on Jan. 30, 1991.

Apple debuts QuickTime
Apple’s multimedia technology with a built-in media player debuted 25 years ago. From Wikipedia: “Apple released the first version of QuickTime on Dec. 2, 1991 as a multimedia add-on for System Software 6 and later. The lead developer of QuickTime, Bruce Leak, ran the first public demonstration at the May 1991 Worldwide Developers Conference, where he played Apple’s famous 1984 TV commercial in a window at 320×240 pixel resolution.”

Python programming language
Guido van Rossum, Python’s “Benevolent Dictator For Life,” explains how it all started: “In December 1989, I was looking for a ‘hobby’ programming project that would keep me occupied during the week around Christmas. My office (a government-run research lab in Amsterdam) would be closed, but I had a home computer, and not much else on my hands. I decided to write an interpreter for the new scripting language I had been thinking about lately: a descendant of ABC that would appeal to Unix/C hackers. I chose Python as a working title for the project, being in a slightly irreverent mood (and a big fan of Monty Python’s Flying Circus).”

Congress mandates closed captioning
Although its official name was the Television Decoder Circuitry Act of 1990, it wasn’t until Jan. 23, 1991 that Congress passed legislation that gave the FCC authority to require that television manufacturers incorporate functionality to allow closed captioning by July 1, 1993.

Visual Basic 1.0 debuts
From Max Visual Basic: “The core of Visual Basic was built on the older BASIC language, which was a popular programming language throughout the 1980s. Alan Cooper had developed a drag-and-drop interface in the late 1980s; Microsoft approached him and asked his company, Tripod, to develop the concept into a form-building application. Tripod developed the project for Microsoft. It was called Ruby, and it did not include a programming language at all. Microsoft decided to bundle it with the BASIC programming language, creating Visual Basic.” It was declared legacy in 2008.

SNES arrives in North America
Already a hit in Japan, the Super Nintendo Entertainment System (SNES) hit North American stores in 1991 and would go on to be the best-selling game console of its time. It remains popular among collectors.

Star Trek VI hits theaters
Star Trek VI: The Undiscovered Country was released on Dec. 6, 1991. Never been a fan, so from IMDb: “On the eve of retirement, Kirk and McCoy are charged with assassinating the Klingon High Chancellor and imprisoned. The Enterprise crew must help them escape to thwart a conspiracy aimed at sabotaging the last best hope for peace.”

Announcing Nielsen SoundScan
A system for tracking and measuring the sale of music and video products, Nielsen SoundScan became the basis of the Billboard charts beginning with the magazine’s May 25, 1991 issue. The accuracy of SoundScan was credited by some with helping to advance the alternative music scene in the United States, as record labels were able to point to this data to help convince radio stations to air the songs of lesser known artists.

New kid in school: SMART Board
SMART Technologies, headquartered in Calgary, Alberta, released its first SMART Board in 1991. The touch-enabled interactive white board remains a staple in classrooms and boardrooms.

Edwin Land dies
A scientist and inventor who co-founded Polaroid, Edwin H. Land introduced his Polaroid instant camera to the public in 1948, allowing a photograph to be taken and developed in under a minute. Land died on March 1, 1991; he would have been heartened to know that, 25 years after his death, instant photography is making a comeback.

Otzi the Iceman discovered
From a 2015 article in Discover Magazine announcing that scientists had mapped all of Otzi’s 61 tattoos: “In September 1991, two tourists discovered (Otzi the Iceman’s) remains nestled into a glacier in the Italian Alps. Since then, researchers have rigorously analyzed the Iceman to paint a picture of what life was like during the start of the Bronze Age some 5,300 years ago. We now know that he suffered from a variety of degenerative ailments and ultimately died from an arrow wound to the shoulder.”

Telephone Consumer Protection Act of 1991
The Telephone Consumer Protection Act of 1991, signed into law by President George H. W. Bush, was supposed to – among many other things – stop solicitors from calling you once you told them to stop calling you. The legislation authorized the FCC to create a national database of numbers whose owners did not want to be called, period. That database was not created until Congress passed additional legislation in 2003.

‘Automatic Cleaning-liquid Dispensing Device’
We know it today as the automatic soap dispenser, and someone had to invent the first one. That someone was Guey-Chaun Shiau, who was granted a patent for the invention on Feb. 5, 1991.


Google’s recent Eddystone announcement adds another heavyweight to the indoor location-based services market. With beacon location support from the two largest mobile ecosystem providers, Bluetooth Low Energy (BLE) beacons have become the de facto standard for indoor micro-location applications. Let’s investigate some of the nuances of the Eddystone format compared to Apple’s iBeacon and highlight key items that organizations should consider while deploying a nascent solution.

The main driver for beacons is the search for a suitable indoor positioning technology for mobile devices, enabling real-time navigation and location awareness for mobile apps. Through the iPhone era, we have tested the limits of the established handset technologies: GPS, inertial navigation, cell tower triangulation, and Wi-Fi (Basic Service Set Identifier — BSSID) scanning. We know their capabilities, both individually and as a group. Indoors, Wi-Fi technology can provide adequate location presence, but for use cases requiring real-time, precise location updates, another solution is needed.

Then along came Apple’s iBeacon, the first standardized implementation of a BLE beacon. Sensing an iBeacon on a mobile device is in many ways similar to sensing a Wi-Fi access point; we use signal strength and the signal-distance characteristics to calculate location. However, the substantive difference in approach relates to packaging and implementation. BLE beacons are portable, fully cordless and inexpensive. As a result, a BLE beacon solution designed for location is significantly cheaper and less complex to install compared to an equivalent Wi-Fi based solution.
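The signal-strength-to-distance calculation works the same way for beacons as for Wi-Fi: typically a log-distance path-loss model. The sketch below is illustrative rather than drawn from any beacon SDK; the calibrated 1 m TX power of -59 dBm and the path-loss exponent are assumed values that must be tuned per device and environment.

```python
def estimate_distance(rssi: float, tx_power: float = -59.0,
                      path_loss_exponent: float = 2.0) -> float:
    """Estimate beacon distance in meters from a received signal
    strength (RSSI, in dBm) using the log-distance path-loss model.

    tx_power is the calibrated RSSI at a reference distance of 1 m
    (beacon frames advertise such a calibration value); the exponent
    is ~2.0 in free space and higher in cluttered indoor spaces.
    """
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exponent))

# A reading equal to the 1 m calibration value implies roughly 1 m.
print(estimate_distance(-59.0))   # 1.0
# A reading 20 dB weaker implies roughly 10 m at exponent 2.0.
print(estimate_distance(-79.0))   # 10.0
```

In practice, apps average RSSI over many packets before applying a model like this, because individual BLE samples are noisy.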

Additionally, beacons present a huge real-time accuracy advantage, since location is calculated by the app monitoring beacons that continuously transmit, instead of a Wi-Fi infrastructure waiting for a handset to awaken from variable power-save windows.

Because of these inherent advantages, iBeacons have quickly gained traction for indoor positioning. Not to be outdone, Google recently entered the beacon arena with Eddystone, which claims to extend the capabilities supported in Apple’s widely adopted iBeacon protocol.

With Eddystone, Google was looking for a better way to push notifications to clients without the need for an app, as well as the ability to manage beacon deployments. To achieve this, Eddystone is designed to support multiple new data packet types, including Eddystone-UID, Eddystone-URL, and Eddystone-TLM.

Eddystone-UID is similar to iBeacon in that it identifies a beacon and allows an app on a device to trigger a desired action. It differs in structure: the Eddystone-UID identifier is 16 bytes long and split into two parts, a namespace and an instance, whereas iBeacon’s is 20 bytes long and split into three (UUID, major, and minor).
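To make that layout concrete, here is a minimal parser for the Eddystone-UID service-data frame as described in Google’s open Eddystone specification (frame type 0x00, a signed calibrated-TX-power byte, a 10-byte namespace, and a 6-byte instance). The function name and sample frame are illustrative, not part of any official SDK.

```python
def parse_eddystone_uid(service_data: bytes) -> dict:
    """Parse an Eddystone-UID frame from the 0xFEAA service data.

    Layout: frame type (0x00), calibrated TX power (signed byte),
    10-byte namespace, 6-byte instance, then two reserved bytes.
    """
    if not service_data or service_data[0] != 0x00:
        raise ValueError("not an Eddystone-UID frame")
    return {
        "tx_power": int.from_bytes(service_data[1:2], "big", signed=True),
        "namespace": service_data[2:12].hex(),  # deployment-wide ID
        "instance": service_data[12:18].hex(),  # per-beacon ID
    }

# Hypothetical frame: type 0x00, TX power -18 dBm, then 16 ID bytes.
frame = bytes([0x00, 0xEE]) + bytes(range(10)) + bytes(range(6))
print(parse_eddystone_uid(frame))
```

The namespace/instance split is what lets a venue filter for its own beacons while still addressing each one individually.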

Eddystone-URL sends a compressed URL to the mobile device, relying on Android 5.0 or higher to process push events instead of a dedicated app. On iOS devices, users must install the Chrome browser and enable notifications, so an app is still required. As of now, Android users have to open the notification center to check for nearby Eddystone-URL beacons, making it more of a pull than a push notification. Unsolicited, push-based notifications can present phishing risks, although Google mitigates the risk by brokering all transactions, retrieving site titles and description meta tags and embedding them in the notification. This ensures that users know what site they are about to visit before clicking through a notification link.
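The “compressed URL” is not general-purpose compression: the published Eddystone-URL specification reserves one byte for the scheme prefix and maps byte values 0x00–0x0D to common URL substrings. A minimal decoder following those tables (the sample frame is made up for illustration):

```python
# Scheme prefixes and substring expansions from the Eddystone-URL tables.
SCHEMES = ["http://www.", "https://www.", "http://", "https://"]
EXPANSIONS = [".com/", ".org/", ".edu/", ".net/", ".info/", ".biz/",
              ".gov/", ".com", ".org", ".edu", ".net", ".info", ".biz",
              ".gov"]

def decode_eddystone_url(service_data: bytes) -> str:
    """Decode an Eddystone-URL frame: type 0x10, a TX power byte, a
    scheme-prefix byte, then text where bytes 0x00-0x0D are expanded."""
    if service_data[0] != 0x10:
        raise ValueError("not an Eddystone-URL frame")
    url = SCHEMES[service_data[2]]          # byte 1 is the TX power
    for b in service_data[3:]:
        url += EXPANSIONS[b] if b < len(EXPANSIONS) else chr(b)
    return url

# Hypothetical frame encoding https://example.com in just 11 bytes.
frame = bytes([0x10, 0xEE, 0x03]) + b"example" + bytes([0x07])
print(decode_eddystone_url(frame))   # https://example.com
```

The tight byte budget is the point: the whole URL has to fit inside a single BLE advertising packet.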

While Eddystone-URL may seem interesting, its cross-platform caveats and recent privacy concerns around potential implementations, as highlighted in Google’s Here project, will limit its appeal.

The third data packet type is Eddystone-TLM. Short for telemetry, TLM sends health and statistics data such as battery voltage, beacon temperature, uptime, and number of packets sent to the application developer. This is a one-way, best-effort communication and could help venues monitor their beacon deployments. Presumably, an Android client must be in direct proximity to the beacon for Eddystone-TLM to update the cloud with health telemetry. While it does not allow you to update or modify the data being sent by the beacon remotely, it is a step in the right direction.
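Those telemetry fields sit in a fixed 14-byte frame. Here is a minimal parser for the unencrypted (version 0x00) TLM frame, again a sketch based on the published spec rather than production code:

```python
def parse_eddystone_tlm(service_data: bytes) -> dict:
    """Parse an unencrypted Eddystone-TLM frame (type 0x20, version
    0x00): battery voltage in mV, temperature as signed 8.8 fixed-point
    Celsius, advertising PDU count, and uptime in 0.1-second ticks."""
    if service_data[0] != 0x20 or service_data[1] != 0x00:
        raise ValueError("not a plain Eddystone-TLM frame")
    return {
        "battery_mv": int.from_bytes(service_data[2:4], "big"),
        "temp_c": int.from_bytes(service_data[4:6], "big",
                                 signed=True) / 256,
        "adv_count": int.from_bytes(service_data[6:10], "big"),
        "uptime_s": int.from_bytes(service_data[10:14], "big") / 10,
    }

# Hypothetical frame: 3.0 V battery, 22 C, 100,000 packets, 1 h uptime.
frame = (bytes([0x20, 0x00]) + (3000).to_bytes(2, "big")
         + (22 * 256).to_bytes(2, "big") + (100_000).to_bytes(4, "big")
         + (36_000).to_bytes(4, "big"))
print(parse_eddystone_tlm(frame))
```

A fleet-management app would relay these readings to a backend, flagging beacons whose battery voltage trends downward.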

Google’s first crack at Eddystone tells us that more BLE innovations are likely coming. For early adopters looking to reap the massive benefits of micro location-based services, a BLE beacon infrastructure that can be centrally monitored and managed with firmware updates is an absolute necessity. Most significant is Google’s acknowledgement that indoor positioning based on BLE technology is no longer a fad and is here to stay. With some prominent BLE beacon deployments already in existence, Google could not afford to let Apple solely control the future of this space.


Microsoft promises at least two years of “incremental” updates for its current smartphone operating system.

Microsoft has posted an end-of-life date for Windows 10 Mobile, though it raises more questions than it answers.

According to Microsoft’s support website, mainstream Windows 10 Mobile support will cease on January 9, 2018. However, the posting also says Microsoft will make extended support updates and security patches available for “a minimum of 24 months after the lifecycle start date” of November 16, 2015.

Stranger still, the support site originally listed an end date of January 8, 2019 when WinBeta discovered it last night, with Microsoft promising updates for “a minimum of 36 months.” Since then, the document has changed to reduce Windows 10 Mobile’s lifespan by one year.

It gets weirder. Although Microsoft has previously said that Microsoft alone would distribute Windows 10 Mobile updates, with wireless carriers playing just a supporting role, the support document suggests otherwise. “The distribution of these incremental updates may be controlled by the mobile operator or the phone manufacturer from which you purchased your phone, and installation will require that your phone have any prior updates,” it says. (Windows Insiders can always install preview builds without going through carriers, though this increases the risk of running into bugs.)

Microsoft’s support site doesn’t shed any light on what will happen after January 2018. We can only speculate that a more significant upgrade for Microsoft’s mobile operating system will arrive, assuming the whole effort hasn’t cratered by then.

Why this matters: Long-lasting hardware support has been a touchy issue for Windows Phones over the years. Windows Phone 7 was a clean break from the old Windows Mobile, and Microsoft famously abandoned Windows Phone 7 users (and the existing app ecosystem) when it moved to Windows Phone 8. With Windows 10 Mobile, Microsoft has repeatedly walked back its upgrade promises for existing phones, and today the only phones running the latest stable operating system are the brand-new Lumia 950 and Lumia 950 XL. With Microsoft’s support document leaving plenty of open questions, Windows phone fans could be reasonably skittish about their upgrade paths from here on.


Independence has its upsides and downsides. IT pros lend firsthand advice on the challenges of going solo

The life of an independent IT contractor sounds attractive enough: the freedom to choose clients, the freedom to set your schedule, and the freedom to set your pay rate while banging out code on the beach.

But all of this freedom comes at a cost. Sure, heady times for some skill sets may make IT freelancing a seller’s market, but striking out on your own comes with hurdles. The more you’re aware of the challenges and what you need to do to address them, the better your chance of success as an IT freelancer.

We talked with a number of current and former IT freelancers to get their take on the hidden troubles of going solo. Here’s what they said and how to make the best of the downsides of freedom.

Selling yourself from afar

You can’t get a gig without the client signing off, and often getting key stakeholders to accept you as a valued partner can be challenging — especially when the work is remote.

“In order for a project to be successful, the client has to buy into you and the vision for the project,” says Nick Brattoli, founder and lead consultant at Byrdttoli Enterprise Consulting.

“This is exacerbated in the IT world, because more often than not, you are going to be working remotely,” says Brattoli, who’s been freelancing on and off for his entire IT career. “Technology is wonderful in that it makes it possible for us to work from anywhere with an Internet connection. But there is still value in being able to meet face-to-face, and many companies are hesitant to trust someone they haven’t met.”

In addition, at many companies the tech-savvy people running a project will know what needs to be done to meet the desired outcomes. “But once that’s all figured out, it is very hard to convince the people above them to go through with it,” Brattoli says. “Where technology is concerned, people who are less tech-savvy are going to be wary of any new changes to infrastructure.”

To get around these challenges, Brattoli recommends onsite travel to help generate buy-in; proposing various solutions at varying costs for a project; and constant communication after initial buy-in to manage expectations as much as possible.

Navigating non-negotiable agreements

Most companies have standard agreements in place to protect confidentiality and restrict competition. Such forms are usually non-negotiable, even for full-time employees, says Stanley Jaskiewicz, a business attorney at Spector Gadon & Rosen, who represents IT employers and freelancers.

For freelancers, these agreements can prove to be tricky business — especially as they begin to add up.

“A freelancer will usually have no leverage to negotiate the restrictive covenants, or the scope of confidentiality,” Jaskiewicz says. This creates several risks, he says. For one, a signed form might prevent a freelancer from being able to make good on future job opportunities or require the freelancer to give ownership of a work product to the employer, without commensurate compensation for what the freelancer gives up.

Furthermore, such restrictions can accumulate rapidly over a career, making it hard to keep track of what you can or can’t do when presented with future job opportunities.

“The freelancer must keep careful records — and constantly update one’s own knowledge — of the restrictions to which he or she is subject,” Jaskiewicz says.

The alternative is to pay a lawyer to check each new job against all prior agreements, which is an economically unrealistic proposition for most freelancers.

“One freelancer I know has an exhaustive knowledge and well-indexed records of what he has signed, but he is the exception,” Jaskiewicz says.

A practical alternative (on the confidentiality side, at least) is to request the “standard” exceptions to confidentiality, Jaskiewicz says. These include prior knowledge, public knowledge, independent development without use of confidential information, receipt of information from a third party not bound by confidentiality with the disclosing party, and compelled disclosure (that is, in response to a subpoena or deposition).

Even within IT departments there can be issues with your presence as a freelancer.

“When a consultant is placed in a team of permanent employees, there is sometimes some resentment toward the consultant, as they are usually earning more,” Weaver says. This can result in a lack of information sharing or the highly skilled IT work being allocated to full-time employees, with the menial work going to the more expensive and experienced consultant, he says.

This mistrust is even more pronounced when you want to change the way things are done — even if it’s part of your contract.

“People immediately start panicking,” Weaver says. “They would rather have the painfully slow manual process that needs intervention on a daily basis than one that runs automatically and rarely breaks.”

Weaver’s business specializes in moving databases and applications into the cloud, and there is often resistance.

“Getting people to understand that [concept] is really, really hard work,” he says. “There isn’t sufficient IT knowledge, and tech companies don’t help, as new products aren’t explained in a simple way that most people will understand.”

Educating people about IT and simplifying the details so that everyone can understand is key, Weaver says.

Riding out harsh realities and drumming up new business

Providing IT expertise, as with other types of freelancing, can be feast or famine. “At the first scent of an economic downturn, projects get canceled or postponed and IT consultants are either let go or not hired,” Weaver says.

“Many companies still have the old-fashioned view that IT is a cost center rather than a profit center, and as such IT departments are always one of the first places people look when they want to ‘trim the fat,’” Weaver adds.

While keeping a steady stream of work going can be a problem in general with freelancing, some say it’s an even bigger problem for IT freelancers.

“Most engineers and IT folks don’t consider sales and marketing to be their strongest skill, and for them to go out looking for new projects, discussing project road maps, and negotiating on payment terms is not a fun experience,” says Abbas Akhtar, who freelanced as a software engineer for three years before launching a Web development company called Solutions Park.

“Engineers generally would love it if they got a set of requirements, delivered the project, and got a check in the mail,” Akhtar says. “Freelancing means they have to do a lot more than just coding.”

Keeping up with technology changes

As anyone in IT knows, technology and how it’s used are constantly shifting. Freelancers especially are challenged when it comes to staying current with the ever-changing technology landscape.

“The resources available to a freelancer may not be sufficient to get trained on new technology, nor put that training into practice in a business environment to engrain the skills,” says Scott Smith, who has worked as an independent IT developer and database consultant and is currently a senior database administrator in the uTest software testing community.

To keep from falling behind, Smith participates in online webinars and forums within and outside the uTest community.

Sometimes change can put assignments in jeopardy. While working as a freelancer, Smith has participated in assignments where he was brought in to perform a specific task, then the scope of work changed to such an extent that it became impossible to complete the assignment.

“In these situations, you have to do your best to continue to provide value to the companies to make sure your brand is still seen in a positive light, despite not delivering on the initial projects,” Smith says.

Reconciling agile development with fixed-bid contracts

Many companies have adopted agile development methodologies to iterate their projects faster in hopes of gaining a competitive edge.

“This has been a boon for software developers — both for full-time and freelancers,” says Damien Filiatrault, CEO and founder of Scalable Path, a network of more than 1,000 freelance developers. “Demand is high, supply is tight, and projects are numerous.”

But for freelancers, there remains a major disconnect between traditional fixed-bid contracting and agile software development projects, Filiatrault says. “Lots of time needs to be spent up front specifying functionality and scope before work even begins on a fixed-bid project,” he says.

Indeed, traditional fixed-bid contracts immediately put the client at odds with the contractor as soon as the contract is signed, because the client wants to jam as much functionality as it can into the project for the fixed price. “On the other hand, the contractor wants to spend as little time as he can on the job for the fixed price,” Filiatrault says.

Working in agile, where the client’s objectives evolve over time, is hamstrung by the fixed-bid contract. “The contractor wants to keep scope locked down as opposed to working in tandem with the client to evolve [the software] in a more collaborative way,” Filiatrault says. “Constant change orders to a fixed bid are tedious. In modern software development, it’s best for the software contractor to work on an hourly basis rather than on fixed contract price.”

Coping with communications gaps

Even within the same company, IT and non-IT people often don’t communicate well with each other. This can be an issue for freelancers as they try to stay in sync with clients.

“It is very true that engineers and non-engineers speak pretty much different languages,” Akhtar says. “The way an engineer looks at a problem and how a nontechnical person may look at a problem is very different.”

What might seemingly be a small issue for clients could actually require a decent amount of technical work to fix, and communicating this to nontechnical people can be tough.

For example, a client of Akhtar’s thought that having the ability to sell 10 items on its website instead of 20 should reduce the cost of the project by half.

“From an engineer’s perspective, once the core e-commerce experience has been built, the incremental effort to modify the number of items you can sell from one to anything is almost zero,” he says. “Freelancers find it a big pain trying to communicate ideas such as these to the client.”

While time management is a challenge that applies to almost any profession, IT freelancers are in a unique position because they might be called in to address issues when they least expect it — throwing schedules into turmoil.

“Once you start to grow your business, time management becomes pivotal,” Brattoli says. “In order to grow, you need to manage your full-time job, your current freelancing projects, growing your business, training, and your personal life.”

This can become quite difficult in IT because many projects are not 9 to 5. “You may spend a day browsing the Internet, and you may work 24-plus hours straight because something blew up,” Brattoli says. “This flexible schedule can both make things difficult and allow you to succeed, depending on how you do it.”

Those working solo especially need to use their time wisely.

“A lot of tasks in the IT world involve doing a couple things, waiting a while, then doing some more things,” Brattoli says. “Rather than browsing the Internet without purpose every time you get these blocks of time, do some studying, read some blogs. Train yourself. On those days where you have nothing to do, bid on some jobs online, expand your LinkedIn network, plan out your dinner. Using your time wisely can alleviate a lot of stress.”


What are the major tech companies doing to win in the cloud, and how might the market shake out?

There’s an old joke that starts: How do you make God laugh?

The answer, of course: Make plans.

Larger services companies bent on world domination have poured a lot of capital into developing cloud resources, and some aren’t doing well. Let’s ignore software-as-a-service (SaaS) and pure-play cloud services companies, and instead focus on some new entrants that staked their claims in markets besides cloud.

Dell

What They Did: Clouds are made up of disk and virtual stuff, and Dell just bought EMC – whose disk empire is legendary – and with it, a huge chunk of VMware, whose feisty formula for virtualizing all-things-not-nailed-down is equally legendary.

What Might Happen: In one huge private (not public) transaction, Dell gets The Full Meal Deal, and makes up for a half-decade of losing ground.

Amazon

What They Did: Like all good B-School grads, they took a key success ingredient in their rapidly evolving IT infrastructure and resold excess capacity at such a price as to make it highly attractive to the IT-Maker hybrid community, thus launching still another way to make Amazon more fluid whilst spawning developer and service provider imaginations.

What Might Happen: All leaders are the biggest targets of competitors, who learn from a leader’s mistakes and find cracks to drive hydraulically powered wedges into. They’ve captured the imagination, and to keep up that pace of attractiveness and fluidity, must imagine products that don’t go stale easily through a long revenue cycle. I say: spin-off.

Microsoft

What They Did: Dawdled, then took an increasingly brittle (if varied and successful) business computing infrastructure, along with a huge user base, and not only adapted it for the web but also made licensing suitable for actual virtualization, and then for cloud use. Their cloud offering, Azure, now mimics the appliance, DevOps/AgileDev, and ground-floor services of its strongest competitors, if a little green in places.

What Might Happen: Microsoft will continue to try to leverage a huge user base into forward-thinking capabilities to extend but not destroy F/OSS initiatives, gleaning the good stuff and vetting as much as is possible into the user cloud model, and also the hybrid and public cloud models. Profit!

Oracle

What They Did: After the indigestion of Sun and MySQL, Oracle wrestled with evolving their own vertical cloud, knowing that their highly successful DB products required comparative platform (and also customer) control. Attempts at virtualization weren’t very successful, but the oil well in the basement, SQL infrastructure, continued to produce oil. Cloud offerings were designed for their target clientele and no others, holding ground while not losing ground.

What Might Happen: Oracle’s enterprise clientele has a love/hate relationship with Oracle, and migration to another platform makes them shudder and perspire. Core line-of-business functionality continues to evolve, but at a comparatively lower pace than the visible progress being made elsewhere in the arena Oracle plays in.

HP

What They Did: HP purchased Eucalyptus, a burgeoning cloud emulation and DevOps/AgileDev integration software organization known for their AWS emulation private cloud capabilities. HP evolved the purchase into the HP Helion Cloud, which offered private, public, and hybrid clouds. Development appeared (to me) to languish at least in the public space as smaller competitors, notably Rackspace (and other pure-play cloud services organizations) evolved. HP announced last week that they’re dropping the public portion of their Helion Cloud, after changing management earlier.

What Might Happen: As a hardware company, HP competes potentially with cloud services organizations on the cloud front. Its support for initiatives like OpenStack may change. Now that competitor Dell will digest EMC and VMware, the game has changed.

“If you do one thing, do it very well.” That mantra seems to ring true, and each of these organizations has struggled to keep up with the pace of change and competitive pricing, all while attempting to gain, rather than hold, ground. Juggling clouds, to coin a metaphor, isn’t easy.

There’s one motivation for migrating to the cloud that no one likes to talk about, and that cloud services organizations must absorb: shifting depreciation. Each of these organizations (and more like them) faces cost models while the sands of depreciation fall through the ROI glass.


You’d think after all this time that organizations would have finally gotten BYOD programs pretty much down pat. Don’t bet on it.

A recent study by tyntec reveals that a vast majority of organizations still have inadequate bring-your-own-device (BYOD) policies. That’s not very encouraging, considering that 49 percent of workers now use a personal mobile device for work-related tasks and spend a great deal of time on personal devices for their job.

Further, the typical U.S. worker now expects to have nothing less than total access – anywhere, anytime, from any device – to their employer’s networks, finds another study from Dell and Intel. But despite all this demand on the user side, many organizations still wrestle with security, privacy and support issues around BYOD. That is holding many employers back when it comes to giving BYOD an enthusiastic ‘thumbs up’.

So what does it take to get BYOD right in 2015? CSO put that question to a few IT leaders, whose collective responses reflect the still wide divide on how BYOD is supported at the IT executive level, possibly depending on the industry in which they work.

An undeniable force

The higher education sector has embraced BYOD probably as much as any. No surprise here, really. College and university culture is all about openness – of ideas, of expression, and of access to resources. So it is only natural that today’s campus environment is awash with personal devices.

The University of Tennessee at Chattanooga is a prime example. According to Thomas Hoover, associate vice chancellor and CIO, and Susan Lazenby, manager of strategic planning and communication, BYOD has taken the campus by storm.

The two shared the school’s experiences with BYOD by stressing the impact it has had on the school’s IT organization, including staff and budget. But they confirmed that BYOD was a trend not to be denied, and the university had no choice but to adopt it. They also noted that a robust BYOD program is not just demanded by students, but also by faculty and employees.

To illustrate how rapidly BYOD caught on at UT, the two noted that five years ago the school’s network was supporting 809 devices. That number rose to 14,906 in 2014. This year it jumped to approximately 48,000.

It’s a similar tale hundreds of miles away at Worcester State University in Massachusetts.

“Like any other institute in higher education, Worcester State doesn’t have any choice but to support BYOD,” notes Anthony (Tony) Adade, CIO at the university. “The students come from diverse backgrounds. They come with all kinds of devices. For several years we’ve been seeing an influx of games on our campus – all kinds of games. Besides the normal devices that we have to deal with, we didn’t have any choice but to support them.”

Like at the University of Tennessee, wide-scale BYOD has been a fairly new phenomenon at Worcester State, but demand quickly made up for lost time.

“Initially it was limited. The network itself was at capacity and was not able to handle the devices coming on campus,” Adade explains. “We had to tell some students that they can’t bring devices on campus or if they did they were on their own. However, later on we realized it would be in our strategic interest to have a plan and to address the issue. Now we can safely accommodate almost every device.”

Colleges and universities aren’t the only organizations that have felt compelled to adopt BYOD programs, of course. Countless companies and nonprofits are also supporting programs, and have learned some important lessons in how to do it right.

“It is important to have technology in-house to support BYOD strategy,” notes Christine Vanderpool, CIO at Molson Coors, one of the nation’s leading brewers. “Companies should invest in tools like MDM, DLP and application monitoring (tools that inform the user of malicious applications on their devices). You need staff to support these tools. You need a strong set of policies, procedures and end user education.”

“It is good to focus on the ‘what’s in it for them’ in most cases,” Vanderpool stresses. “If you deploy MDM or application controls, you have to explain how this is protecting them in their daily life and not just in their work life.”

What are the most important elements of an effective BYOD program, in terms of both providing employee flexibility and productivity and ensuring company data and network security? Molson Coors CIO Christine Vanderpool offers the following tips on what should be considered.

“Give real life examples like how some malicious apps can take control/read all the user’s SMS text messages, see password information entered into a bank app, etc. People care most when they can understand it and can potentially impact their lives beyond just their job,” Vanderpool says.

Not everyone’s a believer

But many CIOs remain skeptics when it comes to supporting BYOD, fearing that the probable risks still outweigh the possible benefits. One of them is Jim Motes, vice president and CIO at Rockwell Automation.

“I’m not really a fan of BYOD phones,” Motes says. “I believe the privacy constraints will be at odds with protecting and controlling corporate intellectual property.”

“The smartphone is not just communication technology, it’s a social lifeline, diary, and entertainment system,” Motes continues. “People have too much personal information stored on these systems and should be very careful about how much access they want to give their employers. Employers should avoid them completely to limit their liability should that personal information be breached and exposed.”

So how does an organization resolve these two competing forces: security and privacy concerns on one hand, versus user demand for convenience on the other?

Our sources offered the following combined tips on how to get BYOD right:

Have a thoughtful strategy
As noted, security remains a top concern for IT leaders when it comes to BYOD, so it is important to involve the IT security team in establishing a program from the outset. But the CSO should be helping to find a solution, not reasons to reject the program. The focus should be on securing the data first and foremost, then the devices.

Take stock of the situation
Once you’ve set your strategy, begin with assessments of network capacity and security status. Issues to consider: How vulnerable is the network? Who is connecting to it? What devices and applications are they using?
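As a purely illustrative sketch of the “who is connecting” question, the snippet below summarizes a device inventory by OS and by ownership. The record layout is invented for the example; real data would come from your MDM, NAC, or DHCP logs.

```python
# Hypothetical sketch: summarizing which devices connect to the network.
# The record fields (device, os, owner) are invented for illustration,
# not any particular MDM or NAC product's export format.
from collections import Counter

def summarize_devices(records):
    """Count connecting devices by OS and by ownership (corporate vs. personal)."""
    by_os = Counter(r["os"] for r in records)
    by_owner = Counter(r["owner"] for r in records)
    return by_os, by_owner

records = [
    {"device": "iPhone 6",  "os": "iOS",     "owner": "personal"},
    {"device": "Galaxy S6", "os": "Android", "owner": "personal"},
    {"device": "ThinkPad",  "os": "Windows", "owner": "corporate"},
    {"device": "iPad Air",  "os": "iOS",     "owner": "personal"},
]
by_os, by_owner = summarize_devices(records)
print(by_os)     # Counter({'iOS': 2, 'Android': 1, 'Windows': 1})
print(by_owner)  # Counter({'personal': 3, 'corporate': 1})
```

Even a rough tally like this answers the assessment questions: which platforms you must support, and how much of the connecting hardware is employee-owned.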

Have a clear set of policies and expectations
You need a set of policy guidelines that spell out what is and isn’t allowed and that guide the behavior of employees and users. Policies should be simple and easy to understand. Toward that end, have your employees help draft the policies to secure their understanding and support up-front.

Some devices are a ‘go’ and some are a ‘no’
Identify the devices you won’t be able to support; the program probably can’t be all things to all employees. Create an approved list of devices that IT will support, provided the employee has a valid business reason for using one. Purchase the devices at a reduced cost for employees, and put the necessary safeguards on them. Let employees know up front to what degree you will support a particular device purchase.

Proper training is critical
Educate employees on how to connect their devices to the network, as well as the dos and don’ts of usage. Lunchtime training sessions are a smart idea. Stress what employees are agreeing to, including what happens if a device is lost or stolen: the device will be wiped. Most employees will say yes; those who don’t simply can’t participate in the program.

Finally, “BYOD risks and considerations will continue to grow and change just as rapidly as the technologies change,” stresses Vanderpool. “It is vital that all aspects of the BYOD model be continuously reviewed, updated, re-communicated and employees re-educated. The model deployed and the supporting guidelines, policies and procedures implemented to support it must be agile and allow the company to be able to quickly adapt or change them when necessary.”

 

Click here to view complete Q&A of 70-243 exam

MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft 70-243 Training at certkingdom.com

 

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Vulnerability risk management has re-emerged as a top challenge – and priority – for even the most savvy IT organizations. Despite the best detection technologies, organizations continue to be compromised on a daily basis. Vulnerability scanning provides visibility into potential land mines across the network, but often just results in data tracked in spreadsheets and independent remediation teams scrambling in different directions.

The recent Verizon Data Breach report showed that 99.9% of the vulnerabilities exploited in attacks had been published more than a year before they were compromised. This clearly demonstrates the need to shift from a “find” to a “fix” mentality. Here are three key challenges to getting there:

* Vulnerability prioritization. Today, many organizations prioritize based on CVSS score and perform some level of asset importance classification within the process. However, this is still generating too much data for remediation teams to take targeted and informed action. In a larger organization, this process can result in tens of thousands – or even millions – of critical vulnerabilities detected. So the bigger question is – which vulnerabilities are actually critical?

Additional context is necessary to get a true picture of actual risk across the IT environment. Organizations might consider additional factors in threat prioritization, such as the value of an asset, the availability of public exploits for a detected vulnerability, attacks and malware actively targeting it, or its popularity in social media conversations.
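To make that concrete, here is a minimal scoring sketch. The field names and multipliers are invented for illustration only; they are not drawn from any scanner, threat feed, or vendor product.

```python
# Illustrative sketch of contextual vulnerability prioritization.
# All field names and weights below are assumptions for the example.

def risk_score(vuln):
    """Combine CVSS with asset value and threat context into one score."""
    score = vuln["cvss"] * vuln["asset_value"]  # weight by asset importance
    if vuln["exploit_public"]:
        score *= 2.0                            # a public exploit exists
    if vuln["actively_targeted"]:
        score *= 3.0                            # seen in active attacks/malware
    return score

def prioritize(vulns, top_n=10):
    """Return the top-N vulnerabilities by contextual risk."""
    return sorted(vulns, key=risk_score, reverse=True)[:top_n]

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "asset_value": 0.2,
     "exploit_public": False, "actively_targeted": False},
    {"id": "CVE-B", "cvss": 7.5, "asset_value": 1.0,
     "exploit_public": True, "actively_targeted": True},
]
ranked = prioritize(vulns)
print([v["id"] for v in ranked])  # ['CVE-B', 'CVE-A']
```

Note how the lower-CVSS finding on a high-value, actively targeted asset (score 45.0) outranks the “critical” CVSS 9.8 on a low-value system (score 1.96) – exactly the re-ordering that raw CVSS sorting misses.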

* Remediation process. The second and perhaps most profound challenge is the remediation process itself. On average, organizations take 103 days to remediate a security vulnerability. In a landscape of zero-day exploits, and given the speed and agility with which malware developers operate, that window of opportunity is wide open for attackers.

The remediation challenge is most often rooted in the process itself. While there is no technology that can easily and economically solve the problem, there are ways to enable better management through automation that can improve the process and influence user behavior. In some cases, there are simple adjustments that can result in a huge impact. For example, a CISO at a large enterprise company recently stated that something as easy as being able to establish deadlines and automated reminder notifications when a deadline was approaching could vastly improve the communication process between Security and DevOps/SysAdmin teams.

In other words, synchronizing communication between internal teams through workflow automation can help accelerate the remediation process. From simple ticket and task management to notifications and patch deployment, the ability to track the remediation process within a single unified view can eliminate the need to navigate and update multiple systems and potentially result in significant time savings.
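The deadline-and-reminder idea from the CISO anecdote can be sketched in a few lines. The ticket fields and the `notify()` stand-in below are hypothetical; a real workflow would hook into a ticketing system and an email or chat service.

```python
# Minimal sketch of automated deadline reminders for remediation tickets.
# Ticket fields and notify() are assumptions for illustration only.
from datetime import date, timedelta

REMINDER_WINDOW = timedelta(days=7)  # nag when a deadline is a week out

def tickets_needing_reminder(tickets, today):
    """Return open tickets whose deadline falls within the reminder window."""
    return [t for t in tickets
            if t["status"] == "open"
            and today <= t["deadline"] <= today + REMINDER_WINDOW]

def notify(ticket):
    # Stand-in for an email or chat notification.
    print(f"Reminder: {ticket['id']} due {ticket['deadline']}")

tickets = [
    {"id": "VULN-101", "status": "open",   "deadline": date(2015, 9, 3)},
    {"id": "VULN-102", "status": "closed", "deadline": date(2015, 9, 3)},
    {"id": "VULN-103", "status": "open",   "deadline": date(2015, 12, 1)},
]
for t in tickets_needing_reminder(tickets, today=date(2015, 9, 1)):
    notify(t)  # only VULN-101 triggers a reminder
```

Trivial as it is, this is the kind of workflow glue that keeps Security and DevOps/SysAdmin teams synchronized without anyone chasing spreadsheets.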

* Program governance. The adage, “You can’t manage it if you can’t measure it” is true when it comes to evaluating the success of a vulnerability risk management program. In general, information security programs are hard to measure compared to other operational functions such as sales and engineering. One can create hard metrics, but it is often difficult to translate those metrics into measurable business value.

There is no definitive answer for declaring success. For most organizations, this will likely vary depending on the regulatory nature of their industry and overall risk management strategy. However, IT and security teams demonstrate greater value when they can show the level of risk removed from critical systems.

Establishing the right metrics is the key to any successful governance program, but it also must have the flexibility to evolve with the changing threat landscape. In the case of vulnerability risk management, governance may start with establishing baseline metrics such as number of days to patch critical systems or average ticket aging. As the program evolves, new, and more specific, metrics can be introduced such as number of days from discovery to resolution (i.e., time when a patch is available to actual application).
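A baseline metric such as days from discovery to resolution is simple to compute once the dates are tracked; this sketch uses an invented record layout for illustration.

```python
# Sketch of a baseline governance metric: average days from discovery
# to resolution. The record layout is an assumption for illustration.
from datetime import date

def avg_days_to_resolve(records):
    """Mean number of days between discovery and resolution."""
    spans = [(r["resolved"] - r["discovered"]).days for r in records]
    return sum(spans) / len(spans)

records = [
    {"discovered": date(2015, 1, 1), "resolved": date(2015, 4, 14)},  # 103 days
    {"discovered": date(2015, 2, 1), "resolved": date(2015, 3, 20)},  #  47 days
]
print(avg_days_to_resolve(records))  # 75.0
```

Tracked over time, a falling average is precisely the kind of hard metric that translates into demonstrable risk reduction for the business.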

Practitioners can start improving the process by making some simple changes. For example, most vulnerability assessment tools offer standard prioritization of risks based on CVSS score and asset classification. However, this approach is still generating too much data for remediation teams. Some organizations have started to perform advanced correlation with threat intelligence feeds and exploit databases. Yet, this process can be a full-time job in itself, and is too taxing on resources.

Technologies exist today to help ease this process through automation by enriching the results of vulnerability scan data with rich context beyond the CVSS score. Through correlation with external threat, exploit, malware, and social media feeds and the IT environment, a list of prioritized vulnerabilities is delivered based on the systems most likely to be targeted in a data breach. Automating this part of the process with existing technologies can help cut the time spent on prioritization from days to hours.

Today, vulnerability management has become as much about people and process as it is about technology, and this is where many programs are failing. The problem is not detection. Prioritization, remediation, and program governance now take precedence. It is no longer a question of if you will be hacked, but rather when and, most importantly, how. The inevitable breach has become a commonly accepted reality. Vulnerability risk management calls for a new approach that moves beyond a simple exercise in patch management to one focused on risk reduction and rapid incident response.

 


We put the screws to all five modern browsers, testing them in all manner of scenarios. If you’re looking for a fast, efficient, convenient browser, we’ve found two that we think you’ll like.

The best browsers go beyond benchmarks, racing through real-world webpages as well as canned routines. They’re easy to set up, flexible and extensible, and connect other devices and services into an ecosystem.

Look, throwing a few benchmarks at a browser just doesn’t cut it any more. Just as you expect us to test graphics cards against the latest games, we think your browsers should be tested against a collection of live sites. Can they handle dozens of tabs at once? Or do they shudder, struggle, and crash, chewing through your PC’s processor and memory?

To pick a winner, we put Google Chrome, Microsoft’s Edge and Internet Explorer, Mozilla Firefox, and Opera to the test, barring Apple’s abandoned Safari for Windows. We used the latest available version of each browser, except for Firefox, which upgraded to Firefox 40 late in our testing. And we also tried to look at each browser holistically: How easy was each to install and set up? Does Opera make it simple to switch from Chrome, for example?

For 2015, we have a newcomer: Microsoft’s Edge browser, which has been integrated into Windows 10.

You’ve already seen part of our tests, where we showed you how much of an impact enabling Adobe Flash can have on your system. Disabling or refusing to load Flash can seriously improve performance—some sites, like YouTube, have begun to transition to less CPU-intensive HTML5 streams. Still, other readers pointed out that they simply need to run Flash on their favorite sites. That’s fine—we tested with and without Flash, so you’ll have a sense for which browser performs best, in either case.

Oh, and Microsoft: We found that your new Edge browser isn’t quite as fast as you make it out to be. (Sorry!) But it still demonstrated definite improvement over Internet Explorer.

The benchmark numbers favor Chrome and Firefox

We do consider benchmarks to be a valuable indicator of performance, just not a wholly defining one. Still, they’re the numbers that users want to see, so we’ll oblige. We used a Lenovo Yoga 12 notebook with a 2.6GHz Intel Core i7-5600U inside, running a 64-bit copy of Windows 10 Pro on 8GB of memory as our test bed.

We tested Chrome 44, Windows 10’s Edge 12, Firefox 39, Internet Explorer 11, and Opera 31 against two popular (though unsupported) benchmarks—Sunspider 1.0.2 and Peacekeeper—just for reference purposes. But we’d encourage you to pay attention to the more modern benchmarks, including Jet Stream, Octane 2.0, Speedometer, and WebXPRT. The latter two are especially useful, as they try to mirror actual interaction with web apps. We also tested using Oort Online’s graphics benchmark as well as the standardized HTML5test—which is not so much a benchmark, but an evaluation of how compatible a browser is with the HTML5 standard for Web development.

From our testing, Chrome and Firefox topped the Speedometer and WebXPRT tests, respectively. Perhaps unsurprisingly, Google was the fastest browser under the Google-authored Octane 2.0 benchmark. But Microsoft’s Edge led the pack in the Jet Stream benchmark—which includes the Sunspider tests, which Edge led as well. (For all of the benchmarks, a higher number is better; the one exception is Sunspider, which records its score in the time it took to run.)

Google Chrome and Mozilla Firefox do well here. (A higher result is better, except for the Sunspider benchmark.)

What’s surprising about Edge is that it led the pack in the Jet Stream benchmark, but fell way behind on Speedometer, only to record a quite reasonable score in WebXPRT. (Microsoft claims that Edge is faster than Chrome in the Google-authored Octane 2.0 benchmark as well, but our results don’t indicate that.)

Chrome flopped on the Sunspider test; Firefox’s only comparably poor showing was the Oort Online benchmark, which draws a Minecraft-like landscape in the browser.

For whatever reason, I noticed some graphical glitches as Edge rendered the Oort landscape, including problems drawing a shadow that slid across the bay in the night scene. But Oort proved even more problematic for Firefox, rendering “snow” as flashing lights and rain as a series of lines. (We’ve included the test result, but take it with a grain of salt.) Internet Explorer 11 simply couldn’t run the Oort benchmark at all.

We also included the HTML5test compatibility test, which measures how compatible each browser is with the latest HTML5 Web standards. Although some developers focus extensively on each browser’s score, even the test developer isn’t too concerned:

HTML5test scores are less interesting to me than people think. Any browser above 400 points is a perfectly fine choice for todays web.
— HTML5test (@html5test) August 2, 2015

And the only one that fails that test, of course, is the semi-retired Internet Explorer 11.

What does all this mean? It doesn’t indicate a clear win for any specific browser, including Chrome. Based on our benchmark tests, many of the browsers will handle the modern web just fine.


Real-world testing: Opera makes its case

Opera Software has always lived on the periphery, with what NetApplications says is just 1.34 percent of the worldwide browser market. With Opera considering putting itself up for sale, it may not be long for this world. But in terms of real-world browser performance, Opera is worth a long, hard look while it’s still around.

Why? Because in real-world browser tests, Chrome and Opera performed very well.

It’s important to know how each browser will actually perform while surfing the live web. Testing this is a challenge—some canny Web sites constantly tweak their content, and ads will vary from one visit to the next. But we tried to minimize the time over which we visited each site to help minimize variation.

We used a selection of 30 live sites, from Amazon to CNN to iMore to PCWorld, as well as a three-tab subset of each, to see how performance scaled. Our tests added each site in a new tab, one after another, to roughly approximate how a user might keep adding new tabs—but quickly, so as to stress-test the browser itself. Finally, we evaluated the browsers with Adobe Flash turned on and off. (Neither Opera nor Firefox ships with Flash natively, so we tested without it, then downloaded the Flash plugin.)

After loading all 30 tabs, we waited 30 seconds, then totaled the CPU and memory consumption of the app itself, its background processes, and the separate Flash process, if applicable.
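The totaling step looks roughly like this. The per-process snapshot below is invented sample data; in practice the figures would come from Task Manager or a process-monitoring tool.

```python
# Sketch of summing CPU and memory across a browser's many processes.
# The snapshot numbers are made up for illustration.

def totals(snapshot):
    """Sum CPU percent and memory (MB) over one browser's processes."""
    cpu = sum(p["cpu"] for p in snapshot)
    mem = sum(p["mem_mb"] for p in snapshot)
    return cpu, mem

# One main process, two renderer processes, plus a Flash process.
snapshot = [
    {"proc": "browser main", "cpu": 2.1, "mem_mb": 310.0},
    {"proc": "renderer 1",   "cpu": 1.4, "mem_mb": 220.5},
    {"proc": "renderer 2",   "cpu": 0.8, "mem_mb": 180.2},
    {"proc": "flash plugin", "cpu": 2.3, "mem_mb": 150.3},
]
cpu, mem = totals(snapshot)
print(f"total: {cpu:.1f}% CPU, {mem:.1f} MB")  # total: 6.6% CPU, 861.0 MB
```

Summing every child process matters: a multi-process browser like Chrome would look deceptively lightweight if you counted only its main process.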

So what does all this mean? If you own a mid-range or low-end PC, it may not have much memory or a particularly powerful CPU. In that case, you might consider switching to a more efficient browser.

This chart contains a lot of information; you can click it to enlarge it. But what you should focus on are the differences in memory consumption (the yellow bars) and the differences in CPU consumption. We’ve included the raw data in a table at the bottom of the chart. In each case, a lower number indicates a more efficient browser, with the one exception being Firefox (with Flash)’s zero scores, which we’ll cover below.

Oddly enough, we noted an actual decrease in CPU consumption when Flash was enabled on the three-tab test, specifically within Edge, Firefox, and Opera—perhaps because the Flash plugin was more efficient at lighter workloads. As our previous report indicated, however, CPU and memory consumption soared when we started throwing tab after tab at each browser.

The other discrepancy you may note is that Chrome, with Flash enabled, consumes nearly as much memory as Edge does without Flash. We double-checked this, though on another day, when Edge’s memory consumption was even higher than what we had recorded. (That’s probably due to differences in the ads and video the sites displayed.)

Chrome has a reputation for sucking up all the memory you can throw at it, and these numbers bear that out. But it also consumes relatively little of your CPU—which, if you scale down your tab use, makes its impact on your PC manageable. Opera, however, really shines. Without Flash, Opera consumed just 6.6 percent of the CPU and 1.83GB of RAM during our stress test. With Flash on, Opera consumed 3.47GB of memory and 81.2 percent of our test machine’s CPU.

And Mozilla was getting on so well—but with Flash on, tabs essentially descended into suspended animation until they were clicked on, then began slowly loading. It was awful. “Tombstoning” tabs that aren’t being used is acceptable, but please, load them first, Mozilla!

Finally, we tried loading pages and timing how long it took for each page to become “navigable”—in other words, how soon one could scroll down. Fortunately, all the browsers we tested did well, though some were faster than others; Chrome and Opera did exceedingly well, especially with Flash turned off. In all, however, we’d say that any browser that can load pages in three seconds or less will suit your needs. (Keep in mind that page-load time depends in part on your Internet connection and the content of the page itself.)

The convenience factor
Since all of these browsers are free, ideally you should download each one and evaluate it for yourself. And each browser makes it quite easy to pluck bookmarks and settings from its rivals, especially from Chrome and Internet Explorer. But manually exporting bookmarks is another story—it’s almost like telling the browser you’re fed up with it. Firefox, for example, passive-aggressively buries the export-bookmarks command a few menus deep. Even stranger, Opera claims that you can export bookmarks from its Settings menu, but only the import option appears to have survived in Opera 31.

More and more, however, browsers are using a single sign-on password to identify you, store your bookmarks online, and make shifting from PC to PC a snap—provided that you keep the same browser, of course.

Chrome, for example, makes setting itself up on a new PC literally as simple as downloading the browser, installing it, and entering your username and password. You may have to double-check that the bookmark bar is enabled, for example, but after that your bookmarks and stored passwords will load automatically. (As always, make sure that “master” passwords like these are complex.)

Chrome isn’t alone in this, either. Firefox’s Sync syncs your tabs, bookmarks, preferences and passwords, while Opera syncs your bookmarks, tabs, the “Speed Dial” homepage, and preferences and settings.

That’s an area where Edge needs improvement. Edge can import favorites/bookmarks from other browsers, manually, but doesn’t keep a persistent list of favorites across machines—at least not yet. But if you save a new favorite in IE11, it’s instantly available across your other PCs. Other browsers—not Edge—also allow you to access your desktop bookmarks within their corresponding mobile apps.

You can configure the Microsoft Edge homepage to show you information that allows you to start your day. (iGoogle did this too, years ago.)

It’s also interesting that, more and more, browsers are moving away from the concept of a “homepage” in favor of something like Edge or Opera, where the browser opens to an index page, with news and information curated by the browser company itself. But you still have options to set your own homepage in Chrome, Edge, and Firefox.

Honestly, all of the browsers we tested were relatively easy to set up and install, with features to import bookmarks and settings either from other browsers or other installations. You may have your own preferences, but it’s a relative dead heat.


Going beyond the web
Modern browsers, however, go beyond merely surfing the web. Most come with a number of intangible benefits that you might not know about.

Perhaps you’d like your browser to serve as a BitTorrent client, for example. In the early days, you’d need to download a separate, specific program for that. Today, those capabilities can be added via plugins or addons—which most browsers offer, but not Edge, yet. (This can be more than a convenience; Edge will store your passwords, but not in an encrypted password manager like LastPass.)

If there’s one reason to use Firefox, it’s because of the plugin capability. Mozilla has a site entirely dedicated to plugins, and they’re organized by type and popularity. Installing a plugin is as easy as clicking through a couple of notifications, then restarting your browser. And given the market share of Chrome—and the plugin popularity of Firefox—you’ll find developers who will focus on those two first. A good example is OneTab, which transforms all of your open tabs into a text-based list, dramatically cutting your browser’s memory consumption. Note that the more plugins you add and enable, the more memory and CPU power your browser will consume.

Opera doesn’t appear to have nearly the number of available plugins that Firefox does, but it does have a unique twist: a “sidebar” along the left hand side that can be used for widgets, like a calculator or even your Twitter feed. Opera is also extensible via wallpaper-like themes, but they’re far less impressive.

Chrome hides a wealth of options to manage what you see on the Web, but only if you want to explore.
But you’ll also notice browsers adding more and more functionality right in the app itself. Firefox includes a Firefox-to-Firefox videoconferencing service called Firefox Hello that works right in your browser, and you can save webpages to a Pocket service for later reading. And this is where Edge shines—its digital assistant, Cortana, is built right in, and there are Reading View options and a service to mark up webpages, called Web Notes. Cortana does an excellent job supplying context, and it’s certainly one of the reasons to give Edge a try.

Over time, we expect that this will be one area where Edge and Chrome will attempt to “pull away,” as it were. In a way, it’s similar to the race in office suites: a number of apps mimic functionality that Microsoft Office had a few years ago. But Microsoft has begun building intelligence into Office, and Edge, elevating them over their competition. Given that Chrome is also the front door to Google Now on the PC, we may eventually see Google try to out-Cortana Cortana on her home turf.

So who wins? Here’s the way we see it.
Give credit where credit is due: Edge’s performance has improved to the point that it’s competitive, though perhaps not as much as Microsoft would make it seem. Still, its lack of extensibility and proper syncing drag it down, at least until they’re added later this year. Firefox also performed admirably, until it bogged down under our real-world stress test. We also believe Opera would be a terrific choice for you, since it zips through benchmarks and real-world tests alike. Sure, it lacks the tight OS and service integration of Chrome, IE, and Edge—but some may see that as a bonus, too.

All that said, we still think Google’s Chrome is the best of the bunch.

Chrome has a well-deserved reputation for glomming on to and gobbling up any available memory, and our benchmarks prove it. But it’s stable, extensible, performs well, integrates with other services, and reveals its depths and complexity only if you actively seek them out. For that reason, Google Chrome remains our browser of choice, with Opera just behind.

