If you’ve ever been part of a technology roll-out that fizzled, you know how expensive and frustrating it can be. You carefully chose a vendor, made a big splashy announcement, (maybe) made training available to people and… nothing. The bad news is that this is costly and annoying. What you need to know is that it’s perfectly natural.
What you’re seeing in action is something called “The Hype Cycle,” and knowing what it is will help you counteract it and get more ROI from your technology investment. The term was coined in the mid-1990s by Gartner (or The Gartner Group, as the firm was then known). Basically, it shows how people react to a new technology.
There are 5 stages to this cycle:
The Technology Trigger: Someone invents a cool tool like the iPad, or IT finally quits mucking around with the bid process and announces they’ve chosen a new web presentation platform.
The Peak of Inflated Expectations: This is based on perfectly good math and overly optimistic expectations. “We have 200 managers; if they all do two web meetings a week, we’ll make our money back in no time, and look at everything we’ll save on travel.” (A back-of-the-envelope version of this math appears just after the list of stages.) The problem is that not all your managers will immediately jump on that bandwagon, which leads to…
The Trough of Disillusionment: This sounds like something out of Pilgrim’s Progress, but it’s basically the cold splash of reality in your face when you realize only a fraction of people will immediately adopt a new tool. At this point, you have to work like a dog to get to…
The Slope of Enlightenment: This is how long it takes your people to ramp up to using a tool at some level of effectiveness. If they move fast, it’s a quick climb; if they aren’t motivated to use the technology, it can be a long, painful, gradual slope to…
The Plateau of Productivity: This is where everyone who is ever going to use the tool is actually using it. Sometimes adoption is almost universal (like email) and sometimes it’s far below expectations (SharePoint, anyone?).
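To see how that peak math unravels, here is a tiny illustrative calculation. Every number in it is hypothetical (the license cost and per-meeting travel savings are invented for the sketch); the point is only how much the payback period stretches when adoption is a fraction of the projection.

```java
public class PaybackMath {
    public static void main(String[] args) {
        // All figures are hypothetical, echoing the 200-manager example above.
        double platformCost    = 100_000;  // assumed license cost
        int    managers        = 200;
        double meetingsPerWeek = 2;
        double savedPerMeeting = 50;       // assumed travel savings per web meeting

        // The Peak of Inflated Expectations: everyone adopts on day one.
        double weeklySavings = managers * meetingsPerWeek * savedPerMeeting; // $20,000/week
        System.out.printf("Payback at 100%% adoption: %.1f weeks%n",
                platformCost / weeklySavings);

        // The Trough of Disillusionment: only a fraction adopts at first.
        double realisticAdoption = 0.15;
        System.out.printf("Payback at 15%% adoption:  %.1f weeks%n",
                platformCost / (weeklySavings * realisticAdoption));
    }
}
```

The same arithmetic that promised a five-week payback now implies more than half a year, which is exactly the gap between the Peak and the Trough.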
Where money is recouped or lost forever is in a) how steep the Slope of Enlightenment is and b) how high the Plateau of Productivity is. So how do you get over the initial disappointment and get where you need to go faster?
Most companies make two mistakes when rolling out technology: they have unrealistic expectations of how quickly people will grab onto it, and they try to roll it out to everyone at the same time. Both will deepen the Trough of Disillusionment and lengthen the time it takes to reach productivity.
Here’s how to overcome this perfectly natural obstacle to success:
Start small: choose a pilot group of users, demonstrate success, and let the rest of your team build on it. You want to choose people for this group who are willing to use technology, are respected by their peers, and will immediately show positive results from using the tool. This will increase positive buzz and help others be more willing to try your new gizmo. Think about your peaks and troughs… the lower the peak, the shallower the trough, and the faster the climb to utilization.
The list of goals entrepreneurs have is usually long and perpetually growing. We might have plans to start a marketing project, invest in a new hire, or redesign our website. With so much happening at once, sharpening our leadership skills often falls to the bottom of the list. Yet it’s imperative to begin thinking about how you can take yourself and your company to the next level.
Scott Eblin, a leadership coach and author of The Next Level: What Insiders Know About Executive Success, writes about the behaviors leaders need to abandon and the ones they should adopt. I spoke to him about his best leadership tips for entrepreneurs. Edited interview excerpts follow.
Q: What are the most common habits entrepreneurs should break?
Eblin: The lowest-rated behavior in our research on leaders is the skill of pacing oneself by building in regular breaks from work. That can be a particular challenge for small-business owners who are trying to do it all. The entrepreneurial mind-set is often that of the ‘go-to’ person. You’ve gotten where you are because you get stuff done. You’re the closer. It’s all too easy to get sucked into that mind-set and lose your perspective.
Q. What behaviors can business owners adopt to get to the next level?
A. An important question that all leaders need to ask themselves on a regular basis is ‘What is it that only I can do?’ That question is not about being indispensable. Rather, it’s about thoughtfully considering the highest and best uses of your time and attention. Where is the value really added? Assess the things that only you can do, and find help for the rest.
Q. What’s the best way for entrepreneurs to learn new skills?
A. I’m a big fan of peer coaching. I encourage all of my clients to find one or more peers with whom they can connect on a regular basis. You can learn from each other’s experiences and provide some space for each other to get up on the balcony and think out loud.
Q. What’s a good strategy for making changes?
A. Don’t try to change too many things at once. Focus on the vital few and work on making those habits you can build on. Aristotle said, ‘We are what we repeatedly do. Excellence, then, is not an act but a habit.’ Who am I to argue with Aristotle? I think that’s great advice.
Q. What’s the one thing everyone needs to do to be a better leader?
A. Ask for feedback from your team, peers and clients. Then, look for the one or two most important things that would make the biggest difference in your overall effectiveness. Identify one or two action steps that are in the sweet spot between ‘easy to do’ and ‘likely to make a difference.’ From that point on, follow the advice on the back of the shampoo bottle. Rinse and repeat.
Making a decision is one of the most powerful acts for inspiring confidence in leaders and managers. Yet many bosses are squeamish about it.
Some decide not to decide, while others simply procrastinate. Either way, it’s typically a cop-out — and doesn’t exactly encourage inspiration in the ranks.
To avoid agonizing over what to do and what to skip, it helps to learn how to make better decisions. You’ll be viewed as a better leader and get better results overall. Here are five tips for making quicker, more calculated decisions:
1. Stop seeking perfection. Many great leaders would rather a project or report be delivered 80% complete a few hours early than 100% complete five minutes late. Moral of the story: don’t wait for everything to be perfect. Instead of seeking the impossible, efficient decision makers tend to leap without all the answers and trust that they’ll be able to build their wings on the way down.
2. Be independent. Good decision makers are “collaboratively independent.” They tend to surround themselves with the best and brightest and ask pointed questions. For instance, in a discussion with subject-matter experts, they don’t ask, “What should I do?” Rather, their query is, “What’s your thinking on this?” Waiting for committees or an expansive chain of command to make decisions can take far too long. Get your information from credible sources and then act, swiftly.
3. Turn your brain off. Insight comes when you least expect it, much like suddenly remembering the name of an actor you thought you’d plumb forgotten. The same happens when you’re trying to make a decision. By simply turning your mind off for a while, or even switching to a different dilemma, you’ll give your brain the opportunity to scan its data bank for information that is already stored and waiting to be retrieved.
4. Don’t problem solve, decide. A decision can solve a problem, but not every problem can be solved by making a decision. Instead, decision making often relies more on intuition than analysis. Deciding between vendors, for instance, requires examining historical data, references and prices. But the tipping point often rests with your gut. Which feels like the right choice?
5. Admit your mistakes. If your feelings steered you wrong, correct the error and fess up. Even making the wrong decision will garner more respect and loyalty when you admit you’ve made a mistake and resolve it than if you are habitually indecisive.
CareerCast.com releases its 2011 Jobs Rated Report, showing Software Engineer as the top job.
In the 2011 Jobs Rated Report released last week by CareerCast.com, the number one job was Software Engineer. The job has gotten a boost from the development of apps for iPods, tablets, smart phones and other devices.
According to the report, Software Engineer topped the list as the hottest job of 2011 thanks to its low stress, great employment outlook, strong income-growth potential, few physical demands and highly rated work environment. Rounding out the five best jobs are Mathematician, Actuary, Statistician and Computer Systems Analyst, while Roustabout (someone who works on an oil rig), Ironworker, Lumberjack, Roofer and Taxi Driver ranked as the five worst jobs.
Tony Lee, publisher of the Jobs Rated Report, says that a college education played a large role in this year’s rankings. He said, “Not only do the top 5 jobs pay more than twice as much when comparing mid-level incomes as the bottom 5 jobs ($83,777 per year vs. $30,735 per year), they all benefit from a college degree and math skills.”
The full 2011 rankings of all 200 best and worst jobs are available on CareerCast.com.
Justin James considers Silverlight, Windows Phone 7, mainstream development alternatives, Web development maturity, and the economy topics worth watching in 2011.
2011 is here! While I don’t like to make predictions per se, I do like to explore what topics I think may be important to developers for the next twelve months. Let’s jump right into my look ahead for 2011.
Silverlight
2010 was the year that Silverlight (and with it, WPF, for apps that need access to local resources) gained real momentum. The more I play with Silverlight, the less it frustrates me, though plenty of aspects of the technology still rub me the wrong way. In my opinion, the “patterns and practices” people pollute Silverlight’s ecosystem: they waste a lot of time and effort on a million frameworks that address a couple of stylistic and academic concerns at the expense of increased complexity, more indirect code, and significantly higher barriers to entry.
Fortunately, I learned that you don’t need to do things the way these folks push. In fact, the default, out-of-the-box Silverlight development experience is very similar to WinForms (for better or for worse), and the learning curve is not nearly as bad as it appears when you first survey the landscape. This is particularly good news because, in 2011, enough development is moving to Silverlight and WPF that even folks who don’t have the time and energy to learn new development paradigms will be making the move.
Windows Phone 7
In my TechRepublic columns about Windows Phone 7 development, I note that the experience hasn’t always been pleasant. While aspects of Windows Phone 7 development still frustrate me, writing applications for it is a much better experience than writing for its competitors.
I don’t know if Windows Phone 7 will be a big hit, but if it’s a success, it will be a late bloomer like Android. Remember, Android was anemic until the Droid 1 was released just over a year ago, and now it’s a big hit. That said, I think that Android is the odd man out right now. The development experience is tough because of the fragmentation: you never know, for example, what screen resolutions or baseline phone functionality to expect. Even on a particular model, you can’t count on a particular version of Android. With iPhone, BlackBerry, and Windows Phone 7, you can.
RIM has lost an incredible amount of momentum, and none of its recent attempts at regaining it have looked promising. Palm’s WebOS is on ice until HP figures out what it wants to do with it. Symbian is hugely successful worldwide, except in the United States. iPhone continues to move crazy unit numbers. If Windows Phone 7 becomes a hit, it will be at the expense of RIM and Android. I think Android has enough problems, and Windows Phone 7 enough potential, for that to happen. Windows Phone 7 is already quite good in ways that Android isn’t, for both developers and users. If I were an Android developer, I would be watching Windows Phone 7 to see where it goes.
Google Tech Talks, January 25, 2008

ABSTRACT: In this talk we examine how high-performance computing has changed over the last 10 years and look toward the future in terms of trends. These changes have had, and will continue to have, a major impact on our software. A new generation of software libraries and algorithms is needed for the effective and reliable use of (wide-area) dynamic, distributed and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder. We will focus on the redesign of software to fit multicore architectures.

Speaker: Jack Dongarra (University of Tennessee, Oak Ridge National Laboratory, University of Manchester)

Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Ph.D. in Applied Mathematics from the University of New Mexico in 1980. He worked at Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee, is a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), a Turing Fellow in the Computer Science and Mathematics Schools at the University of Manchester, and an Adjunct Professor in the Computer Science Department at Rice University. He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. His research includes the development, testing and documentation of high-quality mathematical software. He has contributed to the design and implementation of the following open-source software packages and systems: EISPACK, LINPACK, the BLAS, LAPACK, ScaLAPACK, Netlib, PVM, MPI, NetSolve, Top500, ATLAS, and PAPI. He has published approximately 200 articles, papers, reports and technical memoranda, and he is coauthor of several books. He was awarded the IEEE Sid Fernbach Award in 2004 for his contributions in the application of high-performance computers using innovative approaches. He is a Fellow of the AAAS, ACM, and the IEEE and a member of the National Academy of Engineering.
Crawljax is a tool for crawling any AJAX/GWT application. It uses WebDriver to navigate through the different states of a web application. With plugins and invariants, Crawljax can be used to perform various automated tests, for example security testing, regression testing, accessibility testing, performance testing, and cross-browser testing.
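For a sense of what driving Crawljax looks like, here is a minimal sketch using the Java builder API in the Crawljax 3 style; the URL and the crawl limits are placeholders, and method names may differ between versions:

```java
import com.crawljax.core.CrawljaxRunner;
import com.crawljax.core.configuration.CrawljaxConfiguration;
import com.crawljax.core.configuration.CrawljaxConfiguration.CrawljaxConfigurationBuilder;

public class CrawlExample {
    public static void main(String[] args) throws Exception {
        // Point the crawler at the app under test (placeholder URL).
        CrawljaxConfigurationBuilder builder =
                CrawljaxConfiguration.builderFor("http://localhost:8080/myapp");

        // Click the default set of clickable elements (anchors, buttons, ...)
        // to explore the application's dynamic DOM states.
        builder.crawlRules().clickDefaultElements();

        // Bound the exploration so the crawl terminates.
        builder.setMaximumStates(50);
        builder.setMaximumDepth(3);

        // Plugins (for security, regression, accessibility checks, etc.)
        // would be registered on the builder before this point.
        new CrawljaxRunner(builder.build()).call();
    }
}
```

Invariants and plugins hook into this same configuration object, which is how the automated test types listed above are layered on top of the crawler.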
Confession. I once left my computer on during takeoff. I know I shouldn’t have. I really shouldn’t have. But it was acting up and half the time when I shut it down it wouldn’t turn back on again unless I plugged it in, so I left it on going through security in case the screeners wanted it turned on, and then I forgot, and… well, it’s not a very good excuse. And I really really will never do it again.
I’ve read enough to know that in a perfect world, with brand new planes and pristine mobile gizmos that have never been, oh, dropped on a sidewalk, turning on a computer, or even a phone, won’t cause problems for the pilot.
But I also know that this isn’t a perfect world, that there are a heck of a lot of gizmos on a plane, and that no one really knows how they all will interact, particularly with the wiring on older planes, and I don’t want my gadget to be the one causing trouble. Since 2000, the New York Times reports, pilots have filed at least 10 voluntary reports with the Aviation Safety Reporting System describing cases in which devices used by passengers clearly were interfering with flight electronics (pilots test this, when flight systems are behaving oddly, by asking passengers to turn their gizmos on and off).
Back in 1996, fellow Spectrum senior editor Linda Geppert and I reported on a study from RTCA, a nonprofit that advises the FAA; RTCA had looked at anecdotal reports of problems, but couldn’t quantify the real risk, and urged that more research be done.
In 2006, Bill Strauss and his coauthors from Carnegie Mellon University reported in Spectrum’s “Unsafe At Any Airspeed” that, according to measurements made on 37 flights during 2003, at least one person on a typical flight makes a cellphone call, and that cellphones and other portable electronic devices can indeed interfere with normal cockpit instruments. They concluded that, eventually, a device like a cellphone will be found at fault in an accident.
This research hasn’t been repeated recently, but you have to think interference is happening more and more. Folks don’t like turning off their smart phones. Ever. And the so-called airplane mode doesn’t exactly solve the problem. On a recent flight I told the really large guy in a leather jacket crammed into the seat next to me, who was trying to make a cell phone call halfway through the flight, to please turn off his phone or put it into airplane mode. He told me he had tried airplane mode—but it wasn’t working, it wouldn’t let him make the call. And he’s probably not the only person confused.
But odds are his phone hadn’t been dropped, it was a fairly new plane, and everything was fine, as it usually is. Usually.
GIGATEPS AHEAD: The Intrepid supercomputer is at the top of the Graph 500 list.
Imagine a world in which a car's performance is judged solely by the time it takes to go from 0 to 100 kilometers per hour, ignoring fuel efficiency and other metrics. This, in essence, is the state of supercomputing today, says a group of U.S. computer scientists. People today typically judge supercomputers in terms of their raw number-crunching power, for example by asking how many linear algebra problems they can solve in a second. But, the scientists argue, the lion's share of challenging supercomputing problems in the 2010s requires quick and efficient processing of petabyte and exabyte-size data sets. And good number crunchers are sometimes bad exascale sifters.
It's time, the researchers say, for high-performance computers to be rated not just in petaflops (quadrillions of floating-point operations per second) but also in "gigateps" (billions of traversed edges per second).
An "edge" here is a connection between two data points. For instance, when you buy Michael Belfiore's Department of Mad Scientists from Amazon.com, one edge is the link in Amazon's computer system between your user record and the Department of Mad Scientists database entry. One necessary but CPU-intensive job Amazon continually does is to draw connections between edges that enable it to say that 4 percent of customers who bought Belfiore's book also bought Alex Abella'sSoldiers of Reason and 3 percent bought John Edwards's The Geeks of War.
"What we're most interested in is being able to traverse the whole memory of the machine," says Richard Murphy, a senior researcher at Sandia National Laboratory, in Albuquerque, N.M. "There's no equivalent measure for these problems that's accepted industry-wide."
So Murphy and his colleagues from other U.S. national laboratories, academia, and industry have put together a benchmark they're calling the Graph 500. The name comes from the field of mathematics (graph theory) that the benchmark draws most heavily from. And the 500 is, Murphy says, an "aspirational" figure representing what they hope someday will be a "top 500" ratings list of the highest-performing supercomputers around the world, measured in gigateps instead of gigaflops.
The current biannual Top 500 supercomputers list recently made headlines when China's Tianhe-1A took the top position, coming in at 2.57 petaflops. The supercomputers on the list are ranked using a benchmark package of calculation speed tests called the High-Performance Linpack.
Crucially, Murphy says, the point of the Graph 500 is not to run a horse race on a new racetrack. Rather, he says, they've designed the benchmark to spur both researchers and industry toward mastering architectural problems of next-generation supercomputers. And the only way to know if you've solved those problems is for the industry to include those problems in its metrics.
In fact, by a Graph 500–type standard, supercomputers have actually been getting slower, says computer science and electrical engineering professor Peter Kogge of the University of Notre Dame. For the past 15 years, he says, every thousandfold increase in flops has brought with it a tenfold decrease in the memory accessible to each processor in each clock cycle. (For more on this problem, see Kogge's feature article in next month's issue of IEEE Spectrum.)
This means bigger and bigger supercomputers actually take longer and longer to access their memory. And for a problem like sifting through whole genomes or simulating the cerebral cortex, that means newer computers aren't always better.
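A back-of-the-envelope illustration of Kogge's trend makes the compounding obvious. The starting numbers below are invented for the sketch; only the ratios come from his observation.

```java
public class MemoryPerFlopTrend {
    public static void main(String[] args) {
        // Hypothetical starting point: a 1-teraflop machine whose
        // processors can each reach 1.0 word of memory per clock.
        double flops = 1e12;
        double wordsPerClock = 1.0;

        // Kogge's observation: every 1000x increase in flops has come
        // with a 10x decrease in memory reachable per processor per clock.
        for (int generation = 0; generation < 4; generation++) {
            System.out.printf("%.0e flops -> %.4f words/processor/clock%n",
                    flops, wordsPerClock);
            flops *= 1000;
            wordsPerClock /= 10;
        }
    }
}
```

After three such jumps, a machine a billion times as fast can reach only a thousandth of the memory per processor per clock, which is why problems that hop unpredictably through memory can fare worse on newer machines.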
"Big machines get embarrassingly bad gigateps results for their size," Kogge says.
Today only nine supercomputers have been rated in gigateps. The top machine, Argonne National Laboratory's IBM Blue Gene–based Intrepid, clocked in at 6.6 gigateps. But to score this high, Intrepid had to be scaled back to 20 percent of its normal size. (At full size, Intrepid ranks No. 13 on the conventional Top 500 list, at 0.46 petaflops.)
"I think Graph 500 is a far better measure for machines of the future than what we have now," Kogge says. Supercomputing, he says, needs benchmarks that measure performance across both memory and processing.
However, Jack Dongarra, professor of electrical engineering and computer science at the University of Tennessee and one of the developers of the Top 500 list, notes that the Graph 500 isn't the first new benchmark to challenge the ...