23 July 2010

Stefan Lindegaard hit on a very important point in his blog regarding “False Positives and Fast Response Times.” One of the most important points regarding any campaign to solicit ideas is feedback. Individuals who take the time to share their thoughts to improve processes, cut costs, or create new products and services want to know if they are being taken seriously.

Very often at the launch phase of an "innovation suggestion program," management will pump up the organization and fill everyone's heads with all the good that will come from the activity and the rewards that individuals will be able to earn. This normally sets the stage for a large number of ideas to enter the “idea funnel”. However, all too often management misses the point that although rewards are great motivational tools, their absence doesn't come close to undermining a suggestion program the way a lack of unambiguous and timely feedback does.

Back in the day of manual systems (ideas carried on slips of paper shuttled from department to department for feedback), the idea management process was arduous and resource intensive. Not having enough resources often meant it took weeks, if not months, to get any feedback on an idea. This in turn led to progressively less input as the organization's brain trust became disenfranchised.

I personally remember the first suggestion program I was involved with and how I had to personally track down the program coordinator to get feedback on every idea. Many of these were waste/scrap reduction ideas that saved anywhere from $20k to $200k and could be implemented with $4,000 modifications (doable in 1-2 weeks). Even with me constantly pushing the coordinator and relevant stakeholders, these ideas took 2-3 months on average to implement, and had it not been for my stubbornness, perhaps I would have given up on finding any more scrap reduction opportunities.

Not providing conclusive and prompt feedback will quickly undermine an innovation suggestion program. Today's idea management software goes a long way toward helping control the feedback situation. On these modern platforms, users get feedback from peers, coordinators, evaluators, and implementers through email or built-in message boxes. Since many of the tasks, such as routing and data collection, are now centralized on a software platform, coordinators, administrators, and stakeholders in general have quicker access to the information needed to accomplish their jobs. With less effort required to finish them, they are able to give feedback sooner and allow individuals to move on to the next suggestion.


Posted on Friday, July 23, 2010 by George R.

1 comment

04 June 2010

(DISCLAIMER: Although I would not be surprised if a similar idea could solve the problem, there are several factors that obviously have not been taken into account with this design [i.e., the blowout preventer is partially engaged]. This article is more an illustration of how over-complicated we can make things when, as humans, we are unprepared for worst-case scenarios and fail to properly frame the problems that need to be solved.)

My 7-year-old son figured out how to stop the gushing oil well. As a ‘manager’ I ran a small experiment and led him to a sensible answer. I turned on the garden hose and asked my son how he would stop the flow of water without turning off the spigot. His first reaction was to step on the hose. When that didn’t fully do the trick, he proceeded to cover the end of the hose with his hand. By now the flow was only 20% of the original, and my son was drenched. He then looked at me and said, “Dad, if we can push something into this ‘hole’ so that I can hold it with my hand, I think I can stop it!” EUREKA! The kid figured it out…

I quickly sketched an adapted version of the “plug-and-hand” for the gushing well. After all, the two physics concepts at play here are 1) pressure and 2) the area of the orifice… and the rules that apply on the surface also apply 5,000 ft under water.
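As a back-of-the-envelope illustration of those two concepts, here is a sketch in Python. The pressure differential and orifice size below are invented placeholders for illustration, not actual well data:

```python
import math

# Force pushing a plug out of a circular orifice is simply
# pressure times area: F = P * (pi * d^2 / 4).
def holding_force(pressure_pa: float, orifice_diameter_m: float) -> float:
    """Force in newtons that must be resisted to hold the plug in place."""
    area = math.pi * (orifice_diameter_m / 2) ** 2
    return pressure_pa * area

# Hypothetical figures: a 0.5 m opening and a 30 MPa net pressure differential.
force_n = holding_force(30e6, 0.5)
print(f"Force on plug: {force_n / 1000:.0f} kN")  # roughly 5890 kN
```

Whatever the real numbers are, the point stands: once you know the pressure and the orifice area, sizing a mechanical restraint for the plug is straightforward engineering, not rocket science.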



The problem that has plagued BP is that they failed miserably at containment (and their management is fully responsible for this). In their mind, this oil is being wasted, and every solution they come up with is one way or another trying to ‘recover’ some of the oil. They have been trying to somehow capture and transport the oil to surface ships, spending all their brain power on pie-in-the-sky solutions that have never been tried before. This problem is systemic and likely not isolated to BP but common to every single oil company out there. When you consider the cost of drilling a well, every containment solution that has been developed has to do with the ability to continue recovering the oil. Even the blowout preventer is designed to be “re-tapped”. Except for the ‘top-kill’ process, not a single solution exists in an oil company’s arsenal to fully and completely disable a well.

In the context of ideation or brainstorming, the problem or challenge was incorrectly framed by BP management and perhaps even the US government. BP has clearly been looking for solutions that can recover some or all of the oil (until that relief well is completed), and the US government has been focused on cleanup and containment (how to keep the oil from damaging the shoreline)… but no one has truly looked at how to effectively plug the well.

Granted, this solution is in its simplest form, but I’m sure that the “engineers” who have the schematics of the blowout preventer, the size of the pipe, and the oil pressure that must be suppressed can define a variation that can effectively “PLUG THE WELL”.

The irony here is that in BP’s struggle to ‘salvage’ some of the oil in order to sell it, they have effectively added more costs to the cleanup bill.

So the moral of the story… there are two:
1) As Baden-Powell once said: “Be prepared.”
2) Not framing the problem correctly can lead to a tremendous waste of “collective intelligence.” This can be the difference between over-engineering a solution and finding one ‘under our noses’.




Posted on Friday, June 04, 2010 by George R.

5 comments

28 May 2010

Before the advent of the web, and more specifically, Web 2.0 technologies, effective innovation was a slow process and was usually limited to large organizations which had the resources to genuinely filter through thousands of ideas.

According to the American Management Association’s (AMA) data, most companies lack a formal process for selecting ideas. They surveyed 1356 global managers and found that:
  • 48% “don’t have a standard policy for evaluating ideas.”
  • 17% use an “independent review and evaluation process.”
  • 15% said “ideas were evaluated by the unit manager where the idea was proposed.”
As Robert Tucker states, “An effective selection process connects your “idea funnel” to your “idea pipeline.” Without it, this winnowing is haphazard, hierarchical, and discouraging to would-be innovators.”

One of the best examples of idea management connecting the idea funnel to the idea pipeline is that of Disney. According to Peter Schneider, who was president of Disney's feature animation division, many of Disney’s best ideas came from their home-grown “Gong Show”. Several times a year, employees had an opportunity to pitch their ideas to Eisner (Disney’s CEO) and several other top managers. Up to 40 of them would perform or present their ideas until a ‘gong’ sounded. Eisner and his team would then discuss each one and decide which ones fit strategically with their vision. Disney Stores, as well as many of their animated features, were born through this process.

Eisner and the Disney team clearly understood the value of listening to their employees and applying strategic criteria to each idea. However, given that top management had to sit, sift, and decide based on input from 40 employees, the process was highly inefficient and ineffective. For one, most employees would faint at the thought of standing directly in front of Eisner to pitch their idea, so only the bravest stood a chance of presenting. Second, those 40 probably came through some resource-intensive pre-vetting; otherwise Eisner et al. (and their expensive salaries) would have had to listen to some pretty bad ideas. Finally, this took place only several times a year, so it was not a continuous, day-to-day process.

In a company with tens of thousands of employees, ideas lie everywhere, but finding the best ones has always been a problem because vetting them has traditionally been a resource-intensive task. The numbers reported by the AMA reflect management teams not wanting to dedicate the time, effort, and resources to such a task.

Idea management software is changing the game. Most players in the market, including ourselves (INCENT), have developed effective algorithms and ‘game-like’ situations to help ideas flow into the funnel at an unprecedented rate, reduce the pre-vetting strain on management resources, and ultimately yield a higher number of high-quality ideas for management to review.

So how do these systems work? In a nutshell (and I won’t go into the pros and cons of the different algorithms), ideas are posted via the web to a common platform accessible globally by all users. The users can review, add feedback, and vote on the ideas they feel passionate about. The highest-ranked ideas are then reviewed by an expert committee and rigorously analyzed against a predetermined criteria scorecard (the better idea management systems include this feature; with the most basic ones, this process goes into ‘manual’ mode).

Criteria are usually defined around the company culture and values. For example, Bank of America’s scorecard reflects the following: ease of implementation, associate impact, customer delight, and revenue potential. Once the criteria are applied to the ideas, those with the highest ‘expert’ scoring will likely be introduced to the idea pipeline.
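A criteria scorecard like this boils down to a weighted average of expert ratings. Here is a minimal sketch, using the Bank of America criteria mentioned above as labels; the weights and ratings themselves are invented for illustration, since real systems define these per company:

```python
# Hypothetical weights per criterion (must sum to 1.0).
CRITERIA_WEIGHTS = {
    "ease_of_implementation": 0.2,
    "associate_impact": 0.2,
    "customer_delight": 0.3,
    "revenue_potential": 0.3,
}

def score_idea(ratings: dict) -> float:
    """Weighted average of 1-5 expert ratings across all criteria."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

idea = {
    "ease_of_implementation": 4,
    "associate_impact": 3,
    "customer_delight": 5,
    "revenue_potential": 4,
}
print(score_idea(idea))  # ≈ 4.1
```

Ideas whose weighted scores clear a threshold would then move from the funnel into the pipeline; everything below it gets feedback and archiving rather than silence.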

However, the greatest value in these systems is not their ability to filter the ideas and help organizations identify the best ones more efficiently, but rather the transparency of the process, which keeps users engaged, and the centralization of all the ideas, which makes them searchable by the entire organization. The ‘lessons learned’ database built through use of the system is what systematically helps improve the quality of the ideas and the rate at which problems are solved throughout the organization. Especially with intra-innovation, or continuous improvement, individuals looking for ways to solve problems may find that similar or identical problems have already been addressed in other parts of the organization. In lean parlance, these systems enable “Yokoten”, which in Japanese means “across everywhere”.



Posted on Friday, May 28, 2010 by George R.

No comments

11 May 2010

Taiichi Ohno used to say, “Where there is no standard there can be no Kaizen,” which is often paraphrased as “you can’t improve what you don’t measure.” However, these two sayings have a critical missing element: the term “accurate”. Many organizations fall victim to poor data interpretation, and instead of improving their processes they do them more harm.


Good data analysis is an integral part of good idea management systems. Breaking down the raw data and identifying trends in idea quality, user participation, and idea aging can help program administrators improve the process; however, the wrong slicing and interpretation of the metrics can quickly hinder it.

This concept always brings me back to my love of golf and in particular one of my pet peeves… the “Putting Average Leaderboard”.

It has always struck me as odd that one of the statistics most used by sports analysts to measure a pro golfer’s performance is their putting average. Normally, when a golfer has a bad year following a good one, they will usually look at their putting average as the culprit for the fall from grace.

Needless to say, putting accounts for close to 50% of strokes on a golf course, and putting averages are mostly the result of a golfer’s ability to get the ball close to the hole with the other 13 clubs before using the putter to finesse the ball in. When only 0.10 putts per hole separate the top putter from the 80th on the list, and when you realize that 2010’s top two money leaders, Ernie Els and Phil Mickelson, are ranked 54th and 53rd respectively on the average-putts list, it is time analysts realized that this list does not come close to predicting how good a golfer is.

As a matter of fact, the top golfers find themselves putting for birdie more often than the golfers at the top of the putting list. The golfers with the lower averages are usually the ones having to chip and putt for par, and these are precisely the ones we seldom see winning a green jacket and the first ones in line to join the Nationwide tour.

For Phil and Ernie, if they were subject to ‘management decisions’ made by interpreting this data, they would probably be sent to a ‘putting re-certification’ class. Unfortunately, this would cause them to spend time away from sustaining and developing their other skills and would likely lead them to fall off the top of the money list.

The irony is that they would likely climb up the putting charts, giving ‘management’ the impression that the re-certification classes were effective, while failing to realize that the classes had hindered their ability to be top performers by placing them in a position to sink more ‘PAR’ putts.

Thus, from an analytical standpoint, the putting-average list has no value; the data is a red herring that, if followed as most analysts interpret it, would lead good golfers to lose their winning ways.

… and as the old saying goes… “There are two things that don’t last very long: dogs chasing cars and pros putting for pars”


Posted on Tuesday, May 11, 2010 by George R.

No comments

03 May 2010

Seeing that my natural vacation sanctuary, where I normally go to break from life’s stresses and enjoy time with my family, is about to be permanently destroyed, I decided to break with the idea management and lean tone of this blog in order to reflect a little on quality management. For many years, I helped Mercedes-Benz suppliers improve their quality through lean tools, but also through the use of statistics. Even though I was never formally trained as a Six Sigma ‘grasshopper’, much less a Sensei, I did use many of the statistical tools found in the Six Sigma toolbox.

The FMEA has always been a key tool in the auto industry for identifying areas of product quality risk and planning how to mitigate them. Components that can play a role in a potential catastrophic failure, loss of life, or loss of property get treated with extra care. To generalize, these components are ‘serialized’, and data is recorded along the entire manufacturing chain. Every critical process is monitored, and equipment is designed to “inspect” its own quality; in the case of critical characteristics, it is also designed to check the quality of preceding processes. Redundancy is built in so that if one inspection process fails, the next one will catch the defect. These redundant checks are designed in layers, and their ultimate goal is to ensure no bad parts exit the manufacturing process.

With that said, to put Six Sigma quality in perspective, aircraft are a good example of redundant systems at work. Critical systems in aircraft are designed with multiple backups. (Keeping the math simple, and not using real-life numbers:) If a hydraulic system has a natural tendency to fail once in 100,000 uses, adding an identical backup system ensures that, in the worst case, a simultaneous failure will only occur once in 10 billion uses.
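The arithmetic behind that claim is simple probability: independent failure probabilities multiply. A minimal check, assuming (as the simplified example does) that the failures really are independent:

```python
# With a per-use failure probability p for each of n redundant systems,
# a simultaneous failure of all n occurs with probability p**n.
def simultaneous_failure_odds(p: float, n: int) -> float:
    return p ** n

p = 1 / 100_000  # one failure per 100,000 uses
print(1 / simultaneous_failure_odds(p, 2))  # ≈ 1e10: once in 10 billion uses
```

The caveat matters: if both systems share a common cause of failure (the same fluid line, the same maintenance error), they are not independent and the real odds are far worse than the multiplication suggests.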

In general there are two major reasons for quality failures. The first is the failure to identify a potential failure mode, and thus to guard against it; this is normally due to a lack of historical reference or lessons learned. The second is by far the worst: the failure of people to follow established procedures. This is critical because it is not a reflection on the actual workers, but rather a reflection on management.

Bunji Tozawa said, “Blame the process and not the person.” What he alludes to is that management is responsible for the processes, and thus a failure is essentially their fault.

Like the auto industry, big oil relies on suppliers, and it is critical to ensure these suppliers manage and maintain their internal procedures. In the auto industry we not only measure and rate suppliers by their ability to supply good parts, but we also audit their adherence to their quality systems and have different means of flagging potential problems before they occur. This proactive approach is taken to ensure that human lives are not lost driving cars.

Having a deep understanding of quality systems and redundancy, and having personal ties inside the oil industry, I find it almost impossible to wrap my head around the sinking of the BP platform and believe, given BP’s overall safety and environmental record (pipeline leak in 2006, refinery explosion in 2005), that it all happened because of ‘bad luck’ or failed equipment! This wasn’t a failure as in case one (not identifying a potential failure mode) but a failure to follow procedures and adhere to best practices.

One reason the global oil safety record is generally good is strict processes and procedures. (I also wrote about managing safety and how Schlumberger uses idea management systems to manage identified safety risks.) Keep in mind that there are more oil wells in operation than there are aircraft in the air on any given day, and the number of catastrophic incidents pales in comparison to the aeronautical industry. (Here’s an old CNN article showing the worst accidents through 2001.)

The bottom line: when the dust clears, a thorough investigation will likely reveal a lack of self-auditing and supplier-auditing practices inside BP, and management’s inability or unwillingness to ensure that the entire corporate culture is driven by adherence to established procedures. A good indicator here will be BP’s response. As they start to blame Transocean (the operator of the rig), a faulty blowout-control system, a missed maintenance step, or operator error, what they will really be saying is that management has been incompetent and unable to drive a corporate culture that adheres to strict safety and environmental procedures.


Posted on Monday, May 03, 2010 by George R.

No comments

09 April 2010

People have asked me in the past year what Lean Idea Management means. They assumed it had to do with managing Kaizen in lean organizations. Although our idea management tool does bring value to lean organizations, it applies equally well to all organizations.

The reason I named the blog Lean Idea Management, and really what drove the design of EurekaTool, was the need to eliminate the non-value-added steps often associated with that assembly line called “ideation-to-implementation”. There are many areas of waste in a typical continuous improvement process, even in lean companies.

For one, most ideas are recorded on paper, or perhaps a Kaizen card. The idea has to be communicated to several individuals in order to align needed resources or get buy-in; typically this happens in meetings. Ideas and supporting documentation are put in manila folders and passed around for evaluation, and eventually, if the idea is good, it is handed over to an implementer or a team of implementers. Most organizations I have seen also have a coordinator or team of coordinators who walk these folders around, hand them over, retrieve them, and even coordinate the meetings. Most of the time these coordinators, or gatekeepers, become bottlenecks under the extreme load of keeping the idea assembly line moving, and at times they have been known to make ideas ‘accidentally’ disappear.

If you try to picture this ‘idea assembly line’, you can envision a lot of people walking to and from these ideas, instead of the ideas coming to them. All this walking back and forth is non-value-add. Then you have the bottleneck: the gatekeepers sifting through tons of paperwork and trying to coordinate the whole thing while chasing down individuals who have sat on ideas for weeks (if not months). Imagine this process with 120,000 ideas.

Essentially, the manual process boils down to an assembly line that is not balanced (too much work on a few individuals), with no concept of Kanban: there is no natural method of pulling the idea through the process, so it constantly has to be pulled and pushed by the coordinators (literally walking back and forth with the folders). Ultimately there are no Poka-Yokes or visual flags alerting individuals and management that a good idea has not been evaluated on time, has not been implemented, or has perhaps been lost (which is common in these manual systems).

So in general, as lean organizations engage in continuous improvement, very often they fail to improve that continuous improvement process itself, and this is where idea management software can truly solve many of the problems, add-value, and eliminate the waste encountered in this most necessary process (if you’re going to sustain Lean).

1) Elimination of paperwork and the eventual transcription into a database or spreadsheet. Ideas are entered by idea generators via the web (typically an on-site kiosk).
2) Routing of ideas to multiple champions, depending on area or idea type, thus eliminating the overloading of the coordinator.
3) Electronic notification of task assignments and full transparency (to everyone) of where the idea is on the ‘assembly line’.
4) Alarms and escalation for tasks which are overdue.
5) And last but not least, the metrics and visual communication tools needed to keep everyone informed of how well the process is performing: idea quality, throughput time at different stages, participation, and (of course) cost or process savings generated.
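Points 2 through 4 above can be sketched as a toy routing-and-escalation loop. The champion names, areas, and seven-day SLA below are hypothetical, not taken from any particular product:

```python
from datetime import datetime, timedelta

# Hypothetical area-to-champion mapping and evaluation deadline.
CHAMPIONS = {"assembly": "maria", "logistics": "raj", "quality": "wei"}
EVALUATION_SLA = timedelta(days=7)

def route(idea_area: str) -> str:
    """Send the idea to the champion responsible for its area."""
    return CHAMPIONS.get(idea_area, "default_coordinator")

def is_overdue(submitted: datetime, now: datetime) -> bool:
    """True when the idea has waited longer than the evaluation SLA."""
    return now - submitted > EVALUATION_SLA

print(route("logistics"))  # raj
print(is_overdue(datetime(2010, 4, 1), datetime(2010, 4, 12)))  # True
```

An overdue idea would trigger a notification and, if still unaddressed, escalate up the chain, which is exactly the Poka-Yoke the manual folder-shuffling process lacks.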

Bottom line: the software becomes the virtual assembly line, and the resources needed to evaluate and implement get engaged only when necessary. The notifications and alerts keep the line moving, and the administrator is no longer a full-time resource assigned to keep the process moving, but instead a part-time maintenance worker who can spend time on other value-add processes throughout the organization.

Ultimately, when the system is inefficient, individuals who have great ideas lose faith in it and cease their contributions, and there is absolutely no value in ideas, only in their implementation.




Posted on Friday, April 09, 2010 by George R.

2 comments

05 April 2010

If there is anything to learn from the Toyota accelerator fiasco, it is that there is a vast disconnect in the reporting systems organizations currently use to stay abreast of safety situations which could be deemed ‘catastrophic’ (loss-of-life impact).

Part of the reason Toyota did not address the accelerator problem more decisively was that the external sensors (data collection points) they were utilizing were not well integrated and collected a lot of erroneous or misleading data. NHTSA was perhaps one of the biggest contributors to this problem, by not feeding all accelerator issues back to Toyota and by providing collateral data indicating the problem may have been driver error, thus adding ‘noise’ to the data needed to fully understand the extent and root of the problem.

Now, in no way am I saying Toyota was innocent in this matter, but I do want to defend their methodical way of not jumping to conclusions without letting the data speak first. One of the core teachings of lean is that “you cannot improve what you cannot measure”; however, measuring also has to be accurate. If data is inaccurate, as was the case with the acceleration problem, then it is understandable that there was a delay before Toyota clearly saw that there was a major problem to be addressed.

The dilemma stems from the fact that this dirty data cannot be used as a defense. To a degree, most auto manufacturers, and for that matter most organizations, are using external sensors that are potentially flawed and inaccurate, let alone internal ones. When it comes to safety matters there really is no room for error, since bad data is as good as no data, and people’s lives are worth the extra time and effort to ensure data is accurate and handled with priority.

Recently I had the opportunity to speak with someone at Schlumberger regarding their QUEST program. The QUEST program is an idea management system focused on safety issues and based on the DuPont STOP methodology. (For those not familiar with STOP, essentially it makes it every employee’s responsibility to identify and report all areas of safety risk.) Schlumberger realized that in the oil services arena risks are plentiful and thus decided to put some solid structure behind their process. Their QUEST system manages all global incidents, keeps every Schlumberger employee abreast of the latest incidents and risks, and provides them with an immediate ability to record risks they detect and possible solutions. The key component, however, is the escalation: top management is advised of risks which have not been addressed promptly, and of catastrophic incidents (loss of life, environmental damage, etc.).

As I learned more about Schlumberger’s process, I also learned that in the oil services industry, safety (and the environment) is so important that not having solid processes to manage it can cost contractors their business with big oil (Exxon, Shell, Chevron, etc.).

Applying this lesson to Toyota, and to all the other Toyotas out there: at the first indication that an accelerator pedal was involved in a loss-of-life incident, Mr. Toyoda should have been notified. Was he? Probably not; perhaps it was only after several losses that someone, maybe a data clerk sifting through warranty data, noticed there was a possible problem.

In conclusion, with the kinds of technologies and applications available on the market, there is no reason why any company should not adopt idea management software to streamline its continuous improvement initiatives and, more importantly, keep its workers and customers safe. Companies spend billions of dollars each year on software to keep their inventory and orders under control; why not spend some of that on keeping loss-of-life and lost-time incidents under control? After all, as we learned with Ford and Firestone, and now with Toyota, loss of life can be costly, especially for those who fail to take all measures necessary to protect it.


Posted on Monday, April 05, 2010 by George R.

No comments

01 April 2010


I am guilty of making the same mistake many make about innovation: believing it’s usually about state-of-the-art inventions. I recently blogged about the Apple iPad and did a good job of criticizing its lack of new technology. After all, the iPad is essentially something between a large iPhone and a small laptop that most companies will be able to emulate. From the outside this doesn’t look like a great invention; however, leave it to Chuck Frey to set me straight.

I came across his recent blog post “Seth Godin on the power of remarkable ideas” and realized that I had failed to acknowledge what Apple does best. They have historically not been a technology leader. (The Mac (1984) was not the first computer to have a mouse or a user-friendly GUI; the iPod was not the first mp3 player.) However, Apple has always had the ability to perfect a product or service and continually improve it until it shines. They are able, better than any other company, to determine what people really want and deliver value to them… after all, innovation is about delivering VALUE!



Posted on Thursday, April 01, 2010 by George R.

1 comment

29 March 2010

What’s in it for me? This is the very first question most participants in an open innovation initiative will ask. One aspect often overlooked with OI is what the process or organization will give back to the participants. It is very easy to give out cash prizes for winning ideas or participation, but what is the formula for ensuring participants keep coming back in an open innovation scenario?

Creativity and the Performance Paradox, by Steve Shapiro, is perhaps one of the best explanations I have seen of the right balance of motivation that has to be provided in order to get the desired performance. It reflects on the Yerkes-Dodson law, where performance increases with motivation up to a certain point, after which performance drops. (I want to credit Stefan Lindegaard, who suggested I look up some of Steve’s contributions on this subject, thus saving me a long dissertation on the topic.)

Keeping this in mind, there is one more element of the motivation structure in an open innovation culture that is sometimes forgotten. The ‘learning’ element is one of the most valuable weapons for ensuring that the OI culture matures and becomes highly effective toward the expected goals of the OI initiative. Many participants in an OI activity join because of the rewards, but as they engage with the process they discover an additional reward in what they are learning from the activity and from others’ involvement. (This could perhaps make a case for continued participation without rewards; however, at a given point, as their experience and knowledge grow, the ‘learning’ reward begins to diminish.)

From the OI side it’s critical to keep these ‘learned’ individuals coming back. They are no different from employees you’ve spent significant resources on, and the OI initiative benefits each time they return. These individuals are more adept at the process, and with each return their ideas increase in quality and they become better at making those critical ‘idea connections’ that need to occur as multiple individuals from different walks of life collaborate on a particular problem (see Innovation: Collaboration has a multiplier effect).

In the attempt to ensure participants return, I like to take a page from the airline industry’s handbook: frequent flyer miles. Taking a single flight on a particular airline will get you nothing but points, but continuing to fly the airline loyally can eventually earn you free tickets. What this means is that there has to be a motivation transition or connection from one OI initiative to the next in order to maintain participant loyalty.

In OI there really is no need to provide a large ‘home-run’ prize; after all, as Steve Shapiro clearly explained, participants would be engaged for the wrong reasons and their creativity, which is essential to OI, would likely be diminished. The learning experience is a significant reward in itself, and coupling it with a structure that allows participants to build incrementally toward a reward (as in the frequent flyer case) can be a powerful weapon for securing their loyalty to the process, while in return they increase their ability to add value with each subsequent open innovation challenge.

The big challenge in Open Innovation (Part I)



Posted on Monday, March 29, 2010 by George R.

2 comments

22 March 2010

I started investigating stats regarding open innovation to see what I could find on the idea quality ratio. In simple mathematical terms, this ratio is the number of approved (or implemented) ideas divided by the total number of submitted ideas.

Over the years, data from most idea management processes has shown that in programs where this ratio is high, participation, and more importantly return participation, runs high and tends to increase. However, when this ratio is low, participation tends to decline. The reason is simple: most people don’t take rejection very well.
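For the record, the ratio is trivial to compute; the sample counts below are invented for illustration:

```python
# Idea quality ratio: approved (or implemented) ideas over total submissions.
def idea_quality_ratio(implemented: int, submitted: int) -> float:
    if submitted == 0:
        return 0.0  # no submissions yet, so no meaningful ratio
    return implemented / submitted

# A hypothetical program: 120 ideas implemented out of 1,500 submitted.
print(idea_quality_ratio(120, 1500))  # 0.08
```

Tracked over time per program, this single number is a leading indicator: a falling ratio tends to precede falling participation, for exactly the rejection-aversion reason above.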

I came across this blog entry from Stefan Lindegaard about my old employer Daimler’s attempt at open innovation. He quotes a very telling figure from Daimler’s “Style your Smart” contest: 50,000 design ideas were received from over 100 countries, and only six prizes were given out (three for design and three for active participation in the contest by evaluating, uploading, and commenting on designs).

The obvious result was that the winners had a great experience and would probably participate in Daimler’s next open innovation initiative in a heartbeat. But what about the thousands of non-winners who entered ideas and helped evaluate and rate the designs? Will they spend the time and effort to do it again?

Daimler’s attempt, to a degree, recognized that engaging individuals requires more than declaring a design winner. “Style your Smart” cleverly gave out prizes for participation, giving individuals a little more hope that they could win something, but for the most part it was a marketing gimmick that lacked a clear vision of how to re-engage the original participants in Daimler’s next OI initiative.

What is clear is that in developing an open innovation culture, one of the most important questions to answer is... how do you guard against the inherent erosion of contributors?

The big challenge in Open Innovation (Part II)




Posted on Monday, March 22, 2010 by George R.

2 comments

15 March 2010

A new book by Youngme Moon promises to be “Different”. In this age of bigger, better, faster, it’s time for companies and people to be different. Organizations are beginning to realize that they need to instill a culture of innovation in order to stand out. Whether it’s shop-floor teams continually innovating improved processes to eliminate non-value-added steps, or online communities of customers and employees brainstorming the next product breakthrough, managers in today’s business world will need to challenge their teams to break with paradigms and go against the grain in order to lead.




Posted on Monday, March 15, 2010 by George R.

1 comment

09 March 2010

The common recommendation from TPS experts is that Kaizen teams must be cross-functional. The usual rule is to form the team with one third from the target area, one third from upstream and downstream areas, and one third from external areas (e.g., finance, engineering, HR).
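The ‘thirds’ rule above can be sketched as a simple allocation function. This is just an illustrative sketch; the function name, the bucket names, and the tie-breaking order (remainder goes to the target area first, then upstream/downstream) are my own assumptions, since the rule itself doesn’t say how to split an uneven team.

```python
# A sketch of the 'thirds' rule for Kaizen team composition: roughly one
# third from the target area, one third from upstream/downstream areas,
# and one third from external functions (e.g., finance, engineering, HR).
# Remainder handling is an assumption, not part of the stated rule.

def thirds_allocation(team_size):
    """Split a team of team_size into the three buckets, giving any
    remainder to the target area first, then to upstream/downstream."""
    base, remainder = divmod(team_size, 3)
    return {
        "target_area": base + (1 if remainder >= 1 else 0),
        "upstream_downstream": base + (1 if remainder >= 2 else 0),
        "external": base,
    }

print(thirds_allocation(9))
# → {'target_area': 3, 'upstream_downstream': 3, 'external': 3}
print(thirds_allocation(8))
# → {'target_area': 3, 'upstream_downstream': 3, 'external': 2}
```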

This makes sense and sounds reasonable; however, I’ve found that perhaps the best way to explain the benefit of the cross-functional structure of a Kaizen team is to look into the ‘science’ of innovation. After all, Kaizen IS innovation… albeit most people associate the latter, by default, with radical innovation (e.g., iPods, BlackBerrys, NASA). Kaizen, on the other hand, is in 99% of cases about incremental innovation, but the best practices for achieving incremental and radical innovation are the same.

Perhaps one of the best articles I’ve read about innovation was in the New Yorker (see this blog post), and it clearly shows how great inventions come from sharing the right information at the right time. It details how a company, Intellectual Ventures, was founded and became one of the greatest inventing organizations of our time. (IV, as it’s known for short, is 100% dedicated to the business of inventing, patenting, and licensing its inventions.) So how do they do it? While most organizations load up their R&D departments with engineers, doctors, chemists, and technical gurus, IV loads up with lawyers, doctors, pilots, musicians, paleontologists, chemical engineers, programmers, teachers, and everyone else you can think of, and launches brainstorming sessions to tackle a myriad of problems. What they realized is that inventions seldom come from one individual; instead they come from a set of circumstances that bring multiple experiences and pieces of information together in one place to help solve a problem.

That’s why a Kaizen team benefits from being cross-functional, and why the ‘thirds’ rule makes good sense. The more varied the experiences, the greater the chance of succeeding with a solution.

So as I like to say when emphasizing the benefits of teamwork… Nobody has the answer, but everybody has the answer.



Posted on Tuesday, March 09, 2010 by George R.

4 comments

03 March 2010

One thing that I was taught when I was young was that sharing information with others and not keeping secrets could make you friends with a lot of people. In today’s market, it looks like major corporations are starting to leverage that axiom to further expand their corporate dominance.

Being a good corporate citizen no longer means just charitable giving to local communities and hiring interns from local schools. The definition has been transformed by some of the top corporate citizens to include direct participation from the community in identifying the next products, services, and trends. In the case of IBM’s 2006 Innovation Jam, the company allowed its corporate crème-de-la-crème to openly collaborate and share ideas with ordinary citizens, and the result was an impressive 46,000 ideas from employees, family, friends, and partners, of which 10 were identified for further funding.

What has become clear is that cloud-based idea management platforms, which are configured for open innovation, are quickly becoming the tool of choice for these modern age corporate citizens to proactively engage their communities.

Posted on Wednesday, March 03, 2010 by George R.

No comments

26 February 2010

I stumbled across this video on Jon Miller's blog. Senate hopeful Paul Akers discusses Lean on Fox News. If there is one thing I'd like to see Congress do, it's pull a page from Taiichi Ohno's playbook.

Posted on Friday, February 26, 2010 by George R.

No comments

One of the greatest tech movements in history has been open-source software. At the birth of the movement many scoffed and thought that making source code available for free would lead to the financial demise of those who shared their code.

The following NY Times article explores how many companies (IBM, DuPont, and others) known for innovation are following suit with open innovation. They are opening virtual communities for discussion on ‘green’ topics. They are involving communities, and in the process pledging their environmental patents for common use. After all, many of these patents have to do with sustainability, and sharing them makes them more effective.

Posted on Friday, February 26, 2010 by George R.

No comments

20 February 2010

Due to a misunderstanding of what lean manufacturing really is, many believe its ultimate goal is to make things cheaply and cut every corner possible. So it is understandable that those who have this impression of lean think it’s the reason for the problems Toyota is currently having.

The reality is that those problems are not rooted in lean. In fact, being lean is what has kept Toyota competitive and kept it from following the path of Chrysler and GM. Its agility in quickly adjusting production volumes and shifting the model mix to match the market is a testament to the effectiveness of lean at Toyota. The implementation of Hoshin Kanri, where every plant, department, and person at Toyota aims for common goals using standardized communication, is a major reason why Toyota is successful. In a perfect world, however, Toyota would also have its entire supply base aligned with its goals and standardized practices… and that is where the problem is rooted.

The lack of Hoshin Kanri at many suppliers turned them into business gluttons and prevented them from effectively aligning their internal goals across the organization. Many became highly exposed to Chrysler and GM because they lacked solid diversification strategies and ran inefficient product and plant implementations. Added to sporadic implementation of lean and poor standardization, these suppliers came under immense cost pressure just to survive and remain solvent.

The irony in all this is that most auto manufacturers certify their supply base; however, there is only so much that can be expected, and in most cases an ISO/TS 16949 certification with some minor adherence to OEM processes is all that can realistically be demanded of them. Outsourcing components is the only way to remain competitive in this market, and if an OEM like Toyota were to demand strict adherence to its standards and corporate goals, it would probably find itself unable to certify suppliers and, worse yet, would probably lose ground competitively due to the insourcing that would inevitably take place.

The string of recalls that has so visibly hit Toyota will not end with them. Every automaker that relies on a supply base nearly decimated by the global recession and the near collapse of GM and Chrysler will be in the news soon. (Trust me on this one.)

Posted on Saturday, February 20, 2010 by George R.

No comments

27 January 2010

In late 2009 I visited a supplier that had been placed on containment for poor quality. (Due to the severe recession, travel budgets were cut and direct engagement with this supplier was delayed by over a year.) They were constantly fighting fires and never able to get into a proactive mode where they could identify quality risks before problems occurred. For a long time we were stuck in an endless cycle: quality problem – corrective action – problem resolved. Every time one problem was resolved, another came to light.

The supplier had put together a Six Sigma team, and over the course of several months they were able to fix many of the quality problems. However, when I arrived at their plant to verify the improvements and lift the containment, I realized that there was still plenty of work to be done. Just by walking and observing, it was easy to identify areas of risk.

I handed out sheets of lined paper and asked the Six Sigma team to follow me to the floor. We went to the first station (an injection molding process), and I asked them to stand and watch the operation for 30 minutes and write down anything they thought could be improved. We did this for an entire afternoon on several key processes, and by the end of the day we sat down to compare notes. As simple as this activity sounded, the looks of “I can’t believe we never thought of that” on their faces told me I had given them a breakthrough tool. They found simple yet effective fixes to problems they had struggled to identify, simply because they hadn’t been observing.

I credit one of my mentors, Ken Martin, for introducing me to this basic, but useful tool when he stressed the importance of observing. In my later years I learned that a famous Toyota mentor and executive, Taiichi Ohno, used to draw a circle on the ground and tell people to stand in it and observe a process for hours.

Posted on Wednesday, January 27, 2010 by George R.

1 comment

21 January 2010


Lean complements Innovation complements Lean...

A great article published on Wharton University’s knowledge blog explores how lean and innovation can complement each other to make companies successful.

Article


In my personal experience in auto manufacturing, it was almost a given that the first months of production of a new product required the hiring of additional personnel to address inefficiencies that could not be eliminated prior to launch. Great products were brought to market on time but with a lot of waste (time and materials) that still needed to be eliminated.

The Lean cycle, following the innovation cycle, ensured that whatever the company brought to market would quickly become not only a better product, but also a cheaper one to make.

Innovation sometimes brings great inefficiencies with it, especially when being first to market forces a company to rush a product’s implementation. In general there isn’t enough time to work out all the problems in the design and the related manufacturing processes; to win the innovation game, you can’t afford infinite time to refine before implementation. In the post-implementation phase, companies require a culture of continuous improvement and waste elimination… and this culture is called Lean.

The Lean Innovation (LI) cycle is a good way to guarantee success, since it does not require the innovation phase to deliver perfection, and the innovators can thus quickly focus on the next great invention.

Posted on Thursday, January 21, 2010 by George R.

2 comments