Thursday, July 7, 2011

One Size Does Not Fit All in Risk Management

Written by Ryan Lloyd, Customer Requirements Manager, MKS a PTC Company

Risk Management is a critical process in developing safe and effective medical devices as well as a requirement for regulatory approval. However, there is no single best method for managing risk that can be applied in all circumstances. Organizations often need to adapt accepted and new processes to meet their needs and to fully leverage the benefits of Risk Management.

For example, there are multiple approaches to Risk Management, and each has its benefits and drawbacks. Traditional approaches, such as Failure Mode and Effects Analysis (FMEA), are often described as 'bottom-up': they require examining every design element and ensuring each possible failure is appropriately mitigated. In contrast, standards such as ISO 14971 and IEC 62304, and regulators in the US and EU, emphasize a 'top-down' approach, such as Hazard Analysis, to ensure that safety and effectiveness are considered from the perspective of the device's intended use rather than just how it functions.

Both approaches contribute to the overall safety, effectiveness and quality of a device, and while the industry is moving towards top-down risk management, this doesn't mean that bottom-up approaches such as FMEA shouldn't still be applied. Most organizations use a customized combination of both approaches.

Another difference can be found in how organizations classify risks and hazards. Risk Priority Number (RPN), a technique commonly associated with FMEA, calculates the priority of a risk as the product of several characteristics, typically Severity and Occurrence (and often Detection as well). This is a simple, quantitative means of assessing risks and hazards, but it has limitations: several different combinations of values produce the same RPN. Does that mean a critically severe hazard with a remote possibility of occurring should be prioritized the same as a hazard that is probable but has only minor repercussions? Because RPN is numerical it is useful in statistical analyses, but it is sometimes too vague and inflexible for the complexities of modern medical device development.
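To make the collision problem concrete, here is a minimal sketch (the 1-5 scales and the Severity × Occurrence formula are illustrative assumptions, not any particular standard's definition) that enumerates every combination and groups them by RPN:

```python
# Hypothetical sketch: RPN as Severity x Occurrence on 1-5 scales,
# illustrating how different combinations collide on the same value.
from collections import defaultdict

SCALE = range(1, 6)  # 1 = lowest, 5 = highest

rpn_combos = defaultdict(list)
for severity in SCALE:
    for occurrence in SCALE:
        rpn_combos[severity * occurrence].append((severity, occurrence))

# An RPN of 4 is produced by (1,4), (2,2), and (4,1): a critical-but-remote
# hazard scores exactly the same as a probable-but-minor one.
for rpn, combos in sorted(rpn_combos.items()):
    if len(combos) > 1:
        print(rpn, combos)
```

On a 5×5 grid, 25 combinations collapse into only 14 distinct RPN values, which is precisely the ambiguity the paragraph above describes.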

Another commonly used technique for prioritizing and categorizing hazards is a Risk Index. The organization, or sometimes the specific project, defines what constitutes Unacceptable, As Low As Reasonably Practicable (ALARP), and Acceptable levels of risk, not by RPN but by explicitly assigning a risk level to each combination of severity and occurrence.
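A Risk Index is naturally expressed as a lookup table. The sketch below is purely illustrative: the 1-5 scales, the particular assignment of levels, and the `risk_level` helper are all assumptions a real project would define for itself:

```python
# Hypothetical sketch: a project-defined risk index that explicitly assigns
# an acceptability level to each (severity, occurrence) combination,
# rather than deriving it from a numeric RPN.
# Severity: 1 = negligible ... 5 = catastrophic
# Occurrence: 1 = improbable ... 5 = frequent

UNACCEPTABLE, ALARP, ACCEPTABLE = "Unacceptable", "ALARP", "Acceptable"

# Rows = severity, columns = occurrence 1-5; values chosen for illustration.
RISK_INDEX = {
    1: [ACCEPTABLE, ACCEPTABLE, ACCEPTABLE, ALARP, ALARP],
    2: [ACCEPTABLE, ACCEPTABLE, ALARP, ALARP, UNACCEPTABLE],
    3: [ACCEPTABLE, ALARP, ALARP, UNACCEPTABLE, UNACCEPTABLE],
    4: [ALARP, ALARP, UNACCEPTABLE, UNACCEPTABLE, UNACCEPTABLE],
    5: [ALARP, UNACCEPTABLE, UNACCEPTABLE, UNACCEPTABLE, UNACCEPTABLE],
}

def risk_level(severity: int, occurrence: int) -> str:
    """Look up the project-defined acceptability for a hazard."""
    return RISK_INDEX[severity][occurrence - 1]

# Unlike a numeric RPN, two hazards with the same Severity x Occurrence
# product need not land in the same bucket:
print(risk_level(severity=2, occurrence=2))  # Acceptable (product 4)
print(risk_level(severity=4, occurrence=1))  # ALARP      (product 4)
```

Because each cell is assigned explicitly, the organization can rank a severe-but-remote hazard above a frequent-but-minor one even when a multiplicative formula would score them identically.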

Therefore, while Risk Management is mandated and an essential part of design and development, organizations need the flexibility to tailor it to their needs. Risk Management solutions must enable medical device manufacturers to apply the techniques that fit the organization and the situation, rather than shoehorn development into a pre-defined process. For example, the system should allow the organization to define the computations used for RPN and Risk Indexes on a per-project basis. Workflow and artifacts for top-down Risk Management during initial requirements and design, and for bottom-up analysis during development, must also be supported. This ensures that Risk Management is not conducted as a separate, isolated activity: it is tailored to the organization's and project's needs, closely integrated with the rest of development, and able to reduce both the effort of managing risk and the effort of demonstrating compliance.

PTC is committed to helping medical device manufacturers succeed.

Feel free to comment on this post or visit our other blog post on Risk Management by Dennis Elenburg, Customer Solutions Engineer at MKS a PTC Company.


Tuesday, July 5, 2011

Test Faster or More Effectively?

It is possible with the right Requirements Management Tool!
Written by Dennis Elenburg, Customer Solutions Engineer, MKS a PTC Company

A problem that plagues all software development organizations is never having the time to test everything you want to test. Shipping a quality product or bug-free application means making hard choices when allocating limited testing resources. Testing faster and more efficiently is always a good idea, but are you testing the right things? Can your testing organization easily access the product requirements and create defects from failed tests, both automated and manual? Do your testers write bug reports that help your developers quickly fix the problem?

You Get What You Measure
Quantity of tests executed doesn't necessarily correlate to product quality. This is especially true when automation is involved. If the metric used to assess quality is "number of tests performed," a tester who automates 1,000 test cases may be rewarded and respected more than the lowly manual tester who completes only 10 tests in the same period. However, if those 1,000 tests pass without uncovering any bugs, that may be a false quality indicator. If the manual tester discovers two bugs and an unmet requirement in his 10 manual tests, who contributed more to the quality process?

Test automation is incapable of interpreting the meaning of test failures. Automation speeds up the execution of tests and the comparison of results, but a knowledgeable tester must still determine whether a failure was caused by the test script or a genuine defect. A well-written bug report will then not only note the failure but describe how the system's behavior fails to conform to the requirements, which improves developer productivity in fixing the defect. Efficient knowledge work connects testing, requirements, defects, and even the code.

High-value software testing and defect analysis is performed by testers who understand product requirements, have the skills to map tests to requirements, and can write bug reports that help developers fix defects quickly. This is collaborative knowledge work. You can boost the effectiveness of your knowledge workers by providing them with tools that help them connect the dots. More automation may speed things up, but to really enhance product quality, total visibility and end-to-end connectivity across the software development lifecycle is imperative.

Highly Effective Testing is Collaborative
As a former QA manager who struggled under dictates from senior management to "automate, automate, automate," I can empathize with the desire to run more tests. Test automation vendors make lots of promises, and automation can be great, but it is easy to lose sight of the importance of connectivity when focused on a major automation initiative. How do you translate your test results back into useful knowledge for improving product quality?

A quality metric of "number of tests passed" is not enough, particularly if you're automating hundreds or even thousands of tests. You need metrics on all aspects of your software development process and how they relate to each other: test coverage against requirements, defects per test session, and even defects in requirements are just a few examples. A whole host of metrics are possible when you have total visibility and connectivity across your entire software development process.
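As a rough illustration of how such metrics fall out of trace links, here is a minimal sketch; the requirement and test identifiers, the data structures, and the specific metrics are all hypothetical, not any real tool's API:

```python
# Hypothetical sketch: deriving lifecycle metrics from trace links between
# requirements, tests, and test sessions. IDs and structures are illustrative.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# Each test case maps to the requirements it verifies.
tests = {
    "TC-1": {"REQ-1"},
    "TC-2": {"REQ-1", "REQ-2"},
    "TC-3": {"REQ-3"},
}

# Defects logged per test session.
defects_per_session = {"S-1": 3, "S-2": 0, "S-3": 5}

covered = set().union(*tests.values())
coverage = len(covered & requirements) / len(requirements)
uncovered = requirements - covered

defect_rate = sum(defects_per_session.values()) / len(defects_per_session)

print(f"Requirement coverage: {coverage:.0%}")    # 75%
print(f"Uncovered requirements: {sorted(uncovered)}")
print(f"Defects per session: {defect_rate:.1f}")  # 2.7
```

The point is not the arithmetic, which is trivial, but that none of these numbers can be computed at all unless requirements, tests, and defects are linked in the first place.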

If everyone can see the complete lifecycle of the requirements from inception to final delivery, including any defects along the way, then quality becomes a shared responsibility across the enterprise. Silos come down and collaboration improves. Quality depends on everyone doing their part, and that starts with equipping your knowledge workers with access to the information they need to do their jobs. Do you have total visibility into your entire software development organization? If not, let us show you how…

Please feel free to leave comments or answer any of the questions asked throughout this blog. For a demonstration of how your entire software organization can benefit from total visibility, tracing requirements through development and into testing, see any of the following resources:
