Risk Management for Medical Devices - Transcript
WEBVTT
00:00:00.000 --> 00:00:10.000
Recording first.
00:00:10.000 --> 00:00:13.000
Hello! And welcome to another webinar.
00:00:13.000 --> 00:00:17.000
We're gonna give everyone about a minute or so to log on.
00:00:17.000 --> 00:00:35.000
And then we'll be getting started.
00:00:35.000 --> 00:00:39.000
Hi, there! If you're just joining us, we're giving everyone about 30 more seconds to log on.
00:00:39.000 --> 00:01:07.000
And then we'll start today's webinar.
00:01:07.000 --> 00:01:11.000
Alright, we're gonna go ahead and get started on today's webinar.
00:01:11.000 --> 00:01:29.000
So welcome to another MassMEDIC webinar. Today we are hosting a webinar with our friends at Sunrise Labs, so very pleased to have Christopher with us today. We're gonna be talking about risk management for medical devices and how to get everyone speaking the same language.
00:01:29.000 --> 00:01:42.000
Before I turn it over to Chris, I do have a few housekeeping notes that I would like to go over. And you know what, I never introduced myself. So hi! I'm Nicole Owens. I'm the Director of Marketing Communications for MassMEDIC.
00:01:42.000 --> 00:01:53.000
If you're not familiar with our organization, we are a membership-based trade association that works to bolster the med device industry, primarily in New England,
00:01:53.000 --> 00:01:57.000
through education, networking, awareness, and advocacy.
00:01:57.000 --> 00:02:10.000
This event is being recorded today, so everyone who registered for the event will receive a copy of the webinar afterward. And we're gonna be able to share the slides as well, so you'll get that in a follow-up email
00:02:10.000 --> 00:02:12.000
After the webinar ends today.
00:02:12.000 --> 00:02:22.000
We are also going to have ample time for Q&A. So anytime that you have a question throughout the webinar, please feel free to put it in the Q&A function at the bottom of your screen.
00:02:22.000 --> 00:02:27.000
And then we will do our best to answer all those questions for you when we get to that portion of the webinar.
00:02:27.000 --> 00:02:39.000
Alright. Now I am pleased to turn it over to our presenter for today, Christopher. He is the Senior Principal Systems Engineer for Sunrise Labs.
00:02:39.000 --> 00:02:43.000
And he's gonna take it away.
00:02:43.000 --> 00:02:46.000
Thank you, Nicole.
00:02:46.000 --> 00:02:47.000
Hello, everybody! Good afternoon. Thank you for joining me.
00:02:47.000 --> 00:02:50.000
Thanks, Chris.
00:02:50.000 --> 00:02:59.000
So, risk management is part of what the Sunrise systems team does, and we've collectively built numerous risk management files, so we understand its benefits and features.
00:02:59.000 --> 00:03:05.000
I put together this presentation to try and get everyone to better understand the importance of risk management.
00:03:05.000 --> 00:03:12.000
It is an invaluable tool that helps create safer devices more quickly, with fewer regulatory challenges.
00:03:12.000 --> 00:03:16.000
To be very clear, we're talking specifically about device and system safety:
00:03:16.000 --> 00:03:30.000
risk to patients, bystanders, and operators. Program risks and business risks can be treated in a similar way, but that's out of scope for this presentation.
00:03:30.000 --> 00:03:35.000
Go to the next slide.
00:03:35.000 --> 00:03:44.000
So what exactly is risk management? Well, per ISO 14971, it's the systematic application of management policies, procedures, and practices
00:03:44.000 --> 00:03:52.000
to the tasks of analyzing, evaluating, controlling, and monitoring risk.
00:03:52.000 --> 00:04:02.000
Quite the word salad. So why do we actually need risk management? Well, it's central to the FDA and the EU MDR approval processes.
00:04:02.000 --> 00:04:04.000
It's ensconced in the regulations.
00:04:04.000 --> 00:04:07.000
It's also a key aspect of the
00:04:07.000 --> 00:04:11.000
Consensus standards that we're expected to comply with.
00:04:11.000 --> 00:04:24.000
IEC 60601, IEC 62304, ISO 14708. They all have clauses that dictate that you need a risk management process in the development of your products.
00:04:24.000 --> 00:04:34.000
But I would argue that you really should want it as well. If you can get your leadership and engineering team to internalize how risk management works, it has various benefits.
00:04:34.000 --> 00:04:44.000
Right off the bat, being able to identify hazards and failure modes early in the program, when you've got a little more flexibility in your design and a little bit more schedule flexibility, is invaluable. We're getting that sort of stuff
00:04:44.000 --> 00:04:47.000
Flushed out early in the process.
00:04:47.000 --> 00:04:59.000
As technical issues come up, risk management can be used to help triage them and figure out which ones are really important and which ones you can probably safely defer or delay work on.
00:04:59.000 --> 00:05:06.000
It allows you to better communicate to all your stakeholders how you're gonna manage and make your device safe.
00:05:06.000 --> 00:05:09.000
It's a one-stop shop for that information.
00:05:09.000 --> 00:05:13.000
It can help you make architecture decisions. You can compare different
00:05:13.000 --> 00:05:15.000
device ideas
00:05:15.000 --> 00:05:22.000
against the requirements that you come up with in this process and figure out which ones meet them, and which ones don't.
00:05:22.000 --> 00:05:25.000
And that can help you make the decision about which direction to go.
00:05:25.000 --> 00:05:31.000
It can help your test teams figure out, what do I have to test? And how might I go about testing it?
00:05:31.000 --> 00:05:36.000
And it also, I like to say, is really good at helping you figure out when you're done.
00:05:36.000 --> 00:05:41.000
When you've identified everything that you know of that can go wrong, and you've figured out what you're gonna do about it,
00:05:41.000 --> 00:05:46.000
and you've verified that all those things that you intended to do have been done,
00:05:46.000 --> 00:05:50.000
you have a safe enough product to consider submitting.
00:05:50.000 --> 00:06:01.000
I like to say that the proper execution of risk management ensures that we have done our best to avoid people getting hurt with our devices.
00:06:01.000 --> 00:06:06.000
Central to the process are the steps of analyzing, evaluating, and controlling.
00:06:06.000 --> 00:06:08.000
And a frequent issue
00:06:08.000 --> 00:06:11.000
that comes up is getting everyone to understand how that actually works.
00:06:11.000 --> 00:06:13.000
Virtually every
00:06:13.000 --> 00:06:20.000
hazard analysis review meeting I've ever facilitated, we always spend a large amount of time
00:06:20.000 --> 00:06:26.000
discussing the terms and the logical flow of the process.
00:06:26.000 --> 00:06:35.000
So this presentation is gonna try and give you an outline and a thinking process for how that's supposed to work, and walk you through some examples.
00:06:35.000 --> 00:06:42.000
First, let's start off with the terms that are used in this process, and these are all
00:06:42.000 --> 00:06:45.000
straight from ISO 14971.
00:06:45.000 --> 00:06:51.000
Risk: combination of the probability of occurrence of harm and the severity of that harm.
00:06:51.000 --> 00:06:54.000
Harm: injury or damage
00:06:54.000 --> 00:06:58.000
to the health of people, or damage to property or the environment.
00:06:58.000 --> 00:07:04.000
Severity: measure of the possible consequences of a hazard.
00:07:04.000 --> 00:07:07.000
Hazard: potential source of harm.
00:07:07.000 --> 00:07:09.000
Hazardous situation:
00:07:09.000 --> 00:07:15.000
circumstance in which people, property, or the environment are exposed to one or more hazards.
00:07:15.000 --> 00:07:20.000
Risk control: process in which decisions are made and measures
00:07:20.000 --> 00:07:26.000
are implemented, by which risks are reduced to, or maintained within, specified levels.
00:07:26.000 --> 00:07:30.000
Residual risk: risk remaining after risk control measures have been implemented.
00:07:30.000 --> 00:07:34.000
Safety: freedom from unacceptable risk.
00:07:34.000 --> 00:07:40.000
Note in this list I haven't included the term mitigation. It doesn't appear in ISO 14971.
00:07:40.000 --> 00:07:45.000
The term has been deprecated since at least the 2000 edition of the standard.
00:07:45.000 --> 00:07:59.000
And, in fact, an organization I previously worked for actually got an audit observation for including that term in their risk file. So we should avoid using the term mitigation; we really should be thinking in terms of risk control.
00:07:59.000 --> 00:08:04.000
So those terms can be a little cryptic, particularly that definition for risk control.
00:08:04.000 --> 00:08:07.000
So having a mental model
00:08:07.000 --> 00:08:16.000
can help figure out how that all fits together. And honestly, even seasoned engineers often debate the specifics of those terms.
00:08:16.000 --> 00:08:18.000
The example I like to use is crossing the street.
00:08:18.000 --> 00:08:22.000
You can analyze all aspects of the risk management process
00:08:22.000 --> 00:08:26.000
using this example.
00:08:26.000 --> 00:08:29.000
So we'll put those terms in the context of crossing the street.
00:08:29.000 --> 00:08:35.000
Harm: the injury received by being struck by a car. We're gonna focus on that example.
00:08:35.000 --> 00:08:38.000
I.e., blunt force trauma.
00:08:38.000 --> 00:08:42.000
Severity: how bad is the injury received by being struck by the car?
00:08:42.000 --> 00:08:45.000
Cuts and bruises, broken bones, etc.
00:08:45.000 --> 00:08:48.000
Hazard:
00:08:48.000 --> 00:08:53.000
in this case we're gonna say moving objects. And this is kind of derived from the list that's in
00:08:53.000 --> 00:08:57.000
ISO 14971.
00:08:57.000 --> 00:08:59.000
Hazardous situation:
00:08:59.000 --> 00:09:01.000
pedestrian in the path of oncoming traffic.
00:09:01.000 --> 00:09:05.000
Some might say stepping into the street, but that's not strictly correct.
00:09:05.000 --> 00:09:10.000
If you were in the street, and there's no car coming, you could be there all day, and there's no hazard.
00:09:10.000 --> 00:09:17.000
It's the act of being in the path of oncoming traffic that is actually the hazardous situation.
00:09:17.000 --> 00:09:23.000
Risk: everyone would agree that blindly stepping into the street is risky,
00:09:23.000 --> 00:09:25.000
i.e., not
00:09:25.000 --> 00:09:28.000
free from unacceptable risk.
00:09:28.000 --> 00:09:30.000
Risk control measures.
00:09:30.000 --> 00:09:35.000
What do we do? We generally look both ways before we step into the street.
00:09:35.000 --> 00:09:39.000
Waiting for approaching cars to pass, requiring drivers to yield to pedestrians.
00:09:39.000 --> 00:09:45.000
Speed limits, zoning rules for visibility, so that you can actually see up and down the street.
00:09:45.000 --> 00:09:47.000
Street lights.
00:09:47.000 --> 00:09:50.000
Note that the first few
00:09:50.000 --> 00:09:56.000
risk controls in this example are kind of equivalent to patient training or operator training,
00:09:56.000 --> 00:10:02.000
and the latter are more like design features.
00:10:02.000 --> 00:10:08.000
Residual risk: I think everyone would agree, you know, as a society, that if you do all those things,
00:10:08.000 --> 00:10:14.000
the risk associated with stepping into the street is acceptable, i.e., free from unacceptable risk.
00:10:14.000 --> 00:10:21.000
We certainly could do more. But as a society we've agreed that that's enough.
00:10:21.000 --> 00:10:24.000
Let's talk a little bit about probability of occurrence.
00:10:24.000 --> 00:10:29.000
This is how likely a harm will occur when you're in that hazardous situation.
00:10:29.000 --> 00:10:31.000
Or will result from that situation.
00:10:31.000 --> 00:10:34.000
It can be broken down into 2 probabilities.
00:10:34.000 --> 00:10:38.000
There's P1, the probability that the hazardous situation occurs.
00:10:38.000 --> 00:10:44.000
And there's P2, the probability of the hazardous situation leading to the harm.
00:10:44.000 --> 00:10:50.000
And the overall probability is the product of those 2 probabilities.
00:10:50.000 --> 00:10:54.000
So, in the context of street crossing:
00:10:54.000 --> 00:10:59.000
P1 would be: how probable is it that there's an oncoming car when the pedestrian steps into the street?
00:10:59.000 --> 00:11:05.000
So for this example, we'll say occasionally, let's say 10 times a day.
00:11:05.000 --> 00:11:11.000
P2, in the context of crossing the street, is how probable a type of injury is
00:11:11.000 --> 00:11:15.000
to occur if a pedestrian is, in fact, struck by the car.
00:11:15.000 --> 00:11:21.000
So cuts and bruises? Yeah, it's pretty much guaranteed that you're gonna have those if you get hit by a car.
00:11:21.000 --> 00:11:23.000
Sprains and broken bones, pretty likely.
00:11:23.000 --> 00:11:25.000
At least half the time.
00:11:25.000 --> 00:11:27.000
That's a reasonable estimate.
00:11:27.000 --> 00:11:33.000
Internal injuries requiring hospitalization: less, but at least one in 10 times.
00:11:33.000 --> 00:11:37.000
So, a good estimate.
00:11:37.000 --> 00:11:43.000
So, combining those, we get: cuts and bruises, if you blindly step into the street, as much as 10 times a day.
00:11:43.000 --> 00:11:49.000
Sprains and broken bones? Yeah, at least half that, 5 times a day in this example.
00:11:49.000 --> 00:11:59.000
Injuries that require hospitalization. It's completely believable that one time a day you could get struck by a car and have to go to the hospital. If you just blindly step into the street.
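As a minimal sketch of that P1 × P2 arithmetic (not from the original slides), using the illustrative estimates from the example above:

```python
# P1 x P2 for the street-crossing example. The rates and probabilities are
# the illustrative estimates from the talk, not real data.

p1_rate_per_day = 10  # P1: hazardous situation (car approaching as you step out)

p2_by_harm = {        # P2: probability the situation leads to each harm
    "cuts and bruises": 1.0,                     # essentially guaranteed
    "sprains and broken bones": 0.5,             # at least half the time
    "internal injuries (hospitalization)": 0.1,  # at least one in ten
}

for harm, p2 in p2_by_harm.items():
    # overall rate of each harm = rate of hazardous situations x P2
    print(f"{harm}: up to {p1_rate_per_day * p2:g} per day")
```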
00:11:59.000 --> 00:12:04.000
So how do risk controls work? A little bit more about that.
00:12:04.000 --> 00:12:10.000
They really fall mostly into 3 principal categories. There's risk controls that reduce P1.
00:12:10.000 --> 00:12:15.000
That would be the training that tells you to look both ways and wait for cars to pass.
00:12:15.000 --> 00:12:20.000
That reduces the likelihood of being in the path of oncoming traffic.
00:12:20.000 --> 00:12:22.000
Then there's reducing P2.
00:12:22.000 --> 00:12:34.000
A speed limit would be an example of that: by slowing the car down, the seriousness of the injuries that happen when there actually is a collision between the pedestrian and vehicle is potentially reduced.
00:12:34.000 --> 00:12:40.000
There's a third category: you can interrupt the sequence of events, removing the need to step into the street altogether.
00:12:40.000 --> 00:12:48.000
That doesn't necessarily apply in this particular example, because we're on a residential street. But a pedestrian overpass on a multi-lane highway is an example of
00:12:48.000 --> 00:12:51.000
removing the need to be in the path of oncoming traffic.
00:12:51.000 --> 00:12:56.000
Those of you that know ISO 14971 know that
00:12:56.000 --> 00:13:01.000
there are provisions for assessing residual risk with lower severities
00:13:01.000 --> 00:13:08.000
after the application of risk controls. This is an example where that would be acceptable.
00:13:08.000 --> 00:13:13.000
Now let's look at how this process gets started. You need some
00:13:13.000 --> 00:13:17.000
pieces in place before you actually start doing the analysis.
00:13:17.000 --> 00:13:20.000
Kind of 3 legs to the stool, so to speak.
00:13:20.000 --> 00:13:28.000
You need an intended use or purpose defined. I like to say, this is the: what's it for? Who's it for? What's the environment?
00:13:28.000 --> 00:13:32.000
In the crossing-the-street example, this would be, like, safe
00:13:32.000 --> 00:13:37.000
transit of pedestrians and vehicles on local streets is what this is for. We're trying to analyze that
00:13:37.000 --> 00:13:40.000
And develop a way to ensure that.
00:13:40.000 --> 00:13:42.000
Who's it for?
00:13:42.000 --> 00:13:46.000
All ambulatory individuals, including wheelchair users.
00:13:46.000 --> 00:13:52.000
What's the environment? It's an outside environment, day or night, all seasons.
00:13:52.000 --> 00:13:55.000
Then you need a basic description of the system.
00:13:55.000 --> 00:14:00.000
In the crossing-the-street example, we're talking about a typical paved residential street with sidewalks,
00:14:00.000 --> 00:14:02.000
lined with single-family homes,
00:14:02.000 --> 00:14:07.000
in a municipality that provides services and enforces regulations.
00:14:07.000 --> 00:14:13.000
So that's your kind of bounding box of what you're analyzing.
00:14:13.000 --> 00:14:18.000
And then, lastly, you need your criteria for risk acceptability, and this is usually
00:14:18.000 --> 00:14:22.000
a risk acceptability matrix. This is:
00:14:22.000 --> 00:14:28.000
What combinations of severity and probability are acceptable, and what combinations are unacceptable.
00:14:28.000 --> 00:14:34.000
If you haven't got these 3 things when you start, you really need to reach out to your leadership and try and get these things established right off the bat,
00:14:34.000 --> 00:14:36.000
so that you have a firm
00:14:36.000 --> 00:14:40.000
set of ground rules for what you're analyzing and
00:14:40.000 --> 00:14:47.000
whether you need to do something about the various things you come up with.
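To make that concrete, here is a hypothetical sketch of a risk acceptability matrix. The level names and which cells count as acceptable are placeholders; the real ones come from your own risk management plan.

```python
# Hypothetical risk acceptability matrix: which (severity, probability)
# combinations are acceptable. All names and cell values are placeholders.

SEVERITIES = ["negligible", "minor", "serious", "critical", "catastrophic"]
PROBABILITIES = ["improbable", "remote", "occasional", "probable", "frequent"]

# Rows: probability (low -> high). Columns: severity (low -> high). True = acceptable.
ACCEPTABLE = [
    [True,  True,  True,  True,  False],  # improbable
    [True,  True,  True,  False, False],  # remote
    [True,  True,  False, False, False],  # occasional
    [True,  False, False, False, False],  # probable
    [False, False, False, False, False],  # frequent
]

def is_acceptable(severity: str, probability: str) -> bool:
    """Look up whether a severity/probability combination is acceptable."""
    return ACCEPTABLE[PROBABILITIES.index(probability)][SEVERITIES.index(severity)]

print(is_acceptable("serious", "remote"))       # True under this example matrix
print(is_acceptable("catastrophic", "remote"))  # False
```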
00:14:47.000 --> 00:14:49.000
So how do you organize the file itself?
00:14:49.000 --> 00:14:52.000
There's essentially 3 principal parts.
00:14:52.000 --> 00:14:58.000
First off is the hazard analysis, and this is: what hazards are inherent in the use of the device
00:14:58.000 --> 00:15:00.000
In the intended application.
00:15:00.000 --> 00:15:04.000
In the crossing example, it's: what can go wrong when you cross the street?
00:15:04.000 --> 00:15:09.000
The hazard analysis is generally blind to design details,
00:15:09.000 --> 00:15:12.000
but is framed by the intended use and the design.
00:15:12.000 --> 00:15:21.000
It has the broadest audience, so it needs to be fairly high level, but it needs to cover all the foreseeable hazards. And if you need some examples to start with.
00:15:21.000 --> 00:15:26.000
Table C.1 in ISO 14971 has a nice long list of examples
00:15:26.000 --> 00:15:29.000
That you can choose from and kind of get the process started.
00:15:29.000 --> 00:15:32.000
If you come up with ones that are not on that list that's totally acceptable, too.
00:15:32.000 --> 00:15:38.000
And in addition to that, you don't have to include every one of the ones in that list, you only have to include the ones that.
00:15:38.000 --> 00:15:43.000
can reasonably be attributed to your device or system.
00:15:43.000 --> 00:15:47.000
The next piece: design FMEAs. This is: what about the device
00:15:47.000 --> 00:15:53.000
or system can go wrong? In the crossing example, that's what can go wrong with a car:
00:15:53.000 --> 00:15:56.000
cracked windshield, worn tires.
00:15:56.000 --> 00:16:02.000
Then there's what can go wrong with the street: streetlights failing, overgrowth of bushes. Those would be examples of
00:16:02.000 --> 00:16:07.000
things that should be analyzed on the hardware side of the system.
00:16:07.000 --> 00:16:15.000
Then there's the use FMEA: what mistakes can the patient or caregiver commit, and what are we gonna do about those?
00:16:15.000 --> 00:16:20.000
In the crossing-the-street example, it's the vehicle operator and the pedestrian. How can they make a mistake that
00:16:20.000 --> 00:16:23.000
can put them in harm's way?
00:16:23.000 --> 00:16:31.000
The last, very important point is that this use FMEA feeds directly into human factors testing. So this dovetails very, very nicely with
00:16:31.000 --> 00:16:37.000
IEC 60601-1-6, which is the usability
00:16:37.000 --> 00:16:41.000
standard.
00:16:41.000 --> 00:16:44.000
One more term before we go any farther: reasonably foreseeable misuse.
00:16:44.000 --> 00:16:48.000
This is the use of a product or system in a way
00:16:48.000 --> 00:16:51.000
not intended by the manufacturer,
00:16:51.000 --> 00:16:57.000
but can result from readily predictable human behavior. This includes the behavior of professional users.
00:16:57.000 --> 00:17:00.000
It can be intentional or unintentional.
00:17:00.000 --> 00:17:03.000
But this does not mean malicious misuse necessarily.
00:17:03.000 --> 00:17:09.000
And, more importantly, this is also not how an engineer would use the device.
00:17:09.000 --> 00:17:20.000
This could be a challenge for the team. You really have to think in terms of how your spouse, or your children, or your neighbors or your parents would use it. Not how your team members can figure out how to game it.
00:17:20.000 --> 00:17:30.000
Along those lines, I wanted to point out that if you have the opportunity to watch human factors testing, or get your team to watch it, I extremely,
00:17:30.000 --> 00:17:33.000
highly recommend you do so.
00:17:33.000 --> 00:17:37.000
It's really surprising what you'll learn when you do this.
00:17:37.000 --> 00:17:40.000
I've been on teams where we wrung our hands over
00:17:40.000 --> 00:17:45.000
use errors that we thought were really gonna be a problem, and that we were gonna have to struggle to fix,
00:17:45.000 --> 00:17:48.000
that never showed up in human factors testing at all.
00:17:48.000 --> 00:17:55.000
It's not like we took those out of our analysis, but we were able to rank those a little bit more reasonably based on what the human factors testing told us.
00:17:55.000 --> 00:18:00.000
Likewise, there were a number of situations that came up that we never even dreamed of.
00:18:00.000 --> 00:18:03.000
Quite frankly, you can't consider
00:18:03.000 --> 00:18:11.000
your use FMEA complete unless you have done your human factors testing.
00:18:11.000 --> 00:18:14.000
So let's look at the structure of these various documents.
00:18:14.000 --> 00:18:21.000
I've got here a couple of lines of the crossing-the-street hazard analysis. Now note the column order.
00:18:21.000 --> 00:18:23.000
It's organized by hazard.
00:18:23.000 --> 00:18:27.000
Then you have the cause, sequence of events,
00:18:27.000 --> 00:18:32.000
hazardous situation, and then you go into your risk assessment: your harm, your severity, your risk.
00:18:32.000 --> 00:18:39.000
Then your risk controls, then your residual risk assessment. And then I add this little column in here, notes, which I use to
00:18:39.000 --> 00:18:47.000
fill in some blanks, kind of explain why I might have made a certain assessment of severity,
00:18:47.000 --> 00:18:51.000
or how a risk control might actually serve to reduce the risk of harm.
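As an illustrative sketch (not the actual slide), one row of that hazard analysis, with the column order just described, could be captured like this; the values paraphrase the first example discussed next:

```python
# One hazard-analysis row; fields mirror the column order described above.
from dataclasses import dataclass, field

@dataclass
class HazardAnalysisRow:
    hazard: str
    cause: str
    sequence_of_events: str
    hazardous_situation: str
    harm: str
    severity: str
    risk: str                                     # initial risk, before controls
    risk_controls: list[str] = field(default_factory=list)
    residual_risk: str = ""
    notes: str = ""                               # e.g., why a severity was chosen

row = HazardAnalysisRow(
    hazard="moving objects",
    cause="pedestrian and vehicle fail to avoid contact",
    sequence_of_events="pedestrian steps into street while a vehicle approaches; "
                       "vehicle fails to stop",
    hazardous_situation="pedestrian in the path of oncoming traffic",
    harm="blunt force trauma",
    severity="serious",
    risk="unacceptable",
    risk_controls=["look both ways", "drivers yield to pedestrians", "speed limits"],
    residual_risk="acceptable",
    notes="speed limits give the driver time to respond and reduce injury severity",
)
```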
00:18:51.000 --> 00:18:57.000
So let me go through this first example. We already talked about moving objects: pedestrian
00:18:57.000 --> 00:19:04.000
and vehicle fail to avoid contact is the cause of the harm in this situation.
00:19:04.000 --> 00:19:09.000
Pedestrian steps into the street when a vehicle is approaching; vehicle fails to stop and hits the pedestrian.
00:19:09.000 --> 00:19:14.000
I'll jump ahead here to the risk controls. I've included the risk controls we talked about before.
00:19:14.000 --> 00:19:21.000
And then the residual risk assessment is: yeah, it's pretty unlikely that we'll end up with a high-severity harm if we do all those things. So,
00:19:21.000 --> 00:19:25.000
as a society, we generally would agree that the risk is acceptable.
00:19:25.000 --> 00:19:29.000
And I have notes here to explain why speed limits are useful, i.e.,
00:19:29.000 --> 00:19:35.000
if you keep cars going slower, it gives the operator more time to respond
00:19:35.000 --> 00:19:38.000
to the patient, excuse me, the
00:19:38.000 --> 00:19:44.000
pedestrian being in the street, and it also reduces harm if there is a collision between the two of them.
00:19:44.000 --> 00:19:50.000
Now note, in the first line, I only considered the highest severity.
00:19:50.000 --> 00:19:55.000
Depending on how your risk acceptability matrix is constructed, that may not actually be
00:19:55.000 --> 00:19:59.000
the combination that presents the highest risk.
00:19:59.000 --> 00:20:03.000
You may well be in situations where
00:20:03.000 --> 00:20:08.000
perhaps a lower severity harm happening a lot more often might actually be worse.
00:20:08.000 --> 00:20:16.000
So be careful about that. You might have to analyze a couple of different scenarios, a couple of different combinations to figure out which ones are really the worst.
00:20:16.000 --> 00:20:27.000
And you might want to have either multiple lines in the hazard analysis, or you might want to at least explain in the notes why you chose a specific severity level, if it's not obvious.
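As a hypothetical sketch of that ranking exercise: the numeric rank product below is only a stand-in for a real matrix lookup, and the scores are invented for illustration.

```python
# Rank several (severity, probability) combinations for the same hazard to see
# which one actually drives the risk. Scores are placeholders, not a real scheme.

SEVERITY_RANK = {"negligible": 1, "minor": 2, "serious": 3,
                 "critical": 4, "catastrophic": 5}
PROBABILITY_RANK = {"improbable": 1, "remote": 2, "occasional": 3,
                    "probable": 4, "frequent": 5}

scenarios = [
    ("catastrophic", "improbable"),  # worst harm, but very rare
    ("minor", "frequent"),           # mild harm, but happens constantly
    ("serious", "occasional"),
]

def score(severity: str, probability: str) -> int:
    # Simple rank product as a stand-in for a matrix lookup.
    return SEVERITY_RANK[severity] * PROBABILITY_RANK[probability]

worst = max(scenarios, key=lambda s: score(*s))
print(worst)  # ('minor', 'frequent') outranks the catastrophic-but-improbable line
```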
00:20:27.000 --> 00:20:31.000
Another important thing here is that you really want to assess the risk here in the middle.
00:20:31.000 --> 00:20:37.000
Assuming that you haven't done anything to reduce probability yet. Think clean sheet of paper:
00:20:37.000 --> 00:20:42.000
a Neanderthal stepping into the street, not knowing anything about our social mores,
00:20:42.000 --> 00:20:44.000
or a toddler, for instance.
00:20:44.000 --> 00:20:49.000
This is important because you wanna make an honest assessment of what the risk is before you do anything about it.
00:20:49.000 --> 00:20:53.000
Because if the risk is already acceptable
00:20:53.000 --> 00:20:58.000
and you don't do that exercise, you might be applying risk controls that actually aren't necessary. And
00:20:58.000 --> 00:21:00.000
that adds cost and time.
00:21:00.000 --> 00:21:05.000
And the second line here is an example of a situation where that makes sense.
00:21:05.000 --> 00:21:07.000
I took the hazard of falling.
00:21:07.000 --> 00:21:12.000
Pedestrian loses their balance is the cause. The sequence is: a pedestrian
00:21:12.000 --> 00:21:15.000
trips while stepping up the curb, leading
00:21:15.000 --> 00:21:17.000
to a fall.
00:21:17.000 --> 00:21:19.000
The hazardous situation is the fall.
00:21:19.000 --> 00:21:31.000
Blunt trauma again, and I was slightly conservative in my assessment of severity here. I said medium; it's certainly foreseeable that if you did trip, you might break your wrist, for instance. So I took that as
00:21:31.000 --> 00:21:33.000
the highest level of severity that
00:21:33.000 --> 00:21:35.000
I would expect if you tripped.
00:21:35.000 --> 00:21:39.000
But when's the last time anybody
00:21:39.000 --> 00:21:50.000
on this call saw somebody trip when they stepped out of a street and break their wrist? It's extremely rare, so much so that as a society we really don't do anything about that potential harm in the design of our streets.
00:21:50.000 --> 00:21:57.000
Yes, we have curb cuts, but that's really about usability. It's not so much about reducing the risk of these sorts of injuries.
00:21:57.000 --> 00:22:03.000
And if you hadn't done that risk assessment, you might have been tempted to add risk controls when, in fact, at least as a society, we
00:22:03.000 --> 00:22:07.000
don't see the need for them.
00:22:07.000 --> 00:22:16.000
Next, let's go on to the design FMEA. Now, in this case, you can look at the column order. It's similar, but slightly different.
00:22:16.000 --> 00:22:25.000
In this case, rather than organizing by hazard, we're organizing by component. And we're gonna break down the different components of the system and look at the ways they fail.
00:22:25.000 --> 00:22:34.000
And that's what the second column is: what the failure modes of those components are. And then from there on, the column order is the same, and the analysis is the same.
00:22:34.000 --> 00:22:36.000
So.
00:22:36.000 --> 00:22:44.000
Let's just walk through this example here. We'll start off with the street as the component. What can go wrong with the street? Well, the pedestrian's and the driver's views can be blocked,
00:22:44.000 --> 00:22:48.000
and the cause can be overgrowth of foliage.
00:22:48.000 --> 00:22:55.000
And the sequence of events is: the two can't see each other before the pedestrian steps into the path of the oncoming car, and they get hit.
00:22:55.000 --> 00:23:01.000
Blunt force trauma, and it can be serious, as much as once a day, if they're not careful.
00:23:01.000 --> 00:23:03.000
So we have additional risk controls.
00:23:03.000 --> 00:23:08.000
The city and state are obliged to keep the foliage trimmed.
00:23:08.000 --> 00:23:16.000
We do teach our novice drivers and our children to be extra careful when they can't see clearly up and down the street.
00:23:16.000 --> 00:23:20.000
But speed limits are still effective at helping to reduce
00:23:20.000 --> 00:23:22.000
this harm,
00:23:22.000 --> 00:23:26.000
if and when this situation occurs.
00:23:26.000 --> 00:23:30.000
And with those risk controls, we would argue it's acceptable, with the notes here explaining again
00:23:30.000 --> 00:23:33.000
how that speed limit actually serves to help.
00:23:33.000 --> 00:23:37.000
Notice, though, that we're reusing risk controls, and
00:23:37.000 --> 00:23:40.000
that's not only acceptable, it's strongly recommended.
00:23:40.000 --> 00:23:42.000
I've been on programs where
00:23:42.000 --> 00:23:47.000
teams have gone to great lengths to design the perfect risk control for every single failure mode,
00:23:47.000 --> 00:23:49.000
but there was almost no commonality to them.
00:23:49.000 --> 00:23:55.000
When I came in, I said, hey, you know, we can use more generic risk controls that are arguably just as effective
00:23:55.000 --> 00:24:01.000
and use them for multiple failure modes. And we end up with fewer requirements to verify.
00:24:01.000 --> 00:24:08.000
Similarly here for the second line, just for another example: we look at the component being the vehicle.
00:24:08.000 --> 00:24:13.000
Pedestrian, driver, excuse me: in this case it's really the driver's view that is blocked,
00:24:13.000 --> 00:24:16.000
because of a scratched or cracked windshield.
00:24:16.000 --> 00:24:23.000
So, looking ahead here to our risk controls, I said, okay, well, an annual inspection
00:24:23.000 --> 00:24:28.000
would serve to help reduce the likelihood of this harm occurring.
00:24:28.000 --> 00:24:36.000
And I note in the notes column that the vehicle inspection would include a check on the windshield.
00:24:36.000 --> 00:24:50.000
This structure is also useful for software FMEAs. So when we talk about design FMEAs, we're not talking just about electrical and mechanical; this could also be used for software, and the component would generally be the module.
00:24:50.000 --> 00:24:55.000
And the failure modes might be things like buffer overflow, or invalid data
00:24:55.000 --> 00:24:57.000
out of bounds.
00:24:57.000 --> 00:25:00.000
And so.
00:25:00.000 --> 00:25:11.000
this applies there equally, and will work just as effectively for addressing the necessary IEC 62304 analyses.
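As a hypothetical sketch of a software design-FMEA entry organized that way; the module name, failure mode, and controls are invented for illustration:

```python
# One software DFMEA entry: component = module, then its failure mode.
software_fmea_row = {
    "component": "dose-calculation module",             # hypothetical module
    "failure_mode": "buffer overflow corrupts output",
    "cause": "input record longer than allocated buffer",
    "sequence_of_events": "oversized input -> overflow -> invalid value displayed",
    "harm": "incorrect therapy delivered",
    "severity": "critical",
    "risk_controls": [
        "bounds-check all inputs at the module boundary",
        "independent range check on the computed value before display",
    ],
    "residual_risk": "acceptable",
    "notes": "feeds the IEC 62304 software safety classification rationale",
}
```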
00:25:11.000 --> 00:25:20.000
Use FMEA: again, similar, but the first two columns are different. In this case we organize by task.
00:25:20.000 --> 00:25:23.000
What are the tasks associated with,
00:25:23.000 --> 00:25:28.000
you know, living near and crossing streets, and what errors can be committed
00:25:28.000 --> 00:25:32.000
when you're undertaking those tasks?
00:25:32.000 --> 00:25:35.000
So.
00:25:35.000 --> 00:25:43.000
In this case, I said, well, you gotta operate the vehicle. That's one of the tasks, and the use error is: the operator can be distracted by a cell phone,
00:25:43.000 --> 00:25:50.000
so the operator doesn't see the pedestrian in time, and ends up striking the pedestrian.
00:25:50.000 --> 00:25:54.000
So we've got a new risk control: use of handheld devices is prohibited.
00:25:54.000 --> 00:26:03.000
But risk controls like looking both ways and waiting for cars to pass are also effective in this situation, and speed limits are as well.
00:26:03.000 --> 00:26:06.000
Another one
00:26:06.000 --> 00:26:10.000
that I added just to demonstrate another little nuance:
00:26:10.000 --> 00:26:16.000
staying clear of the street is another task that you need to do if you don't intend to cross the street.
00:26:16.000 --> 00:26:19.000
Entering the street when a vehicle is approaching is the use error.
00:26:19.000 --> 00:26:25.000
Sequence of events: child chasing a ball enters the street while the vehicle is approaching, and the operator isn't able to stop in time.
00:26:25.000 --> 00:26:40.000
In this case, speed limits are a risk control. Oh, before I go there, though: I noted that I did treat this as the highest severity, but I actually upped the probability somewhat, because obviously children are more likely to receive serious injuries from being struck by vehicles.
00:26:40.000 --> 00:26:48.000
And so that's been noted here in the notes, where I explained why that probability is elevated,
00:26:48.000 --> 00:26:51.000
in addition to the fact that we use speed limits,
00:26:51.000 --> 00:26:54.000
and that we do have additional driver training,
00:26:54.000 --> 00:27:00.000
where we make sure that drivers not only pay attention to what's in the street, but also to either side of the street.
00:27:00.000 --> 00:27:02.000
And then we often have signage
00:27:02.000 --> 00:27:06.000
indicating where playgrounds and
00:27:06.000 --> 00:27:11.000
school zones are.
00:27:11.000 --> 00:27:20.000
Also, a nice thing about this notes column is that it's really useful for noting when observations
00:27:20.000 --> 00:27:23.000
occur for a particular task. You can note
00:27:23.000 --> 00:27:37.000
whether the line item was added because of an observation, or whether you dreamed it up and found that it actually did happen. It's a good way to keep track of which entries actually trace directly to observations.
00:27:37.000 --> 00:27:43.000
So what is the output
00:27:43.000 --> 00:27:50.000
of this whole process? Generally, as I mentioned before, those risk controls get instantiated as requirements.
00:27:50.000 --> 00:27:53.000
And they usually fall under roughly 3 categories.
00:27:53.000 --> 00:27:56.000
Design requirements: in the crossing-the-street example, this would be
00:27:56.000 --> 00:27:59.000
zoning rules and street lights.
00:27:59.000 --> 00:28:04.000
Labeling requirements: things like speed limits, playground and school signs.
00:28:04.000 --> 00:28:11.000
Training: looking both ways, waiting for cars to pass, yielding to pedestrians.
00:28:11.000 --> 00:28:14.000
You've got this list of requirements now that
00:28:14.000 --> 00:28:22.000
is a supplement to what you want your device to meet: the claims and what features you want it to have for
00:28:22.000 --> 00:28:25.000
marketability.
00:28:25.000 --> 00:28:40.000
And now you can use those requirements also to inform design. You can use them as your criteria for what it takes to make the actual hardware safe. The labeling is what's in your IFU, and the training is what's gonna be in your training material.
00:28:40.000 --> 00:28:45.000
The next output is: presumably, now you have consensus among your team. We all know what we're supposed to be doing,
00:28:45.000 --> 00:28:51.000
and so there should be fewer debates about exactly what we should be implementing.
00:28:51.000 --> 00:29:00.000
In addition to that, when management goes, well, this looks bad, what are we doing about it? You can walk them through that logic and explain to them: yep, we've got this.
00:29:00.000 --> 00:29:03.000
It's under control. This is what we're gonna do about it.
00:29:03.000 --> 00:29:08.000
As I mentioned before, now that you've got a list of design requirements, you can use those design requirements
00:29:08.000 --> 00:29:12.000
as a checklist for evaluating proof-of-concept hardware.
00:29:12.000 --> 00:29:18.000
If the proof of concept checks all the boxes for all those design requirements, it's a concept that's worth going forward with.
00:29:18.000 --> 00:29:26.000
If it doesn't, you know that there's something in your proof of concept that you're gonna have to go back and revisit in order for it to meet those requirements.
00:29:26.000 --> 00:29:34.000
Ultimately you could also revisit the requirements and look for alternatives. Either way, it shows you where there's a gap between your proof of concept
00:29:34.000 --> 00:29:38.000
and what you need in order to make sure your device is gonna be safe.
00:29:38.000 --> 00:29:42.000
In addition to that, now that you've got this collection of requirements for
00:29:42.000 --> 00:29:46.000
how you're gonna meet the claims, and how you're gonna make sure that it's marketable,
00:29:46.000 --> 00:29:53.000
you've also got safety requirements. Tracing the ones that are used for safety allows you to prioritize those, because
00:29:53.000 --> 00:29:57.000
if you haven't got a safe product, it doesn't matter what the marketing requirements and the claims are.
00:29:57.000 --> 00:30:02.000
It lets you prioritize and get those out of the way first, so that you can then spend your energy on
00:30:02.000 --> 00:30:07.000
Making sure you meet your claims and making sure you meet your marketing needs.
00:30:07.000 --> 00:30:12.000
When problems come up, it allows you to triage them. If you've come across a problem,
00:30:12.000 --> 00:30:20.000
and there's already a line item in your risk management file for what you're gonna do about it, now you don't have to scramble. You already know what the marching orders are.
00:30:20.000 --> 00:30:22.000
If it's not in the risk management file,
00:30:22.000 --> 00:30:34.000
you'll know that you have to update your risk management file and add that entry, so that you can then get consensus on: now, what are we gonna do about this new thing we learned?
00:30:34.000 --> 00:30:44.000
As I mentioned before, this also helps the test team, because now the test team can look at all of those requirements that are there for safety, and can look through the risk management file and figure out: okay, what
00:30:44.000 --> 00:30:47.000
failure modes and hazards are those intended to
00:30:47.000 --> 00:30:51.000
be risk controls for? What were the sequences of events that
00:30:51.000 --> 00:30:57.000
led to that potential harm? Let's test those risk controls against those sequences
00:30:57.000 --> 00:31:01.000
and make sure that those risk controls actually work. So you get a more robust
00:31:01.000 --> 00:31:09.000
test strategy, and it gives the test team a really good starting point for their effort.
00:31:09.000 --> 00:31:17.000
Lastly, you get a nice collective overall risk assessment of: all right, this is the
00:31:17.000 --> 00:31:19.000
risk profile the device has.
00:31:19.000 --> 00:31:24.000
And if it's acceptable, you know that you've done enough. You've got the right requirements.
00:31:24.000 --> 00:31:34.000
And when those are demonstrated to work, you've got a device that is safe enough to actually consider submitting and marketing.
00:31:34.000 --> 00:31:43.000
So, are there any questions?
00:31:43.000 --> 00:31:46.000
Awesome, Chris. Thank you so much.
00:31:46.000 --> 00:31:56.000
We have lots of comments coming in through the Q&A function. Thank you so much, everybody who's writing in. Tons of comments: great presentation, this is awesome information.
00:31:56.000 --> 00:32:05.000
But let's dig into a few of the questions. And, you know, if your question has not been answered, feel free to still put in your questions.
00:32:05.000 --> 00:32:08.000
Okay, here we go.
00:32:08.000 --> 00:32:16.000
When you set probability of occurrence, how do you know you got it right?
00:32:16.000 --> 00:32:20.000
Well, you have to estimate it.
00:32:20.000 --> 00:32:27.000
If you're pulling from the predefined list that's been established in your risk management plan, it's an estimate.
00:32:27.000 --> 00:32:35.000
And you need a good consensus across your team: does this pass the sniff test? This is an estimate; all of these are estimates.
00:32:35.000 --> 00:32:43.000
This is really just about ranking: making sure you identify all the different things that can go wrong, and ranking them to figure out which ones you need to do something about.
00:32:43.000 --> 00:32:47.000
And you make your best estimate.
00:32:47.000 --> 00:32:49.000
Will it be right?
00:32:49.000 --> 00:32:59.000
More often than not, they usually are. The wisdom of a team of 3 to 4 engineers is usually good at establishing what a good baseline is.
00:32:59.000 --> 00:33:01.000
If you discover it's not,
00:33:01.000 --> 00:33:10.000
then you go back into your risk management file and act on that information. The entire risk management process is a living process. You start with a starting point,
00:33:10.000 --> 00:33:12.000
your first best guess.
00:33:12.000 --> 00:33:21.000
And as you go through the process and you learn more, you go back and you edit it, and you revise it continuously to act on new information. So I wouldn't sweat too much about: did I get it right off the bat?
00:33:21.000 --> 00:33:24.000
Make your first best guess.
00:33:24.000 --> 00:33:33.000
Just make sure it passes the sniff test. Just think about it, put it in this context of: all right, if this was the crossing-the-street example, would it make sense?
00:33:33.000 --> 00:33:45.000
And use that as your starting point, and then go back. You're gonna have to periodically revisit that. If you look at ISO 14971, it says that you have to have periodic reviews of that file to make sure that you've acted on new information.
00:33:45.000 --> 00:33:57.000
So if you made an error somewhere in there, those are the sorts of times you have the opportunity to revisit that and revise accordingly.
00:33:57.000 --> 00:33:59.000
Okay. Great.
00:33:59.000 --> 00:34:02.000
Here's our next one.
00:34:02.000 --> 00:34:06.000
Okay: regarding risk analysis according to ISO
00:34:06.000 --> 00:34:09.000
14971:
00:34:09.000 --> 00:34:14.000
2019, we cannot use FMEA only, right?
00:34:14.000 --> 00:34:15.000
That's correct.
00:34:15.000 --> 00:34:18.000
Okay. And then there's the second part.
00:34:18.000 --> 00:34:25.000
Secondly, can we use the instructions for users as a risk control measure according to
00:34:25.000 --> 00:34:28.000
the same ISO as referenced?
00:34:28.000 --> 00:34:32.000
Yes, you can.
00:34:32.000 --> 00:34:37.000
Now this is interesting. This actually touches on a topic that
00:34:37.000 --> 00:34:40.000
my team has discussions about periodically.
00:34:40.000 --> 00:34:50.000
There were earlier versions of BS EN ISO 14971 that had an Annex ZA that talked about the notion that you couldn't use labeling as a risk control.
00:34:50.000 --> 00:35:01.000
Essentially, what that was trying to communicate, and the consensus among the industry people that I've spoken with, is that it means you just can't list a bunch of warnings and cautions and say:
00:35:01.000 --> 00:35:05.000
that's it, we're done. You can use
00:35:05.000 --> 00:35:11.000
training in the labeling and instructions in the guide as risk controls,
00:35:11.000 --> 00:35:13.000
but you can't just sweep everything under the rug by saying, well, we told you so.
00:35:13.000 --> 00:35:18.000
And that's essentially where there was a lot of controversy. Now, there have
00:35:18.000 --> 00:35:20.000
been
00:35:20.000 --> 00:35:24.000
revisions since then that have kind of
00:35:24.000 --> 00:35:28.000
clarified, and lessened, the rancor about that.
00:35:28.000 --> 00:35:31.000
I still think it's prudent to make sure that,
00:35:31.000 --> 00:35:37.000
if you're gonna use labeling, that labeling should very specifically be used in the form of training
00:35:37.000 --> 00:35:40.000
and be very extensive,
00:35:40.000 --> 00:35:48.000
and not just simply in the form of warnings and cautions. The way I view it, I take the safe route:
00:35:48.000 --> 00:35:54.000
I do not list warnings and cautions as a justification for reducing likelihood.
00:35:54.000 --> 00:36:00.000
I will always include them as a disclosure, saying: we're disclosing this to the patient.
00:36:00.000 --> 00:36:07.000
I will only take credit for training and actual instructions: this is how you do this, this is what you shouldn't do,
00:36:07.000 --> 00:36:15.000
this is how to use the device correctly. I'll take credit for those, but I generally will not take credit for warnings and cautions.
00:36:15.000 --> 00:36:27.000
Okay, good to know. And then there's a third part of this question, asking if there are any FDA guidances specifically related to risk management.
00:36:27.000 --> 00:36:29.000
I should know this.
00:36:29.000 --> 00:36:39.000
There's a little bit in there. I can't recall off the top of my head if there are, and which ones are particularly useful.
00:36:39.000 --> 00:36:49.000
If that person's interested in forwarding me some contact information, I can do some digging. You can also go to the FDA website and search their guidance documents for risk management.
00:36:49.000 --> 00:36:52.000
And they'll come up.
00:36:52.000 --> 00:37:00.000
So that's usually the thing I recommend first. And if they don't have satisfaction, they could reach out to me, and I could
00:37:00.000 --> 00:37:04.000
point them to a possible reference.
00:37:04.000 --> 00:37:08.000
Okay, great. And how should people reach out to you? What's the best way.
00:37:08.000 --> 00:37:11.000
cperry@sunrise.com.
00:37:11.000 --> 00:37:14.000
Excuse me, sunriselabs.com.
00:37:14.000 --> 00:37:16.000
Awesome. Thank you.
00:37:16.000 --> 00:37:22.000
Okay. Can you talk a little bit about how to resolve disagreements with the team
00:37:22.000 --> 00:37:25.000
that arise in (a)
00:37:25.000 --> 00:37:33.000
deciding the harm, e.g., tripping could lead to a skull fracture and death, but the expected result is just minor discomfort,
00:37:33.000 --> 00:37:38.000
and (b) determining the probability and severity.
00:37:38.000 --> 00:37:42.000
For example, is exposure of identifiable patient data
00:37:42.000 --> 00:37:44.000
a low, medium, or high severity?
00:37:44.000 --> 00:37:45.000
How do you decide?
00:37:45.000 --> 00:37:48.000
What was the second half of that?
00:37:48.000 --> 00:37:54.000
Okay, let's take it one part at a time. Can you talk about how to resolve disagreements with the team
00:37:54.000 --> 00:37:58.000
that arise in deciding the level of harm, for example?
00:37:58.000 --> 00:38:01.000
Yep, yep, so.
00:38:01.000 --> 00:38:05.000
The way I usually have dealt with that is: okay, if there's
00:38:05.000 --> 00:38:09.000
disagreement on which level we should analyze, I'll analyze all of them.
00:38:09.000 --> 00:38:11.000
I'll just put line items in for all of them.
00:38:11.000 --> 00:38:13.000
And that way everybody gets their say.
00:38:13.000 --> 00:38:23.000
And that way they all get their say, and it's a more comprehensive analysis. It might be a little bit more work, but at least all that information is captured.
00:38:23.000 --> 00:38:25.000
And
00:38:25.000 --> 00:38:34.000
Then you can use that to also go through the exercise; as I mentioned, just because you pick the highest severity doesn't necessarily mean you have arrived at the highest risk.
00:38:34.000 --> 00:38:36.000
So this is a classic example where,
00:38:36.000 --> 00:38:43.000
yeah, you might worry about a fractured skull, but it's so unlikely that it actually presents a lower risk than the notion of breaking a wrist.
00:38:43.000 --> 00:38:47.000
Right? So put all the line items in, analyze them all.
00:38:47.000 --> 00:38:52.000
And then there's no question about: well, did we miss something?
00:38:52.000 --> 00:38:57.000
No, you didn't, because you included the whole thing.
00:38:57.000 --> 00:39:04.000
Okay? And then what about determining the probability and severity? How do you go about doing that?
00:39:04.000 --> 00:39:14.000
Well, I think you suggested there might be research on that. I have actually done searches of various federal agencies' databases
00:39:14.000 --> 00:39:20.000
to establish likelihoods of things. Classic case: I was working on a prosthetic, and
00:39:20.000 --> 00:39:22.000
a question came up from the FDA about, well,
00:39:22.000 --> 00:39:28.000
people can reach into an oven with this arm, and what's the likelihood that their arm will catch fire?
00:39:28.000 --> 00:39:35.000
So I actually looked at the national fire safety database for how often people get injured when they reach into ovens, and used that to establish a likelihood.
00:39:35.000 --> 00:39:42.000
And I was able to establish that my likelihoods actually made sense, my probabilities of harm actually made sense.
00:39:42.000 --> 00:39:44.000
So, yeah, you absolutely can look at that stuff.
00:39:44.000 --> 00:39:55.000
There's nothing wrong with just doing an estimate for the vast majority. But if there are areas where you're particularly concerned, or there's a lot of disagreement about what those likelihoods are,
00:39:55.000 --> 00:40:07.000
you can go and use those databases. And, by the way, that's a great thing to put in the notes column: we arrived at this probability by searching this database, and this is where that information was found. So that's a really great way to note that, when you feel that you need to
00:40:07.000 --> 00:40:12.000
ground that probability on some kind of data.
00:40:12.000 --> 00:40:21.000
Okay. And this is kind of an offshoot of that question, maybe it's the same, but let me just clarify this:
00:40:21.000 --> 00:40:33.000
what methods do you use to ensure that the team is doing the proper analysis? Obviously, you know, risk management can encompass so many things. How do you declare when you're done?
00:40:33.000 --> 00:40:36.000
Well
00:40:36.000 --> 00:40:47.000
Like I said, for the hazard analysis, you go through and you look at all the most obvious stuff. And that's why I was suggesting that you look at that Table C.1 in ISO 14971. It's a very comprehensive list of hazards.
00:40:47.000 --> 00:40:55.000
So for your hazard analysis, if you pick all the most reasonable ones in there for your application, you've got a pretty comprehensive list.
00:40:55.000 --> 00:40:58.000
And so, if you look at all of those, and
00:40:58.000 --> 00:41:07.000
you dream them up, and every sequence of events that everyone thinks is reasonable and can lead to a particular hazard is captured, then
00:41:07.000 --> 00:41:16.000
you've got a good hazard analysis to start with. Again, there are multiple gates in this process where you have to go back and re-review your risk file, where you add new stuff as you go along.
00:41:16.000 --> 00:41:23.000
But that first baseline, as long as you've gone through and you've listed everything that you can think of, that's good enough.
00:41:23.000 --> 00:41:33.000
Because you will definitely learn stuff as you go through the process. But that doesn't mean that your first pass was inadequate. It just means that it's an evolution of knowledge.
00:41:33.000 --> 00:41:35.000
With respect to DFMEAs,
00:41:35.000 --> 00:41:44.000
there's a few different techniques. If you look at the old mil-specs on how they do it, they do it right from the component on up: screws, nuts, bolts, stuff like that.
00:41:44.000 --> 00:42:05.000
That's pretty laborious. I only do that for safety-critical subsystems that are designed to react to very specific things; there I might do a bottom-up analysis of every transistor and every resistor and capacitor. But generally, what I'll do on the hardware side is break it down to a module. This module does a particular job.
00:42:05.000 --> 00:42:12.000
How can it fail at that job? Those are the failure modes that I'll look at, sort of like software modules in a software FMEA.
00:42:12.000 --> 00:42:16.000
I'm not looking at every line of code. I'm looking at a module: what are its inputs, what are its outputs, what can go wrong?
00:42:16.000 --> 00:42:22.000
How can it break? I make sure I include all those things. And if you go through that exercise
00:42:22.000 --> 00:42:25.000
on the design side, that's, again,
00:42:25.000 --> 00:42:28.000
a very strong starting point.
00:42:28.000 --> 00:42:32.000
And I wouldn't get too worried about: did I do enough?
00:42:32.000 --> 00:42:39.000
By the time you get through the process, you'll generally feel: yeah, I think I've got everything. I can't think of anything
00:42:39.000 --> 00:42:42.000
that I missed. And that's the state you want to be in:
00:42:42.000 --> 00:42:45.000
I can't think of anything I missed. That's good.
00:42:45.000 --> 00:42:54.000
And then for the UFMEA, as I mentioned: you want to go through every step of the treatment, every step of the device's use, all the different scenarios it's used in, every step of that process,
00:42:54.000 --> 00:42:57.000
and look at all the ways that that can go wrong,
00:42:57.000 --> 00:43:00.000
how a typical user might do it wrong,
00:43:00.000 --> 00:43:05.000
and include those in your FMEA. That's your good starting point. And then you want to do your human factors testing,
00:43:05.000 --> 00:43:09.000
your formative testing. Go through the human factors formative testing,
00:43:09.000 --> 00:43:11.000
and you want to do enough formative testing so that
00:43:11.000 --> 00:43:14.000
you stop learning new things
00:43:14.000 --> 00:43:19.000
with every one. You know, once you've gone through a human factors formative test, you're gonna do multiples.
00:43:19.000 --> 00:43:23.000
When the last human factors formative test
00:43:23.000 --> 00:43:39.000
that you did didn't teach you anything new, you've probably done enough formatives. And every one of the observations that you had in that formative needs to go in your FMEA, and you need to note it. And then you have a complete enough UFMEA to then inform what your labeling and your training need to be,
00:43:39.000 --> 00:43:43.000
what UI elements have to be tweaked, how the hardware might have to be tweaked.
00:43:43.000 --> 00:43:48.000
And then you'll be in a good position to start considering
00:43:48.000 --> 00:43:52.000
what your script for your summative study is gonna have to be.
00:43:52.000 --> 00:43:57.000
Okay, awesome. We do have a question here.
00:43:57.000 --> 00:44:05.000
So patient safety is the theme here, including the supporting FMEAs. They're asking: what about a cybersecurity FMEA?
00:44:05.000 --> 00:44:08.000
Yep. Yep.
00:44:08.000 --> 00:44:17.000
I consider it kind of a parallel analysis. There's two things I like to say about that in particular. One is that
00:44:17.000 --> 00:44:22.000
its structure's gonna be fairly similar in terms of what goes from left to right.
00:44:22.000 --> 00:44:24.000
It should be structured a bit like a software,
00:44:24.000 --> 00:44:26.000
excuse me, a DFMEA for software.
00:44:26.000 --> 00:44:31.000
But depending on whether or not you're using the NIST standard or
00:44:31.000 --> 00:44:33.000
CVSS ranking,
00:44:33.000 --> 00:44:39.000
you may end up with additional columns. And so that's kind of a whole separate presentation we can go into.
00:44:39.000 --> 00:44:41.000
The
00:44:41.000 --> 00:44:43.000
important thing,
00:44:43.000 --> 00:44:54.000
I would argue, is that when you're defining your severity levels for a cybersecurity FMEA, you wanna make sure that at least the higher-level severities
00:44:54.000 --> 00:45:00.000
for the cybersecurity FMEA and your safety assessment here, this risk management file, should be
00:45:00.000 --> 00:45:05.000
comparable; they should have similar sorts of levels. You'll probably have additional
00:45:05.000 --> 00:45:10.000
lower-level severities, like loss of personal information, you know,
00:45:10.000 --> 00:45:14.000
business risks associated with it, which you're required to include.
00:45:14.000 --> 00:45:18.000
So those should be ranked, you know, obviously less than safety.
00:45:18.000 --> 00:45:23.000
So you might end up actually with more than 5 levels of severity in your cybersecurity FMEA.
00:45:23.000 --> 00:45:33.000
But as long as the upper levels, the ones that have to do with patient safety and operator safety, are included, then you've got a one-to-one correlation. And if you use a similar
00:45:33.000 --> 00:45:39.000
probability scale for the FMEA, then you're talking about the same risks in both cases. And doing that
00:45:39.000 --> 00:45:43.000
really simplifies
00:45:43.000 --> 00:45:47.000
how to relate the information that the cybersecurity FMEA
00:45:47.000 --> 00:45:51.000
gives you in the context of patient safety. So there
00:45:51.000 --> 00:45:53.000
should be some overlap,
00:45:53.000 --> 00:45:57.000
but they are treated differently.
00:45:57.000 --> 00:46:03.000
Okay, great. Could you expand on the difference between mitigation and risk control a little?
00:46:03.000 --> 00:46:08.000
Well, mitigation was the term that was used way, way back; I think it was actually in the MIL spec.
00:46:08.000 --> 00:46:12.000
I think tacitly, you could argue that they're more or less the same thing.
00:46:12.000 --> 00:46:18.000
It's just that the standards committee, for whatever reason, decided that the term mitigation was not
00:46:18.000 --> 00:46:25.000
suitable, and they switched to risk control. And so that's why they really would prefer to see that term used.
00:46:25.000 --> 00:46:29.000
I mean, obviously, in colloquial language you can use the term,
00:46:29.000 --> 00:46:32.000
but I would avoid using it in written materials.
00:46:32.000 --> 00:46:40.000
Okay, I'm gonna ask you to define something else, Chris. Can you define criteria for risk acceptability, please?
00:46:40.000 --> 00:46:45.000
Yes, okay. So I have additional slides for this, because I suspected this sort of a question would come up.
00:46:45.000 --> 00:46:55.000
You need to do 2 things to start with. First, you need to spell out levels of severity in an unambiguous way. You need to define them. And I just plucked this
00:46:55.000 --> 00:46:58.000
table from,
00:46:58.000 --> 00:47:01.000
Excuse me
00:47:01.000 --> 00:47:07.000
plucked this table from our SOP. So this is just a starting point. We'll usually start with this, and then,
00:47:07.000 --> 00:47:12.000
For whatever product we're working on, we'll tweak it as necessary to make it match that product.
00:47:12.000 --> 00:47:19.000
So you see, there's 5 levels of severity, all the way from negligible to catastrophic. Negligible is essentially, more or less, you know,
00:47:19.000 --> 00:47:23.000
cuts, some bruises, nothing that would require more than a Band-Aid.
00:47:23.000 --> 00:47:26.000
Minor might be a laceration,
00:47:26.000 --> 00:47:36.000
broken bones, something like that; we might consider a small broken bone minor. Serious is something where you need to get to the hospital,
00:47:36.000 --> 00:47:39.000
and you're probably gonna need surgical intervention. Critical is:
00:47:39.000 --> 00:47:42.000
You're permanently impaired. It's irreversible damage.
00:47:42.000 --> 00:47:45.000
Catastrophic is life-threatening.
00:47:45.000 --> 00:47:52.000
So this is for a more sophisticated product with a slightly higher risk profile; you need 5. For a simpler one,
00:47:52.000 --> 00:48:04.000
you probably could get away with just 3 levels; probably knock off the top 2 for simpler devices. I don't recommend more than 5 levels; that additional granularity doesn't really buy you very much, and it just makes the analysis more cumbersome.
00:48:04.000 --> 00:48:08.000
Then you need to establish levels for
00:48:08.000 --> 00:48:12.000
Oops. It jumped over it. You need to establish
00:48:12.000 --> 00:48:14.000
levels for probability.
00:48:14.000 --> 00:48:18.000
And so here we have 5 levels for our generic template.
00:48:18.000 --> 00:48:21.000
Again, for simpler devices you could probably get away with fewer levels.
00:48:21.000 --> 00:48:29.000
So anywhere from improbable, which almost never happens, to remote, which might happen occasionally in the fleet.
00:48:29.000 --> 00:48:32.000
Occasional might happen
00:48:32.000 --> 00:48:35.000
for most of the devices in the fleet once in a while,
00:48:35.000 --> 00:48:39.000
all the way to frequent, which happens almost all the time.
00:48:39.000 --> 00:48:49.000
Now, these can be based on incidences per time in service. Think of a pacemaker, where it's just sitting and doing its job continuously for an extended period.
00:48:49.000 --> 00:48:54.000
It could be number of uses or treatments, for instance a stapler or cauterizing tool;
00:48:54.000 --> 00:48:57.000
you count it on the number of treatments.
00:48:57.000 --> 00:49:01.000
Or disposables used, like a syringe in a
00:49:01.000 --> 00:49:08.000
wearable; an infusion pump would be an example of another way to rank the probabilities.
00:49:08.000 --> 00:49:12.000
It's entirely up to your
00:49:12.000 --> 00:49:15.000
leadership and your team to figure out how to rank this
00:49:15.000 --> 00:49:24.000
and set this up. And if you notice here, the scaling of the percentages roughly
00:49:24.000 --> 00:49:27.000
scales in orders of magnitude.
00:49:27.000 --> 00:49:33.000
Improbable is roughly 10 times less likely than remote.
00:49:33.000 --> 00:49:45.000
Remote is roughly 10 times less likely than occasional. Once you get to probable and frequent, it starts to become more like a factor of 2. That's kind of how we do it; that usually works for us.
00:49:45.000 --> 00:50:01.000
And then your combination of severity and probability is usually constructed as a matrix: you list off all your different severity levels and all your different probability levels, and you go through it as an organization and decide what combinations are acceptable and what combinations are not acceptable.
00:50:01.000 --> 00:50:04.000
And then you use those terms
00:50:04.000 --> 00:50:09.000
to populate your analysis where they're expected,
00:50:09.000 --> 00:50:14.000
and your risk is populated based on what's in this matrix. And that's how you do that.
00:50:14.000 --> 00:50:21.000
Often these severities and probabilities are established on an organizational basis.
00:50:21.000 --> 00:50:25.000
They can also be established on a product or product type basis.
00:50:25.000 --> 00:50:33.000
It's entirely dependent on what the greater team thinks is prudent for that application.
00:50:33.000 --> 00:50:35.000
Okay. Great. Thank you.
00:50:35.000 --> 00:50:45.000
We have a question around testing the failure envelope. So: shouldn't part of risk management be determining and testing the failure envelope
00:50:45.000 --> 00:50:53.000
of the design parameters, with some safety margin, say 1.5, beyond the use design intent?
00:50:53.000 --> 00:50:59.000
You can do that a couple of different ways. You can embed that in your requirements,
00:50:59.000 --> 00:51:05.000
your design requirements, or you can embed that in your test plan, your qualification test plan.
00:51:05.000 --> 00:51:10.000
I don't know that you necessarily need to have that explicitly stated in your risk
00:51:10.000 --> 00:51:12.000
control file.
00:51:12.000 --> 00:51:17.000
But absolutely, yeah, if you want to demonstrate margin of safety:
00:51:17.000 --> 00:51:21.000
I wouldn't necessarily specify it as a requirement, because
00:51:21.000 --> 00:51:25.000
not everything is gonna be able to utilize the same margin.
00:51:25.000 --> 00:51:28.000
Like, how do you apply that margin to software? It has no meaning.
00:51:28.000 --> 00:51:39.000
But you can definitely do that; I would recommend you consider that in your qualification test plan.
00:51:39.000 --> 00:52:01.000
Okay, great. I do have a couple of questions here on country differences, global differences for risk approaches, so I'm just gonna combine them, and apologies, I'm kind of summarizing a couple of these together. So, for example: are there regional differences? And specifically, does the EU require that risk be controlled as far as possible?
00:52:01.000 --> 00:52:10.000
Yup, yeah. To my knowledge, the only regulatory language that elaborates on 14971 is Part
00:52:10.000 --> 00:52:12.000
10.2 of the MDR,
00:52:12.000 --> 00:52:17.000
which states any risk associated with a device must be reduced as far as possible.
00:52:17.000 --> 00:52:24.000
This particular language is related to that annex of 14971 I mentioned earlier.
00:52:24.000 --> 00:52:33.000
And it's been interpreted to mean that every failure mode or hazard must have risk controls applied.
00:52:33.000 --> 00:52:40.000
I think the consensus among industry people I've spoken with is that that's not necessarily what it's intended to do.
00:52:40.000 --> 00:52:44.000
Really, what it was supposed to be for is to make sure that
00:52:44.000 --> 00:52:47.000
you don't withhold
00:52:47.000 --> 00:52:50.000
a viable risk control measure
00:52:50.000 --> 00:52:52.000
just for economic or financial reasons.
00:52:52.000 --> 00:52:56.000
You can't use that as an excuse not to do the right thing.
00:52:56.000 --> 00:53:11.000
I've been fortunate that on every program I've ever worked on, the question of the cost of a risk control has never come up. But that's not always the case. And that's what the EU really wants to make sure happens: that you don't pull any punches just because it's gonna cost you something.
00:53:11.000 --> 00:53:19.000
So that's really the only one. Now, I'm sure there's tacitly the same kind of thinking on the part of the FDA, but it's not explicitly stated in anything I've read.
00:53:19.000 --> 00:53:26.000
But based on the reviews that the FDA has subjected my products to, they think the same way. It's just not
00:53:26.000 --> 00:53:31.000
stated quite so overtly.
00:53:31.000 --> 00:53:39.000
Okay, great. What about your take, Chris, on changing the severity after implementation of the controls?
00:53:39.000 --> 00:53:48.000
Well, as I mentioned, if you've got a risk control that definitely interrupts the sequence of events, I don't necessarily have a problem with that at all, and I would spell out in the notes
00:53:48.000 --> 00:53:54.000
how that works, and make it very clear. I generally don't do that as a rule;
00:53:54.000 --> 00:54:03.000
certainly not willy-nilly. I wanna make sure that there's a really strong, solid, irrefutable argument for why that's justified.
00:54:03.000 --> 00:54:06.000
But where it's justified, it absolutely is appropriate.
00:54:06.000 --> 00:54:11.000
This is kind of the underlying theme behind intrinsically safe design.
00:54:11.000 --> 00:54:16.000
If you make a device so it's intrinsically safe, so that certain hazards can't exist,
00:54:16.000 --> 00:54:20.000
say so; take advantage of it.
00:54:20.000 --> 00:54:31.000
Okay. And can you expand a little bit on when it's acceptable to reduce the severity level in the FMEA?
00:54:31.000 --> 00:54:36.000
Well, the street example, like I said, is you could use,
00:54:36.000 --> 00:54:38.000
you know, pedestrian
00:54:38.000 --> 00:54:45.000
causeways for crossing over streets. Now, that's not always appropriate; in a residential area that would be a big hassle.
00:54:45.000 --> 00:54:50.000
It really is based on an application-specific
00:54:50.000 --> 00:54:53.000
case; I couldn't give you any specific examples.
00:54:53.000 --> 00:55:00.000
It would depend on what product we're talking about. But certainly, like I said, if you can simply say, here's the sequence of events:
00:55:00.000 --> 00:55:08.000
read through it, make sure it's verbose enough to make it very clear what has to happen for that hazardous situation to lead to a harm. If you can break that chain
00:55:08.000 --> 00:55:19.000
using a risk control, and you can prove that risk control does break that chain, then you can justify changing that severity.
00:55:19.000 --> 00:55:26.000
Okay. Now I'm gonna ask a question on post-market surveillance activity.
00:55:26.000 --> 00:55:29.000
In regard to that, they're specifically looking at
00:55:29.000 --> 00:55:37.000
using risk levels to decide whether an action is required to reduce complaints,
00:55:37.000 --> 00:55:49.000
knowing that we have both the hazard analysis and the FMEA.
00:55:49.000 --> 00:56:08.000
So they're asking: do you recommend using the hazard analysis table to make that determination?
00:56:08.000 --> 00:56:12.000
Alright folks. Well, I apologize. It looks like we lost Chris.
00:56:12.000 --> 00:56:22.000
He might jump back on here in a second, but we only have about 5 minutes left, so we will make sure to get your questions to
00:56:22.000 --> 00:56:28.000
Sunrise Labs, all those that we didn't get a chance to answer live, and I'm sure they will follow up with you
00:56:28.000 --> 00:56:42.000
after the webinar. Again, thank you for attending. Thank you, Chris, and thank you, Sunrise Labs, for sponsoring today's webinar. Thank you all for joining us, and I do encourage you to check out mathematic
00:56:42.000 --> 00:56:49.000
dot com slash events to see all the other great events we have coming up, and I encourage you to come to a future event.
00:56:49.000 --> 00:56:54.000
Have a great day. We'll see you next time.