Transcript for:
Lecture Notes on Quality Control Protocols

Hello again. We have been discussing the protocols for quality control, and we have come a long way from where we started. We began with different kinds of distributions and measures of central tendency, and we zeroed in on the Gaussian, or normal, distribution, where the mean equals the mode equals the median. Then we saw how the Gaussian can be employed to make a Levey-Jennings (LJ) chart for use in the laboratory, and we talked about the control rules, their application, and their violations. We discussed why these rules are important: they can detect errors. So we spent time on the kinds of errors that can happen in the lab, the reasons for them, and some ideas about how to control them. Then we moved on to making the right chart, because you can detect errors only on the right chart; if you make the wrong chart, the whole purpose is defeated. From there we went on to starting a new quality control lot and how to put the right numbers on the chart when you do parallel testing. After that we took up another concept in quality control, bias: how to compute it, and, once you have the bias, how to calculate the total error. We also discussed why you would calculate total error at all: total error calculations help you set quality specifications and tolerance limits in your lab, once you have the total allowable error, which you find from the available databases; and we covered the different databases that can be used to find the allowable error you want to adopt in your laboratory.

All these points come under internal quality control as per ISO 15189 clause 5.6, that is, ensuring the quality of examinations. There is one more concept under this clause that I would like to discuss. Yesterday I told you that calculating bias is not mandatory as per NABL rules as of now. So call these optional concepts: bias, total error, total allowable error, and what we are going to talk about now, the Sigma metric. These are concepts that are not mandatory, but that you may want to understand and think about employing, because they are good concepts that will enable your laboratory to set a specified quality.

So we come to the final concept in internal quality control: the Sigma metric. This is sometimes called fifth-generation quality control, and we will see what Sigma metrics mean in the laboratory. All of you will have heard about Six Sigma in some context or the other. So what is Six Sigma? It rests on nineteenth-century mathematical theory, but it entered today's mainstream business world through the efforts of an engineer at Motorola in the 1980s, and it is one of the foremost methodological practices for improving services and manufacturing. Why are we talking about Sigma? Because Sigma emphasizes the need to define tolerance limits that describe intended use; we will understand what tolerance limits are in a little while. The goal of Six Sigma is world-class quality. It provides a uniform way of describing quality on the Sigma scale, and a Sigma-metric QC selection tool is included in the CLSI C24 (3rd edition) guidance on statistical quality control for quantitative measurements. There are manual tools as well as computerized programs; we will see a manual tool in this presentation.

So what does Six Sigma mean? To bring operations to a Six Sigma level is to bring the number of defects down to fewer than 3.4 defects per million opportunities or occurrences. That means if you are doing one million tests in the lab and you make only 3.4 mistakes, you have achieved Six Sigma. That is a tall order, and it shows you the significance of setting tolerance limits: how many mistakes are you allowing yourself to make? So how do you get to that point?
You get there by providing methods for systematically identifying and eliminating errors. That is what we have been talking about all this while: error detection programs such as internal quality control, and the many mechanisms of controlling your operations. One more thing you need to do is define your quality specifications, that is, set your goals. You need to say, "I am going to be at least 4 Sigma, or 5 Sigma, in this operation," and then strive towards it; that is what we call quality specifications. We discussed one mechanism for setting quality specifications yesterday: setting the total allowable error as the parameter to stop at. You say, "This is how much error I can make; I have to confine myself to the allowable limits." Another mechanism is setting goals through Six Sigma. And in the scheme we have already seen many times, you have your LJ charts and your bias detection mechanism; take them together and you can understand your total error. That is the mechanism we have been talking about.

So back to quality specifications. The first is total allowable error, which we just discussed: you set your goal as TE less than TEa. The second is the Sigma metric, which can also be used to define tolerance limits through the DPMO, the defects per million occasions or opportunities. If you look at the conversion of DPMO to Sigma: we already said that if you make fewer than 3.4 defects per million operations, your laboratory's standard is Six Sigma; 3.4 defects gives you a Six Sigma. There are scales that give you the conversion: if you make 1,350 mistakes per million operations, you have a 4.5 Sigma. You can read off the whole conversion table; I have put only parts of it here. And a 1.8 Sigma is not an acceptable Sigma at all.
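The DPMO-to-Sigma conversion quoted above (3.4 DPMO at 6 Sigma, about 1,350 at 4.5 Sigma) follows from the normal distribution with the conventional 1.5-Sigma long-term shift. As a minimal sketch, assuming that convention and using only the Python standard library:

```python
from statistics import NormalDist

def dpmo(sigma: float) -> float:
    """Defects per million opportunities for a given Sigma level,
    using the conventional 1.5-sigma long-term shift."""
    # Probability of a result falling beyond (sigma - 1.5) SDs on one side
    return 1_000_000 * (1 - NormalDist().cdf(sigma - 1.5))

for s in (6.0, 4.5, 1.8):
    print(f"{s} Sigma -> {dpmo(s):,.0f} DPMO")
```

Published Six Sigma tables vary slightly with rounding, but this reproduces the figures the lecture quotes: about 3.4 DPMO at 6 Sigma, about 1,350 at 4.5, and about 3.82 lakh at 1.8.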
That corresponds to 3.82 lakh, that is 382,000, mistakes per million operations. So while we are on the subject, just imagine the quality of your own laboratory: how many mistakes might you be making, and what is acceptable? Is a 1.8 Sigma acceptable? Can you make 3.82 lakh mistakes per million operations? Will you let that be something your lab is doing? When you look at these numbers, you immediately think of tolerance limits: "I am NOT going to tolerate this kind of error; 1.8 Sigma is not acceptable to me; 4.5 Sigma, maybe I can consider." This is where you need to make a call about how you set your tolerance limits.

Let us look at some Sigmas that have been put up on the Westgard QC site. It says the Sigma metric of airline safety is six. Would you get into an aircraft if they did not offer you that kind of safety? Very unlikely. So airline safety is a Six Sigma operation: they have to make sure every nut and bolt is in place, nothing is missing, and every operation is perfect. Airline baggage handling, though, is maybe not that good, at 4.15 Sigma, but you can still deal with it; that is roughly the 1,500 or so errors per million operations we mentioned, and we may accept it. Departure delays are at 2.3 Sigma, which is not acceptable; would you like sitting around in the airport lounge for however long with no flight arriving? All these figures are put up to illustrate tolerance limits: perhaps for departure delays they have set a tolerance limit of 4.5 and are not able to attain it, but at least they are working towards it. That is what these figures tell you.

Coming to the lab: pre-analytical sample handling is quoted at 5.1 Sigma in Western estimates. I would not think that is the case in our country at this point; we may be at something like 2.3 to 2.5 now, so we need to set the goal and work towards it. Hemolyzed specimens are at 4.1, and controls exceeding limits at 3.4, and that one is within our laboratories: when running controls, only about 3.4 Sigma has been attained. That is still a lot of errors, and we are talking about really good, standardized labs attaining 3.4; in our case we are just beginning to make these efforts in quality control, so we really have to set our goals and work towards them.

Now, how about your lab? We are going to take that figure of 3.4 Sigma from the previous slide, controls exceeding limits, as an estimate of the number of errors we are making. We could be making many more, but let us take 3.4 as an example and convert it into the number of defective reports you could be giving per day. Assume the average load is 1,500 patients per day in a medical college laboratory, and, again an estimate, five tests per patient; the total is then 7,500 tests per day, and the total tests per year come to about 27 lakh 37 thousand. At 3.4 Sigma that converts to about 78,351 defects per year, and on a daily basis, out of your 1,500 patients and 7,500 tests, you would be releasing roughly 250 defective results a day. Is even that number acceptable, assuming your Sigma really is 3.4, which it may not be; it could be much lower? That is something we need to think about, and whether you want to set a goal for yourself and work towards it is something we have to consider.

So now we are going to talk about the technicalities of finding the Sigma in your laboratory, and that is done for every analyte. Understand that it is not the laboratory that has a Sigma; it is the analyte.
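The defect projection above can be reproduced directly. This sketch follows the lecture's worked example (1,500 patients/day, 5 tests each); the DPMO value used for 3.4 Sigma is an approximation, since the exact figure depends on the conversion table assumed:

```python
# Rough projection of defective results at a given defect rate.
# The DPMO figure for 3.4 Sigma is approximate (table-dependent).
PATIENTS_PER_DAY = 1500
TESTS_PER_PATIENT = 5
DPMO_AT_3_4_SIGMA = 28_650  # defects per million opportunities (approx.)

tests_per_day = PATIENTS_PER_DAY * TESTS_PER_PATIENT   # 7,500
tests_per_year = tests_per_day * 365                   # 2,737,500
defects_per_year = tests_per_year * DPMO_AT_3_4_SIGMA / 1e6
defects_per_day = defects_per_year / 365

print(f"Tests per year:   {tests_per_year:,}")
print(f"Defects per year: {defects_per_year:,.0f}")    # roughly 78,000
print(f"Defects per day:  {defects_per_day:,.0f}")     # roughly 215
```

The lecture rounds the daily figure up to roughly 250 defective results; the exact number shifts with the DPMO value assumed for 3.4 Sigma, but the order of magnitude is the point.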
Even when an analyte has two clinical decision levels, where you are running quality controls at a high level and a low level, the Sigma has to be worked out for both levels, so that you understand the performance of that analyte at whatever clinical decision level you are able to calculate it for.

Now the most technical view of Six Sigma, again from the Westgard site. You are assuming your Gaussian is here; think back and remember your Gaussian. Your mean is on target, you are assuming your test has no bias, and you are assuming a well-rounded, well-controlled Gaussian with an acceptable standard deviation. If, in that situation, you can accommodate six standard deviations on each side of your mean, then even if your Gaussian shifts by six standard deviations you are still within the limits of allowable error; that is what we mean when we say a method has Six Sigma performance. Just to reiterate: when the bias is zero and the SD is narrow enough that six standard deviations fit on the negative side and six on the positive side, between your mean and the total allowable error limits, your performance is Six Sigma, and your DPMO is just 3.4.

Coming back to how you set your limits: total allowable error, I hope you remember, has a lower limit and a higher limit; we saw that yesterday. The lower end of your total allowable error is here, the higher end is here, your Gaussian stands right in the middle, your mean is the true value, there is no bias, your standard deviation is really well controlled, your imprecision is very small, and you can accommodate six standard deviations on either side of the mean. That is your Six Sigma performance. So to get Six Sigma, you need your bias really controlled and your imprecision really controlled, and then your DPMO is only 3.4 per million operations.

So there should be some way of calculating it, and even if you do not follow the whole theory, that is fine; you can go straight to the calculation. Sigma equals total allowable error minus bias, divided by the SD, if you are calculating in units; you can also do it in percentages, and we will talk about that now.

At this point I want to stop and tell you one thing. We have talked about calculating the bias and now calculating the Sigma, and it might sound a little overwhelming, but it really is not; these are very simple calculations. All you need to do is find your CV percent or your SD, and know your bias; that is the important thing, and it is something you get from your peer group data. So I am saying it once again: when you are getting your internal quality control program, please make sure your QC provider will give you peer group data. The bias is a key element everywhere, and without a true value or target value you cannot really calculate it. You cannot really go forward from your LJ chart unless you have some kind of peer group support, so, again, get your peer group data. If your peer group data is there, you can calculate the bias, you can take the SD and CV from your LJ charts, and then it is really easy to calculate. All these calculations can very easily be done in the QC software on this website: all you need to do is put in the numbers, and for the TEa, define where you are getting it from. What is your source: did you get it from CLIA, did you get it from biological variation values? Even if you do not enter that, it is not a mandatory field; you can still get the numbers and understand your performance. So it is important that, if you can get started on this, it would be
good: you will know where you stand in terms of quality.

So once again, the calculation: Sigma equals total allowable error minus bias, divided by the standard deviation. Now look at this graph: this much bias has already built up. Here is one standard deviation, two standard deviations; the bias has already taken up four standard deviations, and the Gaussian can shift perhaps only one more time before it reaches the upper end of the allowable limit. There is another calculation we can do here, called the critical systematic error: how far can your mean shift? That is a very important question, because it tells you that you have to hold your mean in a certain position, avoid your biases, and correct your systematic error. I am not covering critical systematic error in this video, but on the Labs for Life site there is a module, volume one, with more explanation of critical systematic error, and I would suggest that whoever is interested should please go and read it.

So, continuing with the Sigma calculations: Sigma equals (TEa in units minus bias in units) divided by SD, or (TEa percent minus bias percent) divided by CV percent; both can be done. We have two examples here. The first uses the CLIA database for total allowable error. For glucose it says target value plus or minus 6 mg/dL or 10 percent, whichever is greater; I am using the 10 percent value here. Assume your bias percent is 5: your observed mean is 190 and the target is 200, so the difference is minus 10, which becomes the absolute number 10, and 10 divided by 200 gives a 5 percent bias. Assume your CV is 3 percent. By the formula, (10 minus 5) divided by 3 gives a 1.67 Sigma, which is a very poor performance; as we saw much earlier, that means lakhs of mistakes per million operations. So this is an unacceptable Sigma. And look at the CV here: the CV is good, and even the bias percent is not too high, but that is enough for you to violate your acceptable ranges, because you are going by a database whose limits were decided using the criticality of the parameter and PT reports; we talked about how these total allowable error limits were defined. CLIA has defined it in a certain way after a lot of research: 10 percent is the total allowable error, and if you use that in the calculation and look at your Sigma, you will know whether you really have good quality in your laboratory. The numbers alone do not even look alarming: a 3 percent CV, you would think that is fine, and a 5 percent bias; but when you put them together and compare them with the total allowable error, you find you are actually not doing too well.

The other example is in units, for potassium. The total allowable error for potassium is target value plus or minus 0.5 mmol/L. Assume the bias is 0.3: the mean is 6.3 and the target is 6. When you do the calculation you get a 4 Sigma performance, which is a good performance. So you would say, "That is good; maybe I can improve, but at this point I will not worry about my potassium performance, though I might worry about my glucose performance." This is how you evaluate your performance by Sigma and then set your tolerance limits.

So now, how do you put Sigma to work in your laboratory? That is the question we have to answer. As I already told you, there are two mechanisms. One is the critical systematic error; its calculation is easy, Sigma minus 1.65, an easy number to arrive at, but the concepts behind it are a little different, and they are in the module, so please read through that. The second important mechanism is the rule selection guidelines. Now I am coming back to the concept we discussed with the LJ graph: that you can use your rules as single rules or as multi-rules.
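The two worked examples, together with the critical systematic error mentioned alongside them, fit into a small helper. This is a sketch of the formulas as quoted in the lecture; note that the potassium SD of 0.05 mmol/L is back-calculated from the stated 4 Sigma result, since the lecture does not give it explicitly:

```python
def sigma_metric(tea, bias, imprecision):
    """Sigma = (TEa - bias) / SD, with all three either in units
    (TEa units, bias units, SD) or in percent (TEa%, bias%, CV%)."""
    return (tea - abs(bias)) / imprecision

def critical_systematic_error(sigma):
    """Delta SEc = Sigma - 1.65, as quoted in the lecture."""
    return sigma - 1.65

# Glucose, in percent: TEa 10%, observed mean 190 vs target 200, CV 3%
bias_pct = abs(190 - 200) / 200 * 100            # 5.0 percent
glucose_sigma = sigma_metric(10, bias_pct, 3)    # about 1.67 -> unacceptable

# Potassium, in units: TEa 0.5 mmol/L, mean 6.3 vs target 6.0
# SD of 0.05 assumed (back-calculated from the stated 4 Sigma)
potassium_sigma = sigma_metric(0.5, 6.3 - 6.0, 0.05)

print(round(glucose_sigma, 2), round(potassium_sigma, 1))
```

The same function works for units or percentages, as long as all three inputs are expressed the same way.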
So now you would want to understand: for which parameter can I use a single rule, and where do I need multiple rules? That can be decided if you know your Sigma, because there are tools available that will direct you to the group of rules you may want to employ to achieve your target quality. If you want to arrive at, say, a 5 Sigma or a 4 Sigma operation, you may want to take a few rules together and employ them in your laboratory to safeguard the analysis of that particular parameter.

So here is a table I would direct your attention to. If your Sigma metric is less than two, the method has unacceptable performance, remember your glucose, and does not meet your requirement for quality; it is not acceptable for routine operations, and you might want to take some drastic measures to change whatever you are doing. If your method falls between two and three, this is marginal performance: you might want to do more control runs per day, have well-trained operators, reduce rotation of personnel (whoever is trained should stay in charge of that operation), do more aggressive preventive maintenance, monitor patient test results carefully, and make continual efforts to improve method performance. At two to three you can still run the test, but you need really good mechanisms safeguarding that analysis. If the Sigma metric is between three and four, the method has fair performance, meets your requirement for quality, and can be managed in routine operations, but it will require a multi-rule procedure with four to six control measurements; we talked about all of that. Four to six Sigma is good performance, and more than six is very good performance that can be managed using just one control, if your regulatory requirements permit using only one control. Now, before we see how to use the selection tool, let us

understand a little more about making an effective QC design for your lab. An effective QC design ensures performance by quickly detecting medically significant errors: Ped, remember that term, the probability of error detection, should be more than 90 percent. It should also generate very few QC rule violations when there are no significant errors: a good QC protocol should not flag things unnecessarily, forcing you to take corrective action when none is required, or to reject runs that need not be rejected. So the percentage of error detection should be more than 90 percent, and the false rejection rate should be less than 5 percent. The third point is that the design should use the fewest number of QC runs, for the economics of it: you do not want to waste QC material, reagents, and consumables, so make it the minimum, the optimum, number of runs. And it should meet regulatory and accrediting bodies' requirements; that is also important, because if NABL asks for two levels of control for a certain number of patients, that is a regulatory requirement you have to meet for NABL accreditation. At a very high Sigma, like Six Sigma, technically you may not need more than one level of control, but if the NABL regulations require two levels, you have to comply. An effective QC design, to reiterate, will detect medically important errors more than 90 percent of the time, and if 90 percent error detection cannot be provided by a single QC procedure, then multi-rule QC procedures should be used. One last point here: the 1-2s rule should be avoided as a rejection rule, to minimize waste and reduce cost. It is important that you do not take 1-2s as a rejection rule; we talked about this in the initial Westgard rules presentation. But, again, I reiterate: if you have only one level of QC in your
laboratory, 1-2s should be a rejection rule. At this point in our discussion, though, we have come a long way from being able to say, "I have only one level of QC"; we are assuming you have multiple levels of QC, bi-level or tri-level as required and as recommended, and this discussion is progressing under the assumption that you have that internal QC material in your laboratory.

So now, rule selection as per method performance. The two key figures to keep in mind are the percentage of error detection and the percentage of false rejection: error detection should be more than 90 percent, false rejection less than 5 percent. You can use many kinds of tools for selecting multi-rules; what we will discuss here is the power function graphs for rule selection by Sigma, which is also the CLSI-recommended procedure. There are also the EZ Rules that Westgard has suggested, and OPSpecs tools are available; some of these are online, and you can check them out.

Before we start on the power function graphs, we need to understand two more concepts: the N and the R. The N here is very close to the N we talked about earlier in our discussion of the Westgard rules; it represents the total number of control measurements available at the time a decision is made. For example, if two levels of control measurements are available within one run, N equals two; if three are available, N equals three. It is simple: for a bi-level control, at any one point you will have an N of two, and for a tri-level, an N of three. But what if that analyte's performance requires you to apply a 6x rule, or a 4-1s rule, or a 3-1s rule, for which you need more measurements than are ordinarily available? For a 6x rule you need six data points, and if you have only two levels of controls, you will really have to run them three times. So that is what your N is: the number of data points that should be available at one instance when you are reviewing your quality control before starting your tests. R, on the other hand, represents the number of runs: how many repeats are required to accumulate that N; if you have to repeat it two times, R will be two. You will see this again when we look at the power function graphs shortly.

Now look at the power function graphs: there are graphs for two-level QC and for three-level QC, and what power function graphs are will become clear as we discuss the details. On the x-axis you have the Sigma scale, running from 1.65 Sigma, through 2.65, 3.65, and 4.65, up to 5.65 Sigma; on the lower side of the x-axis you have the critical systematic error, which is simply Sigma minus 1.65, so at 1.65 Sigma the critical systematic error is 0. It is the same thing: you can go either by Sigma or by critical systematic error, and we will go by Sigma. On the y-axis you can read both the percentage error detection and the percentage false rejection. The false rejection is at the lower end: you can see how much false rejection each of these curves gives. At the upper end, at 0.90, since you are looking for 90 percent error detection, you check where the curves cut this line; I will show you in a minute how we do it. There are eight power function curves, one through eight, all intersecting at various points on the graph, and the right-hand column shows the key: the sets of rules to be applied in each situation.
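The N-and-R bookkeeping described above reduces to a one-line calculation: how many runs of your control material are needed to accumulate the data points a rule requires. A small sketch, with the rule names and level counts taken from the lecture's examples:

```python
from math import ceil

def runs_needed(points_required: int, control_levels: int) -> int:
    """Runs needed so that N control values are available at review,
    e.g. a 6x rule with bi-level controls needs ceil(6 / 2) = 3 runs."""
    return ceil(points_required / control_levels)

print(runs_needed(6, 2))  # 6x rule, bi-level controls  -> 3
print(runs_needed(4, 2))  # 4-1s rule, bi-level controls -> 2
print(runs_needed(6, 3))  # 6x rule, tri-level controls  -> 2
```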
So let us see the details with a bigger version of the graph. Once again: this is your Sigma scale, this is your systematic error, and this is your percentage error detection; this is the level at which error detection has to be checked. This is your percentage of false rejection, and the false rejection is also listed in the key column; for this set of rules, for instance, the false rejection is 0.07 percent, so the false rejection sits below this line. Now look at the power function curves, one through eight. All of them start from their point of false rejection and rise to cut the 90 percent error detection line at some point. You now align your Sigma with where the curves cut the 90 percent line to find out which set of rules should be applied for a particular Sigma.

Let us look at a few examples: a 4 Sigma performance here, a 5 Sigma performance here, and a 3 Sigma performance here. Start with the 5 Sigma. The curve that cuts the 90 percent line at 5 Sigma is curve number 7, and for a 5 Sigma performance you need to apply only a 1-3s rule in your analysis; it has essentially no false rejection, your N should be two, and your R should be one. It is as simple as that: if you have a 5 Sigma performance, all you need for maintaining quality is the 1-3s rule with two control levels and one run. That is what a good analytical performance buys you. What about 4 Sigma? The 4 Sigma line is cut by curve number 3, so you follow rule set number 3 in the key: 1-3s / 2-2s / R-4s / 4-1s. And automatically, when you apply 4-1s, you need an N of four; whatever number of runs it takes, you do that many runs to make an N of four. I hope you understand: when you know the performance, you look at the power function graph, you see which curve your Sigma intersects at 90 percent, you go to the key, "mine is intersecting curve 3, so I will follow the third group of rules," and that is it.

Now look at the 3 Sigma. At 3 Sigma no power function curve reaches 90 percent; the best you have is somewhere around 82 to 83 percent. So for a 3 Sigma performance the best available error detection is about 82 percent, and for that you have to follow rule set number 1, which means 1-3s / 2of3-2s / R-4s / 3-1s / 6x; all these rules will have to be followed to keep that analyte under control. That is a lot of rules, and an N of six with an R of one is recommended, because for a 6x rule you need six measurements at one go; when I am looking at the chart at ten o'clock in the morning to decide whether to accept or reject a run, how will I evaluate 6x unless I have six readings? Therefore an N of six is required.

So that is how you decide on your quality program: how you run it, which analyte requires what kind of supervision. These things should be made known to your front-line staff, and where this kind of supervision cannot be done on a daily basis by the supervisory staff, it can probably be held once a week.
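The three worked examples from the power function graph condense into a simple lookup. This is only a sketch of those specific examples, not a complete CLSI selection tool; real selection should be read off the graphs themselves:

```python
def rule_set_for_sigma(sigma: float):
    """Rule selection following the lecture's three worked examples.
    Returns (rules, N, R), or None when performance is unacceptable."""
    if sigma >= 5:
        return (["1-3s"], 2, 1)                          # single rule suffices
    if sigma >= 4:
        return (["1-3s", "2-2s", "R-4s", "4-1s"], 4, 1)  # rule set 3
    if sigma >= 3:
        # Best available detection is only ~82%; maximum multi-rule needed.
        return (["1-3s", "2of3-2s", "R-4s", "3-1s", "6x"], 6, 1)  # rule set 1
    return None  # below 3 Sigma: the method needs improvement, not more rules

rules, n, r = rule_set_for_sigma(4.0)
print(rules, "N =", n, "R =", r)
```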
there will be a real lead for close close monitoring of all these whenever your Sigma's are low up until about 3.5 even for you may need multiple supervision so you need to decide on which analyte requires what supervision and that has to be put down in your protocol till you attain the goal when you can actually move the growth this is again an object is setting your quality objective is Sigma is where you try and set that objective and move towards that achievement of the objective and then once it is achieved you can step down your number of supervisions and all that but up until that point this has to be in the protocol so this is your 90% error detection so we already talked about all these where this drops cut the 90% graphs number 7 here and this is for the 5 Sigma and you said this is a graph number 3 which is cutting for the 4 Sigma and that is nothing cutting on this one so you have looking at the BOP tml performance you are accepting it because that is the best you can have this is the N and R so now to recap about the rule selection different lines represent the power of the different QC rules and the different numbers of control measurements per analytical run we already saw that these QC procedures are identified in the key at the right side we saw that the power curves from top to bottom correspond to the control procedures listed in the top to bottom this is just explaining how to look at the graph now it's something about rural selection in situations where the power curves for two different QC procedures are so close they are hard to tell apart as in graph three and four in these situations the user should select whichever QC procedure is more practical to implement a single rule may be preferred over a multiple rule if that is if you are able to monitor it adequately a minimum of n of two may be required by regulations even though n of 1 QC procedure may provide say the same error detection in the graph above it doesn't provide for an N of one 
All these situations are asking for an N of 2, but there could be situations where you are doing extremely well, because it is only at 5 sigma that the graph stops; in an extreme situation you could even have an N of one, but if NABL is requiring two controls, then that has to be followed. Now another recap, this one from the very early videos we discussed, which I'm going to bring forward to where we are now. We said that the power of daily monitoring is this: when any point exceeds the 3s limit, a 1-3s error, stop and take corrective actions; if there is a 2-2s error, stop and take corrective actions; if two levels of control exceed the same 2s limit, stop and take corrective action; and if one point in the group exceeds the +2s limit and another exceeds the -2s limit, which is an R-4s, this rule is to be applied only within the run, because N must be at least two to satisfy the QC requirements. All these rules can be applied within a run, and in the case of 2-2s, across run materials also. So this is the power of having a daily monitoring system: you are empowered to look at a 1-3s, you can look at 2-2s and R-4s, and all these are possible only if you have two control materials available. This is how you can monitor your system on a daily basis, the power of daily monitoring, and then there are across-run errors. Next, the power of periodic review: your peer group comparisons all come under the power of the periodic review. Look for systematic errors with rules like 2-2s, 3-1s, 4-1s and 7T; look for trends, shifts and emerging populations; and look at the peer group mean, SD, CV and SDI, which I will discuss in the next video. All these things will give you the power of periodic review. And then there is the power of the multirules, which is really the power of daily monitoring plus the power of periodic review. Then there are a few steps to keep in mind: use all data, systematically monitored daily and periodically, and get CVs.
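The daily-monitoring checks above can be sketched as simple predicates over z-scores, z = (value − mean) / SD. A minimal sketch, assuming one measurement on each of two control levels per run; the function names are mine and this is not a complete multirule implementation.

```python
# Within-run daily checks described above (1-3s, 2-2s, R-4s) on z-scores.

def z_scores(values, mean, sd):
    return [(v - mean) / sd for v in values]

def violates_1_3s(z):   # any point beyond +/-3 SD
    return any(abs(x) > 3 for x in z)

def violates_2_2s(z):   # two consecutive points beyond the same 2 SD limit
    return any((z[i] > 2 and z[i + 1] > 2) or (z[i] < -2 and z[i + 1] < -2)
               for i in range(len(z) - 1))

def violates_R_4s(z):   # one point > +2 SD and another < -2 SD, within run
    return max(z) > 2 and min(z) < -2

run = z_scores([104.5, 95.2], mean=100.0, sd=2.0)   # two control levels
print(violates_1_3s(run), violates_2_2s(run), violates_R_4s(run))
# -> False False True : the run fails R-4s and should be rejected
```

Note how R-4s only makes sense with at least two observations in the run, which is exactly why N must be at least two for it.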
Get the SDs and near-target data; get the bias from the peer group; get the total allowable error; arrive at the sigma; set goals; select rules and follow rules. Once again, on the power of periodic review: the advantages of multirule QC procedures are that false rejections can be kept low while maintaining very high levels of error detection. This is done by selecting individual rules that have low levels of false rejection, and then building up the error detection by using these rules together. I know this is a very complex topic, and this alone probably will not be enough for you to understand all the concepts; further reading material should be referred to in order to understand how to use the multirule, and CLSI guidelines are available which will guide you through the use of the multirule using the power function graphs. And when the sigma is low, this is again recapping: when the analyte shows less than 4 sigma, the lab must take special care to do risk analysis, and all these things should be kept in mind. The statistical measures are using multirules, looking back to the previous runs, and increasing the number of QCs, that is, increasing the N and the number of runs of QC; the non-statistical methods are deploying staff with special training for the low sigma tests and increasing the amount of supervision. So there are statistical methods as well as non-statistical methods; we've talked mostly about the statistical methods, how you need to increase the runs and the Ns, look back, and use multirules. These are all the things that will enable you to safeguard the analytical phase when your analyte is not performing well. And then, developing an optimum QC plan: define the quality that is needed for each test; know the performance, the CV and bias against their target; get target values and the TEa from the best possible source; calculate the sigma metric; decide on the rules to be applied to each analyte from the rule selection power function graph, whether it is a single rule
or a multirule, and decide the number of control measurements; define explicitly the application and interpretation of rules within and across materials; interpret the multirule to help indicate the occurrence of random and systematic error; train your staff; and set up daily and periodic monitoring schedules and assign responsibilities. And to conclude the discussions on the internal quality controls: a stable monitoring system using internal quality control will safeguard a potentially unstable analytical system, if you understand all these things that we talked about and implement them adequately. Thank you.
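As a closing aside, the optimum QC plan steps above can be sketched end to end as a small illustrative pipeline. The sigma-metric formula, sigma = (TEa − |bias|) / CV with all terms in percent, is the standard one this series has used; the rule-set cut-offs, group names and input numbers below are hypothetical choices of mine for illustration, not recommendations from the lecture.

```python
# Illustrative end-to-end sketch of the optimum QC plan: know the
# performance (CV, bias), get the TEa, calculate the sigma metric,
# then pick a rule set and N. Cut-offs here are hypothetical; real
# decisions come from the power function graphs and their key.

def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (TEa - |bias|) / CV, all expressed in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def plan_for(tea_pct, bias_pct, cv_pct):
    sigma = sigma_metric(tea_pct, bias_pct, cv_pct)
    if sigma >= 6:
        rules, n = ["1-3s"], 2
    elif sigma >= 4:
        rules, n = ["1-3s", "2-2s", "R-4s", "4-1s"], 4
    else:
        rules, n = ["1-3s", "2of3-2s", "R-4s", "3-1s", "6x"], 6
    return sigma, rules, n

sigma, rules, n = plan_for(tea_pct=10.0, bias_pct=2.0, cv_pct=2.0)
print(round(sigma, 1), rules, n)
# -> 4.0 ['1-3s', '2-2s', 'R-4s', '4-1s'] 4
```

The design point is simply that every analyte's plan, the rules, the N and the supervision, should be derived from its sigma and written into the protocol rather than decided ad hoc.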