Well, good morning, afternoon, or evening, wherever you are in the world today, and welcome to this STP community webinar. I am Rick Bachman, director of interactive communications here at STP, and I'm glad to have you join us for our webinar today, Test Automation Strategies in an Agile World, with Vikas Mittal, global president of digital assurance at CBA.

Before we get started, I have a couple of announcements about what's going on here at STP. STPCon Fall 2019 is creeping up on us, and we are celebrating our 10th year in testing by returning to Boston, where we held our very first STPCon. We will be at the Hyatt Regency Boston Harbor, right on the water, and we have four days of workshops, keynotes, and plenty of networking planned for our attendees. Perhaps the most important news is that our early bird discount, which can save you up to $300, expires on August 2nd. This year you can actually combine all three of our discounts, early bird, team, and alumni, to save even more. You can learn more about our speakers, agenda, and discounts at stpcon.com. Our next webinar is on July 24th with lean and agile consultant Liz Keogh, entitled Cynefin for Testers; please stop by softwaretestpro.com/training to check out the full list of webinars. Please make sure to follow us on Twitter, and tweet any aha moments about this webinar with the hashtag #STPwebinar; don't forget to give today's speaker credit by including his Twitter handle.

All right, let's go ahead and get started. I'm just going to do a quick introduction of our speaker today. As I mentioned before, Vikas Mittal is the global president of digital assurance at CBA. A leader by spirit, Vikas is spearheading his team of digital assurance professionals at CBA, helping and supporting them in accomplishing their personal as well as company goals. His expertise in the architecture, design, and implementation of test automation frameworks and solutions enables him to serve CBA's high-end customers as well. An IIT Delhi engineering graduate, Vikas has built his career with companies such as Infogain, IBM, CMS, and Fiserv, and he holds many technical certifications. His focus also spans the development of automation solutions for the software testing lifecycle and DevOps using the latest innovations in machine learning and predictive analytics. On top of all this, Vikas is simultaneously working on the Atal Innovation Mission, an endeavor of the Government of India to encourage and promote a culture of innovation and entrepreneurship, particularly in the tech-driven sectors. Vikas is a keynote speaker and mentor who loves to empower individuals by sharing his life learnings and imparting his knowledge to them. So Vikas, I would like to thank you and welcome you to today's webinar, and we're ready and anxious for you to impart your knowledge and wisdom about test automation strategies in an agile world. I'm going to go ahead and give you control here. There we go; if you want to go ahead and take control there, I do see your screen, so I think we're all ready to rock and roll.

Thanks Rick, thanks a lot for the introduction and for setting the context for the webinar today. A good morning, good afternoon, and good evening to everyone joining globally; thanks for taking time out of your busy schedules to join this webinar. The title of the webinar is Test Automation Strategies in an Agile World. One of the predicaments I have heard from a lot of customers, my teams, and fellow practitioners in agile and test automation is this.
The question which still needs to be answered is: is automation truly delivering value for you? Are we able to utilize automation to expedite our ability to test and release faster? Do we still measure automation by the number of test cases we have automated, or do we use different metrics to measure automation success? These are questions I have been asking a lot of my customers and my teams, and I will try to share my experience of the strategies you can follow to get the right answer to these questions and to show the right metrics for the ROI being delivered by automation in an agile world.

Before we begin and talk about agile and test automation, here are a few facts which were brought out in two consecutive surveys, the World Quality Report of 2017 and 2018. They may sound a little astonishing, but they can be verified in the World Quality Report. Most customers today want to release a build to production every two weeks or sooner, and we do have enterprises who are releasing into production every hour or every day. But the fact is that the overall average automation coverage in agile today is only 31 percent, and when I say 31 percent of test cases are automated, it essentially means 31 percent of automated test cases that actually work. You may have different automation figures in your organizations: some may be in the high 90s, some around the seventies or eighties, and some may be lower, at 10 or 20 percentage points, but the important question to answer is how much of it actually works when you want it to work. Only 18 percent of companies in the US are truly DevOps, and when I say DevOps, it means they have an automated mechanism to deploy builds, to execute the right automation, and to run an automated feedback loop. Using only tools like Jenkins is essentially automating your build process, not DevOps; DevOps means the entire chain, end to end, has to be automated, including the feedback and the analysis. Today, whenever we approach an important milestone in a project, or whenever we are in a crunch for time, most of us go to our teams with the notion of "test as much as possible," or we use human factors to decide what to test: we depend on our knowledge of the application, or on the knowledgeable resources in our team, to decide what to test and how much to test. Another important factor is that we are already hitting the roof in terms of budgets allocated to QA, and when I say QA it is not only the testing teams; it means QA across the software development lifecycle, which may include your integration QA, your functional QA, performance, and UAT. So nearly 40 percent of budgets are already being spent on QA at different stages, and they can't go any higher. Obviously something is not working right, or there is some gap in how things are operated today, and in the subsequent slides I'll focus on answering these questions and sharing a few details on how it can be bettered.

So what essentially is holding us back from achieving the true value of automation? The first answer I get whenever I ask this question of my teams or customers is that we have very low coverage of automation in our project. There are different reasons given for this low coverage; at times it comes back to "we have a constantly evolving application, we are in agile, so our applications are constantly changing, the UI is constantly changing."
The second answer I typically get is that there are too many automation tools in our project or in our teams; if it is a large project with a large team, it often happens that every scrum team ends up designing its own automation framework, which means the automation is siloed by component or by scrum team, and you never really have an enterprise-wide automation strategy, framework, or solution. Another common reason which comes up in my discussions is that the test environments are complex to set up and the data required for test automation is not easily available. The common factors behind that are: multiple systems have to come together to form the right environment to test, which requires large volumes of data; there are security and privacy concerns around sharing data from production into the test environments, so it cannot be leveraged; there are multiple configurations which require setup; or the cost of the environment and execution infrastructure is too high. That last reason is no longer relevant to a large extent, because with the advent of cloud and the availability of Docker you can spin up infrastructure on demand and minimize your cost. An interesting point I have heard from a lot of customers is: we have a high percentage of automation, but our regression cycles are still very long; we cannot do nightly builds because even though our teams execute those thousands of test cases through automation, the analysis of the results takes more time than actually running the automation. The reliance is still on human judgment for test planning, and there is no prediction model available to predict which test cases to execute or which defects we may encounter.

So what is the answer to all these questions? The answer is that we need to move to a stage of intelligent validations, where we leverage the data captured as part of the software development lifecycle and leverage the tools we are already using, like Jira, Microsoft TFS, Rally, or any other application lifecycle management tool, to provide the relevant insights. The three pillars of an intelligent approach to automation are: a holistic approach to test automation, the right selection of test cases for execution, and measuring the value delivered by using the right KPIs or the right metrics.

So let's get on to the first point: what is a holistic approach to automation? In the teams I have interacted with so far, the approach to automation always starts with a discussion around tools; it starts with a discussion around technology; it goes into frameworks: which framework should we utilize, what should the architecture of the framework be, how will the libraries be structured, how will the code be maintained in the repositories, what will the review process be, what standards and guidelines should be followed. All of these are relevant and important points in establishing an approach to automation, but they are all components of the strategy; they cannot be your entire strategy. I have not seen any team succeed where test automation is limited to the testing fraternity. For your automation effort to succeed, there has to be a more holistic approach which is not limited to tools and technology, but goes beyond tools, technologies, and frameworks and includes the cultural aspects of automation.
It includes how the bonding has to take place between the testing team and the development team, and also how automation has to be aligned with your application architecture and the programming languages used to build your application.

So what is the first step towards this holistic approach to automation? It is to identify the right use cases to automate in the first place. To date, I'm sure most of you have been following the approach all of us have followed over the years: taking manual test cases as the requirements for automation, optimizing those manual test cases, performing the automation, and measuring automation coverage primarily through the coverage percentage of manual test cases. So we all report metrics like "I have 80 percent automation coverage," which in the majority of cases means "I have 80 percent of my manual test cases automated." I have also seen a lot of teams adopt code coverage, by utilizing tools like EMMA to find out, when their automation runs, what percentage of the code is actually touched by the automation. But is treating manual test cases as the right set for automation delivering value? I have not yet seen it deliver value to the extent that it benefits the team actually leveraging the automation. One approach I have been following with a lot of my teams for the last couple of years, and which has been delivering success for us, is to move away from mapping automation to manual test cases and instead orient automation towards user journeys and personas. What we do is identify the end-to-end workflows to be automated in our application, along with the actors or user personas which play a role in each business workflow, and the test data combinations which can be leveraged to trigger the different business rules in that workflow. Once you automate that one single use case or workflow with data combinations, you are able to exercise different business flows and different business rules and validate how an end user would actually be using your application.
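To make that journey-and-persona idea concrete, here is a minimal sketch, not taken from the webinar, of one end-to-end checkout journey driven by persona and test data combinations rather than by individual manual test cases. JUnit 5 is assumed, and the personas, data values, and the WorkflowDriver wrapper are illustrative placeholders for a real Selenium- or API-backed workflow layer.

```java
// One end-to-end journey, exercised with several persona + data combinations.
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertTrue;

class CheckoutJourneyTest {

    // Each row is one persona + data combination that triggers a different
    // business rule inside the same end-to-end workflow.
    @ParameterizedTest
    @CsvSource({
        "GUEST_USER,      STANDARD_ITEM,   CREDIT_CARD",
        "LOYALTY_MEMBER,  DISCOUNTED_ITEM, WALLET",
        "CORPORATE_BUYER, BULK_ORDER,      INVOICE"
    })
    void completeCheckout(String persona, String basket, String payment) {
        WorkflowDriver flow = new WorkflowDriver(persona); // hypothetical UI/API wrapper
        flow.addToBasket(basket);
        flow.payWith(payment);
        assertTrue(flow.orderConfirmed(), "order should be confirmed for " + persona);
    }

    /** Placeholder for the real UI/API-backed workflow layer. */
    static class WorkflowDriver {
        private boolean confirmed;
        WorkflowDriver(String persona) { /* would log in as this persona */ }
        void addToBasket(String item)  { /* would drive the UI or API here */ }
        void payWith(String method)    { confirmed = true; /* would complete payment */ }
        boolean orderConfirmed()       { return confirmed; }
    }
}
```

With this shape, each new business rule becomes another data row on the same journey rather than another script to maintain.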
The next step, after you have identified the end-to-end workflows to be automated with the right test data and persona combinations, is to identify the different technology layers in your application: what is the architecture of the application, how is it structured, and should I devote all my automation energy to the top UI front end? In any modern-day architecture, where microservices play a key role, a lot of the business logic now resides inside the microservices or APIs. So should our automation still be front-end oriented, or can we adopt an automation strategy which combines the front end and the API or microservices layer, where we create end-to-end workflows by automating across these layers? That helps expedite the automation process considerably, because API automation can be much faster to develop, maintain, and execute, and at the same time it takes away the high maintenance effort linked with UI automation. So that is another question to be answered as a team: which layers will we focus on from an architecture perspective, and how does our automation approach tie up to the layers we are automating against?

The third step is to identify the right programming language to be leveraged for automation, and the right tool to be utilized to build your automation scripts. What I have seen succeed is picking a programming language which your development team also understands; it helps and aids the adoption of automation across the team. Then the test automation you are building is not limited to only the testing team but is cross-leveraged by the team across the board, and by making the developers our friends we are actually shaping how our test automation is utilized and delivered. So finding the right language and the right tools helps make automation work across the board. The main cultural aspect which has to be understood and followed is acceptance, as a team, that quality is not the responsibility of only the testing team; quality is a joint responsibility, a joint ownership for everyone who is part of the agile project team, which is also what the agile manifesto talks about. Once that acceptance and culture are there, a lot of the challenges teams face in test automation get resolved automatically, for example communication with the developers to provide the right IDs for the UI controls, or providing those UI control IDs ahead of time to the test automation teams so that we can build our test automation scripts even while the UI is under development; a lot of my teams today are actually building automation scripts while the UI is still under development.

Finally, you establish the right pyramid for automation, which is essentially a wide base of unit testing; the second layer in the pyramid should be your integration layer, which is your microservices or API test automation; and a minimalistic approach to front-end UI automation. The general ratio we maintain these days is 70/30, meaning 70 percent of our automation sits at the API or integration layers and 30 percent is done on the front-end UI, and remember, these are end-to-end workflows being automated. If you adopt this holistic approach to automation, I can assure you these are the benefits you will be able to achieve: in-sprint automation, where you automate the test cases being documented in the same sprint in which the functionality is delivered; collective ownership of automation, where the development team starts to contribute to building, developing, and even maintaining your automation scripts; faster feedback time, because your automation will be up and running and working most of the time, which lets you leverage it on every nightly build or every PR build generated as part of an agile sprint; more reliable and scalable automation; and a more forward-looking approach which enables you to adapt to any technology change, UI change, or functional change in your application. In one of the key projects I recently delivered, the customer moved from a large monolithic application to a microservice-driven architecture, and we were able to carry the majority of the automation we had built over to the new architecture. The reason is that we had focused most of our automation on the API layer, and as the APIs remained the same when they moved from monolithic to microservices, 81 percent of the automation we had developed still worked with some minor tweaks.

At CBA we have been looking at what a more unified approach to automation can be. The purpose of this slide is not to promote my organization; it's just to share with everyone an example of how we are leveraging a unified approach to automation.
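As a side note on the API-layer emphasis above, and on why the monolith-to-microservices migration barely affected the suite, here is a minimal sketch of a contract-level API check written with REST Assured, one of the tools mentioned later in the talk. The endpoint, payload, and base-URL property are illustrative assumptions, not the customer's actual service.

```java
// A contract-level API check: it binds to the service contract, not the UI.
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.jupiter.api.Test;

class OrderApiContractTest {

    @Test
    void createOrderReturnsConfirmation() {
        given()
            .baseUri(System.getProperty("api.baseUrl", "https://test.example.com"))
            .contentType("application/json")
            .body("{\"itemId\": \"SKU-123\", \"quantity\": 2}")
        .when()
            .post("/orders")                       // illustrative endpoint
        .then()
            .statusCode(201)
            .body("status", equalTo("CONFIRMED")); // response contract stays stable
    }
}
```

Because a check like this targets the API contract rather than screens, it tends to keep working across architecture changes as long as the API itself is stable, which is the effect described in the migration example above.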
So, continuing the thought from the previous slide, what we have essentially built is a platform for automation which allows us to automate across all the technology layers in our application architecture through one single codebase and library. We use Java as the programming language for our automation solution, and we use open source tools throughout: for JavaScript frameworks we use Protractor and Nightwatch, we use Selenium for web applications, REST Assured and Java Jersey for API automation, Java Hibernate to automate against the backend databases, and Appium and Calabash for mobile front-end automation. But instead of these tools living in silos, we use a dependency injection framework which allows us to instantiate the objects for most of these tools up front and then pass them through our test cases, where a test case is an end-to-end workflow which can start from a web-based front-end UI application, go through an API integration validation, on to a mobile application, and back to the front-end application. That end-to-end workflow can easily be automated as one single test case, one single use case. To bring in the functional engineers who are part of our teams in agile sprints and let them contribute to the actual automation, we adopted a behavior-driven approach to the test framework, where our functional engineers contribute to writing test automation scripts by building feature files and documenting or creating the step definitions. The benefits we have realized are, obviously, increased ROI, because the stack is completely open source, and a reduction of our script development and maintenance effort by 40 to 60 percent. In the next few slides I will show how we are leveraging machine learning to be more proactive in our test planning and to do more targeted regression using point-of-failure analysis. We have much higher coverage because of our move away from test-case-based automation mapping to end-to-end user workflow mapping, and at the same time the function library we have built also performs proactive, pre-emptive performance and load checks: it captures the response times for all our web pages and for all our APIs as and when our functional automation scripts execute. The entire solution is DevOps integrated and feeds a metrics dashboard through which the executives have a clear view of why and where automation is failing and what is being remediated.
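Here is a minimal sketch of what that combination of dependency injection and a behavior-driven layer can look like, loosely assuming Cucumber-JVM with its PicoContainer module; it is not the platform described in the talk. The PortalLayer, ApiLayer, and DbLayer classes are illustrative stand-ins for the Selenium-, REST Assured-, and Hibernate-backed wrappers a real framework would inject.

```java
// One BDD scenario stitching UI, API and database layers via constructor injection.
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class OrderJourneySteps {

    private final PortalLayer portal;
    private final ApiLayer api;
    private final DbLayer db;

    // Constructor injection: the DI container builds these once per scenario
    // and hands the same instances to every step definition class that asks.
    public OrderJourneySteps(PortalLayer portal, ApiLayer api, DbLayer db) {
        this.portal = portal;
        this.api = api;
        this.db = db;
    }

    @Given("a loyalty member places an order on the web portal")
    public void placeOrder() {
        portal.loginAs("LOYALTY_MEMBER");
        portal.placeOrder("SKU-123");
    }

    @When("the order is confirmed through the order API")
    public void confirmOrder() {
        api.confirmLatestOrderFor("LOYALTY_MEMBER");
    }

    @Then("the order status stored in the database is {string}")
    public void checkDatabase(String expected) {
        assertEquals(expected, db.latestOrderStatus("LOYALTY_MEMBER"));
    }

    // Placeholder layer wrappers so the sketch is self-contained.
    public static class PortalLayer {
        public void loginAs(String persona) { /* Selenium-driven login */ }
        public void placeOrder(String sku)  { /* Selenium-driven checkout */ }
    }
    public static class ApiLayer {
        public void confirmLatestOrderFor(String persona) { /* REST call */ }
    }
    public static class DbLayer {
        public String latestOrderStatus(String persona) { return "CONFIRMED"; /* query */ }
    }
}
```

The matching feature file would read, for example: "Given a loyalty member places an order on the web portal, When the order is confirmed through the order API, Then the order status stored in the database is \"CONFIRMED\"". The functional engineers own that text while the step definitions stay thin.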
Now, coming to the next part of the holistic approach, intelligent QA: how do you do the right selection of test cases? How do you identify the right test cases to be executed, developed, or automated? If you look at test suite configuration today, most of us configure our test suites by functional module, by the manual test cases we have documented, and we assign priorities to test cases at the time we document them. But how relevant do those priorities remain over a long period of time? If you move away from that approach and look at all the upstream and downstream data in the software development lifecycle, you will find astonishing facts about how and where those priorities are still relevant. The upstream data is essentially everything from the requirements stage to production rollout: it is the requirements traceability matrix, to one extent, where you map your requirements to test cases and map the test cases to defects; that is one piece of upstream data which is maintained. The next example of upstream data is, when you have moved functionality to production or UAT, how you identify the defects found there and map them back to the requirements and the test cases. The downstream data is the data captured in production, in terms of test data and in terms of telemetry, which basically means how the end user is using your application: what navigation paths they are following through your application, what kinds of customer issues are being reported in production by the end users, and what browsers, operating systems, and devices they are using to access your application. If you look at this telemetry data, it will throw up a lot of interesting facts about the perceived value of a particular functionality, or the perceived importance of a particular module in the application, versus what the users are actually using. If you correlate these two sets of data points, the ones captured as part of the development lifecycle and the ones captured as part of the production lifecycle, and build a correlation engine around them, it will easily be able to tell you which modules are the most relevant to validate and test for your application at any point of time, and also what prioritization should be given to your test cases.

We recently implemented this for a customer, where we considered the following input parameters to establish the algorithm: the amount of code churn happening with every sprint and every agile release; the different functional modules which exist in the application, with their business criticality; how interlinked those functional modules are and what dependencies they have on each other; the number of test cases per functional module; how independent the test cases are of each other; the previous execution history of each test case; the priority assigned to each test case; and how much time a particular test case takes to execute. The execution time is important in case you have a hotfix and want to validate and test that hotfix within a given, fixed amount of time; with this parameter available you can easily configure a test suite which executes within that specific time but provides maximum coverage. We also look at the number of defects by functional module, the severity and priority attached to every defect, and how many defects relate back to a test case. An important parameter you see here is the defect count by developer: we all have those developers in our teams who tend to release more buggy code, and whenever they are doing a deployment you need to be extra cautious and extra careful about what you evaluate and test. Coming back to the agile way of working, we also look at the user stories by priority, by complexity, and again by the developers delivering the stories in a given sprint. Another input, as described earlier, is telemetry, the analytics of how the end user is utilizing the application.
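As a rough illustration of the kind of per-test-case record such an algorithm would consume, here is a minimal sketch whose fields mirror the inputs just listed (code churn, module criticality, execution history, defect counts, developer history, telemetry weight). The class and field names are my own assumptions, not the speaker's implementation.

```java
// One feature record per test case, fed into the prioritization model.
public record TestCaseFeatures(
        String testCaseId,
        String functionalModule,
        double moduleCodeChurn,        // lines changed in this module during the sprint
        double moduleCriticality,      // business criticality supplied by domain experts
        int    pastExecutions,
        int    pastFailures,
        int    linkedDefects,
        double avgDefectSeverity,
        double developerDefectRate,    // historical defect rate of the committing developer
        double productionUsageWeight,  // telemetry: how heavily users exercise this flow
        double executionMinutes) {     // used when fitting a suite into a hotfix window

    // Failure rate observed so far, one of the inputs to the prediction model.
    public double historicalFailureRate() {
        return pastExecutions == 0 ? 0.0 : (double) pastFailures / pastExecutions;
    }
}
```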
The benefit delivered for us was that it helped improve our overall release planning for an agile sprint: we were able to identify where to devote and dedicate resources for testing and how to effort-plan the entire release. The development team was able to identify which combinations of functional modules, when checked in together, could break the code. From a testing perspective, we were able to identify the gaps in the test data we were using compared with the combinations seen in production, and also to identify navigation paths and coverage happening in production which were probably missing from our test cases.

So the two steps of this test case assessment are as follows. The first step is to capture, or find out, the risk for a functional module; that is, for every functional module we identify its prioritization against the other functional modules, which is calculated from the standardized mean difference between the test case count and the estimated bugs for that module. Once we identify the functional risk for a particular module, we can calibrate and identify the risk attached to the test cases in that functional module by taking the probability of failure, the chance of failure, multiplied by the severity of the damage caused if there is a failure in that module or that test case. Combining these two parameters establishes the risk factor for every test case, which is what we then leverage. What you see right now is the heat map we generated; it allowed us to establish, for a particular combination of functional modules, the probability of failure of a test case. Blowing up one of the example rows: the first column is the test case ID, and the second column is the overall probability of failure of the test case, which here is 64 percent. But across four different functional module combinations, the probability of failure changes from 62 percent to 100 percent to 33 percent to 87 percent. So the same test case, validated across different combinations of functional modules, has a different probability of failure, and this is the mechanism through which we identify, for a given code check-in, for a given build being generated, which test cases have to be picked up and executed. The threshold is in the hands of the automation engineer or the manager, who sets the parameter to execute all test cases beyond a certain probability of failure; in our case we used 70 percent as the benchmark, so in the first go we execute all those test cases which have a 70 percent or higher probability of failure.
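A minimal sketch of that selection step might look like the following, assuming the failure probabilities for the current build's module combination and the severity weights are already available from the model; the class and method names are illustrative, and the 0.70 threshold is simply the benchmark mentioned above.

```java
// Risk per test case = probability of failure x severity of damage;
// everything above the threshold goes into the first prioritized run.
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class PrioritizedSuiteBuilder {

    record ScoredTest(String testCaseId, double failureProbability, double severity) {
        double risk() { return failureProbability * severity; }
    }

    /**
     * @param failureProbability probability of failure per test case for the module
     *                           combination changed in this build (the heat map row)
     * @param severity           severity of damage if that test case's area breaks
     * @param minProbability     e.g. 0.70 -> run these first, run the rest later
     */
    List<String> firstPassSuite(Map<String, Double> failureProbability,
                                Map<String, Double> severity,
                                double minProbability) {
        return failureProbability.entrySet().stream()
                .filter(e -> e.getValue() >= minProbability)
                .map(e -> new ScoredTest(e.getKey(), e.getValue(),
                                         severity.getOrDefault(e.getKey(), 1.0)))
                .sorted(Comparator.comparingDouble(ScoredTest::risk).reversed())
                .map(ScoredTest::testCaseId)
                .collect(Collectors.toList());
    }
}
```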
By doing so we were able to eliminate the need for a smoke and sanity test suite, to execute our regression automation bed in multiple iterations over the two-week sprint, and finally to eliminate the need for a hardening sprint. Our releases worked like this: a minor release was supposed to go out every two weeks, and at the end of four sprints a major GA release used to happen. Under the earlier process there used to be a fifth sprint before a GA release, which was essentially a hardening sprint meant to execute all our regression and validation test cases, with the entire scrum team, the entire product team, focusing on testing. By following this approach and doing multiple rounds of prioritized regression early in the lifecycle, we were able to completely eliminate the need for that hardening sprint, thereby giving the product team an extra two weeks of bandwidth every two months to deliver new functionality, take up a bug-fixing exercise, or deliver a hotfix to the end customer. In effect we freed up two months in a year for extra development to be carried out, which also helped in budgeting for all our initiatives in QA.

Now, coming to how you actually measure whether the value is being delivered or not. Till now, most of us have measured the success of testing, or the value added by testing, by the number of test cases executed, the number of test cases passed or failed, the number of defects logged, and the defect distribution across priority or severity. But a critical factor which is missed is the effort being spent in making automation work, and to make automation really work we capture these metrics. The first is the first pass percentage: for any given build, how many test cases from our automation suite execute successfully in the first go. The objective of this metric is to identify the commonly failing test cases, or the common reasons for test case failures, in every build, and to address those which are addressable, like synchronization issues, issues which arise because an automation script was not updated after a functional change, or UI controls which are fragile and keep changing, so that you can adopt different strategies to handle them. The second metric we measure is the parallel execution ratio: what percentage of our automated test cases can be executed in parallel. It establishes the interdependency measure of the test cases; the higher the percentage of parallel execution, the shorter your overall execution time can be, because you can then leverage multiple Docker clients or multiple virtual machines to execute the test suite in parallel and gain more coverage in less time. Another important parameter we track is the total feedback time: as part of a DevOps pipeline, if I am executing the automation, in what amount of time am I able to circle back and certify a build. The feedback time includes the time taken to execute the automation plus the analysis time needed to truly come back with a result which can be used to certify the build. Another aspect we measure is the percentage of test data generated through automation, which tells us what percentage of our test cases, or of our automation, can be executed with automatically generated data, so that we are not dependent on data from anyone. The last is the percentage of test cases executed as part of the pipeline. As you will have noticed, and as I described in the previous sections, my teams have done away with the approach of smoke or sanity suites and we run prioritized regression every time; an important factor for estimation is, out of our total regression suite, how many test cases we were able to churn through the pipeline with every build being generated, which gives us confidence about how much of the overall functionality and code base we have touched in our entire execution lifecycle.

Once you follow these two approaches and combine the benefits of the two, you are actually unlocking the value of cognitive test automation, which means you are leveraging the right framework, the right approach, and the right strategy for automation, which gives you a more unified automation solution, plus you are able to identify the right test cases to automate and the right ones to execute, and you are able to run more prioritized regression on every nightly build.
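For readers who want to track the same health indicators, here is a minimal sketch of how the metrics above could be computed from one build's results; the Result record and its fields are illustrative assumptions rather than the speaker's dashboard code.

```java
// Four automation health metrics computed from one build's execution results.
import java.time.Duration;
import java.util.List;

class AutomationRunMetrics {

    record Result(String testCaseId, boolean passedFirstAttempt,
                  boolean parallelizable, boolean dataAutoGenerated,
                  boolean ranInPipeline) {}

    static double firstPassPercentage(List<Result> results) {
        return 100.0 * results.stream().filter(Result::passedFirstAttempt).count()
                     / results.size();
    }

    static double parallelExecutionRatio(List<Result> results) {
        return 100.0 * results.stream().filter(Result::parallelizable).count()
                     / results.size();
    }

    static double automatedDataPercentage(List<Result> results) {
        return 100.0 * results.stream().filter(Result::dataAutoGenerated).count()
                     / results.size();
    }

    static double pipelineExecutionPercentage(List<Result> results) {
        return 100.0 * results.stream().filter(Result::ranInPipeline).count()
                     / results.size();
    }

    // Total feedback time = execution time + analysis time needed to certify the build.
    static Duration totalFeedbackTime(Duration execution, Duration analysis) {
        return execution.plus(analysis);
    }
}
```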
Now, showcasing one example from one of our customers: before we took up the project, their overall annual cost for regression was 1.5 million dollars. We were able to reduce it to 500K US dollars by achieving 80 percent automation, and by then adding our machine learning algorithm we reduced it further, to 250K, within a one-year time span. The additional 1.25 million dollars saved was largely leveraged to build up tools and technologies, and this entire project ran against an automated test suite of more than 40,000 test cases. It gave us a huge improvement in speed and thoroughness: we were able to reduce the overall number of test cases to execute by 50 to 70 percent and to completely eliminate the human factor from prioritizing and identifying which test suites to execute.

Finally, this is the last slide I have, and I have a small survey available. If you want, you can use this questionnaire to assess your own project teams in terms of where you stand today on the maturity of test automation, or the maturity of QA practices, in your projects. Please go ahead and use it; it's a simple questionnaire-based survey which would probably take you, together with your team, one sprint scrum meeting of 20-25 minutes, but it can give you a great amount of insight into how you are actually doing today in terms of test automation and what it would take for you to move to the right automation strategy and the right metrics for automation. My contact information is mentioned below; I'll be more than happy to share my knowledge further with you after the webinar, through LinkedIn, through my Twitter handle, or through email, and I have given both my personal and work email addresses. For those who, like me, are more comfortable on WhatsApp or mobile, you can reach out to me directly on my mobile number as well. Rick, as we now have 15 minutes left in this webinar, I'd be more than happy to take up any questions.

Okay, yeah, we do have a couple of questions, and just to remind everyone, if you have a question that you'd like to ask, go ahead and put it in the questions panel and we will ask it. So Vikas, the first question I have is: can you explain more about how you calculate the probability of a test case failing?

Sure. Rick, are you able to see my screen? I've just opened it and it's still loading, right?

Yeah, we can see your screen; we're still on the contacts slide, but there we go, it looks like you're switching some stuff around now.

Okay. In the meantime, while it is still loading for everyone, I'll answer that question. The factors we utilize to identify a test case's probability of failure are, as I mentioned in one of my slides, the different parameters we leverage to establish the correlation between a test case, its execution history, and the previous situations in which it has failed. We use a logistic regression mechanism to correlate that data with the code churn, and against the builds where the test case was previously executed and either failed or passed successfully in the past. By using a single equation across those data parameters, we are able to identify the probability of failure for every test case which is part of the repository.
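As a rough illustration of how a fitted logistic regression model turns such parameters into a probability, here is a minimal sketch; the weights would come from training on the historical data (the speaker mentions R and Spark for that), and both the class and the feature ordering are assumptions of mine.

```java
// Scores one test case's features into a probability of failure via a fitted
// logistic regression model (weights learned elsewhere, e.g. in R/Spark).
class FailureProbabilityModel {

    private final double[] weights;   // learned coefficients, one per feature
    private final double intercept;

    FailureProbabilityModel(double[] weights, double intercept) {
        this.weights = weights;
        this.intercept = intercept;
    }

    /** features: e.g. [historicalFailureRate, moduleCodeChurn, linkedDefects, ...] */
    double probabilityOfFailure(double[] features) {
        double z = intercept;
        for (int i = 0; i < weights.length; i++) {
            z += weights[i] * features[i];
        }
        return 1.0 / (1.0 + Math.exp(-z));   // sigmoid maps the score into [0, 1]
    }
}
```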
So if you have a mapping established between your test cases and defects, and if you are maintaining the test case execution history for your test cases in your ALM tool, those two data points, along with the build information, will help you create the correlation and use the regression mechanism to find the probability of failure of a test case for a given combination of area paths or functional modules. Rick, you can take the next question.

Okay, thank you for that explanation. The next question says: what tools are you using to generate heat maps, and how are you executing 40k automated tests? Can you elaborate on your test execution environment?

Sure. I'll answer the second question first and then come back to the first part. We are executing 40k automated test cases, and to elaborate further: prior to me taking up this particular project, those 40k test cases were executed by a team of 52 engineers over a period of one month. Because there were some misfits in the test automation strategy, there were a lot of failures; the first pass percentage was a low 30 percent, because of test data issues and because environments were not being created properly. We were able to take those 40,000 test cases from 1,100 man-days of execution down to 32 man-days; today those 40k test cases are executed for me by a team of eight engineers in four days, which we have since reduced further to a team of four engineers in three days. As for the infrastructure and the distribution of those 40k test cases: we shifted the pyramid for a lot of them away from the front end, to 70 percent API or web service layer automation, with the remaining 30 percent being front-end UI and mainframe test cases. The test infrastructure we utilize is Docker: we spin up Docker containers at run time, 50 parallel Docker containers, to execute our test cases in a 50-parallel-thread mechanism, because that is the throughput our mainframe application can take in terms of parallel threads. So we are limited to 50 parallel threads, using Docker to execute all of our 40k test cases. Rick, can you just remind me what the first part of the question was?

Yes: what tools are you using to generate heat maps?

Right. We are essentially using an all open-source stack to run our machine learning algorithms as well. I am using R and Spark for massaging the data and running my algorithms to generate the heat maps. For visualization of those heat maps, for one particular customer we are using QlikView, but for the most part we are still using R and Spark to generate them for us; for one customer we have now started utilizing SAS, as they had a SAS license, so we are using SAS as well for visualization and for running the algorithms. For storing all the data we utilize as part of the machine learning algorithms, we are using Cassandra as the database for all our time series data, plus MongoDB as a data store for all our other data, on top of which our R and Spark algorithms actually run. Rick, I can take another question.

Okay, thank you for that explanation. How do you find the test cases which can be executed in parallel?

That is an interesting question. If you noticed, in one of my slides I described the test case interdependency measure; that is one of the important parameters we establish as part of our test automation strategy and as part of our ability to run test cases in parallel. Whenever you have test cases that are dependent on each other, they have to be grouped together. In our process, as our automation is focused more on end-to-end workflows, the majority of our test cases are actually independent in nature and can be executed in parallel, but we do follow some sequencing to avoid duplication, or a duplicate build-up of data, in our testing cycles.
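Tying the independence measure to the 50-thread Docker setup described above, here is a minimal sketch of fanning independent test cases over a bounded pool; in the speaker's pipeline each slot corresponds to a Docker container, whereas here the Runnable is just a placeholder for running one test case, and the pool size and timeout are illustrative.

```java
// Runs independent test cases over a bounded pool of worker threads.
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class ParallelSuiteRunner {

    void run(List<Runnable> independentTestCases) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(50); // cap at 50 parallel threads
        independentTestCases.forEach(pool::submit);              // dependent groups would be
        pool.shutdown();                                          // submitted as one Runnable
        pool.awaitTermination(4, TimeUnit.HOURS);                 // overall execution budget
    }
}
```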
Rick, are there any more questions for me?

Yeah, there are just a couple more here, real quick. Let's see: so you select those test cases that have a higher probability of failure?

The first attempt is essentially to identify and select those test cases which have a higher probability of failure, because one thing we all need to understand is that, as a testing fraternity, our objective is not to pass test cases; our objective is to find the test cases which will fail, which lead us to defects and to genuine failures. So in test automation my first priority is to identify those test cases with a higher probability of failure, which could potentially lead me to defects and help me identify the issues in my application. That does not mean we leave out the test cases with a lower probability of failure; those are executed after our first set of test cases, and we keep running our automation in parallel. The purpose of executing the high-probability-of-failure test cases up front is that my team can analyze them faster and have a much faster feedback loop, because most of the time I have noticed that teams execute automation, but before they can complete their analysis they are forced to move on to the next set of executions. Rick, the next question please?

Yeah, just a couple more. One actually goes back to the worksheet you might have had up, but I might be wrong: is there a worksheet for the metrics? How do you record and use the various metrics mentioned?

Yes, there is. I have a metrics workbook which has the formulas as well as the visualizations for the metrics I mentioned, and I would be happy to share it with anyone who reaches out to me by email, or on LinkedIn or Twitter; just reach out and I'll be happy to share those metric sheets and dashboards with you.

Okay, we have one last question: any tools or strategies developed for the analysis of test failures?

We do have a tool right now, and that is an interesting part which I forgot to share. As part of our entire automation strategy we have actually automated even the first-cut analysis of automation failures, where we have identified different categories and subcategories into which our automation failures fall, so that our feedback loop is automated for the first pass. What we have done is establish pointers, or logging mechanisms, within our automation test cases as well as in our automation solution, so that the logging at every step and every stage is maintained and reported. At the end of the run our analysis engine runs, correlates those failures back to previous execution analyses, and finds common patterns, or the category and subcategory my teams selected for similar failures in the past, and automatically tags the new failures with those categories and subcategories. So the next time my team comes in and looks at the report, all the failures are automatically mapped by a unique ID; they have all the logs and screenshots for every failure available in one single place, along with the category and subcategory and any previous reference to the same test case, or the same error, having failed before.
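Here is a minimal sketch of what the first cut of such triage can look like: match the failure log against known patterns, tag the failure with a category and subcategory, and leave only the unmatched ones for manual analysis. The patterns and category names are illustrative assumptions, not the speaker's tool.

```java
// First-cut triage: tag a failure by matching its log against known patterns.
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;
import java.util.regex.Pattern;

class FailureClassifier {

    private final Map<Pattern, String> knownPatterns = new LinkedHashMap<>();

    FailureClassifier() {
        knownPatterns.put(Pattern.compile("TimeoutException|element not clickable"),
                          "Script issue / synchronization");
        knownPatterns.put(Pattern.compile("NoSuchElementException"),
                          "Script issue / locator changed");
        knownPatterns.put(Pattern.compile("Connection refused|503 Service Unavailable"),
                          "Environment / service down");
        knownPatterns.put(Pattern.compile("AssertionError"),
                          "Candidate defect / functional mismatch");
    }

    /** Returns the category tag, or empty if this failure needs manual triage. */
    Optional<String> categorize(String failureLog) {
        return knownPatterns.entrySet().stream()
                .filter(e -> e.getKey().matcher(failureLog).find())
                .map(Map.Entry::getValue)
                .findFirst();
    }
}
```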
This has enabled us to reduce our overall analysis time by roughly 80 percent: where there are common trends or common test case failures, we essentially just take a quick glance and approve that failure in our tool, and it either gets logged automatically as a new defect, or an existing defect is reopened or comments are added to it. Again, I'll be more than happy to share all those details with you; just reach out to me and I'll be happy to share them.

That's very generous, thank you. Vikas, one more question just came in which I think is pretty good, I think the group could benefit from it, and this will be our last question: can you provide an example of how predictive analytics can minimize the need for a domain expert?

Okay, I will spin this question into a slightly different context, because, as you rightly said, Rick, this is a fairly important question, and it is something we have actually written a white paper on. My belief is, and this is the reason I said that domain knowledge is critical and important, that a prediction algorithm cannot cut out the need for domain expertise, and that is why I always say that automation cannot and will never take away the job of domain functional engineers; what it can perhaps replicate, and eliminate, is the work of purely functional engineers. A prediction algorithm per se cannot eliminate domain experience or domain knowledge, and that is one of the reasons why one of the parameters in our algorithm is actually the business criticality of a functional module, which essentially comes from the domain side. I hope I answered that question correctly.

Yeah. Well, I'll tell you, it was a very informative and useful webinar; I'm sure that everyone got a lot out of it, and we really appreciate your time today, Vikas, presenting for us. So again, I want to thank you, and we hope to hear from you again in the near future. So thanks.

Thanks, Rick. You're welcome.

All right everyone, this concludes today's webinar; we really appreciate you attending. Just a quick reminder that we do have the early bird discount for STPCon, which will be held in Boston, Massachusetts, September 23rd through the 26th; that early bird discount expires on August 2nd, so make sure you stop by stpcon.com and check out the program, the speakers, and all the different networking events and those kinds of things. We're going to have a lot of fun; again, we're celebrating our 10th anniversary in testing, and we'd really like to see you there. Our next webinar is on July 24th; you can see that webinar, as well as all the other webinars we have posted for the upcoming season, at softwaretestpro.com/training. Again, everybody, thanks a lot for attending today's webinar; we really appreciate it. I hope you all have a great week, and keep practicing your testing and automation. We'll talk to you soon. Thank you. Bye-bye.