Transcript for:
AI in Aviation - Key Topics and Discussions (Day 1)

And of course we all understand that it offers lots of opportunities, but also challenges, so we need to understand how AI can support aviation, but also how it can do so in a safe way. EASA has been working on this topic for several years now, as you know, and I will go through what we have done at a very high level; Guillaume will then go through everything we have done in detail, and what is coming next. I was just asking how we compare with other regulators in the world — and we are very honored to have the FAA here today — and we could say that EASA is leading the way in this area. We are also very proud to work with all of you, because this is not a topic that one single partner or stakeholder will crack; it requires the cooperation of industry, regulators, academia, researchers — basically the whole spectrum of stakeholders. So again, thank you for being here and for helping us move this agenda forward.

We started the journey with some concrete use cases in the area of research. In particular, as you know or may know, we have these Innovation Partnership Contracts (IPCs), which are basically contracts between EASA and a company that is bringing forward some kind of innovation or new concept, where we work together and learn from each other about how to bring it forward. In 2020-2021 we investigated two CoDANN IPC projects around computer-vision-based runway and traffic detection applications with a Swiss start-up, Daedalean — sorry, I can never pronounce that one, but you know who I mean. This company and this project brought very fruitful learning and expertise to complement EASA's development assurance expertise, giving shape to the W-shaped learning assurance process that was developed in the first issue of the EASA Artificial Intelligence Roadmap. Then in 2022 we explored formal verification methods for neural networks for systems health — basically to support maintenance, using artificial intelligence to monitor these applications — with Collins Aerospace Applied Research and Technology, in an IPC called FORMULA. More recently, in 2023 — this project is ongoing — we have been moving towards Level 2 of artificial intelligence, where we are basically teaming up with AI, and we are investigating auto-taxi, basically from gate to runway, with Boeing as part of another IPC. So again you can see how interesting all these applications could be, and how AI can facilitate processes as well as normal operational aspects of aviation.

We have also done research with Horizon Europe: the MLEAP machine learning project, which Xavier has been leading. This is a very, very interesting project that you will go through in detail tomorrow, because there is a whole day on machine learning, so I'm not going to say much more — I don't want to steal your thunder — but there will be so many applications, from collision avoidance systems to natural language processing and camera-based automatic inspection to support aircraft maintenance activities. As I said, Xavier will go through all of this in detail tomorrow. But we haven't done just research; we have actually done some more concrete things that are now leading us to the certification area. We have started to go through some AI-in-aviation certification projects, starting
with general aviation, because we want to build from a simpler base — simpler aircraft — and then build up to large aircraft, applying the things we have learned with the general aviation sector. In this way we learned how to publish the first AI special condition, what we call SC-AI-01, in 2022, and the first EASA AI concept paper was based on these learnings. We are currently working to update this special condition to extend it to other types of products — as I said, to large aircraft — so this work is ongoing, and I'm sure you will go through it in more detail.

However, there is a common denominator to make all this happen, and this is the need for an AI trustworthiness framework. This is basically the link between all these things, and the one we have to figure out and develop. The EASA AI concept papers will be consolidated in the starting rulemaking task RMT.0742 — again, these are lots of numbers, but I guess soon enough you will know this one very much in detail, so keep this number in mind.

More concretely, talking about the event: we have two days, today and tomorrow. On the first day we will be talking about the EASA AI Roadmap and framework consolidation — basically we will talk to you about what we have done so far, where we are today and what is coming, as well as some timelines, because people were asking yesterday what is happening when, and when we are going to see the rulemaking. Guillaume will go through all these things in detail, because we understand there is a push, especially from industry, to get all this done and basically move into a more practical way to apply AI. At the end of this process the rulemaking plan will obviously be launched, and we will also include the aspects of ethics, which of course are among the most important ones; our colleague Inès is leading on that through her PhD. Tomorrow we will be talking about the MLEAP machine learning research project and its final dissemination event. Xavier will be leading this and, as I said before, he will explain all the findings of the research project and how we are going to implement and disseminate them with all of you — basically we want to share everything that we found there.

I also want to thank our colleague Peter, who is leading the Scientific Committee. This is a very important forum where we engage with academia to get their views, and not only on artificial intelligence: we are doing other things, like the impact of weather and climate change on aviation, which as you know is another big topic — we saw just recently another turbulence event that caused some injuries on an Air Europa flight going to Latin America. So, very important input from the Scientific Committee to understand what is happening with AI not only in aviation but in other sectors as well. This is very much cross-sectorial, and actually in Europe we are following the European AI Act, which is cross-sectorial as well, so we need to learn as much as we can from other sectors, from other regulators, from industry, from all of you. Okay, so I think I'm going to stop here. I just want to encourage you all to use these two days to exchange information, to ask questions — basically to make this as meaningful as possible — and of course we will follow up on all of it. With that, I will pass the floor to Guillaume. Thank you very much.

So our next
speaker is joining us remotely, I believe. So, Guillaume, if you're just setting that up: we're next going to hear from the Commission, from Antoine-Alexandre André, who will give us an update on where we are from the regulatory side. So welcome, Antoine-Alexandre, and over to you.

Good morning to everybody, I hope that you can hear me and see me properly. — Yes, we can see you. Thank you very much for joining; I'm just trying to put the slides next to you... that's done, so the floor is yours. Thank you for participating with us today. — Thanks, Guillaume, and thanks to EASA for organizing this event at such a crucial time. I think AI, and AI regulation in Europe, is on everybody's lips, so it's great to be able to present it, to focus on the aviation sector, and then for you to discuss it today and tomorrow. If you can move to the next slide, I will start directly — super, thanks a lot.

Today, as very briefly mentioned, I will explain at a very high level the content of the legislation, focusing really on the specific elements that are important for the aviation sector. I will also clarify some elements of the interplay between the European Commission and EASA, so that we are all clear on the next steps and the future requirements that will need to be complied with by organizations in the aviation sector. My name is Alexandre; I'm part of the AI Office, in the unit that was in charge of the legislative process and is now also in charge of the implementation of the rules. We have been negotiating this text for the past four, almost five, years. It was a long process, but I think the journey has only started, now that we have to move towards the implementation of these specific rules. Next slide, please.

This is probably the most well-known slide when it comes to the AI Act. It tries to capture the essence, the objective, of the legislation in one visualization, and this visualization is the pyramid that you can see on the left-hand side. What the Act wants to achieve is to protect citizens and consumers on the EU internal market against the risks to health, safety and fundamental rights that AI systems may pose. So from the very start the Commission — and this was also supported by the co-legislators — decided to propose this pyramid approach, with specific requirements for those systems that pose specific types of risk to health, safety and fundamental rights. If we start at the baseline, the green part of the pyramid: for those AI systems that do not pose any risk to health, safety and fundamental rights, the legislation does not impose any restrictions. When we introduced and published the proposal for this legislation, the Commission estimated that about 85% of the systems put on the internal market would fall under this specific category. If we move one step higher, to the yellowish category, this is the category of AI systems which interact directly with humans. What we want here is to protect citizens and consumers, making sure that when they are interacting with an AI system they have enough information and they know that they are not interacting with another human, but that they are
directly interacting with a machine. If we move again one step higher, we enter the core of the legislation, which is the high-risk category. There, the use of AI systems that fall under this category is fully permitted, but these systems will be subject to compliance with specific requirements which are set out in the legislation, and on which I will spend a couple of seconds later in this presentation. And last but not least: those AI systems which are considered as posing an unacceptable risk to health, safety and fundamental rights are simply prohibited from being used or put on the internal market in the European Union.

So, as a first step, for organizations involved in the aviation sector, the task is to list and identify the different types of AI systems or general-purpose AI models that you are using. The second step is to determine whether you fall into one of these categories, and for this I will go into the details of each of the categories. Next slide, please.

For the last part of the pyramid — those systems which pose an unacceptable risk to health, safety and fundamental rights and are purely and simply prohibited — these systems cover the following areas, which are depicted on the slide: systems used for social scoring for public or private purposes, for biometric categorization in specific contexts, for real-time remote biometric identification, also in specific contexts, for individual predictive policing, or AI systems used, for instance, in emotion recognition. So here, if your organization is developing or deploying systems that fall under this category, that fall under these areas, they will simply be prohibited from being used on the internal market six months after the entry into force of the legislation — so six months starting from August of this year. Next slide, please.

Now we enter the second category, which is, as I mentioned, actually the core of the legislation: the high-risk category. Here, the question organizations should ask themselves to see whether their AI systems fall under this category is actually twofold: first, whether the systems being used by the organization are a safety component of a product, or an AI system used as a product, and, cumulatively, fall under existing Union harmonisation legislation which requires third-party conformity assessment. The systems falling under this category include systems from the aviation legislation, as the aviation legislation is depicted as one of the Union harmonisation acts listed in Annex I. That means that systems used by the aviation sector will need to meet the specific requirements if they fall under the high-risk category of the AI Act. That does not mean that they would need to go through two different conformity assessments — not at all. It means that the third-party bodies which perform conformity assessment in a specific sector will also need to check the specific requirements introduced by the Act. In the case of aviation, Article 108 actually gives EASA the power to directly perform the conformity assessment while also taking into account the specific requirements introduced by the European AI legislation. Next slide, please. These are not actually
the only systems that fall under the high-risk category. There is another type of system that can fall directly under this category: systems based on use cases which are listed in Annex III of the legislation. Here we cover areas such as biometrics, employment, education, law enforcement, border management and others. What is also very important to note is that the co-legislators decided to introduce a filter mechanism. What does this filter mechanism mean? If an organization can show, can prove, that an AI system falling under this category does not pose any risk to health, safety or fundamental rights, it can simply opt out — say, and document, that the system should not be considered high-risk. Here, on the right-hand side, you can see some of the specific systems depicted in the legislation: for instance, systems that perform a narrow procedural task, that improve the results of previous human activities, that do not influence human decisions directly, or that perform purely preparatory tasks.

So, once an organization has identified all of the systems being used within the organization, it has to determine whether these AI systems fall under one of the categories of the Act; and then — next slide, please — if a system falls under the high-risk category of the AI Act, the organization has to determine whether it is directly a provider of the AI system or only a deployer of it. If it is a provider of the AI system, the legislation is quite extensive on the number of requirements the provider has to comply with in order to be compliant with the legislation. If it is only a deployer, the requirements are reduced. The requirements imposed by the legislation are displayed on your screen and cover things like the introduction of risk management systems, the introduction of quality management systems, and making sure that specific requirements relating to data quality, documentation, transparency, human oversight, accuracy, cybersecurity and robustness, and so on, are put in place, in order to make sure the systems put on the internal market are fully trustworthy. For the deployers, as I mentioned, the requirements are very much lighter than for the providers. The deployer obligations include, for instance, a degree of human oversight — making sure a human stays in the loop of the decision process — and making sure that affected workers are informed that specific AI systems are used on them, and so on. So what is really important for an organization is: first, to identify all of the systems being used; second, to see under which category each AI system falls; third, to determine whether the organization is a deployer or a provider of these AI systems; and finally, to comply with the specific requirements, whether as a deployer or as a provider of these high-risk AI systems. Next slide, please.
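[Editorial note: Antoine-Alexandre's two-step test — category first, then role — is essentially a decision procedure, so a compact sketch may help readers follow it. The sketch below is purely illustrative: the class, field names and category logic are a paraphrase of the talk, not text from the AI Act, and it is neither legal advice nor EASA tooling.]

```python
from dataclasses import dataclass
from enum import Enum, auto

class RiskCategory(Enum):
    PROHIBITED = auto()    # unacceptable risk: banned on the internal market
    HIGH_RISK = auto()     # the core of the Act
    TRANSPARENCY = auto()  # interacts with humans: disclosure obligations
    MINIMAL = auto()       # no specific restrictions

@dataclass
class AISystem:
    prohibited_practice: bool        # e.g. social scoring, predictive policing
    safety_component_annex_i: bool   # safety component of a product under Annex I
                                     # legislation (aviation) needing third-party
                                     # conformity assessment
    annex_iii_use_case: bool         # e.g. biometrics, employment, education
    filter_opt_out_documented: bool  # documented "no risk" filter exemption
    interacts_with_humans: bool

def classify(s: AISystem) -> RiskCategory:
    """Category triage as described in the keynote (illustrative only)."""
    if s.prohibited_practice:
        return RiskCategory.PROHIBITED
    if s.safety_component_annex_i:
        return RiskCategory.HIGH_RISK
    if s.annex_iii_use_case and not s.filter_opt_out_documented:
        return RiskCategory.HIGH_RISK
    if s.interacts_with_humans:
        return RiskCategory.TRANSPARENCY
    return RiskCategory.MINIMAL

# A hypothetical avionics function: a safety component under aviation (Annex I)
# legislation -> high-risk, with conformity assessment done by EASA per Art. 108.
example = AISystem(False, True, False, False, False)
print(classify(example))   # RiskCategory.HIGH_RISK
```

The provider/deployer split described in the talk then determines which set of obligations applies on top of this categorization.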
Now, during the negotiation process, certain new types of very high-level, very capable AI systems were put on the market, notably systems based on what are called general-purpose AI models. Given the specific risks posed by these types of general-purpose AI models, the co-legislators decided to add a layer of requirements for the providers of these specific models. For all such models the requirements are quite high-level and actually quite easy to comply with, but the legislation also introduces new requirements for those very highly capable models that are called models with systemic risk. One of the thresholds that has been put in place by the co-legislators is a computing threshold of 10^25 floating-point operations (FLOPs), a computing value introduced by the legislation. For those organizations that develop general-purpose AI models with such systemic risk, which can pose specific risks to society, the legislation introduces specific requirements — but these providers of general-purpose models are very few at the moment in the world; this concerns specific types of models developed by very specific companies, of which there are not many in Europe, or indeed in the world. Next slide, please.

So, now that I have explained these two parts of the legislation — the rules which apply under the high-risk category, and the rules which will need to be complied with by providers of general-purpose AI models — an important element to stress is the timeline, the graduated approach adopted by the co-legislators for fully complying with the legislation. As I mentioned at the beginning, the legislation is supposed to enter into force on the 1st of August, one month from now. Six months after that date, the requirements on prohibited systems kick in, meaning that in six months' time — actually, in seven months' time from today — systems falling under the prohibited category will be fully prohibited from being used on the internal market. All the rules specifically focused on general-purpose AI models will kick in one year after entry into force. The most important part of the legislation — actually, most of its provisions — will only enter into application two years after the entry into force, so more or less two years from now. And for those systems which fall under Annex I — which includes the systems used in the aviation sector — the time to comply with the requirements is a bit longer, as the organizations that are part of these specific sectors have three years to comply with the requirements introduced.

To finalize this very short and high-level presentation, I really want to stress that our intention — the intention of the Commission — is to streamline, to use as much as possible the existing mechanisms in place in dedicated sectors. That is why the legislation specifies that the organizations responsible for conformity assessment will continue to be fully in charge of conformity assessment in specific sectors such as aviation. That means that in the future EASA will still be fully in charge of conformity assessment for what concerns aviation rules, but taking into account the specific requirements introduced by the AI legislation. And that's it for this presentation; I'm happy to take any questions if you have some.
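[Editorial note: to put the 10^25 FLOPs systemic-risk threshold mentioned above in perspective, here is a back-of-the-envelope check. The "6 × parameters × training tokens" estimate of dense-transformer training compute is a common rule of thumb from the scaling-law literature, not something the AI Act prescribes, and both model sizes below are hypothetical.]

```python
# Rough check of a training run against the AI Act's 1e25 FLOPs threshold.
# training FLOPs ~= 6 * N_params * N_tokens is a standard heuristic for dense
# transformers (forward + backward pass), not a formula from the legislation.
THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

for n_params, n_tokens in [(70e9, 15e12), (400e9, 15e12)]:   # hypothetical runs
    flops = estimated_training_flops(n_params, n_tokens)
    presumed = flops >= THRESHOLD_FLOPS
    print(f"{n_params:.0e} params, {n_tokens:.0e} tokens "
          f"-> {flops:.1e} FLOPs, systemic-risk presumption: {presumed}")
    # 70e9 params * 15e12 tokens ~ 6.3e24 FLOPs -> below the threshold;
    # 400e9 params * 15e12 tokens ~ 3.6e25 FLOPs -> above it.
```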
— Yes, thank you very much, Alexandre. We were not exactly planning to take questions on the keynotes, but because we have two minutes: if there is a burning question from the floor, just let us know. No? I think, yeah, let's move on with the program. You will have, just after, the possibility to use Slido, a question-and-answer tool that we will use for the Q&A sessions. But thank you very much, Antoine-Alexandre. — Feel free to... — No, we really appreciate it; feel free to stay with us as long as you want — you can follow the WebEx — and thank you for taking the time for us today. — Thank you.

Okay, so, thank you very much, Antoine-Alexandre. We are now going — am I going? — to hand over to Professor Dr Peter Hecker, who is also the chairman of our Scientific Committee, which EASA set up just two years ago, in 2022; and in his real job he has been the head of the Institute of Flight Guidance since 2005. So I hand over directly to you.

Thank you. Yes, thank you very much. Good morning, ladies and gentlemen. It is a big pleasure to provide a little insight into the Scientific Committee and its work on artificial intelligence, and before entering into the topic of AI and automation, I would like to provide a little update on the Scientific Committee. As mentioned before, it was established two and a half years ago, and it is a big pleasure to support EASA here by providing advice to the EASA Executive Director, especially on issues which are somehow linked to science and emerging technologies. There is a group of overall 11 international experts supporting EASA, and it is a very lively committee focusing on several areas. Currently our work program focuses on, first, establishing a mechanism for connecting academia more strongly to EASA, to have a lively body of research and exchange on relevant academic topics. Secondly — and this has been mentioned before in another speech — looking a little more closely at what climate change means for aviation: in many cases we discuss it the other way around, asking what the impact of aviation is on the climate, but here we are looking the other way around — how will the climate change, and what does that mean for the evolution and the safety of future aircraft? And what I am focusing on today, the AI and automation task force, is one of the areas we are looking into. There are annual reports publicly available, and I really encourage you to have a closer look; they are free to download from the EASA website. There are short reports released for 2022 and 2023, plus annexes with a lot of exciting material if you want to dig deeper into these specific topics. So there is the link — have a closer look, and feel free to look into the documentation.

Now, focusing on the area of artificial intelligence and automation: the task force consists of something like 12 people working together, seven from EASA — looking around, I think most of them are sitting here in the room today, almost all of them — and besides that there are five members from the Scientific Committee; the names are provided here. We work very closely along a stream of activities which was defined at the beginning of this year, and the starting point for our activities is actually the continuous development of the EASA AI strategy. There is a lot of material now available from EASA, for example the AI Roadmap in its version 2.0, which is a kind of action plan preparing the necessary AI trustworthiness guidance and the necessary regulatory updates to really support this innovation wave which is underway. And there is the concept paper, which is the
first usable guidance for Level 1 and Level 2 machine learning applications. This is the starting point where the Scientific Committee had a closer look and started elaborating recommendations, supporting EASA in looking into the area of human-machine collaboration and teaming, into ethics in AI and automation, and finally into what comes next, which is the design principles for Level 3 AI. As you can read here, there are very exciting things to do. When it comes to collaboration and teaming, we need to understand what the roles are, what the relevant use cases are, and how to validate the concepts we are developing here. In ethics, it is important to understand perception at the stakeholder level — at the level of the aviation professionals and at the public level: what does society think about AI in such a safety-critical environment? And, as I said before, we support the development of Level 3 AI activities. Translating this into the 2024 work program, there are three work packages we are looking at — I am not going to read all of it, just to give a little insight and impression. We are preparing, together with the colleagues from EASA, a survey of the general public, complementing the survey which was carried out last year and early this year reaching out to the professionals — we will hear more about that from Inès later today. We are actively testing the Level 2 AI human factors guidance with specific use cases, and we are already identifying use cases for Level 3 to come. So, many exciting things which are relevant to the further development of the roadmap and the guidance material.

There are three messages and statements I would like to hand over to you. The first is on the classification typology. You are well aware that the current EASA classification scheme consists of three levels, one to three: Level 1 is assistance to the human, Level 2 is human-AI teaming, and Level 3 will be advanced automation, and there are different sublevels — I am not going to enter into this, but you know the levels 1A, 1B, 2A, 2B, etc. Now, looking into other means of transportation and other areas where AI is being implemented and deployed, you might be interested in how things match. We looked into other transportation domains, for example into the air traffic management domain, where we looked into the ATM Master Plan 2020 — the new Master Plan is currently under development at the SESAR Joint Undertaking — and there we have the concept of levels of automation. If you look into the unmanned aircraft systems domain, there is a specific method; automotive has a specific method, the SAE J3016 levels of driving automation; there are other elements in railways, the Grades of Automation in IEC 62290-1; and furthermore. I am not going to elaborate on them very much, but the exciting question is: how does all that match? Can we map levels between the different domains? Do they correspond, are they coherent, or are they eventually divergent? So we looked very intensively — and you should not try to read this table here on the right, it is just to indicate the mechanism we applied: we built up a very large table, which you will find in the report, and there we tried to assign and somehow allocate the levels to each other. What we found out by doing this exercise is that the existing schemes across the domains do not easily match. Even if you compare the terms and definitions, you find out that there are different understandings,
different meanings, depending on the domain you are looking at, and that is something which may lead to misinterpretation and is therefore a kind of risk. In some domains the boundaries between the different levels are not always clear and consistent, and that is a matter of concern. What we believe as a Scientific Committee is that the classification typology as proposed by EASA is very transparent, it enables a coherent mapping to every other domain we have looked at, and in addition it provides very clear boundaries between the levels. Therefore we very much appreciate the effort taken here; we believe it is really paving the way towards a unique structure for classifying AI in various applications, even beyond aviation. So that is the first key message.

The second key message is about the human-AI teaming concept. In the guidance material for Level 1 and 2 machine learning applications we have already identified and seen the learning assurance element and the element of AI explainability, and now there is the human-AI teaming element, which is very important according to my understanding, because it will allow further steps in paving the way towards higher levels of AI deployment. Level 2 AI applications require augmenting the AI trustworthiness framework with additional human factors guidance. We believe that the differentiation between cooperation and collaboration between human and AI is something to be distinguished very carefully, and the framework of human-AI teaming is a very clear and very useful method to do so. What we have started with is testing along different use cases, and this is very important to really prove the concept: to understand whether it is mature, whether it is fit for purpose, and whether it really serves what we are looking for. The next step will be to define teaming concepts for Level 3 AI.

Now coming to the third and final message: what will be next — advanced automation and beyond. There is still a field of development for what lies beyond Level 3. We believe that extended AI safety risk mitigation concepts need a kind of human-centric approach, so this has to be introduced. In addition, it may be debatable whether the concept of complexity of operations needs to be integrated: what does it mean to apply AI and automation — does it depend somehow on the type of operation we are looking at, and how do we measure complexity? What is a complex mission or a complex operation? We need to carefully differentiate between advanced automation and autonomy, and potentially there will be new AI levels for autonomy — something to be discussed in the coming period. And I am very grateful for the presentation we received a minute ago: the alignment with the final EU AI Act is very important, and specifically Article 14 is something to be really investigated in detail, because on the one hand it is opening the door for autonomy, but on the other hand it is putting in some constraints, and we need to carefully analyze and understand what is written there. On one hand it introduces the notion of effective oversight by natural persons, which is laid down in paragraph 1 of Article 14; on the other hand, autonomy is mentioned explicitly in this same article. So we need to investigate how to open the door, what autonomy means in this context, and what human oversight means in this context. Looking at paragraph 3 in more detail may be the key enabler for Level 3A and Level 3B operations by AI, so that is an area of further investigation.
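[Editorial note: for readers new to the scheme Professor Hecker references, here is the EASA level typology as presented during these talks, captured as a small enum. The one-line descriptions paraphrase the presentations — Level 3A supervised, 3B non-supervised — and are a convenience summary, not the normative wording of the concept paper.]

```python
from enum import Enum

class EASAAILevel(Enum):
    """EASA AI classification as presented; descriptions are paraphrased."""
    L1A = "Level 1: assistance to the human - augmentation"
    L1B = "Level 1: assistance to the human - cognitive assistance"
    L2A = "Level 2: human-AI teaming - cooperation"
    L2B = "Level 2: human-AI teaming - collaboration"
    L3A = "Level 3: advanced automation, supervised by the human"
    L3B = "Level 3: advanced automation, non-supervised"

# Note: the committee's finding is precisely that mapping these levels onto,
# e.g., SAE J3016 driving-automation levels or IEC 62290-1 Grades of
# Automation is not one-to-one, so no cross-domain mapping is attempted here.
```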
Now, concluding my little presentation: we believe that EASA has very well demonstrated a proactive approach in paving the way for introducing AI in aviation — I think this is a really remarkable achievement. The approach of levels of automation as presented here is a major step towards structuring the application of AI in a clear and traceable manner, and that is a very important element to avoid confusion in the future. The roadmap and the guidance material which have been developed so far are an excellent basis for a unified understanding in the aviation community — really paving the way and building the foundation for further activities. And I believe that aviation, with the support of and driven by EASA, is a front-runner in structuring the levels of automation, which will support, or may support, a harmonization across different domains. So thank you very much, and I am looking forward to interesting talks today. Thanks.

Thank you very much, Professor Hecker. The next gentleman is responsible — he is the architect — for most of what we have just heard about on that last slide, and I think just about everybody in the room already knows him, so I will hand over immediately to Guillaume Soudain from EASA. Thank you.

Thank you, Janet, and thank you all for being here, so numerous — very nice to see a crowded scene here on site at EASA, and to see that people are also joining in numbers on the WebEx. I hope everyone is at ease on the WebEx and everything is working well; sorry for the small hiccups at the start. My talk, actually just before the coffee break, is to give you, let's say, the big picture. Let's start. First, "we will accelerate", as we say here: after an initial exploration phase, what we are bringing to you today and this week is in fact the consolidation phase. We enter phase two of the roadmap, and that is where we are — that is a milestone we have reached, I would say timely, with a certain background of what we have developed so far. That is what you will see in the different presentations from the team today, also looking forward to what the next steps and the challenges are. I really thank Peter for his presentation, because it also sets the scene of challenges: everything that we are dealing with with the Scientific Committee is definitely the most challenging, the most complex, ahead. Not saying all the rest is done — definitely the rulemaking will be a very important step for us to, let's say, concretize the necessary guidance, and you will see that also in the presentation from Joan a bit later. So, with no further notice, let's enter this now very famous slide — we always bring it forward; it is our roadmap from the start. We updated it in May 2023: last year's first EASA AI Days event was actually centered around this new roadmap. It is still our "new" roadmap in the sense that we really work to this plan, and it is our driving factor. The phases that you can see here on the slide — you also know them, because we did not change them; we wanted to keep this framework of first exploration, then consolidation, and then pushing barriers, and we still keep this logic. The exploration has brought us as far as we could, but timely, to enter into the rulemaking activity; you will see that presented in many more details in the speeches and presentations today. And, very importantly, in early 2024 we entered this phase two of
consolidation. We are at Level 2: we have tackled, I would say, Level 1 and Level 2 in a proactive but also a prospective manner. We are not at the stage where we would say the concept paper is the final guidance — definitely we will have a rulemaking exercise to crystallize what it means. Yet the concept paper, as Maria mentioned, is already used in special conditions in order to feed our applicants with early requirements. We cannot wait for, let's say, 2025 — the date by which we want to do the first approvals of Level 1 — to say: okay, we have waited enough, now here are the requirements. Of course a certification process does not go like this; we needed to anticipate as much as possible, and that was the first driver for issue 1 of the concept paper for Level 1 applications. I would say the applicants are served for the first applications, and the certifications we are undergoing prove it to be a meaningful framework. We have a couple of adaptations, as in any certification process, but there is nothing we identified as really missing or really too challenging to be overcome. We know there are challenges — that will be part of the MLEAP presentation on day two, where you will see some of the challenges we try to overcome with methods and tools — and some definitely remain; it is a journey and we are still progressing on that.

We enter this phase two with a consolidation mindset, so rulemaking is the big picture, or the big driver, of it. But we do not abandon the notion of exploration: we still have Level 3 on the plate. Peter just mentioned some of the challenges; I will also highlight a number of anticipations we already have, and we are pursuing this effort with the Scientific Committee to crystallize our driving principles for Level 3 AI — that is also a big step ahead. There is a kind of cut in the roadmap — it goes a bit further, but we did not want to show everything: the finalized guidance for Level 1 and 2 is the first step of rulemaking. It will of course be followed at some point by another rulemaking task — either an extension of the RMT.0742 task that Maria was mentioning, or maybe a new task; let's see how challenging it is — and that will also come with phase three, pushing barriers. We have a number of limitations already today, because we are in a safety environment: we are in aviation, and we are not trying to extract the maximum potential of AI; we are trying to extract its maximum potential within a safe framework. That means we are restricting some aspects, like for instance online learning or adaptive learning, whatever name you give it — we are saying no: you freeze your model at the time of certification, and then we can certify; there is no possibility for it to adapt beyond our understanding, let's say. And this is really the point where — I do not say for every application, but possibly — there is a bit of room for play. Level 3, as you will see from the anticipation perspective, will bring us the tools to embed in the systems some elements that could be very advanced monitoring logics, and bring us to a level where we could say: even if there is adaptive learning, in a certain frame and context framed by the application, we could possibly go to a certain level of adaptation of certain models in operations. But again, that is a very long-range perspective — that is why we pushed it out to around 2028, by which time we will have finalized the consolidation, meaning a really solid basis from which to approach, I would say, other possible capabilities.
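[Editorial note: Guillaume's "freeze your model at the time of certification" implies something operationally checkable — the deployed model must be bit-identical to the certified artifact. A minimal sketch of such a check follows, assuming the certified model is a serialized file whose hash was recorded in the certification data; the function names and workflow are illustrative, not an EASA requirement.]

```python
import hashlib
from pathlib import Path

def model_fingerprint(path: Path) -> str:
    """SHA-256 over the serialized (frozen) model artifact."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Recorded once, at certification time (placeholder value here).
CERTIFIED_SHA256 = "<hash recorded in the certification data>"

def verify_before_dispatch(path: Path) -> bool:
    """True only if the deployed model matches the certified one exactly.
    Any in-service adaptation of the weights would change the hash."""
    return model_fingerprint(path) == CERTIFIED_SHA256
```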
This is where we are. Again, we did not change the roadmap from the first one; we adapted slightly the timelines towards when we think Level 2B and Level 3A will come. This is not a limitation in itself — of course we will engage with any applicant bringing Level 3A today — but we anticipate that the path is still a bit longer than for Level 1, which is really the low-hanging fruit. Not saying it is simple — I am always saying it is not; I am not trying to simplify — but Level 1 gives us the framework in which we can safely certify applications, again with limitations on the level of criticality. These are all things you will see in more detail in the presentation from François and Renée just after.

From that perspective, what is consolidation for us? Rulemaking task RMT.0742, which starts now — we published the terms of reference two weeks ago, and this is the start, so we will have a rulemaking group; everything will be presented by Joan also just after. Continued exploration: as I said, we do not abandon that. In parallel, part of the team is still working on expanding the AI assurance technical scope: reinforcement learning is a big stake — we see it coming — along with symbolic AI and hybrid AI, which we talk about with many industry partners. Human factors for AI is another big stake, with Renée — and with what you presented, Peter, we have a lot of things still to investigate and test from the guidance. The ethics assessment, with Inès — you will see a lot more, and there will be a discussion panel on the boundaries to give to this exercise. Advanced automation, Level 3 AI, is a major challenge. Generative AI and tools is another question we regularly get: what do we do about it? Here, with the concept paper, we definitively address the embedded part of the AI — and by embedded I mean not necessarily airborne, but embedded in systems under certification or approval — but what about tools used in operations, either to develop (why not) or to enable certain operations under organization approvals? This is something we also need to anticipate: we have a fifth project that has just started under the program in order to address this aspect, and it is something we will talk about, at least in a slide, today. The consolidation phase is a four-year period, 2024 to 2027, and this is something you will talk about with us a lot in the coming years from a rulemaking perspective.

So now I will go through the three layers one by one, from the rulemaking perspective. I will not say too much, because again Joan will present all the details, but from the top level this picture gives the environment in which we are evolving. At the top, the EU AI Act, which was just introduced by Antoine-Alexandre: we have this Article 108 that gives us a mandate to do something about it. That is why in the rulemaking we will have a Part-AI in order to clarify this link to Chapter III, Section 2 of the AI Act, which is the mandate that is given to us. And from that Part-AI — which will be as light as possible, but will aggregate authority requirements, organization requirements and technical requirements — we will have a framework. Most of what the concept paper will become is a set of generic AI AMC and GM — AMC, acceptable means of compliance,
and GM, guidance material. From that, in a step two, we will instantiate this in the different domain regulations that are impacted. This is really the place where the work will be very fine tuning of connections: interfacing can be done in various ways, and that is something we will again investigate with Joan and the rulemaking group on the rulemaking task that we are starting. Very important to mention also the industry standards: a lot is ongoing — you will have a keynote speech and also a flash talk from the chairman of Working Group 114 a bit later today — and, as usual, we try to rely as much as possible on the industry standards that are under development, and we will definitely make the effort to recognize them in the AMC as far as practicable. The timeframe will also be restated by Joan.

From a consolidation perspective, and with further exploration in mind, another slide shows where we are in terms of technical trustworthiness concepts. We developed a number of things for Level 1. You know very well the learning assurance W-shaped process: that is something very important for machine learning, which needs to be re-evaluated by François's team for hybrid and symbolic AI, and even for reinforcement learning. AI explainability is a layer that is absolutely necessary from everyone's perspective, but it is very use-case specific, so we still need to think about what set of generic guidance we need to give for AI explainability. Continuous safety and security risk assessment is a very, very important element of the data-driven approach — data is really the new gold. We need to gather signals from the operational perspective on whether something is wrong or incorrect — maybe an incorrect assumption that was made at development time — and we need to monitor that, at least for, I would say, the first years; after that we will see what needs to be mitigated a bit differently. From that perspective we push beyond existing considerations on continuing airworthiness, for instance — if I take one domain — we will push the data recording and monitoring capabilities on AI systems a bit further, and that is a very important block that François will also describe in more detail.

Then there is what Level 2 brought us to consider on top: ethics was already on the plate for Level 1, but we really pushed it one step further for Level 2 specifically, because it is in the interaction with the human — cooperation, collaboration — that it starts to be very important, and human-AI teaming is the framework that was created on the human factors side to answer the needs arising from the notions of cooperation first, and then collaboration. That will be presented by Renée in much more detail just after the break. The scope of RMT.0742 is definitely this — this is where we are: Level 1, Level 2. We do not claim we are done — again, there is a lot of processing still — but this is our scope now.

What is the further exploration we have in mind? Issue 3 of the concept paper is upcoming: we will initiate it at the end of this year, with a target of the end of next year, to come up with a proposal for the scope extension to reinforcement learning, symbolic AI, statistical AI and hybrid AI — François will say a lot more about that. Also, human-AI supervision and unsupervised automation are the two next levels to investigate, 3A and 3B.
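[Editorial note: coming back to the data-recording and continuous safety monitoring block mentioned a moment ago — in practice this amounts to logging the model's in-service inputs and outputs and flagging operating conditions that drift away from what was assumed at development time. A deliberately simple sketch follows; the one-feature z-score rule, the threshold and all names are invented for illustration, standing in for the far richer monitoring a real continuing-airworthiness process would define.]

```python
import json, time
from statistics import fmean, pstdev

class OperationalMonitor:
    """Logs model I/O in service and flags out-of-envelope inputs."""

    def __init__(self, training_values: list[float], log_path: str,
                 z_threshold: float = 4.0):
        # Envelope estimated from the development-time (training) data.
        self.mu = fmean(training_values)
        self.sigma = pstdev(training_values) or 1.0
        self.z_threshold = z_threshold
        self.log_path = log_path

    def record(self, feature: float, model_output: float) -> bool:
        """Append one record; return True if it warrants safety follow-up."""
        z = abs(feature - self.mu) / self.sigma
        flagged = z > self.z_threshold
        entry = {"t": time.time(), "input": feature,
                 "output": model_output, "z": round(z, 2), "flag": flagged}
        with open(self.log_path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return flagged
```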
3A is not an easy one, because we are, in a sense, putting the human more as a supervising element over machines — let's say AI systems — that are much more independent. I do not say autonomous; you see why: it is a very big clarification we want to bring here to the forum, also for discussion. We do not want to leave the human in a state of incapability to recover and take over an operation in case of, I don't know, a failure on the system — that is a very important human factors element that we need to bring in. Level 3A and Level 3B will really bring us, let's say, the jump into the unknown of Article 14 of the EU AI Act, and that is something where we will really need to work on the interpretation of the boundaries that we can reach. If you interpret it strictly — I do not want to speak for Antoine-Alexandre, but strictly speaking — let's say you need a human in the loop of operations. Now, what does that mean? It is exactly what we need to find out, in a common effort.

The last element of the consolidation is this use of AI for operational tools — and on purpose I am putting "generative" in brackets, because generative AI is the hype that is bringing us this concern, even wider, let's say, with ChatGPT and large language models, but it is not necessarily only generative AI that will be used as tools. Now, operational tools: "tools" means we have a scope in mind of organization approvals. We do not want to say to anyone: come with your tool that will help aviation and we will certify it — that is not the talk here. We are just saying that within an organization approval we could have applicant organizations — DOAs, POAs, MOAs — going for the use of tools; but real tools, not in the sense of systems that are running in a cockpit or in an ATM environment. And this is the place where we have to start questioning how much of the concept paper would apply, and how we can enable those possibilities. Would generative AI, like GPT, make it through the concept paper objectives? No, clearly not. So that is the entry point: for using public generative AI there might still be a small open door, let's say, but let's see how far. In the middle, we also need at some point to think about prototyping — or not only prototyping, but thinking in a sandbox, meaning enabling a safe and secure environment for developing ConOps that make sense, in an environment where the data stay inside and there is no problem with data privacy, etc. This is for instance what we do at EASA for safety report analysis — safety reports are definitely elements that should not go out — and we also have discussions with the European Central Question Bank: these are things that should not leak in any way, so they need to be mastered in a controlled environment. So the message is: we have the same concern internally, but we see this element also benefiting any organization wanting to use AI tools, and we have examples and IPCs that are actually starting on this side. And the last end is developing end-to-end applications, which means more in an embedded-like way — of course it is the high end, it does not apply to everyone, but it is another possibility. What we are working on, or investigating, is the possibility of having a sort of label for tools — let's call it a "non-embedded AI trustworthy tools" label; of course we will have to find a catchier name, but it actually aggregates what we have in mind for now.

And based on that, the top three program activities — no surprise: we will execute, first priority, the rulemaking
plan, and in parallel continue the work on certification to approve the first Level 1 applications, target end of 2025 — target end of 2027 for the RMT, but, as you will see with Joan, with an NPA planned for mid-2025, so there will be a lot happening in between. Second priority, but very high on the radar: initiate and develop the concept paper issue 3, with all the extensions I was mentioning before. And third: enable safe and efficient use of AI in operational tools, with a target end of 2025 — this is also, of course, to be taken carefully, and the label is an anticipation, not something we want to commit to, but we will find out the best way to deal with the internal use of AI at EASA, while at the same time thinking widely about organization approvals and support to our NAA colleagues. And, just checking the time — no, we are perfectly on time.

One slide I really wanted to bring now as the real big perspective. One thing we are always talking about — it is the subtitle of the roadmap — is a human-centric approach to AI in aviation. We really believe in it; we are really focused on the human. For what we have done so far — Level 1, Level 2 — it was easy: we have a human end user in the operations, like a pilot, an air traffic controller or whoever, and this was easy to sell, and not even worth talking about until now, because it was obvious. Now that we move towards Level 3 — where the human is possibly a supervisor, or there is no human in the operation — how can we stay human-centric in that case? That is a big question. One thing before giving you the big picture of this human-centric approach — a very important clarification: we call the AI Roadmap the "AI" roadmap because we started by thinking about AI, definitely, but it has actually brought us beyond what we were anticipating initially, to a roadmap that is AI and automation — or at least advanced automation. I do not want to just say "automation", because we have a lot of automation already in cockpits today, and we are not trying to reinvent the wheel for tools and standards or methods that are working. But definitely with the high end of Level 2, like 2B, or with 3A or 3B, we are really going onto unknown ground, and this is what we call advanced automation; the roadmap on AI is really serving as a placeholder for that. It does not mean we are ruling over everything that is connected — every domain will have its own projects; if you think of extended minimum-crew operations, there is another project running in parallel, and we are not dealing with that. But we are kind of the enabler: AI is not per se a product, it is an enabler, and it enables this advanced automation that we need to look at. Therefore we take use cases from ATM, we take them from extended minimum-crew operations, and we also take our partnership with SESAR and all our colleagues on the ATM Master Plan update, especially to take on board all these use cases and go in a meaningful way towards an AI and automation roadmap that is very comprehensive and very logical, at least from an EASA perspective. That means, as I said, that advanced automation is obviously enabled by AI, but we always say we do not push people to use AI — that is not the point either. So we could have non-AI automation — we have it today — and we could have it going even further; why not into Level 2? Fine. AI assurance in this case would not apply — that is one of the building blocks — but we have other building blocks in the roadmap: we have human factors, we have ethics, and all of
these elements. If you push the cursor too far, I would say, on advanced automation, you will definitely hit the need for the objectives that were developed in building blocks other than the technical AI assurance part.

Last thing: Level 3B comes with even more challenges. It could be sold as "a Level 1, just without a human" — but no, there is a lot to consider, and this is where, let's say without a net, I tried a bit of anticipation and thinking. What are the big elements we have to consider when we think human-centric? First, human agency and oversight — that is the key element in the AI Act, per Article 14. We are totally aligned with this logic of protecting human agency but also the human's capability of oversight, and that is a big challenge for Level 3A. We have the development and the operational aspects in the concept paper, and we will of course carry this over into the rulemaking. The development phase is extremely crucial, and so far it may be an implicit statement, but the development phase is under human oversight and control, totally. We will not, in aviation, release systems that are AI-developed on the belief that the AI has done better than the human — no, not at all: development is under human oversight, 100%, today. So there is an "and": it is 100% in development, and it is some percentage — I do not want to give a percentage — in operations. We have the notion of authority in the concept paper, which says it is full authority up to Level 2A, a partial release of authority at Level 2B, an extensive release of authority at Level 3A — else the operation cannot work — and a total release of authority at Level 3B. What does authority mean? We also have responsibility and accountability to keep in mind, and liability in the background — we do not rule on liability, but we will think about responsibility. If we certify a system that is super advanced in terms of advanced automation, we need to think about who is responsible in case something happens. Until Level 2 we can say it is the end user — that is easy; we have a framework, and we can keep the framework of aviation as it is today. When we cross the barrier of Level 3, we have to think and really scratch our heads, so that is a big part of the concept paper issue 3.

Then there are the ethics-based assessment objectives — you will hear a lot about that later this afternoon, with a presentation from Inès and the discussion panel. What we see is two terms to consider for Levels 2 and 3: technical and socio-technical. We are a technical agency — we know where we can put the boundary on the technical side — but the socio-technical dimension creates a tension, brought about by advanced automation at whichever level we can reach with AI, and that is something for which we need to create the right framework: one that is practicable for applicants but that does not leave aside really important social issues. That is something we will discuss a lot along the day. To finish — and this is my last point — between those terms it is not necessarily a tension, not an opposition; it is a bit like Jacques Derrida, the philosopher: putting things in opposition is actually how meaning emerges from both sides. Very importantly, the arrows will need to be clarified as to whether they are an "and" or can be an "or". Let's say that today we would put an "and" everywhere: whatever we do at Level 2, no question.
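[Editorial note: the authority gradation Guillaume quotes from the concept paper can be summarized in one small table; the wording follows his description above.]

```python
# Release of authority by EASA AI level, as stated in the presentation.
AUTHORITY_RELEASE = {
    "1A": "full authority with the end user",
    "1B": "full authority with the end user",
    "2A": "full authority with the end user",
    "2B": "partial release of authority",
    "3A": "extensive release of authority (else the operation cannot work)",
    "3B": "total release of authority",
}
```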
Today we would put an AND everywhere for whatever we do in levels one and two — no question up to level 2B. At level three we need to think how much we can release: to have possibly an OR, for example development only for the oversight, maybe not in operations — else what do we do with U-space, with drones, etc.? For accountability we may have to put an OR as well: the end user and/or someone else. And for the ethics-based assessment we have to create this technical-social framework that really meets the need — so probably it will not be an OR but a union at the end. That's where I will leave you for now.

A last word before entering the coffee break: we have this Slido, and we will leave it open through the coffee break. You can use your mobile phone to scan the QR code, or use the website, entering "AI days" as the event code together with the passcode. With that you enter Slido, which is a question-and-answer platform where we will gather all your questions. There will be blocks of presentations from now on during which you can post your questions, and there will be Q&A sessions — you can see them on the agenda — where we will take the questions with the full team that was intervening. With that, I would say I leave the floor to Janet.

Yes, thank you. One thing from me: I know you all know how to drink coffee, but you need to know how to find the coffee. It's actually going to be in our Boeing room, so you need to go through the bistro, which is on the other side of the atrium here; keep going, and then on the other side towards the left there is a door into Boeing, and that's where you'll find the free coffee, so to speak. Please be back at 10:50 so we can start on time. Thank you.

Welcome back, everybody. We're going to get started — we now have another tight and fun-packed 75 minutes. We're going to start off with two of my EASA colleagues talking more about the latest concept paper, so I'd like to welcome now Renée Pelchen-Medwed — I hope I got that right — and François.

Thank you. Good morning, everyone. My name is François Triou; I am seconded from EUROCONTROL to EASA, and my role in the AI program is project manager for AI assurance. During this presentation I will be accompanied by Renée, but I think she will present herself later. I hope everyone got the Slido; do not hesitate to ask your questions via the Slido — maybe we can go one slide on? Yes, I can show it again so that you can take a picture of it.

Just some reminders about the process that we applied for issue two of the concept paper. Similarly to rulemaking activities, we went through a consultation process in order to produce this second issue: we published a draft version in early 2023, and it took us almost nine months to really process all the comments that we received. We received roughly — no, exactly — 900 comments from many different stakeholders: many coming from industry, but also a lot from authorities, from academia and research, as well as the ATM community, without forgetting airlines, airports and so on. A huge variety of stakeholders. If we look at the dispositioning of all these comments, a lot were accepted or partially accepted, and a lot were replied to with clarifications about our intentions with the text of the draft issue two; a low proportion of comments were not accepted.
Going into the details, we tried to position the comments into big themes — obviously some comments could have been consolidated into several themes, but we decided to put each one into one theme or the other. The safety assessment was really significantly commented on, and for those who know the second issue of the concept paper, you will have identified that significant rework was done between the draft and the final issue, especially with regard to the sequence of anticipated MOCs. A lot of comments were on the learning assurance, and here again there have been significant changes in the final issue — I would just mention the new dotted line on the W-shape, which really introduces this notion of requirements at the level of the AI/ML constituent. After that we have human factors, which is obviously linked to the area of human-AI teaming, and after that explainability. I would also mention the ODD, which is a really important notion and concept with data-driven approaches, and a lot of comments were going in that direction.

I would like to introduce the journey for the rest of this presentation by first reminding everyone of the four building blocks of the EASA trustworthiness framework: on the left, the assessment part — the trustworthiness analysis part of the framework — and on the right, all the technical blocks that answer, or satisfy, the assessment done by the applicant. So we will first dig into the classification of AI applications, and from this we will look at how the technical blocks enable transparency and also enable human-AI teaming. During the second part of the presentation we will come back to the assessment, looking at how we look at the risks and how we assess them; how the scope of the concept paper and its extension will be taken into account; how some of the technical blocks will enable the continuous safety assessment; and the next steps and how to enable advanced automation.

Thank you very much, François, for introducing the building blocks. If you have seen the concept paper — if you had the opportunity to go into it — you might know this table very well. Let me see where I point... I'll just use it here, okay, thank you. So you might know this table very well: we have our three AI levels, and each of them has a sub-level again, which leads us to six AI levels. It was already mentioned by Peter from the scientific committee that a mapping was done between different domains, so the AI levels of EASA are very important to ensure a mapping across domains — as Peter mentioned, the car industry, the rail industry, drones, but also the ATM master plan that is currently under revision was aligned with the AI levels as we have them in the concept paper. And of course this is important: if you have an AI-based system triggering an automation level from the ATM master plan, it is very helpful if you are talking the same language and you know which requirements are needed in line with your AI level.

I will now split the different AI levels into their sub-levels — into the six levels — to make it hopefully a bit more visible. We have level one, which we split into level 1A and level 1B. In level 1A we talk about human augmentation: automation support to information acquisition, so the first processing stages. Then, also still within level 1A, automation support to information analysis — here we are facilitating, enabling or touching the first cognitive functions.
To go into level 1B we have the boundary that comes when we introduce decision-making: the machine supporting the human in its decision-making — automated support to decision-making. For level two we are not talking only about decision-making any longer; we are talking about action implementation. That's the next step: the machine supports the human also in the action implementation. So we have level 2A and level 2B. Level 2A we call human-AI cooperation: a directed decision and automatic action implementation — the human is still monitoring and can intervene, interact, override whenever deemed necessary. In 2B some of the action implementations might be independent, so it's a supervised automatic decision and action implementation — again, it can be overridden by the human. So for the first three levels — 1A, 1B, 2A — we have full end-user authority: it's the end user who overrides and decides, supported by the machine. At level 2B we partially release this authority to the machine, and at all these levels, as Guillaume mentioned earlier, the responsibility stays fully with the human.

If we go to the next level, level three, this is where it will look a little bit different — and it's still under development, as mentioned; that will be our focus for the next concept paper, issue three. What we envisage for level 3A is safeguarded advanced automation: automatic decision and action implementation where the human can still intervene upon alerting — the machine might alert the human if an action is needed. And then we have level 3B, non-supervised advanced automation, where we do not have the end user any longer. So in level 3A we have limited end-user authority, and in level 3B the end user is not there any more. Again, we are working on it and will be working on it — it's a big focus for the next issue — so it is of course not fully developed yet.

Another building block that François mentioned in the beginning is operational explainability. If we talk about these AI levels: in level 1A, for example, it's the systems we have today, so there might be no additional need for operational explainability on top of the human factors requirements we already have in place. However, the higher we go in the AI levels, there might be — and I come to that later — something called shared situation awareness: the human might not have their own situation awareness alone any longer, so in order to still be aware of everything, he has to be provided with explanations. We are talking here about operational explainability — operationally focused; we have development explainability as well, and you can see all this in the concept paper, which lays out the full battery of possibilities. For operational explainability we then have to see — and here you have the elements — what has to be presented and how: what is understandable for the end user, how does it have to be presented to be understandable, when does it have to be presented, at which level of abstraction and detail given the situation at the moment, and of course, is the explanation valid? All this is needed in order to give the end user the possibility to understand the decision that the AI-based system is proposing, and to predict the AI behaviour as well.
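As an aside from the editing desk: the six levels just walked through can be summarized in a small data structure. The sketch below paraphrases the talk — the level names, scopes and authority split are as described above, not the exact wording of the published concept paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AILevel:
    """One EASA AI level as described in the talk (illustrative only)."""
    level: str
    name: str
    scope: str
    end_user_authority: str

# Illustrative summary of the six levels presented; wording paraphrases the talk.
EASA_AI_LEVELS = [
    AILevel("1A", "human augmentation", "support to information acquisition and analysis", "full"),
    AILevel("1B", "human assistance", "automated support to decision-making", "full"),
    AILevel("2A", "human-AI cooperation", "directed decision, automatic action implementation", "full"),
    AILevel("2B", "human-AI collaboration", "supervised automatic decision and action implementation", "partially released to the machine"),
    AILevel("3A", "safeguarded advanced automation", "automatic decision and action, human intervenes upon alerting", "limited"),
    AILevel("3B", "non-supervised advanced automation", "automatic decision and action, no end user", "none"),
]
```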
Only with this trust built is the collaboration, as we call it at level 2B, able to work — the end user has to have trust. But it's not only the trust of the end user: operational explainability also feeds into the system, of course, because you have to design your HMI in a way that is understandable for the human. It will very much influence the design in the end — but again in line with human factors principles that we already know today.

And then, coming back to level two: in order to facilitate this human-AI teaming concept, level two was split into cooperation and collaboration. In level 2A we have cooperation: the AI-based system supports the end user to accomplish their goal — so it is still the end user's goal — and it is all based on a predefined task allocation, so that no communication is needed, and it is the end user's own situation awareness that we are talking about. It's a very directive approach; the end user has full authority and is monitoring. Level 2B we call collaboration: the two parties work together — the human and the AI-based system working on a shared goal — and for that, communication is paramount, because it is based on a dynamic task allocation. There might be a change depending on the situation — let's say high workload, whatever — where a task is suddenly moved from one actor to the other, so there is a need for communication, and we are talking here about shared situation awareness. In our concept paper we have rephrased situation awareness for the machine as situation representation, while the human stays with situation awareness; the two together we still call shared situation awareness. So this is a co-constructive approach where both parties work on a shared goal, and partial authority is given to the machine; however, the human is still actively monitoring, and as we said, the full responsibility in 2B collaboration also lies with the human. This was a snapshot of some of the elements — not all of them — of the human factors chapter in our concept paper, and I hand over to you, François.
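To illustrate the difference between the predefined task allocation of level 2A and the dynamic task allocation of level 2B contrasted here, a toy sketch — the task names, the workload trigger and the threshold are invented for illustration, not taken from the concept paper:

```python
# Level 2A: predefined, static task allocation -- fixed at design time,
# so no negotiation between human and AI-based system is needed.
STATIC_ALLOCATION = {"monitor_traffic": "ai", "decide_avoidance": "human"}

def dynamic_allocation(task: str, human_workload: float) -> str:
    """Level 2B: a shareable task may move between actors with the situation
    (hypothetical rule); this is why communication and shared situation
    awareness become paramount, and why the human keeps monitoring."""
    if task == "decide_avoidance" and human_workload > 0.8:
        return "ai"  # the human is informed and can still override
    return STATIC_ALLOCATION.get(task, "human")
```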
Yes, indeed — this presentation does not aim at presenting the full concept paper; we want to focus on things that have changed significantly between issue one and issue two. I will now continue with the other aspects, starting with the scope of the technologies that will be covered progressively by the concept paper. So far we have put the focus on data-driven approaches — more precisely, we have dealt with machine learning, and even more precisely with supervised learning. Issue two of the concept paper starts to address some elements related to unsupervised learning, but we will probably need to exercise all these objectives with concrete use cases, and for sure reinforcement learning is not yet addressed; all these aspects will come with issue three of the concept paper. Issue three, however, will also have to address some other techniques, like knowledge-based representation and logic-based approaches; we will also have to think of statistical approaches like Bayesian estimations and so on. Very important, too, will be this notion of hybrid AI, where you could have some items or constituents dealing with machine learning complemented by some reasoning aspects.

Very important — and this was already mentioned by Guillaume earlier during his presentation — the scope is today limited, and we want to stay on a safe subset from a criticality perspective: the concept paper today addresses applications with failure conditions up to and including major. We will not allow online learning, so every application will have to be considered with the machine learning model being frozen before being introduced into operations. And, as already mentioned, there will be an extension towards reinforcement learning and other techniques.

I would like to draw your attention to the monitoring aspects and the data recording aspects, which have been significantly reassessed in the final issue two. If we start from the left, you have depicted here the AI/ML constituent with the machine learning model and some output-level explanations, and under this AI/ML constituent you have some pre- and post-processing. The pre-processing will have to deal with some monitoring — monitoring of the ODD, monitoring for possible out-of-distribution inputs — and the post-processing will be in charge of monitoring the performance of the system, for example the level of confidence of the predictions being made by the model. All these monitoring aspects will address either the end user or the user. For that, some data recording will be necessary, and as mentioned by Guillaume this morning, this data recording will go beyond the traditional recording already in place according to the existing regulations. The different usages of this recording will be, first, accident and incident investigation; also the monitoring of the system usage during operations; and also the continuous safety assessment — in the concept paper we identify the need for this continuous safety assessment in order to reassess, during operations, all the assumptions made during development.

Just to conclude on some aspects of the assessments and how these different assessments are an entry point to modulate the objectives of the concept paper: with the classification of the AI application, it will be about modulating the human factors aspects and possibly the explainability objectives. With — or during — the safety assessment, there will be an allocation of an assurance level, and this allocation will drive the applicability of the objectives, especially the AI assurance objectives — or learning assurance objectives, in the first issues of the concept paper. Just a note about the quantitative safety objectives: when necessary, they will drive the quantitative safety assessment for machine learning, and in this area too there will be an interaction, an interplay, between the safety assessment and some objectives of the learning assurance. The information security assessment will obviously modulate the objectives regarding information security.
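As a rough illustration of the pre- and post-processing monitoring described here — an ODD/out-of-distribution check on the inputs, a confidence check on the outputs, and recording for the continuous safety assessment — a minimal sketch follows. The feature bounds, threshold and function names are invented placeholders, not values or terminology from the concept paper:

```python
import numpy as np

# Hypothetical ODD bounds per input feature (placeholder values).
ODD_BOUNDS = {"airspeed_kt": (40.0, 350.0), "altitude_ft": (0.0, 41000.0)}
CONFIDENCE_FLOOR = 0.90  # placeholder operational threshold

def preprocessing_monitor(sample: dict) -> bool:
    """Pre-processing monitor: flag inputs falling outside the declared ODD."""
    return all(lo <= sample[key] <= hi for key, (lo, hi) in ODD_BOUNDS.items())

def postprocessing_monitor(probs: np.ndarray) -> bool:
    """Post-processing monitor: flag predictions with too little confidence."""
    return float(np.max(probs)) >= CONFIDENCE_FLOOR

def run_inference(model, sample: dict) -> tuple[int | None, str]:
    """Wrap a frozen model with both monitors; every path is recorded."""
    if not preprocessing_monitor(sample):
        return None, "OUT_OF_ODD"       # alert the user, record the event
    probs = model(np.array(list(sample.values())))
    if not postprocessing_monitor(probs):
        return None, "LOW_CONFIDENCE"   # alert the user, record the event
    return int(np.argmax(probs)), "OK"  # nominal: still recorded for the CSA
```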
I would like to conclude with some other aspects that we will need to address. The first is the ethics-based assessment: currently in the concept paper we don't really have this notion of modulating our objectives — including the assurance-level-driven objectives — based on such an assessment, and this is probably an area that will have to be worked on. We also consider that for some applications there could be safety benefits from the introduction of artificial intelligence, and this is something we will have to work on in order to reflect those benefits in the applicability of the objectives. And, last but not least, the level three guidance and the safety risk mitigation building block will have to be worked on as we continue progressing in terms of increasing the level of automation — when we move towards advanced automation, we will have to think of that. Thank you very much.

Thank you very much. A few weeks ago we had a conference in Washington together with the FAA, and as an outcome you might have seen that we put out a joint statement with the FAA on how we want to work together on various topics, right across the board in aviation safety. In this context I am very pleased to welcome today, from the FAA, the chief scientist and technology advisor, Trung.

Thank you. Good morning, my name is Trung Pham, I am the chief scientist in AI and machine learning from the US FAA, and I am very happy to be here to share with you the FAA roadmap on AI. A special thanks to EASA for providing the opportunity. Since I have only 15 minutes, I am going to be very quick and concise.

In 2022 we worked with the Research, Engineering and Development Advisory Committee, and they recommended the FAA begin working on AI in aviation — that was the main reason I am here; I was hired after that recommendation, transferring over from the US Air Force. So I have some experience working on the development side, and moving to the FAA I began to work on the regulation side. We proposed the work plan for the AI roadmap early in 2023, got approval to move forward, and began to work on the roadmap in May 2023. We had different meetings internally within the FAA and externally with industry — two external meetings with industry, one in October 2023 and the most recent one in March 2024 — and we finalized the roadmap document and expect to release it in July. We are in July now; I prepared this in June.

When we worked on the roadmap, the only reference we had was the roadmap from EASA, and the only contact I had was Guillaume, so I would like to recognize his conversations with me during the preparation of the roadmap. We looked at what we could do and where we are heading. In the roadmap we have three main components. One is to clarify the focus on aviation software on board an aircraft — it is not the only thing we will focus on, it is just the starting point, and I will explain why later with the guiding principles. The second component is the listing of guiding principles: the things that will help us move from where we are to where we want to go, things that should be independent of who we are or whatever technology evolution we will see in the future. And the final component is that, similarly to EASA, we have some research to support the aviation activity. I would like to connect those three together during this conversation.
The first thing we did was to make sure that we focus on safety, and this is a very important point, because at the FAA we are authorized to regulate safety and nothing else. Later on I will tie this roadmap to existing US policy — namely the executive order on AI — and show how it fits into the overall picture. When we started out, I personally saw AI in a different context: I saw my colleagues talking about AI like it's a human being. Coming from an engineering perspective, I wanted to make sure that we see AI as an engineering component, especially when we mention that we want to focus on aviation software on board the aircraft — we want to see it as a piece of software in the aircraft. We don't want to misinterpret AI as some intelligent being that might confuse the responsibility of the pilot, and we want to make sure that, if it is considered an engineering component, the developer has clarity on their responsibility for the software they develop.

Similar to the EASA discussion this morning, we want to differentiate what type of AI we are working with now and what type we are dealing with in the future. We define "learned AI" as the AI that we train to do a function or job, but after training we lock it down to be a static piece of software which is 100% deterministic. That allows us to define clearly the intended behaviour, as well as to understand the unintended behaviour, which we can channel up to the system level so that we can assess system safety. As for "learning AI" — together with industry, with EASA and all the authorities, we decided that is for the future, not for now. We will mention some of the research dealing with that for the future, but for policy, for now, we decided to deal only with learned AI.

From the engineering perspective, we like to use as much existing regulation as possible: there is no point reinventing the same things we had before with a good safety record. That is consistent with what G-34 from SAE did four years ago, when they sat down and tried to identify the gaps, so that they would focus on what is missing from the existing regulations and guidance — and that is exactly what we are repeating here. One thing where we like to be humble is admitting that we don't know everything: with the new, evolving technology we don't fully understand it yet, so we are going to take an incremental approach, working with specific projects so that we can gain a better understanding of the technology before we generalize that into understanding and policy later. At this point we are dealing with many specific projects — that is the topic of one of the meetings we are organizing. The next meeting is on July 24th and 25th in McLean, Virginia, at the MITRE facility, and we would like to extend the invitation to everybody to participate.

We are going to go with learned AI for now, and once we have a better understanding, we will move into the area of learning AI. Right now we are working, on the regulation side, on adaptive control, to see how an evolving algorithm could be accepted and certified, and from that experience we would like to see what can be applied to learning AI. Also, similar to EASA, we begin with applications at a very low criticality level on the safety continuum — something that makes it easier for us to study without impacting people's lives.
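Picking up the point that a trained-then-locked-down model is pure computation: a toy sketch of what "learned AI" means in code terms. The weights here are made-up numbers; once they are frozen, the same input always produces the same output:

```python
import numpy as np

# Frozen parameters: trained once, then locked down (made-up values).
W = np.array([[0.4, -1.2], [0.7, 0.3]])
b = np.array([0.1, -0.5])

def learned_ai(x: np.ndarray) -> np.ndarray:
    """Inference on a locked-down model is pure arithmetic: constant
    weights, a fixed activation function, no further learning."""
    return np.tanh(W @ x + b)

x = np.array([1.0, 2.0])
assert np.array_equal(learned_ai(x), learned_ai(x))  # same input -> same output
```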
So that's where we begin — another incremental approach, but on the safety continuum. We will fully investigate things even though they might not seem relevant at low criticality, because we hope that what we understand from that experience will allow us to move to the next level of criticality. And we work with industry: we saw many familiar faces from different companies who work with the US FAA and who expended the resources to help us understand AI, ML and analytic capabilities.

Similar to the EU AI Act, we have the US executive order on AI, which treats the use of AI across the different US government agencies. The US government assigned the National Institute of Standards and Technology to manage this effort, and from there the focus on safety in aviation comes back as a responsibility of the FAA. So we are given a much simpler task — less difficult, but not easy — and at this point you can see that we are focusing on safety in aviation. Our philosophy on safety is that if we can provide a safety record, we will earn the public trust — that's our philosophy.

In the background we are working on research, and research is an important instrument helping us move forward. When we work on research, we start with the current projects we have with industry, and what we found is that, working with those projects, people often ask us how they are going to do the things we ask them to comply with. The more we explained to our industry partners, the clearer it became to me that we have to use computers to analyze AI software, because it is so complex that humans cannot visualize it — we are trained to visualize things in the three-dimensional world we live in, and anything beyond three dimensions is very difficult to visualize. The more I explained in terms of computer algorithms, the more confusion I created — but it gave me an idea, which I tried to summarize as: we need to think like AI to regulate AI. And that gave us the idea: can we use AI to improve safety? That is the current hot topic we are pursuing in research.

We don't have unlimited resources — every year, as you know, we have to go through the congressional budget to get money to operate, and some money is allocated to research — so in order to maximize the utility of that money we partner with the other US government agencies, so that we don't repeat things another agency may already have done. That's our strategy, and we expect to have some results in a year or two at the lower levels of criticality before we move forward with higher levels.

So in 2024 we expect to have the first version of the roadmap. But we are not alone: we like to recognize that EASA has been pioneering this, with a roadmap about four years ahead of us, and we learned a lot from conversations with EASA — from the roadmap they had four years ago and the new version two that just came out. We recognize that the roadmap is not a fixed document; it will be an evolving document, and as soon as we have a better understanding we will update it. In the background we work with G-34 from SAE, hoping there will be a standard for us to use to move forward.

In conclusion, the roadmap is something we will be using at the FAA to guide our actions on research, policy development, standards priorities and workforce preparation. We recognize that feedback is important, and this is where we would like to work with everybody and hear from everybody, so that we can move forward together.
We will be evolving our strategy, the roadmap and the work plan. We are learning from the initial projects, but we will also be learning from the research — some of it we are funding, some of it is joint research with industry — and we are working with all the regulatory authorities to have a clear understanding, so that we can work together and have mutual acceptance of design approvals. At this point, thank you very much for listening to our FAA roadmap.

Thank you very much. And just a reminder that you have an opportunity to ask questions to all of the speakers in this session when we get to the end of this particular run. Our next speaker comes from Airbus. We've had quite a lot of discussion about standards, and he is chairing the EUROCAE and SAE working groups on AI standards.

Thank you very much, and thank you for welcoming us — thank you, Guillaume. Speaking about working groups WG-114 and G-34 means talking about five intensive years of activities in ten minutes — it will be a challenge. So, about the joint working group WG-114/G-34: both groups were born in 2019, and they became a joint group in 2020. The co-chairs are listed here — we have in the room with us Fateh Kaakai from Thales, Marc Robos from Sky Fred, and of course Gary Brown; everybody knows Gary.

Our objective is to define the first AI/ML standards to support the development and the certification or approval of aeronautical products based on AI technology. The scope is twofold — airborne and ATM domains, for manned and unmanned aircraft — and the scope of our first issue will be reduced to offline machine learning in supervised mode only. Our first release will be next year. It is a very large and worldwide group — you've got here a pretty comprehensive list of all the members.

So what have we been doing for five years? First we structured the working group, of course, and then we spent two years identifying the concerns, aligning the whole industry on those concerns and learning more about the use cases presented by industry — which produced our first document, called the Statement of Concerns, in 2021. We started capturing the regulatory requirements — from the EASA concept paper, of course — and we defined the scope of our standard. Then we defined a strategy for integrating this new standard into the aeronautical ecosystem — there is a specific slide on that just after. Then we worked on the specifics of AI, the core of the document: what are the high-level correctness properties for a machine learning model? And, like any other standard, we defined engineering processes with objectives and activities, and at the end we modulated these objectives according to safety criticality. In front of us today there are reviews, ballots and an open consultation, which will lead us to the first issue of the standard next year.

What could a future certification framework look like from an airborne standpoint? I think you know the first two levels of regulatory consideration, so I will talk about the industry response as means of compliance. There will be three levels of engineering. The first level is the system level: you are developing a classical system, and at some point in your architecture you identify an ML-based function. This ML-based function will be developed using the new standard, ED-324/ARP6983.
ED-324/ARP6983 defines a specific concept called the ML constituent, which supports the development of the ML-based function. At the end of the day, this ML constituent is actually an item container: ultimately it comes down to the development of already-known items — software or hardware items — using DO-178 or DO-254. So in the middle, between system and item, we have introduced the new concept of the ML constituent, which is a kind of container of items.

If we dive a little bit into the processes as described today in ARP6983, it's a diagram of processes and artifacts: in grey you have the existing guidance, and in white the new guidance. You can recognize the three levels of engineering — system, ML constituent and item. It's a W-shape: the first V is for the design, the second V is for the implementation. On the first V you have first the MLC (machine learning constituent) requirements capture — it starts with a level of requirements — and then you go to data management: you produce your data sets in order to design your model and then train it using those data sets. At the end of the first V you have validation and verification activities, and then something we call the ML model data description — plus, very importantly, the data pre- and post-processing: it is not only the ML model but also the pre- and post-processing that make a complete function. At this stage there is no training any more.

Then we go into the implementation phase. It starts with the MLC design — the MLC physical architecture design, meaning how many items we need in order to implement this ML model. It can be only one item, very simple, or several — a combination of hardware and software items; we have the flexibility to do what we want. Once all these items are defined and specified, we can go to the development of these items using the classical DO-178 or DO-254, depending on whether it is a software or hardware item. When these items are developed, we integrate them — going back to the ML constituent process, MLC integration and verification — to have an integrated machine learning constituent that is then delivered to the system.

My last slide — the main takeaways we can offer today. First, our community draws its expertise from international experts of the aeronautical industry, across a lot of disciplines: machine learning of course, but also safety, system engineering, software and hardware engineering, and certification. We created the conditions for cross-fertilization across all these domains. It has been, since the beginning, a very fruitful cooperation and cross-fertilization with EASA, which has been involved from the start; we are pretty aligned today with the concept paper for level one, with some remaining consistency issues that are in the process of being resolved. The committee is also working with the FAA — we really want to be involved together in the development of the standard, to work closely with the FAA and to align the future standard with the FAA roadmap. This new standard will really be the cornerstone of the integration of AI: it has been built by the industry, for the industry, with a permanent, constructive and fruitful dialogue with the representatives of the authorities. And of course we are very interested in sharing our experience with other fields such as automotive, defence, railway and space. Thank you.
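To make the constituent-as-container idea from this talk concrete, a minimal sketch of the structure: an ML constituent sitting between system and item level and decomposing into classical items. The class and field names, and the example function, are my own illustration, not terminology mandated by ED-324/ARP6983:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """A classical item, developed under existing guidance
    (DO-178 for software, DO-254 for hardware)."""
    name: str
    kind: str  # "software" or "hardware"

@dataclass
class MLConstituent:
    """Container of items implementing one ML-based function,
    sitting between the system level and the item level."""
    function: str
    items: list[Item] = field(default_factory=list)

# One possible physical architecture: the frozen model plus its
# pre- and post-processing mapped onto three items.
mlc = MLConstituent(
    function="example vision-based detection",
    items=[
        Item("data preprocessing", "software"),
        Item("inference engine", "hardware"),
        Item("data postprocessing", "software"),
    ],
)
```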
Thank you very much. And now, for the last intervention of this session, I'd like to welcome back to the stage our colleague Giovanni Cima.

Thank you very much. Good morning, everyone. My name is Giovanni Cima, and on top of my primary job at EASA, which is senior expert for air operations, I am part of the AI program team, where I do the overall coordination of the use cases — and, having just started, I will manage the rulemaking task which I am presenting now.

This is the rulemaking concept that was explained earlier this morning by Guillaume, so I will not repeat it, but you can see the main elements here: we have the mandate from the EU AI Act to regulate AI in aviation through the Basic Regulation, then cascading down to the various aviation domains, taking into account the technical standards. The plan we have in mind to do that is what you see on the right part of the slide. It is a three-year plan, made of several steps according to the rulemaking process we have in place at EASA, the first of which is the publication of the ToR, which we did just a couple of weeks ago — so far we are on track. Then we are planning to have a first deliverable next year, with an NPA for the anticipated Part-AI and the generic AMCs, and later during the year a second NPA making the necessary links to the aviation domains. This will be followed, further on in 2026, by an opinion, which is the last step of the EASA rulemaking process. But that is not the end of the story, because it will be followed by the adoption of the rules by the legislator — the Commission together with the Member States — because we are only a technical agency, we are not a legislator: we may propose amendments to the rules, or new rules, but then it is up to the political level to adopt them. Once this is done, we will complement the rules with the necessary AMCs — acceptable means of compliance — and guidance material.

Going a bit more into the details: this is the link you can find on our website to the terms of reference of rulemaking task RMT.0742 — a number that will become familiar to most of you over the next three years. The terms of reference, like I said, is the first step of the rulemaking process, and it is a very short and concise document — a one-pager in this case — which defines the objectives of the rulemaking task, the affected regulations, and the working method we will use to achieve these objectives. The objectives in this case are three. The first is to ensure artificial intelligence trustworthiness for its safe use in aviation, and this is directly in response to the EU AI Act. The other two objectives are to enable the deployment of AI in the various domains of aviation, starting with those first identified by Article 108 of the AI Act and then cascading down to the other aviation domains that will most likely be affected as well.

The activities we will carry out will be centred around the concept paper, which will be the basis for what we will propose and the way we will achieve the objectives. More specifically, we foresee several subtasks: for objective one we will have one subtask to propose the so-called hard law — the implementing rules — and another one for the AMCs and guidance material; then we will have a subsequent subtask for objective two, covering the first batch of aviation domains affected by AI, and then another one for the remaining aviation domains.
You can see in the left part of the slide the regulations that we anticipate to be affected by AI — the impact is massive: it is basically the entire ecosystem of aviation. We have airworthiness, we have ATM, we have ops — the entire ecosystem. The working method of the rulemaking task is to proceed with the help of experts: we will have a rulemaking group, and like in every rulemaking task we intend to conduct an impact assessment. It is indicated as "light" here — not in the sense that we underestimate it, but in the sense that the concept of developing a Part-AI is already set at legislative level by the Act, so we don't need to question that; what we have to question is further down — the impact on the various domains — and this will be the main part of the impact assessment. The NPAs that we publish will of course be subject to public consultation, and we will assess the comments received and take them into account for the following steps of the process.

I would now like to explain in a bit more detail what the content of the rulemaking task will be. What you see here is what we have today: the existing regulatory framework for aviation in Europe, made up of the Basic Regulation and then, cascading down, the various implementing rules for each aviation domain; and we have the concept paper, which was recently amended — this year, in March, we published edition two, which now goes up to level two. The question is: where do we go from here? We have, therefore, rulemaking task RMT.0742, which will create the Part-AI, and in doing this we will take all the important elements of the concept paper to build this part, where we anticipate three subparts: the technical requirements; the authority requirements for the competent authorities that will have to oversee the use of AI in aviation; and the organization requirements for all the organizations intending to deploy artificial intelligence in their respective domains. This will be supported by a set of generic AI acceptable means of compliance and guidance material, descending directly from the concept paper and of course further adapted during the rulemaking process.

After that, we will transfer the elements that are relevant to each aviation domain, to create the necessary hooks that will allow the deployment of AI in aviation. This is a process that has somehow already started, because most of you will be aware that we have a special condition for the certification of level one AI applications, which we are starting to use — it was shown earlier this morning that there are already applications in progress, and we target having the first level one certification next year. This work will continue, and all of this is embedded in the general context of the AI Act, which will also affect other domains. What I would like to make clear now is that we don't want to over-regulate: we would like to avoid double certification, and we would like to keep Part-AI as light as possible, so that it can be a real enabler for the deployment of AI in aviation.

One last slide, to conclude, on the work we are doing right now after the publication of the terms of reference — because attached to it we proposed an initial list of experts for the rulemaking group.
The way we identified those experts is along two axes: we intended to cover all the aviation domains impacted — you have a selection here — and we also wanted to cover the impacted disciplines in AI. We submitted this initial list of key experts that we knew to the EASA advisory bodies — the MAB, which is the Member States' advisory body, and the SAB, the stakeholder advisory body, which mainly represents industry — and we are now collecting feedback, to get proposals and comments and possibly improve and complete the composition of the final rulemaking group, which will allow us to kick off the activity in September. This concludes the presentation; we will take questions later.

Thank you very much, Giovanni, and thank you also to all the speakers for this second part of the morning. We now enter the Q&A session. Let me see — we have until 12:30, actually, so plenty of time; we are a bit ahead of schedule, which is good, as we will have the capacity to treat more questions — and that's great, because I saw a number piling up. Slido also has the capability, as some of you have seen, to upvote questions: if you have the same question, instead of writing it a second time you can upvote it. I see it is already mostly used, but I just wanted to highlight it to those who would not have seen it. Taking the questions in popularity order, the first one will probably be for me: "I may have missed it in the introduction, but are we confusing automation with AI? Or was Peter talking about two different levels from EASA, automation and AI?" It could also be for Peter, but I was talking on the same topic.

As I mentioned in the presentation I gave before this block: we really treat AI and automation together in the AI roadmap — that's one element. Now, we do separate them: AI is an enabler; automation is really the type of operation that you can get out of it. Automation doesn't necessarily mean AI, but advanced automation is driven by AI. This is what I had on the last slide, a bit on the side, to really clarify. And one thing I didn't say on the slide: autonomy is a big word. Autonomy is used in many schemes — it is used also in the AI Act — and it is another term we need to clarify in some way. For us, autonomy — we use the ISO definition, in fact — means systems that may set their own rules, let's say goals, or usage domain, and that is something extremely advanced compared to what we would accept or could certify today. Therefore autonomy for us is kind of a level four: let's think about it much later, whenever possible, practicable or necessary. From now to 3B, we are in advanced automation. So yes, we have a roadmap of automation within the AI roadmap — that's again the nature of things — and the very important clarification will happen in what Giovanni was showing: in the rulemaking we will need to architect all of that so that it fits the different regulatory frameworks. At the end of the day we will have the central generic AI building block in the middle, with AMC and GM that will be referenceable from the different frameworks, and each framework will of course live its life with all the automation roadmaps that are underway in the different regulatory frameworks. So, not to mix things — but yes, the AI roadmap really addresses both.
The concept paper does go over the fence of AI, but again in a consistent manner — it's not a clarification we want to bring in the concept paper, but more in the rulemaking. A very important question, I think. Thanks for that.

Just trying to see the next one — by the way, I'm sorry, I was about to read it out loud, but you can see the question on screen, that's even better. So the next question is for you, Trung: "Thank you for the flash talk — does the FAA roadmap share the EASA roadmap AI levels?" I'll let you take the mic.

One thing that I did not mention when I shared the roadmap is that we do have another roadmap, for automation, that is in the background, developed by Dr. Kathy Abbott, our chief scientist in human factors. We separated AI from automation because we recognize that automation can exist without AI. When we look at the EASA AI levels, we relate them more to automation, so I would refer that to Dr. Abbott, who will be talking more about the automation roadmap later on, when it is more mature. For now we work mainly on AI, seeing AI as an engineering component — especially with the locked-down learned AI, which allows us to manage it the way we manage existing engineering components, maybe with additional considerations that existing guidance and regulation cannot handle. But as far as the EASA AI levels are concerned, I think they correspond more to the automation side that we will talk about later. Thank you very much, Trung, for the clarification.

I would go to the next question: "How is human oversight ensured in a level 2B application? The authority of the end user is partially limited; some systems operate autonomously." I think it's a question for you, Renée.

Yes, thank you. I wouldn't say the end user's authority is partially limited — some authority is given to the machine; the actually limited end-user authority comes at level 3A. But how do we ensure oversight? Yes, indeed, as we said: it is still the human who, in the end, takes the decisions, and the human who has the full responsibility, at both levels 2A and 2B. I don't know, Guillaume, maybe you want to complement?

No, I think you said it. The point maybe behind the question, which I get, is that it's a matter of operational capability to ensure that. We are kind of setting limitations: in saying that the full responsibility is on the end user, we possibly limit the capability to defer more operational capabilities to the system. It's true, but it is the way we could at least put a limitation that enables the capability to certify 2A and 2B in good conditions today. Later, when we progress on 3A together, we will probably find out other things that might revert back, sorry, to level 2B — so it's probably not fixed forever. But the authority released to the system is really there to enable different types of operation; the responsibility is overseen, and the system overridable, from the start — we always said the human keeps this capability. Again, it's limiting, but it's fine to at least process the first applications: we are more at the level of seeing the first 2A applications coming in, not yet 2B, so the timeline and the complexity of the type of systems will also give us a bit of time to think further. But that is what we have said: we release partial authority, but we don't release the responsibility at this stage. Okay, thanks for that.

The next one would be: "How does RMT.0742 link with WG-114? It looks like WG-114 decreased their ambition to provide AMCs."
I don't know, Giovanni, if you want to take it, or if you want me to answer — maybe I can start and then you complement. Okay, so: I think there is not a direct link between the working group and rulemaking task RMT.0742; there is more a link between the working group and the concept paper, because this is where the technical standards are addressed. The scope of the rulemaking task is a bit wider, because it also takes into account certain rules for competent authorities and for organizations, plus it will establish the hooks with the various aviation domains where AI is intended to be deployed.

Yes, thanks, Giovanni. So, in the big picture, I agree: RMT.0742 is much wider than the standard we are dealing with in WG-114 for now. But the hook, or the link, will be created by an AMC — let's call it "learning assurance" — which will recognize the standard, hopefully, if the standard is released on time, and that's what we are all striving for. So there is a strong connection between the concept paper and the working group today, to align as much as we can in preparation for this step of having an AMC recognizing the standard — and hopefully the lightest AMC possible. That is the strong link we can anticipate within RMT.0742. And on the second part of the question, I don't think there is any decreased ambition on the working group side, but I will let Christophe complement if you want.

No — my first mission is to release something next year; again, the scope is reduced to offline machine learning in supervised mode, and then we will deal with the other AI/ML techniques in the coming versions of the standard. Yes, and maybe to mention: we just started SG-8, subgroup 8, on human factors, to perform first the statement of concerns on that side, which will feed the other building blocks. Exactly — to deal with AI level two we created a specific subgroup in the working group, in order to address what we need in terms of human factors for AI level two. Thank you, Christophe. And maybe, reading the question again: this notion of an "ambition to provide AMCs" — the AMCs are provided by EASA, by the nature of things, so the working group was never working on an AMC but on an industry standard that can be recognized in an AMC. So I think we are fully aligned, and the ambition, for me, is as high as it can be.

The next question is for you, Trung: "You mentioned the way to build trust is to focus on the safety record. Your thoughts on other ways of trust-building, for instance transparency, explainability?"

I think this is a very good question that deserves a lot of time for discussion. When I mentioned trust, I meant to say that we don't regulate trust: we like to earn trust with what we can regulate, which is the safety of the aircraft — that's the clarification. But as far as the question about transparency and explainability goes: this is an issue with AI. When developers present us with their application, the first thing we ask is: what does it do, what are the functional requirements — something we can go on to verify and validate within the framework of the guidance we already have — and that is where the lack of explainability becomes very apparent. What we are working on, for explainability, is how to build the understanding of the AI — what can it do.
That's why I mentioned we are focusing on learned AI, which is consistent with the industry consensus and consistent with EASA: if we lock down the system, it becomes a piece of static software that does the same thing every time you provide the same input. That is something we can understand, something that helps us establish the functionality of the system and the intended behaviour as defined in ARP4754 and DO-178, and we can also establish what we call unintended behaviour — something we can channel up to the system level so that we can analyze the risk. That will be a topic of our conversation at the next meeting. But as far as trust is concerned, it is something we have to earn, not something we regulate. Thank you very much, Trung.

The next one, as voted, would be: "The AI Act does not apply if products are covered by sectorial Union harmonization legislation. Which use cases are not subject to the AI Act in aviation?" This is a very relevant question, but it could go very wide. The simple answer is that we will need to clarify that in the Part-AI. The Act gives us a mandate per Article 108; yes, it concerns high-risk systems; and the level after that is: what is a high-risk system, what do we do with minimal-risk or transparency-risk systems, and so on — it could go on for a long time. It is part of the rulemaking task; we will have to manage that. We will propose a text in the middle of next year with the first NPA, and from that you will see the clarification of how to handle AI at the different levels of risk from an AI Act perspective, translated into aviation wording and understanding. So I would say, bear with us on this one — but it is a very, very relevant question.

The next one: "How can the full responsibility remain with the end user in 2B applications? Some information and parts of the decision process are not influenceable." Renée, I don't know if you want to say a word about it?

Well, it's a bit similar to the previous one, isn't it? Again, it is defined — for the entire level two, so 2B as well — that the end user has the full responsibility. "Information is provided, but parts of the decision process are not influenceable" — I'm not sure I understand; again, it is designed in a way that the human is still able to override and has the responsibility for the final decision. It's not like 3A, where the human is notified by the system to do something; here it's still all under human responsibility.

It's also a bit unknown ground, as we said: we don't have a compelling level 2B use case on the table today. We have a proxy use case, developed as a virtual use case in the concept paper — a kind of anticipation from our human factors colleagues of what could come — but the contours need to be defined. So we are definitely conservative on this side too. Saying the full responsibility is with the end user means we need to ensure the operation is compatible and commensurate with that — and that is, again, a limitation in itself. Investigating 3A will give us more tools to go back and see how to shape this partial release of authority in terms of responsibility, if necessary; but maybe we can stay where we are, with responsibility fully on the end user, and then the operation needs to be tailored for that.
This is really, let's say, unknown ground that we are investigating — that's another part of the exploration, to be clear. Thanks, Renée. Perfect.

So, "What is the difference..." — oh no, another one was voted up in between; let me stick to this one: "Wouldn't it make sense to separate automation from AI on the one hand, and consider automation from the functional level on the other?" Yes, I would say it's another way of approaching it. As Trung just mentioned, they have an AI roadmap and they have an automation roadmap — if you take the two together, we have an AI roadmap which contains the AI assurance block in some way, and then we think about automation enabled by AI at large, in order to cover human factors, ethics, etc. So I think it's a way of presenting things. The difficulty we have is — and that's the beauty of levels one, two, three — that we cannot easily disconnect one from the other. If we start treating only the technical bit on one side and the automation on the other, we will probably have a hard time in RMT.0742 reconciling the two views. With a generic approach — the levels one, two, three and all the guidance that goes consistently with them — there are connections between human factors, AI assurance, ethics; everything is cross-connected in the concept paper. That's the reason we are managing it in one piece: if we tried to separate it in the concept paper, it would be massive work for probably nothing, only to do it again in the rulemaking. And with the levels of AI we have a way of thinking that prevents someone from mistakenly taking a level three for a level one — and those are very, very different in terms of operation. We can safely, with no problem, go for a certification at level one — with all the technical difficulties, of course, on the machine learning side, on the explainability side, etc. — today we can go, because we have the end user, because we have the assistance element in the operation. If we went directly to 3B, someone could mistakenly take a level 3B for a level one. Take a detect-and-avoid type of assistance: it can be an assistant to a pilot, where the pilot has the responsibility to avoid or not, or it can be 3B automatic avoidance — but we are not there yet technologically. So let's manage expectations, again, on the side of AI and automation at once; that's what we try to do consistently in the concept paper, and then the rulemaking will assemble the puzzle in the right way, so that everyone finds their final view in the regulation in the lightest way possible — that's what Giovanni insisted on. Oh yes, please, Trung — it's a discussion, yes.

In the US, we recognize that there is no right way or wrong way to do it, but — at least from my perspective, as an engineer — I normally try what we call divide and conquer: when we have a complex task, we like to divide it into simpler components and begin with one component. It's not a clean division between AI and automation, but we want to focus on AI at this point so that we can build up the understanding for automation. That's our philosophy. But I would like to emphasize that there is no right or wrong way; our preference is just to do something simple first, and do a good job, before we extend — and that is consistent with our incremental approach of not trying to solve everything at the same time. So, no right way, no wrong way — it's just a preference. Yeah, thank you, Trung, for the precision.
Thank you, Trung, for the precision. The next one would be: could you clarify how learned AI can yield deterministic algorithms? I think this is also a question for you, Trung, if I remember well — "learned AI" is your terminology. If you consider a simple neural net, once you train it, all the weight constants that you captured through the training data remain static, so everything is just 100% computational. It means that when you have an input going into the neural net — after you train it and lock it down — you always get the unique output corresponding to the input you provide. That's what we mean by deterministic: we know exactly what the output is, given the input. That will allow us to establish what we call intended behaviour, as defined by DO-178 and ARP4754 — the thing we have to do in the framework of V&V of software. When we implement the neural net, it becomes software with input and output, and DO-178 mentions explicitly that the algorithm can be 100% computational, which is exactly the case when implementing the model that we lock down: we know exactly what the constants are — multiplications with constants, then passing through a function — so that is 100% deterministic. We intend to have examples at our next meeting to clarify this point, and that is the key point we like to focus on: it is 100% deterministic, so we shouldn't be worried; we should only focus on what we can do at this point. And instead of debating the philosophy of whether it is deterministic or not, we intend to have specific examples that everybody can visualize — that will be next in the discussion at our meeting on July 24th and 25th. Thank you, Trung. I think this topic is something we have been discussing for a long, long time. On our side, we try to avoid the notion of determinism, because it depends at which level you look at it. Indeed, freezing a model brings, from a statistical perspective, a certain determinism; but if we think about machine learning and learning assurance at large, it is not the only uncertainty element we have to consider. So yes, it is a first reassurance to have this, and we share the position to freeze models for now — "learned AI", as you call it — but beyond that there is a lot of consideration behind it.
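To make the frozen-model point concrete, here is a minimal sketch — invented weights, not an FAA example or any certification artifact — showing that a locked-down network is a pure function of its input: constants, multiplications and a fixed activation, so the same input always returns the identical output.

```python
# Minimal sketch: a trained, frozen neural network is a fixed composition
# of constant-weight operations, so evaluating the same input repeatedly
# yields bit-identical outputs. The weights below are invented stand-ins
# for the constants that training would have produced and locked down.
import numpy as np

rng = np.random.default_rng(seed=42)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)  # frozen layer 1
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)  # frozen layer 2

def frozen_net(x: np.ndarray) -> np.ndarray:
    """Pure function of the input: constants, multiplies, one ReLU."""
    h = np.maximum(0.0, W1 @ x + b1)
    return W2 @ h + b2

x = np.array([0.1, -0.5, 2.0])
assert np.array_equal(frozen_net(x), frozen_net(x))  # deterministic inference
```

As the EASA reply stresses, this determinism of the frozen inference function does not by itself address the statistical uncertainties introduced by training and data selection, which learning assurance still has to cover.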
Great, thank you for that — and I see several questions piling up on the same topic, so let's see, we have several to go. I'm trying to keep up here; the next upvoted one would be, again for you, Trung: what is the FAA's opinion on the NIST AI Risk Management Framework and Playbook — can they be used for AI in aviation? This is a difficult question for me because, as I mentioned, we focus mainly on the regulation of safety in aviation. We participate in the NIST conversation, but our participation focuses on bringing our understanding of what AI is — which we learn through regulating aviation — to the table, so that they can generalize it into the bigger framework. We are not trying to bring the bigger framework that spans different domains into aviation. That is the clear distinction of our role in the discussion with NIST: we don't want to force down into aviation everything that might not be relevant to aviation; we want to bring our understanding of AI into the discussion, to see if it can be generalized into the broad framework. Thanks for the clarification. The next one would be: suppose I have an application deploying AI — which guidance should I follow to get it certified? Should it be the guidance from EASA or the standard from EUROCAE? It's a stepwise approach. We start rulemaking in order to clarify exactly this question: in the future we will have an AMC — let's call it "learning assurance" — recognizing, hopefully, the standard in development in WG-114/G-34. That's where we are going. For now the standard is not published yet; it is in the process of going to ballot, meaning we cannot use the standard from any other perspective. So what do we do? We raise and leverage special conditions, in order to clarify the usage of a certain subset of the concept paper objectives, tailored to specific applications. We started on the general aviation side, because the first level one topics came from that side. We are probably moving in September to a consultation on an update of the special condition — as Maria mentioned in her introductory speech — just to recognize the need for other products: in particular larger aircraft, but not only; we have of course rotorcraft, and VTOLs, to take into account. So we will try to do one single update of the special condition for all products, if that works; if it doesn't, then we will start with larger aircraft and go further. The special condition, to answer the question concretely, is the only way today to leverage requirements in certification that complement the current certification specifications. So that's the SC — the special-condition path that everyone has to go through; we don't have the standard yet. And you don't certify a system first — you certify a product: an aircraft, an engine, etc. — and you never certify against a standard: the standard first needs to become an acceptable means of compliance through an AMC, and that's what we will do with the rulemaking. Long story short: the special condition is today's reference point for certification. A question about U-space — U-space is about managing a wide variety of drone operations: at what AI level can U-space operations be classified? That's a tough question. I would say it's case by case, exactly like any classification. As usual, and very important to say: you have to consider the system and the operation as two different things — the AI enabler and the big picture of the operation. So possibly we may classify, we may have in U-space, I don't know, a level one for assistance in some element; but because we don't have a human in the loop, it will probably jump very rapidly to level 3B, and that is probably the final call we will reach for many applications. That's why — precaution obliges — we cannot today put the requirements on the table. We are working with our U-space colleagues on use cases to pave the way to level 3B as well; it's part of the whole perspective we are opening today. We cannot certify a level 3B today — that would be presumptuous. Thank you for the talks and thank you for the comment. How do EASA and EUROCAE align their respective works? I think that was mentioned by you, Christophe — I don't know if you want to expand. I mean, we participate in the working group as much as we can; we have a very good level of alignment. As you mentioned, we still have remaining issues to solve, but I'll let you answer. Yes, from the EUROCAE standpoint, I think we have some remaining issues regarding the concept paper and regarding the FAA roadmap.
We have the will to collaborate with each other in order to align our standard — which is a mature draft today — with those documents; it's something ongoing. Thank you, Christophe. The next one is very interesting: the difference between authority and responsibility. As usual, definitions matter. In the concept paper we have crystallized two definitions, distinct from each other, for authority and responsibility. This is one set of definitions — it doesn't mean we are ruling forever on what the understanding of responsibility is. The train of thought we had in creating those two definitions: authority is the capability to make a decision — the system may or may not have this; the human has it. That is where we are on authority, and all that René presented relates to partial or full authority on the one hand, and partial authority release at level 2B on the other. Responsibility is another step, or layer, more connected to accountability and, in the background, liability. We want to disconnect the two, because — let's reason by the absurd — if we say we cannot release authority at level two, then we are saying full authority only, and then we limit even more the possible applications, because we stop at level 2A. Put it this way: if we want to enable extended minimum-crew operations, and what the ATM roadmaps are bringing, we need to think a bit further. Can we release the responsibility? Actually no, because otherwise we cannot certify today. Can we release the authority? Yes, because if we don't, we cannot sustain or provide the tools and the means to get to this type of operation. So again, it's a perspective; we are not enforcing a way of thinking. We are saying: if EASA wants to accompany this type of novel operation, we need to think a bit larger and release a bit of the authority to the system — yet again under the very strong assumption and limitation that the responsibility remains with the pilot if it's about airworthiness, with the air traffic controller if it's about traffic management, etc. It is a strong assumption; I don't claim it is forever, and I don't claim it is the only way, but this is how we could release issue two of the concept paper at level two without leaving a big portion open. It is open only in the sense that we have to keep working on it — another investigation ongoing. That's what we have today in the definitions; look at the concept paper, and if you have comments we can of course re-discuss this when preparing issue three. Is there already a definition for "end user" — is it required to be a human, or can it also be a legal entity? That's a very crucial question. First, a disclaimer: we take "end user" from a framework, the ALTAI — the Assessment List for Trustworthy AI — which we initially looked at most of all for the ethics-based assessment. This triggered a distinction between user and end user. In this document, the end user is the one interacting with the system in operation. It is a human — just to answer one part of the question — it cannot be a legal entity; it's a human, full stop. The user is the set of people interacting with the system at any time: development teams, designers, etc.; authorities, for certification; investigation bodies, when there is an incident or accident investigation; and so on — the large spectrum of people that can need something from the system.
And how did we tailor it in the concept paper? We split it between development explainability and operational explainability. Development explainability is for anyone wanting to understand how the system comes to a certain outcome; operational explainability is focused on the end user — the necessary information to provide, which is exactly what René presented in the slide. So, to give you an understanding of how we split the guidance: we really have this end user / user concept, which is extremely present, including for the ethics-based assessment. Doing that, we had a hard time when the EU AI Act final text came, because it talks about "user" and not about "end user" at all — so we are probably misaligned with the Act from a top-level perspective: they consider one "user" where we have two terms. For now we keep the two terms — we will see in the rulemaking whether that still holds; that is of course a discussion — but we keep them because they clarify so much the split of the guidance: some objectives really apply to one and not to the other, etc. So that's where we are: it is a human for sure, it cannot be a legal entity, and for the definition, please again look at the concept paper. Is there a mapping between software assurance levels and AI levels? Maybe you want to take this one. Yes — so, no, there is no mapping between the software assurance levels as we know them and the AI levels. This is the proportionality table you can find in chapter D of the concept paper. It elaborates on the proportionality that comes first from the criticality of the application — so the assurance level — and, in the second part of the table, on how the classification, the AI level, drives the applicability of the objectives, mainly from an explainability perspective and a human factors perspective. So there is no mapping between the assurance level and the AI level: you could imagine a level 1A application that is safety-critical and, on the contrary, a level two application that has only minor effects when looked at from a safety perspective. Thank you. Perfect, thanks for the clarification. I see that we are running over time and we still have a long list, so if we had two or three more minutes I would rush, but let's answer just one last one and then we can all discuss at the break — a very interesting and perspective-oriented one: with the speed of development of AI, do you think the proposed timelines for setting up the rules and guidance can keep pace with the advancement of AI? That's a crucial question. I would say we don't claim that we keep pace — that's why we have a "pushing barriers" phase three after 2028. In the meantime, we claim that we keep pace with the needs of our aviation stakeholders. That is what we try to do: each time we raise a new slide and topic — like generative AI usage in operational tooling, etc. — it is because we get feedback that there is a need, so we try to onboard everything we can. Clearly we are not an AI agency; we are a safety agency. Therefore we are not trying to keep pace with technology — we keep an eye on it, but we take the pace from what industry at large brings back to us, all our stakeholders and applicants. That is the ambition of the roadmap and of the AI programme today — and let's push barriers all together after 2028. Thank you very much; we will stop here. I leave you the floor, Janet.
Thank you very much — I think that was a very stimulating and interesting discussion; I'm sure you could have gone on right through lunch. But lunch is waiting for us, I hope, back in the same room as before — through the bistro and in the back corner. Thank you very much. Shall we? Okay, so we are ready to start again with the next session of our agenda. We have a panel now on use cases, and we will present you five use cases with the help of the presenters you see here at the table, who have kindly accepted to join us and present their own projects: we have Thales, Honeywell, Deep Blue, CAE and Boeing with us today. I will come back to the speakers in a moment; for now I just want to spend a few introductory words on why we have a panel on use cases. Use cases are particularly important for the AI roadmap because they are the way we engage with stakeholders in a collaborative manner. They are the way we test — and possibly challenge — our guidance; they are the way we learn, improve and progress. And this slide that you see here, which you have probably seen in other presentations before — I like it particularly because every time we show it, it is a little more populated than the time before, highlighting the growing interest from all sectors and all stakeholders in engaging with us to develop their use cases. We do this in different ways: directly, through innovation partnership contracts or memoranda of understanding, or even through actual applications; and through research projects, under SESAR for example, or under Horizon Europe. In doing this we close the feedback loop with the standardization bodies, because the use cases give us the feedback we need to improve the technical standards. With this, I would like to invite on stage the first presenter of the day, Frédéric Barbaresco from Thales. He is AI senior expert at Thales, in charge of advanced studies coordination and AI algorithms for the Thales global business unit Land and Air Systems, including Thales AMS, and today he is going to present Thales Land and Air Systems use cases. Frédéric, the floor is yours. Thank you. I will present three use cases from Thales; the first two are more mature, the third more exploratory. The first one is the reinforcement-learning-based CDR — CDR for conflict detection and resolution. What is CDR? It is the detection and resolution of medium-term conflicts. Classically, we use a 4D-flight-plan-based estimated trajectory, and we have to detect infringements of minimum separation: five nautical miles horizontally and 1,000 feet vertically. The classical resolution is based on classical AI — mainly search, like Monte Carlo tree search for instance — which is more explainable and configurable, but slow depending on the situation. That is why we are exploring reinforcement learning: it gives fast inference — several resolutions per second — and the potential to reach more optimal solutions with enough development. But the main goal is a hybrid approach: to develop a safe AI based on this RL approach combined with the classical one. I will show you a short video, which we presented at the EUROCONTROL AI Forum, illustrating this concept of hybrid AI based on reinforcement learning and classical algorithms. Here is the interface we have for conflict detection and resolution: the principle is to detect conflicts and propose resolutions to the controller.
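For readers less familiar with the minima quoted here, the sketch below — invented data structures, not Thales' interfaces — shows the basic medium-term check: two trajectories sampled at common timestamps are in conflict wherever the 5 NM horizontal and the 1,000 ft vertical minima are infringed at the same time.

```python
# Hedged sketch: pairwise separation check over sampled 4D trajectories.
# Shapes and names are illustrative only.
from dataclasses import dataclass

H_SEP_NM = 5.0      # horizontal separation minimum (nautical miles)
V_SEP_FT = 1000.0   # vertical separation minimum (feet)

@dataclass
class Sample:
    t: float    # time (s)
    x: float    # east position (NM)
    y: float    # north position (NM)
    alt: float  # altitude (ft)

def conflicts(traj_a, traj_b):
    """Return times where both minima are simultaneously infringed.

    Assumes both trajectories are sampled at the same timestamps,
    as a 4D-flight-plan-based predictor could provide."""
    out = []
    for a, b in zip(traj_a, traj_b):
        horiz = ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5
        vert = abs(a.alt - b.alt)
        if horiz < H_SEP_NM and vert < V_SEP_FT:
            out.append(a.t)
    return out
```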
The idea is also to mix the classical and the deep reinforcement learning, so as to keep the classical solution as a backup. For that we also had to reinvent the simulation, because we need intensive simulation to train the reinforcement learning algorithm on historical data; and we have to integrate safety and system constraints, so we had to develop the methodology for that. The idea is to hybridize the classical and the deep RL. The concept of reinforcement learning here is to train a network to produce clearances from system states, to simulate each clearance in an ATC simulator, to model the impact of the clearance as a cost function, and to learn to maximize that function using reinforcement learning. Here you see an illustration of the different criteria we used for the reward. We also had to work on the data: we built thousands of seconds of realistic training scenarios using historical data. But the main problem is that there are no conflicts in recorded data, because the controllers have solved them — so we have to create these conflicts, with data augmentation, to ensure the presence of losses of separation. The simulation is also very intensive: we had to use parallel computing and a fast simulation environment to get enough simulations to train the reinforcement learning algorithm. For the training we had two main challenges. We developed a reproducible training pipeline from historical data to trained model; we use a parallel simulation environment that continuously runs scenarios; we monitor the results with KPIs during training; we keep a complete log of all the clearances for metrics; and we have different tools for monitoring the scenarios. Here you see the evolution of two KPIs, related to the percentage of losses of separation and to the reward. We also worked on how to integrate this kind of AI module into a system, with system-integration constraints: we deploy the trained model and connect it to the existing CDR components, but we use the trained model as an additional solver — we keep the classical solver as a fallback system — and we validate the clearances using the system's probe function, displaying only validated solutions, with a continuous check of solution validity over time. So what is different in terms of validation? We can reuse some recommendations for machine learning, because reinforcement learning has the same kind of development steps — there is still a machine learning model inside, we can use historical data, we still need to objectively measure the model performance, and we still need to test the trained model. But some requirements fall outside of reinforcement learning: historical data is not always used, validation datasets are not always available, and requirements should be expressed as a measure over the dataset outcomes rather than a measure of precision at specific points. So there are specific points for the validation and qualification of RL — I will not give all the details, but a new type of validation is required: the simulation environment itself must be qualified first. We also have to validate the data — three kinds of data: initial data, transition data, and live simulation data. And for this task we have specific advantages: the system allows clearance probing, to validate human- or algorithm-generated clearances using certified components, and we can use the human in the loop — all clearances are displayed for human validation, and human validation is performed only on clearances that have also been validated using the probe function.
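To fix ideas on the train-simulate-score loop just described, here is a deliberately toy skeleton — hypothetical classes, with a naive random-search update standing in for the actual deep RL, nothing from Thales' codebase — that matches the shape of the description: propose a clearance from the system state, simulate its effect, score it with a reward that heavily penalises loss of separation, and keep policy updates that improve the score.

```python
# Illustrative-only skeleton of the described RL loop.
import random

class AtcSimulator:
    """Toy environment: the state is a separation margin to keep high."""
    def reset(self):
        self.margin = random.uniform(0.0, 10.0)  # NM to nearest conflict
        return self.margin
    def step(self, clearance_delta):
        self.margin += clearance_delta + random.gauss(0.0, 0.1)
        loss_of_separation = self.margin < 5.0
        reward = -100.0 if loss_of_separation else -abs(clearance_delta)
        return self.margin, reward

class Policy:
    """One-parameter 'policy' tuned by naive hill climbing."""
    def __init__(self):
        self.gain = 0.0
    def act(self, margin):
        return self.gain * (5.0 - margin)  # push the margin above the minimum

def train(episodes=200):
    env, pol = AtcSimulator(), Policy()
    best_gain, best_score = 0.0, float("-inf")
    for _ in range(episodes):
        pol.gain = best_gain + random.gauss(0.0, 0.2)  # candidate update
        score, margin = 0.0, env.reset()
        for _ in range(20):
            margin, r = env.step(pol.act(margin))
            score += r
        if score > best_score:                         # keep improvements
            best_gain, best_score = pol.gain, score
    return best_gain

print(f"selected gain: {train():.2f}")
```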
For the second use case, we worked on a deep-learning-based digital sequencer for the arrival manager, AMAN. You are familiar with arrival managers: the operational goal is to allocate flight delays across different collaborating control centres, and we are trying to integrate this new deep learning technology into our product, which is called TopSky Sequence. The main function, and our capability, is a prediction of ETA with a flexibility window: a first model estimates the time to fly to the ASMA entry point, a second the time to fly from there to the boundary, and a third model goes from the entry point position to the flight's allocated runway — with a flexibility window associated with each time-to-fly estimation. We developed a specific structure based on different tools for developing the model: TensorFlow, which is classical; an internal library developed by Thales to prepare the data; MLflow, an interface to monitor the evaluation metrics during training and test; and Airflow, a task scheduler to execute the preparation and the monitoring. We developed our own monitoring tools and solutions with Python scripts, plus a test library developed to easily test the implementation. Obviously we are also taking into account the guidelines from EASA and the standard drafts, and we follow MLOps best practices to handle this kind of model life cycle. We set up a working group on the application of the EASA guidance: we defined the operational design domain for our application — what to do in case of an event like a storm, and what its impact is on the model output; we specified the machine learning constituent requirements — more than 70 of them; and we test the data quality attributes and all the high-level properties the model must satisfy, like stability and robustness. We also benefit from the Confiance.ai programme — a French programme with French industry which has developed engineering tools for machine learning validation and qualification — and we selected parts of its methodology and tooling for this project's validation. Thales is also a contributor to the validation and qualification of AI through WG-114: we participated in the definition of the ED-324 standard draft, we are in discussions to improve this standard, and these discussions are also a source of inspiration for our French activities, through the DEEL programme in Toulouse and the Confiance.ai programme.
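As a flavour of the ETA-prediction idea — invented features and synthetic data, not Thales' models or the TopSky pipeline — the sketch below regresses time-to-fly from a few flight features and derives a flexibility window from the residual spread on held-out data.

```python
# Hedged sketch of ETA prediction with a flexibility window.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(50, 300, n),    # distance to the metering fix (NM)
    rng.uniform(380, 480, n),   # ground speed (kt)
    rng.uniform(-20, 20, n),    # head/tailwind component (kt)
])
# synthetic "truth": time = distance / effective speed, plus noise
eta_s = X[:, 0] / (X[:, 1] + X[:, 2]) * 3600 + rng.normal(0, 30, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, eta_s, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

resid = y_te - model.predict(X_te)
lo, hi = np.percentile(resid, [5, 95])   # empirical tolerance band
print(f"90% flexibility window: [{lo:+.1f} s, {hi:+.1f} s]")
```

In a real pipeline, metrics like this window would be logged per training run (the role the talk assigns to MLflow) and the preparation steps scheduled (the role assigned to Airflow).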
The last use case from Thales is what we call GIN-PINN — contrails and green operations. GIN stands for geometric-informed network and PINN for physics-informed neural network; we also call these analytical-model-informed neural networks. It mainly addresses the green-operations problem, and especially the analysis of contrails. We are working on several technologies: geometric-informed networks for the detection of contrails — using, for instance, fisheye cameras in the electro-optical and infrared bands — physics-informed neural networks to improve the models of contrails for their future prediction, and also thermodynamics-informed neural networks. We are involved in different projects: a French–German project on contrails in the climate system — from observation to impact modelling and prediction — with DLR and DWD on the German side and LATMOS among others on the French side, mainly on the observation and modelling of the phenomenon; the BeCoM project from Horizon Europe — BeCoM means Better Contrail Mitigation — where we also work on sensors for contrail detection and on contrail modelling; and the last one is CONCERTO. Thales is leading CONCERTO, a SESAR programme whose name stands for dynamic collaboration to generalize eco-friendly trajectories — this one is more on the tactical side: how to optimize the flows and trajectories for green operations. Thank you for your attention.
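Since physics-informed learning may be new to some readers, here is a conceptual toy — a one-parameter model and a scalar decay law standing in for the far richer contrail physics, nothing from the projects named above — showing the defining trick: the training loss combines a misfit on sparse observations with a penalty on violating a governing equation at collocation points.

```python
# Toy physics-informed fit: data term + physics-residual term.
import numpy as np

k = 0.5                                    # known physical constant
t_obs = np.array([0.0, 1.0, 2.0])          # sparse "observations"
u_obs = np.exp(-k * t_obs)
t_col = np.linspace(0.0, 4.0, 41)          # collocation points for physics

def model(t, a):                           # u(t) ~ exp(a*t), parameter a
    return np.exp(a * t)

def loss(a):
    data = np.mean((model(t_obs, a) - u_obs) ** 2)
    du_dt = a * model(t_col, a)            # analytic derivative of the model
    physics = np.mean((du_dt + k * model(t_col, a)) ** 2)  # du/dt = -k*u
    return data + physics

# crude grid scan instead of gradient descent, for brevity
grid = np.linspace(-1.0, 0.0, 201)
a_best = grid[np.argmin([loss(a) for a in grid])]
print(f"recovered rate {a_best:.2f} (true {-k})")
```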
Thank you very much, Frédéric. I probably didn't mention it before, but we will run all the presentations in sequence, and after that we will have a short panel discussion and then a Q&A session. The next speaker we have today is Pavel, from Honeywell. Pavel is principal scientist and cockpit architect for EU-based system technologies at Honeywell Aerospace. He started working for the company in 2016 as a human factors scientist in the advanced technology department; his research interests include human factors analysis, design of human-machine interfaces, human-computer interaction, and artificial intelligence. He has been in charge of the research and development of speech recognition, voice control and natural language processing technologies at Honeywell since 2016. Pavel, the floor is yours. Hello, good afternoon. I'll present a project that Honeywell leads — a SESAR project called DARWIN. It is a project on a digital assistant for pilots, and in this presentation I decided to show the programmatics at the end and start with what we are actually trying to achieve — the broader picture, the trends we are addressing. These are obvious, but they give us the floor from which we are designing this system. The first is autonomy: a lot was said on autonomy already, and it is a big driver for any cockpit operation — with the increasing amount of information, the complexity of that information, more traffic, and a changing generation of pilots, all of this calls for advanced automation in the cockpit. This goes hand in hand with digitalization: we need to transform all of the current aircraft systems into a form that the machine can process, so that it can provide the human pilot the right assistance at the right time. And last but not least — many thoughts were shared on this already — building trust in AI: this is one of the cornerstones of the project. With these three drivers we expect four major safety benefits from the project, related to the technologies we are addressing. The first is to detect and mitigate pilot incapacitation: when we have a single pilot in the cockpit and automation on board, we need a system able to detect pilot incapacitation and other pilot states. We need to manage pilot high workload in critical situations, when workload can spike. And we need to provide support with cross-checks: when you have your human co-pilot and your automation, there must be a way to synchronize this information. With these three benefits we are trying to create and build trust in this technology. How do we envision the system from the perspective of the human pilot? Currently we have a pilot in the cockpit and a set of automation assistants, or automation functions. In the current cockpit — referring to the lower levels of AI in the EASA roadmap — the pilot can trigger these functions, review the outputs, and execute. What we are building in the project is an extendable skill set — what we refer to as a digital co-pilot — basically a system able to interconnect all of these assistants and provide a unified interface, so that via the human-machine teaming interface the pilot can execute and work with the individual assistants. To make this pilot perspective real, we are designing three technology enablers in the project, based on artificial intelligence. The first is the pilot state and taskload monitor. This component monitors the physical pilot state — whether the pilot is awake, drowsy or about to fall asleep, or incapacitated — and another part of it predicts increased taskload along the flight-plan route: imagine you are entering a busy sector, with a complex part of the flight plan in front of you — based on this, and on information coming from the cockpit, we can estimate that at a certain point in time there will be a busy period, and the automation can get ready for it. This enabler is based on machine learning and corresponds, in EASA roadmap terms, to level one — assistance to the human.
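As a cartoon of what such a pilot-state monitor might learn — features, numbers and labels all invented here, not DARWIN's — a classifier maps physiological measurements to a state estimate:

```python
# Invented-feature sketch: classify "alert" vs "drowsy" pilot state.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
# alert: higher heart-rate variability, lower eyelid-closure ratio
alert = np.column_stack([rng.normal(60, 8, n), rng.normal(0.1, 0.03, n)])
drowsy = np.column_stack([rng.normal(45, 8, n), rng.normal(0.3, 0.05, n)])
X = np.vstack([alert, drowsy])
y = np.array([0] * n + [1] * n)  # 0 = alert, 1 = drowsy

clf = LogisticRegression().fit(X, y)
p_drowsy = clf.predict_proba([[48.0, 0.28]])[0, 1]
print(f"estimated probability of drowsiness: {p_drowsy:.2f}")
```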
The second technology enabler, the main one, is human-machine teaming, or human-AI teaming, which refers to level two of the EASA levels: we are designing a system capable of dynamically distributing tasks between the pilot and the automation based on the environmental conditions — the pilot state, the taskload, the state of the mission, weather, surrounding environment, etc. The pilot can work collaboratively with this system but still remains in the loop and in charge of it; we are still thinking through how the pilot would really interact with the system for a given task. And to make these two enablers work together we have another component, which we call the trustworthy machine reasoning platform — basically a software engine that glues all of these assistants and pieces of information together and provides the means for the pilot to interact with the system. It is a rule-based, transparent decision support, with which we are also designing for operational explainability: when the pilot gets a suggestion from some function, some automation, some assistant, the pilot should be able to see why the automation decided as it did and what the rationale was — there should be a short explanation for it. This is representative of symbolic AI, or knowledge-based AI, and it is something we hope to explore a little more within the project with EASA; for the whole concept it is a crucial component. Now, when we paint these three enablers the way we envision them for the cockpit environment — I see we have some font issues — we have a cockpit, a pilot in the cockpit, and the cockpit connected to the environment: the ATC, the surrounding traffic. We split the three enablers into three phases, mirroring how a human usually processes information. At the beginning you gather information — this is represented by the pilot state monitor enabler: in the project we are developing these light grey boxes, the pilot state monitor and task monitor, accompanied by the usual aircraft monitors for vehicle state, etc. With this understanding of the situation we have the main component — the decision-making, thinking, or decision-support part, whatever you would like to call it — the core that gathers the inputs and provides suggestions to the human pilot in terms of which functions can be executed autonomously; it can offer a solution, and if the pilot decides that, because of high workload, a task will be delegated to the automation, it can trigger it. At the end we have the third component, the acting or execution part, with a human-AI teaming interface and the interface to the assistants. Looking at this system as a whole, we close the loop back from the interface to the pilot and add the small black parts — the machine reasoning, the rule-based system: you can see that all the boxes are interconnected and can provide the pilot with the right information at the right time. So that's the technical solution; now for the programmatics. DARWIN is a three-year project started last year, so we are one year into execution, with four main partners — Honeywell leading the consortium, together with DLR and EUROCONTROL — and the associated partners Slovenia Control and EASA. We are designing a system that we should validate in a real environment, with a flying aircraft, in about two years — we are all looking forward to that, and hopefully it will happen on time. That is all from my side on the project; thank you.
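Returning to the machine-reasoning platform described above, a minimal sketch of rule-based, self-explaining decision support could look like the following — rules, thresholds and wording are invented for illustration, not DARWIN's logic:

```python
# Each rule carries its own human-readable rationale, so any suggestion
# surfaced to the pilot can say why it fired (operational explainability).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]
    suggestion: str
    rationale: str

RULES = [
    Rule("high_taskload_delegation",
         lambda s: s["taskload"] > 0.8 and s["pilot_state"] == "alert",
         "Offer to delegate radio readbacks to the assistant",
         "Predicted taskload exceeds 0.8 while the pilot remains alert."),
    Rule("incapacitation_alert",
         lambda s: s["pilot_state"] == "incapacitated",
         "Engage emergency automation and alert ATC",
         "Pilot-state monitor reports incapacitation."),
]

def evaluate(state: dict):
    return [(r.suggestion, r.rationale) for r in RULES if r.condition(state)]

for suggestion, why in evaluate({"taskload": 0.9, "pilot_state": "alert"}):
    print(f"{suggestion} -- because: {why}")
```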
Thank you very much, Pavel. For the next presentation I am happy to welcome Simone Pozzi and Vanessa Roni from Deep Blue. Deep Blue is a research-intensive consulting firm based in Rome, Italy, whose expertise is in human factors and safety in aviation and other safety-critical domains. Simone and Vanessa are the two coordinators of the HAIKU project that will be presented today. Simone, in particular, is the CEO of Deep Blue; he has longstanding experience in leading human-centred innovation projects in highly complex sectors like aviation, space and maritime. Vanessa has been a consultant since 2015, and her expertise covers the application of human factors in safety-critical systems, including aviation and healthcare. Simone, Vanessa, the floor is yours. Thank you very much, and good afternoon everybody. We will present the HAIKU project — a research and innovation project funded under Horizon Europe. It is a three-year project, started in September 2022, so it will finish in August 2025. Basically, we are a human factors project: the goal is to develop AI-based intelligent assistant prototypes for aviation systems, and of course, as human factors people, our key challenge is to develop human-centric intelligent assistants. When we say human-centred, for us it means integrating human values, needs, abilities and limitations from the early stages of the design of our intelligent assistants. Indeed, we use a human-centred approach while designing them: we consider the operational goals and human needs, but also usability and societal acceptance, trying to find the place where AI can fit. And while doing that, we also try to analyse the impact of the technology we are developing — and of AI more generally — on human work, meaning that doing the same job with an intelligent assistant does not exactly mean doing the same job: it is something different, and we are exploring this part too. To do that we have gathered a total of 15 partners from 10 different countries, bringing three communities together. We have the human factors community, represented by us, but also Barry Kirwan from EUROCONTROL, human factors experts from ENAC, and others. We have the end-user community — Skyway, Thales, Brier and London Luton Airport. And we have our technological community: three technology providers that come from other domains — not from aviation — trying to bring best practices from other industries into our project; these partners are Suite5 from Cyprus, Engineering from Italy, and DFKI from Germany. That is the consortium. We have one external partner, which is one external end user — London Luton Airport — and EASA is one of our advisors, meaning that we keep them updated on the project and ask them for feedback: for instance, at the moment we are trying to apply their guidance to our use cases, and then we will provide them with feedback and ask for theirs, keeping this feedback loop going between us. So, our six use cases: each one aims to deliver one AI-based intelligent assistant prototype. The first use case is for pilots and aims to deliver an intelligent assistant to support them in managing the startle effect — this use case is led by our partner ENAC. The second use case is also for pilots; it is led by Thales together with Brier, and it aims to develop an intelligent assistant to support pilots in route planning and replanning. The third use case is the most futuristic one, let's say: it aims to develop an intelligent assistant to support the management of urban air mobility traffic.
This one is led by Linköping University and LFV. Use case four is for tower controllers, and aims to support controllers in decision-making related to the sequencing of inbound and outbound aircraft while optimizing runway utilization; this is led by Skyway. The last use cases are for airports. Use case five focuses on safety data, with the ambitious goal of enabling the shift towards predictive safety intelligence; this is led by EUROCONTROL with the support of Engineering and London Luton Airport. Last but not least, use case six is for airports — but actually for passengers at the airport — supporting them in finding the best way through the airport while trying to reduce the risk of spreading viruses inside crowded areas. That is the overview of our six use cases; at the bottom you have the QR code in case you want to know more about the use cases and the project in general. I now leave the floor to Simone, who will give more information about one of our use cases — for time's sake, use case number one, which we selected for this presentation. It is about how artificial intelligence can support pilots during startling and surprising events — the startle effect. The name of the intelligent assistant is FOCUS, and we created it with our colleagues from ENAC. Let's go through a story: we have a flight landing, there is a lightning strike, the pilot is startled — you can scan the QR code and see the actual video — and then the support kicks in. The support takes three different functionalities. The first is the detection of startle and surprise — and we want to differentiate the two: if you are surprised, you can still perform; if you are startled, you may freeze. So we want the assistant to be able, based on physiological parameters, to differentiate which case we are in. If the pilot is startled, we have the second functionality: support to the pilot to manage emotions and stress. This is based on biofeedback, and it is exploratory — we are trying to see whether calming the body before calming the mind is an effective approach or not, given that we may or may not have enough time. The third functionality: once the system detects that the pilot can perform again, there is guidance to regain situational awareness and get out of the problematic situation. So you see that even within one concept we have three different functionalities, and AI plays a different role in each. For the moment we are developing the AI part for the first functionality, the detection of startle versus surprise; for the others we have mock-ups. We are trying them out in real-time simulations, and then we will see whether we manage to develop the AI support for those as well.
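As an illustration of this first FOCUS functionality — with invented physiological features and numbers, not the project's data — a classifier separating startle from surprise might be sketched like this:

```python
# Invented-numbers sketch: tell startle (pilot may freeze) from surprise
# (pilot can still perform) so the right level of support kicks in.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
n = 300
# features: heart-rate jump (bpm) and skin-conductance rise (arbitrary units)
surprise = np.column_stack([rng.normal(15, 5, n), rng.normal(0.3, 0.1, n)])
startle = np.column_stack([rng.normal(35, 6, n), rng.normal(0.8, 0.15, n)])
X = np.vstack([surprise, startle])
y = np.array(["surprise"] * n + ["startle"] * n)

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[32.0, 0.75]])[0])  # likely "startle" -> offer support
```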
This was about the first strand of work in HAIKU, which is our use cases. But HAIKU, as Vanessa said, is a human-centred project, so we wanted to explore two other strands. The second strand is to test the methods and techniques we currently use for human factors and human performance and see whether they are still applicable to AI systems — in particular, trying to understand what explainability means from the end user's point of view, from the pilots' and controllers' perspective. And then there is the societal aspect, mentioned a couple of times this morning: we want to see whether the introduction of digital assistants may impact the safety culture of aviation. We have a very good safety culture in aviation — we are proud of that. Does it change if I am all of a sudden interacting with a digital assistant? Maybe for better, maybe for worse, but this is a question we are asking ourselves, and we want to put safeguards in place. And then, as anticipated, I will spend a couple of words on the impact on the human role: we want to assess it and try to anticipate how human roles are going to change. So, a couple of words on these two strands. We are analysing each of the use cases from the perspectives of human factors, security, safety, liability — the shift of responsibility and liability — and also compliance with regulation and with ethics. The key point is to test all of the gold standards and see if they still work or if we have problems, and to see how all of these things may interact, so as to give good feedback to the use cases. For instance, we start from one critical event: in the use case you have just seen, the assistant may trigger the startle-effect support when it is not necessary, and we ask ourselves — what is the impact on safety? Overload? What is the impact on the human aspects, such as trust — are the pilots going to lose trust because they get too many cry-wolf alarms? What is the impact on liability? There is an increased liability for AI providers, because the system is not perfect — it will never be perfect, and that we know — but there is also an increased liability for the end user, because we need to train the pilots for that. We are trying to see whether there are common cases that impact all the different key performance areas, sending feedback to the use case developers so they can tackle those issues at the design stage. It is worth mentioning that our colleague Barry Kirwan from EUROCONTROL, especially, is tackling the human factors aspect and trying to integrate the EASA guidelines — the objectives for human factors — with the SESAR human performance assessment process, in the form of an app where you go through different questions and it gives you the requirements you need to address as part of your development. It can be fairly open-ended: if you only have a few ideas of where your critical areas are, you can select the areas — the ones you see there on the right — and just go through the list of questions to see whether there are additional requirements to comply with, to include in your ConOps and in your design and development. This one is being done as we speak: we have the concept and the contents, and we will be developing the app over the next couple of months. Then, finishing on this one — the impact on the human role. Most of the project is technology-driven: we have technological opportunities, but we don't want to have to catch up on the human aspect afterwards; we want to design the human role while we design the technology. For instance, we analysed, for pilots, what the future interactions with our digital assistants are going to be, and whether pilots will need different skill sets to interact with them — and the answer is of course yes. It is not the same job; it is a different job, because now you have a digital companion. What we want to get out of this part of our work is a new set of training requirements for pilots.
For instance, for CRM, we want to see whether and where the digital assistant is going to impact the current requirements for CRM training, and if and how we need to modify them. It is exploratory work — we are trying to project ourselves into the future, with lots of uncertainties — but we thought it was good to try, and not just assess the impact after the technology is there, but to design the human side of the job while we design the technology. That is the spirit of this activity. Thank you. Thank you very much to Deep Blue, and we continue now with the next presentation, delivered by Emmanuel from CAE. Emmanuel is vice president of product management. He started his career in aerospace over 20 years ago at Dassault, moved through Bombardier, and then arrived at CAE; in more recent years he also expanded his experience at Amazon. Emmanuel, the floor is yours. Thank you. Thank you everyone, thank you for giving me the opportunity to brief today, and thank you for the two presentations just before me — they introduce exactly what I am going to talk about. So, CAE, for those who don't know: we build simulators and we train pilots — about 140,000 pilots every year, across commercial aviation, business aviation and military. Today I am going to focus on one use case where we leverage AI to improve training and, ultimately, safety. In a simulator session there are three people: the co-pilot, the pilot, and the instructor in the back. We have a product called CAE Rise, a training ecosystem that has been in place for about 10 years, in which we collect data — with the consent of the pilot and co-pilot — and aggregate it: telemetry of all the parameters the aircraft generates, plus the external parameters; as you can imagine, in the simulator we can control every outside parameter — weather, temperature, traffic — and any kind of malfunction to simulate emergency or abnormal procedures. What we have also been doing with CAE Rise is assessing how the pilot behaves with the co-pilot in terms of communication and scanning patterns — we went one step further and instrumented some biometry into the simulator. It has been mentioned a few times this morning: there is no AI, no automation, if there is not a big data lake behind it. For 10 years we have been collecting data across the planet, across the different aircraft types you can see at the top of the slide — we probably have a simulator for every aircraft flying today — and we have about 2,000 customers signed on to that system, be it airlines directly, business aircraft operators, or military customers for whom we provide training. So you are going to tell me: okay, it's nice, you have data — what do you do with it? Before diving into that: within the session we focus mostly, per EASA and the FAA, on the three most dangerous phases of flight — the takeoff, the approach and the landing — and on all combinations of malfunctions and external effects that can impact the safety of a flight. The instructor is there to assess how the crew performs in those sessions. After 10 years of data gathering, as you can imagine, we have a lot of data. First of all, we have algorithms that automatically detect when a crew performs a procedure: if they perform a takeoff, the system recognizes it as a takeoff and then collects the proper parameters to be assessed later on. We conduct those manoeuvres every day, every hour, every minute, on different aircraft types, and we put the data into the data lake — cloud computing somewhere.
The next step, in the middle, is where the magic happens. We say: okay, now we have, for the Boeing 737 or the A320, the rejected takeoff — we have 10,000 hours of recordings; what do we do with that? That is where we start doing data clustering, to understand, across populations, regions, or even sometimes airlines — we can go down to the pilot level — whether pilots perform a rejected takeoff well. And what you can see — it is a very small graph, the blue dots and the yellow dots — is that some pilots perform the rejected takeoff when they are past V1 by many knots, increasing the risk to the safety of the flight, and the reverse as well. So what do we do as a training provider? We unpack the outliers to understand what happened, and try to provide remedial, closed-loop training. What we see in the middle is not just simulator data: we operate on behalf of airlines, on behalf of governments, and on behalf of business aviation partners, and some of the top three, four, five risks in flight operations we also see in the simulator — because, as I often say, pilots fly as they train and sometimes train as they fly. So a way to reduce flight operations risk through this methodology is to improve the training and reassess every month: do my pilots now perform the rejected takeoff well? Do I still see pilots rejecting at V1 plus 10, or pilots who should not have rejected at that point rejecting for the wrong reason? The last element: among the many analyses we run, one compares against the prescriptions of the aircraft manufacturer. Going back to the rejected takeoff, because it is the easiest one to talk about: we check, for example, whether the pilot retracts the throttles within 2.5 seconds and brakes within 3 seconds. I let you imagine — if you are the instructor in the back, with the simulator in motion and vibrating, you cannot assess whether the pilot is acting at exactly the right moment. So to give feedback to a crew, it is critical to have access to that data and provide recommendations: you did X because of Y — let's work on Y to improve the occurrence of X.
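A toy version of the rejected-takeoff analysis just described — field names and figures invented, not CAE Rise data — simply flags recorded RTOs initiated above V1, the outliers that trigger remedial training:

```python
# Hedged sketch: flag rejected takeoffs initiated past V1.
rto_events = [
    {"session": "B737-001", "v1_kt": 140, "reject_speed_kt": 132},
    {"session": "B737-002", "v1_kt": 140, "reject_speed_kt": 151},  # outlier
    {"session": "A320-003", "v1_kt": 145, "reject_speed_kt": 144},
]

def flag_late_rejections(events, margin_kt=0):
    """Return sessions where the crew rejected after passing V1."""
    return [e["session"] for e in events
            if e["reject_speed_kt"] > e["v1_kt"] + margin_kt]

print(flag_late_rejections(rto_events))  # -> ['B737-002']
```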
The next two slides map onto what you just heard — it is about biometry. The data so far is what is often called hard data: mathematical, aerodynamic, avionics, environmental data. But as we know, pilot and co-pilot are human — they are still human and will remain human — and the interaction, the communication, where they are looking, is very important. That is why we ran an experiment with Singapore Airlines where we put 200 pilots through 150 sessions on the 777, asking the crews to perform a landing with the loss of an engine at different altitudes and speeds, to assess how they behave together and how they communicate when the co-pilot detects the engine loss. In those phases of flight the pilot must look at the PFD — the primary flight display — and outside, all the time. Do they really do that? Again, it is impossible for an instructor sitting in the back to see where the pilots are looking. The other element we assessed was whether the pilot and co-pilot do the right visual checks: as you know, there are procedures where you need to do A, B, C, then check — are they really doing the check at that moment in an emergency situation? The next slide is an actual video of what happened in the simulator. At the top left, PF stands for pilot flying; on the right side is the PM, the pilot monitoring; the two screens at the bottom are the primary flight display — we are on approach, 800 metres above ground — and the navigation display. You see the yellow dots on the two top screens: the size of the dot shows how long the pilot or co-pilot has been looking in that area, and you will see, throughout this emergency procedure — the crew doesn't actually know whether the engine is going to fail, and obviously doesn't know when or what will happen — the scanning pattern of the pilot flying; look also at the scanning pattern of the pilot monitoring on the right side. Oh, we have no sound? It should be working, let me see. Okay, got it. "I'm not getting FD guidance." "FD off." Sorry for that. "Cancel the caution." "Caution cancelled. Can you recycle my FD please, I'm not getting FD guidance." "FD off." "Minima." "Check. Continue." "Speed brakes up." So what you see here is a landing without an engine. We went one step further — unfortunately I don't have enough time — and worked with a university in North America to assess the cognitive workload through the gaze tracking and also a galvanic skin detector, because we have seen — and many reports on military pilots show it — that the higher the cognitive load, the slower the brain reacts, specifically in those critical situations. It is a simulator, but this can happen in real life, and the goal of the simulator is to train pilots for the unlikely event of an emergency situation. Thank you very much. Thank you very much, Emmanuel. And now, last but not least, we have Boeing presenting — Matt and Dragos. Matt is the autonomous systems certification subject matter expert at Boeing: he focuses on certification strategy and on developing certification techniques and processes for novel technologies and new concepts of operations, such as AI and machine learning, and in advanced air mobility. Dragos is a Boeing Senior Technical Fellow and the artificial intelligence chief technologist at Boeing; he is the technical lead of AI research and engineering at Boeing, and he was one of the pioneers of AI at Boeing and in aviation in general. The floor is yours. Good afternoon. We will talk today about what we call the Beacon project, which is our IPC between Boeing and EASA. I'll start with a few acknowledgements, then talk about the IPC and what we are doing, and then hand off to Dragos, the AI chief technologist, to talk more about the system we are exploring. Acknowledgements-wise, first and foremost I'd like to thank the Boeing and EASA teams that have been working on this — it is a long list of names, too long to name everybody, but everyone working on it knows who they are; I'd just like to stop and thank them for their contributions. I'd also like to talk a little about what Boeing is doing as we engage with global regulators in exploring innovative and emerging technologies. This is one of a host of projects we are doing, looking — as you can see in our strategic objectives here — to strengthen regulatory relationships and, where we can, help foster global regulatory alignment between the regulators. As I mentioned, this is just one of a series of projects: we have another IPC with EASA looking at the AAM ConOps for our partners at Wisk, and a few other operations. I just wanted to highlight those first before we get into the details.
So, the IPC: what we are doing here, in collaboration with EASA, is looking at the regulatory requirements, the means of compliance, and the V&V strategies for a machine-learning-based system. We are using concept paper issue two as the basis of the work, and our experimental automated taxi system — which I'll talk more about in a minute — as the surrogate for exploration. We are considering both a level 2A and a level 3A version of the system; we started with 2A — I'll talk about the split in a second — and we are hoping to wrap that up in the next few months. We began this IPC back in June of 2023 and are looking to finish early to mid next year, and we expect to publish a report at the end of it, publicly available for reference, similar to the IPCs other companies have done. A little more about why we proposed this IPC. We thought it would be a very interesting and helpful exercise, for both us and EASA, to explore these systems off the critical path of certification, so we really had time to sit down and talk about the points of interest we both had and wanted to explore. We also thought it would be a good opportunity to take the concept paper, apply it to a system, and see what we learn from that application — and where there might be potential areas of refinement in the future, on both sides. We also wanted to help lay the groundwork for future certified AI systems: like the other IPCs mentioned — ForMuLA, CoDANN, MLEAP and others — we wanted to add our contribution to the growing body of work the rest of the industry has built up. And, as we also said, we want to help facilitate harmonization: when the results are advanced enough that we can share what we have done with the FAA, we plan to do that as well, to help add to the continuing engagement between the two regulators. Areas of focus for our IPC: first, the application of the objectives and the MOCs from the concept paper — take those, apply them to the system, and see what we learn about how the objectives and MOCs work and what we would need to do with the system to satisfy them. Second, the validation and verification approach: there has been a lot of discussion about the W-shaped model, and what you can see here is the Boeing take on it — how it overlays with the traditional systems engineering V and how they interact; that is one of our areas of exploration. And then, as has been highlighted multiple times today, human factors is a big thing, especially for the level 2A system, so we are focusing on that, and on the use of STPA — system-theoretic process analysis — as a way to explore the human factors, especially around the HMI. On the bottom-right of the screen you can see the STPA control structure diagram we created for our system as part of this; we will go into it, and what we learned from it, in the report. Last slide for me — key takeaways so far. Again, I'd like to thank EASA; they have been an excellent partner in this IPC, and we have had a lot of very good, direct conversations both ways that we would like to continue.
As we've worked on applying the concept paper to the system, we have identified, and I think EASA would agree, some potential areas of refinement, especially in light of the second-to-last bullet there: the approach the concept paper takes definitely seems like a viable way to approach certification of these systems, but it is a concept paper, it's still being worked through, and that's where we're hoping to help, adding to the areas that can be refined as we move forward and as the concept paper moves forward. So, what's upcoming: later this year, EASA and hopefully the FAA will both be visiting us at our test facility in Montana for a demonstration of the automated taxi system. At the same time, that will hopefully be the wrap-up of phase one, which explores the level 2A version of the system, and shortly thereafter we'll kick off phase two, the level 3A version. We're looking to finish phase two in the first half of next year and then hopefully get the published report done and out by roughly mid-year next year. With that, I will hand it off to Dragos, and he'll talk more about the system. So, this is an automated taxi system at a TRL somewhere between 4 and 6; it runs on an experimental airplane, which I'm going to describe in a few minutes. Please ask questions; I'm happy to address them here and also in one-on-one discussions afterwards. It runs on our experimental airplanes, and you can see some videos we have released on our LinkedIn page and on the Boeing LinkedIn page. The philosophy in developing the system was to change as little as possible in the environment and the operations: we don't clean up the environment, we don't stop other vehicles from moving around; in fact, in some demonstrations at the places where we test, we had incursions, we had noise, we had different weather, and so on. The system is able to receive a taxi clearance via radio from a human ATC controller, very much like a pilot would, to parse the clearance, and to plan the taxi route, and it provides readbacks if the clearance is uncertain. Uncertainty is everywhere, and we try to handle it with the utmost care and to reason on it at every step. We provide feedback: we read back if, for example, the clearance doesn't contain all the taxiways; say the path would be Alpha, Bravo, Juliet and Bravo is missing, then we read back. The system executes the clearance, and it can execute it at all the airports for which we have maps, which is pretty much all the airports in the world. And the important thing is the perception: perception is used for obstacles, but also for localization, which we run on our own maps. As was mentioned, we are at level 2 here, 2A in particular, in which the flight crew monitors the automated taxi, can disengage it and can override it; in earlier demonstrations we also had a ground station just for monitoring safety, for experimental reasons. So this is the flight crew oversight we have right now: the crew is responsible for the activation of the system and monitors the execution; we expose the execution, the planning and also the perception as much as possible. If needed, the crew can override, for example, the destination and details like that in the plan, and it can obviously override in abnormal operations.
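To make the clearance-handling logic concrete, here is a minimal Python sketch of the kind of check described above: parse a transcribed clearance into taxiway segments and request a readback when the route is not continuous. The airport graph, function names and phraseology are illustrative assumptions, not Boeing's implementation.

```python
# Hypothetical sketch of clearance parsing and readback; the taxiway graph,
# names and wording are assumptions for illustration only.

AIRPORT_GRAPH = {            # which taxiways connect to which
    "ALPHA":  {"BRAVO"},
    "BRAVO":  {"ALPHA", "JULIET"},
    "JULIET": {"BRAVO"},
}

def parse_clearance(transcript: str) -> list[str]:
    """Extract the ordered taxiway sequence from a transcribed clearance."""
    known = set(AIRPORT_GRAPH)
    return [w for w in transcript.upper().replace(",", " ").split() if w in known]

def check_route(route: list[str]) -> str | None:
    """Return a readback request if consecutive segments do not connect."""
    for a, b in zip(route, route[1:]):
        if b not in AIRPORT_GRAPH[a]:
            return f"Confirm routing: {a} does not connect to {b}, say again."
    return None  # route is continuous, proceed to planning

route = parse_clearance("taxi to runway 28 via alpha, juliet")
readback = check_route(route)   # asks to confirm, since BRAVO is missing
```

In the Alpha-Bravo-Juliet example above, the missing Bravo segment breaks route continuity, which is exactly the condition that triggers a readback.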
For that, there's an interesting component, because we humans always project what may happen, right? We anticipate, and that's why we're scared when somebody makes an unexpected movement. We try to do the same and to provide that sort of input to the pilot, so that they are able to override. We have our experimental displays; again, this is experimental, this is not certified, it's a mid-level TRL. This is the high-level architecture, and it probably matches a lot of what you've seen earlier: we have a planning and decision-making backbone, which is symbolic but handles uncertainty, connected to our two big learning-based components, one being the perception and the other the dialogue management. On the dialogue management, and I talked to some of you during the break, we haven't exposed it in detail in our IPC here, and it's obviously work in progress. I mentioned that we use maps, and we have what we call our navigation database, which we patented; basically, that's a symbolic representation of the airports for the planners to run on. The planning and decision-making backbone is symbolic but, importantly, state-action based, and again it handles uncertainty, including the uncertainty of the input. It also plans, and expands the plan as it is executed, so it's basically a state-of-the-art, modern hierarchical planning component. The focus in our IPC is on the two major classes of risks coming from perception. One is object detection, and probably all of you who have implemented object detectors on any robot know that it's never only a detector: typically it becomes some sort of pipeline, where we do some scene pre-processing, then there is the detection, then there is a reasoning component that has at least tracking or something similar, and then the decision making. And it's important here, and we discussed this both with EASA and the FAA, that the decision making is tied to goals, and there is also the risk: it's very hard to talk about AI, quote unquote, in the absence of goals or of an objective. We do optimization here, and that is the goal. The other class is localization. One example: currently, maps are not certified, because they are handled by the human pilots; we use those maps, the Jeppesen maps. It's interesting, because we believe that if there is going to be automation for localization, there will have to be a risk analysis, and we ran that risk analysis as part of the IPC. All the measurements are fused in the evidence grid, and we run a standard localization; given that we have the maps, we don't need to do SLAM. That ends my presentation; happy to address your questions.
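As an illustration of the pipeline shape Dragos describes (scene pre-processing, then detection, then reasoning such as tracking, then a goal-tied decision), here is a hedged Python sketch; the class names, the confidence threshold and the decision rule are assumptions for illustration only, not the actual architecture.

```python
# Minimal sketch of a perception pipeline: pre-process -> detect -> track ->
# decide. Everything here is a placeholder, not Boeing's implementation.

from dataclasses import dataclass

@dataclass
class Track:
    obj_id: int
    position: tuple[float, float]   # metres, airport frame
    confidence: float               # fused over time, not a single frame

class PerceptionPipeline:
    def __init__(self, detector, tracker):
        self.detector = detector    # learned component, e.g. a CNN detector
        self.tracker = tracker      # reasoning over time: association, smoothing

    def step(self, raw_frame, ego_state):
        frame = self.preprocess(raw_frame)        # crop, normalise, rectify
        detections = self.detector(frame)         # learned, per-frame, noisy
        tracks = self.tracker.update(detections)  # tracking stabilises detections
        return self.decide(tracks, ego_state)     # decision tied to the goal

    def preprocess(self, raw_frame):
        return raw_frame  # placeholder for scene pre-processing

    def decide(self, tracks, ego_state):
        # The decision is tied to a goal and a risk level, not to raw
        # detections: hold if any sufficiently confident obstacle track remains.
        threats = [t for t in tracks if t.confidence > 0.7]
        return "HOLD" if threats else "CONTINUE"
```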
Thank you to Boeing, and thank you again to all the presenters for the use cases we have seen today. I would now like to have a short discussion with the panelists and a round of opinions on a couple of points we have put together. The first question I would like to ask all of you: based on the actual content of concept paper issue two, and based on your experience so far with your use cases, and probably with others you have done before, what do you consider to be the most challenging points of the anticipated AI trustworthiness framework, and what can be anticipated today by EASA? Maybe we just follow the order. Okay. I think that for the deployment of AI, as I illustrated with conflict detection and resolution, we could envisage deploying this kind of system in a first step in shadow mode, with the human in the loop, also to make the operator more confident in the module: not to use it operationally in the first phase, but in shadow mode, and then to use the feedback of the operator. For conflict detection and resolution, for example, we could have a clearance by the operator, and that feedback could improve the performance of the system through reinforcement learning. So we try to define different steps: in the first step the operator is in the loop, the system is in shadow mode, and we collect the operator's feedback to improve the system; in the second step the operator will control the system. But for the time being we are addressing mainly advisory systems, like conflict detection and resolution and also the arrival manager, where it is up to the controller to make the final decision.
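A minimal sketch of the shadow-mode loop just described, assuming a simple logging format: the advisory runs in parallel, the controller keeps full authority, and accept/reject outcomes are recorded as a learning signal for later offline improvement. All names and the reward convention are hypothetical.

```python
# Hedged sketch of shadow-mode feedback collection; file format, field names
# and the reward convention are assumptions, not the actual project design.

import json, time

def shadow_mode_step(ai_advisory, controller_action, log_file="feedback.jsonl"):
    """Log whether the controller's resolution matched the AI proposal."""
    record = {
        "t": time.time(),
        "advisory": ai_advisory,
        "controller": controller_action,
        "accepted": ai_advisory == controller_action,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["accepted"]

# Offline, the accept/reject labels can become rewards for fine-tuning the
# conflict-resolution policy (e.g. +1 accepted, 0 rejected) before any step
# out of shadow mode is even considered.
```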
One of the biggest challenges we face is security. For commercial airlines it's about making sure data is anonymized, so you can't trace back to the pilot or the co-pilot. On the military side, when you deal with military assets and cross-domain security with different clearances, someone could eventually look at the data and reverse-engineer military doctrine; that's one of the biggest challenges we are facing, specifically with cloud computing. The solution I talked about is designed to run in the cloud, whoever the provider; however, we see on the military side that it's not always possible to have a secure cloud, specifically outside the US, so we also have solutions leveraging edge computing to try to circumvent those risks and limitations, mostly for military customers. So security and classified data would be the biggest challenge. Thank you. For our project we have multiple challenges, depending on the component; I would mention two of them specifically. When we are detecting pilot incapacitation, which is a machine learning component, the main challenge is data representativeness: how to make sure that we have trained the system on a sufficiently large and diverse set of samples so that we can detect incapacitation correctly. And for the teaming component, it's probably designing the system so that we keep the pilot in the loop: making sure that the pilot is aware of the automation, and the automation is aware of what the pilot is doing, keeping these two actors in harmony. For us, it's really something related to the way we designed the project: we have six different use cases and we are trying to harmonize the approach across the six. The guidance material is great, because it gives us a common language, but it is still a struggle, because we have different levels of maturity and different ways of interacting with the AI system; sometimes it's a dialogue, sometimes it's just giving an input and receiving an output, which is not really a dialogue, so we have different teaming concepts behind them. To us, the biggest challenge is trying to keep it all under the same umbrella, with the same harmonized approach, even though we have six different use cases covering different aviation domains. And for us, and I don't think this will be a surprise given the amount of coverage it gets in the concept paper, it's around the HMI: providing the flight crew with the correct information they need in order to monitor the system and intervene if necessary, giving them what they need, when they need it, in order to make a decision when they need to act. I mean, it's not that much different from today: there's automation, and there's been a lot of human factors discussion over the last few years; it's about learning what we need to do differently, if anything, around AI, and making sure the flight crew gets the right information. Technically, on the system side, I think we have three major issues. First of all, perception: the whole community has learned a lot about perception just in the last ten years, and we have realized how much of it is about reasoning and handling uncertainty, so that's one huge thing. Then, let's not forget that these systems run in real time on some sort of airplane, where you will always be limited in power and computational capacity, no matter what, and that has to be addressed continuously, even as the technology advances. And then there is one thing we should all acknowledge: the more we go with pure end-to-end learning, the more we fall into fallacies. Why? These are high-risk systems, and for high-risk systems, for those rare data points, we will never have enough data. So the challenge will be not so much having the data, but inferring the knowledge, so that you have knowledge inferred from data and knowledge coming from physical rules and so on; that's what we try to do in that navigation database, for example, and it's a challenge in general. For the high-risk systems we will never have enough data, we'll have to cover it otherwise, and then comes the reasoning. Thank you very much to you all. I think we can switch now to the questions from the floor, and we will go to Slido for that. I'll read the questions in the order they appear. The first one: currently, several methods exist to classify performance, such as pilot performance; can an AI tool be used to complement that, or is it unacceptable under the AI Act? I think this was probably raised on Thales's presentation; I don't know if you want to react to it. Well, I have not addressed pilot performance, so it's not mine to answer. Yeah, I can maybe give it a try. Today, instructors, depending on where they are and on the jurisdiction, assess pilot performance on a scale: one to 100%, one to five, etc. The product I showed is a complement that also provides a grading of the actual performance of the pilot, where the instructor has the final say and can override the system. It gives the instructor some information, like the rejected takeoff I described earlier: it's impossible for an instructor to see whether the pilot has pressed the brakes within three seconds. So the system is an assistant, an aid to the instructor's evaluation.
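The rejected-takeoff example lends itself to a small illustration: from a time-stamped event stream, measure the delay between the RTO call and brake application, which is exactly what an instructor cannot reliably eyeball. The event labels and the three-second criterion are assumptions for this sketch, not the product's actual parameters.

```python
# Illustrative sketch of a rejected-takeoff reaction-time check; event names
# and the 3 s limit are assumptions, not the vendor's real specification.

def brake_reaction_time(events: list[tuple[float, str]]) -> float | None:
    """events: (timestamp_s, label) pairs from the sim or flight recording."""
    t_rto = next((t for t, e in events if e == "RTO_CALL"), None)
    if t_rto is None:
        return None
    t_brake = next((t for t, e in events
                    if e == "BRAKES_APPLIED" and t >= t_rto), None)
    return None if t_brake is None else t_brake - t_rto

events = [(120.0, "RTO_CALL"), (122.4, "BRAKES_APPLIED")]
dt = brake_reaction_time(events)                 # 2.4 s
within_limit = dt is not None and dt <= 3.0      # flagged for the instructor,
                                                 # who can always override it
```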
Thank you. Somebody else who wants to react? Yes, on pilot performance, on the same point: I think here we have two questions. The second one is whether this is acceptable under the AI regulation, and it depends on what you use the assessment for. If we're talking about monitoring people in the workplace, no; if it is for training purposes, that is what we currently do. The first part of the question, can we do it, is an open research question today. It's not a magic box where you put everything in and get the answer. I think they are doing the same as we are: we are correlating with other metrics, other indicators that we currently use, normal standard practice, and we see whether the AI analysis can add anything to it. A problem is the scarcity of data; you, for instance, had quite a lot of data, so maybe you can also complement on that. A problem that we have in HAIKU, for instance, is the scarcity of data: with the recordings we did, we don't have enough data to answer that question, to be honest. Perhaps I would add a complement of information: would AI be the proper mechanism to assess CBTA, competency-based training? I think that's one vein that needs to be explored, and our industry is exploring it, because it's one thing to know how to fly, and another to have the right competencies to operate a flight from A to B, and I think what you just mentioned can be leveraged with the proper regulation around it. Thank you. So let's go to the next question, which is about the HAIKU use case number one: would it be acceptable to pilots to be monitored with biometric data on how they perform? Acceptable rather than accepted, as I read it. Well, for us, we monitor physiological data, not performance data, to detect whether there is a startle or surprise effect. We spoke with different pilots and we didn't note any issue with it; especially when you talk about the startle effect with experienced pilots, they actually tend to like the idea of having this kind of support, because when it happens it's not a nice experience. So we didn't note any acceptance issue related to this use case and this aspect. Thank you. The next question is on the pilot state and task load monitor: is that not social scoring, an unacceptable risk according to the AI Act? That's a similar response to HAIKU's: we are not monitoring performance data, it's physiological data, and it has nothing to do with social scoring. We are really measuring human physiological data and the markers that can point to drowsiness, sleep, etc. With this information we explain to the pilot that this is just another safety net, even for them, to be aware that their vigilance is degrading, and the system can react to that. So it's not about how they perform or how they score; it's really about safety. Okay, thank you. Then we have a question for Boeing: what will be the difference between the level 2A and 3A applications? The automatic taxiing seems already mostly level 3A, safeguarded by a human. Yes; like I mentioned, we're going to be rolling into level 3 later this year, so I'm sure this will be refined a little as we have our discussions with EASA. However, our going-in, preliminary position has been that the level 2A version of the system is as if we put this system into a traditional two-person flight deck today: one flight crew monitoring one system on one aircraft. And, per our preliminary discussions with EASA, if we had that system installed in a remotely piloted or remotely supervised aircraft, with a remote supervisor overseeing multiple aircraft at the same time, that would be a level 3A system, due to the reduced amount of oversight the human supervisor could provide to any one system at any one time. In general, this is a challenging technical discussion, and I'm happy to have it during coffee, or tonight during the soccer game.
Doing it practically (and I know there were official declarations from Boeing, from Airbus, from authorities and so on), there is a state: in machines we have the concept of state and actions, and here we talk about the state of the machine and the state of the environment, where the environment is whatever is in it, plus the humans. You have to monitor that, and once you monitor it you can claim to be somewhere in level 3 and so on; you have to monitor it and to act safely. We have had discussions about this: the borderline will be blurry once all of us work on actual systems, simply because simple advisory systems that are not state-based will not live long. State and action are central; call the systems whatever you want, automated, intelligent, AI, whatever, it remains central, and that's probably why Guillaume cares about reinforcement learning and planning. It doesn't matter how you do it: you'll have a plan, and that plan has to monitor everything, no matter what, 2A or 2B. I think it's going to be a question of robustness at some point. Yes, Dragos, just to jump in on this: this is exactly one of the discussions we are having now. We focused on the 2A example for now, so we didn't really engage with that yet, but we really see this reasoning element as a key element, and we need to see what sits around it; it's kind of a system of systems, in fact, a separate piece of the system, and that's something we need to investigate, theorize about, and see how well it fits different use cases. Maybe to create a bit of interaction, and not let you fall asleep on this side of the room (because we have a lot on Slido, and each time we reply to a question another one comes in), let me check whether there are questions from the floor. I can pass a microphone at the back if necessary, but you also have mics on the tables. No? You're okay. At any time, raise a hand; we take questions from the floor as well, and we want interaction on site too. Thank you; let's continue with Slido then. The next question: it is said that AI is not permitted to be used for scoring people; however, data is used to score pilots in FOQA without AI. Is there a difference? Anyone want to start on this? I think, tying back to the previous question, it's about the state and the uncertainty. There's got to be a very thin line there, right? Because we all know that it's illegal to score; however, you are allowed to monitor the uncertainty of an action, and we should be allowed to monitor, for example, whether the belief state of the pilot is uncertain. I think that's the goal. From a systems perspective, it's all about understanding the state of the environment (and the pilots, the humans, are part of that environment), including what their belief state is, and then the system should be able to take the optimal actions. It's about optimal actions and optimization; it's not about scoring.
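One way to picture the distinction being drawn, monitoring uncertainty rather than scoring the person, is a belief distribution whose entropy drives the system's action. A minimal sketch follows; the states, probabilities and threshold are illustrative assumptions only.

```python
# Hedged sketch: monitor the uncertainty of a belief state, not the person.
# States, numbers and the threshold are made up for illustration.

import math

def entropy(belief: dict[str, float]) -> float:
    """Shannon entropy (bits) of a belief distribution."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

# Belief over whether the human is tracking the current situation,
# fused from context; note this is not a performance score of the person.
belief = {"situation_tracked": 0.55, "situation_not_tracked": 0.45}

if entropy(belief) > 0.9:          # near-uniform belief means high uncertainty
    action = "RAISE_SALIENCE"      # e.g. make the HMI cue more explicit
else:
    action = "CONTINUE"
```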
Thank you. The next question is for Thales: how do you keep the pilots in the loop when increasing the level of automation during higher-workload periods? Okay, first I will change the word "pilot" to "controller", because I have presented only air traffic control use cases. To be very concrete, with conflict detection and resolution, the CDR will be an advisor: it will propose solutions to the controller, and the idea is that the controller will have the proposal from the classical algorithm and the proposal from the AI-based algorithm, so he can make his own appreciation of the solutions proposed by the classical approach and the AI-based approach, and select the best one based on his own consideration. So we don't think it will increase the workload of the controller: it proposes different alternatives, and he has a higher degree of freedom to select the best one according to his understanding of the context and the situation. And in this way we will increase the confidence of the controller in the tool. Take the analogy of using Google Maps in your car: at the beginning you are not so confident that its time of arrival is correct, but when you have used it ten times and had good estimations of your arrival time, you become confident in the tool and you use it. So it's more about increasing the confidence of the controller; obviously we also try to avoid increasing the controller's workload, but it's necessary to keep the controller in the loop to increase that confidence. Thank you. So the next question is... sorry, can I reply to this one, the one about how you keep the controller in the loop? To me, from a human factors perspective, you need to answer two different questions. One is Design with the upper-case D: the concept of human-AI interaction, the partnership you want to have in your system. It can be a dialogue; it can be a reactive system that only reacts to the user's input; it can be proactive, alerting you when there is something you need to pay attention to. That is the concept of human-AI interaction you want to put in place. Then there is the lower-case-d design, which is about the user interface, what Matthew was talking about before: what you put on the interface at that moment, in that specific situation. It's a matter of the explainability levels, of that specific context and situation. And the two are linked; it's always a dialogue between those two levels: the metaphor, the analogy for the human interaction you have in mind, and the way you implement it, with a constant back and forth between the two. If you only have one of the two, you lose the big picture; if you only have the high level, you don't have the solution for the detailed design of the interface. Thank you, Simone. So, the next question: none of the use cases elaborated on the final system safety risk effects, such as minor from a false positive or major from a false negative. Why? Does anyone want to take it? On the contrary: we had very limited time, but basically all the risks we mentioned, for example for perception, are evaluated. Obviously, detection is about false negatives and so on, but localization, for example, is about continuous risk, basically about the uncertainty. The risk is the probability times the loss, right? And the risk analysis is central to our work, and I guess to pretty much everybody's work.
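Since risk is framed here as probability times loss, a tiny sketch can show why false positives and false negatives should carry different weights when tuning a detector; all the numbers below are placeholders, not assessed values.

```python
# Hedged sketch of expected risk = sum of (rate x loss) over failure modes;
# the rates and loss weights are illustrative, not safety-assessed figures.

def expected_risk(fp_rate: float, fn_rate: float,
                  fp_loss: float = 1.0,     # e.g. "minor": unnecessary stop
                  fn_loss: float = 100.0    # e.g. "major": missed obstacle
                  ) -> float:
    return fp_rate * fp_loss + fn_rate * fn_loss

# A false negative is weighted far more heavily, so the detector's operating
# point should minimise expected risk, not the raw error count:
print(expected_risk(fp_rate=0.02, fn_rate=0.001))   # -> 0.12
```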
Thank you, Dragos. Anyone else want to react, or shall we go to the next one, which is for Thales: using open source like TensorFlow, where does the liability lie? So, TensorFlow is just a tool; we could use other, similar tools for developing a neural network. What is important is to validate and to qualify the result, the neural network itself. TensorFlow is just one tool to learn and train the neural network, and we have other tools at Thales that we use for developing this neural network technology, but the challenge is more the qualification and validation of the result, not the tool used for the development. Thank you. We now have a question for Honeywell: what is the limit between pilot monitoring and emotion reading? The latter is prohibited in the EU AI Act. Yes; I assume the "limit" was meant as a threshold or a dividing line. For pilot monitoring, it's about the data we selected for training. When we started building the classifiers, we did an analysis: you can have multiple sensors, camera-based sensors, wearable-based sensors, chest sensors, different kinds of radar motion sensors, etc., and then you have a tremendous number of markers that can give you some kind of value: heart rate, eye movement, galvanic skin response, etc. After this analysis, and after consultations with the subject matter experts, meaning the medical doctors, we down-selected the markers that contribute to the classification of the target states. If we are to detect sleep, we have the set of markers that contribute the most to sleep detection; if we were to detect emotions, we would select different markers for that. So it's about the selection, and it's also about the challenge of data representativeness: we need to acquire a sufficient amount of the markers and the data to be able to classify the requested state.
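The down-selection just described can be illustrated with a generic feature-importance ranking; the marker names and the data below are made up, and scikit-learn is only one possible tool, not necessarily the one actually used.

```python
# Illustrative sketch of marker down-selection per target state; features,
# data and labels are synthetic placeholders, not a real dataset.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

MARKERS = ["heart_rate", "eye_closure", "galvanic_skin_response", "head_motion"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(MARKERS)))       # placeholder recordings
y = (X[:, 1] > 0.5).astype(int)                # placeholder "sleep" labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
ranked = sorted(zip(MARKERS, clf.feature_importances_),
                key=lambda kv: kv[1], reverse=True)
selected = [m for m, _ in ranked[:2]]          # keep top contributors only
print(selected)                                # eye_closure should dominate
```

The point the sketch makes is the one from the answer above: each target state (sleep, drowsiness, and so on) gets its own down-selected marker set, which is also how the system stays on the monitoring side of the line rather than drifting into emotion reading.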
Thank you very much. Maybe just a comment on this question, because it has not been mentioned: there is an exception in the EU AI Act allowing emotion recognition specifically for safety and medical reasons, so it would be compatible, I assume, for many of these use cases, specifically in this domain. Okay, thank you. Just to complement that: I think the point is exactly this, to trace properly the way you select the criteria, and the way you trace it to the prohibited or non-prohibited use cases. This is something we have to think about as well. Great, thank you. We now have another question, rather general, for all of you if you want to react: why do all the use cases start with "intelligent" rather than "virtual" or another term with "assistant"? It is not intelligent, is it? They don't all, actually; so, to make our friend pay attention: I don't think ours starts with "intelligent", and we have tried not to use the term in FAA discussions; they don't, and we don't claim intelligence. Do your cases start with "intelligent"? Well, yes. There was a long discussion inside the HAIKU consortium: we started calling them digital assistants and then we decided to move to intelligent assistants. That was, first of all, because the assistants are not all AI: some parts are AI, others are other technology. And second, because when we design them, as we said, we start from user needs, from human values; we try to respond to real needs with our intelligent assistants, to take from technology the best we can in order to make something intelligent that can really help our pilots and controllers while they are working. This is why we call them intelligent assistants. Thank you. When we launched our product ten years ago, the biggest pushback we got was from the instructors, not from the pilots. We used "virtual assistant", and the instructors, who are pilots, quickly jumped to "okay, they will replace me". But if you read the presentation, and if you have followed me, for sure there are automatic things that can happen, yet today it is a complement. That's why I purposely do not use "intelligent", or "virtual" for that matter, because it triggers emotion and a bad perception. Yes, I can see you, and I hope you can hear me. I have a question about what you were saying before about the monitoring of performance, the monitoring of pilots and so on. The AI Act has a specific list in Annex III, in which there is also employment management. The purpose of that list is to protect against another kind of risk: not safety risk, but the risk of harming the fundamental rights of citizens. There are many examples, like access to private services, financial services, or public benefits and so forth, and one of the listed domains is employment management. So could these use cases fall under this list? In that case they would have to be considered high risk because of the use, not because of the domain in which they fall. Thank you. I don't know if one of the panelists wants to take it, but I would say, in a general manner, Matteo, thinking about what we presented this morning: this is precisely what we need to characterize in the Part-AI. There are so many cases where the use of it could make a difference, so the operational use is the important thing, and that's why we always start with the ConOps: the concept of operations is our starting point. Having said that, and this is something we didn't mention this morning, it also brings us a bit higher up than the system itself: when we say ConOps, we are at the product level, or even at the level of interaction with the human, and that is on purpose, because we have to take in this big picture. I would say the Part-AI and the ConOps are the generic answer to what you're looking for, but I would not answer more precisely, because it's a use-case-by-use-case assessment, I would say. Overall, I think we're only touching the surface. When we talk to our pilots (and one of the leaders of Airbus mentioned this a few years ago), the pilots do so much more than flying an airplane: before you even start taxiing, the pilots decide on offloading luggage, anxious passengers, personnel; we don't even talk about that now. So, the way I see this automation, and I'd like the community to embrace this as much as possible: so far we, Boeing, Airbus, all the makers of airplanes, have optimized this interface between two intelligent humans and a lot of automation, and we have two very interesting systems there: the autopilot, which acts but is low-dimensional and deterministic, and the FMS, which is high-dimensional but doesn't act, it's advisory. We are now trying to move that boundary, trying to start making decisions that are not made by either of those systems, and that's what's painful. Call it AI or non-AI, whatever: the boundary will move, and there will be humans involved; call them whatever, we'll see where they will be in ten years, in twenty years, but we're just moving that decision boundary, slowly.
We'll see how we monitor these humans and what their role will be, but the goal will be to have something robust there, where you have this automation and some human, maybe a remote human, whatever you decide, and we look for the robustness of this team. I think we should embrace this vision, and this vision has always been embraced by airplane makers: from fifty years ago onwards we kept adding automation, and now the big thing is that we want to make decisions in a high-dimensional space, and we have these pains. If you disagree with this, I'm happy to have discussions. Okay, thank you. We are approaching the end of the slot, so we will take one last question and then pass to the next agenda item. The question: AI for contrails is surely not a safety-effect function; it benefits society and the need is fit for purpose, so would it need to pass the concept paper issue 02 scrutiny? So, about the contrail topic: the result of this AI will have an impact on no-fly altitudes, which could compress the traffic at the other altitudes and could increase conflicts. So while there is no direct safety impact, because of the altitude bands that will not be authorized in order to limit contrail generation, you will compress the traffic at lower altitudes, and that could generate safety issues. We have to address the contrail topic in a global context, also impacting air traffic control: we have to develop tools to help the controller manage these changes of aircraft trajectory, and also the pilot, because he has to manage this issue too. Thank you, and with this we are at the end of this panel. It was a very interesting and rich discussion; I would like to thank all the speakers again for having joined us today and presented their use cases. Please give us a minute to rearrange the table for the next agenda item, and then we start again. Yes, and I will call Ines to the floor for the last presentation before our next coffee break; we've put it on purpose before the coffee break, to restart the thinking and the discussion, because the next slot is the panel discussion on ethics, so it will be very interesting to have interaction from the room as well, with the same process for the Q&A session. So, Ines, let's let people take a seat, and then the floor is yours. Okay, ready? Okay. So, good afternoon again, everybody. I have the tough task of keeping you a bit awake, hopefully, and of taking you with me on a half-hour flight on a different topic, which is ethics for AI in aviation. This little flight, this little story, will have several points: I'm going to tell you why we started looking into these topics, what approach EASA took in considering ethics, what the starting point was, what research we did, why we put together a survey that I'm pretty sure some of you filled in (thank you for that already), how we put it together, what the results were (and I think everybody's very curious about the results at this stage), and what the next steps actually are. So, why ethics? Because from the first moment we were dealing with these topics, this really struck us: it's not just about the technical domains, it's much more than that, because this will have implications, impacts and consequences for the humans. So, from the starting point, it was very important for us to
understand the actual perception of the humans who will interact with the AI-based systems. For this, we started to ask: is this ethically acceptable or not, until when, under what conditions? All of these doubts were really the starting point for this work; this was basically the reason why. And how did we approach it, how did we tackle it? For us, at this moment, to be clear: we do not evaluate ethics with the idea that an AI-based system is a moral agent; we are not there. We understand, at this stage of the technology, that an AI-based system serves us as a tool, as an artifact that helps us to perform, and to perform better. So there is no moral agent here; it's not that we have an AI system being an individual entity. That is the approach we took, to be clear for everybody. What we actually did was ask: what is this "ethics", how can we tackle it, how can we measure it? For that, we based ourselves on key ethical concepts, anchoring those concepts to different guidelines, and for sure you heard a lot this morning about the EU AI Act: this is a big, important piece of guidance for us. So we anchored ourselves to the European regulation, we took care to listen to the people, because it's very important to hear the opinion of the professionals who are dealing directly with these things, and we aimed to draft guidance to help you do better, to be safe, and to not infringe any human rights. On human rights, just a quick parenthesis: it is important to see that nowadays we have a third generation of human rights, and this third generation has directly to do with technologies: it has to do with data protection, with bioethics, with transparency, and all of this has to fit together. I'm not going to read the slides from A to Z, because that would be boring, but I have highlighted here some parts of some articles from the EU AI Act that touch on the key ethical concepts we put together and that served as a basis for our studies. Of course, the starting point was the ALTAI, the Assessment List for Trustworthy AI; this is nothing new for you, it goes back to 2019 or so, and we found in it a good basis for thinking about ethics applied to this new scene. The ALTAI gives us the seven requirements; I'm not going to read everything here, but there are some new kinds of animal when you see, for example, "societal and environmental well-being" or "non-discrimination, diversity and fairness". For the systems we are dealing with, this may be something new for us now, but it is still important. The ALTAI also has a huge set of questions at the end, and the team did the exercise of going through all of them, because the goal here is not to invent regulation: we have good, sound regulation, we are proud of it, and we want to maintain it, and to check what we can take from what we already have, not reinventing the wheel, in order to promote efficiency when dealing with ethics. So we took those questions and tried to match them against the regulation.
For this question, we actually have this part of the regulation that answers it; for that other question, we have another part of the regulation that can answer it; we were just trying to match them. At the end, of course, we could say that for some questions we don't have a regulation that fully answers them, and that's why you see, on the lower part of the slide, those objectives in lilac: that was the piece of the puzzle that we added as new, in order to cover the ALTAI questions that were not fully answered by the regulation. That's why we created some new objectives. Of course, we didn't do this alone: we did lots of exercises and a lot of research, and I really want to highlight the research part, because we are all in a learning process and this is just the first part of it; we take baby steps, but sure steps. I want to highlight the partnerships we put together with some important, wonderful people; you just see the team leaders here, of course, but behind these faces are the other members of the teams, and they are very keen to work with EASA, and we are very motivated to work with them; this will of course continue. I thank very dearly, of course, the scientific committee; DLR and their team (I see some faces here already, thank you for all of the work, it's a pleasure to work with you); also Lisbon University, in particular the Faculty of Psychology, Professor Chambel, a big thanks also to her; the Politecnico di Torino; and, internally, the collaboration with some project teams, in our case especially the human factors team, which was dear to us, so thank you for that. Okay, so, as I was saying before, we took all of this contextual and regulatory frame and, considering the ethics, we took some key ethical concepts as our basis. Not forgetting the ALTAI, we came up with this set of ethical concepts: equal opportunities, non-discrimination and fairness; data protection; right to privacy; transparency; accountability; and labor protection and professional development. So what did we do with this? Because you can say: well, but how can you measure ethics? You have beautiful key ethical concepts, but how do you actually do an evaluation? We really wanted the opinion of the professionals, and of course we didn't ask them "hey, what do you think about ethics in aviation?", "what is data protection in aviation for you?", "what are the ethical issues?", because this is all very subjective, right? If I ask each one of you what ethics is for you, I will get as many replies as there are persons. So what we did was design some cases, practical situations applied to aviation, in which we could represent these key ethical concepts. And, considering each situation that you're going to discover in a minute, we asked the professionals to position themselves along three variables: we want to know whether they are comfortable working with an AI-based system, whether they trust it, and whether they would accept to work with it. Basically, these were the three main questions that we sent to the professionals. The situations, then: I bet you're very curious about them.
The situations were created as very descriptive narrative stories: imagine that you are in this situation, then we describe the situation, and at the end we ask how you would position yourself: are you comfortable with it, do you trust it, do you accept it? You have here the first case, pilots' physiological data monitoring, very interesting because you heard, five minutes ago, two or three cases directly connected to this, and I have some news for you, but we will get into that. It is very interesting that everybody is more or less interested in the same types of cases, because it shows these really are the important ones. The second one: supporting pilots in go-around situations. The third: AI-based systems supporting maintenance. The fourth case: airport allocation of airlines to terminals. The fifth case: airline crew members' attribution to flights. Sixth: speech recognition in voice communication (I heard some cases related to that today too). Seventh: the risk of de-skilling. Eighth: new competencies when teaming up with an AI-based system; this eighth case had two parts, and the last part has to do with accountability and responsibility, which we also touched on a bit this morning. So we sent out a survey based on these eight cases, with qualitative and quantitative questions that we then processed, and there were just two conditions for replying: first, being an aviation professional; second, being connected somehow with an AI-based system, interacting with one or being impacted by one. It was very gratifying to receive 231 replies, given that the questionnaire was open for only two or three weeks, between Christmas time and the new year of 2024, which is a very peculiar timing, I know. But it was very interesting that 231 professionals said: we want to share our opinion with EASA. I would also like to say that this was not just a survey of click, click, click, inbox, ten minutes and done: it was a very substantial and extensive questionnaire, more of a working session; it would take a professional at least one hour to go through it. So I dearly thank all the professionals who went through the questionnaire and took at least an hour of their time to have a say to the agency. Okay, let me check my time: half the time gone; when things are interesting, they fly. I have put together some results to share with you, just little tables, boring tables, but I want to highlight some things. For comfort, you see the eight cases there, and I want you to look at the mean, and to keep in mind that everything was rated on a seven-point scale: one is almost nothing and seven is the maximum, so here seven is maximum comfort. What we can see is that we are all in the middle of the path, with a tendency to be comfortable, but not totally comfortable. One thing I would like to highlight very quickly is case seven, the risk of de-skilling: it is actually the one that brings the least comfort to the professionals. Very quickly, for trust, the same exercise shows the same pattern.
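For readers curious what sits behind such tables, here is a minimal sketch of the aggregation, assuming made-up responses: the mean per case on the 1-to-7 scale, plus the share of ratings below the midpoint as a rough non-acceptance rate. The data and case labels are illustrative, not the survey's actual figures.

```python
# Hedged sketch of rating aggregation; the responses below are invented and
# do not reproduce the survey's real results.

import statistics

responses = {                     # case -> list of 1-7 ratings (illustrative)
    "physiological_monitoring": [5, 4, 2, 6, 3, 2, 5],
    "risk_of_deskilling":       [3, 2, 4, 2, 3, 5, 2],
}

for case, ratings in responses.items():
    mean = statistics.mean(ratings)
    non_accept = sum(r < 4 for r in ratings) / len(ratings)
    print(f"{case}: mean={mean:.2f}, non-acceptance={non_accept:.0%}")
```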
People are not very trusting in situations where they see a high risk of being de-skilled, of no longer coping with the new job that somebody was talking about in an earlier speech; and acceptance reinforces the same picture. So, on a very global and quick basis, if you ask what the generic result is: the focus has to be on trust. The lowest results are really in trusting the system, so we will have to work a lot on building up trust. Another thing we asked the professionals was about the need for regulation: do you think this should be regulated? Clearly everybody, or almost everybody, said yes, it should be regulated. We also asked whether EASA could be the authority to do the oversight, yes or no, and at least 50% of the professionals said yes, EASA should have an important role here. Of course, while we are happy to see a tendency towards acceptance, a tendency towards comfort, a tendency to trust (we are not there yet, but there is a tendency), we were also curious, taking acceptance, about the rate of non-acceptance, because we usually take more care about the people who are not keen to accept an AI-based system teaming up with them in a work environment. Here you can see, across the cases, on a very quick basis, the percentage of people not accepting each case. For example, for the pilots' physiological data monitoring, to link back to the cases we were listening to before, 35% of the professionals tend to not accept this monitoring. So, for the colleagues who were speaking before: of course it is one questionnaire, we are talking about 231 replies, but 35% say they have a tendency to not accept those situations. Anyway, what I also want to highlight is the risk of de-skilling again: almost 50% of the professionals would not accept teaming up with an AI that puts their skills at risk. Okay, we also asked the professionals, very quickly, what factors would change their opinion towards "acceptable", and what other issues they would see here. We got a lot of content; this is just an example. I just want to tell you that we got 2,326 comments: people really engaged in telling us what ethical concerns they have. As an example, I'm just going to read one out for the de-skilling case, number seven. Somebody said that it is really difficult to feel safe and capable of doing a task fluently when you don't do it regularly; if you're not doing it on a systematic basis, it's really difficult, so occasional training cannot at all replace a more regular practice, which should be required. It's just one statement, but it really shows that the risk of de-skilling is there and that it's important to consider it. Another thing I wanted to share, on a very quick basis, as I'm trying to do this quick run: as we are talking about de-skilling, what we connect immediately with de-skilling is skills, right, competencies? Somebody said before: well, my job will be a different job, it's not my job anymore, it's something that will be transformed, that will be totally disrupted.
So: I'm not doing anything that I do now, I'm going to have a completely different job; how can I behave, am I prepared for it, what new competencies shall I develop? We asked this of the professionals: we sent a list and asked them to rate the items from most important to least important, and, not surprisingly, we see that general AI knowledge, data literacy, cognitive skills and IT competencies were the competencies people selected as most important in order to be ready to team up with an AI-based system. Then come communication skills, sensory competencies, social skills, and, last, physical skills. It seems logical, but we don't know about the future: are physical skills really not important? I don't know. We also opened the question and asked: what competencies do you see as important to develop? And we came across 25 new suggestions. Of course, the technical competencies are the most important for the professionals, they come right on top, but then we see lots of new, interesting competencies, for example emotional intelligence, dealing with error, interacting with the machine, being resilient, keeping human autonomy, solving problems, knowing about cybersecurity, and having ethical awareness. What is really interesting, and I want to highlight it for you, is that we have a considerable number of new competencies linked to this emotional intelligence part, namely the need for assertiveness and the need for emotion regulation (and here "regulation" has nothing to do with legislation): how to deal, for example, with boredom, because people feel that if a machine is interacting with them, the richness of their job may not be what it was, so they expect to be bored; and how to gain trust in the system. These were the new competencies that were highlighted. Almost at the end: we also asked what other types of initiatives EASA should develop in terms of ethics in AI for aviation, and the people replied: of course, regulation and guidance material should be there (it's our mandate, we recognize it, we are doing it already, but we have to continue doing it); then raising ethical awareness through dynamic activities and written materials; and this connects immediately with the following topic, which is interacting with the stakeholders, more systematically and more directly. This survey showed itself to be one of these initiatives, and we had some people saying: how nice that EASA wants my opinion, it's really interesting, but we can do much more; at least, people say we should do much more. Then promoting training and competency development, initiatives for sharing information and knowledge, and of course the other suggestion was linked to certification, in order to have reliability and safety in the certification process. So, almost at the end, you may ask: but who are these people, who are these 231 professionals? A bit of a mirror of the aviation world, unfortunately for the ladies, yet: 80% men, 20% women, basically, and 62% between 40 and 59 years old. What I want to say to you is that we have here a set of mature professionals, with more than ten years of professional experience, saying that they have a good understanding of AI for aviation.
They are within teams that know, at least at a medium level, what an AI-based system is about; basically, 80% of them work directly in technical aviation domains, 20% are from the NAAs, and more than 75% work directly with AI-based systems. And, just to close the profile, they also feel quite happy, motivated and satisfied with their work, which is good, right? Okay, so, closing my presentation: what are the next steps, what is actually going to happen in the future? This was the first exercise in listening to the people, and we tried to listen to the professionals; this year we are going to launch a similar survey (not of one hour, for sure) to the general public, because we want to continue listening to the people, and this will be one of the big milestones for this year. Considering the results I was just sharing with you, we expect to share the full report with all of the results at the end of August, so maybe you can bring it to the beach, nice literature. And of course we want to keep putting effort into workshops, and into bringing the professionals into EASA more closely and more directly. And that's more or less it. I would just like to close my presentation by thanking these two magical guys I have here, Gil and Axel, the project management team of ethics for AI, and of course I would also like to thank all who collaborated with us, and the future ones who will collaborate too. Thank you so much. Okay, great, thank you very much, Ines. You can contact Ines; you have her contact details on the slide whenever you see it again, exactly. And with that we move into the coffee break. You can still use Slido; we will have the panel discussion, we will organize the table for the next session, and at the end of it we will also have a Q&A session before the conclusions and closing remarks from Alan, our chief engineer. So let's go find the coffee, whether it's at the Boeing stand or here; we will find out. Thank you. Yes, I think we can start, so let me introduce our panel moderator, Jesper Rasmussen, our EASA director for flight standards, and enjoy the panel discussion. Thank you very much, Guillaume, and welcome back, hopefully a bit re-energized by the coffee. We still have one session to go, so I will do my best to make it lively, together with the panel here. We have heard a lot since this morning about the EU AI Act, which also introduces requirements on ethics. I believe this is one of the hot potatoes in what we discuss when applying AI in aviation; furthermore, you could say that for many engineers this is not the usual ball game, or as another saying goes, it's more fluffy: how to deal with it? Well, basically, that's what we're going to have a conversation about. At EASA we don't hide that we need your help: we need help from the survey which Ines just presented, and we need help from distinguished experts from academia and from industry, so that we can shape, in the best possible way, the framework that you will work under in the coming years. Let me shortly introduce the panel, very briefly: we have Peter Hecker from the Technical University of Braunschweig, we have Thomas from DLR, a colleague from a research centre, Fateh Kaakai from Thales,
a colleague from Airbus, and finally Ines from EASA. We will play it this way: we will start with academia, then go to the industry viewpoints, and finally EASA. I will be fair but still a bit tough with the panel: we have agreed to use seven minutes each, allowing time for Q&A afterwards, and that's very important. So, without further ado, I will jump into it, and I think I will start with you, Peter, please. Okay, thank you very much. Great; maybe click one slide further, then we have a few questions there which will help me fill my next seven minutes. We are discussing ethics and AI, and there are several dimensions to how ethics are affected. We've seen a wonderful presentation by Ines earlier today, where many things were approached through the questionnaire, and I would like to focus on data, because data are essential for implementing AI in aviation, and as soon as you start discussing data, you need to discuss data protection and data privacy. Overall, I would like to touch upon three things. One is data protection and privacy: how are data being used today, what will change, and how will they be used in the future? I would then like to talk a little bit about change making, allowing society to agree with what we are trying to do. And then I've hidden my hobby horse in here, which is data for use in science, open science; this will become even more difficult in the age of artificial intelligence. So let's start with the first statement, about which data are being used today. Today we have a very well-defined situation: we have data owners among the different stakeholders (airlines, airports, aircraft manufacturers, whatever we have), and they own data and share data according to a well-defined scheme. Which data are handed from the aircraft manufacturer to the operator, which data are shared within the traffic network: this is very precisely defined, nobody gets data who doesn't need to have them, we all know how and where those data are being processed, and we can be assured that the data are very well protected. So this is a very well-defined state today, and it may change in the future, because in the future we will include data from other sources. Looking, for example, at the air traffic management domain, we are looking into data from social networks; we are trying to analyze messages floating around from airports and from other sources to understand the traffic situation, what it means for an individual flight, what it means for the airline operating a certain flight. So the quality of data will change, and we will lose control of data ownership, because on the one hand industry, and all those who are developing AI-based systems, will need data to produce the system and train it initially, and then, over its lifetime, the system will make use of the data, will learn and train further, and will further condense and work on the data. We are losing track and trace of the data, which means that in the end we have no control anymore; this is a big challenge, and it may be a challenge to acceptability for the citizen.
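One concrete measure behind "well protected" data sharing, sketched here under assumptions: pseudonymise crew identifiers with a salted hash before records leave the data owner, so analysis remains possible without tracing back to an individual. The field names and workflow are illustrative, not any stakeholder's actual scheme.

```python
# Hedged sketch of pseudonymisation before data sharing; identifiers and
# record fields are invented for illustration.

import hashlib, os

SALT = os.urandom(16)            # kept by the data owner, never shared

def pseudonymise(crew_id: str) -> str:
    return hashlib.sha256(SALT + crew_id.encode()).hexdigest()[:12]

record = {"crew": pseudonymise("pilot-4711"), "phase": "taxi", "workload": 0.6}
# The recipient can correlate records from the same (pseudonymised) person,
# but cannot recover who it was without the owner's salt.
```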
morning, and all day, that there is a very close regulatory framework, there is legislation on this, and I am convinced that it sets a very good frame. But we need to ensure that it is really followed up, we need to ensure that society believes us, that we are adhering to the scheme and implementing it, and that they can trust the way data are being used. I think a very good step forward is the questionnaire which has been developed by EASA; Enis in particular has reached out to the professional stakeholders, and I believe it is absolutely important to reach out to society as well, to all those who travel with aircraft within the air transport system, to see what they think about data privacy, data protection, about AI in aviation, in order to understand their concerns and develop measures to really build trust.

And now coming to my last point. As data are a very important element in the aviation community, and they are not easily shared today, it is always very difficult for research, a very big challenge for academia, to access data and do the kind of research which is really significant and will provide the results we are looking for. There is a very long history of approaches where we go to the stakeholders, from EUROCONTROL via aircraft manufacturers to airports and air navigation service providers, to get a consistent set of data which we can use for research and in the end achieve comparable and relevant results. And this will be an even bigger challenge when we look into artificial intelligence, where a lot revolves around training data and relevant data for testing. So I think it will be a big challenge in the future for academia and the research community to manage data in order to develop things, concepts, ideas, and to build knowledge which will find its way into the innovation pipeline. I'm not sure if that was seven minutes, but...

Well, you have been very disciplined, Peter, I think you only spent five, so allowing just a question here before we move on. You focused very much on the data issue. If we look at it from the ethical side, it seems to me that you put a lot of emphasis on public acceptance, on passenger acceptance of the use of data, and hence also of AI. That's how I hear you; is that correctly understood?

Yes, that is one part of the matter, of course, that is relevant. But the aviation professionals who are subject to data handling and management need to be recognised and respected as well. In the breaks we had several discussions about crews in the cockpit and how their data are being managed: if we look into pilot monitoring, pilot state monitoring, there are a lot of open questions about what to do with those data. It is a tradeoff, right: where are we increasing safety, and where are we collecting data which could potentially be misused? So I think it is not only the passenger, it is also every aviation professional.

Thanks a lot, very good points. Let's continue with our next panel member, Thomas from DLR. Thomas, over to you.

Yeah, thank you very much, and thank you for the opportunity to participate here in the panel. I would like to start with a rather general statement: ethical requirements are necessary for the acceptance of AI systems. Very general, to be honest, but I think looking a bit more into the details, and looking a bit more
into the day and the things we have already discussed, this is quite obvious. With regard to what Enis showed us from the survey, there is a real demand from the aviation professionals that some questions be answered with regard to ethical requirements. But also, with regard to the EU AI Act, aviation applications are high-risk systems, so there is a hard requirement for answering some ethical questions. And considering the panel we had on the use cases, we had quite some applications where we have teaming between an AI system and the human operator, be it an air traffic controller or a pilot. Looking into these applications, there are certainly some ethical questions; you have already raised some with regard to the monitoring perspective, and I think that is where it starts. Is it ethical to be monitored in the workplace? There is some discussion on this, and I think this discussion is not finished. From the perspective of the engineer, it is highly desirable to have these data in order to improve systems and potentially improve safety; from the perspective of a pilot, I think this monitoring is questionable. And looking a bit deeper into the ethical topics here, in monitoring biometric and physiological data there is quite some open space for the ethics assessment, for example around unfair bias. If you have a system which is monitoring the gaze or the eye movement of a pilot, then the eyes need to be tracked, and there are currently systems which do not do this with the same accuracy for everyone: you have systems which detect green eyes with higher accuracy than brown or blue eyes. Using this as an input for an AI assistant, you clearly have an open door for unfair bias in your system. So this is an example where it gets a little bit difficult, I would say.

So an important aspect is to find the right balance between ethical requirements and the complexity of the AI functions. What do I mean by this? We have seen that with the concept paper we have the categories of Level 1 to 3, subdivided into A and B, so we have some clear boundary conditions, and right now we see that, for example, with the ALTAI we have a vehicle through which we can address ethical questions. So one question would be: where do we find the balance between applying the full ALTAI question list to, for example, a Level 1A system? Maybe we can see it, if you could click one further. Here we have a bit of a high-level perspective: down below we have the intertwined V-model in grey and the W-model in light blue. The V-model is what we do in aviation when we develop systems, and everyone who is working in industry, or has been, knows that this is the reason why we have safe aviation, but it is also the reason why development is rather expensive in comparison to other industries. Now AI adds another layer of complexity with the intertwining of the W-model, and the ethics-based topics add yet another layer. We see here, on the top left, the ALTAI list; some high-level requirements will be derived from it, and these high-level requirements will in the end break down onto the W: they will break down, for example with regard to unfair bias within the training data, to the left branch of the W, and into requirements with regard to data management.
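Purely as an illustration of the parity check Thomas's eye-colour example calls for, and not code from any of the projects mentioned, a minimal Python sketch with invented data might look like this:

```python
# Minimal sketch (invented data): audit a gaze-detection model for the
# unfair bias Thomas describes, i.e. accuracy that differs by eye colour.
from collections import defaultdict

records = [
    # (eye_colour, model_detected_gaze_correctly) -- hypothetical eval set
    ("green", True), ("green", True), ("green", True), ("green", False),
    ("brown", True), ("brown", False), ("brown", False), ("brown", True),
    ("blue", True), ("blue", False), ("blue", True), ("blue", False),
]

hits, totals = defaultdict(int), defaultdict(int)
for colour, correct in records:
    totals[colour] += 1
    hits[colour] += int(correct)

accuracy = {c: hits[c] / totals[c] for c in totals}
print(accuracy)  # {'green': 0.75, 'brown': 0.5, 'blue': 0.5}

# Simple parity criterion: flag the model if the gap between the best- and
# worst-served group exceeds a tolerance (the 10% here is an assumption; a
# real operational design domain would fix this number explicitly).
gap = max(accuracy.values()) - min(accuracy.values())
if gap > 0.10:
    print(f"unfair-bias flag: accuracy gap of {gap:.0%} across eye colours")
```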
So, depending on the complexity of the AI function, and maybe in the end also the criticality of the AI function, we need to find a balance, and we need to make sure that we don't build up too many barriers to really developing and deploying these systems, but also don't make it too easy, in the sense of negating ethical requirements which are clearly there. So one question for the future, I think, would be: can we derive some kind of subsets, for example from the ALTAI list or other guidelines, which are then applicable for a Level 1 system, with a larger subset applicable for Level 1 and Level 2, and so forth? That would make it easier for the developer to address the correct ethical requirements, where I think most of the people here agree that there are some ethical requirements we need to fulfil. So there is this balance to be found, I would say. Thank you very much.

Thank you very much; you were also well within the seven minutes. As far as I understand, you are advocating a kind of segmentation of the AI application systems in terms of safety criticality, as I get it, all the way from non-critical, I mean cabin entertainment systems, to clearly top-critical systems, where we can apply different barriers; that's how I understand your balancing. So if we have a scale from everything forbidden to everything allowed, we should set the boundary according to the criticality level. Is that correctly understood?

Yes. I think, with the perspective of a systems engineer on the complex task of designing trustworthy AI, this balance between the necessity and, in the end, the cost of developing these products and deploying them into the field, this is the aspect I am advocating for, I would say.

Thank you very much.
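To make the subsetting idea concrete, here is a toy sketch; the mapping of objectives to levels is invented for illustration and is not an EASA or ALTAI position:

```python
# Toy illustration only: each higher AI level inherits the ethics
# objectives of the levels below it and adds its own, so a developer works
# through just the applicable subset. The objective names are placeholders.
ALTAI_SUBSETS = {
    "Level 1A": ["transparency", "data governance"],
    "Level 1B": ["human oversight"],
    "Level 2": ["human agency", "unfair-bias management"],
    "Level 3": ["accountability", "societal impact"],
}

def applicable_objectives(level: str) -> list[str]:
    """Cumulative ethics objectives up to and including `level`."""
    order = list(ALTAI_SUBSETS)
    objectives: list[str] = []
    for lvl in order[: order.index(level) + 1]:
        objectives += ALTAI_SUBSETS[lvl]
    return objectives

print(applicable_objectives("Level 2"))
# ['transparency', 'data governance', 'human oversight',
#  'human agency', 'unfair-bias management']
```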
Let's move on to the third panelist, and that is Romaric, please.

Thank you. So indeed, I'm Romaric; maybe some of you remember I was previously speaking for Airbus, where I was managing the AI roadmap. But as of today I don't speak for Airbus anymore; my colleague Sergey will take care of that, because I have now been appointed director of operations of an AI research centre in Toulouse. I am still seconded from Airbus, but in order really to move on, especially on the topic of certifiable artificial intelligence, we have created a specific AI research institute in Toulouse, called ANITI, where we have something like 300 researchers working, from academia but also engineers seconded from various industrial companies. Our main topic is really to build performant and trustworthy artificial intelligence, because performance we get; trust is a challenge for much of what we want to do, and therefore we are really building up this research centre. So what I have shown there is our contribution on the ALTAI referential. At ANITI we have technical contributions and also contributions coming more from social science, so you have there, and I will give you a view on it, a mix of things which are more technical, which will be linked to the learning assurance, and some which come more from sociologists and psychologists, which are more linked to the ethical topics.

First, looking into the ALTAI, on human agency and oversight, we did a quite interesting analysis, and I think it will resonate a bit with what you did, on the way humans judge machines. It is something we did with César Hidalgo, who came from MIT to ANITI for this activity. The topic was really how we judge machines, and in fact we have biases as humans: we don't judge humans and machines in the same way. We tend to judge machines on their outcomes, on the outcome which is produced, whereas we judge humans on the intention, and sometimes we forgive; we forgive humans a lot if the intention is good but the result is maybe not the right one, which we don't do for machines. So we made an analysis of the way we judge machines, and as a result: if there is a risk of physical harm, we judge machines more harshly; if there is a risk of discrimination, we judge humans more harshly, because we expect humans to care about discrimination, whereas for a machine we somehow expect that it may not really be able to understand discrimination. So we have a different view there, and it is quite interesting to learn about it; there is a book which has been published by César on this topic.

Then, on the topic of oversight, I think we have come a long way. We discussed this topic with Guillaume long ago: initially in the AI Act there was a stop button, so in terms of human agency there was really the requirement of having humans able to stop the system with a stop button. Now we are more with effective oversight by a natural person, which is what we have. But still, I think it is very important that we also think about oversight not only by humans, and here there are tools which we have developed, for example through the DEEL project, on out-of-distribution detection: how to build an efficient monitoring system to make sure that the AI is doing what it should do, and only what it should do. And we know in aviation that the fallback solution is not always a human; even if a human is often a very good solution, there are some cases where the fallback solution is also a system, and this is what we work on, developing new AI methods for monitoring.

In terms of technical robustness and safety, we have developed a lot of tools to improve robustness, linked to formal methods and to the use of, for example, what are called 1-Lipschitz networks; this has also been developed in the DEEL project. The point I want to make here, and it links to what Thomas said, is that we have to balance: if you really want robustness, you may lose some performance, and especially if you go with this kind of 1-Lipschitz network, they are more robust but they are less performant. So when you look at this list, it looks like you want all of the items, but you will have to make tradeoffs between them, because there are some conflicting requirements: if you want something very robust, it will potentially be less performant.
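As a rough illustration of what a 1-Lipschitz constraint means in practice, here is a bare NumPy sketch that rescales a dense layer by its spectral norm; real implementations, such as the DEEL team's Lipschitz-layer libraries, use more careful orthogonalisation, so treat this only as the principle:

```python
import numpy as np

def spectral_norm(W: np.ndarray, n_iter: int = 50) -> float:
    """Largest singular value of W, estimated by power iteration."""
    v = np.random.default_rng(0).normal(size=W.shape[1])
    for _ in range(n_iter):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)

def make_1_lipschitz(W: np.ndarray) -> np.ndarray:
    """Rescale a dense layer so its Lipschitz constant is at most 1."""
    return W / max(spectral_norm(W), 1.0)

W = np.random.default_rng(1).normal(size=(64, 128))
W_1lip = make_1_lipschitz(W)
# Guarantee: |W_1lip @ x1 - W_1lip @ x2| <= |x1 - x2|, which is what makes
# robustness certificates possible. The price is exactly the tradeoff
# Romaric mentions: the constrained layer is less expressive.
print(spectral_norm(W), spectral_norm(W_1lip))  # second value is <= 1.0
```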
Then, covering some topics on transparency: we are definitely developing new explainability methods, and especially we try to move not only to development-time explainability but also to operational explainability, using similar methods. We try to develop concept-based explainability, which means, to take an example: we have a use case, which is by the way a public use case, about detecting the runway, and we built a dataset for that. We want the pilot to be comfortable with the fact that we have detected the runway because we are capable, for example, of knowing that we have the concept of what we call the piano, the piano-key markings you have at the beginning of the runway. If we have well recognised the piano, we know we are not on a motorway. So we want to be able to provide this concept-based explainability, to give more transparency and more explanation to the pilot.

In terms of diversity, non-discrimination and fairness, it is quite a complex topic, I want to say, because if you want something fair, you may lose accuracy. If you want to improve the fairness of your algorithm, to make it non-discriminative, you can do it, but at the cost of potentially losing some accuracy. There is some maths behind that, especially optimal transport fairness measures, which give the best tradeoff between ensuring the best fairness, the best equity, and the best accuracy. So these are things which are quite important.

The last topic also comes from psychologists. I'm not sure we will have this in aviation, but we had it for autonomous driving: we worked with Jean-François Bonnefon, in a collaboration that also involved MIT, on the moral machine experiment. This was really the big survey to analyse the challenges for autonomous driving, with questions like: should I rather manage the safety of the driver or the safety of the pedestrian? These kinds of dilemmas, these scenarios where you don't know what the system should do. And there are ways that psychologists build such surveys to really educate, including regulatory bodies and agencies, and to help find the moral norms that should be applied to such dilemmas. And, finishing, I'm a bit over the seven minutes: my conclusion is just, be aware of these conflicting objectives ahead, because you cannot have all of them; you have to find the right tradeoff between all these objectives. Thank you.

Romaric, thanks for pointing out some of the dilemmas, balances and tradeoffs that we have here: we cannot go for the ultimate solution without losing something else on the way. I think that will come back in the discussion. Thanks a lot. And at the same time you are the perfect bridge builder from academia to industry, since you are yourself that bridge, you could say, and that leads to the next speaker, Fateh from Thales.

Thank you, Jasper. I will start this short talk by reminding you that, from an industry perspective, AI is a powerful means to develop advanced automation in some fields where traditional techniques were not so performant. We can mention, and it is not an exhaustive list, developing computer vision systems to do object detection and classification, or natural language processing. Of course, this powerful automation means comes with some concerns, and among these concerns we can mention many things that have been introduced in the presentations today: lack of explainability, data quality, generalisation capability, data and concept drift. From a use-case point of view, we sometimes mention a didactic use case, where we can imagine an AI system that is developed to provide assistance to the cabin crew in order to manage an emergency
evacuation due to, for instance, fire or something like that. Imagine that the database that has been used to train this model contains various information, in particular what relates to the travel class, so economy, business and first class, and that due to a spurious correlation during the training, the priority of the evacuation becomes linked to this travelling class. You would produce a situation that of course clashes with the ethics rules. And we could have many other examples in the aviation industry.

There are also some concerns regarding security, and we can mention deepfakes, which are a sophisticated attack vector; maybe an example will be better than long sentences, so I don't know if you can click on the speaker. Imagine you are on a plane, travelling, listening to your favourite music or watching your favourite movie, and you have a pilot announcement like this one: "Ladies and gentlemen, this is your captain speaking. I hope you're having a comfortable flight. We have encountered a minor technical issue with the cabin's temperature regulation system. To address this, we need to redistribute the air circulation throughout the aircraft. I kindly ask all passengers to temporarily move to the back of the plane, where the temperature control is more stable. This will help us manage the situation more effectively and ensure your comfort. Please do not solicit the cabin crew during this process, as they are attending to other important matters. Thank you for your cooperation and understanding. This pilot announcement has been generated by an AI tool to illustrate the risks related to deepfakes." Okay, so you can see the impact on the load balancing of the aircraft, and also, of course, on the airworthiness. This is just to illustrate the fact that we will face very sophisticated attacks like this one, and it is really a concern for the deployment of AI systems in aviation.

What is important to highlight is that today AI ethics is seen as a big umbrella under which we can find a lot of things, and in particular foundations that are already covered by existing regulations, like safety and information security. So, to be able to implement in an efficient way the ethical requirements that we have in the ALTAI document, in the EASA concept paper, or in future regulation, it is very important to identify what is already covered by existing regulation, standards, etc., and what is not. And where there are areas that are not fully covered by existing regulation or standards, it is important to have either a risk-based approach or a performance-based approach in order to manage the gaps. The goal will be, of course, to implement or address these new ethical requirements by design, in order not to consider them just as an add-on. What is also very important is proportionality: the goal is not to overkill the business, so it is very important, through this risk-based or performance-based approach, to have an implementation of these ethics guidelines that is proportionate to the risk or the performance.
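A hedged sketch of the kind of pre-training check Fateh's evacuation example suggests, with entirely made-up data: before training, look at whether a feature that should be irrelevant to the intended function is correlated with the label.

```python
# Invented data: does travel class predict "high evacuation priority" in
# the training set? If the rates differ sharply, a model can learn travel
# class as a proxy -- the spurious correlation described above.
from collections import Counter

rows = [
    # (travel_class, priority_label)
    ("first", "high"), ("first", "high"), ("business", "high"),
    ("business", "low"), ("economy", "low"), ("economy", "low"),
    ("economy", "low"), ("economy", "high"),
]

totals = Counter(c for c, _ in rows)
highs = Counter(c for c, p in rows if p == "high")
rates = {c: highs[c] / totals[c] for c in totals}
print(rates)  # {'first': 1.0, 'business': 0.5, 'economy': 0.25}
# A gap like this calls for rebalancing the dataset or removing the feature
# before the model is trained, rather than patching the behaviour afterwards.
```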
And of course here also there are tradeoffs, and I think we are quite conscious of that: it is important to maintain a balance between innovation and the ethical requirements. Thank you.

Thank you very much, Fateh. Like Thomas before, we hear you too using the words of balancing and weighing, and of trying to avoid duplication: where we have an inherent risk, of course, we already have a lot of regulation in aviation, so any new regulation on AI really needs to add value. Thank you very much. Let's proceed with Sergey from Airbus.

Yeah, thank you very much, I'm very glad to be here. So the question of this panel is: should aviation embrace ethics? And I will try to look at it from a wider perspective, maybe, and I think it will incorporate many things which have already been said and mentioned. First of all, I am working as part of a cross-divisional initiative on how to implement and apply the EU AI regulation and go beyond it towards AI ethics. I am focusing on the ethics part, and I have colleagues focusing on the EU AI Act. We do this not only because EASA will implement it for onboard systems and critical systems, but because we develop a lot of AI for our internal processes, which are not covered there and will fall directly under the EU AI Act and regulation.

So, should we embrace ethics? I would say that actually the aviation industry has always embraced ethics, but it is very uncomfortable for engineers to speak about this soft topic, so it was never spoken about as such; we focus on the technical aspects and on safety. But if you look in the philosophical literature, of course human lives and safety are ethical topics, so the systems we have built are built in a way that protects certain values which are important to us. Ethics has always been done, but instead of speaking about this kind of strange concept which cannot be taken in hand so clearly, we speak about requirements, technical design assurance, failure modes and so on, and it can be made measurable, which is very important. So aviation has always embraced ethics in some sense, but now we are forced to, let's say, take it out of the closet and look at it more directly, because AI is somewhat different. AI has a dependence on data: the modern AI types are data dependent, data is being generated and can be evaluated and used for different means. The public perception is different, because people are now using AI in their everyday lives and understand what it can do, and new risks are being taken. So somehow we now have to be more explicit about it. It doesn't mean everything has to be turned on its head; it just means we need to understand in what sense we have already taken care of it, and what the really new risks coming in this area are.

Also, what I have learned in discussions with many academics is that it is very hard to circumvent ethics, not to do ethics, because for many of the technical decisions in the AI domain, irrespective of whether it is in aviation, in banking, or in many other domains, if you just say, well, I am not considering ethics, I am considering just a technical solution, then by making technical choices in your pipelines, in your data selection, in what kind of algorithms you use, you have taken ethical decisions by the way, and you are promoting, protecting, or maybe violating certain values without being aware of which values they are. So you cannot completely step back
and say, I am focusing on the technology, so maybe somebody else has to take care of ethics: you have already made an ethical decision.

So now the question is, okay, what is so special about AI? Well, it is this black-box nature, the advanced decision-making capabilities which might come up, the question of whether we can do our oversight, and the data, of course. A lot of questions have been mentioned here, and a lot were raised during the previous panels, with people asking how we use these tools, and I think everything which has been said so far is compatible with the EASA concept paper, which already takes care of gaps and fills them in with the ethical topics.

Let's look at some examples. One example, which is not from a critical system, and which I think Airbus has published, is that we now have an assistant based on generative AI, so really on close to state-of-the-art technology. We have standard operating instructions, which are instructions for human workers on how to assemble parts during aircraft assembly, and these are PDF documents. We assume that the human worker teams know how to build it by heart, but if they have questions they need to look it up in the PDF document. Instead of a PDF, we now have an assistant: you could imagine a smartphone speaking with you. You can ask what you should do in the next step, and the smartphone will tell you, okay, the next step in your process is to assemble the following part. You can ask what the torque value to be applied is, and instead of a table you get the torque value to be applied. It is purely an assistant tool, low level; it would be something like Level 1B, if you were to classify it. And the first questions we get when we explain it are ethical questions. Most people ask about responsibility: who is responsible in case of mistakes? What will happen with the data? You have interactions between the users and the system, so now you have data on how the workers are asking questions, how many questions, or whether they are not asking questions at all. These are the first questions we get, and we need to find answers to them, and this for a system which is not even as critical as some that have been presented here. One part of the answer, of course, is that the humans are responsible, and we need to make that explicit, to train people, and to have rigorous validation and testing of the technology we use, and provide some assurance on it.

So, thinking about ethics, what we are thinking is that it should be a kind of steering wheel in the development of AI, and not a brake. We don't want to stop AI development; instead we want to focus on developing better AI products. I am thinking about AI for internal tools, for example, and we want to speed up its adoption, so that people are comfortable using it in their daily work life and agree to it, and by using it extensively we can reap the benefits: we actually get the business value, and we get the purpose, enhanced safety maybe, and so on. So how do we do it in practice, how do we in some sense implement ethics and take people's opinions into account, without creating a process which somehow strangles innovation? Here we are investigating an ethics-by-design process.
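For illustration only, and not Airbus's actual system: the retrieval core of such an assistant can be sketched in a few lines. The instruction text, part numbers and torque value below are invented, and a production assistant would add a generative model, validation and logging on top.

```python
# Minimal retrieval sketch: index the steps of a (hypothetical) standard
# operating instruction and answer a worker's question with the closest step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

soi_steps = [
    "Step 12: position bracket FX-21 against frame 47 and insert the bolts.",
    "Step 13: torque the four M8 bolts to 25 Nm in a cross pattern.",
    "Step 14: apply sealant PR-1776 along the upper flange.",
]

vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(soi_steps)

def answer(question: str) -> str:
    scores = cosine_similarity(vectorizer.transform([question]), index)[0]
    return soi_steps[int(scores.argmax())]

print(answer("what is the torque value to be applied?"))
# -> "Step 13: torque the four M8 bolts to 25 Nm in a cross pattern."
```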
There are many proposals which have been released in academia, and we have worked jointly on a white paper with the AI for People institute, where the idea is to take ethical considerations into account from the beginning, of course after taking in the legal implications. So if EASA provides us requirements on products, the first things taken into account are the legal implications coming from outside, from the EU AI Act, from EASA, from any sectorial regulation; but then you look on top, you make sure there are no overlaps, and you also look at specific requirements coming from ethics. You maybe do a light assessment at the beginning against the ALTAI, and then you have a harder, longer assessment prior to deep development, and there you also take into account, early on, the possible values of your end users, to make sure that you anticipate in the design process how the system will be used, and that people will be comfortable using it and will actually use it. We can imagine it as an extension of something like the risk-based approach which is now common in the engineering field, where new types of risks and failure modes are identified and you try to proactively address them during the development process, with mitigation actions proportionate, of course, to the risk level, so you don't do something very expensive for a low risk.

Thank you very much, Sergey, and thanks for reminding us and clarifying that ethics is not a new animal; it has always been around, possibly also historically with the steam engine and so on, if they ever discussed it. We highlight it now because we have this new thing called AI, and suddenly we take it apart a bit and look at this ethical dimension, but you have actually contributed to demystifying it a bit; thank you very much for pinpointing that. And I will now give the word to the final speaker, who is Enis from EASA.

Okay, thank you. All right, so: more questions than answers. I put some topics on the slide that I would like to share with you; maybe some worries, some food for thought. The first has to do with the risk of de-skilling, as I showed you earlier, and this is a very important topic for the professionals. In fact this was based on a case that we designed. We asked the people: imagine, for example, you are an ATCO and you have an AI-based system interacting with you, helping you do your job, but more and more this system gains autonomy, and basically you are not doing anything but supervising it. Imagine this type of situation, and imagine that you have a cyber-attack situation, the system is not working anymore, and you have to jump in and handle the emergency. How would you feel? And the people would say: no, sorry, I don't think I would be ready to perform; in fact I would already be de-skilled, because I stopped performing my job. It is like us, right, taking selfies with these things: fifteen years ago we would use a camera, and if I put a camera in your hands now, do you still know how to use it? Maybe, but it will take time, at least for you to say, oh yeah, I have to push this button and check the light and the distance, and so on and so forth. That is just a small joke, but the thing is that people will not feel prepared to jump into an emergency situation after spending all this time not performing. And we also asked, okay, so what are the mitigations for this? And the first thing that occurs to
us is training, but people say to us: no, I don't think training will be sufficient, enough to have my behaviour totally aligned with the needs of the situation in order to perform correctly and successfully. So the risk of de-skilling is there, and we have to tackle it, and tackle it well.

The second thing I wanted to bring to the table today is new competencies, and especially this emotional intelligence. People say, oh, but this is so far away from technology, and so on and so forth. But we also worked on a case where a pilot has an AI-based system interacting with them and kind of monitoring the communication, for the purpose of identifying miscommunication. Imagine that you are not an English-native-speaker pilot, imagine that the pitch of your voice is low, and so on and so forth, and then you have the AI system saying "alert", and then ten minutes later "alert" again. Now, how does that feel? Well, you feel insecure: am I doing right or wrong, why is this bothering me? Because it is bothering you, yeah; it is getting into your emotions. And at the end of the day you are tired, and you are even thinking, well, am I performing wrongly, am I a good or a bad professional? All of these things are linked to emotional intelligence, and it has an impact on safety, because in the end, if we are not performing well, or if our competence to deliver our job is being put in question, this has an impact on safety. So, food for thought; we will have to go through this as well.

And last but not least, the topic of responsibility, responsibility and accountability. For example, imagine a situation, and we also discussed this case, a maintenance situation, where the expert has some sort of device that can check the soundness of the structure of the aircraft, and you have a kind of light scheme, red, yellow, green, and the device just does the evaluation of the structure. At the end you only have to sign off and say: it's green, it's good to go, the structure is sound and safe. And we asked: well, are you ready to sign off? Will you take the responsibility, are you accountable for that? And this is important to discuss here as well, because people would say: well, maybe I would accept to sign off the fact that the check was done, but I will not be responsible and I will not be accountable for the result, for the quality of the check, you know what I mean. So lots of these situations will come up in the aviation world, and we should tackle them as well. Just food for thought; some situations will come up, and I just gave you some examples. Of course they are not so technical, but I think they are pretty human, and we are all human here. So, food for thought, and that's it, and I think that was quick, no?

That was very quick, Enis, thank you very much. Now, things are reappearing in different settings with new wordings, and as some of the panelists have demonstrated, things are coming up again and they are perhaps not that new. De-skilling reminds me of the ongoing work in ICAO under the heading of automation dependency, which is actually the same discussion, for pilots, following the Boeing 737 MAX accidents: what can a pilot do when there is more and more automation, AI or not, in the cockpit, and how much should he or she do? So things are coming up, and perhaps they are more inherent to aviation safety in
general, and thus not only to AI as a special new technology. All right, thank you very much for the input. Now there is definitely food for thought, and also food for questioning; please use the Slido as much as you please. I will start out, taking the liberty as chair, by asking one or two questions to the panel, and everybody can kick in as they want. What do we have to regulate as EASA? And, extending it a bit more: how much should we as a regulatory authority take the pen and write things down in hard law or soft law, and how much should we bounce back to industry in the form of industry standards and so on? Fateh, say, you and me are co-chairing one of the working groups in EUROCAE: how much can industry embrace, also in the soft field of ethics, making charters, companies agreeing on a way of working, and how much should we do for the entire industry? I think this is an open question, so this is just my question to the panel, and whoever likes can raise the hand and take the word. Thomas?

So maybe I go first. It is an interesting question, and I would tend to say that the lead for ethical questions which need some answering should lie with an agency which has the task of fulfilling this monitoring, because in the end, in industry, there might be some tendency that earning money, the profits, is also something which is very important, and sometimes it might be the case that ethical questions are not at the same priority. Without implying anything, I think it is good practice that these things should be addressed by some neutral body, and this could be an agency.

Thank you very much, and now, this was perhaps also a small provocation to the industry representatives; let's hear what you say to that.

Maybe I can jump in. So, regarding your question, do we need new or more regulation, and who can support this work: first, what we have seen this morning regarding the presentation of the EU AI Act is that some decisions have already been taken to not authorise the usage of AI in some fields, and I think from this perspective there are already safeguards in the AI Act to deal with the ethical aspects regarding social scoring and all that was presented this morning. So we have a first level that is already in the EU AI Act. Secondly, what is interesting, when we ask the question about new regulation or new standards, is first to ask: is this really a new problem, or are we looking at the same problem from different perspectives? If I take an example regarding unfair bias: we have seen through the examples we discussed that we can have bias in the data, and if it is not managed properly this bias can lead to a bad model and unintended behaviour. But we could have the same with a specification: forget machine learning and AI; if we have a bad specification for a system, one which introduces some biases, the system will be developed according to the specification and these biases will be present in the system. So actually it is a question of how we validate the source of truth, which can be the specification on one
hand, or the data on the other hand, and it is a question of the capacity to detect these biases in the source of truth. It is true that from the specification point of view we are used to validating requirements and detecting requirements that are not acceptable; for data, this raises more challenges regarding the verification means. But is it a matter of means, or is it a matter of regulation and new objectives? That's the question, actually.

Thank you very much. Let me just elaborate a bit more: some are saying that in Europe we have regulation, and in the US, perhaps China, they have innovation. Well, it is perhaps meant as a joke, perhaps not. What I hear you saying, Fateh, is that yes, some regulation is needed, but be very prudent, don't go too far, discipline the pen, don't let it draft too many pages or go too deep. That's the message here. Sergey, will you also contribute?

Yeah, I would fully agree with that. I mean, we always hear this very common comment: okay, we can regulate very much, but what about innovation? And the question would be: what is the goal of innovation which is completely uninhibited? Do we really want to live in the world where that happens? I'm not sure; I'm not the one to give the answer, every person can answer it, but maybe we should ask the question: how can we speed up and create the right systems, the ones we want to have? Concerning the regulation: looking at aviation as a system, concerning not only machines and tools but also the humans in it, and given the mandate of EASA on safety, one can ask what the domain of safety is, and, now with AI, exactly the question of what new has come on top of what is already there. If really something new can be identified which would impact safety, because we have not taken it into account, then maybe a regulation is needed; but still, of course, we need to make it as prudent as possible, and be able to innovate here as well.

Can I, sorry, just add a note? Because I think regulation can support innovation: if it is well regulated, people will have trust enough to put effort, money and work into it. So I think regulating brings enhancement; it will really support and reinforce trust, and I see the investors investing a lot in innovation because they feel safe and trust it enough to put their money on it. That's how I see it.

Thank you, Enis. Peter?

Well, if I may add another dimension: I think it is not only about regulation, it is also about education, right? It is important to raise awareness, to teach young engineers and people who are joining the aviation system what the background is, and I think there we still have something to do. We are very well used to educating in the usual engineering domains, but some things still need to be done.

Yeah, exactly, if I may add, and maybe I have not mentioned it: I think this is very important, the cultural change in the development field. The developers, the AI developers and engineers, also need some little cultural change to embrace this topic actively, and I think taking it up in education, and I know there are programmes on technology ethics and AI ethics in many universities, in Toulouse and so on, would be really helpful.

Thank you very much. Now, as you can see, the Slido questions are starting to pop up; just give me a second to review. Let's take the first one: how
can it be verified that the AI is trained without any discrimination? Who wants to take that?

So, there are different methods which can be used to detect biases which are not expected. I mean, bias is the essence of machine learning: you learn from correlations which you find in the data, and usually the discrimination is not coming from the algorithm itself but from the fact that in your dataset you represent the world as it is, whereas the world as it is might embrace some discrimination. So you expect to have something which is not discriminative, but if you rely on existing data you will project the world as it is, with the discrimination that goes with it. The approach is that you have methods to really identify bias in your dataset, so that you are aware of this bias and can potentially correct it. I was mentioning optimal transport, which is one of these methods that will help you correct your dataset, to make sure, for example, that you will get the same answers for male or female; but it could also be other kinds of discrimination that you want to avoid. You will have to define the different variables for which you want to make sure that you get similar answers, and then you can use such methods to recalibrate your model, somehow, to ensure that you have optimal fairness with your algorithm. So that's at least a technical answer, but not everything on this topic is linked to technology either.
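In one dimension, the optimal-transport repair Romaric alludes to reduces to quantile matching, which makes it easy to sketch; the scores below are synthetic, and the two-group setting is a simplification of the general method:

```python
import numpy as np

# Map each group's model scores onto the barycenter of the two score
# distributions (the average of their quantile functions), so both groups
# end up with the same score distribution at minimal total displacement.
rng = np.random.default_rng(0)
scores_a = rng.normal(0.6, 0.1, 1000)  # synthetic scores, group A
scores_b = rng.normal(0.4, 0.1, 1000)  # synthetic scores, group B

def repair(scores: np.ndarray, other: np.ndarray) -> np.ndarray:
    ranks = scores.argsort().argsort() / (len(scores) - 1)  # empirical CDF
    own_q = np.quantile(scores, ranks)    # each score at its own quantile
    other_q = np.quantile(other, ranks)   # the other group's same quantile
    return 0.5 * own_q + 0.5 * other_q    # 1-D Wasserstein barycenter

print(np.mean(repair(scores_a, scores_b)), np.mean(repair(scores_b, scores_a)))
# Both group means move to ~0.5: the groups now share one score
# distribution, at the cost of shifting individual scores -- which is the
# fairness/accuracy tradeoff discussed above.
```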
Thank you very much. Anyone else? Fateh and Thomas.

Yeah, maybe to complement what Romaric said: I think a good approach also is to not consider the data as the unique source of truth, and to, let's say, reconcile the data with the initial functional intent. In the initial functional intent, we expect that discrimination, or any other bias, will not be part of the intended function, and by defining a link between this initial intent and the data you collect, or the data you generate, you can first ensure that your training data will have fewer biases; and also, regarding verification, if you verify your end product, your model, against this initial functional intent, you will have some means to detect undesired biases that have been introduced during development. So data is not the unique source of truth: the source of truth is what is intended initially by the developer, the designer, and also the customer or the stakeholders who are at the very beginning of the process.

Thanks. Thomas?

Yeah, so I would like to come at this a bit from the process perspective. With the example I mentioned of eye tracking, with regard to detecting whether an operator is tired: we worked on this in our project LOKI, where we are developing an AI assistant for air traffic controllers, and there we encountered this situation where we realised, okay, different eye colours are detected with different accuracy. This was really surprising to us, because we expected this to be state-of-the-art technology that we would just use, and use its output as training data. Now, having the awareness, which you also mentioned before, of the implications on these ethical topics, I think at the process level it is something which can be covered in an operational design domain for an AI system: you could really state that the different eye colours should be detected with similar accuracy, and then in the end it can be verified with the different scenarios you lay out for testing it. So having the awareness, and combining it with a decent process, is another perspective to look at it, I would say.

Thank you very much. Looking at the two next questions on the Slido, they address, in various ways, in which perspective the ethics should be seen. Let's take them separately, although I think they are somehow connected. Can you give an example of an ethical problem in aerospace that is not first an engineering problem, and wouldn't be solved by solving the engineering first? I think you have touched on it already, but any additional comments on this?

Yeah, so, as a matter of fact, the very example I was mentioning was at least a surprise to us. With the expectation that a basic task like eye tracking, which would be so relevant for any AI assistant, would just work, in the end it was surprising to see that at the starting point you already have these implications. Obviously you could state, well, if the eye-tracking system were just better, then you wouldn't have these implications; but, also with the use cases we discussed before, I think many engineers tend to take these systems you can buy for granted and use them as a baseline for further developments. So it is, at least for us, an example where it was really surprising what consequences there are in the long run for developing the system.

Thank you. Peter?

Yeah, taking up that question, I think there are two elements. One is that you need to differentiate what the concerns are regarding the automation itself and what AI really adds to that; I think sometimes these two elements are really mixed, and we can also, with a simple automation system, enter into discrimination and ethics issues. And secondly, the question is which function, for example, could not be solved by engineering. Considering, for example, the measuring of vital parameters on board, in the aircraft cockpit, so monitoring the crew: this can of course increase safety, but on the other hand the data can be misused, and then you might consider having a bulletproof way of processing the data and not releasing them to any other authorities or whoever. But I think this is not really a technical issue and could not be overcome by technical means only.

Thanks a lot. I think, let's take on board the following question, which is closely related: as this concerns AI systems, does this mean systems without AI don't have to consider it? It is a bit provocative; what is being asked here is how specific this is to AI, or whether it should be handled elsewhere, you could also ask, in the regulatory framework, or in the way we deal with the technology and specifications. Any additional comments from the panel?

Well, first of all, many of the laws we have are based on ethical principles in the first place, so I think many of the problems we have with any system have already, of course, been taken into account by the laws we have, because they protect us at the regulatory level. So why AI now? Because AI brings something to the table which might not have been regulated before in any way, before the EU AI Act. There are possible scenarios, and because we have seen examples of systems, I think
there is one example of systems which have not been regulated, which are now there, and which we cannot take back, and it is usually connected to the social networks we have. The impact of social networks has changed many industries, all of advertisement and publishing and so on; you cannot take them away, but the impact they have, and the risks which have arisen with them, are such that society, and I am not even sure the companies, did not anticipate them. So there is an example where regulation was not fast enough. And then we speak about ethics: of course we have to consider it also for other systems, but there the regulation was fast, and it is covered, I would say.

Thanks a lot. We can move to the next one: are we moving towards autonomous ground ops, and where does this fit under the regulatory framework? Guillaume, you step in, please.

Yeah, maybe I will step into the panel, sorry about that. I would take this question a bit more as a generic topic: as we mentioned, we are really dealing with AI at large, and any domain impacted will benefit from it. How we move towards ground ops, let's say within the aerodrome framework and regulatory framework, is really based on the rulemaking task; we will see exactly the level of proportionality, and the level of safety assessment we need to inject, to understand whether it is a safety-related application or not, etc. So I think it doesn't escape the big picture that we drew this morning, and the same goes for ethics; I didn't see a specific ethics point in this one, which is why I'm kind of jumping in.

Yes, right, but it had seven likes, so somebody was keen, and we took it in as an exception. All right, I think we should also leave room for questions from the floor; we have at least the microphone here, so please raise your hand if you have a question that you want to elaborate a bit more than you could do on Slido. We have one here, please, from the FAA.

Yeah, I have a question, not from the FAA point of view but from a personal point of view, as an academic professor in my previous life. Just imagine, and this is human learning now, not machine learning, that I was teaching a class and I decided to write an exam to test my students. After giving out the exam, I discovered that people with blue eyes passed it at a higher rate than people with black eyes. Do I have to do something to make sure that both groups have equal performance, by revising my exam, or do I set a minimum performance, like 60%, that will be considered sufficient?

All right, so now beware what the answer may be; I'm very keen to hear it.

Can I ask just one question: would 60% performance be enough? That's normally the kind of level we have in US universities, yeah? Okay, but if we apply this to aviation, what is the level of safety that we want to embrace?

I was asking in the context of human learning.

Yeah, and could there be an unknown discrimination? That's the scenario here.

No, I mean, for me, just a general comment: it is a good testimony of expectations that may differ between humans and AI. As soon as you start to have an AI system, you will probably raise new requirements in terms of fairness which you might not raise if a human is doing the job. And it raises a lot of questions, because it means we ourselves have biases about what we accept, and that's a bit the story
of how you judge a machine. I mean, professors, we might not ask, and this is happening at the moment, because professors are starting to use generative AI in assessing some of the results, and then the level of scrutiny of what they get from the AI system is higher than probably what they applied to themselves as human beings. So I have no direct answer, unfortunately, but it shows that we expect different things. And Enis's comment was very interesting in its link to critical systems: for critical systems, human performance is often not enough, so we also have to go beyond it, to make sure we regulate for the right level of safety. But yeah, this is what I can say.

If I may: what does it mean? You could either lower the standards of ethics at the machine-learning level, or you could increase the standards of ethics for the human being; those are the two ways I would see, and then one has to decide.

Can I be more specific? Let's consider students in medical school; they have to pass some sort of licensing board. Would we consider lowering the requirement so that everybody can pass, so that people of all eye colours pass at the same rate, or do you feel comfortable with establishing a minimum performance for medical doctors, so that they can treat you?

Yeah, I think the question is very much situation-dependent, and we know that ethical values or principles can be in conflict, one being safety, for the medical profession, and the other being fairness. So there we would ask the question: what is the right measure of fairness, and what do we mean by fairness in this situation? And maybe in this situation we would decide, explicitly stating it, so the ethical part would be to state explicitly that in this situation we actually care very much about safety and the future doctors, and we have to admit and embrace maybe some unfairness on some particular properties, maybe not on others. So it is very much case-dependent, and it is true that there is always a tradeoff. Even going back to AI, and even in AI development, there are examples, and we know examples from big companies, where a focus on some particular aspect of fairness may have lowered the actual factual accuracy, which is not desired for the system. So I think the tradeoffs have to be clearly stated, and it has to be made explicit which decision has been taken and what the system is doing.

Thank you. Let me move on to a Slido question with seven likes; actually, Thomas, it's for you. She can raise it herself if she wants; she is in the audience. Ah, she is in the audience, okay, please.

Okay, so I think both of my questions are related. The one voted up to seven is related to the first one, about data privacy and data protection. At this point, I was wondering whether the existing processes related to data management, for example anonymisation for some use cases, can be enough to cover some of the ethics statements. And this relates to the second one, which is more general, on ethics: do you think that with the W-shaped learning process most of the ethics requirements are already covered, so that if people follow the statements in this process they can have a more ethical AI, or are there more statements to be defined, or are some important things missing?

Yeah, so maybe I will focus my answer on the
second part of the question. I would certainly hope, and to a certain degree expect, that the existing W already gives quite some answers, because the complexity is quite high as it is. I would say that, with regard to the ethical requirements, the challenge is to translate them into technical requirements, and these technical requirements need to be traced, or worked, through the W: starting with an operational design domain, for example with regard to eye tracking, stating that the correct colours are being trained, and then in the end verifying with the correct scenarios to prove it. So the challenge is hopefully not to make it more complex, but really to find good solutions for translating the ethical requirements, and maybe sometimes also the dilemmas, into technical topics and requirements. Thank you.

On my side, I will take more the challenge that anonymisation is not always enough. It has been proven in many studies that, due to the "beauty", in quotation marks, of machine learning, you can sometimes recover information even having anonymised the data. So anonymising the data is for sure a first good step; after that, there is research on privacy-preserving machine learning techniques and technologies, one of the most common being differential privacy. But the challenge, once again: if you use differential privacy, you will lose a bit of accuracy. So once again it is this interesting conflict where, okay, you are improving the privacy aspect, it will be stronger and even more difficult to come back and disclose any private information, but you lose accuracy. So that's the kind of inherent tradeoff.
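A minimal sketch of the differential-privacy tradeoff Romaric describes, using the Laplace mechanism on a bounded mean; the data and the epsilon values are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic, bounded data: delays in minutes, clipped to [0, 60].
delays = rng.exponential(10.0, size=500).clip(0, 60)

def dp_mean(data: np.ndarray, lo: float, hi: float, epsilon: float) -> float:
    """Epsilon-DP release of the mean via the Laplace mechanism."""
    sensitivity = (hi - lo) / len(data)  # max effect of one record on the mean
    return float(data.mean() + rng.laplace(scale=sensitivity / epsilon))

print(delays.mean())                         # true mean
print(dp_mean(delays, 0, 60, epsilon=1.0))   # mild noise
print(dp_mean(delays, 0, 60, epsilon=0.01))  # strong privacy, poor accuracy
```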
Maybe to comment on that: it goes back to what you mentioned before with regard to the different levels and whether they match other industries, and I think part of the answer, or part of the comment, also lies there, because aviation does not stand alone with regard to these dilemmas and these problems. It really makes sense to look out to other safety-critical domains, like medical applications, rail or automotive, and really try to find some common ground between these safety-critical industries.

Thank you. And by the way, as soon as you touch automotive, we are multiplying by a factor of ten in terms of size, so that is, in a literal sense, a vehicle you can jump into and follow. It is difficult when you are one sector, automotive or aviation or maritime or whatever, because we are organized in silos, so we need to find some common ground, and that is what ideally the EU AI Act is trying to do by covering everything. Now, what remains for us to discuss here is what we need to add for a safety-critical industry like ours, and whether we can find inspiration, that is how I hear you, Thomas, from similar sectors: nuclear, automotive, whatever.

I can only agree with that. On our side, we do work with railway, with automotive, with nuclear, and we find a lot in common when we are building the trustworthy AI methods, because with trustworthy AI methods we really share the same objectives. We defined, for example, a use case on detecting a runway, and we have people from automotive as well as railway working on this use case, because they are learning from it in terms of explainability methods, robustness methods, out-of-distribution detection and conformal prediction. At all these lower levels we really can work together across industry sectors. I also want to say that, with the work EASA is doing, others are looking a lot at what we produce to answer EASA's objectives, and when I say others, I mean automotive and railway, because EASA has invested quite a lot into the topic, maybe more than some of their agencies, so they are really learning from, but also contributing to, what we do, and I think it is quite a good approach. Thank you very much.
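Since conformal prediction was named as one of the methods shared across sectors, here is a minimal sketch of split conformal prediction for a regression model. The calibration data and the noise model are made up for illustration, and the method is model-agnostic: any trained predictor can stand in for the stub below.

```python
import numpy as np

def conformal_half_width(cal_true, cal_pred, alpha):
    """Split conformal prediction: the (1 - alpha) quantile of calibration
    residuals gives a prediction-interval half-width with guaranteed
    marginal coverage, whatever the underlying model (illustrative)."""
    residuals = np.abs(cal_true - cal_pred)
    n = len(residuals)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)  # finite-sample correction
    return np.quantile(residuals, q_level)

rng = np.random.default_rng(1)
y_true = rng.normal(0.0, 1.0, size=500)            # hypothetical calibration targets
y_pred = y_true + rng.normal(0.0, 0.3, size=500)   # stand-in for any trained model

hw = conformal_half_width(y_true, y_pred, alpha=0.1)
# A new prediction y_hat yields the interval [y_hat - hw, y_hat + hw],
# which covers the truth ~90% of the time.
print(f"90% prediction interval half-width: {hw:.3f}")
```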
Time for the last question. Well, in fact a bunch of questions, because they are going your way, Ines, so if you can take care of some of them, I will not spend time reading them all. Sure, let's go for it.

Is keeping human autonomy a competency? Well, yes, in the sense that a human should be autonomous, should have the ability to be autonomous when interacting with an AI-based system, without over-relying on the system. You can imagine that, interacting with an AI system systematically, you may run the risk of over-relying on it, so it is important that you still retain the competence to be autonomous, to judge, to evaluate and to be able to decide. In this sense, being autonomous is a competence, yes.

Is de-skilling a problem? Society replaced the skills of rural agriculture by industrial means, didn't it? I think it is a problem, especially when we have a high-risk situation or an emergency situation. Just think about Covid: maybe at that time you thought, I have a garden at home, maybe I will try to grow some carrots, tomatoes and potatoes. I am just kidding, but you understand the logic: if you have a high-risk situation, a very high level of criticality, and it is an emergency situation, the risk of de-skilling could be a problem, a big problem. So yes, I think the risk of de-skilling is important to tackle.

And the last one: do you think a possible option for avoiding de-skilling is a process related to training and the simulation of emergencies? Yes, sure, of course; I am not saying training should not be in place, I am a super fan of training and competence development, as you can imagine, but I think that might not be sufficient on its own. Practical training and simulators, for sure, yes, but I would say that putting people in the situation, really doing the actual job once in a while, or once or twice per day, just to be sure that they can still perform, will be a must.

Thank you very much, Ines. I think we are coming to the end of the Q&A session, taking a view at my watch. We have spent some time together trying to get a grip on something that, at least for an engineering mindset, is perhaps a bit fluffy and difficult to grasp. However, we have heard many interesting perspectives on how to attack, or address, ethics, putting it in a larger perspective, as many of you have done, and highlighting the balances and trade-offs we have in this area. Thank you very much also to the floor for raising a lot of very relevant questions, to which we heard some interesting responses from the panel. So thank you very much, first and foremost, to the panel, and thank you to all of us. Thank you.

Maybe before leaving our panelists: we actually have some presents for all the panelists. We did not share them before, but Joan will do it for his panel, and Ines for the one that was moderated by Jesper. Thank you very much, and if you can come forward: indeed, thank you for bringing that, you did well. We also have a nice present especially for you, and for you, Rene, our representative, let's say, of the 20% that were in the survey. Thank you so much for all the great work on this in the AI program with Guillaume; we actually had two criteria: being a psychologist and having been born on the 40th of January, right? Okay, dinner then.

Wait, before the dinner we have first the wrap-up, and then we will put up the slide with the dinner location. So thanks again to all speakers and panelists, for the interaction of the public, and to the people watching online and staying awake with us from far away in the world; it is very much appreciated. I give the floor to Alain, our chief engineer, for the closing wrap-up, I would say, on perspectives.

Well, thank you very much. I think you will all agree with me that this was a very intense day, full of very useful pseudo-philosophical discussions. I really have to say I enjoyed the last few hours, because it was a mix of technical and philosophical, social discussion, so we are really at the heart of today's societal transformation. Back to this morning's session with Guillaume and the audience: something is striking to me, looking at the previous years. In the previous years we were talking about what has to be done, the roadmap, what is under construction, what will come, and so forth.
I think we have gone through a major step, major achievements, thanks to all of you, because that is really what I feel, and if you disagree, let me know: it is a year of consolidation, I would say. A lot of material has been published, from the concept paper issue 2 through to the roadmap. I was also impressed by the use cases in aviation: we have a lot of practical cases, maybe even, from a management perspective, too many, because it keeps people very busy in the agency, but at least we learn from them, and those use cases are really the foundation to confirm what we should or should not think, depending on the discussion. I liked one of the last panelists' statements: ethics should not be a brake but a steering wheel for AI. So AI ethics is really something we have to bear in mind, and these use cases, as we saw, touch on it every day. So really, a big thanks to all of you for all that has been done.

But I would like to come back to this consolidation. There was also a newcomer in the room, well, it was a bit there last year: the AI Act. Thanks to the Commission presentation this morning, you saw how we work as part of the full EU system. We are not left alone; Guillaume's team and the agency are ensuring on a daily basis that we are compatible, or I would even say consistent, because compatible is not enough, with the AI Act. I encourage you to review or reconsider the slide you have, where on top you have the AI Act and then the various pillars, with our regulatory development, the Part-AI you have in mind and, most important, the standards bodies, EUROCAE etc., because it shows how we have to be organized in Europe. And this is a real challenge: how to ensure consistency with EU society and the expectations expressed in the AI Act without jeopardizing aviation safety. So a big thanks to all of you; that is the area of consolidation.

And of course, if we consolidate, we are an agency, so regulations and rulemaking: as you saw, the rulemaking task RMT.0742 is on the way and the terms of reference were issued. I would like to come back to this evening's discussion, again the steering wheel, not the brake: we have to make sure, Guillaume, all of us, that in our further development with industry and with our bilateral partners we keep the ethics assessment permanently in mind, because this will allow us to have the right objectives and the right proportionality. Also, sorry to quote most of what was said, I had no time to prepare a long speech, I am improvising, but I liked one statement: it is always a matter of trade-off; if we want to do it all, it will not fly. So we have to have this kind of proportionality, these trade-offs: how to satisfy the overall safety expectations while remaining consistent with the AI Act and society's expectations.

Now a very important word: we have a special guest, our FAA colleague. Thank you very much for visiting the agency today; it was very nice to have a first introduction to your thoughts on how you see the work from an FAA perspective on AI and automation, and how you want to make best use of existing standards and regulations today. In my humble opinion, and I am not an expert, I am a poor pilot and a manager, learning from Guillaume every day, for us it is very similar to what we did at the very beginning with the use cases, when we had the IPCs, the innovation partnerships with industry, to develop the famous learning assurance process, and I think you are following the same path. I would also like to quote
a very important event which happened last month, in July: the FAA/EASA annual conference, where our two executives highlighted very clearly the need for a single approach to safety. Safety is universal, so we should really do our best to harmonize and to work together, because we have to promote this common mindset, and I think it is on the way. So you are welcome to be with us today, you are welcome to share your thoughts, and of course we will share ours with you. There might sometimes be some small discrepancies; that is normal, we have to fix them, and at the end of the day industry will be the safeguard and will ensure that we have done it properly.

That is, in a nutshell, what I have taken from the day. Did I forget something? Except that I do not want to go back through your panel, Jesper, because it was fascinating: the link between the technique, the society and the ethical considerations, and how to make sure we keep them connected. And here is your preferred topic as a pilot: de-skilling. I am an instructor, and de-skilling is really something at the top of my own list, so I can understand what you mean, what we mean by it, and the performance-based and competency-based aspects of the automation development, again, are very important to us. I do not want to keep you busy for too long, because it has been a long day and you now have a nice evening waiting for you; I think it is not raining any longer, so you may have a chance to go without an umbrella. So thank you all, it was really very instructive, very enlightening, and I really like this kind of cooperative event. Thank you to the panelists, thank you to the team, and thanks to all of the audience and online viewers for supporting us today. I wish you a nice evening. Thank you very much.

Thank you very much to all of you. The last slide of the day, to remind you where we gather at 7 o'clock: the Gaffel am Dom, which is a brewery on the other side of the main station. You go through the main station straight ahead, you will pass under a kind of passage, and there you will enter a brewery called Gaffel; just ask, and normally we will probably be in the cellar, like last time, in a private area where we can really continue the discussion. So thanks again to all for coming and for being with us, and yes, let us continue the discussion. Safe travel to everyone traveling back today, unfortunately, but tomorrow we still have a program: the MLEAP day at the AI Days, the final dissemination event, I would say, from the perspective of EASA and the consortium. So stay numerous in the room, and see you very soon at the Gaffel am Dom. Thank you. [Music]