>> TODD LUXTON: Good morning, good afternoon,
or good evening to everyone. Thank you for joining today’s webinar. My name is Todd
Luxton, I am a research chemist with the United States Environmental Protection Agency Office
of Research and Development, and I will be the moderator for today's webinar, "What We Know about nanoEHS: Nanoinformatics and Modeling." The NNI's nanoEHS webinar series
focuses on sharing what we now know about environmental health and safety aspects of
engineered nanomaterials. This webinar will feature experts from diverse
disciplines to share their perspectives on key findings for these topics. Today we're going
to be featuring nanoinformatics and modeling. Before introducing our excellent panel of
speakers and providing a brief overview, I want to mention that the nanoEHS webinar
series is an important platform for agencies participating in the National Nanotechnology
Initiative, the NNI, to share information on nanoEHS research progress and findings.
Throughout the series experts will share the big take-home EHS messages with the broader
nanotechnology community and highlight the NNI’s role in answering those questions.
We've set aside time for questions for the panel today. You can type your questions into the
Q & A box on the bottom of your screen. We'll try to get through as many questions as we
can. I look forward to a lively conversation. Let's briefly introduce our speakers for today. Our first speaker is Andrea Haase. She's the
Head of the Fibre- and Nanotoxicology Unit, in the Department of Chemical and Product
Safety of the German Federal Institute for Risk Assessment. A biochemist and toxicologist by training,
her work since her appointment as Unit Head in 2008 has addressed the integration of nanomaterials
in different regulatory frameworks in the EU and conducting nanosafety research. Dr.
Haase has been involved in several large national and European nanosafety and governance projects
and is a coauthor of the EU-U.S. Roadmap Nanoinformatics 2030. Our next speaker today is Stacey Harper. She's
a professor of environmental engineering at Oregon State University. There her lab
has worked on developing and applying rapid testing strategies to investigate tools to
determine the potential hazards of nanomaterials and nanoplastics and link those to the material
properties. Dr. Harper has spearheaded the development of a knowledge base of Nanomaterial-Biological
Interactions (NBI) between OSU and the Oregon Nanoscience and Microtechnologies Institute. Next we have Fred Klaessig. Fred is manager
of Pennsylvania Bio Nano Systems and co-chair of the US-EU Databases and Computational Modeling
for NanoEHS Community of Research. Prior to this, Dr. Klaessig was Technical Director
and Business Director for the Aerosil Line at Evonik Degussa. This work led to his involvement
in the international standards development organizations such as ASTM International and
ISO as well as industry-led organizations. He is a co-editor of the EU-U.S. Roadmap Nanoinformatics
2030. Robert Rallo is Director of the Physical and
Computational Sciences Division in the Advanced Computing, Mathematics, and Data Division
at the Pacific Northwest National Laboratory. His research interests are in data-driven
analysis and modelling of complex systems. Prior to joining PNNL, he was an Associate
Professor in Computer Science and Artificial Intelligence and Director of the Advanced
Technology Innovation Center (ATIC) at the Universitat Rovira i Virgili in Catalonia.
Dr. Rallo served as chair for the Modeling WG in the EU NanoSafety Cluster (2013-2016)
and as the EU co-chair of the US-EU NanoEHS Human Toxicity Community of Research. Just one final comment before I turn it over
to the panel. I hope you'll join us for the other nanoEHS webinars in the series, and more
information on all of the NNI public webinars can be found on nano.gov. You can follow us
on Twitter @NNInanonews. With that, we'll turn it over to our first speaker, Andrea. >> ANDREA HAASE: Okay. Thank you, Todd. Thanks
for the kind invitation and for the nice introduction. It is my pleasure now that I can provide an
overview from the European perspective. Let me just see. I think now you should be able
to see my slides. I entitled my presentation, “What do we know about nanoinformatics?”
I intend to give you some insight from selected European projects. Of course, it is not a
complete list, for time reasons. There's some logos depicted here. I will guide you through. Let me start with a general introduction.
I entitled that “Nanosafety: Where are we?” And just starting from the application of
nanoparticles or materials, we're all aware of the fact that nanomaterials are used nearly
everywhere. There are plenty of applications already on the market and many more under
research and development. That means nanomaterials are very complex. We have a lot of different
chemistries. We have plenty of forms and variants. Moreover, not only can each of the nanoforms behave differently, we also need to consider complex changes of these materials. This picture
here is just a simplification. And the reality might be even more complex. There might be
dissolution, agglomeration, hetero-agglomeration, changes in the surface chemistry and other
properties and reactivity. Bio-nanocoronas and so on. So, all of that renders the material
characterization and also a proper dosimetry highly challenging. To sum that up, I think
we urgently need modern data-driven approaches to deal with the complexity of these materials.
Not each and every variant and each and every modification can be fully tested for each and every endpoint. We have these modern data-driven approaches, be it AOPs [adverse outcome pathways], assessment strategies or integrated approaches to testing and assessment, or safe by design, safe and sustainable by design. All of these modern data-driven approaches have one
aspect in common. They need data, a lot of data. Of course, the knowledge has increased dramatically
in the last decades. Knowledge is further increasing. Plenty of information is available.
We have a couple of nano-specific or nanosafety databases. We have a couple of other relevant databases that partially store relevant data on nanomaterials. There are other useful resources, for example Zenodo or figshare, where authors make their datasets accessible in parallel to the publication. Still, from the perspective of the user,
many challenges remain. How can you find relevant information for your material or your application?
How can you evaluate the relevance and the reliability of that data and how can you bring
together all of these different pieces of information? That means the solution -- in our perspective, and that's not only my opinion but this is coming from a couple of collaborators from different European projects -- is that we need to push forward towards a FAIR data infrastructure. What do these letters mean? FAIR is the abbreviation for making data findable, accessible, interoperable,
and reusable. Behind each of the aspects there are a couple of issues. For time reasons,
I would like to emphasize the most important issues. Findable: The data needs to be enriched with metadata. It needs to be assigned a globally unique and persistent identifier.
For making data accessible, the metadata needs to be accessible or in particular the metadata
needs to be accessible even if the dataset is not accessible or no longer accessible,
so that at least the information can be obtained that the particular dataset is available in
a specific database such that then access permission can be checked out. For interoperability,
it is important to use formal, accessible, shared, and broadly applicable languages and ontologies,
and for making data reusable it is of the utmost importance to have clear and accessible
data usage licenses. And this is not a complete list. These are just the important issues.
Also we need to keep in mind that making data FAIR does not mean that the data is open and that each and everybody can just use it for any purpose. This needs to be clarified and emphasized as well, because otherwise people may be reluctant to publicly share their datasets.
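As a concrete illustration of what such a record could look like, here is a minimal sketch of dataset metadata supporting findability and accessibility; the field names and values are hypothetical, my own illustration rather than fields taken from eNanoMapper or any particular schema.

```python
# Minimal, illustrative metadata record for a nanosafety dataset (hypothetical fields).
dataset_metadata = {
    "identifier": "doi:10.xxxx/example-dataset",        # globally unique, persistent identifier
    "title": "In vitro cytotoxicity of a TiO2 nanoform (illustrative entry)",
    "material": {"chemistry": "TiO2", "nanoform": "example-form", "primary_size_nm": 21},
    "assay": {"endpoint": "cell viability", "method": "WST-1"},
    "license": "CC-BY-4.0",                              # clear, accessible data usage license
    "access_url": "https://example.org/dataset/123",     # where access conditions can be checked
    "metadata_persists": True,                           # metadata stays findable even if data are withdrawn
}
print(dataset_metadata["identifier"])
```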
This is something that was initiated in a project where I had the pleasure to participate, NanoREG2. This is the publication that came
out at the end of the project. We put together our perspective on how the nanosafety data
infrastructure can become more FAIR and we shared the experience that we made in the
project where we reached out to other finished projects and collected the complete data sets
of the project and incorporated them into the nanosafety data interface based on eNanoMapper.
As you can see below, this is the current view of the interface. Meanwhile many European
projects are using this interface so we think we can say it is one of the largest nanosafety
databases that we have worldwide. You can access it via this link here. Specifically
looking at the NanoREG2 dataset, you can see the overview here. This is only the tiny fraction
of that whole interface. This is only the NanoREG2 database. You can see here the number
of data points and the number of methods being represented in the database for particular
forms of nanomaterials. You can see here which of the nanomaterials are mostly populated
in the database. This is titanium dioxide, for instance; silicon dioxide, multiwalled
carbon nanotubes, zinc oxide, silver nanoparticles, to mention some of them. We experienced particular issues for some
of the data. Very often, omics data is stored in specific repositories that are not nano-specific. And even though these omics data overall have a very high FAIR index, since the nano-specific descriptors are missing, this is hindering the reuse of these data for
the nanosafety community. This is something we initiated already in NanoREG2 by linking
transcriptomics datasets, and this is currently ongoing in other European projects where we
try to link the datasets and specific omics repositories to eNanoMapper. This is something
that we address also in our reply to the publication. You can see below that the NanoREG2 database overall has a very high FAIR score. In the project Gov4nano, this is something
that has been taken on and pushed to another level and a FAIR implementation network was
established. I will not talk you through the complex slide, but the idea is that we would
like to develop the FAIR data ecosystem that allows the various stakeholders, be it citizens,
policy makers, industry, researchers, and civil society to get access to high quality
trustable data. And from this, the initiative has been established, and we also released a video that's available here at the link that explains the idea behind
the initiative. And a couple of case studies are currently ongoing. Some of them initiated
by Gov4Nano and some of them initiated by an infrastructure project, Nanocommons. And
to name some of them, there is work ongoing on persistent identifiers and on the use of electronic laboratory notebooks to upload data more elegantly and easily. We experience that basically the biggest challenge here on our way forward is how to stipulate that researchers deposit their data in such an infrastructure. The largest bottleneck is not on the side of the technology, but rather: how can we change the mindset and the community? I would like to emphasize two other aspects
before I end that I find highly important. This is the standardization of metadata. This
is also important to make the relevant information findable and to make data that is stored in different databases also interoperable. This is just giving you the general idea. If you have an
experiment ongoing, there is a user and operator, and there is some detection or measurement ongoing. But before that, there is some sample pre-processing; the equipment needs to be described; maybe there is a calibration and raw data; and then you have the property that's measured, and there's data analysis and post-processing of the data. And for all of that you need metadata that truly describes what has happened to that specimen or to that nanomaterial. It is highly challenging to do this in a standardized manner. This is an initiative that is currently ongoing in the project NanoCommons. For time reasons I will not talk you through the very complex figure. But the idea is -- maybe just going one step back -- basically, we work with archetypes. We have archetypes for the instruments, chemicals, and so on, and all of that should contribute
to a standardized metadata and description. What is also needed from the perspective of
the user, you don't only need the data, but you need tools to analyze the data. This is
another example from NanoCommons. Tools are developed that are then directly linked to
the data infrastructure. In this example, it is the NanoXtract: Nanomaterials Image
Analysis Tool. I will not talk you through it. If you have questions, please feel free to ask me later on. More work is ongoing in other projects besides NanoCommons that also have tools, for example NanoSolveIT and NanoinformaTix. That brings me to the end. I hope that I could
convince you first of all that digitization is highly important and that this will enable
and advance data-driven approaches that are urgently needed for the hazard and risk assessment
of nanomaterials but also to ensure safe innovation. I tried to give you some insights from selected
European projects. I would like to emphasize here on the slide the AdvancedNano Implementation Network. We need the different stakeholders to buy in to the idea, and we need to foster a FAIR data landscape. Also, there are challenges as more material types need to be represented. This is an example: currently ongoing projects like HARMLESS focus on multi-component materials,
PolyRisk focuses on micro and nanoplastics. We need to integrate information from different
databases, like I have shown you with the omics data. We need tools supporting the digitization, the easy upload, and the reuse of data. The examples here are the GRACIOUS template wizard that has been established by GRACIOUS, or the NanoCommons electronic laboratory notebook
case studies. Of course, I would like to thank many collaborators coming from all of these
projects for their valuable input. Later on I'm happy to answer your questions. You can
paste them now in the Q & A box. Thank you at this point. >> STACEY HARPER: It's okay. I think I'm on
now. I didn't know if you were going to introduce me or not, Todd. >> TODD LUXTON: I thought I had but I was
on mute. >> STACEY HARPER: Okay. Here I am. For those
of you I haven't seen for a couple of years, howdy. I'm still here. I really appreciate you guys at least getting the gang back together for the panel. I think
it is fun just working through how we were going to do this. I'm going to focus my remarks
today on what we can draw from the wealth of nanoEHS research that has been conducted
over the past decade and a half, the past two decades, and what we can leverage to inform
these new materials that Andrea was just talking about and specifically in my world, nanoplastics. When we started this over a decade ago when
we were writing the nanoinformatics road map, the U.S.-based one, we really envisioned a
path in which we could advance nanoEHS research by leveraging the nanoinformatics. None of
us are nanoinformatics people, per se, but we're researchers who saw the need for nanoinformatics
to really advance our understanding of nanomaterials and getting to that Holy Grail of “can we
predict a material's behavior based on the physical-chemical properties of it?” This was like the wild west. This is when only a few studies collected and reported information on things like the agglomeration state, which we now know is critically important. There wasn't much consideration given to the circumstances of characterization -- thinking about, you know, when in the material's life cycle the data were collected. That's important because we know that transformations can occur. This is what our data pipeline
looks like. It was literally a pipeline where we would collect data. Everyone doing their
own independent research: collecting data, processing it, and analyzing it, writing up
your publication, and then, curators on the backside would have to go in and extract that
information back out of publications. You know a lot of negative data is missed that
way and not reported on. And wouldn't be considered in the final risk assessment. It is quite
biased in some ways. Then the computational analysis of the curated
data could happen. But what we did back then was envision what it could be. As in Andrea's overview of the EU's efforts, these data repositories offer a real “pie in the sky” ideal: an interoperable system of federated databases that could share information and make sure that information was the same. We still have the same old pipeline here, where we collect the data and publish it. But by putting in the raw data -- raw data annotated so people can understand it, understand the error and variability in the data, and be able to compare like materials with other like materials -- we can in this way hopefully drive some predictive models, which Robert will be talking about, and even inform the next round
of study designs. In order to do this and support this effort,
the National Cancer Institute Nanotechnology Working Group led an effort for several years. We were trying to adapt the standardized tab-delimited format that was developed by the European Bioinformatics Institute -- basically adapt it so it could describe nanomaterials. Just describing the nanomaterial is immensely complex, so we really needed some assistance in thinking about what that was and working through it for all of the different nanomaterial classes. We basically added a materials file extension onto this ISA-TAB format. That's the basis for most of the EU efforts, or it was at least the starting point back in -- mid-2015 or something like that. This allowed us to
capture all the critical physical-chemical properties of the nanomaterials, not just the surface chemistry, but how that surface chemistry is actually attached. We moved this forward to an ASTM [International] standard to make it useful for other people.
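To give a flavor of what a tab-delimited material description can capture, here is a tiny sketch; the column names below are invented for illustration and are not the actual ISA-TAB-NANO fields.

```python
import csv

# Hypothetical columns, invented for illustration; the real ISA-TAB-NANO
# material file format defines its own fields.
rows = [
    ["Material Name", "Core Chemistry", "Primary Size (nm)", "Surface Group", "Attachment Mode"],
    ["PS-NH2_50nm", "polystyrene", "50", "amine", "covalent (illustrative)"],
    ["Ag_cit_20nm", "silver", "20", "citrate", "adsorbed (illustrative)"],
]

with open("material_file_example.tsv", "w", newline="") as handle:
    csv.writer(handle, delimiter="\t").writerows(rows)
```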
For informing nanoplastics risk: many of the engineered nanomaterials in nanoEHS research have been and are commercially available or
can be synthesized in small batches to tweak the specific physical or chemical property
you want to. I think at my last look at the nanoComposix website for ordering materials, they had 23 pages of options. That's not adding in any customization that you could do. A lot
is available. Nanomaterials themselves can be precisely engineered for the shape, for
the size, or the surface chemistry. They are oftentimes available in homogeneous suspensions.
We can study these to determine what are the factors that are driving their fate in the
environment and also their impact on living systems. Most engineered nanoparticles are
produced at the nanoscale. Therefore, the primary particle size is within that nanoscale.
Whereas only a few nanoplastics are generated as primary particles. Those are limited primarily
to polystyrene and PMMA [polymethyl methacrylate]. Most of the nanoplastics that we're dealing
with in the environment are going to be a result of breakdown from macroplastics to microplastics
to nanoplastics. From the microplastics research we can say
something about the occurrence of these materials. If you look at breakdown from macroscale to
microscale plastics, the same trend of the ratio of the different types of plastic that
are more prominent should hold at the nanoscale as well. A good hypothesis, nonetheless. So
those materials that are commercially available, the polystyrene and PMMA are only available
in spherical forms, which is clearly not what we see or will see in the environment. In addition, the surface chemistries are pretty
limited to, you know, we can get a positive charge by adding some amine groups, carboxyl
groups give it a negative charge, or it could be left undecorated and just be neutral. But
this really limits the information that we have on nanoplastics effects, certainly we
can't be doing the comparative types of studies that we've been doing in nanoEHS. What can
we draw from that nanoEHS to inform plastics risk assessment? I would say all of these environmental transformations
that were described eloquently by Greg Lowry in his 2012 paper should hold for nanoplastics;
right? Except here where we have dissolution; that would not necessarily be the case for nanoplastics because they are persistent. They don't dissolve. Their transport through the environment will likely depend on the agglomeration state, like it does for nanomaterials. Both of these processes should be affected by things like salinity or pH, or organic matter in the system.
So, we know a lot that we can draw from. The formation of the organic matter layer in the
environment, or the protein corona in living systems, should also be the same, if the outer
most surface chemistry is the same or similar enough. Transport, then, we can think about as being similar to nanomaterials, if they had the same density. If you think about
nanocellulose or nanolignin, both of those might have a density that's similar enough
to nanoplastics, to be useful and informative. On the environmental fate and transport both
in terrestrial and aquatic systems, these should be comparable. Again, if they are in
the aqueous system, the density is going to be a big driver. As they agglomerate, plastic
particles even at the nanoscale may float after they agglomerate, as opposed to falling into the sediment like you would expect for metal nanoparticles. And if they are un-agglomerated,
they can remain suspended in water bodies indefinitely and lead to exposure to organisms
that live in or traverse through the water column. Again, there would be no dissolution
of the plastics. There have been some concerns about leaching of some of the additives and
the stabilizers from the plastic particles. I think a good example of this is the 6PPD
[p-Phenylenediamine]-quinone that's released by the tire wear particles. For environmental
sampling and quantification, I think we have the same issues in environmental sampling
that exist for nanomaterials and nanoplastics. This would be things like the difficulty in,
collecting them. For microplastics research, they are not even trying to get down to the
nanoscale at this point. Also concentrating the samples. There's interference from the
colloids in the system. It is really challenging to try to locate them once they are in a complex
matrix. Now dynamic light scattering and nanoparticle tracking analysis can be used for the nanoplastics
and nanomaterials. Since both instruments rely on the idea that the particles are spherical,
the vast majority of nanoplastics are going to violate this assumption while many of the
engineered nanomaterials are engineered to be spheres. One other thing to note is that some of the clear plastics evade the detection system for nanoparticle tracking analysis. Thinking about uptake and translocation, what
can we glean from nanoEHS? I think the mechanisms of uptake would be very similar. Particularly
when you think of this mechanism being a function of size. The smallest nanoparticles here can
directly penetrate through the cell membranes. We have particles that are around 100 nanometers that can get in through clathrin-mediated endocytosis. We have 50 to 80 nanometer ones that can get in using caveolar receptors and endocytosis. Those that are larger will be mostly taken up by phagocytosis. Now one other consideration is that the biodistribution should be similar
if the particles have the same size, shape, and charge; or close enough- we need to know
what that distinction is so we can start reading across. The accumulation of nanoplastics could
occur in lysosomes because they will not break down even at the very, very low pH environment
that's found in the lysosome, so that could be problematic. Lastly, I wanted to touch on what we know
about the toxicity relative to nanoEHS. Reactive oxygen species generation is a predominant
finding for both nanomaterials and nanoplastics. Although, right here, some nanoparticles themselves, especially the transition metals, can generate ROS on their own. It would be my hypothesis that the nanoplastics would cause oxidative stress through cellular interactions. Particle-specific effects would be expected for both
the nanomaterials and nanoplastics. The three main things that have been indicated for both
nanoplastics and nanomaterials toxicity are inflammation, oxidative stress, and metabolic
disruption. Those are the highlights. Lastly, I want to end with my slide on nanoparticle
safety testing to share with you all. With that, I will hand it over to you, Fred. >> FRED KLAESSIG: Thank you, Stacey. I would
like to thank Todd for the introduction earlier. I would like to thank the audience for participating
or being here. I'm going to be describing the role of dissolution in the current risk
assessment models for oral ingestion. Dissolution is an intriguing phenomenon in that dissolution
will limit the amount of particles that are present and for toxicology, the dose makes
the poison. That can simplify testing considerably. In 2011 when the nanoEHS plan came out, the
concern was a particle effect. What I would say is: was the toxicity observed due to the particle? Was it a dissolution product? And when one goes through the plan, you'll see
there are milestones for test method development, applying those test methods to silver which
was the case study, and there's a chapter on informatics which combined all of the results
into a coherent dataset. This continued when Stacey and I and others were working on the
first nanoinformatics roadmap. The situation changed when we got to 2016. Can I have the next slide? That is when Andrea Haase and I began our work on what eventually became the EU-U.S.
Roadmap. In the intervening years there have been more reports on nanoscale particle dissolution, but there are divergent regulatory opinions.
I'm using here hydroxyapatite [HAP] as the case study. It is the mineral form of calcium
phosphate found in bones and teeth. Paul Westerhoff of ASU [Arizona State University] had documented
that nano-HAP was present in baby formula for sale in Australia, which led to a review
by the local authority Food Standards Australia New Zealand who took the position it would
dissolve in the stomach, therefore no exposure and no action. In Europe at the same time
around 2016, the Scientific Committee on Consumer Safety also addressed nano-HAP, in toothpaste and mouthwash. They expressed concern about oral safety but did not address
dissolution as an effect that should be considered. For Andrea and me, there was an opportunity,
progress in the literature, divergent opinions -- could we bring them all together? We put this together in a pilot project in the roadmap. I've repeated here what the objectives there are. They really
come down to identifying stakeholders, bringing them together, and then seeing if we can make
advances accordingly. Next slide please. There were, of course, technical reasons that
justified such a project. In the background I would say we are dealing with the fact that it was a topic that was unlikely to raise issues regarding confidential business information. That means that we could focus on what I call the fourth point here. How do you do test
methods that simulate physiological conditions, with solution characteristics, compartment
identity, and residence time such that the resulting data are useful from a toxicology standpoint? Certainly, that's the area where I've been engaged and have learned from my interactions with NSF International, which is a public health standards development organization for testing products, certifying them, and auditing them for use in the water system. Next slide please. Let me give a kind of concrete example. Lead
pipes are used in a number of municipalities in the United States. You are supposed to
feed phosphate to maintain a phosphate barrier layer. In Flint, the budget was cut, they
stopped feeding phosphate. Corrosion occurred. It is a question of whether or not people
are being exposed to particles or soluble lead ions. I have here a chart from a dissolution study of galena, which is lead sulfide, an insoluble form of lead. You can see as it dissolves
in gastric fluid, these two curves reach the same plateau. We can assume equilibrium to
the right of that vertical line. To the left you can see the approach to equilibrium. Smaller
particles reaching equilibrium faster than the larger particles. In essence, the equilibrium
is to the right and the kinetics to the left. If you ask me my opinion on one or two PPM
of particles in water, I would say they will probably dissolve in the stomach and therefore
your exposure is to soluble lead salts. Of course, that has to be demonstrated. Next slide please.
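For readers who want to see the batch-mode intuition in numbers, here is a minimal sketch -- my own illustration with arbitrary units, not Fred Klaessig's data -- of a Noyes-Whitney-style rate law, dC/dt = k * A * (Cs - C), in which smaller particles (larger specific surface area A) approach the same solubility plateau Cs faster.

```python
import numpy as np

Cs = 1.0                        # apparent solubility plateau (arbitrary units)
k = 0.5                         # assumed rate constant (arbitrary units)
t = np.linspace(0, 200, 2001)   # time points
dt = t[1] - t[0]

def batch_dissolution(diameter_nm):
    """Euler integration of dC/dt = k * A * (Cs - C), with A ~ 1/diameter."""
    area = 1.0 / diameter_nm
    C = np.zeros_like(t)
    for i in range(1, len(t)):
        C[i] = C[i - 1] + k * area * (Cs - C[i - 1]) * dt
    return C

small, large = batch_dissolution(20.0), batch_dissolution(100.0)
# Both curves head toward the same plateau Cs; only the kinetics differ.
print(f"t=200: 20 nm -> {small[-1]:.2f}, 100 nm -> {large[-1]:.2f}, plateau = {Cs}")
```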
That's not always the case when it comes to
nanoscale particles. What I have here is cupric oxide, tenorite. You see on the left the plateaus
for this material change with the shape. Spindles are at the bottom; rods are the red line, slightly higher; spheres are the highest line. The plateaus do not align with equilibrium. You have the
effects of the particle shape. On the right I have concentration. Same particle and different
concentrations. I've provided on the right-hand side what the solution concentration was.
These profiles are completely kinetic, there's no equilibrium branch. You can go to references
for nanosilver, for example, where the same is true for those particles; the different sizes lead to different plateaus, and the dose levels lead to different plateaus. This raises the question of what a possible explanation is. Go to the next slide please. I'm now into the world of geochemistry. This
is one of the reasons I have been saying mineralogical names. This is albite which is a sodium aluminum
silicate. I'll draw your attention to the picture at the right. The relatively smooth
surface. There are some lines that represent ledges. And when you do a dissolution study
you are monitoring or measuring the retreat of the ledges, their movement across the field
of view. On the left-hand side is a similar material but it is rougher. There are etch
pits present. Dissolution now depends on ledge movement plus the formation of etch pits,
the number of etch pits, and whether the etch pits actually grow. Or how fast they grow. These authors have aligned this with the two
different mechanisms, based on the level of under-saturation. Under-saturation is the
solubility product re-expressed for activity. What happens in a static system or batch-mode test is that you start severely under-saturated; as the particle dissolves, it starts moving to mechanism one, and where it lands starts depending on how it was manufactured or, as we would say, “engineered” when it comes to nanoscale materials. Did the manufacturing process introduce line defects, point defects, screw dislocations, and the other sorts of
elements that come with crystal growth? I would point out this was for a flow-through system. The previous work was for a batch system. Can I go to the next slide please?
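For those less used to the geochemical shorthand, the statement that under-saturation is the solubility product re-expressed for activity can be written compactly; the notation below is the standard geochemistry convention rather than anything taken from the slides:

$$\Omega = \frac{\mathrm{IAP}}{K_{sp}}, \qquad r_{\text{dissolution}} = k\,(1-\Omega)^{n},$$

where IAP is the ion activity product of the dissolving solid's ions in solution, $K_{sp}$ is the solubility product, $\Omega < 1$ denotes under-saturation (values far below 1 are "severely under-saturated"), and the rate constant $k$ and exponent $n$ depend on the mechanism (ledge retreat versus etch-pit formation give different rate laws).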
This is a series of data from flow-through testing on silica particles. Everything on the page is a silica particle and surface
treatments may differ. Orange is a lung simulant fluid, Gamble’s solution around pH 7.4.
Black is a lysosomal acidic solution around pH 4.5. You can see the shape of the curves
differ. On the left, the orange is more likely to have curvature. In the upper right quadrant, there are relatively linear reactions. This is somewhat confirmed. If you look at where I drew the circle, those are the three materials that did not have any surface treatment; they
are somewhat together when it comes to the pH 7 neutral material, but they separate when
it comes to the acidic material. Obviously, the kinetics have changed. When you combine
these two conditions, you simulate the lung. That's one of the reasons the work was done.
What's missing is any plateauing or any asymptote formation indicating a limit. That somewhat
makes sense. This is flow through. The under-saturation is controlled and therefore, you can go to
completion more readily. Can I have the next slide please? I would just like to somewhat recap. In 2011
the concern was the particle effect. The actions taken were appropriate. 2016 people are used
to the fact that there will be mixtures and those mixtures will change in their relative
composition based upon the physiological compartment that you are considering. When you do batch
mode, you are probably seeing the under-saturation decreasing with time. Asymptotes are more likely to be visible and they may be sample-specific in terms of shape and size. The testing is probably better attuned to compartments like the stomach or phagocytes. When it comes to flow-through testing, on the other hand, under-saturation is fixed over time, and the dissolution is more likely to go to completion. It simulates open systems where fluid is refreshed, such as the lung lining.
The blood stream is also another possibility. In all of this, I've talked about how the current use of kinetic reaction rate laws to describe the data is disconnected from the degree of under-saturation, the thermodynamics. That represents the different interests of the disciplines: geochemistry versus what we're doing in nanoEHS. The next slide. I'll finish just by saying what the current
regulatory climate is. Australia has not reconsidered things. That's a historical comment. In Europe and the United States, we're seeing a movement towards: if dissolution in the stomach is complete and demonstrated, there's no systemic exposure. Localized genotoxicity testing remains required. The European Food Safety Authority has provided test conditions. Those
are batch conditions, 30 minutes in stomach acid, less than 12% remaining at the end.
In the United States, NSF International has extraction tests to determine the drinking water concentration of a nanoform that comes from a product that is being used. The second step would be dissolution in the stomach using the EFSA conditions, and finally in vivo genotoxicity. So, there has been progress, but at the same time, there have been challenges.
Again, I would like to thank the audience for watching. Have a good day. >> TODD LUXTON: Thank you so much, Fred. We
have our final speaker now. Robert. >> ROBERT RALLO: Thanks Todd for the introduction.
Thanks to the organizers. I would like to focus in on the last part of this webinar
on an aspect which is complementary to what my colleagues have focused on before, which
is essentially looking at the nanoinformatics ecosystem and targeting the right side of
this diagram that you see. Which is, we can generate the data, we can use characterization
capabilities to generate this data. And then we have computational approaches to model
or analyze this data, which is what at the end of the day helps researchers to get new
insights from this data in this specific field. Since 2010 up until now, a lot of things have
changed in the modeling field. And starting from the pioneering efforts in the U.S., for
instance, in centers like the UCLA CEIN [Center for the Environmental Implications of Nanotechnology], or through different projects funded by the European Union within the FP7 and Horizon 2020 programs. We started to look at the translation of structure-
activity relationships for chemicals to the nanoparticle world, right? And in this pioneering
effort, we realized that the translation is not easy and not direct. Essentially, often
times the interactions with the biology when we are talking about nanomaterials, is much
stronger and much more complex than what we have with regular chemicals. Also, the data
that we're able to generate in the nano-space is more difficult to generate. Although by
leveraging high-throughput screening facilities, we can generate larger volumes of data. But
it is still -- data continues to be a limitation. So, what I am going to focus on in this part
of the talk is essentially on what has changed since then. What are the challenges that still
remain and what are the opportunities that new modeling and new computational approaches
offer us? I wanted to focus this in five different areas. The first one is we have focused a
lot in the past on the reproducibility of the data, but we need to focus on the reproducibility of the models that we generate. The second one is that data alone is not enough to create
these models. We need to find ways to provide some domain awareness and to leverage this
domain awareness within the modeling workflows. Then we need to make sure these models can
be used at the end of the day. So, we need to make sure that these models are robust.
That we can trust the predictions that we get from these models. Essentially that we
can deploy these models in an operational setting. Finally, the last task I wanted to
cover is the idea of how we can advance, in terms of having a much tighter coupling
of the modeling, the characterization instruments, and the whole scientific discovery process
through automation and autonomy. In terms of reproducibility, it is important
and still a challenge to track the provenance of the model in the same way we're tracking the provenance of the data. This becomes more and more important, especially now with all of the machine learning approaches in which we have these huge scans of large hyper-parameter spaces to tune the models. If we are not able to keep track of all of these processes, we may have a situation in which we introduce, inadvertently, biases into the model development process. Essentially, as we're moving towards the idea of using autoML [automated machine learning], using automatic methods to generate or improve the quality of the machine learning or data-driven models in general, we have to keep an eye, in the same way we've done in the past developing the ontologies and developing curation strategies for data, on the reproducibility aspects of the models.
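As an illustration of what tracking model provenance alongside data provenance could look like in practice, here is a hypothetical sketch -- not the tooling of any particular project -- of the kind of record one might keep with each trained model.

```python
import hashlib, json, random, time

def data_fingerprint(path):
    """Hash the raw training file so the exact data version stays traceable."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def train_and_log(data_path, hyperparams, seed=42):
    random.seed(seed)                          # fix randomness for reproducibility
    # ... actual model training omitted in this sketch ...
    provenance = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "data_sha256": data_fingerprint(data_path),
        "hyperparameters": hyperparams,        # e.g. the point selected by an autoML scan
        "random_seed": seed,
        "code_version": "git-commit-hash-goes-here",   # placeholder
        "validation_metrics": {},              # filled in by the real training step
    }
    with open("model_provenance.json", "w") as f:
        json.dump(provenance, f, indent=2)
    return provenance
```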
The second element is this idea of domain awareness. In most cases, we still have limitations on the data. Most of the modern approaches are really data hungry. They need lots of data to really have reliable predictions. We need to advance in developing models which can operate in a space in which we have data, but perhaps not sufficient data to develop a model that we can really
trust. So, in this space again, and this has been addressed in the past when we were developing
QSARs [quantitative structure-activity relationships] for chemicals, the validation of these models is essential. And understanding when the model is interpolating versus extrapolating -- when we're operating within the applicability domain of the model versus when we're not -- is going to be extremely important.
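To make the interpolation-versus-extrapolation idea concrete, here is one minimal, illustrative way to flag when a query nanomaterial falls outside the descriptor space of the training data; this is a generic distance-to-centroid check of my own construction, not the procedure of any specific QSAR tool.

```python
import numpy as np

def applicability_domain(train_descriptors, query, percentile=95):
    """Label a query as interpolation or extrapolation relative to the training descriptors."""
    X = np.asarray(train_descriptors, dtype=float)
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-12
    train_dist = np.linalg.norm((X - mu) / sigma, axis=1)   # standardized distance to centroid
    threshold = np.percentile(train_dist, percentile)
    query_dist = np.linalg.norm((np.asarray(query, dtype=float) - mu) / sigma)
    return "interpolation" if query_dist <= threshold else "extrapolation"

# Toy descriptors (size in nm, zeta potential in mV, log solubility), made up for illustration:
train = [[20, -30, -5.0], [25, -28, -5.2], [40, -35, -4.8], [30, -25, -5.1]]
print(applicability_domain(train, [28, -29, -5.0]))   # likely "interpolation"
print(applicability_domain(train, [500, 40, -1.0]))   # clearly "extrapolation"
```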
This information has to be provided and has to be linked to all of the modeling approaches that we implement. Through real knowledge embedding within the modeling architectures, we can start thinking about complementing the data with what
we know about the biology, about the chemistry and the material science aspects of the systems
that we're trying to model. This can help us partially alleviate the situation of limited
data, but more importantly it can increase the confidence that we have in the models,
because we have some sort of physical-chemical constraints that really control the response
of the model when we're especially operating outside of the applicability domain of the
model. Approaches like transfer learning, reinforcement learning. We have seen the advances
especially in reinforcement learning with all of the developments of Google with AlphaFold
and other techniques have shown recently. These are really promising approaches that
can help us to advance the field. Causality is another other important element that links
with data It is not enough, in which we need to really find the proper causal structure
within the model to make sure that everything within the model makes sense. Robustness is again an important element.
One of the things we're not taking into consideration is the bias that we may introduce into design
the experiments of experiments to capture the data and generate the data. Bias, together
with interpretability of the models is something that is going to be key. This is something
which is a really important area of research which will have huge impacts when we're modeling
the behavior of nanoparticles. And then with respect to the scalability, of course with
the advances in computing environments, we are in a better situation to start doing things
which up to now were much more difficult to implement. Which is coupling the data with
simulation capabilities. We can couple data-driven models, with molecular dynamic simulations
or quantum chemistry simulations, for instance, to really drive all of this process. This
can be done by leveraging the specific types of hardware, and this can be done in situ.
So we can have operando characterization techniques, which are able to respond in real time and
adapt in real time to the type of measure and the quality of the measure that we are
obtaining. This leads to the last part of this presentation,
to the last challenge, and this is how do we use these techniques, or a combination of all these techniques, to develop smart instruments; to use the idea of AI as the driving element in the design and execution of experiments; and to end up generating these kinds of self-driving laboratories that will be able to go beyond high-throughput and do real intelligent, on-demand generation of data that we need in order to improve the models and
get new insights into the system or systems that we're trying to model. Overall, I think that this is an exciting time. There are a lot of opportunities in the area, especially when you are looking at this from the viewpoint of modeling, how models can help to advance research, and the new tools that we have at our hands to really develop these new modeling approaches. With this, I'm going to finish. Thanks. I'm giving the control back to you, Todd. >> TODD LUXTON: Thank you very much, Robert.
Thank you to all of our speakers today. That was really very informative. We covered a
broad range of topics here. Again, if you look at the bottom of your screen, you'll
see the Q & A box. That's where you can type in a question that we can pose to our speakers
today. I would like to start off and ask a question of all our speakers. Covering a little
bit of what each of you have offered and provided insights today. I think we’ve really progressed
in a great way. We started off with sort of an understanding of what the rationale and
reason behind why we're collecting the data and developing the framework from which we
can continue to pull and integrate. Then we went on to think about how the data we currently
have can be applied to nano and microplastics. How do we avoid having to redo or go through
the entire process again? Launching then into a discussion on how to build some of the mechanistic
data inputs. It is more than just collecting data. It is understanding the mechanisms and
the dynamics of those situations. Finally, how do we pull all of the data together and
learn something from it? When I look at this, I look at the immense effort that it would take to get all of this into a single format. My question to the panel is: what do you think is our bottleneck at this point for being able to achieve this goal of getting all of
this information into a system that can be easily accessible. And then on the other side
of that, where are we really succeeding at this? How can we use those results to push
us forward? If any of you would like to comment on it, that would be wonderful. >> STACEY HARPER: I think Andrea should speak
because Europe is actually implementing some of this at a much more rapid pace than we are in
the U.S. >> ANDREA HAASE: Thank you so much, Stacey.
I was just thinking about a clever answer. Because I think there's not a single bottleneck.
There are several bottlenecks. Maybe my answer is not even complete. I think one aspect is
currently the understanding of data is still hampered by the fact that we're still lacking
data. Not that they are not generated but they are simply not accessible or available
to do some meta-analysis or evaluation. Clearly, we need a change in mindset so that the datasets are released. The database landscape is fragmented. We need more standardized approaches to interlink these data, to standardize the metadata, and so on. So maybe we have a couple of bottlenecks that still need to be worked on in order to enable a full understanding even of the wealth of information that we have today -- not only to touch on the quality criteria, completeness scores, and so on and so forth. I think modelers would appreciate having a quality tag associated with the data. Many bottlenecks, I would say. >> FRED KLAESSIG: This is Fred. I think that
we have to move from the worry of nanotoxicology to more active nanosafety format. I see that
in what's happening in the dissolution area. Not every test on the official list has to
be done, because dissolution means you don't have to do it. I would extend that to the type of predictive toxicology that Andre Nel and colleagues at CEIN are pursuing, and currently, I think if I am correct, Andrea, IATA [integrated approaches to testing and assessment] -- a version of that is in the GRACIOUS framework. So those are somewhat flexible responses to what is
needed for the testing for safety purposes and less the inflexible response that you
have to do a 90-day inhalation study because it is a particle, darn it. >> STACEY HARPER: We've talked over the past,
you know, decade or so about having the journals be one of those key gateways that when you
publish, you have to make your data actually available in a format that's usable instead
of just a PDF of your raw data or something and it is not annotated. You know that we've
talked about the logistics of that; it is probably not feasible. I think, you know, of the MIAME standard for, you know, depositing genomics data -- that's expected. You can't publish without putting your data in there. I think we have to move to a structure
like that to really get the amount of data. I think one of the key things to that is there's
a lot of negative data that's never published. There's a lot of it. And we're not taking that into account when we do a risk assessment. We're looking at the published papers and then trying to extract from that. It is just flipped backwards, I think. >> ROBERT RALLO: Yes. I think we have a lot
of advances. We are seeing a lot of advances, for example, in natural language processing
that can help in doing some of the extraction of the data, in some cases. I would agree
with what's been said; it is imperative to have access to the data, and perhaps having a common format is not that important. The important element is to make sure the data is well documented and we understand what the data represents; then we can derive whatever
interfaces to capture the data in the right way. >> TODD LUXTON: Thank you all for that. We
do have a couple of questions. The first one here is for Dr. Haase. What are your thoughts
on FDA and international counterparts in regulating nanomaterials which may affect clinical adoption/use
of nanomaterials? >> ANDREA HAASE: My answer is I don't have
any particular opinion on what the FDA is doing. I would like to approach the question from
a more general perspective since I also work in a regulatory institute. I think regulators
also appreciate having access to the wealth of data from the scientific domain and looking also at the data that is available from the research. Also in the medical field, it is highly relevant in the preclinical stage to have all of these evaluations coming from cell culture and coming from other, maybe potentially biochemical and acellular, testing. I think the way forward has already been paved in the Tox21 initiative. I think what we need here for the nano or innovative materials domain is a very similar approach. We need
modern and data-driven approaches that are reliable, so that not each and every material variant needs to be fully evaluated. Again, we can rely on what we know already and make extrapolations or predictions that are relevant. >> TODD LUXTON: Thank you. Our next question
here is playing devil's advocate: this approach of gathering data, with too little understanding of the difficulties of evaluating each nanomaterial for toxicity, seems to be committing the sin that was committed in biotechnology of drowning in data. How do you think this will affect the commercialization of the new materials? >> ANDREA HAASE: Maybe I can go first. The
others can add. Currently I do not see the risk that we're drowning in data. Rather I
currently see the situation where the problem is elsewhere. Currently, the full datasets are not released. Also, the metadata description is poor. That means currently, I would rather say, it is highly challenging to fully and truly evaluate what has been done, how it has been done, and how it has been evaluated. That contributes to the situation that we have today where it is not really clear: is that really genotoxic? Is that really showing
some other adverse effect? Because it is not really clear if two experiments have been
conducted under comparable conditions. Maybe the protocols were just different. All of
that matters. I think currently we would benefit from a universe where more datasets are fully
released and we have a rich description in metadata that we can truly understand what
has been done. I think then Robert will come in with all of this artificial intelligence, so you can easily sort out the garbage by using the computational approaches, and in parallel we need to really push forward for quality standards, for completeness standards. I would say it like that. >> FRED KLAESSIG: This is Fred. I have a different
perspective, shall we say. The regulators are not scientists. I think Andre asked the
question. When you get a PMN [pre-manufacture notice] and you have 90 days to act upon it, you go by analogy. You pick the closest particle you have worked with, by analogy, and you move from that as opposed to saying, I'm going to change how I evaluate the material because
of nanoinformatics. It’s just a difference in the regulatory nature. That's why I put
it more in the nanotoxicology versus nanosafety perspective. We have to ask that people move over to a more dynamic setting in how they evaluate things. I see that Andre has his
hand up. >> TODD LUXTON: I'm not quite sure how we do that, if it is possible. >> NNCO WEBINARS: It is not possible. I'm sorry about that. >> FRED KLAESSIG: You can accept that everything
I said Andre agrees with unless he lowers his hand. There's a question from Paul Harten
on ISA-TAB-NANO. ISA-TAB-NANO is a standard in ASTM International. It and other elements
were brought together in what's called eNanoMapper. ISA-TAB was a flat file, Excel-spreadsheet
type. Everything has moved forward, with movement in languages. eNanoMapper right now in my mind is an excellent tool. There is also the MendNano database that came out of CEIN. >> ROBERT RALLO: To add to what my colleague
said, nanoinformatics in general is not some magic set of capabilities that are going to
help us answer whatever we have. Nanoinformatics, in general, from data modeling, to data analysis
are the set of tools that provide scientists with additional capabilities that can help
them to advance and to look at the data in a different way. I think this is the real
focus that we have right now. How this translates, later on, to use in a regulatory environment -- this is a different question. This links with some of the things that I've said before.
If we want to use a model in a regulatory environment, we need to make sure the model
really works. We need to understand -- we need to understand how the model has been
developed. We need to make sure the model has been validated. We need to know where
we can trust the model and where we cannot. There's still a lot of work to do in the space.
But I think that the building blocks which is in part what we've been doing in all of
these past years are there. We are now in a position to start really advancing in the
field towards this direction. >> ANDREA HAASE: Maybe if you allow I can
add one more aspect, because -- I don't know, I think it is understanding who is the regulator
and who is the risk assessor. What we frequently see is that we are approached; there is some journalist out there that reads something in a paper, then they make some conclusions from that, and there's suddenly a huge public interest in one of the topics. It is really difficult
to explain what the science behind it is. And I think from that aspect, I truly believe
that also for this, we need a more open access to all of the data. Including also the negative
data that never will become published as Stacey just said. And also, if you develop or want to develop
these modern approaches to get rid of the animal tests that we still need today -- agreed, but maybe we don't want to do animal tests forever -- we need the information coming from all of the research projects to make use of the power of big data in order to develop AOPs or IATA-based approaches that can truly replace animal tests in a reliable manner.
This is also something that we need to consider here. >> TODD LUXTON: Kind of moving along the lines
to this question. As an environmental health practitioner, I appreciate the reduction in
the “sky is falling” message as most of us do. Do any of you have any insights into
disposal aspects of engineered nanomaterials and issues that we might face from that standpoint?
And how -- I guess maybe how some of the current information that we do have might enable us
to make better predictions about ultimate fate? >> FRED KLAESSIG: If I might take that on.
I come from the industry. I recommend looking to the carbon black and silica industries.
Carbon black is ten million metric tons a year production. Silica is about 4 million
metric tons. They've been in commerce. They have disposal requirements. They would be
a starting point for addressing that sort of a question. Second, I believe that our
colleagues in Europe are more complete. They connect disposal on the safety data sheet to the toxicity on the safety data sheet. Andrea is aware of CLP [Classification, Labelling
and Packaging] over in Europe. There's a lot of activity there making certain that the
supplier tells the distributor, tells the customer the same information, so that it
is disposed of properly. I defer to Andrea at that point.
>> ANDREA HAASE: I'm not sure if I need to answer. Because CLP is just the European reality of the GHS
[Globally Harmonized System of Classification and Labelling of Chemicals] system that's
globally active. I think you described it pretty well, Fred. >> TODD LUXTON: All right. We have a question
here that's probably directed more towards Fred. Are we really documenting our measurements
of dissolution, absorption, and toxicity data well enough that we can assess their quality? I
think this is -- you know, a really important question that can be expanded beyond this.
One of the challenges that I faced as a Federal researcher is the ability to utilize other
published data and having to have a very well documented QA/ QC procedure that goes along
with it. This kind of touches on a number of different topics that we've talked about
today. How do we make sure that we're meeting those QA/ QC practices when we're doing the
big data collection? >> FRED KLAESSIG: I think we're in the middle
problem. I don't think that we're getting the right metadata fully. I don't think regulators
need the mechanistic answers from the physical chemist on dissolution I think the need there
is more directional; does it dissolve a lot? Does it dissolve a little? Does it persist?
Then that would allow someone from the regulatory side would say What is from my menu? So many
from column A and so many from column B are the appropriate test to apply. As long as
eventually you do back it up with mechanisms. I think, in response to John’s question,
the question is what is your test method -- open or batch? Are you seeing a possible apparent
solubility limit in one system and not the other? First I think it comes down to methodology.
We haven't really clearly established that. I think everyone is doing a lot of work. But
I think there will be some consolidation. I bring to everyone's attention that the NanoHarmony group over in Europe is working on this as a revision of TG 105 from the OECD. I think they are looking at both batch and flow-through systems. They may be the place for more detailed answers to that question. >> STACEY HARPER: Yes, I would say even on
the tox side, when we think about studying silver nanoparticles or copper nanoparticles, we
do the dissolution studies. But we do it at one concentration and then we expose our animals
over a wide range of concentrations. We don't really do dissolution measurements that way.
Your thoughts, Fred? >> FRED KLAESSIG: I think there's limits.
I know that -- I think Keld Alstrup Jensen and Jutta Tentschert are the two people in
NanoHarmony. They are very much involved in what is the dose. What is the concentration?
And the answer is you want to find that apparent kinetic solubility limit that will tell you
that the particle exists. And to make sure everyone realizes: in the GRACIOUS framework, persistence of the product is what leads to accumulation, which leads to a certain category of tests being done. You have to identify the persistence or the apparent solubility to be able to make those sorts of decisions. They are all interconnected. I don't think
the dissolution is going to solve all of the problems. But it will help you, inform you
in terms of what toxicity testing you should do. >> TODDD Thank you. There's been several questions
or calls for the making the slides available. The overall presentation will be made available.
So that's what we can offer at this point in time. Another question here. Have any rules
of thumb evolved as to what constitutes a significant exposure of nanoparticles? The
example here is by inhalation and trying to determine if we are measuring what is noise
versus signal. But in the broader sense, what constitutes a significant exposure? That would
be to all of our panel members. >> FRED KLAESSIG: I think dissolution may
give you a floor in the sense that -- oral ingestion -- as I used with the lead example.
It might be that anything below 2 PPM lead particles is not really a particle exposure.
It is an exposure to lead salts. There's a long tradition of information on that. There
may be a dissolution floor. I don't know what's a trigger though or what would be a threshold.
I assume that's what the regulators do when they do a recommended exposure limit or something of that sort -- an OEL [occupational exposure limit]. They put that on one of the safety data sheets. I think that's a very complicated topic. Therefore, it comes much later than when the current group is being exposed. You'll be told, “oh, gee, you shouldn't have used benzene when you were a sophomore in organic chemistry.” You are going to be told after the fact that some exposures
were not particularly helpful. >> ANDREA HAASE: I think that is correct. But we need to ask: does the particle persist? Can it biopersist? Can it accumulate in tissue? Then the answer would be different compared to a soluble particle that will eventually become cleared. Or is it something that accumulates over the lifetime? Then you may end up with a significant exposure even if the initial event or the individual event can be pretty low. Yes.
>> STACEY HARPER: This would all be informative on the toxicology side too. That would have
to be taken into account -- how seriously toxic a particular nanomaterial is. >> FRED KLAESSIG: Correct, Stacey. I'm making the distinction between the particle effect and whether the toxicology of the soluble form is acceptable. You might not like the toxicology of the soluble ion. But it may tell you it is a measurable rate. >> STACEY HARPER: Yes. >> TODD LUXTON: Okay. Unfortunately, we've
reached the end of our time for today. In closing, I would like to thank our speakers
one more time. Stacey, Robert, Andrea, and Fred for sharing their perspectives and for
their wonderful presentations. It was a great learning experience for me. Thank you for being part of today's event and for asking questions. We hope you'll join us for future webinars.
Follow us on Twitter @NNInanonews and check out nano.gov where you'll be able to find
copies of the presentations. Once again, thank you very much to all of our presenters. I
hope that everybody has a wonderful morning, afternoon, and evening. Thank you so much
for joining us today. >> FRED KLAESSIG: Thank you. >> STACEY HARPER: Thank you.