Transcript for:
Understanding and Combating Scientific Fraud

Ladies and gentlemen, now we'd like to begin Dr. Suzanne Shale's lecture. We'd like to introduce Dr. Nami Arimitsu, Associate Professor at Toyo University, who will act as the moderator of this session. Dr. Arimitsu, please.

Hello everybody. My name is Nami Arimitsu of Toyo University. Now I have the great honor to introduce Dr. Suzanne Shale. Dr. Shale is an ethics advisor to the English National Health Service. This is the first time that the NHS has appointed an ethics advisor. She is currently making a film about whistle-blowing in healthcare. This will be used to teach junior doctors how to raise concerns.

She's working with the Scottish National Health Service on how to support patients and surgeons after serious errors in surgery. She designs and delivers master classes and seminars in ethical leadership and healthcare ethics, and she leads a course in healthcare ethics at King's College London. Today Dr. Shale will speak on the subject Noble and Ignoble Science: the long fight against fraud and fabrications.

Dr. Shale, please. I'd like to start by thanking the organisers for the enormous privilege of being here today to speak to you. I'd also like to thank the other lecturers who have given us such a stimulating week and so much to think about.

I'd also like to thank you, the participants, for coming to Japan to this fascinating meeting and for giving us so much opportunity for interesting conversation. I've learned a great deal already from some of the conversations that I've had with you. It's very difficult to come last and follow so many brilliant lecturers. But one of the advantages of doing so is to be able to reflect on some of the themes that have emerged.

And one of the themes that's come across very strongly from the lectures is how science is a beautiful balance of creativity and rigour, with a little bit of luck thrown in, as Martin Chalfie has taught us this morning. I think it was Edison who commented that genius was 1% inspiration and 99% perspiration. And what I want to talk about today is what happens when imagination and creativity outrun rigour and perspiration.

Because sadly there is a history in science of scientists whose imaginations have run away with them, and who have gone looking for and creating data to support their ideas, rather than looking for data which are capable of falsifying their hypotheses and enabling real discovery. So I've described this as a long fight against fraud, because it's somewhat surprising to find that this isn't a modern problem. One of the first people to write about the difficulty of imagination outrunning rigour was Charles Babbage, who's sometimes described as the grandfather of modern computing. And he was writing quite early on, really, in the development of modern science, about and reflecting on, as he put it, the decline of science in England. And he wrote this book in 1830.

And I believe it was Babbage who coined the terms cooking and trimming for some of the things that over-enthusiastic or over-creative scientists do with their data. And he was disappointed to see that the empirical scientists around him were on the one hand sometimes cooking their data, which he described as making multitudes of observations, and then out of this multitude selecting only those which agree, or very nearly agree, with what they want to argue. So that was the cooking. Trimming, he suggested, was clipping off little bits here and there from those observations which differ most in excess from the mean.
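
To make Babbage's two vices concrete, here is a minimal, purely illustrative Python sketch. The numbers and the tolerance are invented for the example; it is not drawn from Babbage or from any of the cases discussed below.

```python
import statistics

# Invented example: ten measurements of a quantity that the
# experimenter's pet hypothesis predicts should be 10.0.
observations = [9.8, 10.1, 12.7, 9.9, 7.2, 10.2, 13.1, 10.0, 6.9, 10.1]
predicted = 10.0

# "Cooking": make multitudes of observations, then report only those
# which agree, or very nearly agree, with the prediction.
cooked = [x for x in observations if abs(x - predicted) < 0.5]

# "Trimming": clip little bits off the observations which differ most
# from the mean. (Babbage's trimmer redistributed the clippings so the
# mean was preserved; here we simply pull large outliers halfway in.)
mean = statistics.mean(observations)
trimmed = [mean + 0.5 * (x - mean) if abs(x - mean) > 2.0 else x
           for x in observations]

print("raw mean:    ", round(statistics.mean(observations), 2))
print("cooked mean: ", round(statistics.mean(cooked), 2))
print("trimmed mean:", round(statistics.mean(trimmed), 2))
```

The cooked result looks spectacular because everything that disagreed has quietly disappeared, while the trimmed data merely look more precise than they really are.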

So cooking and trimming were the sorts of things that Babbage observed going on around him in the 1830s. What I'm going to be talking about, it has to be said, frankly, are much more serious fabrications than cooking and trimming. And what I'm going to do is to tell you some stories.

And I'm going to tell you three stories, one each for the scientific disciplines which are represented here, so that none of you are able to say this doesn't happen in my discipline. So this is my rogues' gallery, and we're going to start off with the physicist Emil Rupp, who in the 20s and 30s managed to mislead Einstein with some of his work. Our second rogue, and this is the one who really bugs me, because he affects me and some of my interests, is a medical researcher, Eric Poehlman. I'm not absolutely sure how we pronounce his name, I might be wrong, but I'll call him Eric Poehlman. And our final rogue is the chemist Bengü Sezen.

And I'm going to tell you a little bit about what each of these scientists did, how they were found out, and what we can learn from their activities and how their activities were managed. And what I'm then going to do is to try and figure out how to explain the existence of this sort of behaviour in science.

So let's start off with Emil Rupp. Emil Rupp was working in Germany in the 1920s and 30s.

And this extract is from a 1963 interview that Thomas Kuhn conducted with the distinguished experimental physicist Walther Gerlach, of Stern-Gerlach fame. And looking back on Rupp's influence, this is what Gerlach had to say. So Rupp in the late 20s or early 30s was regarded as the most important and most competent experimental physicist.

He did, we believed, incredible things. Later, it turned out that everything he had ever published, everything, was forged. This had gone on for 10 years. 10 years, he reflected with disappointment.

So what was it that Rupp did? The obvious problems started around 1926, when Rupp published a series of papers on the interference properties of light emitted by canal rays. And although a lot of scientists were quite excited about it, some physicists had grave suspicions about his work.

And it's interesting, this was in the days before the internet, but word got around pretty quickly among European scientists in various countries that perhaps there was something a bit questionable about his work. But that was allowed to pass for a while, and Rupp had a successful career.

He was working at the labs of AEG in Berlin. And then later in the 1930s, he carried out what one might describe as an extremely foolish and yet rather impressive piece of fabrication, because he published a paper claiming that he had accelerated a beam of protons at potential differences of 500 kilovolts. The problem, as every scientist who read the paper and knew his lab appreciated, was that his Berlin lab was simply too small for the accelerator that he would have needed to do that.

So his lab mates spoke to his employers. If we look back on how he was found out, that work on canal rays, although it had been cited by Heisenberg and Einstein and, you know, was being used, was already thought to be a bit dodgy.

And when he then finally made those proton claims, two of his colleagues at AEG, Arno Brasch and Fritz Lange, raised it with his employers and said, really, this can't be allowed to go on. So AEG carried out an investigation, and Rupp was dismissed. And the rest, as they say, is a rather ignoble history.

So what lessons can we learn from the Rupp case? One of them, I think, is very interesting, in that he managed to lure highly respected scientists into believing his claims. And one of the interesting things that has been argued by historians of science looking through the Einstein papers is that Einstein was so eager to believe that Rupp's work was right that he adopted a very uncritical position. The second thing that we could learn is that the safeguard of reproducibility in this case worked. The experimentalists who were aware from the outset that Rupp's work was questionable were the Munich physicists Wilhelm Wien and indeed Walther Gerlach, and they were able to challenge Rupp's data on canal rays.

The third important thing is that when his colleagues finally blew the whistle on Rupp's proton claims, his employer responded promptly and decisively and dealt with the problem. And I'm going to come back to these three themes as we look at some of the other cases. But one of the things that interested me in the first talk this week, by our colleague Professor Schmidt, was his comment on that temptation, when you have a beautiful theory, to go looking for data, and in fact to see data, that confirm it. And there's an argument perhaps that what Einstein fell foul of in this case was the temptation of confirmation bias.

Now, I'm just going to get back over here. I'm about to introduce you to an excuse which I believe deserves a Nobel Prize for creativity, because Rupp came up with the most glorious excuse for why he had done what he did. He got his doctor to write to his employers explaining his behaviour in the following terms: Dr Rupp has been ill since 1932 with an emotional weakness, psychasthenia, linked to psychogenic semi-consciousness. During this illness, and under its influence, he has, without being himself conscious of it, published papers on physical phenomena that have the character of fictions. It is a matter of the intrusion of dreamlike states into the area of his scientific activity.

So that was how he accounted for it. And we'll come back a little bit later to how he might account for some of these problems in a rather less self-serving way. So I want to move on now to Eric Poehlman. And I've said that this is the fraud which really gets to me. And it really does, because I'm old enough for this to matter.

Because what Poehlman was researching was obesity, which clearly doesn't affect me, and menopause and ageing, both of which do. So Poehlman was the first US scientist to be imprisoned for scientific fraud. He led a lab at the University of Vermont. And the work that he was doing, interestingly, wasn't groundbreaking, in the sense that it reflected what were in many ways accepted propositions in his field. So he hypothesised that as women aged, their bodies would show increases in LDL, or low-density lipoprotein, which is what deposits cholesterol in arteries, and that they would show decreases in high-density lipoprotein, which carries cholesterol to the liver. Well, you know, that's not particularly controversial.

We'll come back to the interesting point about his data, because this was pretty much accepted as being more or less what happened. He also hypothesised that women who were taking hormone replacement therapy, oestrogen, if they were attempting to lose weight, would lose abdominal fat far more easily than those who were not on HRT. What made Poehlman's work interesting was that he had carried out a longitudinal clinical study, so that he wasn't taking a snapshot in time. The value of his data to researchers in the field was that it was a longitudinal study which had taken place over a period of seven years.

And I'm sure, as you will all understand, getting the funding and the grants and the capacity to be able to do that kind of longitudinal study is difficult, and such a study is therefore particularly valuable. The problem was that those data that he had entered over that long period of seven years were fabricated.

So how did he get found out? This is our hero, Walter DeNino. Walter DeNino was a really interesting character. He was a brilliant high school athlete who had run so much as a student that apparently he had a series of tiny, tiny fractures in his bones. His response when he was told to give up running was to take up cycling.

And he then became a brilliant triathlete. And at the point at which he went to work in Poehlman's lab, he was training for the Olympics, and he was also preparing to enter medical school. So he already had a very good undergraduate degree in nutrition, and he'd been mentored by Poehlman. And so he was very grateful to be offered this job as a lab technician, working on this longitudinal study.

So Poehlman asked him to do an analysis of data which were on a spreadsheet, and he raised some issues with Poehlman about what it was that these data appeared to show, because as far as DeNino could tell, these data were actually falsifying Poehlman's hypothesis. So he raised this with Poehlman because it was inconsistent with some of Poehlman's published work. And Poehlman took the spreadsheets home for the weekend, returned on the Monday and said, well, don't worry about it, there have been some problems with data entry and I've corrected them for you.

So DeNino looked again at these spreadsheets and he thought, well, this is ridiculous, because there's no way that the patterns in the data that I was seeing could be something to do with a few random mistakes around data entry. So he started to look more carefully. He started to look back through some of the data that had accumulated over the seven years of this study and discovered that not only were there apparently discrepant numerical data, but when he started looking for the clinical cases from which these data had ostensibly been extracted, he discovered that some of these clinical cases seemed not to exist. The patients themselves, you know, were fictions.
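
To make concrete the kind of cross-check DeNino was doing, here is a small hypothetical sketch in Python. The field names, IDs, and values are all invented for illustration; this is the idea of tracing reported data back to source records, not a reconstruction of his actual method.

```python
# Hypothetical illustration: verify that every subject ID cited in an
# analysis spreadsheet corresponds to an existing clinical case file.
# All names and values here are invented.

analysis_rows = [
    {"subject_id": "S001", "ldl_change": 0.12},
    {"subject_id": "S002", "ldl_change": -0.05},
    {"subject_id": "S999", "ldl_change": 0.40},   # cites no real patient
]

# IDs for which actual case files exist (in practice these would come
# from the study's records system rather than being hard-coded).
case_files = {"S001", "S002", "S003"}

orphans = [row["subject_id"] for row in analysis_rows
           if row["subject_id"] not in case_files]

if orphans:
    print("Rows with no underlying clinical case:", orphans)  # ['S999']
```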

Now this is where I think the story becomes particularly disappointing, if you like, because DeNino at that stage went to speak to some of Poehlman's colleagues. He spoke to a former post-doc in the lab and said, you know, I've got some anxieties about these data. Have you ever had any cause for concern? And this was a post-doc who'd worked in Poehlman's lab for several years and who now had a professorial role at another lab. And he said, well, yes, I think there's been chat for a long time about some of these data being a bit dodgy.

I don't know whether DeNino said to him, well, why the hell didn't you do something about it? But it was clear that there was an understanding that things had been going wrong for some time. Worse than this, DeNino also went to speak to a respected professor in the field at the same university and said, I really have some worries about this, and what do you think I should do?

And the response of the professor was to say to him, no good will come of this. If you raise these issues, it's going to damage Poehlman's career and it's going to damage the university. And the only advice that I can give you is, if you're going to go anywhere near this, you'd better be damn sure that you're right. Now, I don't know what you think about that as a response from a respected professor when they hear about possible fabrications, but again we might come back to that.

So DeNino, and this is why he's my hero, was undeterred; he decided that he had a responsibility as a clinical researcher to do something about this. And so eventually he went to the University of Vermont's internal research compliance office, and the university responded in exactly the way that they should. They took these issues seriously, and eventually, having appointed a panel of five researchers to review the data, they went to the U.S. Office of Research Integrity, part of the Department of Health and Human Services, which again carried out an inquiry, and they discovered the full scale of Poehlman's fraud.

So what do we learn from the Poehlman case? First of all, it is possible for a whistleblower against scientific fraud to succeed in bringing things to attention, particularly where there's a response from the authorities that is appropriate and initiates a good investigation. So DeNino, I think we would have to agree, behaved in an exemplary fashion.

He gathered sufficient evidence to demonstrate his concerns and he kept going in the face of considerable discouragement and a sort of shrugging of shoulders from people around him. Interestingly, the two colleagues from whom DeNino sought advice gave him, I think we would probably agree, abominable advice, which was, oh, well, look, just keep out of it and, you know, keep your head down. Interestingly, if we look back now, although Poehlman was imprisoned for his fraud, and indeed paid back as much as he could from his own money of the value of the grants he'd had over the years, as far as I can find out, neither of the people who advised DeNino to keep quiet was ever censured.

And again, this is an interesting issue, I think, where we see the field of science differing quite considerably, perhaps, from medicine. And I think for me one of the real tragedies of this case is that the true data that falsified the hypothesis were actually very interesting data. Because what Poehlman had were data which seemed to suggest that the prevailing beliefs about low-density and high-density lipoprotein, the prevailing beliefs about menopause and ageing, were actually wrong.

But rather than doing something with those data, he pushed them away and entered fabricated data. So let's move on to our final rogue.

Our final rogue is a Turkish researcher, Bengü Sezen, who arrived at Columbia in 2000 to study for a PhD under Professor Sames. And I'm indebted to Martin Chalfie for correcting me on the pronunciation of Professor Sames' name. Because I said to him, have you ever heard of this chap, Sames?

And he said, Sharmes? So, from 2000 to 2005, Bengü Sezen and Professor Sames co-authored six papers together. Sezen was investigating carbon-hydrogen bond activation, and she was using NMR spectroscopy and combustion elemental analysis.

And it has to be said that not long after she settled down in the lab, her lab mates started to raise questions, because they were trying to reproduce Sezen's results and having absolutely no success whatever. The problem was that when they went to Sames and said, we can't reproduce these results, Sames assumed that the problem was not with Sezen and her work, but with their work. And one of the sad aspects of this tale is that the two graduate students who were attempting to reproduce her work were on long-term placements in the lab, and when those placements came to an end, Sames didn't permit them to continue in the lab, which in effect terminated their studies.

A third lab mate, who also got caught up in this, left the lab voluntarily. So among those who were working around Sezen, the problem of reproducibility came to be seen as a kind of ineptitude or lack of capacity among the graduates who were trying to do the work. They were the ones who were getting it wrong. So how did she get found out?

Well, this was fascinating, because it was clear among lab mates that something was going wrong. And one of them, who'd managed to remain in the lab, noticed that these reactions that Sezen was publishing only produced a sizable quantity of product when she had access to the lab in private. And so this lab mate set up a sting operation.

He actually set up an experiment to run in the lab over the weekend, you know, openly on the bench somewhere where Sezen could see it, and then went home. And when he came back, he discovered that the experiment had been tampered with in order to produce the results that Sezen wanted. So at that point, he reported Sezen to Sames, and finally Sames, faced with what appeared to be irrefutable proof that Sezen was the problem and not her lab mates, instituted an internal inquiry. Now, as is the case with a lot of these frauds, when you actually look back and see what was done, you think, how on earth did anyone ever think that was going to work?

Because when the investigators looked at Sezen's work, they discovered that she had altered her spectroscopy printouts with whiteout. She'd just gone over the wrong peaks and obliterated them. And on further investigation, it was discovered that although she was ostensibly doing this work on the NMR, she didn't have her own account to use the equipment at Columbia. They also discovered, when they looked at lab records, that she had never ordered the materials that she needed for the combustion elemental analyses that she was reporting. So, I mean, you might say it was a fairly open-and-shut case.

This is a picture of the printout, and my understanding is, because I'm not an expert on this, that the bit that was whited out was an inconvenient peak up here. So, as I say, she just took out the Tipp-Ex and covered it over. And this was eventually published in her PhD thesis, for which she was awarded a distinction. So, the investigation concluded in November 2010. They found 21 instances of misconduct. And they concluded that Sezen had carried out a massive and sustained effort, over the course of more than a decade, to dope experiments and to manipulate and falsify NMR and elemental analysis research data.

And further than that, she created fictitious people and organisations to vouch for the reproducibility of her results. Now, as I say, you look at that and you think, what on earth made her think that she could get away with it? That's your first thought, or my first thought; and then my second thought is, how on earth did she get away with it? And again, we'll come back to that question. So what are the lessons that we could learn from the Sezen and Sames affair?

Interestingly, here I think reproducibility again proved to be a significant check on fraud, and that makes it quite different from the Poehlman case, where it's much more difficult to prove that there are difficulties with longitudinal clinical research, because that's not the kind of research for which reproducibility is likely to operate as a check. The second thing I think we need to recognise is that fraud can have an absolutely devastating effect on lab mates: three students left the lab.

I know from the conversations that I've had at this HOPE conference and others that that very obvious impact on lab mates is apparent, and we know about that. But a more subtle impact is when people start to suspect that things are going wrong in their lab and all live in this cloud of suspicion and anxiety and doubt about what to do. But interestingly, of course, Sezen was eventually unmasked by a determined junior researcher.

And I think that one of the features of a lot of the fraud cases that we look at is that it is quite often junior researchers who are the ones who act in an exemplary fashion when more senior people have turned away from the problem. A lot of questions have been asked in the ethics of science literature about Sames' role in this. And the question has been posed whether or not he acted as a responsible principal investigator. Sezen was a graduate student when she worked under Sames, and you might say that as a graduate student she could and should have expected scrutiny of her methods and lab practice, because that's part of the teaching process. If you're not scrutinising somebody's methods and lab practice, are you teaching them?

Are you nurturing them as the kind of researcher that you're hoping to create for the future of science? For those of us outside science, and I would emphasise that it's those of us outside science, because every discipline has its own culture, and you may criticise the culture of philosophy and ethics in response to my criticism of the culture of science, the issue of shared authorship of papers is a bit of an odd one. Because for those outside the field, the sharing of authorship on papers whose data you cannot personally vouch for seems very odd indeed.

And so I think the question here, rather than an explicit criticism, is: what is appropriate co-authorship? And do the scientific disciplines have that right, or is it that we in other disciplines are being a bit prissy about what we're prepared to put our name to? And I think the last question is to ask whether or not Sames, as the leader of this lab, acted justly towards Sezen's lab mates, and therefore what the responsibilities are of somebody who is leading a lab towards those who are working in and for them.

So, how are we to understand all of this? I mean, you know, these are great stories, and they're perhaps worrying, or we might say, well, there are a few rotten apples in the barrel. You know, these people are sociopaths.

They're nothing to do with the rest of us. So what I want to do is to examine some of the arguments which are put forward to explain scientific fraud. So the first argument is, well, you know, some people are just dishonest, stroke sociopathic, stroke fantasists, and they're very different to the rest of us. Now, that's a very convenient kind of argument, and I'm actually not going to spend any time on it, because I'm not sure that it's the most interesting argument to explore. So the second argument, and this is the one that you see mostly in the literature around scientific fraud, is that the social structures of science, of modern science in particular, tempt people into fraud. The third argument is the argument about cognitive bias, which is that people tend to see what they believe, or what they want to believe, to be true. So fraud may not be wholly a result of cognitive bias, but it feeds on it.

People can get away with it. People like Emil Rupp could get away with it because others like Einstein really wanted to believe that Rupp's data were correct. And the final argument is that the path to fraud is walked with many small steps.

So be very, very careful about the first small step that you take. So I'm going to deal with the final three of those arguments and see where they take us. So this is the argument that it's all to do with the research environment.

And I think that you would recognise this argument. The suggestion is that, broadly speaking, modern scientists are on a treadmill. And you're on a treadmill of having to generate not just data, not just grant applications, but, importantly, data which will support grant applications. And when we look at what funders want to fund, they want to fund people who are successful.

They want to fund people who have formulated a hypothesis and carried out rigorous scientific research and then demonstrated that their hypothesis was correct. So the argument is that the problem is that funding structures reward that kind of success and that replication studies, that all-important business of reproducing other people's results, earn far less credit and in some cases almost no credit at all. So it's the intense competition, runs this argument, for funding and for promotion that creates a climate of temptation. And in that climate of temptation, it's all too easy to start thinking, well, you know, I'll just do a little bit to the data here and a little bit to the data there, because I know that I'm right about this ageing and menopause stuff.

And if only I can get the next grant in and more money, then I can carry on that research. And it's simply incredibly important to all of those women out there in the world who want to know what taking HRT is going to do to them.

Yes, I did want to know, but I wanted to know stuff that was true, not stuff that was fabricated. So I think we can see how that argument is, at least on the surface of it, quite convincing.

We know the culture that it describes and we can see the temptation that it presents. The slight difficulty that I have with this argument is that if we go back and look at what Babbage was arguing in 1830, none of that applied at the time. You know, the funding and structures of modern universities didn't exist in 1830. Babbage wasn't on that kind of treadmill.

He and his colleagues may have been on a different sort of treadmill, but it wasn't the one that modern analysts of scientific fraud describe. So I think that we might say that we could recognise perhaps that the research environment contributes to this, but it's not a wholly satisfying explanation. So let's move on to the next one, which is that fraud feeds on cognitive bias. I love this diagram. I'm sure that you will all immediately have seen a Necker cube on the screen.

And the Necker cube in itself is a lovely example of how the mind makes meaning in different ways. Because when we look at the Necker cube, sometimes it looks as though the front face is pointing out to the left. Other times it looks as though the front face is pointing out to the right.

But this is a Necker cube squared, because in this diagram there are no lines forming the cube at all. In this diagram, what we have are a series of circles intersected by short lines. And our brain does the rest of it.

It makes glorious meaning out of this diagram, out of this illustration. And what arguably we do a great deal of our time in life is to ascribe meaning to patterns of uncertainty. And because we do that all the time, and because we're meaning-making animals, it's enormously tempting to ascribe meaning to data that actually those data don't support, or to choose to ascribe to data the preferred meaning, as opposed to the meaning that perhaps might suggest that your research needs to take a new and different direction. So the suggestion is that confirmation bias may lead researchers to argue away inconvenient data or to overlook problems. And I think that this argument about cognitive bias might explain some instances of cooking and trimming. There's a temptation to do it not wholly deliberately and dishonestly, but to think, well, that bit of my data isn't really relevant, and this bit over here looks potentially problematic, but actually if I look over here, I can see the data making a pattern that is very productive and helpful.

So some of the argument here is that it's actually an innocent process, and it's part of the creative mind that we bring to science which enables people to do brilliant science. And if we didn't have that creative mind, if we didn't have that meaning-making mind, if we didn't sit in seminars and think, wow, what could I do with GFP in my worm? If it weren't for that kind of moment, no one would be doing good science. Perhaps this was the reason that Einstein was supportive of Rupp, and was prepared not to ask too many harsh questions of Rupp's data.

Again, we can see the value in some of this way of explaining scientific fraud, but I think the problem is, if we think back on the three rogues I've introduced to you today, we'd have to say that they all knew perfectly well what they were doing. So it might explain some instances of fraud and fabrication and temptation, but it doesn't explain all of them. So let's move to the small steps argument.

And the small steps argument comes from research into cognitive dissonance, which started off a very long time ago, in the 1950s, and was pioneered by Leon Festinger. I've got a little clip of a film, but I don't think I've got time, so I'm going to just skip through it and tell you the story of the research.

In this photograph, one of Festinger's assistants is discussing the experiment with the young man who has come in as an experimental subject. And what Festinger did was to present these young people with the most fantastically boring task. And what they had to do was, you can see the little pegs in front of them, they had to sit for a very long time turning the pegs. It went on for a lot longer than that. So once they were thoroughly bored, they stopped the experiment.

And what they then said to their experimental subjects is that what this experiment is really about is finding out how people's expectations affect their experience of the task. And you've just done this task which you found incredibly boring, but what we would like you to do is to introduce the task to the next experimental subject. And because we want to test out their expectations, we would like you to tell them that it's an incredibly interesting task.

And then we're going to see whether that expectation affects their experience of the task. So the students who came in were randomised to two groups, and this is the interesting thing, because at that point the experimenter was lying to the students. One set of participants was randomised to being paid a dollar to introduce other experimental subjects to the task. So they would basically be paid a dollar for lying that this was an incredibly interesting task.

Now, the other group to which subjects were randomised were paid $20. And they were paid $20 to lie to experimental subjects that it was an incredibly interesting task. So what I'm now going to do is a quick experiment which, like Martin Chalfie, I haven't done before, but I want to get your view on this.

And the question is, if we were to say that some of those subjects came to really believe that it was an interesting task, having lied about it, are they in the $20 group or are they in the $1 group? I'll skip through to the next one. We've got one group of experimental subjects who are lying for a dollar. And we've got another group of subjects who are lying for $20.

And the question is, which group of subjects came to believe that it really was an incredibly interesting task? Now, I haven't got time to share with you today the social psychological work on group norms and how we all tend to abide by group norms.

So I'm going to ask you to close your eyes and vote, because then you'll vote honestly. So please, could you all close your eyes? And what I'd like you to do now is to put up your hand if you think that the group that got paid a dollar for lying is the group that was more likely to come to believe that it was true.

Okay, keep your eyes closed. And now can you put up your hand if you think the group paid $20 was more likely to believe that it was true. Okay, thank you.

You can put your hands down. So I can tell you the result of the voting. About a third of you thought that the $1 group would come to believe it was true. And about two-thirds of you thought that the $20 group would come to believe that it was true.

Now, some of you might have seen the incautious headline for this slide, which is the surprising effect of cognitive dissonance. Because what Festinger and his researchers found is that if you were in the $1 group, you were much more likely to come to believe that you were telling the truth when you said that this was a really interesting task. And the guys who were being paid $20 said, no, I know I'm lying. I'm telling them that it's a really interesting task, but I know it's boring. I'm sorry I haven't got time to show you some of the research subjects being interviewed, but how did they explain this?

Well, the idea of cognitive dissonance is that it's a kind of internal tension you experience when you're pulled in two conflicting directions. So what happened for the $1 group is that they were thinking, I'm only being paid a dollar to sit here and tell these students this nonsense. How come I'm only doing it for a dollar? It's not worth it.

It's not even interesting lying to people. So that set up a kind of psychological tension, and it was easier for them to resolve that tension by thinking, well, you know, I'm telling the truth. Whereas the $20 people were sitting there thinking, this is great, I'm being paid $20 for lying. And that was the end of it.

No cognitive dissonance, nothing to resolve, no tension. So there's been an enormous amount of research on cognitive dissonance, as you might imagine, in the succeeding 50, 60, 70 years. And what we find is that, again, it's part of this magnificent creativity of the human mind.

Faced with conflicting beliefs or conflicting truths, we're wonderfully imaginative and can persuade ourselves that that which we would prefer to believe really is more likely. So how might this then apply to fabrication and fraud in science? I want to talk about an argument called the pyramid of choice.

And this is an argument made by Carol Tavris and Elliot Aronson in their book, Mistakes Were Made (But Not by Me). And what they suggest happens is this, and it's the effect of cognitive dissonance: down at the bottom of our pyramid, you face quite a small kind of choice.

Once you've made your choice between two conflicting possibilities, you experience dissonance. And in order to resolve that psychological tension, you then set up a whole load of rationalisations for what you've done. So at the beginning, two people who have a different response to that single choice may be quite close together; you might say there's not really that much difference in their thinking.

But by the time they've made different choices, experienced dissonance, and then rationalised their choices, by that time they're an awful long way away from each other. And they make the argument in the context of an example of a student making the choice about whether to cheat on an exam paper. So there are two possibilities. If you desperately need that grade, you could decide that you're going to give up your integrity in order to get the grade.

If you feel that you desperately need your integrity, you may decide to give up the grade and keep your integrity. And at the outset, both of those students are in a state arguably of equipoise. They're thinking, shall I, shan't I?

Shall I give up the grade? Shall I give up my integrity? Which way am I going to go?

They then make the decision. They experience dissonance, because the student who gave up the grade thinks, God, I gave up the grade! And the student who gave up their integrity is thinking, oh no, I gave up my integrity. So what they do is start to produce a mass of rationalisations. All sorts of rationalisations come to the fore for the student who gave up the grade.

And a completely different set of rationalisations comes to the fore for the student who gave up their integrity. And if we then look at where they stand vis-à-vis each other at the end of the process, the guy who gave up the grade and kept his integrity will say the other guy is a complete schmuck, if not worse than that. You know, that guy's disgusting, he's really dishonest, it's an appalling thing to do, how could he have possibly thought about it?

The guy who decided to give up the integrity and get the grade thinks, what an idiot the other guy is. Does he not realise how important grades are? Does he not realise that without getting into grad school, he's not going to be able to advance science? He's never going to win that Nobel Prize, and all because he gave up that grade now.

So that, arguably, is the effect of cognitive dissonance. Now what I'd suggest is that that explains, sometimes, how it is that people can start out at the beginning of a scientific career as people just like you, ordinary people of ordinary integrity who are committed to doing good work.

And at some point, they face that choice. Should I just massage a little bit of the data here? Should I cook a bit there?

Should I trim a bit there? And following the logic of the pyramid of choice, once you start down that road, you start to build around you the rationalisations which then underpin more and more gross fraud and fabrication. Because, you see, the comment that some people have made about Bengü Sezen was that if you look at her thesis, she clearly knew an enormous amount about the science.

She wasn't stupid. She was an incredibly intelligent young woman. At the point at which she started her career of fraud, she was a really bright graduate student, probably wanting to do good science. And at some point she made that one small decision which started to allow her to move along the pyramid of choice, putting herself in a position which she may have come to regret.

So I'm going to finish with a quotation from Robert Park's book Voodoo Science: The Road from Foolishness to Fraud. What may begin as an honest error, he suggests, has a way of evolving through almost imperceptible steps from self-delusion to fraud. The line between foolishness and fraud is thin.

Thank you very much. Thank you very much for a very interesting and insightful lecture, Dr. Shale. Now, does anyone have a question for Dr. Shale, please? That was a very fascinating presentation. I was just wondering, you've given a lot of examples of what sort of things you can see in someone who's committing scientific fraud.

I was wondering what differences there are if it's just someone doing straight bad science compared to someone who's performing fraud? Would you mind if I turned that question back to you and asked you where you thought the distinction lay? I'm not doing that just to be clever.

I'm genuinely interested to hear what your initial thoughts are about it. Yeah, it is a bit of a nut of a question, isn't it? Well, no, I mean, I didn't mean to put you on the spot, but, yeah. I would say, I guess, I'm a little unsure, because I would imagine that cooked data could look the same as data that's incorrectly processed.

And I'm wondering, because fraud is something that we should all be aware of, and that perhaps we should all keep an eye out for, even if it's just something in the back of our mind: what tools could we use to distinguish between poor science and fraud?

So when to raise an alarm, and when to say, hey, this person doesn't know what they're doing, they actually need help. Yeah, yeah.

Yeah. I really appreciate your question. I also appreciate you endeavouring to answer it. And I do apologise if I put you on the spot. I think there are real difficulties here, and one of the best books on this, I think, is written by David Goodstein.

And the reason I say that is that it's such a good question that it requires a proper answer. In Goodstein's book, he sets out at the beginning what he says are a number of nostrums or maxims about how science works.

And what he argues, and those are principles for how good science works, a bit like some of the myths that Martin Chalfie started us off with today, is that there are certain philosophical principles around what good science is that, when you actually look at scientific practice, don't hold up.

And so his argument is that if we look at what has classically been argued to be the principles of good science, and then we look at scientific practice, what we find is that if you were to censure the people who didn't accord with the first set of principles, you would end up censuring virtually everybody. So I think what I would respond to you is that you asked a brilliant question, and I think that the best way of answering it to your satisfaction would be to have a look at David Goodstein's book, because it's a very powerful argument, I think, about the difficulties that we have about what science is supposed to be and what it actually is. I mean, to give you an example of what he means: the notion of reproducibility, for instance, is going to work in some scientific areas but not others. You might say that work has all got to be reproducible in principle, but if the grant funding isn't there, or if it means having to take two sets of patients through two sets of seven-year longitudinal studies when you think you already know the answers, it may not be ethically appropriate to do that. So I think it was a very good question, and I would really recommend Goodstein. What was the title of the book again, sorry? I actually can't remember, but I'd be very happy both to double-check it and let you know later, and also to circulate it, if the organisers would be happy to do that afterwards. I can send an email.

And in fact, if you'd like, I'd be very happy to send you all a reading list of what I think are the top picks on scientific fraud, because it is a very interesting literature. Thank you. Hello. Thank you for a wonderful talk. Thank you.

I would like to hear your opinion on programs like the SCIgen computer program, which auto-generates what seem like scientific papers, which people then try to publish in order to expose the low submission standards of journals. In your eyes, are they frauds or heroes?

If I've understood your question correctly, I'd like to answer it in terms of the use of social media generally to try and keep science clean. Because I think there's a really interesting issue here. The Retraction Watch site and some of the other sites on which people can anonymously draw attention to science that they have some anxieties about are, I think, immensely valuable.

There's a downside to them as well, which we all know. I think it was Churchill who said, if I can get the quote right, that a lie will make its way around the world before the truth has even got its boots on. And the problem with that, in the context of social media and allegations about questionable science, is that there can be a huge degree of reputational damage done to somebody when allegations are unfounded. And, you know, we all know the temptation to say, well, okay, so they've said those allegations are unfounded, but there's no smoke without fire. So that once rumours have started, they can be immensely damaging.

So there's been quite an interesting debate in the ethical literature about the use of social media for this, and I think the arguments are genuinely balanced between recognising the value and also recognising some of the damage, and that maybe one of the things that we need to do is to start thinking of ways in which we can limit the damage whilst maximising the benefits. Thank you.

Thank you. Would you see as one of the greatest threats with the whole issue of fraud here the fact that people can ultimately convince themselves that what they are doing is in fact legitimate? That the corrections they're making ought to be made, that these data ought to be omitted to clean up the data set, that they're really sort of self-propagating themselves into delusions of legitimacy? Yeah. I mean, I think that's a huge issue.

I mean, Bengü Sezen objected very strongly to Sames retracting papers that they had co-authored, and said that he had no right to retract them because she was the first co-author. She believed, continued to believe, that her data were correct. And it may surprise you to know that she now works as a lecturer in a university in Turkey; I think she has assistant or associate professor status there. So she has clearly managed to convince somebody that she's been the victim of a terrible miscarriage of justice in the US, and has now gone back to Turkey to continue her legitimate researches. But I think for me there are two problems. And one is this sort of self-conviction, the belief that you really are doing the right thing.

And what then really worries me about people like Poehlman is how, once those data have gone into the literature, how incredibly difficult they are to eradicate. Because even when a paper is retracted, there may have been hundreds of people around the world, if not thousands, doing their own work on the basis of those falsified or fabricated data.

And so I think the consequential damage is potentially enormous. And the thing that really saddened me, actually, when I started to look in more detail at this field, was to see how much fraud is done in clinical research.

It may be that that's where they get found out, but I think it's also because that's where more fraud is happening. And the thought that in an endeavour which is seeking to improve human life, to enhance health and well-being, to find that that endeavour is being corrupted by misleading data, I find deeply, deeply disappointing. Thank you. I was wondering if you had any comments on publication bias. Specifically, I feel there's been some controversy from the pharmaceutical companies, and I'll use Roche and Tamiflu as an example. You focus very much on the individual scientist in the talk, but I think that's very widespread and a major concern. I think publication bias is a huge problem in the clinical field.

When you say publication bias, are you including funding bias in that? Yeah, yeah. I mean, the group that I have been working with most recently in Oxford, the Centre for Evidence-Based Medicine, are really passionate about this. And I've had direct experience of some of the problems that this creates, in that about a year ago I was carrying out a review of the practice of an orthopaedic surgeon. And the question was whether or not it was appropriate for him to be using aspirin as a form of prophylaxis for VTE.

The word's not going to come out right. And the problem is, when you look at the published data on VTE prophylaxis, there's almost nothing on aspirin, and everything is on the new drugs and the low molecular weight heparins. And the reason for that is that there is no money in funding research on aspirin. And the other problem, which you also refer to, is that none of the negative data gets published. So, you know, to the extent that the scientific research into low molecular weight heparin is of value, we're probably not seeing an enormous amount of the data that should come out into the public sphere.

I mean, looking at the other side of that, the problem is that if there were even more data to look at, we'd need even more scientists to be analysing it. We're living in a world awash with data, which is another problem for clinical practice. But I think you're absolutely right to be thinking about publication bias as a problem. So perhaps the last microphone goes to Professor Chalfie. So I had two very quick questions.

The first one is, when you go back and you look at these individuals, all of this about starting off with something little and then going on, it sounds very interesting. But for how many of the people that have been found to produce fabricated data, if you go back and look not at that fabrication but at work before that time, is there actually evidence of a completely different fabrication, or some other thing that maybe hadn't been found? I know this is more like your first thing about why people do this, but if it works, do they do it again? That's one thing.

The second thing, and I can't remember the man's name precisely, but it was the doctor that wrote that note for Rupp. Did Rupp make that name up? Do we have any real evidence that that doctor was real? That's a very interesting one, actually. The doctor's name, ostensibly, was Geb Sattel.

And he was certainly presented as being Rupp's analyst in Berlin, but maybe that was a complete fiction, which is a delightful thought. I think your first question, though, raises a real problem about understanding some of this, which is that when one tries to understand the motivations of the people who have carried out these sorts of frauds, firstly, it's extremely difficult to ask them anyway, because they may not want to discuss it.

But the other thing is that by the time you've got round to asking them, the likelihood that they would be able to give a genuinely true account of why they've done it is extremely small, not because they're pathological liars, but because they probably don't themselves completely understand why they did it. I think your question about how far you go back in the record is a very interesting one, because investigational resources tend to focus on the papers where questions have already been raised.

And so in relation to Poehlman's career, for example, he was, I think, 50 by the time questions were raised about this study. And it was his longitudinal study that was investigated. I don't believe that he has retracted the prior papers, and I don't believe there's been an investigation of those. And so, you know, there are issues about how far the scientific record remains correct.

All right. So time has come to conclude this session. I'm sure many people want to talk to Dr. Shale, so please do so during the rest of the meeting.

So thank you very much again, Dr. Shale. Thank you. Thank you very much, Dr. Shale and Dr. Arimitsu.