Bill Farmer again. Welcome back to McMaster University course Computer Science 1JC3 Introduction to Computational Thinking. Today we're going to start the last topic of the course, software development.
But before we do that, I have a question for you. Which color model has historically been used by artists? And I give you four choices. Red, green, blue.
Cyan, magenta, yellow. Red, yellow, blue. Orange, green, violet.
And I'll give you a moment to come up with an answer. Okay, well, welcome back. As we discussed last time, the RGB color model is used when you're producing color with light, and the CMY model, or the CMY model with black, CMYK, is for producing color with ink. Now traditionally, going back hundreds of years, the model that has been used by artists is the red-yellow-blue model.
So that's the answer. One of the reasons I think this model is common is that the intermediate colors, the colors you get by mixing any two of these, turn out to be orange, green, and violet. And orange, green, and violet tend to be more standard colors than cyan, magenta, and yellow.
So this is used by artists. And the last model, orange, green, violet, has also been used by artists, or maybe I should say in art and photography. Okay, I have another question.
This deals with analog computing. Which of the following is an analog computing device? I'll give you a moment to answer that.
Okay, now that you're back: Charles Babbage's Analytical Engine was a real computer, a computer that could do what computers do today. The difference was that it was not based on electronics; it was based on mechanical engineering.
So this was a digital device. And the Staffelwalze, the stepped-drum calculator developed by Gottfried Leibniz, was also a digital device. It computed numbers in a digital way.
And an abacus is also a digital device. That only leaves one: a slide rule. Now I'm mentioning the slide rule because it's a very important device for doing computation.
In some ways it's very effective. When I was a student in high school and in university, this is what I used. There were electronic calculators, but they were too expensive for me and for most students.
So this would be in the 1970s. The interesting thing is, I started university in 1974 at the University of Notre Dame, which is in South Bend, Indiana. When I went to the university bookstore in 1974, there was a whole huge corner of the bookstore devoted to slide rules. When I graduated in 1978...
That whole corner was gone. They barely sold any slide rules. So in the four years that I was at the University of Notre Dame, the transition away from slide rules happened.
And this was mainly because electronic calculators became cheaper, and in many ways they're a lot better than slide rules. So how does a slide rule work?
Most people don't know that anymore. The idea is I take a piece of wood with a scale marked on it, one, two, like this. And then I can take another piece of wood that slides over the top of it.
And if I want to do, say, 2 plus 3, what is 2 plus 3? It's 5. So I can use the slide rule to do addition like this.
Now, if you set up a slide rule like this, it's not very useful at all. The point, though, is that instead of putting numbers here, instead of having each unit be a number, I will mark the scale with logarithms. And so this will allow me to add up logarithms.
And the addition of logarithms is equivalent to multiplying the numbers that those logarithms represent. So that's how slide rules work.
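Here's a little Haskell sketch of that idea. The function name is just mine for illustration; a real slide rule does this mechanically with its scales rather than numerically.

    -- A tiny sketch of the slide-rule idea: multiply two numbers by adding
    -- their base-10 logarithms, which is what the sliding scales do physically.
    slideRuleMultiply :: Double -> Double -> Double
    slideRuleMultiply x y = 10 ** (logBase 10 x + logBase 10 y)

    main :: IO ()
    main = print (slideRuleMultiply 2 3)
    -- prints something very close to 6.0 (up to floating-point rounding),
    -- because log10 2 + log10 3 = log10 6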
They are extremely fast for computing; they're about the fastest computing device I know. The problem is, when you get done with your computation, you read something off the piece of wood, say the digits 3, 7, 1, and it doesn't tell you where the decimal point is. This could be 3.71. It could be a whole bunch of different things. It could be 3.71 times 10 to the sixth. The device does not tell you where the decimal point is, so the person using it has to figure out where the decimal point is.
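To see why, here's one more small Haskell sketch, again just my own illustration: the scales only encode the fractional part of the base-10 logarithm, so 3.71, 371, and 3.71 times 10 to the sixth all land on exactly the same spot on the rule.

    -- The rule only "sees" the fractional part of log10 x, so numbers that
    -- differ only by a power of ten land on the same spot on the scale.
    positionOnScale :: Double -> Double
    positionOnScale x = frac
      where
        (_, frac) = properFraction (logBase 10 x) :: (Integer, Double)

    main :: IO ()
    main = mapM_ (print . positionOnScale) [3.71, 371, 3.71e6]
    -- all three print essentially the same position (about 0.569);
    -- the power of ten is up to the user to supply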
And there's a hidden virtue of this: people who use slide rules a lot are very good at estimating numbers. They have to be good; otherwise they would not be able to use a slide rule effectively. Okay, so let's move on to a very important case study. This is a machine called the Therac-25.
The Therac-25 was a machine for delivering radiation to cancer patients. So how did this work? The patient sits here, or lies down here, and they get a dose of radiation. And that radiation comes in two forms.
It comes as X-rays or it comes as low-energy electron beams. The X-rays are produced by converting high-energy electron beams: there's a target that the beam passes through, and that converts it into X-rays. Now, the problem with this machine is that it did not always operate correctly. Sometimes, instead of producing X-rays, it delivered the high-energy electron beam directly to the patient.
So the first thing I want to mention that's important: this was developed by Atomic Energy of Canada Limited, AECL, the company known for the CANDU reactors, which are nuclear power reactors. So this is a big-time engineering company.
They produced this product. It's controlled by software, but in six incidents in the 1980s, this machine delivered overdoses of radiation that caused severe physical damage or even death. And in one case, this happened here in Hamilton, Ontario.
So this machine did not operate properly, and that was due mostly to software. The software failed to detect that the target was not in place, which meant that the patient got high-energy electron beam radiation.
The software failed to detect that the patient was even receiving radiation, so the technician didn't know that the patient was getting radiation, and the software failed to prevent the patient from receiving an overdose of radiation. The people who were involved got doses of radiation somewhere between 100 and 200 times what they should have gotten. If you click here in the slides, you can see more details about this. So what was the cause of the failure?
Well, first of all, it was inadequate software design. But even more importantly than that, it was inadequate software development. The coding and testing was done by only a single person. So this is software on which people's lives depend, and only one person did the coding and testing. There was no independent review of the computer code.
They just trusted that this person knew what they were doing and would do everything right. There was inadequate documentation of error codes, so when error codes came up, the technicians did not know what they meant. And there were very poor testing procedures.
So they missed a race condition, they missed an arithmetic overflow, and there was poor user interface design, so problems occurred because of input errors. And in general, software was just ignored during the reliability modeling. The engineers who produced this were careful to make sure the machine was reliable, but they acted as if the software was not part of the machine and they didn't have to do reliability modeling for it.
And last, and this was a physical problem with the machine, there were no hardware interlocks to prevent the delivery of high-energy electron beams when the target was not in place. If the target was not in place, it should have been physically impossible for the patient to receive high-energy electron beams. So this is a disaster. It's one of the famous disasters in software development.
It led to people's deaths and the destruction of people's bodies. I'm mentioning this because it helps us illustrate the importance of software as part of the systems we use today.
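Just to make that arithmetic-overflow point a little more concrete, here is a toy Haskell sketch of how an overflow can silently defeat a safety check. This is only my illustration of the general phenomenon; it is not the actual Therac-25 code, and the names are invented.

    import Data.Word (Word8)

    -- An 8-bit counter is bumped on every pass, and the safety check is only
    -- performed when the counter is nonzero. On the pass where the counter
    -- wraps around from 255 back to 0, the check is silently skipped.
    safetyCheckRuns :: Word8 -> Bool
    safetyCheckRuns counter = counter /= 0

    main :: IO ()
    main = do
      let counter = foldl (\c _ -> c + 1) (0 :: Word8) [1 .. 256 :: Int]
      print counter                    -- prints 0: 256 increments of an 8-bit value wrap to 0
      print (safetyCheckRuns counter)  -- prints False: the check would be skipped on this pass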
So software is always developed in some way, one way or another. It should be developed using a rational process, a rational development process. This is the only way to produce quality software. We can't just hope we're going to produce quality software.
We need a process whose endpoint will be quality software. And this rational process is necessarily an idealization. It can't be perfect.
It can't always be implemented perfectly, and this is because of the following problems. Humans are going to make errors; what we want to do is reduce them and catch them. Communication between humans is imperfect; this is a huge problem, and again, we want to do things in a way that reduces it. When you start a project, when you start building something, almost always there are many things that are not understood at the start. If you did understand them at the start, you would do things differently. And the technology we use to develop applications has limitations; we do not have perfect technology.
And finally, requirements change over time. That means when we start developing something and we produce a product that meets our requirements, it may be that in a year we have different requirements and now our product is inadequate. So we're going to talk a bit about the software development process, but before we do that, we're going to talk about someone named David Parnas. I'm going to let you decide who this guy is. You have four choices, so we'll stop for a moment.
Okay, well, welcome back. This is a picture of David Parnas. Actually, all four of these answers are correct.
He is a professor emeritus in the Faculty of Engineering at McMaster University. Professor emeritus means he is retired, but he retired at the highest rank of professor. I don't know if you've noticed, but there's assistant professor, higher than that is associate professor, and the highest rank is professor. So he retired at the rank of professor, and that's why he has the title professor emeritus. He is the person who started the software engineering program at McMaster, which was one of the first three software engineering programs to be accredited in Canada.
He also developed a number of ideas about software engineering, but in particular he developed a very powerful, useful idea called the idea of information hiding in modular design. We'll say just a little bit about that later. And he's one of the founding fathers of software engineering. So, let's get down to business.
As I said, he developed these ideas of modular design and information hiding, so he's famous for that. But he's also an advocate of precise documentation using various notations, in particular what are called Parnas tables.
So something he noticed is that when you want to express a specification in a logical way, it's often better to give an engineer a table rather than a logical formula. I'll show a small illustration of that idea in a moment. He's also one of the first and strongest advocates of software engineering, of thinking of software development as an engineering discipline. Before software came about, many people, and many people still today, thought of engineering as a very physical kind of discipline. You're building buildings, that's civil engineering; you're developing mechanical devices, mechanical engineering; you're working with chemicals, chemical engineering, and so forth.
So software engineering is different because it's not so physical. A piece of software is not a physical thing. So he's one of the strongest advocates for saying that software should be developed just like other engineering products. I mentioned that he is the main mover behind McMaster's undergraduate program in software engineering, and generally speaking, he is one of the most recognized researchers in the history of the Faculty of Engineering at McMaster.
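Coming back for a moment to the point about tables: here is a tiny made-up Haskell example of the kind of thing I mean. It's only in the spirit of a very simplified Parnas table; the conditions, names, and actions are all invented for illustration. The idea is that every combination of conditions gets its own cell, and each cell can be reviewed on its own instead of untangling one nested logical formula.

    -- The table, written out as a comment:
    --
    --                    | temp <= limit | temp > limit
    --   -----------------+---------------+--------------
    --   doorClosed       |    RunBeam    |   ShutDown
    --   not doorClosed   |    Hold       |   ShutDown

    data Action = RunBeam | Hold | ShutDown
      deriving Show

    action :: Bool -> Double -> Double -> Action
    action doorClosed temp limit
      | temp > limit = ShutDown   -- right-hand column of the table
      | doorClosed   = RunBeam    -- top-left cell
      | otherwise    = Hold       -- bottom-left cell

    main :: IO ()
    main = print (action True 4.5 5.0)   -- RunBeam: door closed, temperature within limit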
And I recommend, maybe not today, but at some point, that all of you read some of the papers in this book of collected papers by David Parnas. These are classic papers about how to develop software and about the challenges of software.
Okay, so we're going to stop here, and we'll continue the discussion of software development next time. Thank you very much. See you next time.