SREE - Advancing Education Research

Conference Videos

Question and Answer Session
Anthony Bryk, President, Carnegie Foundation for the Advancement of Teaching

This is an edited transcript; changes have been made to clarify, but not alter, the meaning of the exchanges.


Brian Jacob: I hear this, and it's hard for me to disagree with anything. But as I listen to the examples, I'm also struggling to understand, in specific terms, how this is different from much of what we have all done to some extent, and what many of our colleagues have done over the years, in the study of literacy coaches. These are essentially before-and-after analyses, controlling for prior achievement, and that seems quite similar to things I've read about and done for many years. I'm wondering what more specific guidance we can get so we can move forward to this new, different approach.

Tony Bryk: Brian was one of my former students, and I obviously didn't educate him well enough if he didn't get the wisdom of my remarks. I've failed. I think your question is right on target. There is a whole other domain of empirical work going on around the rapid expansion of value-added analysis, much of it focused on teacher effects, which is part of what I wrap in here. But the difference between that and this, in the bridge I want to try to make, is that there is often no theory of practice that sits underneath it. We are often analyzing data that already exist, and we are trying to figure some things out, but we haven't really had the opportunity to systematically develop a working theory of practice, go out and measure the other pieces, and see that the intervention is working over here but not over there, that the difference is about the contexts, and ask what we can learn from them. The difference is in being more prospective. So in that sense you're absolutely right, it is not new, but there are opportunities to learn that we are not capitalizing on. That is the point I'm trying to make.
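
[Editor's note: for readers unfamiliar with the kind of analysis Jacob and Bryk refer to, the following is a minimal sketch of a before-and-after, value-added-style regression: post-test scores modeled on prior achievement plus an indicator for exposure to a literacy coach. The data are simulated and the variable names are illustrative; nothing here is drawn from the study under discussion.]

    # Simulated before-and-after analysis controlling for prior achievement.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    pretest = rng.normal(50, 10, n)   # prior achievement
    coached = rng.integers(0, 2, n)   # 1 = teacher worked with a literacy coach
    posttest = 5 + 0.9 * pretest + 2.0 * coached + rng.normal(0, 5, n)

    X = sm.add_constant(np.column_stack([pretest, coached]))
    fit = sm.OLS(posttest, X).fit()
    print(fit.params)  # the coefficient on `coached` is the adjusted "effect"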


Judy Singer: OK, Tony, so if Brian was your former student, you are the person who hired me for my first job.

Tony Bryk: You just have to seed the room, that’s all I have to tell you.

Judy Singer: I'm sitting here listening to this example of the antidepressant studies, where the RCTs are finding effects that observational studies suggest might not exist. What about turnabout being fair play? If you go back to the medical literature, the case of hormone replacement therapy is a very potent example of exactly the opposite phenomenon. Observational study after observational study established, and everybody took as an article of faith, that HRT was in fact what should be happening to women, and it took some serious RCTs to show that the matching of treatment to women was creating a selection effect that only the RCTs could reveal. I find myself sitting here thinking that maybe we need to get to the next phase of life, where we stop setting up straw men: it's either RCTs or observational studies, it's either exploratory or confirmatory. You gave a John Tukey reference; I'll give another one. He wrote a paper in the early 1980s where he said we need both exploratory and confirmatory work. To some extent the rhetorical device of a straw man is incredibly effective, but maybe part of what we need to do – let me state it as a question – is to move beyond that, to another phase? I think that to simply say that RCTs are ignoring things is potentially as misguided as saying that an observational study of variation in a treatment, with no control group whatsoever, might also be ignoring something.

Tony Bryk: If you remember, where I started was that I really wanted to expunge the idea that the RCT is the gold standard. And remember where I ended up: with an example where I think an RCT strategy would be both cost-efficient and extremely effective at advancing learning. The issue is not that there is one design that is best for all situations. It's to put the problem of practice improvement, and the empirical research to inform it, at the center, and then ask what strategy is most likely to accomplish this. This is a place where the idea that there is a gold standard to be applied to every problem gets in the way of our being opportunistic. Put the problems we want to learn about in the center position, and then step back and figure out the most efficient way to get good information on those problems. Sometimes that may take you down an RCT path, and sometimes it may not. The point is also that you can learn things from different ways of inquiring. If differences occur, then again we have to ask why. Why are we getting one kind of evidence out of one stream of research and a different kind of evidence somewhere else? What is causing that? That is also something we should be exploring.


Question: I don’t know Tony at all; that makes me the odd man out here…

Tony Bryk: Oh, we still have time…

Question: I love this. It’s wonderful, it is clean, and I thought it was clear. I think these questions about variation are extraordinarily important. But as you were talking, I couldn’t help hearing a little voice in the back of my mind, thinking of all of these teachers and principals keeping track of all of this information, over all of this time. I started thinking about how we were constrained, in our own scaling-up project, by cost, and about the things we sacrificed because we couldn’t afford them. We don’t even count many of the teacher-time variables. So I started wondering: if you were doing a cost-benefit analysis, what kinds of questions would become so important that we would want to, or feel justified in, devoting enough expense to their elucidation? That was a problem that was there for me as you were talking.

Tony Bryk: This is a place where technological advances that are really bringing down the cost and overhead of collecting data are occurring very rapidly. In this study, do you know how we collected the student assessment data? It was done on a Palm Pilot – a Wireless Generation product running a DIBELS assessment – that automatically synced the data to New York every night and then sent it to us. So all the paper data collection and coding and all that stuff: gone. That same technology can do this. We didn’t have the advantage of it because we didn’t have the design resources. All that information the coach is keeping about who they actually saw that day: same thing. Schools are actually really primitive when it comes to their capacity to collect and process basic information about their work. This is a place where, if we put more of a digital infrastructure under the basic work that students and teachers do, we would actually reduce a lot of mindless paperwork. School people talk about it all the time – all the stupid paperwork they have to do. Well, why do we still do it that way? Hardly anybody else does it that way. The advantage here is that to the extent that more of this work actually occurs in a digital environment, the evidence is a by-product of doing the work, and that opens things up for much more of this kind of activity to occur. Not as something you put on top of practice; it is basically harvesting the evidence, the information, out of practice. That is on the horizon. It is a year or two, or a few more years, but it is coming. I'm absolutely convinced of that.
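
[Editor's note: a minimal sketch of Bryk's point that, once practice happens in a digital environment, evidence becomes a by-product of the work. The log format and field names below are hypothetical, not those of any product mentioned in the talk.]

    # A coach records each contact as part of doing the work...
    import json
    from collections import Counter
    from datetime import date

    def log_visit(path, coach, teacher, minutes, focus):
        """Append one coaching contact to a newline-delimited JSON log."""
        record = {"date": date.today().isoformat(), "coach": coach,
                  "teacher": teacher, "minutes": minutes, "focus": focus}
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    # ...and the same log is later harvested as research evidence,
    # with no data collection layered on top of practice.
    def contacts_per_teacher(path):
        """Count coaching contacts per teacher from the log."""
        with open(path, encoding="utf-8") as f:
            return Counter(json.loads(line)["teacher"] for line in f)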


Neil Heffernan: You were talking about what it would look like if you had all this data online. It reminds me of the Netflix challenge, which people might know about: Netflix offered a million dollars for data miners to better predict which people are going to want which movie. That inspired my advisor, Ken Koedinger at Carnegie Mellon, to create the educational data mining challenge, which will be part of the upcoming Knowledge Discovery and Data Mining conference, and which some people in this room might actually want to compete in. The task is to predict which kids are going to get which items right inside an intelligent tutoring system. It doesn’t have all the features about what is happening in the classroom, but I thought I would throw that out there and see if you had any comments about that sort of thing.
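
[Editor's note: a minimal sketch of the prediction task Heffernan describes: given a log of student-item responses, predict whether a given student answers a given item correctly. The data here are simulated under a simple Rasch-style model, and a plain logistic regression stands in for the far richer models used in the actual competition.]

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n_students, n_items, n_obs = 200, 50, 5000
    ability = rng.normal(0, 1, n_students)
    difficulty = rng.normal(0, 1, n_items)

    s = rng.integers(0, n_students, n_obs)   # which student responded
    i = rng.integers(0, n_items, n_obs)      # which item they attempted
    p_correct = 1 / (1 + np.exp(-(ability[s] - difficulty[i])))
    y = rng.random(n_obs) < p_correct        # True = answered correctly

    # One-hot student and item indicators as features.
    X = np.zeros((n_obs, n_students + n_items))
    X[np.arange(n_obs), s] = 1
    X[np.arange(n_obs), n_students + i] = 1

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print("training accuracy:", round(clf.score(X, y), 3))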

Tony Bryk: It is another part of it. It speaks to this whole issue, which I didn't talk about today – it would have been a different talk – of the infrastructure that, if we could put it in place, could dramatically influence our ability to learn about how to improve. In some ways it’s not the kind of thing that gets a lot of attention, but it sure could enable a lot of improvement if we built that kind of infrastructure.


Judy Gueron: I understood what you were talking about when you said that you cannot improve what you cannot measure. I like the idea of good outcomes and of measuring processes, but I don’t understand why you gave up on the counterfactual. It seemed to me that you made a leap: that in some sense, through volume, you can get away from the problem. What you are describing can be built on top of experiments, and then you are sure of the internal validity. Here, at the end, I'm still wondering about that, and giving it up seems to me to be giving up a lot. An experiment doesn’t have to be compromised by all that you bring to it in terms of the in-depth information that will help you improve practice, which is certainly important. You don’t want the experiment to be a black box, which in the end is not very useful: it worked or it didn't, and it varied across places, and I can't figure out anything from that. That’s not a useful place to be. But here, I’m not so sure that I've learned that it works, or that those variations across schools are credible to me.

Tony Bryk: There are two parts to my response to this. One, I actually don’t think the experiments are feasible – at least not in contexts where interventions have variable effects on different kinds of students, where the interventions are mediated through teachers, through people, and where the contexts in which those people work influence how the intervention gets taken up and used. There are just too many facets of variation to put into an experimental design. Not in our lifetime. So that would be my first observation. The second is that ultimately what matters, I'm arguing, is the ability to achieve reliability in this intervention, in the hands of different people, in different contexts, over time. In an accumulating-evidence strategy, that’s what keeps coming at you. You continue collecting the evidence. Can you make this happen, reliably, in a wide range of contexts, and in the hands of large numbers of individuals? In any one of those, I grant, the counterfactual may be weak, but replicability is the actual standard of a bench scientist: can I make this thing happen over and over again? That is how you get from one side to the other – from evidence to use, I would argue. But thank you for the point; you really brought it sharply forward.
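
[Editor's note: a minimal sketch of the accumulating-evidence standard Bryk invokes: pooling per-site effect estimates and asking how much of the site-to-site variation is real. The effect sizes and standard errors below are invented, and the DerSimonian-Laird random-effects estimator is used here as one conventional way to summarize such evidence, not as Bryk's own method.]

    import numpy as np

    effects = np.array([0.25, 0.10, 0.40, 0.05, 0.30])  # per-site effect estimates
    se = np.array([0.08, 0.10, 0.09, 0.12, 0.07])       # their standard errors

    w = 1 / se**2
    fixed = np.sum(w * effects) / np.sum(w)              # fixed-effect pooled mean
    Q = np.sum(w * (effects - fixed) ** 2)               # heterogeneity statistic
    df = len(effects) - 1
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_star = 1 / (se**2 + tau2)                          # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    print(f"pooled effect = {pooled:.3f}, between-site variance = {tau2:.3f}")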

