SREE - Advancing Education Research

Conference Videos

Question and Answer Session
Cecilia Rouse, Executive Office of the President, Council of Economic Advisers

This is an edited transcript; changes have been made to clarify, but not alter, the meaning of the exchanges.


Norton Grubb: …[In] charter schools, where the evidence suggests perhaps a negative mean effect and enormous variance. Of course, you've got some great schools and some lousy schools, among which it's difficult to choose. Secretary Duncan seems to have President Obama's full support, despite the lack of research evidence. As part of the research orientation of the Obama administration, is there something you can do about this disjunction?

Ceci Rouse: I think I have a meeting now (laughter). We came in and were very focused on higher education, for various reasons which I can talk with you about later. We're in the midst of the Elementary and Secondary Education Act reauthorization. I completely concur that the evidence on some of the ideas the Secretary would like to see folded into ESEA is not as strong. But I would push back that I think you will also see there's an interest in learning more about whether his instincts are truly correct. So, for example, on performance-based pay: Jesse Rothstein is my colleague (in audience), and we're very aware of the challenges, shall we say, in linking teachers to students and in writing the right specification using teacher value-added. On the other hand, I think the idea that merit pay may make a difference, and that you want something a little less subjective than just a principal's instinct, is a fair question. I think the Teacher Incentive Fund is a genuine attempt to structure something so we can learn whether this really makes a difference or not. On charter schools, I would argue that the evidence there is frankly pretty mixed, and that the Secretary is also committed to shutting down charter schools that do not look like they're performing. So he is trying to cherry-pick. In other words, there's a lot of heterogeneity. We don't want to just go with charter schools at all costs; the idea is that high-performing charter schools are what we want to encourage. I think that falls into the category of choice being important for students. Is that going to revolutionize education? It's not so clear that the evidence is there, but choice within the public school system is important for students anyway.


Catherine Snow: You mentioned the potential danger in the i3 funding mechanism: that school districts will find researchers to create big partnerships with. I wonder if anybody within the Department of Education, or anywhere else, is thinking about the value of supporting partnerships in advance of the research questions, and letting the researchers and the educators think together about what would be the right questions to submit, the right programs to validate or develop or scale up. Because I'm not sure that the expertise to make those decisions resides solely in the districts. And I think a preexisting partnership might well promote the effectiveness of that funding mechanism.

Ceci Rouse: I agree with you that partnerships developing organically, and in advance, is the ideal. The idea is that we'd like to see i3 continue. It came out of Recovery Act funding, but we'd like to see it continue over time. So the model really is for these partnerships to develop. If they're already there, we'd hope these conversations are already happening. And if they're not already there, then maybe the district goes to some researchers, and over time that relationship develops. I should also say that OMB has set aside, as part of its $100 million initiative, money for technical assistance to the districts to help them develop these partnerships and to help these evaluations be better structured.


Guanglei Hong: I wanted to comment on the Teacher Incentive [Fund] initiative. One theory, one rationale, we agree with is that if you increase teacher pay (we know that teachers are terribly underpaid in this country compared to some other countries in the world), you're going to dramatically improve the recruitment pool for the teaching profession. This is something we probably need evidence to show is true. But I doubt that teacher incentives could actually improve teaching performance. Now, contrast that with another rationale. People used to think all teachers need is more resources, so if we invest money in education, we're going to see results. Now you see another rationale, saying that as long as we have some money at the end to attract people and lure them, they're going to produce performance in order to get the money. But what's missing is in the middle. How do you help teachers improve? Putting money at the beginning is not going to help. Is it going to work if you put money at the end? What is missing in the entire framework is what it takes for a teacher to improve her performance, given who she is and given the kind of students she faces. I wonder if there's money that should be invested in that part, which is perhaps more important, on top of increasing pay for teachers, which I will always endorse.

Ceci Rouse: Am I to interpret the question as really asking how you improve professional development? Exactly, I completely agree. In the OMB initiative there is money for a joint NSF-Department of Education math professional development study. But more importantly, if you look at i3, that's a great place for lots of studies and good randomized evaluations of professional development programs. The idea is that i3 is a place where there is money to study all these sorts of promising practices and to help develop a really strong evidence base.


Robert Boruch: Can you say something about the way you arrange your thinking about timeframes in all this? Because some things are going to be short term, and some things are going to take an agonizingly long time to get off the ground, much less to detect effects for. How do you dimensionalize that in your own head, when thinking in this context?

Ceci Rouse: So, for example, take i3. These are three- to five-year grants; you're saying some programs may take longer than that to get off the ground?


Robert Boruch: Part of the question is how do you know that's the right amount of time? What kinds of empirical data do you rely on to understand the likelihood of detecting effects within reasonable amounts of time?

Ceci Rouse: In thinking about the design of these experiments, of these studies, you have to ask: what is it that I'm trying to study? What is the treatment, what is my unit of analysis, when am I likely to see impacts, and impacts on what sorts of outcomes? I think that is part and parcel of what the design is. And at this stage, especially with the things I talked about today, starting from the Access and Completion Fund and the Community College Challenge Fund, to the early education grants, on down to i3 and TIF, I think the idea is that every one of those is going to be different. We're trying to make the money available and have the technical assistance available, and that's what we're all going to have to sort out: how, within the timeframe that at least this money covers, do you design a program? Almost all of them require matching funds. Maybe the Federal part supports the front end, and then you have a foundation, which is a little more patient than the Federal government, supporting the longer follow-up. But I think that's all part and parcel of the design of these studies. And it's important, don't get me wrong. I think it's important that we not be looking for outcomes in six months from programs that we know should take two or three years to make a difference.


Helen Ladd: You haven't said much about the reauthorization of ESEA. I'd like to hear your statement of the Obama administration's view on that. Then I have a specific element that I'm interested in, which links up to the research. I, and various other people, have been pushing for a change in the way schools are held accountable. We would like to bring more judgment into the system, through some sort of inspectorate approach. But we don't know how to do that in the U.S., because we don't have much experience with it. So it seems to me that the reauthorization would be an opportunity to encourage states to come up with new or alternative ways to determine and hold schools accountable for a broader set of outcomes than is possible with straight test-based accountability. Could you comment on those issues?

Ceci Rouse: Sadly, I cannot. The reason is that we are in the process of working our way through the reauthorization, but we're not yet at the point where I can talk to you about it. The good news is that if I were able to talk to you, it would mean you would have much less influence on what we do. So what I encourage you to do is talk to Jesse, because these conversations are happening now. If you have ideas, you should let us know today, before 5:00, when we have an important meeting on this issue. I'm not joking about it. If you have ideas you should let us know ASAP, because the conversations are happening now. Then we will be able to talk to you more about where we stand.


Judy Gueron: This is a more general question about your statement on wanting to get larger impacts than what Opening Doors found, your dissatisfaction with small to modest impacts. I think it's a challenge across all fields and all experiments: random assignment trials and good research in general have found small to modest impacts, as has much medical research. The only project I have been personally involved in that got enormous impacts was a quarter-of-a-billion-dollar guaranteed jobs program for youth in the 1970s (a quarter of a billion in those years' dollars). If you step back, is incrementalism what it's about? Are we forever trying small things, or relatively modest and small things? Particularly since the Reagan administration, there has been no real money to test things, and you were basically cobbling controlled trials on top of what the system could itself spit out. So is there, in this vision, large money for radical ideas, expecting that, since this is our venture capital, a good share of them are going to fail, but looking for real breakthroughs at a scale that might get you beyond the incremental? And by the way, I'm not against the incremental. I think we too often knock it as a conservative tool. We get progress, and that's important; the next thing is built on it. But it takes out-of-the-box ideas to try to get massive change. An example might be something no serious researchers really supported: the Welfare Reform Act of 1996, which certainly I, among others, did not expect to end up where it ended up. It was no RCT, but it had a magnitude of impact like the guaranteed jobs program I'm talking about, which equalized minority and white employment rates. That's the kind of mammoth change that is really impressive. I wonder what you'd say about that.

Ceci Rouse: I would say that is really a tension. I had forgotten my last slide, which I am going to use this opportunity to show you, because it was a little bit of a downer. I think that is definitely the tension, and I think this administration doesn't feel like it knows what the next big bet is on which to put those big dollars. But if it came, I think this administration would be open to putting real money behind it, in order to see if you could really move the dial. On the issues that I've been involved in, at least, I don't think folks felt we knew enough, or had enough knowledge, to actually make that big bet. So my last slide, which I did forget (it was a bit of a Freudian slip), was the humbling slide. Lest we get overconfident here: there was a randomized evaluation of the TRIO programs, I believe conducted by Mathematica, and it suggested not-so-impressive impacts. So what did Congress do? It wrote into the Higher Education Opportunity Act of 2008 a prohibition on using random assignment to evaluate TRIO programs. Within this administration, we've had conversations about how, nonetheless, to get this group of programs to want to evaluate at least variations on their programs. This administration is still interested in pushing forward, but I think we do have to remember that a strong evaluation with strong results does not always translate into policy, especially when it suggests the impacts aren't what some people would like. And I guess I'm out of time.
