In late 2006, New York City Mayor Michael Bloomberg created the Center for Economic Opportunity (CEO). Born out of recommendations made by the Bloomberg-appointed public-private Commission for Economic Opportunity, CEO was designed to be an innovations lab that would test anti-poverty programs using a results-based approach. With a budget of $100 million, CEO would closely monitor new programs and hold them accountable for producing measurable results. Uniquely, CEO would cut funding for programs that did not “make the grade.”

Bloomberg named Veronica White the Executive Director of CEO. White had decades of experience in executive positions at several New York City agencies, but at CEO she faced daunting tasks: she would have to redefine how poverty was measured in the city, facilitate cross-agency partnerships, and, most importantly, develop an effective and achievable evaluation system for all programs. This case traces the CEO team’s challenges in placing program evaluation at the core of its mission.

CEO programs are geared toward three target populations: working poor adults, young adults between the ages of 16 and 24, and families with children ages five and below. In its first year of operation, White and her team launched a slate of anti-poverty programs that varied widely in scale and scope, ranging from New York’s first-ever conditional cash transfer program to a program designed to accelerate graduation rates at community colleges. But from the beginning, CEO’s evidence-based programming was put to the test. White faced constant pressure to “produce results quickly,” and with the 2008 recession, CEO endured significant cuts to its evaluation budget. White and her team had to make the most of limited resources while still sustaining a comprehensive evaluation policy.
Evaluation of social programs is now considered almost as important as the programs themselves. This case helps students examine the real-life issues and challenges of managing and budgeting for evidence-based programming. Students learn about the merits and associated costs of various evaluation tools and become better consumers of evidence.