10% de réduction sur vos envois d'emailing --> CLIQUEZ ICI Retour à l'accueil, cliquez ici H A R V A R D B U S I N E S S S C H O O L Copyright © 2002–2010 by the President and Fellows of Harvard College. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means without permission of the Harvard Business School. Harvard Business School must reserve the right to make changes at any time affecting policies, fees, curricula, courses, degrees, and programs offered (including the modification or possible elimination of degrees and programs); rules pertaining to conduct or discipline; or any other matters cited in this publication. While every effort has been made to ensure that this publication is accurate and up to date, it may include typographical or other errors. If you have any comments about this guide, please contact rreiser@hbs.edu or infoservices@hbs.edu. Printed October 2010 Citation Guide 2 0 1 0 – 1 1 A C A D E M I C Y E A RTable of Contents About This Guide Purpose of Citations What to Cite Types of Citations: Footnotes, Source Lines, and Bibliographies Footnotes and Endnotes Source Lines Bibliographies Repeating a Citation Ibid. Shortened Footnote Creating New Citation Styles Permission Requirements Examples of Citations Advertisements Analyst Reports Annual Reports (Printed) Annual Reports (Online) Articles Blogs Bond Prospectuses Books (Printed) Books (Online) Brochures Cases (Printed) Cases (Online) Charts Citation within a Citation Classroom Discussions Compiled Information Conference Papers Databases Downloaded Documents E-mail 4 4 4 5 5 5 6 6 6 7 7 8 9 9 10 10 11 11 12 13 13 15 16 17 17 18 18 18 18 19 20 20 20Citations of Commercial Databases ABI/INFORM Bloomberg Business Source Complete Capital IQ Compustat Datastream Deal Pipeline Economist Intelligence Unit (EIU) eMarketer Euromonitor Factiva First Research Forrester Frost & Sullivan Gartner Global Financial Data Global Market Information Database (GMID) Hoover’s I/B/E/S ISI Emerging Markets JSTOR LexisNexis MarketResearch.com Academic Mintel OneSource SDC (Securities Data Company) SourceOECD Standard & Poor’s (S&P) Thomson ONE Banker World Development Indicators (WDI Online) Endnotes Bibliography Films Government Documents Illustrations Interviews Journals Legal Cases Magazines Maps Market Research Reports Memorandums Minisodes Movies News Websites News Wires Newspapers (Printed) Newspapers (Online) Notes Periodicals (Printed) Periodicals (Online) Podcasts Powerpoint Presentations Press Releases Proceedings Radio Programs SEC Filings Secondary Sources Slide Presentations Tables Technical Notes Television Programs Theses and Dissertations Unpublished Papers Videos Webcasts Websites Working Papers (Printed) Working Papers (Online) 20 21 22 22 23 23 23 23 24 24 25 25 25 26 26 27 27 28 28 29 29 30 30 30 31 31 32 32 33 33 33 33 34 34 35 35 35 36 36 36 37 37 37 37 37 37 37 37 37 37 38 38 38 38 38 38 38 39 39 39 39 39 39 39 39 40 40 40 41 42Citation Guidelines 4 CITATION GUIDE About This Guide This guide describes the citation conventions that HBS students should use when writing research papers. The guide has been adapted from Chapter 3 of the Style Guide for HBS Casewriters, which is available online at http://intranet.hbs.edu/dept/drfd/caseservices/styleguide.pdf. For information about citing source materials not covered in this guide, please contact rreiser@hbs.edu. 
Purpose of Citations

There are three main reasons to include citations in your papers:
• To give credit to the authors of the source materials you used when writing the paper.
• To enable readers to follow up on the source materials.
• To demonstrate that your paper is well-researched.

There are many ways to document one's research. The following guidelines, based on The Chicago Manual of Style, 15th ed., present one method. Whichever method you choose, it is important to follow a format that is clear and consistent.

What to Cite

You should cite all direct quotations, paraphrased factual statements, and borrowed ideas. The only items that do not need to be cited are facts that seem to be common knowledge, such as the date of the stock market crash. However, if you present facts in someone else's words, you should cite the source of those words. In addition, if you paraphrase large amounts of information from one source, you should cite that source, as emphasized in Harvard University's Expository Writing guidelines:

When you draw a great deal of information from a single source, you should cite that source even if the information is common knowledge, since the source (and its particular way of organizing the information) has made a significant contribution to your paper.1

Failure to give credit to the words and ideas of an original author is plagiarism. Most people do not intend to commit plagiarism but may do so inadvertently because they are in a hurry or because of sloppy work habits. For tips on how to avoid plagiarism, see the following resources:
• "Misuse of Sources," in Gordon Harvey, Writing with Sources: A Guide for Harvard Students, second edition (Indianapolis/Cambridge: Hackett Publishing Company, 2008).
• "Working Habits that Work," in Academic Integrity at Princeton, Princeton University, http://www.princeton.edu/pr/pub/integrity/08/habits.
• "Policy on Plagiarism & Collaboration," on the HBS MBA website, http://my.hbs.edu/mbadocs/admin/quick_info/policies/academic/stuwork/plagiarism.jhtml.

Types of Citations: Footnotes, Source Lines, and Bibliographies

Citations can appear in three main forms: footnotes (or endnotes), source lines, and bibliographies. Each form contains similar information arranged in a different way. The following sections provide details about each form.

Footnotes and Endnotes

Footnotes and endnotes have the same function: to cite the exact page of a source you refer to in your paper. The only difference between footnotes and endnotes is placement: footnotes appear at the bottom of the page, whereas endnotes appear at the end of the document. The main characteristics of footnotes and endnotes are as follows:
• They are preceded by a number.
• The author's name is in natural order.
• The elements of the citation are separated by commas.

The following examples show a quotation and its corresponding footnote or endnote:

Quotation cited in text: Sahlman says, "Taking advantage of arbitrage opportunities is a viable and potentially profitable way to enter a business."32

Corresponding footnote or endnote: 32 William A. Sahlman, "How to Write a Great Business Plan," Harvard Business Review 75 (July–August 1997): 103.

Source Lines

Source lines typically appear under charts, exhibits, tables, and other graphical items. Source lines should acknowledge the source of the graphic or the data that was used to create it. A source line begins with the word Source and continues with the same information that would appear in a footnote or endnote.
The following are some examples of source lines:

Source: Jon F. Thompson, Cycle World, vol. 35, no. 6 (June 2008), p. 23.

Source: "Worldwide Semiconductor Shipments," Semiconductor Industry Association website, http://www.sia-online.org/downloads/ww_shipments.pdf, accessed August 2009.

Source: Compiled from Bloomberg LP, LexisNexis, and SEC filings data, May 2008.

Source: Casewriter's diagram based on Rhythms NetConnections, Inc. price data for April 7, 2007 through April 30, 2008, obtained from Thomson Reuters Datastream, accessed November 2008.

Bibliographies

A bibliography lists all of the references you used to create a research paper. The bibliography appears at the end of the paper, after the endnotes, if any. If you have included footnotes (or endnotes) and source lines in your paper, then you do not need to include a bibliography unless your professor has requested one. Bibliographies have the following formatting conventions:
• The first author's name is inverted (last name first), and most elements are separated by periods.
• Entries have a hanging indentation style in which all lines but the first are indented.
• Entries are arranged alphabetically by the author's last name, or by the first word of the title if no author is listed.

Bibliographies typically appear in documents that use the author-date style of citation, which is not covered in detail here for space reasons. The following is a brief example of the author-date style:

Reference in text: (Calabrese and Loften, 2000)

Bibliography entry: Calabrese, Edward, and Peter Loften (2000). "The chronic effects of fluoride on the estuarine amphipods," Water Research 16:1313–17.

For more information about the author-date style of citations, see chapters 16 and 17 in The Chicago Manual of Style, 15th ed.

Repeating a Citation

After the first complete citation of a work, you may abbreviate subsequent instances by using either Ibid. or a shortened form of the citation. See the following examples of each style.

Ibid.

Use Ibid. to repeat a footnote that appears immediately before the current footnote. Ibid. takes the place of the author's name, the title of the work, and as much of the subsequent information as is identical. For example:

50 Thomas Smith, "New Debate over Business Records," The New York Times, December 31, 1978, sec. 3, p. 5.
51 Ibid., p. 6.

Shortened Footnote

Use the shortened footnote style to repeat a note that appears earlier but is not contiguous to the current footnote. The shortened note should include enough information to help readers identify the source — i.e., the last name of the author; enough of the title to be clear; and the page number, if different from the first. For example:2

1 Samuel A. Morley, Poverty and Inequality in Latin America: The Impact of Adjustment and Recovery (Baltimore: Johns Hopkins University Press, 1995), pp. 24–25.
2 [Citation of different source]
3 Morley, Poverty and Inequality, p. 43.

Creating New Citation Styles

If you cannot find an example of the type of source material you want to cite, and if you have exhausted other resources (including The Chicago Manual of Style and rreiser@hbs.edu), then simply cite all of the details that would help a reader find the source easily. Think about the four "W"s: WHO created the work, WHAT is the title and type of information, WHEN was it published, and WHERE can one find it?
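Because the footnote and bibliography conventions described above differ only in name order, separators, and which page reference is given, they can be produced from the same set of fields. The following Python sketch is purely illustrative (the function name and fields are hypothetical, not part of any HBS or Chicago tool) and shows the two forms for a signed journal article:

def format_citation(first, last, title, journal, volume, date, page, page_range):
    """Return (footnote, bibliography_entry) for a signed journal article.

    Footnote: author in natural order, elements separated by commas,
    and the specific page cited.
    Bibliography: author's name inverted, elements separated by periods,
    and the article's full page range.
    """
    footnote = f'{first} {last}, "{title}," {journal} {volume} ({date}): {page}.'
    bibliography = f'{last}, {first}. "{title}." {journal} {volume} ({date}): {page_range}.'
    return footnote, bibliography


if __name__ == "__main__":
    note, bib = format_citation(
        "Paul A.", "Gompers", "The Rise of Venture Capital",
        "Business and Economic History", "23", "Winter 1994", "12", "1-24",
    )
    print(note)  # Paul A. Gompers, "The Rise of Venture Capital," ... (Winter 1994): 12.
    print(bib)   # Gompers, Paul A. "The Rise of Venture Capital." ... (Winter 1994): 1-24.

The demo values reproduce the Gompers journal-article example shown later in this guide; any real tool would also need to handle multiple authors, editors, and online access dates.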
The following examples show citations that were created without templates but that are precise and easy to follow: Author’s e-mail survey of students from MBA class of 2009, November 12–15, 2010, Harvard Business School, Boston, MA. Clarence Saunders, “Documentary Evidence about Piggly Wiggly,” Harvard pre-1920 social history/ business preservation microfilm project, available from Historical Collections, Baker Library, Harvard Business School, Microfilm HD Box #136. Caroline J. Ferguson and Barbara A. Schaal, “Phylogeography of Phlox pilosa subsp. ozarkana,” poster presented at the 16th International Botanical Congress, St. Louis, 1999.3 David Hanson, “The Provenance of the Ruskin-Allen Letters (computer printout, Department of English, Southeastern University, 2001), p. 16.4 >When you are citing unusual source materials, don’t worry about following a particular format; instead, just include all of the details that would help readers locate the information quickly. It is always better to provide readers with too much rather than too little source information. Permission Requirements If you plan to publish a paper or distribute it widely (e.g., on a website), and if the paper contains the following kinds of information, you may need permission from the copyright holder: • Graphical items (charts, graphs, maps, photographs, tables, etc.) • Entire documents or articles • Excerpts of text or data Be sure to check the copyright holder’s permission requirements before redistributing any of their information outside the classroom. 8 CITATION GUIDE > Citation Guidelines – continuedThis section shows examples of citations for the most common kinds of source materials. For information about citing other kinds of materials, see Creating New Citation Styles on p. 7, or contact rreiser@hbs.edu. A few notes about the examples: • The term periodical refers to journals and magazines. • For brevity, access dates in the examples show only the month and year (e.g., June 2009). If you are citing information that is updated frequently or pertains to a time-sensitive field such as medicine, then you might want to include complete access dates. • The following examples appear in alphabetical order, with one exception: When citations are shown for both printed and online formats, the examples for printed format appear first. Advertisements Television 5 Footnote 1 Volkswagen of America, Inc., “Crazy Guy,” television advertisement (Arnold Communications, Inc., directed by Phil Morrison), 2002. Bibliography Volkswagen of America, Inc. “Crazy Guy.” Television advertisement. Arnold Communications, Inc., directed by Phil Morrison, 2002. On the Web Footnote 2 Volkswagen of America, Inc., “Crazy Guy,” television advertisement (Arnold Communications, Inc., directed by Phil Morrison), 2000, http://www.andyawards .com/winners.2000/last_television3.html, accessed August 2002. Bibliography Volkswagen of America, Inc. “Crazy Guy.” Television advertisement. Arnold Communications, Inc., directed by Phil Morrison, 2002. http://www.andyawards.com/winners.2000/last_television3.html, accessed August 2002. 9 CITATION GUIDE > Examples of CitationsAnalyst Reports Signed Footnote (on the Web) 3 Steve Weinstein, “High Growth in search creates opportunities for niche players,” Pacific Crest Securities, November 4, 2003, p. 11, http://www.pacific-crest.com, accessed December 2008. Bibliography Weinstein, Steve. “High Growth in search creates opportunities for niche players.” Pacific Crest Securities, November 4, 2003. 
http://www.pacific-crest.com, accessed December 2008. Unsigned Footnote 4 Wachovia Capital Markets, LLC, “Perspectives on the U.S. Restaurant Industry,” August 20, 2007. Unsigned Footnote (database) 4 Wachovia Capital Markets, LLC, “Perspectives on the U.S. Restaurant Industry,” August 20, 2007, via Thomson Research/Investext, accessed September 2009. Annual Reports (Printed) Printed Footnote 3 General Motors, 2001 Annual Report (Detroit: General Motors, 2002), p. 34. Bibliography General Motors. 2001 Annual Report. Detroit: General Motors, 2002. Note: Publication details, such as the location and name of publisher, are optional in citations of annual reports. These publication details have been omitted in the following examples of online annual reports. 10 CITATION GUIDE Examples of Citations – continued > >Annual Reports (Online) On the Web Footnote (company 4 General Motors, 2006 Annual Report, p. 58, http://www.gm.com/corporate/ website) investor_information/docs/fin_data/gm06ar/download/gm06ar.pdf, accessed September 2007. Bibliography General Motors. 2006 Annual Report. http://www.gm.com/corporate/ investor_information/docs/fin_data/gm06ar/download/gm06ar.pdf, accessed September 2007. On the Web Footnote (database) 5 General Motors, 2006 Annual Report, p. 58, via Thomson Research/Investext, accessed September 2007. Bibliography General Motors. 2006 Annual Report. Thomson Research/Investext, accessed September 2007. CD-ROM Footnote (LaserD) 6 General Motors, 2001 Annual Report, p. 34, available from Thomson Reuters Datastream, Global Access/Laser CD-ROM, disc no. A2015. Bibliography General Motors. 2001 Annual Report. Available from Thomson Reuters Datastream, Global Access/Laser CD-ROM, disc no. A2015. Articles See Newspapers and Periodicals. 11 CITATION GUIDE > >Blogs Blog entry Footnote or post Stephan Spencer, “Teen Blogger Says ‘No’ to Mowing the Lawn,” August 14, 2007, post on blog “Stephan Spencer’s Scatterings,” Business Blog Consulting, http://businessblogconsulting.com/, accessed September 2007. Comment on Footnote blog entry Terra Andersen, “[First few words of comment...],” August 21, 2007, comment on or post Stephan Spencer’s post “Teen Blogger Says ‘No’ to Mowing the Lawn,” August 14, 2007, on blog “Stephan Spencer’s Scatterings,” Business Blog Consulting, [URL of comment], accessed September 2007. Bibliography Andersen, Terra. “That’s wonderful!...” August 21, 2007, comment on Stephan Spencer’s post “Teen Blogger Says ‘No’ to Mowing the Lawn,” August 14, 2007. “Stephan Spencer’s Scatterings,” Business Blog Consulting. [URL of comment], accessed September 2007. Blog entry Footnote or post 7 John Quelch, “How to Profit from Scarcity,” August 31, 2007, post on blog “Marketing KnowHow,” Harvard Business Online, http://discussionleader.hbsp.com/ quelch/2007/08/how_to_profit_from_scarcity_1.html, accessed September 9, 2007. Comment on Footnote blog entry 8 John Davis, “I agree that scarcity...,” September 4, 2007, comment on John Quelch’s or post post “How to Profit from Scarcity,” August 31, 2007, on blog “Marketing KnowHow,” Harvard Business Online, http://discussionleader.hbsp.com/quelch/2007/08/how_to_ profit_from_scarcity_1.html#comments, accessed September 9, 2007. Bibliography Davis, John. “I agree that scarcity...” September 4, 2007, comment on John Quelch’s post “How to Profit from Scarcity,” August 31, 2007. “Marketing KnowHow,” Harvard Business Online. 
http://discussionleader.hbsp.com/quelch/2007/08/ how_to_profit_from_scarcity_1.html#comments, accessed September 2007. 12 CITATION GUIDE Examples of Citations – continued >13 CITATION GUIDE Bond Prospectuses Footnote 9 Formula One Finance B.V., August 1999 prospectus for US$1.4 billion of 100% Secured Floating Rate Notes, due 2010. Bibliography Formula One Finance B.V. August 1999 prospectus for US$1.4 billion of 100% Secured Floating Rate Notes, due 2010. Books (Printed) One author Footnote 10 David A. Garvin, Operations Strategy: Text and Cases (Englewood Cliffs, NJ: Prentice-Hall, 1992), p. 73. Bibliography Garvin, David A. Operations Strategy: Text and Cases. Englewood Cliffs, NJ: Prentice-Hall, 1992. Two Footnote authors 11 John P. Kotter and James L. Heskett, Corporate Culture and Performance (New York: Free Press, 1992), p. 101. Bibliography Kotter, John P., and James L. Heskett. Corporate Culture and Performance. New York: Free Press, 1992. Three Footnote authors 12 John W. Pratt, Howard Raiffa, and R.O. Schlaifer, Introduction to Statistical Decision Theory (Cambridge: MIT Press, 1995), p. 45. Bibliography Pratt, John W., Howard Raiffa, and R.O. Schlaifer. Introduction to Statistical Decision Theory. Cambridge: MIT Press, 1995. > >Books (Printed) – continued More than Footnote three 13 F. M. Scherer et al., The Economics of Multi-Plant Operation authors (Cambridge: Harvard University Press, 1975), p. 97. Bibliography Scherer, F. M., Alan Beckenstein, Erich Kaufer, R. Dennis Murphy, and Francine Bougeon-Maassen. The Economics of Multi-Plant Operation. Cambridge: Harvard University Press, 1975. Editor Footnote 14 John J. Gabarro, ed., Managing People and Organizations (Boston: Harvard Business School Press, 1992), p. 145. Bibliography Gabarro, John J., ed. Managing People and Organizations. Boston: Harvard Business School Press, 1992. Multiple Footnote editors 15 Kim B. Clark et al., “Project Leadership and Organization,” in The Perpetual Enterprise Machine: High Performance Product Development in the 1990s, eds. H. Kent Bowen et al. (New York: Oxford University Press, 1994). Bibliography Clark, Kim B., Marco Iansiti, and Richard Billington. “Project Leadership and Organization.” InThe Perpetual Enterprise Machine: High Performance Product Development in the 1990s, edited by H. Kent Bowen and Steven Wheelwright. New York: Oxford University Press, 1994. Corporate Footnote author 16 U.S. Dept. of Commerce, U.S. Industrial Outlook (Washington, DC: Government (company or Printing Office, 1980), p. 687. association) Bibliography U.S. Dept. of Commerce. U.S. Industrial Outlook. Washington, DC: Government Printing Office, 1980. 14 CITATION GUIDE > Examples of Citations – continuedEdition Footnote 17 Francis J. Aguilar, General Managers in Action: Policies and Strategies, 2nd ed. (New York: Oxford University Press, 1994), p. 133. Bibliography Aguilar, Francis J. General Managers in Action: Policies and Strategies. 2nd ed. New York: Oxford University Press, 1994. Chapters Footnote or other 18 Teresa M. Amabile, “Discovering the Unknowable, Managing the Unmanageable,” titled parts in Creative Action in Organizations, eds. C. M. Ford and D. A. Gioia (Thousand Oaks, of a book CA: Sage Publications, 1995), p. 81. Bibliography Amabile,Theresa M. “Discovering the Unknowable, Managing the Unmanageable.” In Creative Action in Organizations, eds. C. M. Ford and D. A. Gioia. Thousand Oaks, CA: Sage Publications, 1995. Books (Online) On the Web Footnote 19 Gregory J. E. 
Rawlins, Moths to the Flame (Cambridge: MIT Press, 1996), http://www-mitpress.mit.edu/e-books/Moths/, accessed August 1997. Bibliography Rawlins, Gregory J. E. Moths to the Flame. Cambridge: MIT Press, 1996. http://www-mitpress.mit.edu/Moths/, accessed August 1997. CD-ROM Footnote 20 Oxford English Dictionary, 2nd ed. CD-ROM (Oxford: Oxford University Press, 1992), p. 157 Bibliography Oxford English Dictionary. 2nd ed. CD-ROM. Oxford: Oxford University Press, 1992. 15 CITATION GUIDE >Examples of Citations – continued 16 CITATION GUIDE Brochures Signed Footnote 21 Mary Cassatt: Modern Woman, ed. Judith A. Barter (Chicago: Art Institute of Chicago, 1998), p. 7. Bibliography Barter, Judith A., ed. Mary Cassatt: Modern Woman. Chicago: Art Institute of Chicago, 1998. Unsigned Footnote 22 Reinventing Software, IBM corporate brochure (White Plains, NY, December 2002), p. 3. Bibliography Reinventing Software. IBM corporate brochure. White Plains, NY, December 2002. Footnote 23 Lifestyles in Retirement, Library Series (New York: TIAA-CREF, 1996), p. 4. Bibliography Lifestyles in Retirement. Library Series. New York: TIAA-CREF, 1996. Footnote 24Altera Corporate Overview, from Altera website, http://www.altera.com/ corporate/overview/ovr-index.html, accessed October 2003. Bibliography Altera Corporate Overview. From Altera website, http://www.altera.com/ corporate/overview/ovr-index.html, accessed October 2003. >Cases (Printed) Printed Footnote 25 V. Kasturi Rangan, “Population Services International: The Social Marketing Project in Bangladesh,” HBS No. 586-013 (Boston: Harvard Business School Publishing, 1993), p. 9. Bibliography Rangan, V. Kasturi. “Population Services International: The Social Marketing Project in Bangladesh.” HBS No. 586-013. Boston: Harvard Business School Publishing, 1993. Cases (Online) On the Web Footnote 26 Amy C. Edmondson and Laura R. Feldman, “Group Process in the Challenger Launch Decision (A),” HBS No. 603-068 (Boston: Harvard Business School Publishing, 2002), Harvard Business Online, http://harvardbusinessonline.hbsp.harvard.edu, Bibliography Edmondson, Amy C., and Laura R. Feldman. “Group Process in the Challenger Launch Decision (A).” HBS No. 603-068 (Boston: Harvard Business School Publishing, 2002). Harvard Business Online. http://harvardbusinessonline.hbsp. harvard.edu, accessed September 2007. Footnote 27 Michael J. Enright et al., “Daewoo and the Korean Chaoebol,” University of Hong Kong case no. HKU143 (University of Hong Kong, August 2001), via Harvard Business Online, http://harvardbusinessonline.hbsp.harvard.edu/, accessed March 2007. Bibliography Enright, Michael J., et al. “Daewoo and the Korean Chaebol.” University of Hong Kong case no. HKU143 (University of Hong Kong, August 2001). Harvard Business Online. http://harvardbusinessonline.hbsp.harvard.edu/, accessed March 2007. 17 CITATION GUIDE > >Examples of Citations – continued Charts Note: When citing a chart, illustration, or other graphical item, use the same style that is used to cite tables. See Tables. Citation within a Citation See Secondary Sources. Classroom Discussions Live classes Footnote 28 Michael J. Roberts, “The Entrepreneurial Manager,” MBA class discussion, September 29, 2001, Harvard Business School, Boston, MA. Bibliography Roberts, Michael J. “The Entrepreneurial Manager.” MBA class discussion, September 29, 2001. Harvard Business School, Boston, MA. 
Compiled Information The way that you create graphical items such as charts, exhibits, tables, etc., determines how you should word the source lines. The following examples show different ways of wording source lines depending on how you created the item. Item copied Source directly from [Cite source exactly as it is.] a single source Item compiled Source from different Compiled from [SOURCE 1], [SOURCE 2], and [SOURCE 3]. sources Item compiled Source from different Compiled from [SOURCE 1], [SOURCE 2], and author’s calculations. different sources, including author’s own calculations 18 CITATION GUIDE > > > >Item in Source: format Author, based on data from [SOURCE 1], [SOURCE 2], and [SOURCE 3]. created by author but based on data from various sources Conference Papers Published Footnote (in printed 29 J.Wiklund, F. Delmar, and K. Sjöberg, “Selection of the Fittest? How Human form) Capital Affects High-Potential Entrepreneurship,” Proceedings of the Academy of Management 2004 Conference, New Orleans, LA, August 6–11, 2004, pp. 246–250. Bibliography Wiklund, J., F. Delmar, and K. Sjöberg. “Selection of the Fittest? How Human Capital Affects High-Potential Entrepreneurship.” Proceedings of the Academy of Management 2004 Conference, New Orleans, LA, August 6–11, 2004, pp. 246–250. Published Footnote (in online 30 Mark T. Leary and Michael R. Roberts, “Do Firms Rebalance Their Capital form) Structures?” June 7, 2004, 14th Annual Utah Winter Finance Conference; Tuck Contemporary Corporate Finance Issues III Conference Paper, available on SSRN website, http://ssrn.com/abstract=571002, accessed October 2005. Bibliography Leary, Mark T., and Roberts, Michael R. “Do Firms Rebalance Their Capital Structures?” June 7, 2004, 14th Annual Utah Winter Finance Conference;Tuck Contemporary Corporate Finance Issues III Conference Paper. SSRN website. http://ssrn.com/abstract=571002, accessed September 2007. Unpublished Footnote 31 Sarah Dodd, “Transnational Differences in Entrepreneurial Networks,” paper presented at the Eighth Global Entrepreneurship Research Conference, INSEAD, Fontainebleau, France, June 1998. Bibliography Dodd, Sarah. “Transnational Differences in Entrepreneurial Networks.” Paper presented at the Eighth Global Entrepreneurship Research Conference, INSEAD, Fontainebleau, France, June 1998. 19 CITATION GUIDE >Examples of Citations – continued Conference Papers (continued) Unpublished Footnote 31 Victor G.Vogel, M.D., M.H.S., incoming national vice president of research, American Cancer Society, and professor of medicine and epidemiology, University of Pittsburgh; and Sarah F. Marshall, senior statistician, University of California, Irvine; December 12, 2008, presentation, San Antonio Breast Cancer Symposium, Texas. Databases For examples of how to cite information from databases, see Citations of Commercial Databases on p. 36. Downloaded Documents Footnote 31 National Venture Capital Association, “Venture Capital 101” (PDF file), downloaded from NVCA website, http://nvca.org/index.php?option=com_ content&view=article&id=141&Itemid=133, accessed August 19, 2009. Footnote 31 Financial Management Service, U.S. Treasury, Summary Report of the 2008 Financial Report of the United States Government (“The Federal Government’s Financial Health”), Table 1: Budget Deficit vs. Net Operating Cost (p. 4), downloaded from www.fms.treas.gov/frsummary/index.html, September 30, 2009. E-Mail Footnote 32 [Sender], “[Subject],” e-mail message to [Receipient], [Date]. 
Note: The Chicago Manual of Style says the following about e-mail addresses in citations: “An e-mail address belonging to an individual should be omitted. Should it be needed in a specific context, it must be cited only with the permission of its owner.” 6 Films See Movies, Videos, Webcasts. 20 CITATION GUIDE > > > > >21 CITATION GUIDE Government Documents Congressional Footnote bills 7 33 Food Security Act of 1985, HR 2100, 99th Cong., 1st sess., Congressional Record 131, no. 132, daily ed. (October 8, 1985): H 8461. 34 U.S. Congress., House, Food Security Act of 1985, HR 2100, 99th Cong., 1st sess.,Congressional Record 131, no. 132, daily ed. (October 8, 1985): H 8353-8486. Congressional Footnote hearings 35 Senate Committee on Foreign Relations, Famine in Africa: Hearing before (federal), the Committee on Foreign Relations, 99th Cong., 1st sess., January 17, 1985. unpublished 8 Bibliography U.S. Congress. Senate. Committee on Foreign Relations. Famine in Africa: Hearing before the Committee on Foreign Relations, 99th Cong., 1st sess., January 17, 1985. Congressional Footnote hearings 36 House Committee on Banking and Currency, Bretton Woods Agreements Act: (federal), Hearings on HR 3314, 79th Cong., 1st sess., 1945, 12–14. published 9 Note: According to the Chicago Manual of Style, “[B]ills or resolutions originating in the House of Representatives are abbreviated HR or HR Res., and those originating in the Senate, S or S Res. (all in roman). The title of the bill is italicized; it is followed by the bill number, the congressional session, and (if available) publication details in the Congressional Record.”10 Report of U.S. Footnote presidential 37 Report of the Presidential Commission on the Space Shuttle Challenger Accident, commission vol. 1, chap. 5 (Washington, DC: Government Printing Office, 1986), (published http://history.nasa.gov/rogersrep/v1p97.htm, accessed October 2002. online) Bibliography Report of the Presidential Commission on the Space Shuttle Challenger Accident, vol. 1, chap. 5. Washington, DC: Government Printing Office, 1986. http://history.nasa.gov/rogersrep/v1p97.htm, accessed October 2002. >Examples of Citations – continued Government Documents (continued) Testimony Footnote before 38 U.S. Senate Committee on Homeland Security and Governmental Affairs, congressional Subcommittee on Oversight of Government Management, the Federal committee Workforce, and the District of Columbia; GAO’s 2005 High-Risk Update, (published in testimony of The Honorable David M. Walker, Comptroller General of the online and United States, February 17, 2005, http://hsgac.senate.gov/_files/walkerhigh printed form) riskstatement21705.pdf, accessed October 2006. (Also available in print as GAO-05-350T (Washington, DC: Government Printing Office, 2005).) For more examples of how to cite government documents, see The Chicago Manual of Style, 15th ed. Illustrations Note: When citing a chart, illustration, or other graphical item, use the same style that is used to cite tables. See Tables. Interviews Television 11 Footnote 39 McGeorge Bundy, interview by Robert MacNeil, MacNeil/Lehrer News Hour, Public Broadcasting System, February 7, 1990. Bibliography Bundy, McGeorge. Interview by Robert MacNeil. MacNeil/Lehrer News Hour. Public Broadcasting System, February 7, 1990. Published Footnote or recorded 40 Thomas R. Piper, Leadership & Learning, interview by JoAnn Olson, VHS, directed by Wren Jareckie, Bennington Films, 1993. Bibliography Piper,Thomas R. Leadership & Learning. 
Interview by JoAnn Olson. VHS, directed by Wren Jareckie. Bennington Films, 1993. Unpublished Footnote 41 Carl Sloane, interview by author, Cambridge, MA, July 4, 1998. Bibliography Sloane, Carl. Interview by author. Cambridge, MA, July 4, 1998. 22 CITATION GUIDE > > >23 CITATION GUIDE Journals See Periodicals. Legal Cases U.S. Supreme Footnote Court 42 Old Chief v. U.S., 117 S. Ct., 644 (1997).12 Lower Footnote federal 43 Eaton v. IBM Corp., 925 F. Supp. 487 (S.D. Tex 1996).13 courts State and Footnote local courts 4 4 Bivens v. Mobley, 724 So. 2d 458, 465 (Miss. Ct. App. 1998).14 For more examples of legal citations, see the following resources: The Chicago Manual of Style, 15th ed. (Chicago: University of Chicago Press, 2003), chap. 17. The Bluebook: A Uniform System of Citation, 18th edition (Cambridge, MA: Harvard Law Review Association, 2005). Association of Legal Writing Directors, ALWD Citation Manual: A Professional System of Citation, 3rd. ed. (Aspen Publishers, 2005). Introduction to Basic Legal Citation, ed. Peter W. Martin (Cornell Law School, Legal Information Institute, 2007), http://www.law.cornell.edu/citation/. Magazines See Periodicals. Maps Public Footnote domain 4 5 University of Texas Libraries, University of Texas at Austin, Perry Castañeda maps Library Map Collection, http://www.lib.utexas.edu/maps/, accessed May 2007. Bibliography University of Texas Libraries. University of Texas at Austin. Perry Castañeda Library Map Collection. http://www.lib.utexas.edu/maps/, accessed May 2007. > > > >Maps (continued) Public Footnote domain 4 5 U.S. Department of the Interior, U.S. Geological Survey, National Map Team, maps http://nmviewogc.cr.usgs.gov/, accessed February 2006. Bibliography U.S. Department of the Interior. U.S. Geological Survey. National Map Team. http://nmviewogc.cr.usgs.gov/, accessed February 2006. Copyrighted Source line maps 4 7 Used by permission of Graphic Maps, a d/b/a of the Woolwine-Moen Group, © 2007 Graphic Maps. All rights reserved. http://www.graphicmaps.com/ webimage/countrys/africa/africa.htm, accessed July 2007. Bibliography Graphic Maps, a d/b/a of the Woolwine-Moen Group. © 2007 Graphic Maps. All rights reserved. http://www.graphicmaps.com/ webimage/countrys/africa/ africa.htm, accessed July 2007. Note: The wording of citations for copyrighted information will vary according to each copyright holder’s requirements. Market Research Reports Footnote 48 Jim Neil et al., “Digital Marketing,” The Forrester Report 2:8 (April 1998), Forrester Research, Inc., http://www.forrester.com, accessed June 2000. Bibliography Neil, Jim, Bill Bass, Jill Aldort, and Cameron O’Connor. “Digital Marketing.” The Forrester Report 2:8 (April 1998). Forrester Research, Inc. http://www.forrester.com, accessed June 2000. Memorandums Footnote 49 Harold Lehman to Runako Gregg, memorandum regarding [subject], [date], [company], from [source of memorandum]. Bibliography Lehman, Harold, to Runako Gregg. Memorandum regarding [subject], [date], [company]. [Source of memorandum]. Examples of Citations – continued 24 CITATION GUIDE > > >Minisodes Footnote 49 “Arnold the Entrepreneur,” minisode adapted from same episode on Diff’rent Strokes (NBC, Season 7, Episode 8, originally aired November 17, 1984), available from YouTube, http://www.youtube.com/watch?v=AEwEtVBaLMw, accessed April 15, 2009. Movies Movie Footnote 50 Jerry McGuire, directed by Cameron Crowe (Columbia/TriStar Pictures, 1996). Bibliography Jerry McGuire. Directed by Cameron Crowe. 
Columbia/TriStar Pictures, 1996. Movie Footnote (on DVD) 51 Jerry McGuire, directed by Cameron Crowe (Columbia/TriStar Pictures, 1996; Sony Pictures, Special Edition DVD, 2002). See also Videos; Webcasts. News Websites Signed Footnote 52 Wylie Wong, “Software giants unite for Web services,” ZDNet News, February 5, 2002, http://news.zdnet.com/2100-1009_22-830090.html, accessed December 2005. Bibliography Wong, Wylie. “Software giants unite for Web services.” ZDNet News, February 5, 2002. http://news.zdnet.com/2100-1009_22-830090.html, accessed December 2005. Unsigned Footnote 53 “Mattel: Third Recall of Toys from China,” September 5, 2007, CBS News, http://www.cbsnews.com/stories/2007/09/04/business/main3233138.shtml, accessed September 8, 2007. 25 CITATION GUIDE > > >News Websites (continued) Unsigned Bibliography CBS News. “Mattel: Third Recall of Toys from China.” September 5, 2007. http://www.cbsnews.com/stories/2007/09/04/business/main3233138.shtml, accessed September 8, 2007. Notes: In a bibliographic entry for an unsigned article, the name of the news organization (e.g., CBS News) should stand in place of the author.15 Names of news websites (e.g., Reuters, CBS News) should appear in roman (vs. italic) type. News Wires From news Footnote wire’s 53 Michael Liedtke, “LinkedIn Founder’s Road to Riches Paved with Gold website Connections,” January 20, 2008, Associated Press, http://www.ap.org, accessed May 2008. Footnote 50 “Countrywide’s Chairman Mozilo delivers John T. Dunlop Lecture,” company press release, February 4, 2003, via PR Newswire, http://www.prnewswire.com, accessed September 2004. From Footnote third-party’s 53 “Global 1000 Companies and Analysts Endorse Infosys’ ‘Next Generation’ website Consulting Practice,” Business Wire, July 14, 2005, http://findarticles.com/p/ articles/mi_m0EIN/is_2005_July_14/ai_n14788172, via CBS Interactive, Inc., accessed July 1, 2008. Newspapers (Printed) Signed Footnote newspaper 54 Thomas Smith, “New Debate over Business Records,” The New York Times, article December 31, 1978, sec. 3, p. 5. (in special section) Bibliography Smith, Thomas. “New Debate over Business Records.” The New York Times, December 31, 1978, sec. 3, p. 5. Examples of Citations – continued 26 CITATION GUIDE > > >Unsigned Footnote newspaper 55 “Raising Taxes on Private Equity,”The New York Times, June 26, 2007, p. E6. article Bibliography The New York Times, “Raising Taxes on Private Equity,” June 26, 2007, p. E6. Unsigned Footnote newspaper 56 Editorial, The Wall Street Journal, August 28, 1997, p. A19. editorial (without Bibliography title) The Wall Street Journal. August 28, 1997. Editorial concerning interest rates. Note: In a bibliographic entry for an unsigned newspaper article, the name of the newspaper should stand in place of the author).16 Newspapers (Online) Article Footnote from online 57 Kenneth L. Gilpin, “Stocks Soar Amid a Broad Rally on Wall Street,” newspaper The New York Times, July 29, 2002, http://www.nytimes.com/2002/07/29/ business/29CND-STOX.html, accessed July 2002. Bibliography Gilpin, Kenneth L. “Stocks Soar Amid a Broad Rally on Wall Street.” The New York Times, July 29, 2002. http://www.nytimes.com/2002/07/29/ business/29CND-STOX.html, accessed July 2002. Notes HBS technical notes are often referred to as notes. When citing notes, follow the style that is used to cite cases. 27 CITATION GUIDE > >Examples of Citations – continued 28 CITATION GUIDE Periodicals (Printed) Signed Footnote articles 58 Paul A. 
Gompers, “The Rise of Venture Capital,” Business and Economic History 23 (Winter 1994): 12. Bibliography Gompers, Paul A. “The Rise of Venture Capital.” Business and Economic History 23 (Winter 1994): 1–24. Footnote 59 Steven Levy, “The Connected Company,” Newsweek, April 28, 2003, pp. 48–52. Bibliography Levy, Steven. “The Connected Company.” Newsweek, April 28, 2003, pp. 48–52. Unsigned Footnote articles 50 “Leading Ferociously,” a conversation with Daniel Goldin, Harvard Business Review 80, no. 5 (May 2002): 22–25. Bibliography “Leading Ferociously.” A conversation with Daniel Goldin. Harvard Business Review 80, no. 5 (May 2002): 22–25. Footnote 61 “Choosing the Right Nursing Home,” Family Health 10 (September 1978): 8. Bibliography “Choosing the Right Nursing Home.” Family Health 10 (September 1978): 8–10. Periodicals (Online) Article Footnote from online 62 Joseph Ntayi, “Work Ethic, Locus of Control, and Sales Force Task Performance,” journal Journal of African Business 6, nos. 1, 2 (2005): 155, ABI/INFORM via ProQuest, accessed October 2006. Bibliography Ntayi, Joseph. “Work Ethic, Locus of Control, and Sales Force Task Performance.” Journal of African Business 6, nos. 1, 2 (2005): 155. ABI/INFORM via ProQuest, accessed October 2006. > >Article Footnote from online 63 Richard Tomlinson, “The World’s Most Popular Sport Is a Mess of a Business,” magazine Fortune, May 27, 2002, http://www.fortune.com/indexw.jhtml?channel=208013, accessed June 2002. Footnote 64 Joseph Ntayi, “Work Ethic, Locus of Control, and Sales Force Task Performance,” Journal of African Business 6, nos. 1, 2 (2005): 155, ABI/INFORM via ProQuest, ccessed October 2006. Podcasts Note: In this guide, “podcast” refers to an audio file and “webcast” to a video file. Citations of podcasts and webcasts are similar to citations of websites. As the following examples show, some websites use the term “podcast” or “webcast” and others specify the file type, such as “audio” or “video.” See also Webcasts. Footnote 65 Financial Industry Regulatory Authority (FINRA), “Anti-Money Laundering: Examples of Red Flags,” April 12, 2007, podcast, FINRA website, http://www.finra.org/RulesRegulation/ComplianceTools/FINRAPodcasts/ PodcastIndex/index.htm, accessed September 2007. Footnote 66 “Global Business: Food for Fuel,” Peter Day, February 27, 2007, audio file, BBC World Service, http://www.bbc.co.uk/, accessed September 2007. Footnote 67 Wharton School, University of Pennsylvania, “Home Truths about the Housing Market,” September 5, 2007, audio file, Knowledge@Wharton, http://knowledge. wharton.upenn.edu/article.cfm?articleid=1802, accessed September 8, 2007. Note: If no author is listed for a publication issued by an organization or corporation, then the organization should be listed as the author (in bibliographic entries).17 Powerpoint Presentations See Slide Presentations. 29 CITATION GUIDE > >Press Releases Printed Footnote 68 “Sun Charts Strategy for Services to Deliver High-Value Network Computing Environments,” Sun Microsystems press release (Santa Clara, CA, December 3, 2002). Bibliography “Sun Charts Strategy for Services to Deliver High-Value Network Computing Environments.” Sun Microsystems press release. Santa Clara, CA, December 3, 2002. On the Web Footnote 69 “NASD Fines Wachovia Securities $2 Million for Fee-Based Account Violations,” NASD press release, June 21, 2007, on FINRA website, http://www.finra.org/ PressRoom/NewsReleases/2007NewsReleases/P019312, accessed September 2007. 
Bibliography NASD (National Association of Securities Dealers). “NASD Fines Wachovia Securities $2 Million for Fee-Based Account Violations.” NASD press release, June 21, 2007. FINRA website, http://www.finra.org/PressRoom/NewsReleases/2007 NewsReleases/P019312, accessed September 2007. Proceedings See Conference Papers. Radio Programs Footnote 70 “Indian Software Firm to Outsource to U.S.,” Adam Davidson, Morning Edition, National Public Radio, September 6, 2007, http://www.npr.org/templates/story/ story.php?storyId=14204620&ft=1&f=1006, accessed September 2007. Bibliography “Indian Software Firm to Outsource to U.S.” Adam Davidson. Morning Edition, National Public Radio, September 6, 2007. http://www.npr.org/templates/story/ story.php?storyId=14204620&ft=1&f=1006, accessed September 2007. Footnote 71 “Plans for Nuclear Waste Dump Hit a Snag,” Michele Norris, All Things Considered National Public Radio, September 5, 2007, http://www.npr.org/ templates/story/story.php?storyId=14191377, accessed September 2007. Note: See also Podcasts. Examples of Citations – continued 30 CITATION GUIDE > > >SEC Filings Footnote 72 Amazon.com, Inc., June 30, 1997 Form 10-Q (filed August 14, 1997), via Thomson Research, accessed June 2007. 73 Alcoa Inc., March 31, 2006 Form 10-Q (filed April 26, 2006), http://www.alcoa .com/global/en/investment/pdfs/10Q1Q06_5_12.pdf, accessed July 2007. Bibliography Amazon.com, Inc. June 30, 1997 Form 10-Q. Filed August 14, 1997. Thomson Research, accessed June 2007. Alcoa Inc. March 31, 2006 Form 10-Q. Filed April 26, 2006. http://www.alcoa .com/global/en/investment/pdfs/10Q1Q06_5_12.pdf, accessed July 2007. Secondary Sources Note: It is best to consult an original source whenever possible. If the original source is unavailable, however, use the following style. (In the examples below, the Zukofsky article is the original source.) Footnote 74 Louis Zukofsky, “Sincerity and Objectification” Poetry 37 (February 1931): 269, quoted in Bonnie Costello, Marianne Moore: Imaginary Possessions (Cambridge, MA: Harvard University Press, 1981), p. 78.18 Bibliography 75 Zukofsky, Louis. “Sincerity and Objectification.” Poetry 37 (February 1931): 269. Quoted in Bonnie Costello, Marianne Moore: Imaginary Possessions (Cambridge, MA: Harvard University Press, 1981), p. 78.19 Citation 74 Patrick J. Cusatis, James A. Miles, and J. Randall Woolridge, “Restructuring with a Through Spinoffs,” Journal of Financial Economics 33 (1993), as cited in Joel citation Greenblatt, You Can Be A Stock Market Genius (New York: Fireside, 1997), p. 57. 31 CITATION GUIDE > >32 CITATION GUIDE Slide Presentations Footnote 76 Linda K. Olsen, “Permissions and Copyright Issues for Cases,” PowerPoint presentation to Research Associates, July 24, 2002. Harvard Business School, Boston, MA. Bibliography Olsen, Linda K. “Permissions and Copyright Issues for Cases.” PowerPoint presentation to Research Associates, July 24, 2002. Harvard Business School, Boston, MA. See also Conference Papers (unpublished) on p. 19. Tables Data from Source line a table Source: Data excerpted from Michael Y. Yoshino and Thomas B. Lifson, The Invisible Link (Cambridge: MIT Press, 1986), p. 78, Table 4.3. Bibliography Yoshino, Michael Y. and Thomas B. Lifson. The Invisible Link. Cambridge: MIT Press, 1986. Data from Source line text (for Source: Data from Richard S.Tedlow, New and Improved (New York: Basic Books, a table) 1996), p. 157. Bibliography Tedlow, Richard S. New and Improved. New York: Basic Books, 1996. 
Entire table Source line (or other Source: Michael E. Porter, Competitive Strategy (New York: The Free Press, 1998) graphical p. 73, Figure 3-4. Used with permission from The Free Press. item) Bibliography Porter, Michael E. Competitive Strategy. New York: The Free Press, 1998. Chap. 3, Figure 3-4. Examples of Citations – continued > >Technical Notes HBS technical notes are often referred to as notes. When citing notes, follow the style that is used for cases. Television Programs Footnote 77 PBS, Frontline, “Blackout: Interview with Ken Lay,” March 27, 2001, http://www.pbs.org/wgbh/pages/frontline/shows/blackout/interviews/lay.html, accessed August 2004. Bibliography PBS, Frontline. “Blackout: Interview with Ken Lay.” March 27, 2001. http://www.pbs.org/wgbh/pages/frontline/shows/blackout/interviews/ lay.html, accessed August 2004. Theses and Dissertations Footnote 20 78 Andrew J. King, “Law and Land Use in Chicago: A Pre-history of Modern Zoning” (Ph.D. diss., University of Wisconsin, 1976), pp. 32–37. Bibliography King, Andrew J. “Law and Land Use in Chicago: A Pre-history of Modern Zoning.” Ph.D. diss., University of Wisconsin, 1976. Unpublished Papers Footnote 78 Robin Greenwood, “Price pressure in corporate spinoffs” (paper, Harvard Business School, October 9, 2006), http://people.hbs.edu/rgreenwood/spinoffs6.pdf, accessed April 7, 2009. 33 CITATION GUIDE > > >Videos Commercial Footnote video 79 National Treasure, dir. Jon Turtletaub (Touchstone Pictures, Jerry Bruckheimer Films, 2004;VHS, Buena Vista Home Video, 2005). Footnote 80 Forrest Gump, dir. Robert Zemeckis (Paramount Pictures, 1994; DVD, Paramount, 2001). Webcasts Note: In this guide, “podcast” refers to an audio file and “webcast” to a video file. Citations of podcasts and webcasts are similar to citations of websites. As the following examples show, some websites use the term “podcast” or “webcast” and others specify the file type, such as “audio” or “video.” Footnote 81 John Mackey and Michael Pollan, “The Past, Present, and Future of Food,” speech given on February 27, 2007, at the University of California School of Journalism, http://webcast.berkeley.edu/event_details.php?webcastid=19147&p=1&ipp=15&cat, accessed March 2007. Bibliography Mackey, John, and Michael Pollan. “The Past, Present, and Future of Food.” Speech given February 27, 2007, at University of California School of Journalism. http://webcast.berkeley.edu/event_details.php?webcastid=19147&p=1&ipp= 15&cat, accessed March 2007. Footnote 82 MaggieTaggart, “Tax deal boosts film business,” April 12, 2007, video file, BBC News, http://www.bbc.co.uk/, accessed September 6, 2007. Footnote “Romania’s Economic Journey,” Nigel Cassidy, September 26, 2006, video file, BBC News, http://www.bbc.co.uk/, accessed September 2007. See also Videos. Examples of Citations – continued 34 CITATION GUIDEWebsites Company Footnote website 83 Walt Disney Company, “Disney’s Investors Relations —FAQs,” Walt Disney Company website, http://disney.go.com/corporate/investors/shareholder/faq.html, accessed June 1999. Bibliography Walt Disney Company. “Disney’s Investors Relations —FAQs.” Walt Disney Company website. http://disney.go.com/corporate/investors/shareholder/ faq.html, accessed June 1999. Personal Footnote website 84 Nathan Shedroff, http://www.nathan.com/, accessed August 2007. Bibliography Shedroff, Nathan. http://www.nathan.com, accessed August 2007. See also Blogs; Podcasts; Webcasts. 
Working Papers (Printed) Printed Footnote 85 Ashish Nanda, “Implementing Organizational Change,” HBS Working Paper No. 96 -034, 1996, p. 4. Bibliography Nanda, Ashish. “Implementing Organizational Change.” HBS Working Paper No. 96-034, 1996. Note: The copyright holder for academic working papers is typically the author. Working Papers (Online) On the Web Footnote 86 Josh Lerner, “150 Years of Patent Protection,” HBS Working Paper No. 00-040, 1999, http://www.hbs.edu/research/facpubs/workingpapers/ abstracts/9900/00-040.html, accessed May 2001. Bibliography Lerner, Josh. “150 Years of Patent Protection,” HBS Working Paper No. 00-040, 1999. http://www.hbs.edu/research/facpubs/workingpapers/ abstracts/9900/00-040.html, accessed May 2001. 35 CITATION GUIDEThis section shows how to cite information from commercial databases. A few notes about the examples: • Brackets [...] indicate variables to be supplied by the writer. For example, [Description of information] should be replaced by information such as the author’s name, title of work, date, publisher, and any other details that would help readers find the information. • The following citations refer to information owned by database vendors as well as other information providers.When you cite information from databases, remember to mention both the copyright holder/owner of the information and the entity that made the information available. In addition, if you want to distribute the information outside the classroom, you should contact the copyright holder, which may be different from the information provider. Be sure to check the copyright holder’s requirements before distributing any of their information outside the classroom. • URLs are optional in database citations. If you include a URL, use only the briefest form which points to the main page of the database. • The following examples cover some of the most frequently used databases at Baker Library. To cite other databases, try to adapt these examples, or contact rreiser@hbs.edu. ABI /INFORM Generic Example Source: [Description of information — e.g., author, title, publisher, date, etc.], ABI/INFORM via ProQuest, accessed [month/year]. Specific Example Source: “Gold mine finds enough to dig itself out of hole,” Sacramento Business Journal, July 30, 2009, ABI/INFORM via ProQuest, accessed September 2009. Bloomberg Information Owned by Bloomberg Source: Bloomberg LP, accessed [month/year]. Other Information Source: [Description of information], via Bloomberg LP, accessed [month/year]. Citations of Commercial Databases 36 CITATION GUIDEBusiness Source Complete Source: [Description of information], Business Source Complete, via EBSCO. Capital IQ (see Standard & Poor’s) Compustat (see Standard & Poor’s) Datastream Information Owned by Datastream Source: Thomson Reuters Datastream, accessed [month/year]. Other Information Source: [Description of information], via Thomson Reuters Datastream, accessed [month/year]. Deal Pipeline (The) Source: [Description of information], The Deal Pipeline, accessed [month/year]. Economist Intelligence Unit (EIU) Source: Economist Intelligence Unit, [Description of information —e.g., EIU Country Data or EIU Country Report, author, title, date, etc.], www.eiu.com, accessed [month/year]. eMarketer Source: [Description of information], eMarketer, accessed [month/year]. Euromonitor (see Global Market Information Database) Factiva Source: [Description of information], via Factiva, accessed [month/year]. 
First Research Source: [Description of information], via First Research, accessed [month/year]. 37 CITATION GUIDEForrester Source: [Description of information —e.g., author, title, volume no., date, etc.], Forrester Research, Inc., accessed [month/year]. Frost & Sullivan Source: [Description of information], Frost & Sullivan, accessed [month/year]. Gartner Text: Source: [Description of information], Gartner, Inc., accessed [month/year]. Graphics: Source: [Source line under graphic], as published in [description of info.], Gartner, Inc., accessed [month/year]. Global Financial Data Source: [Description of information], Global Financial Data, Inc., accessed [month/year]. Global Market Information Database (GMID) Source: [Description of information], Euromonitor International, www.euromonitor.com, accessed [month/year]. Hoover’s Information Owned by Hoover’s Source: [Description of information], Hoover’s, Inc., www.hoovers.com, accessed [month/year]. Other Information Source: [Description of information], via Hoover’s, Inc., www.hoovers.com, accessed [month/year]. I/B/E/S Source: I/B/E/S, a Thomson Reuters product, accessed [month/year]. Citations of Commercial Databases – continued 38 CITATION GUIDEISI Emerging Markets Information Owned by ISI Source: [Description of information], ISI Emerging Markets, accessed [month/year]. Other Information Source: [Description of information], via ISI Emerging Markets, accessed [month/year]. JSTOR Source: [Description of information], via JSTOR, accessed [month/year]. LexisNexis Source: [Description of information], via LexisNexis, accessed [month/year]. MarketResearch.com Academic Source: [Description of information], via MarketResearch.com, accessed [month/year]. Mintel Source: [Description of information], Mintel, accessed [month/year]. OneSource Information Owned by OneSource Source: [Description of information], OneSource Information Services, Inc., accessed [month/year]. Other Information Source: [Description of information], via OneSource Information Services, Inc., accessed [month/year]. SDC (Securities Data Company) Source: [Description of information], SDC Platinum, a Thomson Reuters product, accessed [month/year]. SourceOECD Source: [Description of information], SourceOECD, www.sourceoecd.org, accessed [month/year]. 39 CITATION GUIDEStandard & Poor’s (S&P) Capital IQ Source: [Description of information], Capital IQ, Inc., a division of Standard & Poor’s. Compustat Data via Research Insight Source: Standard & Poor’s Compustat data via Research Insight, accessed [month/year]. Emerging Markets Database (EMDB) Source: Standard & Poor’s Emerging Markets Database (EMDB), accessed [month/year]. Execucomp Source: Standard & Poor’s Execucomp data, accessed [month/year]. NetAdvantage Source: Standard & Poor’s NetAdvantage, accessed [month/year]. RatingsDirect Source: Standard & Poor’s RatingsDirect, accessed [month/year]. Thomson ONE Banker Source: [Description of information], Thomson ONE Banker, accessed [month/year]. World Development Indicators (WDI Online) Source: World Development Indicators, The World Bank Group accessed [month/year]. 40 CITATION GUIDE Citations of Commercial Databases – continued41 CITATION GUIDE Endnotes 1 Gordon Harvey, “The Role of Sources,” in Writing with Sources: A Guide for Harvard Students, second edition (Indianapolis/Cambridge: Hackett Publishing Company, 2008), p. 14, http://isites. harvard.edu/fs/docs/icb.topic273248.files/WritingSourcesHarvard.pdf, accessed October 2008. 2 The Chicago Manual of Style., 15th ed. 
(Chicago: University of Chicago Press, 2003), section 16.42.
3 Ibid., section 17.216.
4 Ibid., section 17.213.
5 The Chicago Manual of Style FAQ, section about "Documentation" (University of Chicago, June 20, 2002), http://www.press.uchicago.edu/Misc/Chicago/cmosfaq, accessed August 2002.
6 The Chicago Manual of Style, 15th ed., section 17.208.
7 Ibid., section 17.309.
8 Ibid., section 17.307.
9 Ibid.
10 Ibid., section 17.309.
11 The Chicago Manual of Style, 14th ed. (Chicago: University of Chicago Press, 1993), section 15.264.
12 The Chicago Manual of Style, 15th ed., section 17.284.
13 Ibid., section 17.285.
14 Ibid., section 17.286.
15 Ibid., section 17.47.
16 Ibid., section 17.192.
17 Ibid., section 17.47.
18 Ibid., section 17.274.
19 Ibid.
20 The Chicago Manual of Style, 14th ed., section 15.271.

Bibliography

The Chicago Manual of Style. 14th ed. Chicago: University of Chicago Press, 1993.

The Chicago Manual of Style. 15th ed. Chicago: University of Chicago Press, 2003.

The Chicago Manual of Style Online. 15th ed. University of Chicago. http://www.chicagomanualofstyle.org/home.html, accessed October 2008.

Columbia University Press. "Preparing the Bibliographic Material." Excerpt from The Columbia Guide to Online Style, 2nd ed., by Janice R. Walker and Todd Taylor (New York: Columbia University Press, 2006). http://www.columbia.edu/cu/cup/cgos2006/basic.html, accessed September 2007.

Harnack, Andrew, and Eugene Kleppinger. "Using Chicago Style to Cite and Document Sources." Online! A Reference Guide to Using Internet Sources. Bedford/St. Martin's, 2001. http://www.bedfordstmartins.com/online/cite7.html, accessed August 2002.

Harvey, Gordon. Writing with Sources: A Guide for Harvard Students. Second edition. Indianapolis/Cambridge: Hackett Publishing Company, 2008. http://isites.harvard.edu/fs/docs/icb.topic273248.files/WritingSourcesHarvard.pdf, accessed October 2008.

Martin, Paul R. The Wall Street Journal Guide to Business Style and Usage. New York: Simon and Schuster, 2002.

Princeton University. Academic Integrity at Princeton. http://www.princeton.edu/pr/pub/integrity/, accessed October 2009.
NBER WORKING PAPER SERIES

CLUSTERS OF ENTREPRENEURSHIP

Edward L. Glaeser
William R. Kerr
Giacomo A.M. Ponzetto

Working Paper 15377
http://www.nber.org/papers/w15377

NATIONAL BUREAU OF ECONOMIC RESEARCH
1050 Massachusetts Avenue
Cambridge, MA 02138
September 2009

Comments are appreciated and can be sent to eglaeser@harvard.edu, wkerr@hbs.edu, and gponzetto@crei.cat. Kristina Tobio provided excellent research assistance. We thank Zoltan J. Acs, Jim Davis, Mercedes Delgado, Stuart Rosenthal, Will Strange, and participants of the Cities and Entrepreneurship conference for advice on this paper. This research is supported by Harvard Business School, the Kauffman Foundation, the National Science Foundation, and the Innovation Policy and the Economy Group. The research in this paper was conducted while the authors were Special Sworn Status researchers of the US Census Bureau at the Boston Census Research Data Center (BRDC). Support for this research from NSF grant (ITR-0427889) is gratefully acknowledged. Research results and conclusions expressed are our own and do not necessarily reflect the views of the Census Bureau or NSF. This paper has been screened to insure that no confidential data are revealed. Corresponding author: Rock Center 212, Harvard Business School, Boston, MA 02163; 617-496-7021; wkerr@hbs.edu. The views expressed herein are those of the author(s) and do not necessarily reflect the views of the National Bureau of Economic Research.

© 2009 by Edward L. Glaeser, William R. Kerr, and Giacomo A.M. Ponzetto. All rights reserved. Short sections of text, not to exceed two paragraphs, may be quoted without explicit permission provided that full credit, including © notice, is given to the source.

Clusters of Entrepreneurship
Edward L. Glaeser, William R. Kerr, and Giacomo A.M. Ponzetto
NBER Working Paper No. 15377
September 2009
JEL No. J00, J2, L0, L1, L2, L6, O3, R2

ABSTRACT

Employment growth is strongly predicted by smaller average establishment size, both across cities and across industries within cities, but there is little consensus on why this relationship exists. Traditional economic explanations emphasize factors that reduce entry costs or raise entrepreneurial returns, thereby increasing net returns and attracting entrepreneurs. A second class of theories hypothesizes that some places are endowed with a greater supply of entrepreneurship. Evidence on sales per worker does not support the higher returns for entrepreneurship rationale. Our evidence suggests that entrepreneurship is higher when fixed costs are lower and when there are more entrepreneurial people.

Edward L. Glaeser, Department of Economics, 315A Littauer Center, Harvard University, Cambridge, MA 02138 and NBER, eglaeser@harvard.edu
William R. Kerr, Rock Center 212, Harvard Business School, Boston, MA 02163, wkerr@hbs.edu
Giacomo A.M. Ponzetto, CREI - Universitat Pompeu Fabra, C/ Ramon Trias Fargas, 25-27, 08005 Barcelona, Spain, gponzetto@crei.cat

1 Introduction

Economic growth is highly correlated with an abundance of small, entrepreneurial firms. Figure 1 shows that a 10% increase in the number of firms per worker in 1977 at the city level correlates with a 9% increase in employment growth between 1977 and 2000. This relationship is even stronger looking across industries within cities.
This relationship has been taken as evidence for competition spurring technological progress (Glaeser et al., 1992), product cycles where growth is faster at earlier stages (Miracky, 1993), and the importance of entrepreneurship for area success (Acs and Armington, 2006; Glaeser, 2007). Any of these interpretations is compatible with Figure 1's correlation, however, and the only thing that we can be sure of is that entrepreneurial clusters exist in some areas but not in others.

We begin by documenting systematically some basic facts about average establishment size and new employment growth through entrepreneurship. We analyze entry and industrial structures at both the region and city levels using the Longitudinal Business Database. Section 2 confirms that the strong correlation in Figure 1 holds true under stricter frameworks and when using simple spatial instruments for industrial structures. A 10% increase in average establishment size in 1992 associates with a 7% decline in subsequent employment growth due to new startups. Employment growth due to facility expansions also falls by almost 5%. We further document that these reductions come primarily through weaker employment growth in small entrants.

What can explain these spatial differences? We first note that the connection between average establishment size and subsequent entrepreneurship is empirically stronger at the city-industry level than on either dimension individually. This suggests that simple theories emphasizing just industry-wide or city-wide forces are insufficient. Theories must instead build upon particular city-industry traits or on endogenous spatial sorting and organizational forms due to interactions of city traits with industry traits.

We consider three broad rationales. The first two theories emphasize spatial differences in net returns to entrepreneurship, while the last theory emphasizes spatial differences in the supply of entrepreneurs. The former theories are more common among economists. They assume that entrepreneurs choose locations and compete within a national market, so that the supply of entrepreneurship is constant over space. This frictionless setting would not hold for concrete manufacturing, of course, but would be a good starting point for many industries. Entrepreneurship is then evident where firm profits are higher or where fixed costs are lower, either of which increases the net returns to opening a new business.

These spatial differences could be due to either exogenous or endogenous forces. To take Silicon Valley as an example, one story would suggest that Silicon Valley's high rate of entrepreneurship over the past 30 years was due to abnormal returns in California's computer sector as the industry took off. These returns would need to have been greater than California's and the computer industry's returns generally, perhaps descending from a technological breakthrough outside of the existing core for the industry (e.g., Duranton, 2007; Kerr, this issue). On the other hand, Saxenian's (1994) classic analysis of Silicon Valley noted its abundance of smaller, independent firms relative to Boston's Route 128 corridor. Following Chinitz (1961) and Jacobs (1970), Saxenian argued that these abundant small firms themselves caused further entrepreneurship by lowering the effective cost of entry through the development of independent suppliers, venture capitalists, entrepreneurial culture, and so on.
While distinct, both of these perspectives argue that spatial differences in net returns to entrepreneurship are responsible for the differences in entrepreneurship rates that we see empirically. An alternative class of theories, which Chinitz also highlighted, is that the supply of entrepreneurship differs across space. Heterogeneity in supply may reflect historical accident or relatively exogenous variables. William Shockley's presence in Silicon Valley was partly due to historical accident (Shockley's mother), and entrepreneurs can be attracted to California's sunshine and proximity to Stanford independent of differences in net returns. Several empirical studies find entrepreneurs are more likely to be from their region of birth than wage workers, and that local entrepreneurs operate stronger businesses (e.g., Figueiredo et al., 2002; Michelacci and Silva, 2007). Immobile workers may possess traits that lend them to entrepreneurship (e.g., high human capital). Although quite different internally, these theories broadly suggest that semi-permanent differences in entrepreneurial supply exist spatially. 1

While theories of the last kind are deserving of examination, they do not fit easily into basic economic models that include both firm formation and location choice. Section 3 presents just such a model that draws on Dixit and Stiglitz (1977). The baseline model illustrates the first class of theories that focus on the returns to entrepreneurship, as well as the difficulties of reconciling heterogeneity in entrepreneurial supply with the canonical framework of spatial economics. Two basic, intuitive results are that there will be more startups and smaller firms in sectors or areas where the fixed costs of production are lower or where the returns to entrepreneurship are higher. In the model, higher returns are due to more inelastic demand. A third result formalizes Chinitz's logic that entrepreneurship will be higher in places that have exogenously come to have more independent suppliers. Multiple equilibria are possible where some cities end up with a smaller number of vertically integrated firms, like Pittsburgh, and others end up with a larger number of independent firms.

But, our model breaks with Chinitz by assuming a constant supply of entrepreneurs across space. While we assume that skilled workers play a disproportionately large role in entrepreneurship, we also require a spatial equilibrium that essentially eliminates heterogeneity in entrepreneurship supply. In a sense, the model and our subsequent empirical work show how far one can get without assuming that the supply of entrepreneurship differs across space (due to one or more of the potential theories). We operationalize this test by trying to explain away the average establishment size effect.

1 These explanations are not mutually exclusive, especially in a dynamic setting. Areas that develop entrepreneurial clusters due to net returns may acquire attributes that promote a future supply of entrepreneurs independent of the factors.

Section 4 presents evidence on these hypotheses. Our first tests look at sales per worker among small firms as a proxy for the returns to entrepreneurship. The strong relationship between initial industry structure and subsequent entry does not extend to entrepreneurial returns.
While some entrepreneurial clusters are likely to be demand driven, the broader patterns suggest that higher gross returns do not account for the observed link between lower initial establishment size and subsequent entry prevalent in all sectors. We likewise confirm that differences in product cycles or region-industry age do not account for the patterns. These results are more compatible with views emphasizing lower fixed costs or a greater supply of entrepreneurs.

Our next two tests show that costs for entrepreneurs matter. Holding city-industry establishment size constant, subsequent employment growth is further aided by small establishments in other industries within the city. This result supports the view that having small independent suppliers and customers is beneficial for entrepreneurship (e.g., Glaeser and Kerr, 2009). We find a substantially weaker correlation between city-level establishment size and the facility growth of existing firms, which further supports this interpretation. We also use labor intensity at the region-industry level to proxy for fixed costs. We find a strong positive correlation between labor intensity and subsequent startup growth, which again supports the view that fixed costs are important. However, while individually powerful, neither of these tests explains away much of the basic establishment size effect.

We finally test sorting hypotheses. The linkage between employment growth and small establishment size is deeper than simple industry-wide or city-wide forces like entrepreneurs generally being attracted to urban areas with lots of amenities. Instead, as our model suggests, we look at interactions between city-level characteristics and industry-level characteristics. For example, the model suggests that entrepreneurship will be higher and establishment size lower in high amenity places among industries with lower fixed costs. The evidence supports several hypotheses suggested by the model, but controlling for different forces again does little to explain away the small establishment size effect. Neither human capital characteristics of the area nor amenities can account for much of the observed effect.

In summary, our results document the remarkable correlation between average initial establishment size and subsequent employment growth due to startups. The evidence does not support the view that this correlation descends from regional differences in demand for entrepreneurship. The data are more compatible with differences in entrepreneurship being due to cost factors, but our cost proxies still do not explain much of the establishment size effect. Our results are also compatible with the Chinitz view that some places just have a greater supply of entrepreneurs, although this supply must be something quite different from the overall level of human capital. We hope that future work will focus on whether the small establishment size effect reflects entrepreneurship supply or heterogeneity in fixed costs that we have been unable to capture empirically. 2

2 Clusters of Competition and Entrepreneurship

We begin with a description of the Longitudinal Business Database (LBD). We then document a set of stylized facts about employment growth due to entrepreneurship. These descriptive pieces particularly focus on industry structure and labor intensity to guide and motivate the development of our model in Section 3.

2.1 LBD and US Entry Patterns

The LBD provides annual observations for every private-sector establishment with payroll from 1976 onward.
The Census Bureau data are an unparalleled laboratory for studying entrepreneurship rates and the life cycles of US firms. Sourced from US tax records and Census Bureau surveys, the micro-records document the universe of establishments and firms rather than a stratified random sample or published aggregate tabulations. In addition, the LBD lists physical locations of establishments rather than locations of incorporation, circumventing issues related to higher legal incorporations in states like Delaware. Jarmin and Miranda (2002) describe the construction of the LBD.

The comprehensive nature of the LBD facilitates complete characterizations of entrepreneurial activity by cities and industries, types of firms, and establishment entry sizes. Each establishment is given a unique, time-invariant identifier that can be longitudinally tracked. This allows us to identify the year of entry for new startups or the opening of new plants by existing firms. We define entry as the first year in which an establishment has positive employment. We only consider the first entry for cases in which an establishment temporarily ceases operations (e.g., seasonal firms, major plant retoolings) and later re-enters the LBD. Second, the LBD assigns a firm identifier to each establishment that facilitates a linkage to other establishments in the LBD. This firm hierarchy allows us to separate new startups from facility expansions by existing multi-unit firms.

Table 1 characterizes entry patterns from 1992 to 1999. The first column refers to all new establishment formations. The second column looks only at those establishments that are not part of an existing firm in the database, which we define as entrepreneurship. The final column looks at new establishments that are part of an existing firm, which we frequently refer to as facility expansions.

2 In a study of entrepreneurship in the manufacturing sector, Glaeser and Kerr (2009) found that the Chinitz effect was a very strong predictor of new firm entry. The effect dominated other agglomeration interactions among firms or local area traits. This paper seeks to measure this effect for other sectors and assess potential forces underlying the relationship. As such, this paper is also closely related and complementary to the work of Rosenthal and Strange (2009) using Dun and Bradstreet data. Beyond entrepreneurship, Drucker and Feser (2007) consider the productivity consequences of the Chinitz effect in the manufacturing sector, and Li and Yu (2009) provide evidence from China. Prior work on entry patterns using the Census Bureau data include Davis et al. (1996), Delgado et al. (2008, 2009), Dunne et al. (1989a, 1989b), Haltiwanger et al. (this issue), and Kerr and Nanda (2009a, 2009b).

Over the sample period, there were on average over 700,000 new establishments per annum, with 7.3 million employees. Single-unit startups account for 80% of new establishments but only 53% of new employment. Facility expansions are, on average, about 3.6 times larger than new startups. Table 1 documents the distribution of establishment entry sizes for these two types. Over 75% of new startups begin with five or fewer employees, versus fewer than half of entrants for expansion establishments of existing firms. About 0.5% of independent startups begin with more than 100 workers, compared to 4% of expansion establishments. Across industries, startups are concentrated in services (39%), retail trade (23%), and construction (13%).
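The entry definition and the startup versus facility-expansion split described above lend themselves to a compact illustration. The sketch below is an illustrative reconstruction only, not the authors' Census code; the column names (estab_id, firm_id, year, emp) and the toy panel are assumptions.

```python
import pandas as pd

# Toy establishment-year panel standing in for LBD micro-records (illustrative only).
lbd = pd.DataFrame({
    "estab_id": [1, 1, 2, 2, 3],
    "firm_id":  ["A", "A", "A", "A", "B"],
    "year":     [1991, 1992, 1993, 1994, 1993],
    "emp":      [12, 15, 4, 6, 9],
})

# Entry year: first year an establishment reports positive employment.
entry = (lbd[lbd["emp"] > 0]
         .groupby("estab_id")
         .agg(firm_id=("firm_id", "first"), entry_year=("year", "min"))
         .reset_index())

# First year each firm appears anywhere in the panel.
firm_first = lbd.groupby("firm_id")["year"].min().rename("firm_first_year").reset_index()
entry = entry.merge(firm_first, on="firm_id")

# A new establishment counts as a startup if its parent firm has no earlier presence;
# otherwise it is a facility expansion by an existing multi-unit firm.
entry["type"] = entry.apply(
    lambda r: "startup" if r["entry_year"] == r["firm_first_year"] else "facility expansion",
    axis=1,
)
print(entry)
```

In this toy panel, establishment 2 is classified as a facility expansion because firm A already operated establishment 1, while establishments 1 and 3 are classified as startups.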
Facility expansions are concentrated in retail trade (32%), services (30%), and finance, insurance, and real estate (18%). The growing region of the South has the most new establishment formations, and regional patterns across the two classes of new establishments are quite similar. This uniformity, however, masks the agglomeration that frequently exists at the industry level. Well-known examples include the concentration of the automotive industry in Detroit, tobacco in Virginia and North Carolina, and high-tech entrepreneurship within regions like Silicon Valley and Boston's Route 128.

2.2 Industry Structure and Entrepreneurship

Table 2 shows the basic fact that motivates this paper: the correlation between average establishment size and employment growth. We use both regions and metropolitan areas for spatial variation in this paper. While we prefer to analyze metropolitan areas, the city-level data become too thin for some of our variables when we use detailed industries. The dependent variable in the first three columns is the log employment growth in the region-industry due to new startups. The dependent variable for the second set of three columns is the log employment growth in the region-industry due to new facility expansions that are part of existing firms. Panel A uses the log of average establishment size in the region-industry as the key independent variable. Panel B uses the Herfindahl-Hirschman Index (HHI) in the region-industry as our measure of industrial concentration. Regressions include the initial period's employment in the region as a control variable. For each industry, we exclude the region with the lowest level of initial employment. This excluded region-industry is employed in the instrumental variable specifications. Crossing eight regions and 349 SIC3 industries yields 2,712 observations as not every region includes all industries. Estimations are unweighted and cluster standard errors by industry.

The first regression, in the upper left hand corner of the table, shows that the elasticity of employment growth in startups to initial employments is 0.97. This suggests that, holding mean establishment size constant, the number of startups scales almost one-for-one with existing employment. The elasticity of birth employment with respect to average establishment size in the region-industry is -0.67. This relationship is both large and precisely estimated. It suggests that, holding initial employments constant, a 10% increase in average establishment size is associated with a 7% decline in the employment growth in new startups.

These initial estimates control for region fixed effects (FEs) but not for industry FEs. Column 2 includes industry FEs so that all of the variation is coming from regional differences within an industry. The coefficient on average establishment size of -0.64 is remarkably close to that estimated in Column 1. In the third regression, we instrument for observed average establishment size using the mean establishment size in the excluded region by industry. This instrument strategy only exploits industry-level variation, so we cannot include industry FEs. The estimated elasticities are again quite similar. These instrumental specifications suggest that the central relationship is not purely due to local feedback effects, where a high rate of growth in one particular region leads to an abundance of small firms in that place. Likewise, the relationship is not due to measuring existing employment and average establishment size from the same data.
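As a reading aid for the log-log coefficients discussed above, the arithmetic behind the "10% increase, 7% decline" statement can be made explicit. The specification below is a sketch of the standard elasticity interpretation using the reported coefficient of -0.67; the notation is ours, not the authors' exact estimating equation.

```latex
\[
\ln(\text{Birth Emp}_{ri}) \;=\; \alpha
  \;+\; \beta \,\ln(\overline{\text{Estab Size}}_{ri})
  \;+\; \gamma \,\ln(\text{Emp}_{ri})
  \;+\; \phi_{r} \;+\; \varepsilon_{ri},
\qquad \hat{\beta} = -0.67 .
\]
% A 10% increase in average establishment size raises log size by ln(1.10) ~ 0.095,
% so predicted log birth employment falls by 0.67 * 0.095 ~ 0.064, i.e. a decline of
% roughly 6 to 7 per cent, which is the "7% decline" quoted in the text.
```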
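Panel B's concentration measure is the familiar Herfindahl-Hirschman Index: the sum of squared employment shares within a region-industry. A minimal sketch of the computation is below, assuming establishment employment counts as inputs; the function and toy numbers are ours, not the authors' Census code.

```python
def hhi(employment):
    """Herfindahl-Hirschman Index for one region-industry: the sum of squared
    employment shares (equals 1/N with N equally sized plants, 1.0 for a monopoly)."""
    total = float(sum(employment))
    return sum((e / total) ** 2 for e in employment)

# A region-industry dominated by one large plant is far more concentrated than one
# with the same total employment spread over many small establishments.
print(hhi([900, 50, 50]))   # ~0.815: highly concentrated
print(hhi([100] * 10))      # 0.10: dispersed across ten equal plants
```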
Panel B of Table 2 considers the log HHI index of concentration within each region-industry. While the model in the next section suggests using average establishment size to model industrial structure, there is also a long tradition of empirically modeling industrial structure through HHI metrics. 3 The results using this technique are quite similar to Panel A. A 10% increase in region-industry concentration in 1992 is associated with a 4% decline in employment due to new startups over 1992-1999. The coefficient on initial region-industry employment, however, is lower in this case. When not controlling for initial establishment size, there is a less than one-for-one relationship between initial employment and later growth through startups.

Column 2 of Panel B again models industry FEs. The coefficients are less stable than in the upper panel. The elasticity of startup employment to the HHI index continues to be negative and extremely significant, but it loses over 50% of its economic magnitude compared to the first column. Column 3 instruments using the concentration level in the omitted region. The results here are quite similar to those in the first column.

Columns 4 to 6 of Table 2 consider employment growth from new facility expansions by multiunit firms instead of new startups. These new establishments are not new entrepreneurship per se, but instead represent existing firms opening new production facilities, sales offices, and similar operations. Nevertheless, formations of new establishments represent more discontinuous events than simple employment growth at existing plants. Again, there is a strong negative effect of mean establishment size in the region-industry and subsequent employment growth due to facility expansions. The effect, however, is weaker than in the startup regressions. The results are basically unchanged when we include industry FEs or in the instrumental variables regression. These conclusions are also mirrored in Panel B's estimations using HHI concentration measures.

3 The appendix also reports estimations using the share of employees in a region-industry working in establishments with 20 employees or fewer. This modelling strategy delivers similar results to mean establishment size or HHI concentration.

2.3 Variations by Sector

Figures 2a and 2b document estimations of the relationship between establishment entry rates and initial region-industry structure by sector. The underlying regressions, which are reported in the appendix, include region and industry FEs and control for log initial employment in region-industry. The squares document the point estimates, and the lines provide confidence bands of two standard errors. Negative coefficients again associate greater entry over 1992-1999 with smaller average establishment size by region-industry in 1992.

Figure 2a shows that the average establishment size effect is present for startups in all sectors to at least a 10% confidence level. The elasticity is largest and most precisely estimated for manufacturing at greater than -0.8; the elasticity estimate for finance, insurance, and real estate is the weakest but still has a point estimate of -0.2. On the other hand, Figure 2b shows the average establishment effect is only present for facility expansions in manufacturing, mining, and construction. This relative concentration in manufacturing is striking, as this sector was the subject of the original Chinitz study and much of the subsequent research.
The difference in levels between Figures 2a and 2b also speaks to concentration among startups: in every sector, the average establishment size effect is largest for new entrepreneurs. 4

2.4 Entry Size Distribution

Table 3 quantifies how these effects differ across establishment entry sizes. Table 1 shows that most new establishments are quite small, while others have more than 100 workers. We separate out the employment growth due to new startups into groupings with 1-5, 6-20, 21-100, and 101+ workers in their first year of observation. Panel A again considers average firm size, while Panel B uses the HHI concentration measure. These estimations only include region FEs, and the appendix reports similar patterns when industry FEs are also modelled.

A clear pattern exists across the entry size distribution. Larger average establishment size and greater industrial concentration retard entrepreneurship the most among the smallest firms. For example, a 10% increase in mean establishment size is associated with a 12% reduction in new employment growth due to startups with five workers or fewer. The same increase in average establishment size is associated, however, with a 1% reduction in new employment growth due to entering firms with more than 100 employees. The patterns across the columns show steady declines in elasticities as the size of new establishments increases. The impact for new firms with 6-20 workers is only slightly smaller than the impact for the smallest firms, while the elasticity for entrants with 21-100 employees is 50% smaller. Larger establishments and greater concentration are associated with a decrease in the number of smaller startups, but not a decrease in the number of larger startups.

4 We have separately confirmed that none of the results for new startups reported in this paper depend upon the construction sector, where startups are over-represented in Table 1.

3 Theoretical Model

This section presents a formal treatment of entrepreneurship and industrial concentration. We explore a range of different explanations for the empirical observation that startup activity has a strong negative correlation with the size of existing firms. Our goal is to produce additional testable implications of these explanations.

We develop a simple model based on monopolistic competition following the classic approach of Dixit and Stiglitz (1977). Entrepreneurs create firms that earn profits by selling imperfectly substitutable goods that are produced with increasing returns to scale. The startup costs of entrepreneurship are financed through perfectly competitive capital markets, and no contractual frictions prevent firms from pledging their future profits to financiers. Each company operates over an infinite horizon and faces a constant risk of being driven out of business by an exogenous shock, such as obsolescence of its product or the death of an entrepreneur whose individual skills are indispensable for the operation of the firm. These simple dynamics generate a stationary equilibrium, so that we can focus on the number and size of firms and on the level of entrepreneurial activity in the steady state.

The baseline model enables us to look at the role of amenities, fixed costs, and profitability in explaining firm creation. Several of its empirical predictions are very general: for instance, essentially any model would predict that an exogenous increase in profitability should result in an endogenous increase in activity.
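For orientation, the intuition that there are more and smaller firms where fixed costs are lower or demand is less elastic follows from the textbook monopolistic-competition conditions sketched below. This is the standard Dixit and Stiglitz (1977) apparatus shown as an illustration only; it is not the authors' own specification, which is laid out in the baseline model that follows.

```latex
% Standard Dixit-Stiglitz conditions (illustrative sketch, not the paper's equations).
\[
p = \frac{\sigma}{\sigma - 1}\, c
\qquad \text{(constant markup over marginal cost } c\text{, demand elasticity } \sigma > 1\text{)}
\]
\[
\pi = (p - c)\, q - F = 0
\;\;\Longrightarrow\;\;
q^{*} = \frac{F(\sigma - 1)}{c},
\qquad
n^{*} \propto \frac{1}{F}.
\]
% Under free entry, equilibrium firm size q* falls and the number of firms n* rises when
% the fixed cost F is lower or demand is less elastic (lower sigma), which is the pattern
% of more startups and smaller establishments the baseline model is built to capture.
```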
An advantage of our approach is that different elements can easily be considered within a single standard framework. We also extend the model to address multiple human capital levels and to allow for vertical integration.

3.1 Baseline Model

Consider a closed economy with a perfectly inelastic factor supply. There are I cities characterized by their exogenous endowments of real estate Ki and by their amenity levels ai such that ai > ai+1 for all i. There is a continuum of industries g ∈ [0, G], each of which produces a continuum of differentiated varieties. Consumers have identical homothetic preferences defined over the amenities a of their city of residence, the amount of real estate K that they consume for housing, and their consumption qg(·) of each variety in each industry. Specifically, we assume constant elasticity of substitution σ(g) > 1 across varieties in each sector and an overall Cobb-Douglas utility function U = log a + ...

Competitiveness_Index_2007
Environmental Federalism in the European Union and the United States
David Vogel, Michael Toffel, Diahanna Post, and Nazli Z. Uludere Aragon

Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.

Environmental Federalism in the European Union and the United States
David Vogel, Michael Toffel, Diahanna Post, and Nazli Z. Uludere Aragon
Working Paper 10-085
February 21, 2010

SUMMARY

The United States (US) and the European Union (EU) are federal systems in which the responsibility for environmental policy-making is divided or shared between the central government and the (member) states. The attribution of decision-making power has important policy implications. This chapter compares the role of central and local authorities in the US and the EU in formulating environmental regulations in three areas: automotive emissions for health related (criteria) pollutants, packaging waste, and global climate change. Automotive emissions are relatively centralised in both political systems. In the cases of packaging waste and global climate change, regulatory policy-making is shared in the EU, but is primarily the responsibility of local governments in the US. Thus, in some important areas, regulatory policy-making is more centralised in the EU. The most important role local governments play in the regulatory process is to help diffuse stringent local standards through more centralised regulations, a dynamic which has recently become more important in the EU than in the US.

INTRODUCTION

In the EU and the US, responsibility for the making of environmental policy is divided between EU and federal institutions, on the one hand, and local institutions, on the other. The former comprise the EU and the US federal government, while the latter consist of state and local governments in the US, and member states and subnational authorities in the EU. 1

Historically, environmental rules and regulations were primarily made at the state or local level on both sides of the Atlantic. However, the emergence of the contemporary environmental movement during the late 1960s and early 1970s led to greater centralisation of environmental policy-making in both the US and Europe. In the US, this change occurred relatively rapidly. By the mid 1970s, federal standards had been established for virtually all forms of air and water pollution. By the end of the decade, federal regulations governed the protection of endangered species, drinking water quality, pesticide approval, the disposal of hazardous wastes, surface mining, and forest management, among other policy areas.

1 For ease of presentation, we refer at times to both of the former as central authorities and both of the latter as states.

The federalisation of US environmental policy was strongly supported by pressure from environmental activists, who believed that federal regulation was more likely to be effective than regulation at the state level. In Europe, this change occurred more gradually, largely because the Treaty of Rome contained no provision providing for environmental regulation by the European Community (EC). Nonetheless, more than 70 environmental directives were adopted between 1973 and 1983.
Following the enactment of the Single European Act in 1987, which provided a clear legal basis for EC environmental policy and eased the procedures for the approval of Community environmental directives, EC environmental policy-making accelerated. Originally primarily motivated by the need to prevent divergent national standards from undermining the single market, it became an increasingly important focus of EC/EU policy in its own right. Each successive treaty has strengthened the EU’s commitment to and responsibility for improving environmental quality and promoting sustainable development throughout Europe. Thus, notwithstanding their different constitutional systems, in both the EU and the US, the locus of environmental policy-making has become increasingly centralised over the last three decades. Nevertheless, state governments continue to play a critical role in environmental regulation on both sides of the Atlantic. Most importantly, states remain an important locus of policy innovation and agenda setting. In many cases, new areas of environmental policy are first addressed at the state level and subsequently adopted by the central authority. Many state regulations remain more stringent or comprehensive than those of the central authority; in some policy areas, states retain primary responsibility. In other cases, responsibility for environmental policy-making is shared by both levels of government. Not surprisingly, in both federal systems, there are ongoing disputes about the relative competence of central and state authorities to regulate various dimensions of environmental policy. We explore the dynamics of federal environmental policy-making in both the US and the EU. At what level of government are new standards initiated? Under what circumstances are state regulations diffused to other states and/or adopted by the central authority? Under what circumstances can or do 3 states maintain regulations that are more stringent than those of other states? We focus on the development of US and EU regulatory policies in three areas: automobile emissions for criteria pollutants, packaging waste, and global climate change. Each policy area reflects a different stage in the evolution of environmental policy. These cases also demonstrate the differences and the similarities in the patterns of environmental policy-making in the US and the EU. Automobile emissions typify the first generation of environmental regulation. A major source of air pollution, particularly in urban areas, automobiles were among the first targets of environmental regulation during the 1960s and 1970s and they remain an important component of environmental policy in every industrialized country. Packaging typifies the next generation of environmental regulation. Its emergence on the policy agenda during the 1980s reflected the increased public concern about the scarcity of landfills and the need to conserve natural resources. Unlike automobile regulation, which primarily affects only two industries, albeit critical ones (automotive manufacturers and the refiners of gasoline), packaging waste regulations affect virtually all manufacturers of consumer goods. The increased priority of reducing packaging waste and promoting re-use and recycling symbolises a shift in the focus of environmental regulation from reducing pollution to promoting eco-efficiency. Global climate change represents a more recent dimension of environmental policy. 
It first surfaced during the mid-1980s, but it has become much more salient over the last decade. This policy area exemplifies the increasingly important international dimension of environmental regulation: global climate change both affects and is affected by the regulatory policies of virtually all countries. It also illustrates the growing economic scope of environmental regulation: few economic activities are likely to be unaffected by policies aimed at reducing the emissions of carbon dioxide and other greenhouse gases. These three policy areas provide a useful window on the changing dynamics of the relationship between state and central regulation in the US and the EU. Since the mid-1980s, automobile emissions standards have been more centralised in the EU than in the US. The US permits states to 4 adopt more stringent standards, while the EU does not. However, both the EU and the US have progressively strengthened their regulations governing automotive emissions and fuel composition, though US federal emission standards remain more stringent than EU ones, with the exception of lead in gasoline (petrol) which has now been phased out on both sides of the Atlantic. For its part, California, which is permitted its own emissions standards, has become a world leader in the effort to encourage the development and marketing of low- and zero-emission vehicles. The dynamics of the regulation of packaging waste differs considerably. In the US, the federal government plays little or no role in setting standards for packaging waste: packaging, recycling, and waste disposal are primarily the responsibility of state or local governments. However, the lack of federal standards has neither prevented nor discouraged many state governments from adopting their own regulations. There has been considerable innovation at the state level: a number of local governments have developed ambitious programmes to reduce packaging waste and promote recycling. There has been little pressure for federal standards and the federal government has not attempted to limit state regulations with one important exception: federal courts have repeatedly found state restrictions on ‘imports’ of garbage to violate the interstate commerce clause of the US constitution. 2 In the EU, the situation is more complex. Member states began to regulate packaging waste during the 1980s, while the EU became formally involved in this policy area in 1994. However, in contrast to automotive emissions, the responsibility for packaging regulation remains shared between central and state authorities. There is considerable diversity among state regulations, and member states continue to play an important role in policy innovation, often adopting regulations that are more stringent than those of the EU. State packaging waste regulations have been an ongoing source of conflict between central and local authorities, with the European Commission periodically challenging particular state regulations on the grounds of their incompatibility with the single market. In addition, the EU has imposed maximum as well as minimum standards for waste recovery, though this is likely to change 2 Berland, 1992. 5 soon. On balance, EU packaging standards are more stringent and comprehensive than those in the US. Europe’s ‘greener’ member states have made more ambitious efforts to reduce packaging waste than have their American state counterparts, while the EU’s Packaging Waste Directive provides a centralised floor on state standards which does not exist in the US. 
Nevertheless, there have been a number of important US state standards. In the case of climate policy, important initiatives and commitments to reduce emissions of greenhouse gases have been undertaken in the EU at both the central and state levels with one often complementing and reinforcing the other. In the US, by contrast, federal regulations restricting greenhouse gases had yet to be implemented as of early 2010. As in the case of packaging waste policies, there have been a number of state initiatives. But in contrast to the regulation of packaging waste, the lack of central regulation of climate policy has become politically salient, even causing conflict over the legal authority of states to establish policies in this area. The gap between US and EU regulatory policies regarding climate change is more substantial than the gaps in the other two policy areas. The EU and each member state have formally ratified the Kyoto Protocol, while the US has not. Since American states cannot enter into international environmental agreements, this means that no US regulatory authority is under any international obligation to regulate carbon dioxide emissions. While all EU member states have adopted climate change policies, many states in the US have not. Moreover, most US state regulations tend to be weaker than those adopted or being adopted by the EU. The EU has established a regulatory regime based on emissions trading and shared targets to facilitate member states’ carbon dioxide reduction programmes, while in the critical area of vehicle emissions, the US central government was, until recently, an obstacle to more stringent state regulations. AUTOMOBILE EMISSIONS United States 6 The six common air pollutants are particulate matter, ground-level ozone, 3 carbon monoxide, oxides of sulphur (mainly sulphur dioxide), oxides of nitrogen (mainly nitrogen dioxide), and lead. 4 In US EPA parlance, these are also known as “criteria pollutants,” since their permissible levels are established using two sets of criteria, developed according to scientific guidelines. 5 Mobile sources, which include automobiles, are significant contributors to ground-level ozone and fine particulate matter pollution in many US cities, and also cause carbon monoxide and nitrogen dioxide emissions. Historically, motor vehicles were also the largest source of airborne lead emissions, but the removal lead from gasoline has dramatically reduced lead emissions from transport. Of the six criteria pollutants, only sulphur dioxide emissions, which are largely the result of fossil fuel combustion by power plants, are not substantially attributable to motor vehicles. 6 The regulation of air pollutants (emissions) from automobiles in the US began in 1960 when the state of California enacted the Motor Vehicle Pollution Control Act. This statute established a state board to develop criteria to approve, test, and certify emission control devices. 7 Within two years, the board had certified seven devices that were bolt-on pollution controls, such as air pumps that improve combustion efficiency 8 and required their installation by 1965. 9 After opposing emissions standards in the mid-1960s, ‘the automobile industry began to advocate federal emissions standards for automobiles [after] California had adopted state standards, and a number of other states were considering similar legislation.’ 10 In 1965, Congress enacted the federal Motor Vehicle Air Pollution Control Act, which authorised the establishment of auto emissions standards. 
The first federal standards were imposed for 1968 model year vehicles for carbon monoxide and hydrocarbons. 11 Two years later, in 1967, Congress responded to the automobile industry’s concerns about the difficulty of complying with different state standards by declaring that federal emission controls 3 Ground-level ozone is different from the beneficial ozone that forms a natural layer in the earth’s stratosphere, shielding us from excessive solar radiation. 4 United States Environmental Protection Agency (from here onwards, US EPA or EPA), 2006. 5 Primary standards are based on human health criteria, and secondary standards on environmental criteria. 6 In countries where the use of low-sulphur diesel fuels have not become widespread, yet diesel vehicle use is common, motor vehicles could be a source of sulphur-dioxide emissions. Some fuels used in marine or rail transport also contain high amounts of sulphur. 7 Percival et al., 1992. 8 California EPA, 2001. 9 Percival et al., 1992. 10 Revesz, 2001: 573. 11 Hydrocarbons are emissions resulting from the incomplete combustion of fuels and a precursor to ground-level ozone pollution. 7 would preempt all state emission regulations. However, an exception was made for California, provided that the state afforded adequate lead time to permit development of the necessary technology, given the cost of compliance within that time. 12 The exemption was granted ‘in recognition of the acute automobile pollution problems in California and the political power of the California delegation in the House of Representatives’. 13 One legal scholar noted, ‘The legislative history of the 1967 waiver provision suggests two distinct rationales for its enactment: (1) providing California with the authority to address the pressing problem of smog within the state; and (2) the broader intention of enabling California to use its developing expertise in vehicle pollution to develop innovative regulatory programs.’ 14 In 1970, President Nixon asked Congress to pass more stringent standards based on the lowest pollution levels attainable using developing technology. 15 Congress responded by enacting the technology-forcing Clean Air Act Amendments of 1970, which required automakers to reduce their emissions of carbon monoxide and hydrocarbons by 90 per cent within five years and their emissions of nitrogen oxides by 90 per cent within six years. 16 These drastic reductions were intended to close the large gap between ambient urban air pollution concentrations and the federal health-based Nationally Uniform Ambient Air Quality Standards (NAAQS) established pursuant to the US Clean Air Act. 17 Once again, California was permitted to retain and/or enact more stringent standards, though these were specified in federal law. 18 The 1977 amendments to the Clean Air Act established more stringent emissions standards for both automobiles and trucks and once again permitted California to adopt more stringent standards. In 1990, the Clean Air Act was again amended: ‘the California Air Resources Board old tailpipe emissions standards for new cars and light duty trucks sold in that state were adopted by Congress . . . 12 US EPA, 1999. 13 Rehbinder and Stewart, 1985: 114. 14 Chanin, 2003: 699. 15 Percival et al., 1992. 16 Rehbinder and Stewart, 1985. 
17 Congress based its 90 per cent reduction on ‘the simple notion that since air pollution levels in major cities were approximately five times the expected levels of the NAAQSs, emissions would need to be reduced by at least 80 per cent, with an additional 10 per cent necessary to provide for growing vehicle use’ (Percival et al., 1992: 834). 18 California EPA, 2001. 8 as the standard to be met by all new vehicles.’ 19 In addition to again waiving federal preemption for California, the 1990 legislation for the first time authorised any state that was not meeting NAAQS for automotive pollutants to adopt California’s standards. 20 As a result, two regimes for automotive emission regulation emerged: one based on federal standards and the other on California’s. This regulatory policy reflected ‘a compromise between two interests: the desire to protect the economies of scale in automobile production and the desire to accelerate the process for attainment of the NAAQS’. 21 Thus, while automotive emission standards were primarily shaped by federal legislation, the federal government provided states with the opportunity to choose between two sets of standards. While allowing states to opt for a stricter emissions regime, the Clean Air Act Amendments of 1990 also called for a gradual strengthening of federal automobile emissions standards, to be promulgated by the US EPA. The so-called ‘Tier I’ standards were implemented between 1994 and 1997. The more stringent ‘Tier II’ standards were issued by the EPA in February 2000, and phased-in between 2004 and 2009. There were two important components of the Tier II standards. The first was a dramatic reduction in sulphur amounts in gasoline (by 90 per cent), achieved by the strong advocacy of a coalition of environmental and public health organisations, and state and local environmental agencies. 22 The second was a requirement for all light trucks, passenger cars, medium-duty sport utility vehicles and passenger vans to be subject to the same emissions standards by model year 2009. 23 California has continued to play a pioneering role in shaping automotive emissions policy. In 1990, the state adopted a programme to encourage Low-Emission Vehicles (LEV). This included a ZeroEmission Vehicle (ZEV) programme meant to jump-start the market for these vehicles. The ZEV programme required that such vehicles comprise at least 2 per cent of new car sales by 1998, 5 per 19 Bryner, 1993: 150. 20 Chanin, 2003; Revesz, 2001. 21 Revesz, 2001: 586. 22 This group included the Clean Air Trust and the Environmental Defense Fund, the STAPPA/ALAPCO (State and Territorial Air Pollution Program Administrators / Association of Local Air Pollution Control Officials), a nationwide organisation of state and local pollution control officials, and American Lung Association. In fact, the automakers were also in favour of the proposal to reduce sulphur content of gasoline, without which it would have been difficult to deliver the companion Tier 2 emission reductions. 23 All vehicles up to 8,500 pounds GVWR (gross vehicle weight rating) are subject to Tier 2 standards. Also, these standards are the same whether a vehicle uses gasoline, diesel or any other fuel; in other words, they are “fuel neutral.” (US EPA, 2000) 9 cent by 2001, and 10 per cent by 2003. 
When this requirement was approved, the only feasible technology that met ZEV standards were electric vehicles, whose emissions were over 90 per cent lower than those of the cleanest gasoline vehicles, even when including the emissions from the power plants generating the electricity required to recharge them. 24 Massachusetts and New York subsequently adopted the California LEV plan. However, in 1992, New York’s decision was challenged in the courts by the automobile manufacturers on the grounds that it was sufficiently different from California’s to constitute a third automotive emission requirement, which the Clean Air Act explicitly prohibits. Shortly afterwards, the manufacturers filed another suit against both states arguing that, since their standards were not identical with those of California, they were preempted by the Clean Air Act. As a result, both states were forced to modify their standards. 25 In 1998, California’s Air Resources Board (California ARB) identified diesel particulate matter as a toxic air contaminant. 26 The state subsequently launched a Diesel Risk Reduction Plan in 2000 to reduce diesel particulate emissions by 75 per cent within ten years. The plan established requirements for low-sulphur diesel fuel and particulate standards for new diesel engines and vehicles, and new filters for existing engines. 27 In this case, federal and California initiatives moved in tandem. Shortly after California acted, the EPA also announced more stringent standards for new diesel engines and fuels in order to make heavy-duty trucks and buses run cleaner. The EPA adopted a new rule in January 2001 that required a more than 30 times reduction in the sulphur content of diesel fuels (from 500 parts per million to 15 parts per million), which matched the California standard. 28 The resulting fuel, called ultra-low sulphur diesel, has been available across the country starting October 2006. By the end of 2010, all highway diesel fuel sold within the US will be ultra-low sulphur diesel. 29 24 California Air Resources Board, 2001. 25 In December 1997, the EPA issued regulations for the ‘National Low Emission Vehicle’ (NLEV) program. This voluntary program was the result of an agreement between nine Northeastern states and the auto manufacturers. It allowed vehicles with more stringent emission standards to be introduced in states that opt for the NLEV program before the Tier 2 regulations came into effect. Vehicles complying with NLEV were made available in the participating states for model year 1999 and nationwide for model year 2001. The standards under the NLEV program were equivalent to the California Low Emission Vehicle program, essentially harmonising the federal and California motor vehicle standards (US EPA, 1998). 26 California EPA, 2001. 27 California Air Resources Board, 2001. 28 The Highway Diesel Rule (US EPA, 2001). 29 The EPA rule requires that by December 1, 2014 all non-road, locomotive and marine diesel fuel sold in the US to be ultra-low sulphur diesel as well. California’s rule accelerates this by three to five years. 10 More recently, California’s automotive emissions standards have become a source of conflict with the federal government. 
Two novel California regulations, which the state claims are designed to reduce automobile emissions, have been challenged by both the automotive industry and the federal government on the grounds that they indirectly regulate fuel efficiency, an area of regulation which Congress has assigned exclusively to the Federal government. 30 The first case involves a modification California made to its ZEV programme in 2001 that allowed automakers to earn ZEV credits for manufacturing compressed natural gas, gasoline-electric hybrid, and methanol fuel cell vehicles. 31,32 General Motors and DaimlerChrysler sued California’s ARB over a provision that allowed manufacturers to earn ZEV credits by using technology such as that included in gasoline-electric hybrid vehicles, which were already being sold by their rivals Honda and Toyota. Because hybrids still use gasoline, General Motors and DaimlerChrysler argued that California’s efforts were effectively regulating fuel economy. 33 The US Justice Department supported the auto manufacturers’ claim on the grounds that the Energy Policy and Conservation Act provides that when a federal fuel-economy standard is in effect, a state or a political subdivision of a state may not adopt or enforce a regulation related to fuel-economy standards. 34 California responded by claiming that it was acting pursuant to its exemption under the US Clean Air Act to regulate auto emissions. In June 2002, a Federal District Court issued a preliminary injunction prohibiting the Air Resources Board from enforcing its regulation. 35 In response, the ARB modified the ZEV programme to provide two alternative routes for automakers to meet ZEV targets. 36 At the same time, California imposed new regulations which required that the auto industry sell increasing numbers of fuel-cell vehicles in the 30 In the Energy Policy and Conservation Act of 1975, Congress established exclusive Federal authority to regulate automotive fuel economy, through the Corporate Average Fuel Economy (CAFE) standards. 31 At the same time, California extended ZEV market share requirements to range from 10 per cent in 2003 up to 16 per cent in 2018 (California Air Resources Board, 2001). 32 The second dispute concerns climate change and is discussed below. 33 Parker, 2003. 34 Yost, 2002. 35 California Air Resources Board, 2003. 36 According to the California Air Resources Board (2003), ‘Auto manufacturers can meet their ZEV obligations by meeting standards that are similar to the ZEV rule as it existed in 2001. This means using a formula allowing a vehicle mix of 2 per cent pure ZEVs, 2 per cent AT-PZEVs (vehicles earning advanced technology partial ZEV credits) and 6 per cent PZEVs (extremely clean conventional vehicles). Or manufacturers may choose a new alternative ZEV compliance strategy, meeting part of their ZEV requirement by producing their sales-weighted market share of approximately 250 fuel cell vehicles by 2008. The remainder of their ZEV requirements could be achieved by producing 4 per cent AT-PZEVs and 6 per cent PZEVs. The required number of fuel cell vehicles will increase to 2,500 from 2009-11, 25,000 from 2012-14 and 50,000 from 2015 through 2017. Automakers can substitute battery electric vehicles for up to 50 per cent of their fuel cell vehicle requirements’. 11 state over the next decade. 
37 However, in the summer of 2003, both automobile firms dropped their suits against California after its regulatory authorities agreed to expand their credit system for hybrids to encompass a broader range of vehicles. 38 European Union As in the US, in Europe, the regulations of state governments have been an important driver for centralised automotive emissions standards, with Germany typically playing the role in Europe that California has played in the US. The EU has progressively strengthened its automotive emissions standards, both to improve environmental quality and to maintain a single market for vehicles. However, European standards were strengthened at a much slower rate than were those in the US, and they were harmonised much later. Thus, in 1989, the EU imposed standards to be implemented in 1992 that were based on US standards implementing legislation enacted in 1970 and 1977, while the EU did not establish uniform automotive emissions requirements until 1987, although some fuel content standards were harmonised earlier. However, unlike in the US, which has continued to maintain a two-tiered system – and indeed extended it in 1977 by giving states the option of adopting either federal or California standards, in Europe, centralised standards for automobile emissions have existed since 1987. During the 1970s and 1980s, there was considerably more tension between central and state regulations in the EU than in the US. Recently, the opposite has been the case. During the 1960s, France and Germany imposed limits on emissions of carbon monoxide and hydrocarbons for a wide range of vehicles, thus forcing the EC to issue its first automotive emissions standards in 1970 in order to prevent these limits from serving as obstacles to internal trade. Shortly afterwards, there was substantial public pressure to reduce levels of airborne lead, a significant portion of which came from motor vehicles. The first restrictions were imposed by Germany, which in 1972 announced a two-stage reduction: the maximum lead content in gasoline was initially capped at 0.4 grams per litre in 1972, to be further reduced to 0.15 grams per litre in 1976. The United Kingdom 37 Hakim, 2003a. 38 Hakim, 2003b. 12 (UK) also enacted restrictions on lead in gasoline in 1978, though less severe than Germany (0.45 grams per litre). With no restrictions imposed by any other member state, the resulting disparity in national rules and regulations represented an obstacle to the free movement of both fuel and motor vehicles within the EC. For not only did these divergent national product regulations limit intra-EC trade in gasoline, but since different car engines were designed to run on fuels containing different amounts of lead, they created a barrier to intra-Community trade in motor vehicles themselves. Accordingly, the EC introduced a directive in 1978 that imposed a minimum and a maximum limit for lead content in gasoline (0.15 and 0.40 grams per litre, respectively), with both standards to go into effect in 1981. While the minimum requirement effectively allowed member states like Germany to establish the strict national limit they sought, it also prevented any member state from requiring lead-free gasoline and potentially disrupting the single market. In 1985, as a result of continued pressure from both Germany and Britain, the European Council required unleaded gasoline to be available in all member states by October 1989. 
The maximum lead content in gasoline was also further reduced to 0.15 grams per litre, and member states were encouraged to comply as quickly as possible. Two years later, member states were allowed to ban leaded gasoline, should they choose to. In 1998, all Western European and several central European countries agreed to end the sale of leaded gasoline by 2005. Unlike the lead standard, in the establishment of which the German regulations played an important role, the EC’s standards for sulphur in fuel did not reflect the policy preferences of any member state. The sulphur standard adopted in 1975 required all countries, including France, Germany, and the UK, to reduce their sulphur emissions. 39 France, for instance, had already adopted standards for sulphur in diesel fuel in 1966, but the more stringent levels in the European-wide standard forced the French standards lower as well. Germany’s standard was adopted at the same time and was similar to that of the EC. The auto emissions standards adopted in the EC during the 1970s were not mandatory. In fact, until 1987, member states were permitted to have standards less stringent than the European-wide standards, although they could not refuse to register or sell a vehicle on their territory if it met EC maximum standards. In effect, these early standards were maximum or ceiling requirements that were developed not by the EC but instead were based heavily on the emissions standards of the United Nations Economic Commission for Europe. In 1985, the German minister responsible for environmental affairs announced, on his own initiative, that as of 1989 all cars marketed in Germany would be required to meet US automotive emissions standards, commonly referred to as ‘US ’83’. The adoption of these standards required the installation of catalytic converters, which could only use unleaded gasoline. This created two problems within Europe. Most importantly, it meant that automobiles produced in France and Italy, whose producers lacked the technology to incorporate the converters into their smaller vehicles, would be denied access to the German market. In addition, it meant that German tourists who drove their cars to southern Europe would be stranded, owing to the unavailability of unleaded gasoline in Greece and Italy. Germany’s insistence on requiring stringent standards for vehicles registered in its country forced the EU to adopt uniform automobile emissions standards. This in turn led to a bitter debate over the content of these standards, pitting the EU’s greener member states (Germany, Denmark, and the Netherlands) against the EU’s (other) major automobile producers (the UK, France, and Italy), who favoured more flexible standards. The resulting Luxembourg Compromise of 1987 established different emissions standards for different sizes of vehicles with different timetables for compliance. It thus represented the first uniform set of automotive emissions standards within Europe. These standards have subsequently been strengthened several times, though on balance they remain less stringent than those of the United States, most notably for diesel emissions. During the 1990s, the politics of automobile emissions standards became much less affected by member state differences or tensions between central and state standards. 39 Bennett, 1991.
The most important initiative of this period, the Auto-Oil Programme, first adopted in 1996, was aimed at bringing together the Commission and the auto and oil industries to work on comprehensive ways to reduce pollution. After a series of negotiations, the programme ultimately tightened vehicle emission limits and fuel quality standards for sulphur and diesel, and introduced a complete phase-out of leaded gasoline. 40 In 2003, the EU approved a Directive requiring that all road vehicle fuels be sulphur-free by 2009. With the finalisation of Auto-Oil I and II, as the programmes are known, the shift from state to centralised automotive emission requirements appears to be complete. The debates and negotiations over proposals to regulate pollution from vehicles now take place between the automakers and oil producers on the one hand, and the Commission, the Council, and the European Parliament (EP) on the other.
PACKAGING WASTE
United States
The regulation of packaging wastes is highly decentralised in the US. The role of the federal government remains modest and virtually all policy initiatives have taken place at the local level. While the 1976 Resource Conservation and Recovery Act (RCRA) established stringent requirements for the management of hazardous wastes, the RCRA also declared that the regulation of landfills accepting municipal solid waste (MSW) was to remain primarily the domain of state and local governments. 41 As a result, there is considerable disparity in the handling of packaging wastes throughout the US. On balance, US standards tend to be considerably laxer than those in the EU. While many state legislatures have established recycling goals, few have prescribed mandatory targets. 42 The US generates more MSW per capita than any other industrialised country, and 50 per cent more than most European countries. 43 From 1995 to 1998, the percentage of MSW generated that was recovered for recycling remained steady at 44 per cent in the US, while it rose from 55 per cent to 69 per cent in Germany, owing in part to Germany’s Packaging Ordinance. 44 40 McCormick, 2001. 41 US EPA, 2003a, 2003b, 2003c. 42 American Forest & Paper Association, 2003. 43 The latest OECD figures report that Americans generate 760 kg per capita, the French 510, the British 560, and Germans 540 (OECD, 2004). State and local governments have implemented several policy mechanisms to reduce MSW, including packaging waste. Deposit-refund schemes, minimum recycled-content requirements, community recycling programmes, and disposal bans are among the most common policy mechanisms designed to divert materials to recycling from waste streams destined for landfills or incinerators. Eleven states have developed deposit-refund schemes to encourage the recycling of beverage containers. 45 When Oregon passed the first bottle bill requiring refundable deposits on all beer and soft-drink containers in 1971, its objective was to control litter rather than to spur recycling. When the city of Columbia, Missouri, passed a bottle bill in 1977, it became the first local container-deposit ordinance in the US and remained the only local initiative until it was repealed in 2002. 46 In general, deposit-refund laws require consumers of soft drinks and beer packaged in glass, metal, and plastic containers to pay a deposit that is refundable when the container is returned. 47 These schemes typically do not require, however, that these containers be recycled or reused.
48 California recently expanded its programme to include non-carbonated beverages, which added roughly 2 billion containers, nearly 40 per cent of which are plastic. 49 To reduce the burden on landfills and incinerators, whose construction and expansion are increasingly politically infeasible owing to community objections, many states and local governments have developed recycling programmes that enable or require the recycling of various materials. Such programmes remain exclusively the purview of state and local government because national laws do not allow EPA to establish federal regulations on recycling. 50 Virtually all New Yorkers, 80 per cent of the Massachusetts population, and 70 per cent of Californians have access to curbside recycling. 51 Recycling programmes typically include paper as well as metal and glass containers, while some 44 OECD, 2002. 45 The eleven states with deposit-refund schemes on soft-drink containers are California, Connecticut, Delaware, Hawaii, Iowa, Maine, Massachusetts, Michigan, New York, Oregon, and Vermont. Hawaii’s law takes effect in 2005 (Container Recycling Institute, 2003). 46 Container Recycling Institute, 2003. 47 Some deposit refunds are being expanded to include office products, while Maine and Rhode Island have created deposit-refund schemes for lead-acid/automobile batteries (US EPA, 1999). 48 McCarthy, 1993. 49 US EPA, 2003a, 2003b, 2003c. 50 Cotsworth, 2002. 51 Dietly, 2001. 16 programmes also include containers of particular plastic resins. In Oregon, container glass comprises nearly 4 per cent of that state’s total solid waste stream, and its deposit-refund and collection schemes resulted in 55 per cent of this glass being collected and recycled. 52 Sixty per cent of Oregon’s recycled container glass comes from its deposit-refund scheme, 25 per cent is collected from residential curbside programmes, and the remainder comes from commercial solid-waste hauler programmes, disposal sites, and other private recycling activities. A few states have sought to facilitate recycling by banning packaging that is particularly difficult to recycle, such as aseptic drink boxes, which are made of paper, foil, and plastic layers that are difficult to separate. Connecticut banned plastic cans in anticipation of obstacles this product would pose to materials recovery. In 1989, Maine banned aseptic drink boxes because of a concern about their ability to be recycled, though this restriction was subsequently repealed. The Wisconsin Legislature considered imposing a ban on the sale of aseptic drink boxes and bimetal cans (drink cans with aluminium sides and bottom and a steel top). Instead, the state enacted an advisory process permitting it to review a new packaging design if the packaging proved difficult to recycle. In addition, a few states, including Wisconsin and South Dakota, have banned the disposal of some recyclable materials to bolster their recycling rates. 53 Some states require certain types of packaging to contain some minimum amount of recycled material. Oregon’s 1991 Recycling Act required that by 1995, 25 per cent of the rigid plastic packaging containers (containing eight ounces to five gallons) sold in that state must contain at least 25 per cent recycled content, be made of a plastic material that is recycled in Oregon at a rate of at least 25 per cent, or be a reusable container made to be reused at least five times. 54 This law also requires glass containers to contain 35 per cent recycled content by 1995 and 50 per cent by 2000. 
55 California requires manufacturers of newsprint, plastic bags, and rigid plastic containers to include minimum levels of recycled content in their products or to achieve minimum recycling rates. Manufacturers of plastic trash bags are required to include minimum percentages of recycled post-consumer plastic in the trash bags they sell in California. California’s 1991 Rigid Plastic Packaging Container (RPPC) Act sought to reduce the amount of plastic being landfilled by requiring that containers offered for sale in the state meet criteria akin to those laid down in the Oregon law. These criteria ‘were designed to encourage reuse and recycling of RPPCs, the use of more post-consumer resin in RPPCs and a reduction in the amount of virgin resin employed in RPPCs’. 56 Wisconsin’s Act on Recycling & Management of Solid Waste requires that products sold in the state use a package made from at least 10 per cent recycled or remanufactured material by weight. 57 Industrial scrap, as well as pre- and post-consumer materials, counts towards the 10 per cent requirement. Exemptions are provided for packaging for food, beverages, drugs, cosmetics, and medical devices that lack FDA approval. However, according to the president of Environmental Packaging International, Wisconsin has done little to enforce its 10 per cent recycled content law. 58 Governments at the federal, state, county, and local levels have also promulgated policies prescribing government procurement of environmentally preferable products. 59 In 1976, Congress included in RCRA requirements that federal agencies, as well as state and local agencies using appropriated federal funds, purchase products with recycled content when they spend over a threshold amount on particular items and when the cost, availability, and quality of those products are comparable to those of virgin products, though the RCRA does not authorise any federal agency to enforce this provision. 60 States requiring government agencies to purchase environmentally preferable products include California, Georgia, Oregon, and Texas. California’s State Assistance for Recycling Markets Act of 1989 and Assembly Bill 11 of 1993 required government agencies to give purchasing preference to recycled products and mandated that increasing proportions of procurement budgets be spent on products with minimum levels of recycled content. Accordingly, the California Integrated Waste Management Board (CIWMB) developed the State Agency Buy Recycled Campaign, requiring that every State department, board, commission, office, agency-level office, and cabinet-level office purchase products that contain recycled materials whenever they are otherwise similar to virgin products. Procurement represents one of the few areas in which there have been federal initiatives. 52 Oregon Department of Environmental Quality, 2003. 53 Thorman et al., 1996. 54 All rigid plastic container manufacturers have been in compliance with the law since it entered into force a decade ago, because the aggregate recycling rate for rigid plastic containers has remained between 27 and 30 per cent since the law took effect (Oregon Department of Environmental Quality, 2003). 55 Thorman et al., 1996. 56 California Integrated Waste Management Board, 2003. 57 Plastic Shipping Container Institute, 2003. 58 Bell, 1998. 59 California Integrated Waste Management Board, 2003; Center for Responsive Law, 2003. 60 US EPA, 2003a, 2003b, 2003c.
A series of Presidential Executive Orders issued throughout the 1990s sought to stimulate markets for environmentally preferable products and to reduce the burden on landfills. 61 In 1991, President George Bush issued an Executive Order to increase the level of recycling and procurement of recycled-content products. In 1993, President Bill Clinton issued an Executive Order that required federal agencies to purchase paper products with at least 20 per cent post-consumer fibre and directed the US EPA to list environmentally preferable products, such as those with less cumbersome packaging. Clinton raised this recycled-content threshold to 30 per cent in a subsequent Executive Order in 1998. 62 At the national level, several Congressional attempts to pass a National Bottle Bill between 1989 and 2007 were defeated. Most recently, a bill was introduced in 2009 as the “Bottle Recycling Climate Protection Act of 2009” (H.R. 2046), but it has yet to be adopted. According to the non-profit Container Recycling Institute, a key reason why bottle bills have not spread to more states or become national law is ‘the tremendous influence the well-funded, politically powerful beverage industry lobby wields’. 63 Thus, packaging waste policies remain primarily the responsibility of state and local governments. European Union The EU’s efforts to control packaging waste contrast sharply with those of the US in two ways. First, with the enactment of the 1994 EU Directive on Packaging and Packaging Waste, central authorities have come to play a critical role in shaping politics to reduce packaging waste within Europe. Thus, in 61 Lee, 1993. 62 Barr, 1998. 63 Container Recycling Institute, 2003. 19 Europe, in marked contrast to the US, this area of environmental policy is shared between central and state governments. Second, unlike in the US, where federal authorities have generally been indifferent to state policies to promote the reduction of packaging waste, in Europe, such policies have frequently been challenged by Brussels (the Commission) on the grounds that they interfere with the single market. In addition, the EU’s 1994 Packaging Directive established maximum as well as minimum recycling targets, while maximums have never existed in the US. As a result, some member states have been forced by Brussels to limit the scope and severity of their regulations. Historically, recycling policies were made exclusively by the member states. In 1981, Denmark enacted legislation requiring that manufacturers market all beer and soft drinks in reusable containers. Furthermore, all beverage retailers were required to take back all containers, regardless of where they had been purchased. To facilitate this recycling programme, only goods in containers that were approved in advance by the Danish environmental protection agency could be sold. Thus, a number of beverage containers produced in other member states could not be sold in Denmark. Foreign beverage producers complained to the European Commission that the Danish requirement constituted a ‘qualitative restriction on trade’, prohibited by the Treaty of Rome. The Commission agreed. When Denmark’s modified regulation in 1984 failed to satisfy the Commission, the EC brought a complaint against Denmark to the European Court of Justice (ECJ). In its decision, the ECJ upheld most of the provisions of the Danish statute, noting that the Commission itself had no recycling programme. 
The Court held that since protecting the environment was ‘one of the Community’s central objectives’, environmental protection constituted ‘a mandatory requirement capable of limiting the application of Article 30 of the Treaty of Rome’. 64 This was the first time the Court had sanctioned an environmental regulation that clearly restricted trade. The result of the ECJ’s ruling was to give a green light to other national recycling initiatives. Irish authorities proceeded with a ban on non-refillable containers for beer and soft drinks, while a number of Southern member states promptly restricted the sale of beverages in plastic bottles in order to protect the environment, and, not coincidentally, domestic glass producers. 64 Vogel, 1995: 87. The Netherlands, Denmark, France, and Italy promptly introduced their own comprehensive recycling plans. The most far-reaching initiative to reduce packaging waste, however, was undertaken by Germany. The 1991 German packaging law was a bold move towards a ‘closed loop’ economy in which products are reused instead of thrown away. It established very high mandatory targets, requiring that 90 per cent of all glass and metals, as well as 80 per cent of paper, board, and plastics, be recycled. In addition, only 28 per cent of beer and soft drinks could be sold in disposable containers. The law also established ‘take-back’ requirements on manufacturers, making them responsible for the ultimate disposal of the packaging in which their products were sold and shipped. A quasi-public system was established to collect and recycle packaging, with the costs shared by participating firms. In addition to making it more difficult for foreign producers to sell their products in Germany, the so-called Töpfer Law distorted the single market in another way. The plan’s unexpected success in collecting packaging material strained the capacity of Germany’s recycling system, forcing Germany to ‘dump’ its excess recycled materials throughout the rest of Europe. This had the effect of driving down prices for recycled materials in Europe, and led to the improper disposal of waste in landfills in other countries. 65 Yet the ECJ’s decision in the Danish Bottle Case, combined with the Commission’s fear of being labelled ‘anti-green’, made it difficult for the Commission to file a legal challenge to the German regulation. Accordingly, the promulgation of waste management policy now moved to the EU level. In 1994, following nearly three years of intense negotiations, a Directive on Packaging Waste was adopted by a qualified majority of member states, with opposition from Germany, the Netherlands, Denmark, and Belgium. It required member states to recover at least half of their packaging waste and recycle at least one-quarter of it, within five years. Ireland, Greece, and Portugal were given slightly lower targets. More controversially, the Directive also established maximum standards: nations wishing to recycle more than 65 per cent of their packaging waste could do so, but only if they had the facilities to use their recycled products. 65 Comer, 1995. It was this provision which provoked opposition. The Packaging Waste Directive has played a critical role in strengthening packaging waste regulations and programmes throughout much of Europe, particularly in Great Britain and the South of Europe. As in the case of automobile emissions standards, it illustrates the role of the EU in diffusing the relatively stringent standards of some member states throughout Europe.
Moreover, the decrease in some state standards as a result of the 1994 Directive was modest. 66 Member states continue to innovate in this policy area and these innovations have on occasion sparked controversy within the EU. For example, in 1994, the European Commission began legal proceedings against Germany, claiming that a German requirement that 72 per cent of drink containers be refillable was interfering with efforts to integrate the internal market. Germany has proposed to do away with the requirement owing to pressure from the Commission, but it remains a pending legal issue. This packaging waste dispute tops the list of key single market disputes identified by the Commission in 2003, and the outcomes of numerous other cases hinge on its resolution. 67 In 2001, Germany adopted a policy requiring deposits on non-refillable (one-way) glass and plastic bottles and metal cans in order to encourage the use of refillable containers. This law, which went into effect in 2003, aroused considerable opposition from the German drinks industry, which held it responsible for a dramatic decline in sales of beer and soft drinks and the loss of thousands of jobs. In addition, the European Commission, acting in response to complaints from non-German beverage producers, questioned the legality of the German scheme. The Commission agreed that the refusal of major German retailers to sell one-way drink containers had disproportionately affected bottlers of imported drinks, a position which was also voiced by France, Italy, and Austria. However, after the German government promised to revise its plan in order to make it compliant with EU law, the Commission decided not to take legal action. As occurred during the previous decade, the extent to which new packaging waste initiatives by member states threaten or are perceived to threaten the single market has put pressure on the EU to 66 Haverland, 1999. 67 Environment Daily, 2001a, 2003d. 22 adopt harmonised standards. As the European Environmental Bureau noted in response to the Commission’s decision to sue Germany over national rules protecting the market share of refillable drinks containers, ‘national reuse systems will come under pressure if the Commission continues to legally attack them at the same time it fails to act at the European level’. 68 In 2004, the Commission and the EP revised the 1994 Packaging Waste Directive by not only establishing stricter recycling targets, but also differentiating these targets by materials contained in packaging waste (such as glass, metal, plastic and wood). 69 The majority of member states were allowed until the end of 2008 to comply. 70 The Directive asks the Commission to review progress and, if necessary, recommend new recycling targets every five years. In 2006, the Commission recommended that the targets specified in the 2004 amendment should remain in effect for the time being, while new members catch up with these standards. 71 CLIMATE CHANGE United States In the US, greenhouse gas emissions remain largely unregulated by the federal government. In the 1990s, the Clinton Administration participated in the United Nations’ effort to establish a treaty governing greenhouse gas emissions. While the US signed the Kyoto Protocol, no US President has submitted it to the Senate for ratification. Soon after taking office, the Bush Administration declared it would not support the Kyoto Protocol. 
It also refused to propose any regulations for carbon dioxide emissions, choosing instead to encourage industry to adopt voluntary targets through its Global Climate Change Initiative. The Congress has also not adopted any legislation establishing mandatory reductions in greenhouse gas emissions, though in 2007 it did enact legislation strengthening vehicle fuel economy standards for the first time in more than two decades. In 2009, a climate change bill establishing a cap-and-trade scheme to reduce greenhouse gas emissions passed the US House of Representatives, 72 and the US EPA has acknowledged it could regulate greenhouse gas emissions under the federal Clean Air Act. 68 Environment Daily, 2001b. 69 European Parliament and Council, 2004. 70 With the exception of Greece, Ireland and Portugal, which were allowed until the end of 2011, due to some geographical peculiarities of these countries (presence of numerous islands within their borders and difficult terrain) and low levels of existing use of packaging materials. A subsequent amendment in 2005 allowed new member states additional time for implementation; as late as 2015 in the case of Latvia (European Parliament and Council, 2005). 71 European Commission, 2006a. 72 The American Clean Energy and Security Act of 2009 (ACES) in the 111th US Congress (H.R.2454), also known as the Waxman-Markey Bill after its authors, Representatives Henry A. Waxman (Democrat, California) and Edward J. Markey (Democrat, Massachusetts). The bill proposes a national cap-and-trade program for greenhouse gases to tackle climate change. It was approved by the House of Representatives on June 26, 2009, and has been placed on the Senate calendar. Meanwhile, the lack of federal regulation has created a policy vacuum that a number of states have filled. While ‘some significant legislation to reduce greenhouse gases was enacted during the late 1990s, such as Oregon’s pioneering 1997 law that established carbon dioxide standards for new electrical power plants . . . [state] efforts to contain involvement on climate change have been supplanted in more recent years with an unprecedented period of activity and innovation’. 73 By 2003, the US EPA had catalogued over 700 state policies to reduce greenhouse gas emissions. 74 A 2002 report noted that ‘new legislation and executive orders expressly intended to reduce greenhouse gases have been approved in approximately one-third of the states since January 2000, and many new legislative proposals are moving ahead in a large number of states’. 75 New Jersey and California were the first states to introduce initiatives that directly target climate change. In 1998, the Commissioner of New Jersey’s Department of Environmental Protection (DEP) issued an Administrative Order that established a goal for the state to reduce greenhouse gas emissions to 3.5 per cent below the 1990 level by 2005, making New Jersey the first state to establish a greenhouse gas reduction target. 76 The DEP has received signed covenants from corporations, universities, and government agencies across the state pledging to reduce their greenhouse gas emissions, though nearly all are unenforceable. In an unusual move, the state’s largest utility signed a covenant that includes a commitment to monetary penalties if it fails to attain its pledged reductions. Other states have employed air pollution control regulation and legislation to cap carbon dioxide emissions from large source emitters such as power plants. Massachusetts became the first state to impose a carbon dioxide emission cap on power plants when Governor Jane Swift established a multi-pollutant cap for six major facilities in 2001 that requires ‘each plant to achieve specified reduction levels for each of the pollutants, including a ten per cent reduction from 1997-1999 carbon dioxide levels by the middle-to-latter stages of the current decade’. 77 73 Rabe, 2002: 7. 74 US EPA, 2003c. 75 Rabe, 2002: 7. 76 New Jersey Department of Environmental Protection, 1999. The New Hampshire Clean Power Act of 2002 required the state’s three fossil-fuel power plants to reduce their carbon dioxide emissions to 1990 levels by the end of 2006. 78 Oregon created the first formal standard in the US for carbon dioxide releases from new electricity generating facilities by requiring new or expanded power plants to emit no more than 0.675 pounds of carbon dioxide per kilowatt-hour, a rate that was 17 per cent below that of the most efficient natural-gas-fired plant operating in the US at the time. 79 In 2001, all six New England states pledged to reduce their carbon dioxide emissions to 10 per cent below 1990 levels by 2020. 80 By 2007, this joint commitment evolved into a ten-state, mandatory cap-and-trade program called the Regional Greenhouse Gas Initiative (RGGI). 81 As of early 2010, the initiative only encompassed fossil-fuel-fired electric power plants operating in these states with capacity greater than 25 megawatts. 82 During the first two compliance periods (running from 2009 through 2014), the goal of RGGI is to stabilize carbon dioxide emission levels. After that, the emissions cap will be reduced by an additional 2.5 per cent each year through 2018. As a result, the emissions budget in 2018 will be 10 per cent below the starting budget in 2009. 83 Under the program, participating states conduct quarterly auctions to distribute allowances, which can then be traded in a secondary market. Recent auction clearing prices have generally remained under four dollars per (short) ton. 84 The prices of allowances exchanged in the secondary market were even lower. 85 Another regional market-based program, called the Western Climate Initiative (WCI), is under development. 77 Rabe, 2002: 16. 78 New Hampshire Department of Environmental Services, 2002. 79 Rabe, 2002. 80 New England Governors/Eastern Canadian Premiers, 2001. 81 The member states of RGGI are Connecticut, Delaware, Maine, Maryland, Massachusetts, New Hampshire, New Jersey, New York, Rhode Island, and Vermont. Pennsylvania is an observer. 82 RGGI, 2009a. 83 The initial regional emissions cap is set at 188 million short tons of carbon dioxide per year. This amount is about 4 per cent above annual average regional emissions measured during 2000-2004 (RGGI, 2007). 84 RGGI, 2009b. 85 RGGI, 2009c. This program targets the western states and provinces of the US and Canada. 86 The goal of WCI is a 15 per cent reduction in greenhouse gas emissions from 2005 levels by 2020. Similar to the RGGI, the WCI will be a cap-and-trade program and have three-year compliance periods. But unlike the RGGI, it will not be limited to carbon dioxide emissions or solely target the electric power sector. When fully implemented in 2015, the WCI is expected to cover nearly 90 per cent of greenhouse gas emissions in participating jurisdictions.
Also, WCI members are required to auction off only a portion of total allowances (10 per cent at the outset, increasing to at least 25 per cent by 2020) and may choose to allocate the remainder to participating installations free of charge. 87 A third regional program is under development, based on the Midwestern Greenhouse Gas Reduction Accord (Accord) 88 signed in November 2007 by the governors of several US Midwestern states 89 and the Canadian province of Manitoba. The Accord also calls for the creation of a cap-andtrade program similar to those of RGGI and the WCI, to be operational by 2012. Proposed design features mostly resemble the WCI (for instance, allocating allowances through a combination of auctions as well as free distribution, the inclusion of all greenhouse gases, and coverage of multiple industries). On the other hand, it has some specific features for the protection of industrial interests of the region, such as the exclusion of carbon dioxide emissions from burning of biofuels (like ethanol and biodiesel) from the program. If implemented, contingent on the possible development of a federal cap-and-trade program, the goal of the Accord is to achieve a 20 per cent reduction in greenhouse gas emissions from 2005 levels by 2020. 90 In addition to these three multi-state initiatives, several states have been pursuing indirect means to reduce greenhouse gas emissions. 91 For example, more than half the US states have enacted legislation that requires utilities to provide a certain percentage of electricity generated from 86 As of January 2010, members of WCI are the US states of Arizona, California, Montana, New Mexico, Oregon, Utah and Washington, and the Canadian provinces of British Columbia, Manitoba, Ontario, and Quebec. Several other Western states and the province of Alberta are observers. 87 WCI, 2009. 88 Midwestern Greenhouse Gas Reduction Accord, 2007. 89 These are Illinois, Iowa, Kansas, Michigan, Minnesota and Wisconsin. The observing states are Indiana, Ohio and South Dakota. 90 Midwestern Greenhouse Gas Reduction Accord, 2009. 91 Rabe, 2002. 26 renewable energy sources. 92 By early 2010, nearly 20 states had already implemented, or were currently implementing, mandatory greenhouse gas emissions reporting rules. 93 Such programs attempt to mimic the US EPA Toxic Release Inventory Program’s success in spurring voluntary emission reductions by requiring public reporting of toxic releases by power plants. In 2002, 11 state Attorneys General wrote an open letter to President George W. Bush calling for expanded national efforts to reduce greenhouse gas emissions 94 and indicated their commitment to intensify state efforts if the federal government failed to act. In 2002, California passed legislation requiring its California Air Resources Board to develop and adopt greenhouse gas emission-reduction regulations by 2005 for passenger vehicles and light duty trucks, starting with vehicles manufactured in the 2009 model year. This made California the first legislative body in the US to enact legislation aimed at curbing global warming emissions from vehicles. As The New York Times pointed out, ‘Though the law applies only to cars sold in California, it will force the manufacturers to develop fuel-efficient technologies that all cars can use. 
This ripple effect will be even greater if other states follow California’s lead, as the Clean Air Act allows them to do.’ 95 Indeed, bills have been introduced in almost twenty other state assemblies since then, calling for the adoption of California’s automotive greenhouse gas standard. A diverse group of states (14 in total, including Arizona, Oregon, New Mexico, New York, Pennsylvania, Massachusetts, Virginia, and Florida) ultimately passed legislation adopting the California standard. 96 During the Bush Administration, the marked divergence between state and federal policies in this area led to a flurry of lawsuits. Two of these are worth noting. The first was brought by automotive manufacturers against the state of California. Stating its intention to challenge California’s GHG standard in federal court, the president of the Alliance of Automobile Manufacturers argued that ‘[F]ederal law and common sense prohibit each state from developing its own fuel-economy standards’. 97 The suit, filed by auto manufacturers against the California Air Resources Board in 2004, was dismissed in 2007. 98 92 As of January 2010, 29 states and the District of Columbia have enacted laws imposing these “renewable portfolio standards” (Database of State Incentives for Renewables and Efficiency, 2010). 93 As of September 2009, the following states had already developed, or were in the process of developing, mandatory greenhouse gas reporting rules: California, Colorado, Connecticut, Delaware, Hawaii, Iowa, Maine, Maryland, Massachusetts, New Jersey, New Mexico, North Carolina, Oregon, Virginia, Washington, West Virginia, and Wisconsin (US EPA, 2009a). 94 The states are Alaska, New Jersey, New York, California, Maryland, and all six New England states (Sterngold, 2002). 95 The New York Times, 2002. 96 The complete list is as follows: Washington, Oregon, Arizona, New Mexico, Florida, Virginia, Maryland, Pennsylvania, New Jersey, New York, Connecticut, Rhode Island, Massachusetts, New Hampshire and Maine. In addition, as of January 2010, three other states have proposals to adopt the California standard: Montana, Utah and Colorado (Pew Center on Global Climate Change, 2010). The second suit was brought against the federal government by several states, mainly as a challenge to the EPA’s position that it lacked the authority to regulate carbon dioxide emissions under the Clean Air Act. In 2003, upon the EPA’s denial of a petition to regulate tailpipe emissions of greenhouse gases, several states filed a lawsuit against the federal government claiming that the EPA is required by the Clean Air Act to regulate carbon dioxide emissions as an air pollutant because these emissions contribute to global warming. 99 Initially the case was dismissed, but the petitioners, which included 12 states, several cities and US territories, as well as environmental groups, asked for a Supreme Court review. The resulting landmark case, Massachusetts v. EPA, was decided in favour of the petitioners in 2007. 100 In its decision, the Supreme Court found that “[b]ecause greenhouse gases fit well within the [Clean Air] Act’s capacious definition of ‘air pollutant,’ EPA has statutory authority to regulate emission of such gases from new motor vehicles.” 101 Two years later, the EPA officially acknowledged that it had both legal and scientific grounds to regulate greenhouse gas emissions. 102 On a parallel tack, California had requested a so-called ‘Clean Air Act waiver’ from the EPA in order to implement its 2002 statute. 103
After waiting for several years for a response from the EPA, California sued to compel the agency to make a decision. The EPA denied California’s waiver request in December 2007. However, the waiver denial elicited a second lawsuit by California in 2008, which was later joined by fifteen other states and five environmental organizations. Ultimately, the Obama Administration asked the EPA to review its decision, after which California was granted the waiver in June 2009. 104 97 Keating, 2002. 98 Pew Center on Global Climate Change, 2008. 99 Johnson, 2003. 100 Meltz, 2007. 101 Massachusetts v. E.P.A., 127 S.Ct. 1438 (2007), p. 4. 102 US EPA, 2009b. 103 According to the Clean Air Act, states have the right to implement stricter standards on air pollutants, but the EPA must grant them a waiver to do so. 104 US EPA, 2009c. The waiver decision has signalled a warming of relations between states and the federal government on the issue of climate change. In return for granting the waiver, the federal government secured the commitment of California, 105 along with that of a broad set of stakeholders including auto manufacturers, to adopt uniform federal vehicle fuel economy standards (known as Corporate Average Fuel Economy, or CAFE, standards) and to regulate greenhouse gas emissions from transport, an effort whose implementation the Obama Administration accelerated by executive order. An update to the CAFE standards, the first such proposal in several decades, was passed as part of the Energy Independence and Security Act of 2007, during the Bush Administration. However, implementation of the Act’s CAFE provision required a subsequent rulemaking by the US Department of Transportation (US DOT), which the Bush Administration never issued. In January 2009, the US DOT announced that it would defer any rulemaking on the new CAFE standards to the incoming administration. 106 That rulemaking was promptly issued in March 2009, though only for the model year 2011, since the Obama Administration ordered the US DOT to study the feasibility of even more stringent standards for later years. (Even the standards for model year 2011 are approximately one mile per gallon stricter than the recommendation of the previous administration.) 107 In September 2009, the US EPA and US DOT issued a draft joint rulemaking that proposed national standards to regulate vehicle fuel economy and, for the first time in US history, greenhouse gas emissions from transport (the National Program). 108 Under the original proposals of the Energy Independence and Security Act, the average nationwide fuel economy would have reached 35 miles per gallon by 2020, compared to about 25 miles per gallon in 2009. The National Program mandates a nationwide average of 35.5 miles per gallon by 2016, and once finalized, it would bring the rest of the country up to California’s current standards. Another draft rulemaking by the EPA, also issued in September 2009, would require any large stationary emitters of greenhouse gases, such as power plants and industrial facilities, whether new or undergoing modifications, to obtain operating permits from the agency. The rule would cover facilities with more than 25,000 tons of greenhouse gas emissions per year, and the permits would be issued based on a facility’s ability to utilize best practices to control such emissions. 109 105 US EPA, 2009d. 106 US DOT, 2009a. 107 US DOT, 2009b. 108 US EPA, 2009e.
This proposal has so far been interpreted as a strategic move by the Obama Administration to compel the Congress to pass more comprehensive legislation dealing with climate change. As of early 2010, the draft National Program rulemaking was in the process of being finalized. But it remained unclear whether the EPA would pursue the draft rulemaking on the permitting of large emitters, or defer to the Congress. Thus, in contrast to developments in the area of packaging waste, the lack of federal regulations for greenhouse gas emissions has become a political issue in the US. Clearly, the issue of climate change is much more politically salient in the US than is the issue of packaging waste. Thus, proposals to address the former but not the latter frequently come before Congress. Finally, while packaging waste can be seen as a problem which can be effectively addressed at the local or state level, global climate change clearly cannot. Even the regulatory efforts of the most ambitious states will have little impact on global climate change in the absence of federal regulations that impose limits on carbon dioxide emissions throughout the US.
European Union
By contrast, both the EU and individual EU member states have been active in developing policies to mitigate climate change. In the early 1990s, several countries (including Finland, the Netherlands, Sweden, Denmark, and Germany) had adopted or were about to adopt taxes on either carbon dioxide specifically or energy more generally. Concerned that such taxes would undermine the single market, the EU attempted to establish a European energy tax. 110 The EU’s 1992 proposal was for a combined tax on both carbon dioxide emissions and energy, with the goal of reducing overall EU emissions by the year 2000 to their 1990 levels. However, this proposal was vehemently opposed by the UK, which was against European-wide tax policies, and to a lesser extent by France, which wanted a tax on carbon dioxide only rather than the combined tax. 109 US EPA, 2009f. 110 Zito, 2000. By the end of 1994, the European Council abandoned its efforts and agreed to establish voluntary guidelines for countries that were interested in energy taxes. 111 In 1997, the Commission again proposed a directive to harmonise and, over time, increase taxes on energy within the EU; that proposal was finally approved in March 2003. It contained numerous loopholes for energy-intensive industry and transition periods for particular countries and economic sectors. 112 Thus, while the EU has had to retreat from its efforts to impose a carbon/energy tax, it has succeeded in establishing the political and legal basis to harmonise such taxes throughout the EU. In March 2002, the Council of Ministers unanimously adopted a legal instrument obliging each member state to ratify the Kyoto Protocol, which they have subsequently done. Under the terms of this treaty, overall EU emissions must be reduced by at least 8 per cent from their 1990 levels by 2008-2012. The so-called ‘EU bubble’ in Article 4 of the Kyoto Protocol allows countries to band together in voluntary associations to have their emissions considered collectively. However, even before Kyoto was formally ratified, the EU had begun efforts to implement its provisions. In June 1998, a Burden Sharing Agreement gave each member state an emissions target which collectively was intended to reach the 8 per cent reduction target.
In the spring of 2000, the EU officially launched the European Climate Change Program, which identified more than 40 emission-reduction measures. One of the fundamental emission reduction measures put forth by the EU has been emissions trading. The EU proposed a Directive for a system of emissions trading and harmonising domestic arrangements within the Community in 2001. 113 The Directive entered into force on October 25, 2003, creating the first international emissions trading system in the world, the EU Emissions Trading System (ETS). Under the Directive, governments are given the freedom to allocate permits as they see fit; the European Commission will not place limits on allowances, although the member states are 111 Collier, 1996. 112 Environment Daily, 1997, 2003b. 113 Smith and Chaumeil, 2002. 31 asked to keep the number of allowances low and in line with their Kyoto commitment. 114 The first trading (or compliance) period was 2005 through 2007. During the second compliance period, which runs from 2008 through 2012, the EU ETS will encompass as many as 10,000 industrial and energy installations, which are estimated to emit nearly half of Europe’s carbon dioxide emissions. 115 In 2007, the EU officially committed to reduce the Community’s aggregate greenhouse gas emissions by at least 20 per cent below the 1990 levels by the year 2020. Consistent with this commitment and in anticipation of a new international accord to succeed the Kyoto Protocol, the European Parliament amended the EU ETS directive in 2009. 116 This amendment puts forth some important changes to take effect in the third compliance period of the EU ETS, starting 2013. First, the majority of the emission allowances, which have so far been allocated by the member governments free-of-charge, would instead be sold via auction. Moreover, measures governing the EU ETS, including the determination of total allowances and the auction process, use of credits, and the monitoring, reporting and verification of emissions would be centralised under the Commission’s authority. The EU ETS is gradually being extended to include additional economic sectors. For example, emissions from international aviation will be subject to the EU ETS starting January 1, 2012. 117 As of early 2010, it was anticipated that international maritime emissions would be included next. 118 The efforts at the European level have been paralleled by a number of member-state policy initiatives. Among the earliest efforts was an initiative by Germany in which a government commission established the goal of reducing carbon dioxide emissions by 25 per cent by 2005 and 80 per cent by 2050, though these targets were subsequently relaxed owing to concerns about costs. Germany subsequently enacted taxes on energy, electricity, building standards, and emissions. The German federal government has negotiated voluntary agreements to reduce carbon dioxide emissions with virtually every industrial sector. From 2002 to 2006, the UK operated a voluntary greenhouse 114 Environment Daily, 2003c, 2003e. 115 European Commission, 2006b. 116 European Parliament and Council, 2009a. 117 Kanter, 2008. 118 Reuters, 2007 and UN Conference on Trade and Development, 2009. 32 gas-emissions trading scheme, involving nearly fifty industrial sectors, which served as a pilot for the current EU ETS. The British government simultaneously levied a tax on energy use (the so-called climate change levy) with reduced rates for firms and sectors that have met their emission-reduction targets. 
Like its German counterpart, the British government has officially endorsed very ambitious targets for the reduction of carbon dioxide emissions. This requires, among other policy changes, that a growing share of electricity be produced using renewable sources. While both Germany and the UK have reduced carbon dioxide emissions in the short run, their ability to meet the Kyoto targets to which they are now legally committed remains problematic. Other countries, such as France, Belgium, and the Netherlands, have established a complex range of policies, including financial incentives to purchase more fuel-efficient vehicles, investments in alternative energy, changes in transportation policies, voluntary agreements with industry, and the limited use of energy taxes. In 2002, Denmark approved legislation phasing out three industrial greenhouse gases controlled by Kyoto. In order to utilize demand-side management and energy efficiency measures for environmental protection, including greenhouse gas emissions reduction, the EU also issued a directive specifically addressing energy efficiency in 2006. 119 This directive calls for five-year action plans to be developed by the Commission towards achieving the EU’s goal of a 20 per cent reduction in consumption of primary energy by 2020, 120 and has established an indicative energy savings target of 9 per cent to be reached within nine years (i.e., 1 per cent annually), starting in 2008. The directive allows each member state to develop its own national action plan to achieve this target (or better). However, as this directive is not legally binding, participation and adherence by member states remain uneven. One of the novel energy savings mechanisms supported by the directive involves the use of tradable white certificates. This is a market-based mechanism whereby energy savings are certified and transformed into so-called tradable white certificates that can then be traded in a secondary market, similar to allowances in an emissions trading system. A few EU member states (such as France, Italy and the UK) have experimented with white certificate markets, but the voluntary nature of energy efficiency targets across the EU, the fragmented action plans of member states towards achieving energy savings, and challenges involving the market interactions between tradable white certificates, green certificates (or renewable energy certificates 121 ) and greenhouse gas allowances have so far limited market development. 122 119 European Parliament and Council, 2006. 120 Europa: Summaries of EU legislation, 2008, 2009. Another example of centralised EU regulation in climate change involves carbon dioxide emissions from passenger vehicles. Starting in 1999, the EU has required all new cars sold within the EU to display labels indicating their fuel efficiency and carbon dioxide emissions. Most recently, a regulation enacted in 2009 requires auto manufacturers to limit their fleet-wide average carbon dioxide emissions or pay an ‘emissions premium’ (penalty). 123 The emission limits and penalties will gradually be strengthened during the adjustment period of 2012 through 2018. In 2012, only 65 per cent of each manufacturer’s passenger car fleet will be required to meet the baseline of 130 grams of carbon dioxide per kilometre. By 2020, a manufacturer’s entire fleet must have average carbon dioxide emissions of 95 grams per kilometre or less.
The penalty will be incremental during the adjustment period, starting from €5 for the first gram per kilometre of emissions over the limit and rising up to €95 for additional grams per kilometre. By 2019, it will be fixed at €95 for each gram per kilometre.
ANALYSIS
The dynamics of the relationship between central and state authorities vary considerably across these six case studies. In three cases (automobile emissions in the EU and the US, and packaging waste policies in the EU), state governments have been an important source of policy innovation and diffusion. In these cases, state authorities were the first to regulate, and their regulations resulted in the adoption of more stringent regulatory standards by the central government. 121 Renewable energy certificates represent a similar concept to tradable white certificates and emissions allowances. In the case of renewable energy certificates, energy generated from approved renewable energy resources is certified and traded in a secondary market, and can be applied as offsets towards reducing the greenhouse gas emission burden of an installation. 122 Mundaca and Neij, 2007 and Labanca and Perrels, 2008. 123 European Parliament and Council, 2009. In the case of climate change policies, both EU and member state regulations have proceeded in tandem, with one reinforcing the other. In the two remaining cases (packaging waste and climate change in the US), American states have been a source of policy innovation, but not of significant policy diffusion. To date, state initiatives in these policy areas have not prompted an expansion of federal regulation, though some state regulations have diffused to other states. The earlier US pattern of automotive emissions standards, in which California and other states helped ratchet up federal standards, has so far not applied to either of these policy areas. However, over the years, the issue of climate change has become more politically significant than packaging waste, and the extended pressure by the states may generate some form of federal action on climate under the Obama Administration. Moreover, as climate change gains prominence as the broader environmental threat, automotive emissions are increasingly evaluated in the same context. As a result, this potential federal action on climate change may be two-pronged. As of early 2010, even stricter automobile fuel economy and emissions standards, proposed to be on par with those of California, were already on the drawing board. In fact, the associated draft rulemaking, which sets national standards for vehicle greenhouse gas emissions for the first time, was the result of an agreement between the federal government and California. This action on motor vehicle greenhouse gas emissions may then be followed by legislative or regulatory action directed at other sources of greenhouse gas emissions. 124 On the other hand, in Europe, relatively stringent state environmental standards continue to drive or parallel more closely the adoption of more stringent central standards. Thus, in the EU, the dynamics of the interaction between state and central authorities have become much more significant than in the US. Why has this occurred? Three factors are critical: two are structural and one is political.
First, in the EU, states play a direct role in the policy-making process through their representation in the 124 Legislative action could consist of the Congress passing a climate change bill that might call for a nationwide cap-and-trade scheme in greenhouse gases. Regulatory action could involve the US EPA issuing a rulemaking to establish carbon dioxide regulation, as mentioned earlier. The agency could perhaps even establish a cap-and-trade market similar to the existing markets for nitrogen oxides and sulphur dioxide. The regulatory path has the potential to be more contentious than the legislative path. 35 Council of Ministers, the EU’s primary legislative body. This provides state governments with an important vehicle to shape EU policies. In fact, many European environmental standards originate at the national level; they reflect the successful effort of a member state to convert its national standards into European ones. In the US, by contrast, state governments are not formally represented in the federal government. While representatives and senators may reflect the policy preferences of the states from which they are elected, the states themselves enjoy no formal representation, unlike in the EU where they are represented on the Council of Ministers. Consequently, for example, the senators and representatives from California enjoy less influence over US national environmental legislation than does Germany’s representative in the Council of Ministers. Second, the single market is more recent and more politically fragile in the EU than in the US. The federal government’s legal supremacy over interstate commerce dates from the adoption of the US constitution, while the EU’s constitutional authority and political commitment to create and maintain a single market is less than two decades old. Accordingly, the European central government appears more sensitive to the impact of divergent standards on its internal market than does the US central government. For example, the US federal government explicitly permits two different standards for automotive emissions, while the EU insists on a uniform one. Likewise, the US federal government appears relatively indifferent to the wide divergence in state packaging waste regulations; only state regulations restricting imports of hazardous wastes and garbage have been challenged by federal authorities. 125 By contrast, distinctive state packaging waste standards have been an important source of legal and political tension within the EU, prompting efforts to harmonise standards at the European level, as well as legal challenges to various state regulations by the Commission. There are numerous state standards for packaging waste in the US that would probably prompt a legal challenge by the Commission were they adopted by an EU member state. Significantly, the EU has established maximum state recovery and recycling goals, while the US central government has not. This means 125 Stone, 1990. 36 that when faced with divergent state standards, particularly with respect to products, the EU is likely to find itself under more pressure than the US central government to prevent them from interfering with the single market. Accordingly, they must be either challenged or harmonised. In principle, harmonisation need not result in more stringent standards. In fact, the EU’s Packaging Directive imposes both a ceiling and a floor. 
But for the most part, coalitions of the EU’s greener member states have been successful in pressuring the EU to adopt directives that generally strengthen European environmental standards. The political influence of these states has been further strengthened by the role of the European Commission, which has made an institutional and political commitment to improving European environmental quality; consequently, the Commission typically prefers to use its authority to force states to raise their standards rather than lower them. In addition, the increasingly influential role of the EP, in which green constituencies have been relatively strongly represented, has also contributed to strengthening EU environmental standards. The third factor is a political one. During the 1960s and 1970s, there was a strong political push in the US for federal environmental standards. According to environmentalists and their supporters, federal regulation was essential if the US was to make effective progress in improving environmental quality. And environmentalists were influential enough to secure the enactment of numerous federal standards, which were generally more stringent than those at the state level. Thus, the centre of gravity of US environmental regulation shifted to Washington. After the Republican Party’s capture of both chambers of Congress in 1994, followed by the two-term Republican presidency starting in 2000, relatively few more-stringent environmental standards were adopted. During this period, the national political strength of environmentalists and their supporters diminished. Nevertheless, environmentalists and their supporters continued to be relatively influential in a number of American states. In part, this outburst of state activity has been a response to their declining influence in Washington. By 2008, a major discontinuity had emerged between the environmental policies of many US states and those of the federal government. This has meant that, unlike in the 1960s and 1970s, more stringent state standards have had much less impact on the 37 strengthening federal standards. Indeed, in marked contrast to two decades ago, when the automobile emissions standards of California and other states led to the progressive strengthening of federal standards in this critical area of environmental policy, California’s recent policy efforts to regulate automobiles as part of a broader effort to reduce greenhouse gas emissions were initially challenged by the federal government on the grounds that they violated federal fuel-economy standards, an area of regulatory policy in which the federal government has exclusive authority but which it did not strengthen for more than two decades. The Obama Administration has sought to reinvigorate the federal government’s environmental policy role, most notably in the critical area of global climate change. It has also reduced some of the friction between states and the federal government in the critical area of greenhouse gas emissions from motor vehicles. In the EU, the political dynamics of environmental regulation differ markedly. The 1990s witnessed both the increased political influence of pro-environmental constituencies within the EU – by the end of that decade, green parties had entered the governments of five Western European nations – and a decline in the influence of green pressure groups in the US federal government. During this period, a number of EU environmental policies became more centralised and stringent than those of the US. 
126 Paradoxically, while the US federal government exercises far more extensive authority than the EU, in each of three cases we examined, EU environmental policy is now more centralised than that in the US. CONCLUSION The focal cases are summarised in Table 9.1. We conclude with general observations about the dynamics of environmental policy in the federal systems of the US and the EU. On one hand, the continued efforts of states in the US and member states of the EU to strengthen a broad range of environmental regulations suggest that fears of a regulatory race to the bottom may be misplaced. Clearly, concerns that strong regulations will make domestic producers vulnerable to competition 126 Vogel, 2003. 38 from producers in political jurisdictions with less stringent standards have not prevented many states on both sides of the Atlantic from enacting many relatively stringent and ambitious environmental standards. On the other hand, the impact of such state policies remains limited, in part because not all states choose to adopt or vigorously enforce relatively stringent standards. Thus, in the long run, there is no substitute for centralised standards; they represent the most important mechanism of policy diffusion. Table 9.1 Comparison of environmental regulations ____________________________________________________________________ Policy EU Status US Status area chronology chronology ____________________________________________________________________ Auto emissions State to central Centralised State to central Shared Packaging waste State to shared Contested State Uncontested Climate change Shared Uncontested State Contested ____________________________________________________________________ Accordingly, the most important role played by state standards is to prompt more stringent central ones. But unless this dynamic comes into play, the effectiveness of state environmental regulations will remain limited. In the areas of both global climate change and packaging waste, virtually all state regulations of the US are less stringent than those of the EU. It is not coincidental that the case we examined in which EU and US standards are the most comparable – and relatively stringent – is automobile emissions, in which the US central government plays a critical role. By contrast, the lack of central regulations for both packaging waste and climate change clearly reflects and reinforces the relative laxity of US regulations in these policy areas. The EU’s more centralised policies in both areas reflect the greater vigour of its recent environmental efforts. REFERENCES American Forest & Paper Association (2003), ‘State recycling goals and mandates’, http://www.afandpa.org/content/navigationmenu/environment_and_recycling/recycling/state_recycling_goals/state_recyc ling_goals.htm. Barr, S. (1998), ‘Clinton orders more recycling; Government agencies face tougher requirements on paper’, The Washington Post, September 16, A14. Bell, V. (1998), ‘President, Environmental Packaging International, environmental packaging compliance tips’, http://www.enviro-pac.com/pr02.htm, August. 39 Bennett, G. (ed.) (1991), Air Pollution Control in the European Community: Implementation of the EC Directives in the Twelve Member States, London: Graham and Trotman. Berland, R. (1992), ‘State and local attempts to restrict the importation of solid and hazardous waste: Overcoming the dormant commerce clause’, University of Kansas Law Review, 40(2), 465-497. Bramley, M. 
(2002), ‘A comparison of current government action on climate change in the U.S. and Canada’, Pembina Institute for Appropriate Development, http://www.pembina.org/publications_item.asp?id=129. Bryner, G. (1993), ‘Blue skies, green politics’, Washington, DC: Congressional Quarterly Press. California Air Resources Board (2000), ‘California’s diesel risk reduction program: Frequently asked questions (FAQ)’, http://www.arb.ca.gov/diesel/faq.htm. California Air Resources Board (2001), ‘Fact sheet: California’s Zero Emission Vehicle Program’, http://www.arb.ca.gov/msprog/zevprog/factsheets/evfacts.pdf. California Air Resources Board (2003), ‘Staff report: Initial statement of reasons, 2003; Proposed amendments to the California Zero Emission Vehicle Program regulations’, http://www.arb.ca.gov/regact/zev2003/isor.pdf. California Environmental Protection Agency (EPA) (2001), ‘History of the California Environmental Protection Agency’, http://www.calepa.ca.gov/about/history01/ arb.htm. California Integrated Waste Management Board (2003), ‘Buy recycled: Web resources’, http://www.ciwmb.ca.gov/buyrecycled/links.htm. Center for Responsive Law (2003), ‘Government purchasing project “State government environmentally preferable purchasing policies”’, http://www.gpp.org /epp_states.html. Chanin, R. (2003), ‘California’s authority to regulate mobile source greenhouse gas emissions’, New York University Annual Survey of American Law, 58, 699-754. Collier, U. (1996), ‘The European Union’s climate change policy: Limiting emissions or limiting powers?’, Journal of European Public Policy, 3(March), 122-138. Comer, C. (1995), ‘Federalism and environmental quality: A case study of packaging waste rules in the European Union’, Fordham Environmental Law Journal, 7, 163-211. Container Recycling Institute (2003), ‘The Bottle Bill Resource Guide’, http://www.bottlebill.org. Cotsworth, E. (2002), ‘Letter to Anna K. Maddela’, yosemite.epa.gov/osw /rcra.nsf/ea6e50dc6214725285256bf00063269d/290692727b7ebefb85256c6700700d50?opendocument. Dietly, K. (2001), Research on Container Deposits and Competing Recycling Programs, presentation to the Columbia, Missouri Beverage Container Deposit Ordinance Law Study Committee Meeting, 1 November. Database of State Incentives for Renewables and Efficiency (2010), ‘Summary Maps: Renewable Portfolio Standards’, http://www.dsireusa.org/documents/ summarymaps/RPS_map.ppt, accessed January 16, 2010. Environment Daily (1997), March 13. Environment Daily (2001a), March 29. Environment Daily (2001b), October 4. Environment Daily (2003a), February 28. Environment Daily (2003b), March 21. Environment Daily (2003c), April 2. Environment Daily (2003d), May 5. Environment Daily (2003e), July 2. Europa: Summaries of EU legislation (2008), ‘Action Plan for Energy Efficiency (2007-12)’, http://europa.eu/ legislation_summaries/energy/energy_efficiency/l27064_en.htm, accessed January 24, 2010. Europa: Summaries of EU legislation (2009), ‘Energy efficiency for the 2020’, http://europa.eu/legislation _summaries/energy/energy_efficiency/en0002_en.htm, accessed January 24, 2010. European Commission (2006a), ‘Report from the Commission to the Council and the European Parliament on the implementation of directive 94/62/ec on packaging and packaging waste and its impact on the environment, as well as on the functioning of the internal market’, http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri= COM:2006:0767:FIN:EN:HTML, accessed January 29, 2010. 
European Commission (2006b), ‘MEMO/06/452: Questions and Answers on Emissions Trading and National Allocation Plans for 2008 to 2012’, http://ec.europa.eu/environment/climat/pdf/m06_452_en.pdf, accessed January 25, 2010. European Commission (2009), ‘Reducing CO2 emissions from light-duty vehicles’, http://ec.europa.eu/environment/air/transport/co2/co2_home.htm, accessed January 18, 2010. European Parliament and Council (2004), ‘Directive 2004/12/EC of the European Parliament and of the Council of 11 February 2004 amending Directive 94/62/EC on packaging and packaging waste’, Official Journal L047, February 18, 2004, p. 0026-0032. European Parliament and Council (2005), ‘Directive 2005/20/EC of the European Parliament and of the Council of 9 March 2005 amending Directive 94/62/EC on packaging and packaging waste’, Official Journal L070, March 16, 2005, p. 0017- 0018. European Parliament and Council (2006), ‘Directive 2006/32/EC of the European Parliament and of the Council of 5 April 2006 on energy end-use efficiency and energy services and repealing Council Directive 93/76/EEC’, Official Journal L114, April 27, 2006, p. 0064-0084. European Parliament and Council (2009a), ‘Directive 2009/29/EC of 23 April 2009 amending Directive 2003/87/EC so as to improve and extend the greenhouse gas emission allowance trading scheme of the Community’, Official Journal L140, June 6, 2009, p. 0063-0087. 40 European Parliament and Council (2009b), ‘Regulation EC443/2009 of 23 April 2009, setting emission performance standards for new passenger cars as part of the Community’s integrated approach to reduce CO2 emissions from light-duty vehicles,’ Official Journal L140, June 6, 2009, p. 001-0015. Hakim, D. (2003a), ‘California regulators modify Auto Emissions Mandate’, The New York Times, April 25, A24. Hakim, D. (2003b), ‘Automakers drop suits on air rules’, The New York Times, August 12, A1, C3. Haverland, M. (1999), Regulation and Markets: National Autonomy, European Integration and the Politics of Packaging Waste, Amsterdam: Thela Thesis. Johnson, K. (2003), ‘3 States sue E.P.A. to regulate emissions of carbon dioxide’, The New York Times, June 5. Kanter, J. (2008), ‘Europe Forcing Airlines to Buy Emissions Permits’, The New York Times, October 24. Keating, G. (2002), ‘Californian governor signs landmark Auto Emissions Law’, Reuters, July 23, http://www.enn.com/news/wire-stories/2002/07/07232002/ s_47915. asp. Nicola Labanca, N. and Perrels, A. (2008), ‘Tradable White Certificates--a promising but tricky policy instrument (Editorial)’, Energy Efficiency, 1(November), p. 233-236. Lee, G. (1993), ‘Government purchasers told to seek recycled products; Clinton Executive Order revises standards for paper’, The Washington Post, October 21, A29. Massachusetts v. E.P.A., 127 S.Ct. 1438 (2007). The full text of the decision is available at http://www.supremecourtus.gov/opinions/06pdf/05-1120.pdf, accessed January 18, 2010. McCarthy, J. (1993), ‘Bottle Bills and curbside recycling: Are they compatible?’, Congressional Research Service (Report 93-114 ENR), http://www.ncseonline org/ nle/crsreports/pollution. McCormick, J. (2001), Environmental Policy in the European Union, New York: Palgrave. Meltz, R. (2007), ‘The Supreme Court’s Climate Change Decision: Massachusetts v. EPA’, May 18, Congressional Research Service Report RS22665, http://assets.opencrs.com/rpts/RS22665_20070518.pdf, accessed January 18, 2010. Midwestern Greenhouse Gas Reduction Accord (2007). 
The full text of the 2007 Accord is available at http://www.midwesternaccord.org/midwesterngreenhousegas reductionaccord.pdf, accessed January 25, 2010. Midwestern Greenhouse Gas Reduction Accord (2009), ‘Draft Final Recommendations of the Advisory Group’, http://www.midwesternaccord.org /GHG%20Draft%20Advisory%20Group%20Recommendations.pdf, accessed January 25, 2010. Mundaca, L. and Neij, L. (2007), ‘Package of policy recommendations for the assessment, implementation and operation of TWC schemes’, Euro White Cert Project Work Package 5, http://www.ewc.polimi.it/documents/Pack_Policy_ Recommendations.pdf, accessed January 18, 2010. New England Governors/Eastern Canadian Premiers (2001), ‘Climate Change Action Plan 2001’, http://www.massclimateaction.org/pdf/necanadaclimateplan.pdf. New Hampshire Department of Environmental Services (2002), ‘Overview of HB 284: The New Hampshire Clean Power Act, ground-breaking legislation to reduce multiple harmful pollutant from New Hampshire’s electric power plants’, http://www.des.state.nh.us/ard/cleanpoweract.htm. New Jersey Department of Environmental Protection (1999), ‘Sustainability Greenhouse Action Plan’, http://www.state.nj.us/dep/dsr/gcc/gcc.htm. New York Times, The (2002), ‘California’s message to George Pataki (Editorial)’, July 24, A18. OECD (2002), Sustainable Development: Indicators to Measure Decoupling of Environmental Pressure from Economic Growth, SG/SD(2002)1/FINAL, May, Paris: OECD. OECD (2004), Environmental Data Compendium: Selected Environmental Data, OECD EPR/Second Cycle, February 9, Paris: OECD. Oregon Department of Environmental Quality (2003), ‘Oregon Container Glass Recycling Profile’, http://www.deq.state.or.us/wmc/solwaste/glass.html. Parker, J. (2003), ‘California board’s boundaries debated: Automakers say it oversees emissions, not fuel economy’, Detroit Free Press, May 7. Percival, R., A. Miller, C, Schroeder, and J. Leape (1992), Environmental Regulation: Law, Science and Policy. Boston: Little, Brown & Co. Pew Center on Global Climate Change (2008), ‘Central Valley Chrysler-Jeep Inc. v. Goldstone’, http://www.pewclimate.org/judicial-analysis/CentralValleyChrysler Jeep-v-Goldstone, accessed January 28, 2010. Pew Center on Global Climate Change (2010), ‘Vehicle Greenhouse Gas Emissions Standards’, http://www.pewclimate.org/sites/default/modules/usmap/pdf.php? file=5905, accessed January 16, 2010. Plastic Shipping Container Institute (2003), ‘Wisconsin solid waste legislative update’, http://www.pscionline.org. Rabe, B. (2002), ‘Greenhouse & statehouse: The evolving state government role in climate change’, Pew Center on Global Climate Change, http://www.pewclimate.org/global-warming-in-depth/all_reports/greenhouse_and_statehouse_/. Rehbinder, E. and R. Stewart (1985), Integration Through Law: Europe and American Federal Experience, vol. 2: Environmental Protection Policy, New York: Walter de Gruyter. Reuters (2007), ‘EU confirms to propose ships join emissions trade’, April 16. Revesz, R. (2001), ‘Federalism and environmental regulation: A public choice analysis’, Harvard Law Review, 115, 553- 641. RGGI (2007), ‘Overview of RGGI CO2 Budget Trading Program’, http://rggi.org/docs/program_summary_10_07.pdf, accessed January 24, 2010. RGGI (2009a), ‘RGGI Fact Sheet’, http://www.rggi.org/docs/RGGI_Executive%20 Summary_4.22.09.pdf, accessed January 25, 2010. 41 RGGI (2009b), Auction Results, http://www.rggi.org/co2-auctions/results, accessed January 25, 2010. 
RGGI (2009c), RGGI CO2 Allowance Tracking System (COATS): Public Reports: Transaction price reports for January 1, 2009 through December 31, 2009, https://rggi-coats.org/eats/rggi/index.cfm?fuseaction=reportsv2.price_rpt& clearfuseattribs=true, accessed January 25, 2010. Smith, M. and T. Chaumeil (2002), ‘Greenhouse gas emissions trading within the European Union: An overview of the proposed European Directive’, Fordham Environmental Law Journal, 13(Spring), 207-225. Sterngold, J. (2002), ‘State officials ask Bush to act on global warming’, The New York Times, July 17, A12. Stone, J. (1990), ‘Supremacy and commerce clause: Issues regarding state hazardous waste import bans’, Columbia Journal of Environmental Law, 15(1), 1-30. Thorman, J., L. Nelson, D. Starkey, and D. Lovell (1996), ‘Packaging and waste management; National Conference of state legislators’, http://www.ncsl.org/ programs/esnr/rp-pack.htm. UN Conference on Trade and Development (2009), ‘Maritime Transport and the Climate Change Challenge: Summary of Proceedings’, Multi-Year Expert Meeting on Transport and Trade Facilitation, February 16-18, Geneva. US Department of Transportation (2009a), ‘Statement from the Department of Transportation’, January 7, 2009, http://www.dot.gov/affairs/dot0109.htm. US Department of Transportation (2009b), ‘Average Fuel Economy Standards, Passenger Cars and Light Trucks, Model Year 2011’, March 9, 2009. US EPA (1998), ‘Control of Air Pollution from New Motor Vehicles and New Motor Vehicle Engines: Finding of National Low Emission Vehicle Program in Effect’, March 2, 63 Federal Register 926. US EPA (1999), ‘California State Motor Vehicle Pollution Control Standards; Waiver of federal preemption’, http://www.epa.gov/otaq/regs/ld-hwy/evap/waivevap.pdf. US EPA (2000), ‘Control of Air Pollution from New Motor Vehicles: Tier 2 Motor Vehicle Emission Standards and Gasoline Sulfur Control Requirements; Final Rule’, February 10, 65 Federal Register 6697. US EPA (2001), ‘Control of Air Pollution from New Motor Vehicles: Heavy-Duty Engine and Vehicle Standards and Highway Diesel Fuel Sulfur Control Requirements’, January 18, 66 Federal Register 5001. US EPA (2003a), ‘Federal and California Exhaust and Evaporative Emission Standards for Light-Duty Vehicles and LightDuty Trucks’, Report EPA420-B-00-001, http://www.epa.gov/otaq/stds-ld.htm. US EPA (2003b), ‘Municipal Solid Waste (MSW): Basic facts’, http://www.epa.gov/apeoswer/non-hw/muncpl/facts.htm. US EPA (2003c), ‘Global warming: State actions list’, yosemite.epa. gov/oar/globalwarming.nsf/content/actionsstate.html. US EPA (2006), ‘What Are the Six Common Air Pollutants?’, http://www.epa.gov/air/urbanair/, accessed February 5, 2010. US EPA (2009a), ‘Regulatory Impact Analysis for the Mandatory Reporting of Greenhouse Gas Emissions Final Rule (GHG Reporting): Final Report’, September 2009, http://www.epa.gov/climatechange/emissions/downloads09/GHG_RIA.pdf, accessed January 28, 2010. US EPA (2009b), ‘Endangerment and Cause or Contribute Findings for Greenhouse Gases under the Clean Air Act’, http://www.epa.gov/climatechange/ endangerment.html, accessed January 18, 2010. US EPA (2009c), ‘California Greenhouse Gas Waiver Request’, http://www.epa.gov/oms/climate/ca-waiver.htm, accessed January 18, 2010. US EPA (2009d), ‘Commitment Letters: California Governor Schwarzenegger’, http://www.epa.gov/otaq/climate/regulations/calif-gov.pdf, accessed January 18, 2010. 
US EPA (2009e), ‘EPA and NHTSA Propose Historic National Program to Reduce Greenhouse Gases and Improve Fuel Economy for Cars and Trucks’, http://epa.gov/otaq/climate/regulations/420f09047a.htm, accessed January 18, 2010. US EPA (2009f), ‘Prevention of Significant Deterioration and Title V Greenhouse Gas Tailoring Rule,’ http://www.epa.gov/NSR/fs20090930action.html, accessed February 5, 2010. Vogel, D. (1995), Trading Up; Consumer and Environmental Regulation in a Global Economy, Cambridge: Harvard University Press. Vogel, D. (2003), ‘The hare and the tortoise revisited: The new politics of consumer and environmental regulation in Europe’, British Journal of Political Science, 33(4), 557-580. WCI (2009), ‘The WCI Cap & Trade Program’, at http://www.western climateinitiative.org/the-wci-cap-and-trade-program, and ‘The WCI Cap & Trade Program: Frequently Asked Questions’, http://www.westernclimateinitiative.org /the-wcicap-and-trade-program/faq, both last accessed January 25, 2010. Yost, P. (2002), ‘Bush administration is against California’s Zero Emissions Requirement for Cars’, Environmental News Network, http://www.enn.com/news/ wire-stories/2002/10/10102002/ap_48664.asp. Zito, A. (2000), Creating Environmental Policy in the European Union, New York: St. Martin’s Press.Failing to learn and learning to fail (intelligently): How great organizations put failure to work to improve and innovat
|
CD ROM Annuaire d'Entreprises France prospect (avec ou sans emails) : REMISE DE 10 % Avec le code réduction AUDEN872
10% de réduction sur vos envois d'emailing --> CLIQUEZ ICI Retour à l'accueil, cliquez ici HARVARD AND CHINA
|
CD ROM Annuaire d'Entreprises France prospect (avec ou sans emails) : REMISE DE 10 % Avec le code réduction AUDEN872
10% de réduction sur vos envois d'emailing --> CLIQUEZ ICI Retour à l'accueil, cliquez ici HARVARD AND CHI N A A R E S E A R C H S Y M P O S I U M M A R C H 2 0 1 0 E X E C U T I V E S U M M A R I E S O F S E L E C T E D S E S S I O N SH A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 2 C O P Y R I G H T © 2 0 1 0 P R E S I D E N T & F E L LOWS O F H A R VA R D C O L L E G E SESSIONS WELCOME AND OPENING PLENARY PAGE 3 THE CHINESE CENTURY? PAGE 6 CHINA—DYNAMIC , IMPORTANT AND DIFFERENT PAGE 9 THE MORAL LIMITS OF MARKETS PAGE 12 WHO CARES ABOUT CHINESE CULTURE? PAGE 15 MANAGING CRISES IN CHINA PAGE 18 CHINA’S NEWEST REVOLUTION: HEALTH FOR ALL? PAGE 21 INNOVATIONS CHANGING THE WORLD: NEW TECHNOLOGIES, HARVARD, AND CHINA PAGE 25 CLOSING REMARKS (F. WARREN MCFARLAN) PAGE 28 CLOSING REMARKS (DREW GILPIN FAUST) PAGE 31H A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 3 OVERVIEW The world is going through the second great wave of globalization. Globalization isn’t just economic; education is also globalizing.Amid this globalization wave, the engagement of China and America is critical as the economies of these two countries will shape the world economy. It is important for Harvard University and Harvard Business School to be part of the engagement between China and America.The creation of the Harvard Center Shanghai represents a next stage of Harvard’s engagement in Asia. It is another step in the journey of becoming a truly global university. CONTEXT Dean Light and Professor Palepu reflected on the role that globalization plays in education, the journey to create the Harvard Center Shanghai, and the mutual benefits of deepened engagement in China. SPEAKERS Jay O. Light George F. Baker Professor of Administration and Dean, Harvard Business School Krishna G. Palepu Ross GrahamWalker Professor of Business Administration and Senior Associate Dean for International Development, Harvard Business School WELCOME AND OPENING PLENARYHarvard Business School’s process of globalizing has many important elements. These elements include having a global: • Student body. Twenty years ago, Harvard Business School had a relatively small number of international students and few Chinese students.Today, HBS has quite a few Chinese students and the student body is highly international. • Faculty.Today HBS’s faculty comes from across the world, including a half dozen faculty who understand Mandarin, several of whom also can teach in Mandarin.The faculty also includes Bill Kirby, one of theWest’s foremost China historians, who splits his time between HBS and Harvard College. • Curriculum. HBS’s curriculum and cases have become global relatively quickly.There are now courses on doing business in China, immersion programs—including programs in China and elsewhere in Asia—and many other global components in the curriculum. • Alumni group. As HBS students are increasingly international, so too are the school’s alumni. In Shanghai, there is an increasingly active alumni organization. In addition to HBS’s global focus, Harvard University also has adopted a more global perspective.The university is seeking to leverage the work and interest of the entire Harvard community in the global arena. For example, Shanghai is a hotbed for undergraduate internships. 
One seemingly simple change that will allow students from across Harvard to engage in international opportunities is Harvard’s decision to move to an integrated school-wide calendar.This common calendar will allow coordination of programs across different schools and will make it easier for students to engage in these coordinated global programs. While these elements are important for Harvard to become a truly international university, it also became apparent that being part of the engagement between China and America required that Harvard have a greater presence in China. So, in the last two years, the decision was made to pursue a footprint in China, specifically in Shanghai. Shanghai is the right city and this footprint is in the right place—a central location in Shanghai, on top of two key subway lines. It is important for Harvard to be part of the globalization of the economy and education. Harvard and China have a long-shared history. During the first great wave of globalization around 100 years ago, education also was being globalized.There were students at Harvard College from Shanghai as well as other locations in China.The first classes at Harvard Business School included students from Shanghai. Also, Harvard Medical School was active in Shanghai. Then the world changed. FollowingWorldWar I,the Great Depression, and World War II, the previous wave of globalization gave way to very local political and economic attitudes. Economically and educationally, China and America were not linked. Now, we find ourselves in the second great wave of globalization, which has been building over the past two decades.Today,the world economy and education are being globalized in unprecedented ways. China is now the world’s second-largest economy; the future of the global economy depends on the ability of China and America to engage with each other in a constructive, integrative way. In the long term, engagement between China and America is critical.Recognizing this,it became clear that Harvard should be part of this engagement. “I believe the Harvard Business School and Harvard University must be part of that engagement and must be an important part of understanding how the world economy, the Chinese economy, and the American economy are evolving, and how we can engage with each other.” — Jay O. Light H A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 4 WELCOME AND OPENING PLENARY KEY TAKEAWAYS“We could enable [Chinese CEOs in an executive program] to experience Harvard Business School without having to go to Boston, and that’s a real landmark.” — Krishna G. Palepu Historically, great universities have been located in countries with great economies. The stellar universities of Britain, Germany, and America all rose as their societies rose. By taking a global perspective and by opening research and education centers around the globe, particularly in Shanghai, Harvard Business School and Harvard University are seeking to become the first school to maintain its prominent stature as economic forces shift around the globe. The opening of the Harvard Center Shanghai demonstrates the continuing commitment to becoming a truly global university. As a scholar who studies multinationals in emerging markets, Professor Palepu knows how hard it is for organizations to make the commitments that are necessary to transform themselves into global enterprises.The opening of the Harvard Center Shanghai demonstrates such a commitment by Harvard. It marks a continuing evolution in HBS’s global journey. 
Beginning around 1995, HBS began opening global research centers around the world.The first of these research centers opened in Hong Kong and the school now has six centers, which have contributed significantly to the school’s curriculum. About five years ago, a faculty committee chaired by Professor Palepu recommended expanding and converting these research centers into research and education centers. The rationale was that, in HBS’s view, there isn’t a distinction between research and education, and the uniqueness of HBS is that synergy between research and education. But part of this evolution requires physical infrastructure where classes can be taught.The infrastructure in Shanghai is the type of educational infrastructure that is needed. H A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 5 WELCOME AND OPENING PLENARY KEY TAKEAWAYSH A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 6 OVERVIEW Many people say the 21 st century will be the “Chinese Century.” However, similar statements made a century ago didn’t come to fruition.Yet for those who have spent time in the country, it is hard to doubt that China will play a critical world role in the next 100 years. China is rapidly moving forward in pursuing unfulfilled dreams in areas of infrastructure, entrepreneurship, and education. Still, as central of a role as China will play, this century won’t belong exclusively to China.This will be a century for all in the world who share common aspirations and who work and learn together to solve common problems. CONTEXT Professor Kirby shared his thoughts on whether this will be China’s century. He looked back at the past century and examined the key factors propelling China forward. SPEAKER William C. Kirby Spangler Family Professor of Business Administration, Harvard Business School;T.M. Chang Professor of China Studies, Faculty of Arts and Sciences; Chairman, Harvard China Fund; Director, Fairbank Center for Chinese Studies THE CHINESE CENTURY?China’s rise in the 21 st century is based on its recovery in the 20 th century. Some people claim the 21 st century will be the “Chinese Century,” which is hard to question. But viewing this as China’s century doesn’t come at the exclusion of other countries; it comes as part of a global community. In large measure, China’s success in the 21 st century is based on its recovery in the 20 th century and its pursuit of longstanding, unfulfilled dreams. “If China is in some measure to define the 21 st century, it is because of its recovery and rise in the 20 th .” — William C. Kirby The longstanding dreams China is working to fulfill are: • An infrastructure dream. China is built on a long tradition of infrastructure. In his book The International Development of China, published in 1922, Sun Yat-sen envisioned a modern China with 100,000 miles of highway and a gorgeous dam. He foresaw a “technocracy,” which has been translated in Chinese as “the dictatorship of the engineers” (an apt definition of China’s government today).This infrastructure dream is becoming a reality as China builds highways, airports,telecommunication systems, and a dam that couldn’t be built anywhere else except in China. • A private enterprise dream.While the government is building the infrastructure,the private sector is building a rapidly growing middle class and a consumer economy.This economy includes proliferating retail stores and new Chinese brands (many of which are targeted to “Mr. and Mrs. China”). 
No one knows how large the middle class is, with the best guess in the 200–250 million range. There is a group in China termed the “urban middle class.”These individuals are 20–50 years old; 80% own their own home, and most don’t have a mortgage; 23% have more than one property.About one-third have a car; they love to travel; and they are beginning to buy stocks. (However, the gap between this new urban middle class and the rural—a gap that has always existed—is growing fast.) One hundred years ago, China seemed on the verge of the “Chinese Century,” but it didn’t come to fruition. In the early 1900s, many experts thought China was on the verge of the “Chinese Century.” A host of books proclaimed China’s awakening.This view was based on: • A revolution in business. China was experiencing its first golden age of capitalism. China had a sizeable middle class and the glamorous city of Shanghai—not Tokyo or Hong Kong—was the international center of East Asian commerce. It was also a golden age for entrepreneurship. • The formation of Asia’s first republic. About 100 years ago, under Sun Yat-sen, China engaged in a grand historical experiment in forming Asia’s first republic. • A revolution in education. In the first half of the 20 th century, China developed one of the strongest higher education systems in the world. Based on the political climate, the business environment, and the educational system, it was an optimistic time in China. But China’s politics took a decidedly military turn with a series of leaders cut from the same cloth—Yuan Shikai, Chiang Kai-shek, Zhu De, and Mao Zedong.This military turn set China back, but it also provided the foundation for China’s global strength; China could not be defeated by Japan inWorldWar II and could not be intimidated by the Soviet Union. Ultimately, China’s first golden age was undone by the Japanese invasion, the Communist rebellion, and above all, the ruinous policies of the first 30 years of the People’s Republic. China’s entrepreneurs were forced underground and overseas and China’s progressive universities were swept away. “At a time when the rest of East Asia prospered, China went backward.” — William C. Kirby H A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 7 THE CHINESE CENTURY? KEY TAKEAWAYSOther Important Points • Three Shanghais.Within the borders of Shanghai are three different Shanghais: 1) the old walled city of Shanghai, which housed some 400,000 Chinese when Westerners first settled in Shanghai in the 1840s; 2) the Bund, which became a major financial hub; and 3) the new Shanghai, which is the Shanghai of the future. • China’s constitution. In the 1910s, Chinese PresidentYuan Shikai asked Harvard’s President Eliot to recommend an advisor to help draft a new constitution for China. Eliot recommended Frank Goodnow,the leading political scientist of the day. Goodnow drafted two constitutions:the first made Shikai president for life and the second would have made him emperor, had he not died first. • 180 degrees. About 100 years ago, America sold textiles and clothes to China and Americans bought Chinese railway bonds, which were viewed as good investments but turned out to be worthless.Today, Americans buy their textiles and clothes from China and China buys American treasury bonds, which hopefully fare better than the Chinese railway bonds. 
The changes in consumption in the huge new middle class are changing entire industries, such as agriculture.There are major changes in how food is grown, distributed, and sold—without using more land.This includes the dairy industry and the growing Chinese wine industry. • An education dream. No story is more central to China’s future than education. (Chinese families will delay any purchase in order to fund education.) China is rapidly building massive, modern university campuses, such as Chongqing University.These universities will be a welcome challenge to American universities and other leading global schools. “It is this area [education] that I think will clearly determine whether or not this will be China’s century.” — William C. Kirby Harvard shares China’s dream of training and educating future global leaders.This is seen through the fact that each of Harvard’s schools has important relationships in China. Harvard and China share common educational challenges.Among them are to: – Not simply train, but educate the whole person. – Educate a person not simply as a citizen of a country, but as a citizen of the world. – Measure and value not only research, but teaching and inspiration. – Extend the promise of higher education beyond the upper and middle classes. – Determine the proper level of governance and autonomy so universities can serve a broad public purpose. H A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 8 THE CHINESE CENTURY? KEY TAKEAWAYSH A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 9 OVERVIEW The Harvard Center Shanghai’s state-of-the-art bilingual facility expands access to the HBS experience to non-English-speaking executives in China. Featuring high-tech equipment and world-class interpreters, the facility disintegrates language barriers for an uncompromised HBS classroom experience.The case method, the fast-paced exchanges, and the cold calls are all there. CONTEXT Professor McFarlan shared his experiences spearheading HBS’s executive education ventures in China and described the state-of-the-art, HBS-style, bilingual presentation space at the new Harvard Center Shanghai facility. Participants then experienced the facility for themselves as Professor Li Jin led discussion of an actual HBS case. SPEAKERS F.Warren McFarlan Albert H. Gordon Professor of Business Administration, Emeritus, Harvard Business School Li Jin Associate Professor of Business Administration, Harvard Business School CHINA—DYNAMIC , IMPORTANT AND DIFFERENT• An hour’s class requires a team of three translators, each working 20-minute stints. • The room has double-sized blackboards: half for Chinese, half for English. “By the time you’re 15 minutes into it, you literally forget that you’re not in an English-speaking classroom.” — F. Warren McFarlan Despite all the high-tech equipment, people are the critical link. While high-tech equipment makes the facility translationcapable, it is the people—faculty and translators—who are most critical to delivering an uncompromised HBS educational experience. A bilingual presentation is quite labor-intensive behind the scenes: • Two professors are necessary for blackboard notes in both languages; they need to confer in advance to coordinate plans. • Slides must be translated in advance. Getting translations done in time requires coordination. • During class, professors must become skilled at realizing who is speaking by the red lights since there is no voice change for translated material. 
Complicating this a bit is a 15-second lag time before the translation arrives. • No less than expert translating skills are a must. “The critical link lies behind the glass walls; you must have world-class interpretation simultaneously.” — F. Warren McFarlan The bilingual facility dramatically expands access to the HBS educational experience. Chinese executives who would not have been able to experience HBS are now able to do so, thanks to the presentation space at the Harvard Center Shanghai. Its capabilities were demonstrated by a recent program at the Center. It consisted of 66 CEOs, 65 of whom didn’t speak English. Without this facility,these individuals would not have been able to participate in this HBS program. Since 2001, HBS and its Chinese business school partners have provided bilingual executive education in China. Harvard Business School has offered executive education programs in China in partnership with leading Chinese business schools since 2001. Professor McFarlan spearheaded the first co-branded program with Tsinghua University (at the request of HBS graduate and former U.S. Treasury Secretary Henry Paulson when he was CEO of Goldman Sachs). Two-thirds of the instructors in this seminal program were HBS faculty, one-third wereTsinghua professors trained in HBS methods.The program was bilingual from day one, with classes conducted in both Chinese and English (realtime translation of classroom exchanges was transmitted by earphones) and HBS case studies focused on Chinese companies and are available in both languages. Harvard’s bilingual classroom disintegrates language barriers to deliver an uncompromised classroom experience. Creators of the HBS/Tsinghua program knew that only real-time translation would allow the fast-paced, interactive experience of an HBS classroom to be replicated in a bilingual setting. “Sequential translation wouldn’t work,” said Professor McFarlan.“The pace of the class would slip; you’d lose 50%.” In the Harvard Center Shanghai’s state-ofthe-art bilingual facility, content lost in translation is no greater than 5%–10%. The presentation space looks much like its classroom counterpart in Boston, with some critical differences: • At each seat are headphones with settings for English and Chinese. Professors who aren’t bilingual wear earphones as well. • Students desiring (or called upon) to speak push a button, which flashes a red light, telling translators at the back of the room whom to tune into. • Teams of expert linguists deliver immediate translations of the exchanges to listeners’ earphones. H A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 1 0 CHINA—DYNAMIC, IMPORTANT AND DIFFERENT KEY TAKEAWAYSWith the need to bridge language barriers in education and business only rising in our complex, globalized world, facilities with built-in translation capability are the wave of the future. Despite their high price tag ($3 million), many more are bound to be built. “There isn’t another classroom in China that is like this.” — F. Warren McFarlan Case Discussion Professor Li Jin’s class discussion featured a 2007 HBS case that was previously used in the course Doing Business in China and is now taught to all first-year HBS students in the required Finance course.The case is about three competitors in China’s new media advertising market. It focuses on the decisions that altered their market positioning and led to their ultimate consolidation. 
The case described unpredictable actions and unforeseeable events that highlighted the different ways that CEOs in China might think about their companies (e.g.,like legacies to be built and nurtured, or as pigs to be fattened and sold). A CEO’s mindset might be based on whether the CEO was an entrepreneur/founder or a professional manager brought in to run a company. The case also demonstrated how unpredictable events in the quickly evolving Chinese market can open windows of opportunity that are soon slammed permanently shut. Those who act quickly, anticipate the future moves of others, and view situations in nontraditional ways can be rewarded, while those who sit tight will lose ground. H A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 1 1 CHINA—DYNAMIC, IMPORTANT AND DIFFERENT KEY TAKEAWAYSH A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 1 2 OVERVIEW Without realizing it, societies around the world have drifted from market economies into market societies. Marketbased thinking has permeated all aspects of society, affecting societal norms in areas of life not traditionally influenced by markets. The problem:When a society decides that a market is acceptable in a particular area—that certain goods/services may be bought and sold—it is deciding that the goods/services can be valued as commodities. But some aspects of life are damaged, degraded, or corrupted if they are commoditized. Missing in today’s market societies is attention to the moral limits of markets. Societies need to decide which social norms are worth preserving and should not be part of a market. CONTEXT Professor Sandel described the growing role that markets play and asserted that markets need to have moral limits. SPEAKER Michael Sandel AnneT. and Robert M. Bass Professor of Government, Faculty of Arts and Sciences THE MORAL LIMITS OF MARKETS• Social services: For-profit schools, hospitals, and prisons are proliferating as market-based approaches come to these areas. A trend in education is paying children to read. Concierge medical services in the United States and scalping of doctor appointments in China create markets for access to medical services. In 2000, India legalized commercial surrogacy and a market for low-cost, outsourced providers is developing. • The environment: The idea of tradable pollution permits and carbon offsets creates markets for polluting. • Immigration: Proposals have been made to make a market for immigration by selling the right to immigrate to America for perhaps $50,000 or $100,000.Another idea is a market for refugees. Countries would each have a quota, which they could sell or trade. Markets such as these will inevitably affect social norms, often in unexpected ways. For example, if children are paid to read, will they become conditioned to only read when paid and not read for the intrinsic value of reading? Or, if polluters can simply trade pollution permits, does that make pollution acceptable and fail to motivate behavior change? “Pure free-market economists assume that markets do not taint the goods they regulate.This is untrue. Markets leave their mark on social norms.” — Michael Sandel Society must ask, “What should be the moral limits of markets?” The examples of market-based approaches are unsettling. Even if the parties involved in a market-based transaction consent (which is not always the case; in some instances they are coerced), these market-based ideas are distasteful. 
Most people find the idea of a refugee market distasteful, even if it helps refugees. A market for refugees changes a society’s view of who the displaced are and how they should be treated. It encourages market participants to think of refugees as a product, a commodity. The role of markets has grown in our lives. The world has become infatuated with markets. In recent decades, societies around the world have embraced market thinking, market institutions, market reasoning, and market values.The focus on markets is based on the abundance created by markets. The fact is that markets are powerful mechanisms for organizing product activity and they create efficiency. Often overlooked is the fact that markets can affect society’s norms. The application of market thinking to non-market areas of life assumes that markets are merely mechanisms, innocent instruments.This is untrue.Markets touch—and can sometimes taint—the goods and social practices they govern. An example comes from a study dealing with childcare centers.To solve the problem of parents coming late to pick up their children, centers imposed a fine for late pickups. The social norm had been that late parents felt shame for inconveniencing the teachers to stay late.When this norm was replaced with a monetary penalty, a market for late pickups was created—and late pickups increased. Parents now considered a late pickup as an acceptable service for which they could simply choose to pay.The presence of the market changed the societal norm. “The market is an instrument, but it is not an innocent one.What begins as a market mechanism can become a market norm.” — Michael Sandel Market-based thinking and approaches have the potential to affect social norms in many areas where norms were traditionally non-market areas of life.These include: • The human body: Black markets exist for organ sales. Some marketers are now paying individuals for tattooing the company’s logo on their bodies. Infertile American couples are outsourcing pregnancy to low-priced surrogates in India. H A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 1 3 THE MORAL LIMITS OF MARKETS KEY TAKEAWAYSOther Important Points • Collaborative education. At www.justiceharvard.org, anyone can attend Professor Sandel’s popular Justice class.This virtual classroom features videos of lectures including student exchanges, the reading list, discussion guides, and a discussion blog. The site had more than one million viewers in its first few months.Translated versions appear on Chinese websites (which is fine with Professor Sandel if they are accurate). Experiments in virtual classrooms offer opportunities for collaboration between Harvard and university partners in China. Live video-linked classrooms would create a “global public square” permitting discussions in real time. Such discussions would illuminate East/West similarities and differences, leading to more nuanced understanding of both cultures. It is often assumed that the two cultures’ conceptions of justice, liberty, and rights are fixed, but the reality is more complex. Rich historical traditions contain multiple voices and contrary viewpoints within them. A virtual classroom enabling interaction between students in China and America would enable fascinating comparisons of ethical and philosophical thinking within cultures as well as between them. • Learning and teaching. 
China has long been a“learning civilization”—evolving through engaging with other civilizations and cultures—while America has been a “teaching” (code for “preaching”) civilization.America could benefit from incorporating China’s learning mindset. When a society embraces a market approach and decides that certain goods may be bought and sold, it is deciding that those goods can be valued as commodities. “Some of the good things in life are damaged or degraded or corrupted if they are turned into commodities.” — Michael Sandel Thus, deciding to create a market and to value a good— whether that is health, education, immigration, or the environment—is not merely an economic question. It is also a political and a moral question. Societies must confront markets’ moral limits. Societies, however, often fail to grapple with such moral questions.This causes market economies to drift imperceptibly into market societies, without it having ever been decided that they do so. “Because we haven’t had that debate about the moral limits of markets, we have drifted from having a market economy to being a market society.” — Michael Sandel The world’s market societies need to recognize the moral limits of markets and to define societal norms worth preserving. Case by case,the moral meaning of goods and services must be figured out and the proper way of valuing them decided. Ideally, this should happen collectively, via public debate. Much thought needs to go into how to keep markets in their proper place. “Only if we recognize the moral limits of markets and figure out how to keep markets in their place can we hope to preserve the moral and civic goods that markets do not honor.” — Michael Sandel H A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 1 4 THE MORAL LIMITS OF MARKETS KEY TAKEAWAYSH A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 1 5 OVERVIEW Currently, there is tremendous interest in China and Chinese culture.As China grows in wealth and influence,those who do business and study in China want to learn and understand the culture.The reality, however,is that China does not have a singular culture that can easily be understood. China doesn’t have “a culture”; it has “culture.” Elements of China’s culture include its history, poetry, literature, art, food, and contemporary culture, including movies, television, fashion, and books. It also has cultured people who are educated and worldly. Those who believe the various aspects of China’s culture are all based on history are misinformed. All aspects of China’s culture and its societal practices (including business practices) are incredibly dynamic and constantly changing. CONTEXT The speakers discussed why it is so difficult to try to define Chinese culture and offered perspectives on China’s cultural history and modern cultural practices. SPEAKERS Peter K. Bol Charles H. Carswell Professor of East Asian Languages and Civilizations, Faculty of Arts and Sciences, Director of the Center for Geographic Analysis, Institute for Quantitative Social Science Xiaofei Tian Professor of Chinese Literature, Faculty of Arts and Sciences WHO CARES ABOUT CHINESE CULTURE?In the sixteenth century, sea transportation brought with it the opportunity for the exchange of ideas between Europeans and Chinese, creating links between the East and West that continue today.The Chinese civil service exam, for example, became the basis for the British civil service exam, eventually serving as a model throughout theWestern world. 
Since the late nineteenth century, China has been actively absorbingWestern influences. It is worthwhile to note that a value not considered native Chinese at the time it is introduced eventually may became part of the criteria that is used to describe what is Chinese today. Globalization creates the need to maintain a sense of native identity. As the forces of globalization grow, there is a strong impulse in China to maintain a sense of local and native identity. Chinese citizens are brought together by a real sense of belief that they share a common identity and culture. But there is some danger in this way of thinking. By basing this sense of national identity on perceptions about the country’s cultural past, the Chinese are relinquishing their claim to the present and the future. If all modern culture is bound to what is considered foreign and everything native belongs to the ancient past, Chinese cultural tradition loses the very elements that make it dynamic. China can no longer afford to be self absorbed and must allow the knowledge of world cultures to become part of Chinese culture. “A point of danger is that this way of thinking leads to the ossification of the cultural past so the vibrant, dynamic, complex,cultural tradition of China is reduced to a one-dimensional monolithic entity.” — Xiaofei Tian Chinese culture is not easily defined. Wide diversity and lack of a central, contemporary Chinese “culturescape” make defining Chinese culture in a singular way difficult. Chinese culture is a mixture of many elements, both native and foreign, that are constantly evolving. From an ideological standpoint, Confucianism is considered by many as the core of Chinese culture, yet this is a flawed premise.Although Confucianism is definitely a part of China, it is only one part of a much larger picture. It also could be argued that Chinese culture is embodied by its traditional poetry and the aesthetic experience it elicits.Yet, this notion of Chinese culture is incompatible with the dogma that exists in the Confucian Classics. Aspects of culture in China can be found by studying China’s history, literature, religion, food, and popular culture, including movies, television, books, and fashion. But as the diversity of each of these areas demonstrates, there is a huge variety, constant change, and no singular definition of culture in China. “There is no China culture; there’s culture in China.” — Peter Bol (corroborated by Xiaofei Tian) There is a difference between culture and a cultured person, whether Chinese or American.The values that a society’s culture promotes do not necessarily reflect the values that a cultured person holds, such as being educated and worldly. For a cultured person, culture matters, and debates over the hopes and best ideas for society are linked to actual practices and how people live. Chinese culture is a dynamic, continually evolving tradition. The Chinese cultural tradition is vibrant, dynamic, complex, and ever changing. In the fourth, fifth, and sixth centuries, the translation of the Buddhist text from Sanskrit into Chinese led to an incredible cultural transformation in China. H A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 1 6 WHO CARES ABOUT CHINESE CULTURE? KEY TAKEAWAYSOther Important Points • University o erings. Elite Chinese universities have begun offering liberal arts education programs, allowing students to take courses across departments. • American managers. 
Few American managers speak Chinese, and most are ignorant of Chinese history and practices.

• A negotiating culture. China has much more of a negotiation culture than the United States, where people are more accepting of rules and authority.

Schools of higher education must educate students about the history and tradition of different cultures. Many of today's college students are products of diverse transnational backgrounds; they are multilingual and have a global perspective. In addition, the new professional managerial class conducts business on a global basis.

"This new global elite needs a new forum of linguistic and symbolic capital that is transnational, so world languages, world literatures, and world cultures must be offered at higher education institutions." — Xiaofei Tian

To fit with this reality, schools of higher education must offer courses that teach the comparative history and tradition of different cultures, giving students the opportunity to study, examine, and interpret different cultures in the new global context.

"The challenge as China grows in wealth and power is to make the next generation of cultured students aware that China's cultural heritage is part of humanity's cultural heritage." — Peter Bol

Attendees commented that they understand the difficulty of defining "the culture of China." However, as individuals and companies doing business in China, they still expressed a desire to better understand the country. The speakers distinguished between "common practices" and a deep societal culture. With effort, it is possible to gain some degree of understanding of common practices. However, as with culture, practices are constantly changing. Learning about the country and its practices can be facilitated by learning the language, learning about the country's history, and reading the country's literature.

OVERVIEW

Crises often highlight shortcomings in governments' ability to safeguard people from harm and to contain fallout from unforeseen scenarios. Retrospective analysis provides rich learning opportunities for addressing shortcomings and preventing or mitigating similar damage in the future. Governments have as much to learn from other nations' crisis experiences as they do from their own.

CONTEXT

Professor Ferrell discussed implications of the financial crisis for regulatory policy decisions facing governments. Professor Howitt discussed lessons in crisis management from recent disasters in the United States and China.

SPEAKERS

Allen Ferrell, Harvey Greenfield Professor of Securities Law, Harvard Law School
Arnold Howitt, Adjunct Lecturer in Public Policy and Executive Director of the Roy and Lila Ash Institute for Democratic Governance and Innovation, Harvard Kennedy School
Michael B. McElroy, Gilbert Butler Professor of Environmental Studies, Harvard School of Engineering and Applied Sciences

MANAGING CRISES IN CHINA

• Having the right capital requirements. Reforms in capital requirements might include mechanisms allowing institutions to draw down capital during a crisis versus having to raise it mid-crisis.

• Having resolution mechanisms that address moral hazards. Needed are mechanisms to wind down financially insolvent institutions that ensure creditors experience losses, so there is an incentive to avoid undue risk in the future.
• Having regulators trained in both economics and law.The SEC has expertise in law but lacks expertise in economics;the Federal Reserve is strong in economics but lacks deep expertise in regulation. Both are needed. • Minimizing the role of credit rating agencies in bringing complex products to market. U.S. securities law enshrined the positions of the incumbent ratings agencies, forcing investment banks to use the agencies to rate complex structured products that the agencies lacked expertise to understand.These regulations should be repealed. “I would highly encourage China and other countries to avoid the U.S. regulatory treatment of credit rating agencies.” — Allen Ferrell • The systemic significance of non-depository-taking institutions, such as Fannie Mae and Freddie Mac, Bear Stearns, and Lehman. • The instability of the repo market as a financing source. The crisis has taught much about how reliance on the repo market (i.e., overnight lending) affects leverage in the system—both degree of leverage and how it interacts with capital. Key Takeaways (Disasters) Recent disasters in the United States and China highlight both nations’ shortcomings in crisis management. This century, both the United States and China have been affected by traumatic events. The United States lived through the 9/11 terrorist attacks and anthrax scares as well as Hurricane Katrina; China had the SARS epidemic, the Wenchuan earthquake, and the blizzards of 2008. Key Takeaways (Financial Crisis) U.S. regulators’ focus is misplaced: The financial crisis was about standard banking activities; not proprietary trading. Looking at the composition of the U.S. banking sector’s losses and write-downs stemming from the financial crisis is instructive, holding lessons for regulatory policy. The breakdown: • More than half (55%) of losses came from traditional lending activities: 34% from direct real estate lending and 20% or so from other kinds of direct lending. • About 31% of losses resulted from banks’ exposures to securitized products (not from securitization processes per se). From a regulatory standpoint, a bank’s exposure to its own products is a good thing, giving it “skin in the game.” • Losses from proprietary trading were relatively trivial at only 2%. • A similarly small portion of crisis-related losses came from banks’ private equity activities (about 1%). Despite the focus in the United States on proprietary trading as an area in need of reform (e.g.,theVolcker proposal), the financial crisis had little to do with proprietary trading. The vast majority of banking losses (85%) reflected positions that soured for various reasons in the standard bank activities of lending and securitization. “The moral of this story is that the losses were driven by the traditional activities of the banks . . . which is potentially relevant to thinking about Asian regulation.” — Allen Ferrell With Asian banks heavily involved in traditional banking, the crisis holds regulatory lessons relevant for them. The Asian financial sector is heavily involved in direct lending, less so in securitization at this time. (Hopefully, given the importance of securitization for funding, that will change.) Given this business mix, the U.S. financial crisis holds relevance for Asian financial sector regulation going forward. 
Some lessons include the importance of: H A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 1 9 MANAGING CRISES IN CHINA KEY TAKEAWAYS• Awareness of the critical interdependency between local and national capacities. More than 90% of people rescued from theWenchuan earthquake rubble were saved by family or friends; not by the central government’s late-arriving responders. Neither national nor local governments can manage crises on their own. Needed are management systems capable of rapid but decentralized support and connections between national and local capacity. • Stronger local capacities. Localities need to improve their capability to handle as much of a disaster’s effects as possible, since outside aid is often slow to arrive. Once it does, local and national responders need to work closely together. • Faster national capacities. Central governments should focus on accelerating their responses and improving their ability to operate in a decentralized fashion. “[We need to] think about the roles of local government and remote aid to prepare management systems capable of a rapid but decentralized surge of support.” — Arnold Howitt Other Important Points • Shadowy bailout motivation.Transparent counterparty data is lacking to assess the systemic risk had AIG failed. Goldman Sachs says it didn’t have significant counterparty exposure, having hedged its AIG positions; whether that was the case for other counterparties is unclear.The Inspector General’s bailout report suggests a rationale was protecting AIG shareholders—less appropriate a motivation than mitigating systemic risks. • Short shrift for recovery preparation.There are three kinds of disaster preparation: 1) prevention/mitigation (e.g., building codes); 2) emergency response; and 3) recovery. Preparing for recovery is often overlooked. As a result, money is thrown at recovery immediately after an event, and often wasted at great social cost. The two governments’ responses highlight shortcomings in crisis management, including the ability to prepare for emergencies,manage events during crises, and recover from them. China and the United States have structural similarities that make their problems of disaster management similar, including: 1) large and diverse land areas; 2) multilayered governments; and 3) high regional variation in emergency response capabilities. These factors contribute to the chaos in disaster situations. Local resources are often overwhelmed.The arrival of national resources on the scene is delayed by travel time; once there, outside personnel lack local awareness, slowing rescue efforts. Agencies not accustomed to interacting don’t know how to collaborate and cooperate. Lack of coordination causes inefficiencies; confusion reigns; the delays carry a social cost. Crisis management systems should reflect local/ national interdependencies and be capable of rapid, decentralized support. Governments face diverse crisis threats: natural disasters, infrastructure or technology system failures, infectious diseases, purposeful harm. Preparing for emergency response is difficult for governments; crisis management is unlike governments’ typical activities.The work is crucial, involving urgent responses to high-stakes situations that come without warning in unknown locations. Quick and effective action is needed; responders can’t afford the time to learn as they go along. Emergency preparation requires tough tradeoffs between financial cost and resource effectiveness. 
Capacity must be kept in reserve so it can be utilized effectively with little notice; yet governments don't want to spend a lot on expensive resources to prepare for contingencies that might not occur. The ability to get resources to distant disasters as quickly as needed might be sacrificed for reasons of cost.

Effective emergency preparedness requires:

• Crisis management systems that facilitate collaboration. Organizational and communication systems should be in place before a disaster strikes, should facilitate collaboration and cooperation among agencies, and should have flexible processes to allow for improvisation.

OVERVIEW

Corresponding with China's economic growth has been an amazing increase in life expectancy and a significant improvement in the public health care system, with childhood vaccinations providing just one important example. However, China still faces enormous health care–related challenges. There are huge disparities in access to care and the quality of care received; the current payment system is largely out-of-pocket, and many people can't afford care; and chronic diseases and mental health issues are on the rise. The Chinese government, well aware of the situation and issues, is undertaking the largest, most ambitious health care reform program in the world. The goals of this program include providing basic health insurance coverage for at least 90% of the population by 2011 and establishing universal access to health care by 2020. Through both long-term research projects and numerous collaborative programs, Harvard has played and continues to play an important role in helping to shape China's health care policies and practices.

CONTEXT

The panelists reviewed linkages between Harvard and China's health care sector and discussed the monumental transformation taking place in China, both in health care and in society.

SPEAKERS

Barry R. Bloom, Harvard University Distinguished Service Professor and Joan L. and Julius H. Jacobson Professor of Public Health, Harvard School of Public Health
Arthur M. Kleinman, Esther and Sidney Rabb Professor of Anthropology, Faculty of Arts and Sciences; Professor of Medical Anthropology and Professor of Psychiatry, Harvard Medical School; Victor and William Fung Director, Harvard University Asia Center
Yuanli Liu, Senior Lecturer on International Health, Harvard School of Public Health

CHINA'S NEWEST REVOLUTION: HEALTH FOR ALL?

Since 1949, China has made tremendous progress in improving the health of its citizens, but huge challenges remain. Prior to 1949 there was essentially no functioning health care system in China. There were widespread famine, epidemic disease, infanticide, and other catastrophic tragedies. Approximately 20 million Chinese were killed in the war with Japan between 1937 and 1945, and 200 million Chinese were displaced due to World War II and the country's civil war. While the first part of the 20th century saw dramatic improvement in life expectancy in much of the world, in China it went from 25 years in 1900 to just 28 years in 1949. During this time there also were enormous disparities between rich and poor, and between urban and rural. (While disparities exist today, they pale in comparison to the disparities prior to 1949.) But beginning with China's liberation in 1949, health became a national priority. Dr.
Bloom recounted a conversation with Dr. Ma, a Western physician who played a huge role in organizing public health in China.When asked how such a poor country could make health such a priority, Dr. Ma said,“I thought we fought the Revolution to serve the people.” In public health terms, serving the people means: 1) keeping people healthy and preventing disease; for example, through clean water and vaccinations; 2) providing access to affordable, high-quality health care; and 3) providing health security and equitable distribution of health services. Between 1949 and 2007, life expectancy increased from 28 years to almost 73 years.This is based on an increased standard of living, increased urbanization, and development of a public health system that focused on key basics such as childhood immunizations. China immunized hundreds of millions of children, which kept them from dying under the age of five. Harvard and China have a long, rich history of working together in the health care arena. In the aftermath of SARS, which wasn’t handled well by China, researchers at the Harvard School of Public Health did epidemiologic modeling that showed how to stop the epidemic.After presenting the findings to top people in the Ministry of Health, including China’s Minister of Health, Harvard was asked to help develop a program to avoid the outbreak of a catastrophic infectious disease.This program has involved providing high-level executive training to more than 300 leaders in China’s Central and 31 Provincial Ministries of Health.The program recently has been reoriented, with significant input from many in the Harvard medical community to provide training on managing hospitals. This post-SARS program actually built on significant linkages between Harvard and China.A 30-year research study of the respiratory function of Chinese workers in textile mills and a 20-year study on how to provide health insurance for people in rural China have had a huge influence on policy. The School of Public Health has intensive programs where students look at some aspect of the medical system and write papers about their observations, which have received much interest by China’s Ministry of Health. In addition, Harvard has held two forums involving multiple Harvard faculty members on subjects of interest to Chinese leaders, such as poverty alleviation. Dr. Kleinman, who heads Harvard’s Asia Center, said that across Harvard there are more than 50 faculty members who work principally on China, and the projects involving China at Harvard Medical School and other areas throughout Harvard are too numerous to count. “The engagement with China across our university is profound and incredibly broad.” — Arthur M. Kleinman H A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 2 2 CHINA’S NEWEST REVOLUTION: HEALTH FOR ALL? KEY TAKEAWAYSChina is embarking on the most ambitious health care reform in the world. The Chinese government is aware of the health care challenges the country faces and has undertaken a remarkable health care reform process.This began in 2005 with the passage of a rural health improvement plan.That concentrated the country’s focus on improving the health care system and the health of the people in China. 
“This is the first time since the founding of the People’s Republic that China has begun developing a long-term strategic plan for its health sector.” — Yuanli Liu The Harmonious Society Program followed.This program set up 14 ministries and a slew of think tanks to make recommendations on health care reform. In an extraordinary act for China, a draft of the reform plan was posted on the Internet for one month and there were more than 30,000 responses.The government listened and responded by making 190 changes.The result is a serious action plan and a significant investment to address some of China’s long-term health care challenges. “The most radical, extensive,far-reaching plan for health reform of any country in the world has been committed to by the government of China . . . it is, I think, the most exciting development in health reform anywhere in the world.” — Barry R. Bloom This plan, which was announced in April 2009, has a goal of providing basic health insurance for at least 90% of the population by 2011 and establishing universal access to health care by 2020. The focus on health in China is part of the reassessment of culture, values, and norms taking place in China. In the era of Maoism, when China’s public health system was being built, the state regarded the individual as owing his or her life to the state and the party. In the current period of China’s economic reforms, there has been a shift. Now the view is that the state owes the individual a good life, or at least a chance at a good life. While tremendous progress has been made, significant challenges still remain.These include: • The system of paying for care and the cost of care. Currently, 60% of health care in China is paid for by individuals on an out-of-pocket basis.This is the least efficient,most expensive way to pay for care, and for many people makes health care unattainable. The largest complaint of the Chinese population is that they cannot afford health care, and many people forego being admitted to the hospital because they are unable to pay. Also, the cost of health care is actually the cause for about 15% of all bankruptcies in China. • Incentives. The current payment system involves government price setting for many services, such as hospitalization fees.The result is that health care providers overuse and overcharge in other areas, like drugs and tests. Drugs represent 45% of health care spending in China, compared to about 10% in the United States. (These drugs, which are often of questionable quality, are in many instances sold by doctors where they represent a significant source of revenue—and a major conflict of interest. A prime example is saline injections, which many patients now expect and demand, even though they have no medicinal value.) • Disparities.There remain significant disparities in the access to and quality of care between rich and poor, and urban and rural.The gaps are large and are increasing. • Infectious diseases. About half of all Hepatitis B cases are found in China, as are about one-third of TB cases.The mobility of the population makes it easier than ever to spread diseases, as seen through the HIV-AIDS epidemic and the spread of H1N1. • Chronic diseases. As China’s economy has developed, a consequence has been increased rates of chronic diseases, which are responsible for more than 80% of all deaths. The increase in chronic disease—including diabetes and cardiovascular diseases—is related to people living longer, high pollution, and behaviors such as smoking. 
• Mental health issues. As China has become more prosperous, there have been increases in all categories of mental disorders, anxiety disorders, depression, suicide, substance abuse, and STD rates. H A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 2 3 CHINA’S NEWEST REVOLUTION: HEALTH FOR ALL? KEY TAKEAWAYSAlong with this shift in the roles of the state and the individual, individual attitudes, behaviors, and morals have changed.There is a rise in materialism and cynicism, and a breakdown in Confucian values.There is a rise in nationalism, deepening corruption, an almost caste-like distinction between rural and urban, a distrust of physicians, institutions, and agencies, and a concern with public ethics. There also is a high divorce rate, a high suicide rate, and a sexual revolution is underway. A boom in self-help books and in psychotherapy also is taking place. It is in this environment that health care reform is happening.The process of reforming health care is about more than just health care; it is part of a society undergoing transformation. People are thinking of themselves and their lives differently and have different expectations of the government. Other Important Points • One child. The changes going on in China include a reassessment of the country’s one-child strategy. • Health data. In previous years, the quality of health data in China was questionable, but new data systems have been put in place and significantly improved the data being collected. • Qualified health minister. China’s current health minister is an internationally regarded physician who doesn’t seem to be very political.This reflects a trend of filling key positions with technically competent people. H A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 2 4 CHINA’S NEWEST REVOLUTION: HEALTH FOR ALL? KEY TAKEAWAYSH A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 2 5 OVERVIEW Technological innovations are changing and will continue to change every aspect of how we live, work, and learn. They are changing how people communicate and how we spend our time. Among the most exciting innovations are those in areas of mobility, cloud computing, social networking, platforms,location-based services, and visual search. Increasingly, China is playing a key role in today’s technology innovations. CONTEXT ProfessorYoffie discussed the innovations in technology that are having a huge impact on how we do things, including in education. SPEAKER David B.Yoffie Max and Doris Starr Professor of International Business Administration; Senior Associate Dean and Chair of Executive Education, Harvard Business School INNOVATIONS CHANGING THE WORLD: NEW TECHNOLOGIES, HARVARD, AND CHINAThe iPad will ultimately be a highly disruptive device with the potential to change how media are disseminated and consumed; this includes potentially changing how textbooks are delivered.These and other emerging technologies will impact how students study and how professors do research.The traditional ways of disseminating knowledge through books and articles will need to evolve. Cloud computing is changing how information and applications are stored and delivered. Through the remote delivery of computing power, storage, and applications, cloud computing is quickly changing how information is delivered. From a corporate standpoint,the economics of cloud computing are remarkable. 
Information delivered through huge data centers built by companies such as Amazon and Google cuts costs by a factor of seven.This fundamentally alters the IT cost equation for all companies, regardless of size.Applications that have historically been hosted on inhouse servers—from customer relationship management (CRM) to enterprise resource planning (ERP)—are now moving to outsourced cloud-hosted servers and data centers. A leading example is Salesforce.com. “The economics of cloud computing are extraordinarily compelling . . . no matter what size company you are, can you imagine the possibility of cutting your [IT] cost by a factor of seven?” — David B. Yoffie On the consumer side, cloud computing is and will be everywhere: in music, video, applications, and photos. It is likely that within 18 months, instead of our personal computers storing our music, our libraries will be moved to the cloud. User concerns about security are the largest drawback to cloud computing.This is a critical issue that needs to be addressed on an ongoing basis. Innovations occur when platforms are developed on which applications reside. In addition to changing how data is delivered, cloud computing also is becoming a “platform.”This means it is the basis for providing a set of applications that deliver ongoing value. HBS is creating the future by leveraging the Harvard Center Shanghai facility and emerging technologies. Harvard University and Harvard Business School have an explicit strategy of becoming truly global institutions. Establishing the Harvard Center Shanghai facility builds on Harvard’s long-standing involvement in Asia. It creates an opportunity for deeper engagement and collaboration with the country that is the fastest- growing producer of technology in the world. HBS views this as an opportunity to accelerate innovation in management,technology, and collaboration on the technological shifts that are changing the way we work, study, and socially interact. Powerful mobile computing is changing how people use technology. The massive shift of Internet use to handheld devices is fundamentally changing technology and the way it is used. The shift away from PC-centric computing to handheld computing is made possible by Moore’s law, which holds that chip processing power will double roughly every 18- 24 months, and the costs will be halved. (The law has held since Gordon Moore conceived of it in 1964. Today an Intel chip the size of a fingernail has 2.9 billion transistors and does a teraflop of processing per second.) “This creates the opportunity to put a supercomputer into your hand.” — David B. Yoffie This geometric increase in processing power has led to the development of powerful handheld devices. For example, the 2009 iPhone has identical technical specifications to the iMac, the most powerful desktop computer in 2001.Today, handheld devices allow us to do things on a mobile basis that we previously couldn’t do. Beyond just phones are other types of mobile devices. eReader devices such as Amazon’s Kindle and Apple’s iPad are creating a rapidly growing eBook market. Now available in a hundred countries, eBooks grew 100% in 2009 alone. At Amazon, for books available in electronic form, 50% of the books’ sales are in eBook form.This past Christmas, the company sold more eBooks than hard copy books. H A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 2 6 INNOVATIONS CHANGING THE WORLD: NEW TECHNOLOGIES, HARVARD, AND CHINA KEY TAKEAWAYS• Location-based services. 
These services, such as Yelp and Urbanspoon, identify your location and offer information about local restaurants, hotels, and other services. An application called Foursquare allows a person to see where his or her friends are. Location-based services also can provide navigation and will ultimately deliver advertising on a location basis.

• Visual search. An example of visual search is a new phone-based application offered by Google called Google Goggles. It uses pictures to search the web and immediately provide information. For example, if you take a picture of a restaurant, it will give you reviews of the restaurant before you walk in. Visual search has the potential to significantly impact how students learn and interact with their professors, challenging traditional methods of engagement.

Other Important Points

• Predicting the future. It is impossible to predict the future. Experts in 1960 offered numerous predictions about life in 2000 that failed to come to fruition. One prediction some experts got right was the linking of computers (essentially the Internet). One prediction that fell well short was an estimate of 200,000 computers in the United States; the actual number is around 300 million.

• Internet traffic. Cisco projects that Internet traffic will grow 66 times by 2013, a compounded annual growth rate of 130%.

• Generational Internet use. In the United States, the portion of senior citizens who use email (91%) is comparable to that of baby boomers (90%), though 70% of boomers shop online versus just 56% of seniors.

• Texting volume. The average U.S. teenager sends almost 2,300 text messages per month. In China, 13 billion texts were sent during the week of the Chinese New Year.

• People will pay. Some people have the perception that everything on the Internet is free, but that is not the case. The success of iTunes, where songs are sold for $0.99, shows that people will pay when something is priced correctly.

The iPhone is a platform. There are now 140,000 applications for the iPhone, which have been downloaded more than 3 billion times; 1 billion downloads were made in the fourth quarter of 2009 alone. Facebook is a platform for which 350,000 applications have been written and downloaded half a billion times. In addition, people are looking at the following as potential platforms:

• Cars. Ford plans to incorporate iPhone applications in its next generation of vehicles.

• Television. TV will be a huge platform of the future, serving as a basis for social media, social interaction, and social networks.

• Cities. New York City has decided to become a platform. The city held a competition, inviting the public to develop applications using raw municipal data. One of the winners created an application that allows you to hold up your phone; it automatically figures out where you are and gives you directions to the next subway stop.

"Learning how to play with all these platforms may be absolutely critical to the long-run success of any company, because these platforms are becoming ubiquitous. It's a new way of thinking about the interaction between a supplier and a customer." — David B. Yoffie

Social networks are altering social patterns and how people spend their time. Social networks have global reach, with more than 830 million users. Facebook (the dominant player outside of China) and YouTube have replaced old Internet companies such as Yahoo and Microsoft. Facebook users spend 90 billion minutes per month on the site. In China, Tencent has been a successful social networking company.
Future innovations are being shaped by the integration of mobility, social networking, and cloud computing. Among the many future innovations that are coming, two types of innovations stand out: H A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 2 7 INNOVATIONS CHANGING THE WORLD: NEW TECHNOLOGIES, HARVARD, AND CHINA KEY TAKEAWAYSH A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 2 8 OVERVIEW The links between Harvard and Shanghai go way back and have dramatically accelerated in the past decade—even in just the past four years—through a series of programs conducted in partnership with Chinese universities. HBS has written dozens of new cases, hired new Mandarin-speaking faculty, and added new courses, all to address the tremendous interest in China. HBS’s focus on China is not just because of China’s huge population, but because of the enormous opportunities in the country in industries such as software development. China is no longer just a manufacturing center. It has a highly literate, educated workforce and is in the process of climbing the IT value chain.While little known in theWest, China is giving birth to a new generation of formidable technology companies. CONTEXT Professor McFarlan discussed what HBS is doing in China and reflected on why it is so important for HBS to have a significant presence in the country. SPEAKER F.Warren McFarlan Albert H. Gordon Professor of Business Administration, Emeritus, Harvard Business School CLOSING REMARKS• Tsinghua. HBS has a six-week program with Tsinghua University and China Europe International Business School (CEIBS).This program consists of two weeks in Tsinghua, two weeks at CEIBS, and two weeks in Boston. Another program between HBS and Tsinghua, focused on private equity and venture capital, is about to be launched. • CEIBS. In addition to the program with Tsinghua, CEIBS and HBS have a program for CEOs of companies ranging from $500 million to a few billion dollars. Almost none of these CEOs speak English, yet they are being exposed to HBS cases. • Beijing University. HBS is partnering with Beijing University on two programs: Driving Corporate Performance and Designing and Executing Strategy. • Fudan University. HBS has partnered with Fudan University on three programs: growing professional service firms, creating value through service excellence, and strategy and profitable growth. “None of this existed 10 years ago and almost none of it existed four years ago.” — F. Warren McFarlan Continuing China’s economic growth requires moving up the IT value chain. China’s economic growth over the past 30 years will be extremely difficult to replicate. Increasing per- capita GNP requires different strategies. In particular, it requires increasing productivity by leveraging IT. But leveraging IT— by climbing the IT value chain—doesn’t mean just purchasing hardware and software. Leveraging IT to increase productivity is about services, operating differently, and engaging in change management. China is where the United States was 30 years ago, and they don’t realize how difficult it is to climb the IT value chain.Yet, this is where the key to continued economic growth resides. Harvard Business School’s efforts to re-engage in China began in earnest in the late 1990s. The history of Harvard Business School in Shanghai reaches back to HBS’s second MBA class, which had two individuals from Shanghai. By the mid-1920s,the first Harvard Business School Club of Shanghai was formed, which lasted until 1944. 
Following a 30-year disruption due to political factors, conversations about re-engaging with China began again in 1978 when four HBS faculty members, including Professor McFarlan,traveled to China.While the interest in China was high, no specific plans took place. Then, in 1997, recognition that HBS was underinvested in Asia led to the decision to establish a research center in Hong Kong.This Center has produced cases and done extensive research. At about the same time that HBS decided to establish a presence in Asia, the school was approached regarding teaching Tsinghua University how to conduct executive education.This eventually led to a one-week, dual-language program, co-taught by the two schools, called Managing in the Age of Internet.This initial partnership led to the development of the more expansive program that exists today. HBS programs in China have grown rapidly in recent years, several built on alliances with Chinese universities. Interest at HBS regarding China is incredibly high.There is now a second-year course called Doing Business in China.There are dozens of cases about China, 11 technical notes, and multiple books. HBS has five faculty members who are fluent in Mandarin, and 30 HBS faculty members will work, visit, teach, and do research in China this year. Sixty Harvard MBA students have PRC passports. The Harvard Center Shanghai makes new types of programs possible. In 2010, the Center will host 15 weeks of programs, none of which existed four years ago.HBS’s programs in China are largely based on partnerships with the leading universities in the country.These include: H A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 2 9 CLOSING REMARKS KEY TAKEAWAYSTo the surprise of many, China is an emerging IT superpower. The conventional wisdom is that China is a center of lowcost manufacturing and India is the center of IT globalization. Certainly, India has been where the action is, but a new story is emerging.As China consciously seeks to move up the IT value chain, it is rapidly becoming a formidable player in the world of IT. China’s population is literate and educated. (Literacy rates are 93-95%, which are far higher than India’s.) China’s telecommunications infrastructure and bandwidth are massive and growing; there are almost 800 million cell phones in the country. Already,leading technology companies like IBM,Microsoft, and Hewlett Packard have established strong presences in the country. “It is an information-enabled society with massive investments in [technological] infrastructure.” — F. Warren McFarlan H A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 3 0 CLOSING REMARKS KEY TAKEAWAYSH A R VA R D A N D C H I N A : A R E S E A R C H SY M P O S I U M | 3 1 OVERVIEW Harvard and China have a long, rich history of partnership and collaboration.Today, collaborative- learning programs exist in each of Harvard’s schools and departments.As the world of higher education becomes increasingly global,the level of collaboration between Harvard and China will only deepen.The Harvard Center Shanghai represents another important step in this collaboration, providing unparalleled opportunities. CONTEXT President Faust talked about the relationship between Harvard and China in the context of the global expansion of higher education. 
SPEAKER Drew Gilpin Faust Lincoln Professor of History, Faculty of Arts and Sciences and President of Harvard University CLOSING REMARKSAt Harvard, East Asian studies has become a hallmark of the university.The Harvard-Yenching Library has more than one million volumes, making it the largest university East Asian collection outside of Asia.Today, more than 370 courses are offered in East Asian studies in a wide range of subjects, such as history and literature;courses are taught in sevenAsian languages, with more than 600 students enrolled. Opening the Harvard Center Shanghai provides an opportunity for Harvard to reaffirm and enhance its commitment to China. The privilege of universities is to take the long view, as the Harvard Center Shanghai does, and to invest in projects that draw on relationships and knowledge to seize a better future. Harvard’s wide array of projects and partners in China and across Asia are a testament to this long view and to planting seeds for the future. Examples include: • Harvard Business School has published more than 300 cases, articles, and books on China. HBS also is coordinating student immersion experiences in China. • At Harvard’s Fairbanks Center, faculty are working with two Chinese university partners to create a free, online biographical database for China. Collaboration over nearly a decade has created a geographic database of anything that can be mapped covering 17 centuries of Chinese history. • Harvard Medical School has partnerships in China for clinical education and research. • Harvard’s Graduate School of Design has programs and exchanges with China. • Harvard Law School maintains a broad range of involvement with Chinese legal development on everything from trade to intellectual property to legal education. The collaboration that has produced the Harvard Center Shanghai creates unparalleled opportunities. The Harvard Center Shanghai is a space that was designed for academic collaboration. It will be a hub for learning, seminars, executive training, and collaborative programs between Harvard faculty and Chinese universities, organizations, and government. The facility will provide new opportunities for Harvard alumni and for current students who participate in internship programs. This facility results from a tremendous amount of collaboration: between Harvard and multiple alumni; between Harvard and Chinese government officials; and among multiple areas within Harvard (Harvard Business School, the Faculty of Arts and Sciences, the Harvard China Fund, the Office of the Provost, and the Vice Provost for International Affairs).These efforts are consistent with President Faust’s vision of “one university.” There is a long history of collaboration between Harvard and China. Harvard’s first instructor in Chinese arrived in Cambridge (after a journey of nine months) and began teaching Chinese to undergraduates in 1879. Shortly after that, Chinese students began arriving at Harvard and were soon studying in every department and school. By 1908,they had formed a Chinese club. Between 1909 and 1929, about 250 Chinese students graduated from Harvard.These individuals made remarkable contributions in China, with almost half of them becoming professors and more than one dozen becoming university presidents. During this time, a graduate of Harvard Law School helped establish China’s first modern law school, ushering in a century of collaboration between Harvard and China’s legal system. 
In 1911, graduates of Harvard Medical School created the first Western medical school in China. This was the first of many connections in public health and medicine between Harvard and China.

Contrary to predictions of protectionism among nations or schools, the stakes and the players are not national; they are global. As the new Harvard Center Shanghai demonstrates, we are increasingly in a world of universities without borders. Universities exchange faculty and students as never before, and engage in international collaboration and problem solving. Higher education is developing a global meritocracy: underway are a great brain race and a global exchange of ideas. The expanding quality and quantity of universities in Asia and elsewhere open unimagined new possibilities for understanding and discovery. This is a race where everyone wins.

"Increasingly we are in a world of universities without borders." — Drew Gilpin Faust

By teaching creative and critical thinking, universities prepare students for an uncertain world. We live in uncertain times. We can prepare, but we can't predict. In such an environment, students need to learn to think creatively and critically; to improvise; to manage amid uncertainty. The intense, interactive case study method used at Harvard Business School and Harvard Law School has never been more important. Through this method, education unfolds from vivid debate. Teaching the case method in China is just one more way in which Harvard and China are collaborating. For the past five years, at the request of the Chinese Ministry of Education, HBS faculty have worked with more than 200 top Chinese faculty and deans in case method and participant-centered learning programs.

• The Harvard School of Public Health worked with the Chinese government over the past four years on an analysis and plan to provide health insurance to 90% of the Chinese population.

• The Harvard China Project, based at Harvard's School of Engineering and Applied Sciences, is studying air pollution and greenhouse gases. This project draws on faculty from several Harvard departments and Chinese universities.

• Harvard's Kennedy School is involved in multiple collaborations with Chinese partners on clean energy and advanced training programs in policy and crisis management.

• The Harvard China Fund, a university-wide academic venture fund, has made dozens of faculty grants for research partnerships and has placed more than 100 undergraduates in summer internships in China.

These endeavors are a sampling of the collaborative tradition between Harvard and partners in China. These partnerships will share ideas and generate new ones.

Higher education is increasingly global, which benefits all participants. We live in a moment of furious transformation, particularly in higher education. Nowhere is that transformation happening faster than in Asia. In China, the transformation is analogous to the "big bang."

"In a single decade, along with the world's fastest-growing economy, China has created the most rapid expansion of higher education in human history." — Drew Gilpin Faust

This is a moment of tremendous opportunity. It is no coincidence that the second major expansion of Asian studies at Harvard occurred in the 20 years after World War II, when the number of undergraduates in American colleges increased by 500% and the number of graduate students rose almost 900%. China now faces similar opportunities.
THE QUARTERLY JOURNAL OF ECONOMICS Vol. CXIX February 2004 Issue 1

THE MODERN HISTORY OF EXCHANGE RATE ARRANGEMENTS: A REINTERPRETATION*

CARMEN M. REINHART AND KENNETH S. ROGOFF

We develop a novel system of reclassifying historical exchange rate regimes. One key difference between our study and previous classifications is that we employ monthly data on market-determined parallel exchange rates going back to 1946 for 153 countries. Our approach differs from the IMF official classification (which we show to be only a little better than random); it also differs radically from all previous attempts at historical reclassification. Our classification points to a rethinking of economic performance under alternative exchange rate regimes. Indeed, the breakup of Bretton Woods had less impact on exchange rate regimes than is popularly believed.

I. INTRODUCTION

This paper rewrites the history of post-World War II exchange rate arrangements, based on an extensive new monthly data set spanning 153 countries for 1946–2001. Our approach differs not only from countries' officially declared classifications (which we show to be only a little better than random); it also differs radically from the small number of previous attempts at historical reclassification.1

* The authors wish to thank Alberto Alesina, Arminio Fraga, Amartya Lahiri, Vincent Reinhart, Andrew Rose, Miguel Savastano, participants at Harvard University's Canada-US Economic and Monetary Integration Conference, International Monetary Fund-World Bank Joint Seminar, National Bureau of Economic Research Summer Institute, New York University, Princeton University, and three anonymous referees for useful comments and suggestions, and Kenichiro Kashiwase, Daouda Sembene, and Ioannis Tokatlidis for excellent research assistance. Data and background material to this paper are available at http://www.puaf.umd.edu/faculty/papers/reinhart/reinhart.htm.

1. The official classification is given in the IMF's Annual Report on Exchange Rate Arrangements and Exchange Restrictions, which, until recently, asked member states to self-declare their arrangement as belonging to one of four categories. Previous studies have either extended the four-way official classification into a more informative taxonomy (see Ghosh et al. [1997]), or relied largely on statistical methods to regroup country practices (see Levy-Yeyati and Sturzenegger [2002]). The Fund, recognizing the limitations of its former strategy, revised and upgraded the official approach toward classifying exchange rate arrangements in 1997 and again in 1999. Notably, all these prior approaches to exchange rate regime classification, whether or not they accept the country's declared regime, have been based solely on official exchange rates.

2. When we refer to multiple exchange rates in this context, we are focusing on the cases where one or more of the rates is market-determined. This is very different from the cases where the multiple official rates are all fixed and simply act as a differential tax on a variety of transactions. Dual markets are typically legal, whereas parallel markets may or may not be legal.

© 2004 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology. The Quarterly Journal of Economics, February 2004

As a first innovation, we incorporate data on parallel and dual exchange rate markets, which have been enormously important not only in developing countries but in virtually all the European countries up until the late 1950s, and sometimes well beyond. We argue that any classification algorithm that fails to distinguish between unified rate systems (with one official exchange rate and no significant "black" or parallel market) and all others is fundamentally flawed. Indeed, in the vast majority of multiple exchange rate or dual systems, the floating dual or parallel rate is not only a far better barometer of monetary policy than is the official exchange rate, it is often the most economically meaningful rate.2 Very frequently—roughly half the time for official pegs—we find that dual/parallel rates have been used as a form of "back door" floating, albeit one usually accompanied by exchange controls.

The second novelty in our approach is that we develop extensive chronologies of the history of exchange arrangements and related factors, such as exchange controls and currency reforms.
Together with a battery of descriptive statistics, this allows us to draw a nuanced distinction between what countries declare as their official de jure regime, and their actual de facto exchange rate practices. To capture the wide range of arrangements, our approach allows for fourteen categories of exchange rate regimes, ranging from no separate legal tender or a strict peg to a dysfunctional "freely falling" or "hyperfloat."

Some highlights from our reclassification of exchange rate arrangements are as follows. First, dual or multiple rates and parallel markets have prevailed far more frequently than is commonly acknowledged. In 1950, 45 percent of the countries in our sample had dual or multiple rates; many more had thriving parallel markets. Among the industrialized economies, dual or multiple rates were the norm in the 1940s and the 1950s, and in some cases these lasted until much later. Our data lend strong support to the view stressed by Bordo [1993] that Bretton Woods encompassed two very different kinds of exchange rate arrangements in the pre- and postconvertibility periods and that the period of meaningful exchange rate stability was quite short-lived. In the developing world, such practices remained commonplace through the 1980s and 1990s and into the present.

We show that market-determined dual/parallel markets are important barometers of underlying monetary policy. This may be obvious in cases such as modern-day Myanmar, where the parallel market premium at the beginning of 2003 exceeded 700 percent. As we show, however, the phenomenon is much more general, with the parallel market premium often serving as a reliable guide to the direction of future official exchange rate changes. Whereas dual/parallel markets have been marginal over some episodes, they have been economically important in others, and there are many instances where only a few transactions take place at the official rate. To assess the importance of secondary (legal or illegal) parallel markets, we collected data that allow us to estimate export misinvoicing practices, in many cases going back to 1948. These estimates show that leakages from the official market were significant in many of the episodes when there were dual or parallel markets.

Second, when one uses market-determined rates in place of official rates, the history of exchange rate policy begins to look very different. For example, it becomes obvious that de facto floating was common during the early years of the Bretton Woods era of "pegged" exchange rates.
Conversely, many "floats" of the post-1980s turn out to be (de facto) pegs, crawling pegs, or very narrow bands. Of countries listed in the official IMF classification as managed floating, 53 percent turned out to have de facto pegs, crawls, or narrow bands to some anchor currency.

Third, next to pegs (which account for 33 percent of the observations during 1970–2001, according to our new "Natural" classification), the most popular exchange rate regime over modern history has been the crawling peg, which accounted for over 26 percent of the observations. During 1990 to 2001 this was the most common type of arrangement in emerging Asia and Western Hemisphere (excluding Canada and the United States), making up for about 36 and 42 percent of the observations, respectively.

Fourth, our taxonomy introduces a new category: freely falling, or the cases where the twelve-month inflation rate is equal to or exceeds 40 percent per annum.3 It turns out to be a crowded category indeed, with about 12½ percent of the observations in our sample occurring in the freely falling category. As a result, "freely falling" is about three times as common as "freely floating," which accounts for only 4½ percent of the total observations. (In the official classification, freely floating accounts for over 30 percent of observations over the past decade.) Our new freely falling classification makes up 22 and 37 percent of the observations, respectively, in Africa and Western Hemisphere (excluding Canada and the United States) during 1970–2001. In the 1990s freely falling accounted for 41 percent of the observations for the transition economies. Given the distortions associated with very high inflation, any fixed versus flexible exchange rate regime comparisons that do not break out the freely falling episodes are meaningless, as we shall confirm.

3. We also include in the freely falling category the first six months following an exchange rate crisis (see the Appendix for details), but only for those cases where the crisis marked a transition from a peg or quasi-peg to a managed or independent float.

There are many important reasons to seek a better approach to classifying exchange rate regimes. Certainly, one is the recognition that contemporary thinking on the costs and benefits of alternative exchange rate arrangements has been profoundly influenced by the large number of studies on the empirical differences in growth, trade, inflation, business cycles, and commodity price behavior. Most have been based on the official classifications and all on official exchange rates. In light of the new evidence we collect, we conjecture that the influential results in Baxter and Stockman [1989]—that there are no significant differences in business cycles across exchange arrangements—may be due to the fact that the official historical groupings of exchange rate arrangements are misleading.

The paper proceeds as follows. In the next section we present evidence to establish the incidence and importance of dual or multiple exchange rate practices. In Section III we sketch our methodology for reclassifying exchange rate arrangements. Section IV addresses some of the possible critiques to our approach, compares our results with the "official history," and provides examples of how our reclassification may reshape evidence on the links between exchange rate arrangements and various facets of economic activity. The final section reiterates some of the main findings, while background material to this paper provides the detailed country chronologies that underpin our analysis.
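As a concrete illustration of the "freely falling" screen introduced above, the sketch below flags country-months with twelve-month inflation of 40 percent or more, or falling in the six months after a crisis that ended a peg (per footnote 3). It is a schematic reading of the rule in the text, not the authors' code; the column names `inflation_12m` and `crisis_exit_from_peg` are hypothetical.

```python
import pandas as pd

def flag_freely_falling(df: pd.DataFrame) -> pd.Series:
    """
    Flag monthly observations as 'freely falling', following the rule described
    in the text: (1) twelve-month inflation of 40 percent or more, or (2) one of
    the six months following an exchange rate crisis that marked a transition
    from a peg (or quasi-peg) to a managed or independent float.
    Assumes `df` has columns 'inflation_12m' (percent) and
    'crisis_exit_from_peg' (boolean), ordered by month for a single country.
    """
    high_inflation = df["inflation_12m"] >= 40.0

    # Carry the crisis flag over the six months that follow the crisis month.
    post_crisis = (
        df["crisis_exit_from_peg"].astype(int)
        .shift(1, fill_value=0)            # start counting the month after the crisis
        .rolling(window=6, min_periods=1)
        .max()
        .astype(bool)
    )
    return high_inflation | post_crisis
```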
II. THE INCIDENCE AND IMPORTANCE OF DUAL AND MULTIPLE EXCHANGE RATE ARRANGEMENTS

In this section we document the incidence of dual or parallel markets (legal or otherwise) and multiple exchange rate practices during the post-World War II period. We then present evidence that the market-determined exchange rate is a better indicator of the underlying monetary policy than the official exchange rate. Finally, to provide a sense of the quantitative importance for economic activity of the dual or parallel market, we present estimates of "leakages" from the official market. Specifically, we provide quantitative measures of export misinvoicing practices.

We primarily use monthly data on official and market-determined exchange rates for the period 1946–2001. In some instances, the data for the market-determined rate are only available for a shorter period, and the background material provides the particulars on a country-by-country basis. The pre-1999 market-determined exchange rate data come from various issues of Pick's Currency Yearbook, Pick's Black Market Yearbooks, and World Currency Reports, and the official rate comes from the same sources as well as from the IMF. The quotes are end-of-month exchange rates and are not subject to revisions. For the recent period (1999–2001) the monthly data on market-determined exchange rates come from the original country sources (i.e., the central banks), for those countries where there are active parallel markets for which data are available.4 Since our coverage spans more than 50 years, it encompasses numerous cases of monetary reforms involving changes in the units of account, so the data were spliced accordingly to ensure continuity.

II.A. On the Popularity of Dual and Multiple Exchange Rate Practices

Figure I illustrates de facto and de jure nonunified exchange rate regimes. The figure shows the incidence of exchange rate arrangements over 1950–2001, with and without stripping out cases of dual markets or multiple exchange rates. The IMF classification has been simplified into what it was back in the days of Bretton Woods—namely, Pegs and Other.5 The dark portions of the bars represent cases with unified exchange rates, and the lightly shaded portion of each bar separates out the dual, multiple, or parallel cases. In 1950 more than half (53 percent) of all arrangements involved two or more exchange rates. Indeed, the heyday of multiple exchange rate practices and active parallel markets was 1946–1958, before the restoration of convertibility in Europe. Note also that, according to the official IMF classification, pegs reigned supreme in the early 1970s, accounting for over 90 percent of all exchange rate arrangements. In fact, over half of these "pegs" masked parallel markets that, as we shall show, often exhibited quite different behavior.

4. These countries include Afghanistan, Angola, Argentina, Belarus, Belize, Bolivia, Burundi, Congo (DRC), Dominican Republic, Egypt, Ghana, Iran, Libya, Macedonia, Mauritania, Myanmar, Nigeria, Pakistan, Rwanda, Tajikistan, Turkmenistan, Ukraine, Uzbekistan, Yemen, Yugoslavia, and Zimbabwe.

5. For a history of the evolution of the IMF's classification strategy, see the working paper version of this paper, Reinhart and Rogoff [2002].
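A minimal sketch of the two quantities this section works with, under an assumed country-month data layout (`official_rate`, `parallel_rate`, `dual_or_multiple`), might look like the following: it computes the parallel market premium and the yearly share of arrangements with more than one exchange rate, in the spirit of the incidence shown in Figure I. The layout and function names are illustrative, not the authors' own.

```python
import pandas as pd

def parallel_premium(official_rate: pd.Series, parallel_rate: pd.Series) -> pd.Series:
    """Parallel market premium in percent: how far the market-determined rate
    sits above the official rate (both quoted as local currency per U.S. dollar)."""
    return (parallel_rate / official_rate - 1.0) * 100.0

def share_dual_or_multiple(panel: pd.DataFrame) -> pd.Series:
    """Yearly share (in percent) of country observations with dual, multiple,
    or parallel rates. Assumes columns 'year', 'country', 'dual_or_multiple'."""
    return panel.groupby("year")["dual_or_multiple"].mean() * 100.0
```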
FIGURE I
The Incidence of Dual or Multiple Exchange Rate Arrangements, 1950–2001: Simplified IMF Classification
Sources: International Monetary Fund, Annual Report on Exchange Arrangements and Exchange Restrictions and International Financial Statistics; Pick and Sédillot [1971]; International Currency Analysis, World Currency Yearbook, various issues. Exchange rate arrangements classified as "Other" include the IMF's categories of limited flexibility, managed floating, and independently floating.

II.B. The Market-Determined Exchange Rate as an Indicator of Monetary Policy

While the quality of data on market-determined rates is likely to vary across countries and time, we nevertheless believe these data to be generally far better barometers of the underlying monetary policy than are official exchange rates. For instance, if the laxity in monetary policy is not consistent with maintaining a fixed official exchange rate, one would expect the market-determined rate to start depreciating ahead of the inevitable devaluation of the official rate. When the official realignment occurs, it is simply a validation of what had previously transpired in the free market. Indeed, this is the pattern shown in the three panels of Figure II for the cases of Bolivia, Indonesia, and Iran; many more such cases are displayed in the figures that accompany the 153 country chronologies. 6 This pattern also emerges often in the developed European economies and Japan in the years following World War II.

6. See "Part I. The Country Chronologies and Chartbook, Background Material to A Modern History of Exchange Rate Arrangements: A Reinterpretation" at http://www.puaf.umd.edu/faculty/papers/reinhart/reinhart.htm.

To illustrate more rigorously that the market-based exchange rate is a better indicator of the monetary policy stance than the official rate, we performed two exercises for each country. First, we examined whether the market-determined exchange rate systematically predicts realignments in the official rate, as suggested in Figure II. To do so, we regressed a currency crash dummy on the parallel market premium lagged one to six months, for each of the developing countries in our sample. 7 If the market exchange rate consistently anticipates devaluations of the official rate, its coefficient should be positive and statistically significant. If, in turn, the official exchange rate does not validate the market rate, then the coefficient on the lagged market exchange rate will be negative or simply not significant.

7. Two definitions of currency crashes are used. A severe currency crash refers to a 25 percent or higher monthly depreciation that is at least 10 percent higher than the previous month's depreciation. The "milder" version represents a 12.5 percent monthly depreciation that is at least 10 percent above the preceding month's depreciation; see details in the Appendix.

FIGURE II
Official Exchange Rates Typically Validate the Changes in the Market Rates
Sources: Pick and Sédillot [1971]; International Currency Analysis, World Currency Yearbook, various issues.

Table I summarizes the results of the country-by-country time series probit regressions. In the overwhelming number of cases (97 percent), the coefficient on the market-determined exchange rate is positive. In about 81 percent of the cases, the sign on the coefficient was positive and statistically significant. Indeed, for
Western Hemisphere as a region, the coefficient on the parallel premium was significant for all the countries in our sample. These findings are in line with those of Bahmani-Oskooee, Miteza, and Nasir [2002], who use annual panel data for 1973–1990 for 49 countries and employ a completely different approach. Their panel cointegration tests indicate that the official rate will systematically adjust to the market rate in the long run.

Second, we calculated pairwise correlations between inflation (measured as the twelve-month change in the consumer price index) and the twelve-month percent change in the official and market exchange rates, six months earlier. If the market rate is a better pulse of monetary policy, it should be (a priori) more closely correlated with inflation. As shown in Table II, we find that for the majority of cases (about three-quarters of the countries) the changes in market-determined exchange rates have higher correlations with inflation than do changes in the official rate. 8 An interesting exception to this pattern of higher correlations between the market-determined exchange rate changes and inflation is for the industrial countries in the "Convertible Bretton Woods" period (1959–1973), an issue that merits further study.

8. Note that, due to data limitations, we use official prices rather than black market or "street" prices to measure inflation here. Otherwise, the dominance of the market-determined rates in this exercise would presumably be even more pronounced.

TABLE I
IS THE PARALLEL MARKET RATE A GOOD PREDICTOR OF CRASHES IN THE OFFICIAL EXCHANGE RATE? SUMMARY OF THE PROBIT COUNTRY-BY-COUNTRY ESTIMATION

Regression: DO_t = a + b ΔP_{t−i} + u_t

"Mild" crash: percent of countries for which:
b > 0: 97.1
b > 0 and significant (a): 81.4
b < 0: 2.9
b < 0 and significant (a): 1.4

Sources: Pick's Currency Yearbook, World Currency Report, Pick's Black Market Yearbook, and the authors' calculations.
DO_t is a dummy variable that takes on the value of 1 when there is a realignment in the official exchange rate along the lines described below and 0 otherwise; a and b are the intercept and slope coefficients, respectively (our null hypothesis is b > 0); ΔP_{t−i} is the twelve-month change in the parallel exchange rate, lagged one to six months (the lags were allowed to vary country by country, as there was no prior reason to restrict the dynamics to be the same for all countries); and u_t is a random disturbance. Two definitions of currency crashes are used in the spirit of Frankel and Rose [1996]. A "severe" currency crash refers to a 25 percent or higher monthly depreciation, which is at least 10 percent higher than the previous month's depreciation. The "mild" version represents a 12.5 percent monthly depreciation, which is at least 10 percent above the preceding month's depreciation. Since both definitions of crash yield similar results, we report here only those for the more inclusive definition. The regression sample varies by country and is determined by data availability.
a. At the 10 percent confidence level or higher.
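The country-by-country estimation summarized in Table I can be sketched in a few lines. The code below is a minimal illustration, not the authors' program: it assumes a monthly DataFrame with hypothetical columns official_rate and parallel_rate for a single country, uses the "mild" crash definition from the table notes, and fixes the lag at six months where the paper lets the lag vary by country.

import pandas as pd
import statsmodels.api as sm

def crash_dummy(official_rate, threshold=0.125, accel=0.10):
    # "Mild" crash: monthly depreciation of at least 12.5 percent that is also
    # at least 10 percent above the previous month's depreciation.
    dep = official_rate.pct_change()
    return ((dep >= threshold) & (dep - dep.shift(1) >= accel)).astype(int)

def premium_predicts_crash(df, lag=6):
    # df: monthly data for one country with columns 'official_rate' and
    # 'parallel_rate' (illustrative names).
    y = crash_dummy(df["official_rate"])
    x = df["parallel_rate"].pct_change(12).shift(lag)  # 12-month change, lagged
    data = pd.concat([y.rename("crash"), x.rename("dparallel")], axis=1).dropna()
    res = sm.Probit(data["crash"], sm.add_constant(data["dparallel"])).fit(disp=0)
    return res.params["dparallel"], res.pvalues["dparallel"]

A positive and significant slope, as in 81 percent of the cases reported in Table I, is what one expects if the parallel rate anticipates official realignments.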
II.C. How Important Are Parallel Markets?

There are cases where the parallel (or secondary) exchange rate applies only to a few limited transactions. An example is the "switch pound" in the United Kingdom during September 1950 through April 1967. 9 However, it is not unusual for dual or parallel markets (legal or otherwise) to account for the lion's share of transactions, with the official rate being little more than symbolic. As Kiguel, Lizondo, and O'Connell [1997] note, the official rate typically diminishes in importance when the gap between the official and market-determined rate widens.

To provide a sense of the comparative relevance of the dual or parallel market, we proceed along two complementary dimensions. First, we include a qualitative description in the country-specific chronologies (see background material) of what transactions take place in the official market versus the secondary market. Second, we develop a quantitative measure of the potential size of the leakages into dual or parallel exchange markets. 10

9. For example, while the United Kingdom officially had dual rates through April 1967, the secondary rate was so trivial (both in terms of the premium and the volume of transactions it applied to) that it is classified as a peg in our classification scheme (see background material). In the next section we describe how our classification algorithm deals with these cases.
10. For instance, according to Claessens [1997], export underinvoicing hit a historic high in Mexico during 1982—the crisis year in which the dual market was introduced. Similar statements can be made about other crisis episodes that involved the introduction of exchange controls and the segmentation of markets.

TABLE II
INFLATION, OFFICIAL AND MARKET-DETERMINED EXCHANGE RATES: COUNTRY-BY-COUNTRY PAIRWISE CORRELATIONS

Percent of countries for which the correlations of:
The market-determined exchange rate and inflation are higher than the correlations of the official rate and inflation: 73.7
The market-determined exchange rate and inflation are lower than the correlations of the official rate and inflation: 26.3

Sources: International Monetary Fund, International Financial Statistics; Pick's Currency Yearbook, World Currency Report, Pick's Black Market Yearbook; and the authors' calculations. The correlations reported are those of the twelve-month percent change in the consumer price index with the twelve-month percent change in the relevant bilateral exchange rate lagged six months.

Following Ghei, Kiguel, and O'Connell [1997], we classify episodes where there are dual/parallel markets into three tiers according to the level (in percent) of the parallel market premium: low (below 10 percent), moderate (10 percent or above but below 50), and high (50 percent and above). For the episodes of dual/parallel markets, we provide information about which category each episode falls into (by calculating the average premium for the duration of the episode).

In addition to the information contained in the premium, we constructed an extensive database on export misinvoicing, or the difference between what a country reports as its exports and what other countries report as imports from that country, adjusted for shipping costs. Historically, there are tight links between capital flight, export underinvoicing, and the parallel market premium. 11 As with the parallel market premium, we divide the export misinvoicing estimates into three categories (as a percent of the value of total exports): low (less than 10 percent of exports), moderate (10 to 15 percent of exports), and high (above 15 percent). For Europe, Japan, and the United States, the misinvoicing calculations start in 1948, while for the remaining countries they start in 1970. In the extensive background material to this paper, we show, for each episode, which of the three categories is applicable. Finally, we construct a score (1 for Low, 2 for Moderate, and 3 for High) for both of these proxies for leakages. The combined score on the estimated size of the leakages (these range from 2 to 6) is also reported. 12
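For concreteness, the three-tier buckets and the combined leakage score just described translate directly into a simple rule. The sketch below is only a transcription of the thresholds stated in the text; the function names are ours, not the authors'.

def premium_tier(avg_premium_pct):
    # Parallel market premium: low (<10), moderate (10 to <50), high (>=50).
    if avg_premium_pct < 10:
        return 1
    if avg_premium_pct < 50:
        return 2
    return 3

def misinvoicing_tier(avg_misinvoicing_pct):
    # Export misinvoicing as a percent of exports: low (<10), moderate (10-15), high (>15).
    if avg_misinvoicing_pct < 10:
        return 1
    if avg_misinvoicing_pct <= 15:
        return 2
    return 3

def leakage_score(avg_premium_pct, avg_misinvoicing_pct):
    # Combined score runs from 2 (both low) to 6 (both high).
    return premium_tier(avg_premium_pct) + misinvoicing_tier(avg_misinvoicing_pct)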
Table III, which shows the evolution of export misinvoicing (as a percent of the value of total exports) and the parallel market premium (in percent) across regions and through time, provides a general flavor of the size of potential leakages from the official market. According to our estimates of misinvoicing (top panel), the regional patterns show the largest leakages for the Caribbean and non-CFA Sub-Saharan Africa during 1970–2001, with averages in the 30 to 50 percent range. The lowest estimates of misinvoicing (8 to 11 percent) are for Western Europe, North America, and the CFA Franc Zone.

11. See Kiguel, Lizondo, and O'Connell [1997] and the references contained therein.
12. See "Part II. Parallel Markets and Dual and Multiple Exchange Rate Practices: Background Material to A Modern History of Exchange Rate Arrangements: A Reinterpretation" at http://www.puaf.umd.edu/faculty/papers/reinhart/reinhart.htm.

TABLE III
LEAKAGES: EXPORT MISINVOICING AND THE PARALLEL MARKET PREMIUM

Absolute value of export misinvoicing (as a percent of the value of exports). The first three columns are descriptive statistics; the remaining columns are mean absolute values by decade.

Region | Min. | Max. | St. dev. | 48–49 | 50–59 | 60–69 | 70–79 | 80–89 | 90–01 | 70–01
World | 7.0 | 39.8 | 8.4 | 12.8 | 10.9 | 9.9 | 24.7 | 22.1 | 26.0 | 24.4
North Africa | 2.5 | 59.9 | 10.3 | ... | ... | ... | 7.2 | 8.3 | 16.1 | 10.9
CFA | 12.6 | 48.3 | 8.4 | ... | ... | ... | 28.5 | 21.7 | 21.5 | 23.8
Rest of Africa | 16.3 | 201.9 | 33.5 | ... | ... | ... | 23.4 | 23.4 | 53.6 | 34.1
Middle East and Turkey | 9.1 | 45.4 | 9.6 | ... | ... | ... | 30.7 | 16.7 | 17.4 | 21.5
Developing Asia and Pacific | 9.5 | 79.1 | 16.9 | ... | ... | ... | 31.4 | 14.9 | 24.1 | 23.5
Industrialized Asia | 3.7 | 18.2 | 3.3 | 11.2 | 14.2 | 13.9 | 14.6 | 12.0 | 10.3 | 12.2
Caribbean | 9.7 | 136.0 | 33.2 | ... | ... | ... | 30.8 | 48.9 | 60.0 | 47.0
Central and South America | 12.0 | 49.6 | 8.2 | ... | ... | ... | 26.1 | 36.0 | 30.4 | 30.8
Central and Eastern Europe | 2.5 | 50.0 | 18.3 | ... | ... | ... | 46.6 | 15.4 | 7.4 | 22.1
Western Europe | 2.4 | 16.9 | 3.0 | 14.1 | 10.4 | 10.0 | 11.6 | 7.6 | 7.7 | 8.9
North America | 0.6 | 22.6 | 5.9 | 4.6 | 9.4 | 3.8 | 16.0 | 11.4 | 4.8 | 10.4

Monthly average parallel market premium (excluding freely falling episodes, in percent). The first three columns are descriptive statistics; the remaining columns are averages by decade.

Region | Min. | Max. | St. dev. | 46–49 | 50–59 | 60–69 | 70–79 | 80–89 | 90–98 | 46–98
World | 11.6 | 205.9 | 35.4 | 137.8 | 56.7 | 38.1 | 31.3 | 57.8 | 52.6 | 54.1
North Africa | −1.2 | 164.8 | 41.4 | ... | 9.9 | 35.7 | 30.7 | 108.6 | 62.0 | 53.6
CFA | −6.4 | 12.7 | 2.7 | ... | ... | ... | 0.0 | 1.2 | 1.8 | 0.9
Rest of Africa | 1.7 | 322.5 | 73.9 | 31.9 | 6.9 | 33.7 | 113.7 | 112.7 | 107.7 | 71.0
Middle East and Turkey | 5.1 | 493.1 | 99.6 | 54.6 | 81.0 | 26.0 | 21.4 | 146.5 | 193.2 | 88.6
Developing Asia and Pacific | −3.7 | 660.1 | 95.0 | 143.5 | 60.9 | 168.9 | 44.7 | 43.1 | 12.1 | 72.9
Industrialized Asia | −6.9 | 815.9 | 107.6 | 324.4 | 43.0 | 12.0 | 3.6 | 1.3 | 1.5 | 36.1
Caribbean | −23.8 | 300.0 | 42.8 | ... | ... | 29.6 | 30.2 | 56.8 | 53.6 | 42.3
Central and South America | 3.0 | 716.1 | 78.5 | 49.1 | 133.0 | 16.4 | 18.6 | 74.8 | 8.4 | 51.0
Western Europe | −5.6 | 347.5 | 48.6 | 165.5 | 17.0 | 1.2 | 2.0 | 1.7 | 1.2 | 16.9
North America | −4.3 | 49.7 | 3.3 | 7.2 | 0.5 | 0.0 | 1.1 | 1.4 | 1.6 | 1.3

Sources: International Monetary Fund, Direction of Trade Statistics and International Financial Statistics; Pick's Currency Yearbook, World Currency Report, Pick's Black Market Yearbook; and authors' calculations. To calculate export misinvoicing, let XW_i = imports from country i, as reported by the rest of the world (c.i.f. basis), X_i = exports to the world as reported by country i, and Z = imports on a c.i.f. basis/imports on an f.o.b. basis; then export misinvoicing = (XW_i/Z) − X_i. The averages reported are absolute values as a percent of the value of total exports. The parallel premium is defined as 100 × [(P − O)/O], where P and O are the parallel and official rates, respectively. The averages for the parallel premium are calculated for all the countries in our sample in each region; as such, they include countries where rates are unified and the premium is zero or nil.
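The two definitions in the notes to Table III can be written down directly. The sketch below is merely a transcription of those formulas (variable names are ours); we read the ratio Z as imports c.i.f. over imports f.o.b., which is how the adjustment for shipping costs is described.

def export_misinvoicing(xw_i, x_i, z):
    # xw_i: imports from country i as reported by the rest of the world (c.i.f.)
    # x_i: exports to the world as reported by country i
    # z: imports c.i.f. / imports f.o.b., used to strip out shipping costs
    return (xw_i / z) - x_i

def parallel_premium(parallel_rate, official_rate):
    # Parallel market premium in percent: 100 * (P - O) / O
    return 100.0 * (parallel_rate - official_rate) / official_rate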
It is also noteworthy that, although low by the standards of other regions, the export misinvoicing average for Western Europe in 1970–2001 is half of what it was in 1948–1949. Yet these regional averages may understate the importance of misinvoicing in some countries. For example, the maximum value for 1948–2001 for Western Europe (16.9 percent) does not reflect the fact that for Spain misinvoicing as a percent of the value of exports amounted to 36 percent in 1950, a value comparable to what we see in some of the developing regions.

As to the regional average parallel market premium shown in the bottom panel of Table III, all regions fall squarely in the Moderate-to-High range (with the exception of North America, Western Europe, and CFA Africa). In the case of developing Asia, the averages are significantly raised by Myanmar and Laos. It is worth noting that the averages for Europe and industrialized Asia in the 1940s are comparable to, and even higher than, those recorded for many developing countries, highlighting the importance of acknowledging and accounting for dual markets during this period.

To sum up, in this section we have presented evidence that leads us to conclude that parallel markets were important both as indicators of monetary policy and as representative of the prices underlying an important share of economic transactions. It is therefore quite reasonable to draw heavily on the dual or parallel market data in classifying exchange rate regimes, the task to which we now turn.

III. THE "NATURAL" CLASSIFICATION CODE: A GUIDE

We would describe our classification scheme as a "Natural" system that relies on a broad variety of descriptive statistics and chronologies to group episodes into a much finer grid of regimes, rather than the three or four buckets of other recent classification strategies. 13 The two most important new pieces of information we bring to bear are our extensive data on market-determined dual or parallel exchange rates and our detailed country chronologies. The data, their sources, and country coverage are described, along with the chronologies that map the history of exchange rate arrangements for each country, in the detailed background material to this paper. To verify and classify regimes, we also rely on a variety of descriptive statistics based on exchange rate and inflation data from 1946 onwards; the Appendix describes these.

13. In biology, a natural taxonomic scheme relies on the characteristics of a species to group them.

III.A. The Algorithm

Figure III is a schematic summarizing our Natural Classification algorithm. First, we use the chronologies to sort out for separate treatment countries with either official dual or multiple rates or active parallel (black) markets. 14 Second, if there is no dual or parallel market, we check to see if there is an official preannounced arrangement, such as a peg or band. If there is, we examine summary statistics to verify the announced regime, going forward from the date of the announcement.
If the regime is verified (i.e., exchange rate behavior accords with the preannounced policy), it is then classified accordingly as a peg, crawling peg, etc. If the announcement fails verification (by far the most common outcome), we then seek a de facto statistical classification using the algorithm described below and discussed in greater detail in the Appendix.

Third, if there is no preannounced path for the exchange rate, or if the announced regime cannot be verified by the data, and the twelve-month rate of inflation is below 40 percent, we classify the regime by evaluating exchange rate behavior. As regards which exchange rate is used, we consider a variety of potential anchor currencies, including the US dollar, deutsche mark, euro, French franc, UK pound, yen, Australian dollar, Italian lira, SDR, South African rand, and the Indian rupee. A reading of the country chronologies makes plain that the relevant anchor currency varies not only across countries but sometimes within a country over time. (For example, many former British colonies switched from pegging to the UK pound to pegging to the US dollar.) Our volatility measure is based on a five-year moving window (see the Appendix for details), so that the monthly exchange rate behavior may be viewed as part of a larger, continuous regime. 15

14. See background material posted at http://www.puaf.umd.edu/faculty/papers/reinhart/reinhart.htm.
15. If the classification is based on exchange rate behavior in a particular year, it is more likely that one-time events (such as a one-time devaluation and repeg) or an economic or political shock leads to labeling the year as a change in regime, when in effect there is no change. For example, Levy-Yeyati and Sturzenegger [2002], who classify regimes one year at a time (with no memory), classified all CFA zone countries as having an intermediate regime in 1994, when these countries had a one-time devaluation in January of that year. Our algorithm classifies them as having pegs throughout. The five-year window also makes it less likely that we classify as a peg an exchange rate that did not move simply because it was a tranquil year with no economic or political shocks. It is far less probable that there are no shocks over a five-year span.

FIGURE III
A Natural Exchange Rate Classification Algorithm

We also examined the graphical evidence as a check on the classification. In practice, the main reason for doing so is to separate pegs from crawling pegs or bands and to sort the latter into crawling and noncrawling bands.

Fourth, as we have already stressed, a straightforward but fundamental departure from all previous classification schemes is that we create a new separate category for countries whose twelve-month rate of inflation is above 40 percent. These cases are labeled "freely falling." 16 If the country is in a hyperinflation (according to the classic Cagan [1956] definition of 50 percent or more monthly inflation), we categorize the exchange rate regime as a "hyperfloat," a subspecies of freely falling.

In Figure IV, bilateral exchange rates versus the US dollar are plotted for two countries that have been classified by the IMF (and all previous classification efforts) as floating over much of the postwar period—Canada and Argentina. 17 To us, lumping the Canadian float with that of Argentina during its hyperinflation seems, at a minimum, misleading.
As Figure IV illustrates, floating regimes look rather different from freely falling regimes—witness the orders-of-magnitude difference in the scales between Canada (top of page) and Argentina (bottom). This difference is highlighted in the middle panel, which plots the Canadian dollar–US dollar exchange rate on Argentina's scale; from this perspective, it looks like a fixed rate! The exchange rate histories of other countries that experienced chronic high-inflation bouts—even if these did not reach the hyperinflation stage—look more similar to Argentina in Figure IV than to Canada. 18 In our view, regimes associated with an utter lack of monetary control and the attendant very high inflation should not be automatically lumped under the same exchange rate arrangement as low-inflation floating regimes. On these grounds, freely falling needs to be treated as a separate category, much in the same way that Highly Indebted Poorest Countries (HIPC) are treated as a separate "type" of debtor.

16. In the exceptional cases (usually the beginning of an inflation stabilization plan) where, despite inflation over 40 percent, the market rate nevertheless follows a confirmed, preannounced band or crawl, the preannounced regime takes precedence.
17. For Argentina, this of course refers to the period before the Convertibility Plan was introduced in April 1991, and for Canada to the post-1962 period.
18. Two-panel figures, such as that shown for Chile (Figure V), for each country in the sample are found in the background material alongside the country-specific chronologies.

FIGURE IV
The Essential Distinction between Freely Floating and Falling
Sources: Pick and Sédillot [1971]; International Currency Analysis, World Currency Yearbook, various issues.

In step 5 we take up those residual regimes that were not classified in steps 1 through 4. These regimes become candidates for "managed" or "freely" floating. 19 To distinguish between the two, we perform some simple tests (see the Appendix) that look at the likelihood that the exchange rate will move within a narrow range, as well as the mean absolute value of exchange rate changes. When there are dual or parallel markets and the parallel market premium is consistently 10 percent or higher, we apply steps 1 through 5 to our data on parallel exchange rates and reclassify accordingly, though in our finer grid. 20

19. Our classification of "freely floating" is the analogue of "independently floating" in the official classification.
20. When the parallel market premium is consistently (i.e., all observations within the five-year window) in single digits, we find that in nearly all these cases the official and parallel rates yield the same classification.
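Steps 1 through 5 can be collapsed into a short decision sketch. The following is our compressed illustration of the sequence in Figure III, not the authors' code: the statistical verification of announced regimes and the managed-versus-freely-floating tests are abstracted into the flag and label arguments, while the inflation thresholds follow the text above.

def natural_classification(has_dual_or_parallel_market,
                           premium_consistently_10pct_or_more,
                           has_preannounced_regime,
                           announcement_verified,
                           announced_label,
                           inflation_12m_pct,
                           max_monthly_inflation_pct,
                           de_facto_label):
    # Step 1: countries with sizable parallel markets are classified on the
    # parallel (market-determined) rate rather than the official rate.
    if has_dual_or_parallel_market and premium_consistently_10pct_or_more:
        return "apply steps 2-5 to the parallel rate"
    # Step 2: a verified preannounced arrangement keeps its announced label
    # (this also covers the exceptional verified crawls/bands under high inflation).
    if has_preannounced_regime and announcement_verified:
        return announced_label            # e.g., "preannounced crawling peg"
    # Step 4: twelve-month inflation of 40 percent or more means freely falling;
    # 50 percent or more monthly inflation (Cagan) is flagged as a hyperfloat.
    if inflation_12m_pct >= 40:
        if max_monthly_inflation_pct >= 50:
            return "freely falling (hyperfloat)"
        return "freely falling"
    # Steps 3 and 5: otherwise classify from observed behavior over a five-year
    # window against the relevant anchor currency (pegs, crawls, bands,
    # managed or freely floating).
    return de_facto_label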
III.B. Using the Chronologies

The 153 individual country chronologies are also a central point of departure from all previous efforts to classify regimes. In the first instance, the data are constructed by culling information from annual issues of various secondary sources, including Pick's Currency Yearbook, World Currency Yearbook, Pick's Black Market Yearbook, International Financial Statistics, the IMF's Annual Report on Exchange Rate Arrangements and Exchange Restrictions, and the United Nations Yearbook. Constructing our data set required us to sort and interpret information for every year from every publication above. Importantly, we draw on national sources to investigate apparent data errors or inconsistencies. More generally, we rely on the broader economics literature to include pertinent information, such as the distribution of transactions among official and parallel markets. 21

The chronologies allow us to date dual or multiple exchange rate episodes, as well as to differentiate preannounced pegs, crawling pegs, and bands from their de facto counterparts. We think it is important to distinguish between, say, de facto pegs or bands and announced pegs or bands, because their properties are potentially different. 22 At the very least, we want to provide future researchers with the data needed to ask a variety of questions about the role of exchange rate arrangements. The chronologies also flag the dates of important turning points, such as when the exchange rate first floated, or when the anchor currency was changed.

21. See Marion [1994], for instance.
22. Policy-makers may not be indifferent between the two. In theory, at least, announcements of pegs, bands, and so on can act as a coordinating device which, by virtue of being more transparent, could invite speculative attacks.

Table IV gives an example of one of our 153 chronologies (see background material) for the case of Chile. The first column gives critical dates. Note that we extend our chronologies as far back as possible (even though we can only classify from 1946 onwards); in the case of Chile we go back to 1932. The second column lists how the arrangement is classified. Primary classification refers to the classification according to our Natural algorithm, which may or may not correspond to the official IMF classification (shown in parentheses in the second column of Table IV). Secondary and tertiary classifications are meant only to provide supplemental information, as appropriate. So, for example, from November 1952 until April 1956, Chile's inflation was above 40 percent, and hence its primary classification is freely falling—that is, the only classification that matters for the purposes of the Natural algorithm. For those interested in additional detail, however, we also note in that column that the market-determined exchange rate was a managed float along the lines described in detail in the Appendix (secondary) and that, furthermore, Chile had multiple exchange rates (tertiary). This additional information may be useful, for example, for researchers who are not interested in treating the high-inflation cases separately (as we have done here). In this case, they would have sufficient information to place Chile in the 1952–1956 period in the managed float category. Alternatively, for those researchers who wish to treat dual or multiple exchange rate practices as a separate category altogether (say, because these arrangements usually involve capital controls), the second column (under secondary or tertiary classification) provides the relevant information to do that sorting accordingly.

As one can see, although Chile unified rates in September 1999, it previously had some form of dual or multiple rates throughout most of its history. In these circumstances, we reiterate that our classification algorithm relies on the market-determined, rather than the official, exchange rate. 23

23. The other chronologies do not contain this information, but the annual official IMF classification for the countries in the sample is posted at http://www.puaf.umd.edu/faculty/papers/reinhart/reinhart.htm.
TABLE IV
A SAMPLE CHRONOLOGY IN THE NATURAL CLASSIFICATION SCHEME: CHILE, 1932–2001

Date | Classification: primary/secondary/tertiary (official IMF classification in parentheses) | Comments
September 16, 1925–April 20, 1932 | Peg | Gold standard. Foreign exchange controls are introduced on July 30, 1931.
April 20, 1932–1937 | Dual market | Pound sterling is the reference currency. Suspension of the gold standard.
1937–February 1946 | Managed floating/Multiple rates | US dollar becomes the reference currency.
March 1946–May 1947 | Freely falling/Managed floating/Multiple rates |
June 1947–October 1952 | Managed floating/Multiple rates |
November 1952–April 16, 1956 | Freely falling/Managed floating/Multiple rates |
April 16, 1956–August 1957 | Freely falling/Managed floating/Dual market | Rate structure is simplified, and a dual market is created.
September 1957–June 1958 | Managed floating/Dual market |
July 1958–January 1, 1960 | Freely falling/Managed floating/Dual market |
January 1, 1960–January 15, 1962 | Peg to US dollar | The escudo replaces the peso.
January 15, 1962–November 1964 | Freely falling/Managed floating/Multiple rates | Freely falling since April 1962.
December 1964–June 1971 | Managed floating/Multiple rates (Peg) |
July 1971–June 29, 1976 | Freely falling/Multiple exchange rates (Peg through 1973; managed floating afterwards) | On September 29, 1975, the peso replaced the escudo. October 1973 classifies as a hyperfloat.
June 29, 1976–January 1978 | Freely falling/Crawling peg to US dollar (Managed floating) |
February 1978–June 1978 | Preannounced crawling peg to US dollar/Freely falling (Managed floating) | The Tablita Plan.
July 1978–June 30, 1979 | Preannounced crawling peg to US dollar (Peg) | The Tablita Plan.
June 30, 1979–June 15, 1982 | Peg to US dollar (Peg) | The second phase of the Tablita Plan.
June 15, 1982–December 1982 | Freely falling/Managed floating/Dual market |
January 1983–December 8, 1984 | Managed floating/Dual market (Managed floating) | Parallel market premium reaches 102 percent in early 1983. In March 1983 the intention to follow a PPP rule was announced.
December 8, 1984–January 1988 | Managed floating/Dual market (Managed floating) | PPP rule. The official rate is kept within a ±2% crawling band to the US dollar.
February 1988–January 1, 1989 | De facto crawling band around US dollar/Dual market (Managed floating) | PPP rule. ±5% band. Official preannounced ±3% crawling band to the US dollar. While the official rate remains within the preannounced band, the parallel market premium remains in double digits.
January 1, 1989–January 22, 1992 | Preannounced crawling band around US dollar/Dual market (Managed floating) | PPP rule. Band width is ±5%.
January 22, 1992–January 20, 1997 | De facto crawling band around US dollar/Dual market (Managed floating) | PPP rule. Band is ±5%. There is an official preannounced ±10% crawling band to the US dollar. Parallel premium falls below 15 percent and into single digits.
January 20, 1997–June 25, 1998 | De facto crawling band to US dollar/Dual market (Managed floating) | Official preannounced crawling ±12.5% band to the US dollar; de facto band is ±5%.
June 25, 1998–September 16, 1998 | Preannounced crawling band to US dollar/Dual market (Managed floating) | ±2.75% band.
September 16, 1998–December 22, 1998 | Preannounced crawling band to US dollar/Dual market (Managed floating) | ±3.5% band.
December 22, 1998–September 2, 1999 | Preannounced crawling band to US dollar/Dual market (Managed floating) | ±8% band.
September 2, 1999–December 2001 | Managed floating (Independently floating) | Rates are unified. Reference currency is the US dollar.

Data availability: Official rate, 1900:1–2001:12. Parallel rate, 1946:1–1998:12.

Over some periods, however, the discrepancy between the official and parallel rate proved to be small. For example, from January 1992 onwards the parallel market premium remained in single digits, and our algorithm shows that it makes little difference whether the official or the parallel rate is used. In these instances, we leave the notation in the second column that there are dual rates (for information purposes), but also note in the third column that the premium is in single digits.

As noted, Chile has also experienced several periods in which twelve-month inflation exceeded 40 percent. Our algorithm automatically categorizes these as freely falling exchange rate regimes—unless there is a preannounced peg, crawling peg, or narrow band that is verified, as was the case when the Tablita program was introduced in February 1978.

The third column in our chronology gives further sundry information on the regime—e.g., the width of the announced and de facto bands, etc. For Chile, which followed a crawling band policy over many subperiods, it is particularly interesting to note the changes over time in the width of the bands. The third column also includes information about developments in the parallel market premium and currency reform. As an example of the former, we note that since 1992 the parallel premium has slipped into single digits; an example of the latter is given for Chile when the peso replaced the escudo in 1975.

The top panel of Figure V plots the path of the official and market-determined exchange rates for Chile from 1946. It is evident that through much of the period shown the arrangement was one of a crawling peg or a crawling band, with the rate of crawl varying through time and notably slowing as inflation began to stabilize following the Tablita plan of the early 1980s. The bottom panel plots the parallel market premium (in percent). This pattern is representative of many other countries in our sample; the premium skyrockets in periods of economic and political instability and declines into single digits as credible policies are put in place and capital controls are eased. As we will discuss in the next section, the Chilean case is also illustrative in that crawling pegs or bands are quite common. Figure VI, which shows the path of the exchange rate for the Philippines, India, and Greece, provides other examples of the plethora of crawling pegs and bands in our sample.

FIGURE V
Chile: Official and Market-Determined Exchange Rates and the Parallel Market Premium, January 1946–December 1998
Sources: International Monetary Fund, Annual Report on Exchange Arrangements and Exchange Restrictions and International Financial Statistics; Pick and Sédillot [1971]; International Currency Analysis, World Currency Yearbook, various issues.
FIGURE VI
The Prevalence of Crawling Pegs and Bands
Sources: Pick and Sédillot [1971]; International Currency Analysis, World Currency Yearbook, various issues.

III.C. Alternative Taxonomies: Comparing the Basic Categories

Altogether, our taxonomy of exchange rate arrangements includes the fourteen classifications sketched in Table V (or fifteen if hyperfloats are treated as a separate category). Of course, fourteen (or fifteen) buckets are not exhaustive; for example, one might wish to distinguish between forward- and backward-looking crawls or bands, along the lines of Cottarelli and Giannini [1998]. Given that we are covering the entire post-World War II period, we did not have enough information to make that kind of finer distinction. Conversely, because we sometimes want to compare our classification with the coarser official one, we also show how to collapse our fourteen types of arrangements into five broader categories; see Table V, where the least flexible arrangements are assigned the lowest values on our scale.

TABLE V
THE FINE AND COARSE GRIDS OF THE NATURAL CLASSIFICATION SCHEME

Natural classification bucket | Fine grid | Coarse grid
No separate legal tender | 1 | 1
Preannounced peg or currency board arrangement | 2 | 1
Preannounced horizontal band that is narrower than or equal to ±2% | 3 | 1
De facto peg | 4 | 1
Preannounced crawling peg | 5 | 2
Preannounced crawling band that is narrower than or equal to ±2% | 6 | 2
De facto crawling peg | 7 | 2
De facto crawling band that is narrower than or equal to ±2% | 8 | 2
Preannounced crawling band that is wider than ±2% | 9 | 2
De facto crawling band that is narrower than or equal to ±5% | 10 | 3
Noncrawling band that is narrower than or equal to ±2% (a) | 11 | 3
Managed floating | 12 | 3
Freely floating | 13 | 4
Freely falling (includes hyperfloat) | 14 | 5

Source: The authors.
a. By contrast to the common crawling bands, a noncrawling band refers to the relatively few cases that allow for both a sustained appreciation and depreciation of the exchange rate over time. While the degree of exchange rate variability in these cases is modest at higher frequencies (i.e., monthly), lower-frequency symmetric adjustment is allowed for. The Appendix provides a detailed discussion of our classification algorithm.

In the finer grid, we distinguish between preannounced policies and the less transparent de facto regimes. Since the former involve an explicit announcement while the latter leave it to financial market analysts to determine the implicit exchange rate policy, in the finer classification we treat preannouncement as less flexible than de facto, and we accordingly assign it a lower number on our scale. Those not interested in testing whether announcements serve as a coordinating device (say, to make a speculative attack more likely) and only interested in sorting out the degree of observed exchange rate flexibility will prefer the coarser grid. However, even in the coarse grid, it is imperative to treat freely falling as a separate category.
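Read as data, the two grids in Table V amount to a simple lookup from fine-grid codes to coarse-grid codes. The dictionary below is just our transcription of that table; it is not part of the authors' materials.

FINE_TO_COARSE = {
    1: 1,   # no separate legal tender
    2: 1,   # preannounced peg or currency board arrangement
    3: 1,   # preannounced horizontal band narrower than or equal to +/-2%
    4: 1,   # de facto peg
    5: 2,   # preannounced crawling peg
    6: 2,   # preannounced crawling band narrower than or equal to +/-2%
    7: 2,   # de facto crawling peg
    8: 2,   # de facto crawling band narrower than or equal to +/-2%
    9: 2,   # preannounced crawling band wider than +/-2%
    10: 3,  # de facto crawling band narrower than or equal to +/-5%
    11: 3,  # noncrawling band narrower than or equal to +/-2%
    12: 3,  # managed floating
    13: 4,  # freely floating
    14: 5,  # freely falling (includes hyperfloats)
}

def to_coarse(fine_code):
    # Collapse a fine-grid code (1-14) into the five coarse categories.
    return FINE_TO_COARSE[fine_code]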
IV. THE "NATURAL" TAXONOMY: CRITIQUES AND COMPARISONS

As the previous section described, our classification strategy relies importantly on the observed behavior of the market-determined exchange rate. In this section we first address some potential critiques of our approach, including whether a country's international reserve behavior should affect its classification, and whether we may be mislabeling some regimes as pegs or crawls simply due to the absence of shocks. We then proceed to compare our results with the "official history," and provide examples of how our reclassification may reshape some of the existing evidence on the links between exchange rate arrangements and various facets of economic activity.

IV.A. The Trilogy: Exchange Rates, Monetary Policy, and Capital Controls

To capture the nuances of any exchange rate arrangement, one might also want information on the presence and effectiveness of capital controls, the modalities of (sterilized or unsterilized) foreign exchange intervention, and the extent to which interest rates (or other less conventional types of intervention) are used as a means to stabilize the exchange rate. Since, for the purposes of universality, our classification rests squarely on the univariate time series behavior of the nominal exchange rates (combined with historical chronologies), in this subsection we address some of these limitations of our approach.

Some studies have reclassified exchange rate arrangements by also factoring in the behavior of foreign exchange reserves as reported by the IMF's International Financial Statistics. 24 However, as Calvo and Reinhart [2002] note, using reserves has serious limitations. In Brazil and in over two dozen other countries, foreign exchange market intervention is frequently done through purchases and sales of domestic dollar-linked debt. 25 This debt is not reflected in the widely used IFS reserve data, and neither were the massive interventions of the Thai authorities in the forward market during 1997, or those of South Africa thereafter. Furthermore, as financial liberalization has spread throughout the globe, there has been a widespread switch in the 1990s from direct intervention in the foreign exchange market to the use of interest rate policy as a means to stabilize the exchange rate. 26 Picking up on this kind of policy intervention requires having the policy interest rate—the equivalent of the federal funds rate for the United States—for each country. Such data are very difficult to come by, and none of the other efforts at reclassification have dealt with this issue.

Other issues arise in the context of the links between monetary policy, capital controls, and exchange rate policy. In particular, while fixing the exchange rate (or having narrow bands, crawling pegs, or crawling bands) largely defines monetary policy, our two most flexible arrangement categories (managed or freely floating) do not. Floating could be consistent with monetary targets, interest rate targets, or inflation targeting, the latter being a relatively recent phenomenon. 27 Since our study dates back to 1946, it spans a sea change in capital controls and monetary policy regimes, and it is beyond the scope of this paper to subdivide the monetary policy framework for the most flexible arrangements in our grid.

24. For instance, the algorithm used by Levy-Yeyati and Sturzenegger [2002] also uses (besides the exchange rate) reserves and base money. This gives rise to many cases of what they refer to as "one classification variable not available." This means that their algorithm cannot provide a classification for the United Kingdom (where it is hard to imagine such data problems) until 1987 and—in the most extreme of cases—some developing countries cannot be classified for any year over their 1974–2000 sample.
25. See Reinhart, Rogoff, and Savastano [2003] for a recent compilation of data on domestic dollar-linked debt.
26. There are plenty of recent examples where interest rates were jacked up aggressively to fend off a sharp depreciation in the currency. Perhaps one of the more obvious examples is in the wake of the Russian default in August 1998, when many emerging market currencies came under pressure and countries like Mexico responded by doubling interest rates (raising them to 40 percent) within a span of a couple of weeks.
27. Indeed, several of the inflation targeters in our sample (United Kingdom, Canada, Sweden, etc.) are classified as managed floaters. (However, it must also be acknowledged that there are many different variants of inflation targeting, especially in emerging markets.)

Apart from exchange rate policy, however, our study sheds considerable light on the third leg of the trinity—capital controls. While measuring capital mobility has not been the goal of this paper, our data consistently show that the parallel market premium dwindles into insignificance with capital market integration, providing a promising continuous measure of capital mobility.

IV.B. Exchange Rates and Real Shocks

Ideally, one would like to distinguish between exchange rate stability arising from deliberate policy actions (whether direct foreign exchange market intervention or interest rate policy, as discussed) and stability owing to the absence of economic or political shocks. In this subsection we provide evidence that, if the exchange rate is stable and it is accordingly treated in our de jure approach to classification, it is typically not due to an absence of shocks.

Terms of trade shocks are a natural source of potential shocks, particularly for many developing countries. Similarly, the presence (or absence) of shocks is likely to be reflected in the volatility of real GDP. To investigate the incidence and size of terms of trade shocks, we constructed monthly terms of trade series for 172 countries over the period 1960–2001. 28 The terms of trade series is a geometric weighted average of commodity prices (fixed weights based on the exports of 52 commodities). Table VI presents a summary by region of the individual country findings. The first column shows the share of commodities in total exports. Australia is our benchmark, as it is both a primary commodity exporter and a country with a floating exchange rate that, by some estimates, approximates an optimal response to terms of trade shocks (see Chen and Rogoff [2003]). The next three columns show the variance of the monthly change in the terms of trade of the region relative to Australia (σΔtot), the variance of the monthly change in the exchange rate of the region relative to Australia (σΔe), and the variance of the annual change in real GDP of the region relative to Australia (σΔy). The last two columns show the variance of the exchange rate relative to the variance of the terms of trade (σΔe/σΔtot) and output (σΔe/σΔy), respectively.

28. Table VI is based on the more extensive results in Reinhart, Rogoff, and Spilimbergo [2003].
A priori, adverse terms of trade shocks should be associated with depreciations, and the converse for positive terms of trade shocks; greater volatility in the terms of trade should go hand-in-hand with greater volatility in the exchange rate. (In Chen and Rogoff [2003] there is greater volatility even under optimal policy.) Table VI reveals several empirical regularities: (a) most countries (regions) have more variable terms of trade than Australia—in some cases, such as the Middle East and the Caribbean, as much as three or four times as variable; (b) real GDP is also commonly far more volatile than in Australia; (c) most countries' exchange rates appear to be far more stable than Australia's, as evidenced by relatively lower variances for most of the groups; and (d) following from the previous observations, the last two columns show that for most of the country groupings the variance of exchange rate changes is lower than that of changes in the terms of trade or real GDP. Taken together, the implication of these findings is that if the exchange rate is not moving, it is not for lack of shocks. Of course, terms of trade are only one class of shocks that can cause movement in the exchange rate. Thus, considering other kinds of shocks—political and economic, domestic and international—would only reinforce the results presented here.

TABLE VI
TERMS OF TRADE, OUTPUT, AND EXCHANGE RATE VARIABILITY: VARIANCE RATIOS (NORMALIZED TO AUSTRALIA AND EXCLUDING FREELY FALLING EPISODES)

Region | Share | σΔtot | σΔe | σΔy | σΔe/σΔtot | σΔe/σΔy
North Africa | 0.51 | 3.29 | 0.93 | 2.54 | 0.64 | 0.23
Rest of Africa (excluding CFA) | 0.56 | 2.92 | 2.87 | 2.50 | 1.29 | 1.38
Middle East | 0.60 | 4.15 | 0.95 | 3.48 | 0.33 | 0.50
Developing Asia/Pacific | 0.34 | 2.02 | 0.85 | 2.40 | 0.54 | 0.44
Industrialized Asia | 0.18 | 0.82 | 0.97 | 1.15 | 1.23 | 0.86
Caribbean | 0.50 | 4.15 | 0.67 | 2.40 | 0.20 | 0.35
Central America | 0.62 | 3.02 | 0.49 | 2.11 | 0.21 | 0.28
South America | 0.63 | 2.03 | 1.08 | 2.15 | 0.66 | 0.52
Central East Europe | 0.24 | 0.60 | 1.03 | 1.51 | 1.66 | 0.78
Western Europe | 0.18 | 1.75 | 0.84 | 1.25 | 0.76 | 0.56
North America | 0.33 | 1.64 | 0.60 | 1.12 | 0.47 | 0.54

Source: Reinhart, Rogoff, and Spilimbergo [2003] and sources cited therein. The variable definitions are as follows: Share = share of primary commodities in total exports; the next three columns show the variance of the monthly change in the terms of trade of the region relative to Australia (σΔtot), the variance of the monthly change in the exchange rate of the region relative to Australia (σΔe), and the variance of the annual change in real GDP of the region relative to Australia (σΔy); the last two columns show the variance of the exchange rate relative to the variance of the terms of trade (σΔe/σΔtot) and output (σΔe/σΔy), respectively.
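As a rough sketch of the normalizations behind Table VI (this is our reading of the construction, not the authors' code), each regional entry amounts to a ratio of variances of monthly or annual changes, with Australia in the denominator. The column layout and variable names below are illustrative.

import pandas as pd

def variance_ratios(monthly, annual_gdp, benchmark="Australia"):
    # monthly: DataFrame with MultiIndex columns (country, series), where the
    # series are 'tot' (terms of trade) and 'fx' (exchange rate).
    # annual_gdp: DataFrame of real GDP with one column per country.
    bench_tot = monthly[benchmark]["tot"].pct_change().var()
    bench_fx = monthly[benchmark]["fx"].pct_change().var()
    bench_gdp = annual_gdp[benchmark].pct_change().var()
    rows = {}
    for country in monthly.columns.get_level_values(0).unique():
        var_tot = monthly[country]["tot"].pct_change().var()
        var_fx = monthly[country]["fx"].pct_change().var()
        var_gdp = annual_gdp[country].pct_change().var()
        rows[country] = {
            "sigma_dtot": var_tot / bench_tot,    # terms-of-trade variance vs. Australia
            "sigma_de": var_fx / bench_fx,        # exchange rate variance vs. Australia
            "sigma_dy": var_gdp / bench_gdp,      # real GDP variance vs. Australia
            "sigma_de/sigma_dtot": var_fx / var_tot,
            "sigma_de/sigma_dy": var_fx / var_gdp,
        }
    return pd.DataFrame(rows).T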
IV.C. Fact and Fiction: Natural and Artificial?

We are now prepared to contrast the official view of the history of exchange rate regimes with the view that emerges from employing our alternative methodology. To facilitate comparisons, we will focus mainly on the coarse grid version of the Natural system.

Figure VII highlights some of the key differences between the Natural and IMF classifications. The dark portions of the bars denote the cases where there is overlap between the IMF and the Natural classification. 29 The white bars show the cases where the IMF labels the regime in one way (say, a peg in 1970–1973) and the Natural classification labels it differently. Finally, the striped portions of the bars indicate the cases where the Natural classification labels the regime in one way (say, freely falling, 1991–2001) and the IMF labels it differently (say, freely floating).

29. Specifically, both classifications assigned the regime for a particular country in a given year to the same category.

FIGURE VII
Comparison of Exchange Rate Arrangements According to the IMF Official and Natural Classifications, 1950–2001
Sources: International Monetary Fund, Annual Report on Exchange Arrangements and Exchange Restrictions and International Financial Statistics; Pick and Sédillot [1971]; International Currency Analysis, World Currency Yearbook, various issues. The dark bars show the overlap between the IMF and Natural classifications (i.e., for that particular year the IMF and Natural classifications coincide); the white bars show the cases where the IMF classification labeled the regime in one way (say, a peg in 1974–1990) and the Natural classification labeled it differently; the striped bars indicate the cases where the Natural classification labeled the regime in one way (say, freely falling) and the IMF labeled it differently (say, freely floating).

As shown in Figure VII, according to our Natural classification system, about 40 percent of all regimes in 1950 were pegs (since many countries had dual/parallel rates that did not qualify as pegs). Figure VII also makes plain that some of the "pegs" in our classification were not considered pegs under the official classification; in turn, our algorithm rejects almost half of the official pegs as true pegs. Our reclassification of the early postwar years affects not only developing countries but industrialized countries as well; nearly all the European countries had active parallel markets after World War II.

A second reason why our scheme shows fewer pegs is that the IMF's pre-1997 scheme allowed countries to declare their regimes as "pegged to an undisclosed basket of currencies." This notably nontransparent practice was especially popular during the 1980s, and it was also under this label that a great deal of managed floating, freely floating, and freely falling actually took place. For the period 1974–1990 the official classification has roughly 60 percent of all regimes as pegs; our classification has only half as many. Again, as we see in Figure VII, this comparison understates the differences, since some of our pegs are not official pegs and vice versa.

For the years 1974–1990 and 1991–2001, one can see two major trends. First, "freely falling" continues to be a significant category, accounting for 12 percent of all regimes in 1974–1990 and 13 percent in 1991–2001. For the transition economies in the 1990s, over 40 percent of the observations are in the freely falling category. Of course, what we are reporting in Figure VII is the incidence of each regime. Clearly, future research could use GDP weights and—given that low-income countries are disproportionately represented in the freely falling category—this would reveal a lower importance for this category. 30

Second, the Natural classification scheme reveals a bunching toward the middle in terms of exchange rate flexibility, when compared with the official monetary history of the world.
Limited flexibility—which under the Natural classification is dominated by de facto crawling pegs—becomes notably more important. From being a very small class under the official scheme, the Natural classification algorithm elevates limited flexibility to the second most important grouping over the past decade, just behind pegs.

Another startling difference is the reduced importance of freely floating. According to the official classification, more than 30 percent of countries were independently floating during 1991–2001. According to the Natural classification, less than 10 percent were freely floating. This is partly a manifestation of what Calvo and Reinhart [2002] term "fear of floating," but equally because we assign high-inflation floats (including ones that are officially "pegs") to our new freely falling category. Indeed, more countries had freely falling exchange rates than had freely floating exchange rates!

The contrast between the IMF and Natural classification systems becomes even more striking when one sees just how small the overlap is between the two classifications, country by country and year by year. As shown in Table VII, if the IMF designation of the regime is a peg (1970–2001), there is a 44 percent probability that our algorithm will place it in a more flexible arrangement. If the official regime is a float, there is a 31 percent chance we will categorize it as a peg or limited flexibility. If the official regime is a managed float, there is a 53 percent chance our algorithm will categorize it as a peg or limited flexibility. Whether the official regime is a float or a peg, it is virtually a coin toss whether the Natural algorithm will yield the same result. The bottom of the table gives the pairwise correlation between the two classifications, with the official classification running from 1 (peg) to 4 (independently floating), and the Natural classification running from 1 (peg) to 5 (freely falling). The simple correlation coefficient is only 0.42. As one can confirm from the chronologies, the greatest overlap occurs in the classification of the G3 currencies and of the limited flexibility European arrangements. Elsewhere, and especially in developing countries, the two classifications differ significantly, as we shall see.

30. GDP weights and population weights would, of course, present very different pictures. For example, the United States and Japan alone would increase the world's share of floaters under GDP weights, while weighting by population would increase the weight of fixers because of China alone.

IV.D. The Pegs That Float

Figure VIII plots the parallel market premium since January 1946, in percent, for Africa, Asia, Europe, and the Western Hemisphere. As is evident from Figure VIII, for all the regions except Europe it would be difficult to make the case that the breakdown of Bretton Woods was a singular event, let alone a sea change. 31 For the developing world, the levels of pre- and post-1973 volatility in the market-determined exchange rate, as revealed by the parallel market premium, are remarkably similar. Note that for all regions we exclude the freely falling episodes, which would significantly increase the volatility but also distort the scale. To give a flavor of the cross-country variation within regions and across time, the dashed line plots the regional average plus one standard deviation (calculated across countries and shown as a five-year moving average).
As regards Europe, the story told by Figure VIII is consistent with the characterization of the Bretton Woods system as a period in which true exchange rate stability was remarkably short-lived. From 1946 until the late 1950s, while Europe was not floating in the modern sense—as most currencies were not convertible—it had some variant of de facto floating under the guise of pegged official exchange rates. Each time official rates were realigned, the story had already unfolded in the parallel market (as shown earlier in Figure II). While the volatility of the gap between the official rate and the market exchange rate is not quite of the order of magnitude observed in the developing world, the volatility of the parallel rate is quite similar to the volatility of today's managed or freely floating exchange rates. 32 There are many cases that illustrate clearly that little changed before and after the breakup of Bretton Woods. 33 Clearly, more careful statistical testing is required to make categorical statements about when a structural break took place; but it is obvious from the figures that whatever break might have taken place hardly lives up to the usual image of the move from fixed to flexible rates.

31. We plot the premium rather than the market-determined rate, as it allows us to aggregate across countries in comparable units (percent).
32. See Bordo [1993] on Bretton Woods and Bordo [2003] for a historical perspective on the evolution of exchange rate arrangements.
33. The country-by-country figures in "The Country Chronologies and Chartbook, Background Material to A Modern History of Exchange Rate Arrangements: A Reinterpretation" at http://www.puaf.umd.edu/faculty/papers/reinhart/reinhart.htm are particularly revealing in this regard.

TABLE VII
FLOATING PEGS AND PEGGED FLOATS: REVISITING THE PAST, 1970–2001

Conditional probability that the regime is: (in percent)
"Other" according to NC (a), conditional on being classified "Peg" by the IMF: 44.5
"Peg" or "Limited Flexibility" according to NC, conditional on being classified "Managed Floating" by the IMF: 53.2
"Peg" or "Limited Flexibility" according to NC, conditional on being classified "Independently Floating" by the IMF: 31.5
Pairwise correlation between the IMF and NC classifications: 42.0

Sources: The authors' calculations.
a. NC refers to the Natural Classification; "Other" according to NC includes limited flexibility, managed floating, freely floating, and freely falling.

FIGURE VIII
Average Monthly Parallel Market Premium: 1946–1998
Sources: International Monetary Fund, Annual Report on Exchange Arrangements and Exchange Restrictions and International Financial Statistics; Pick and Sédillot [1971]; International Currency Analysis, World Currency Yearbook, various issues. The solid line represents the average monthly parallel market premium, while the dashed line shows the five-year moving average of plus one standard deviation. The regional averages are calculated excluding the freely falling episodes.

IV.E. The Floats That Peg

Figure IX provides a general flavor of how exchange rate flexibility has evolved over time and across regions. The figure plots five-year moving averages of the probability that the monthly percent change in the exchange rate remains within a 2 percent band for Africa, Asia, Europe, and the Western Hemisphere (excluding only the United States). Hence, under a pegged arrangement, assuming no adjustments to the parity, these probabilities should equal 100 percent. As before, we exclude the freely falling episodes. For comparison purposes, the figures plot the unweighted regional averages against the unweighted averages for the "committed floaters." (The committed floaters include the following exchange rates against the dollar: the yen, DM (euro), Australian dollar, and UK pound.)
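The statistic just described for Figure IX can be written compactly. The sketch below is only an illustration of that construction (a rolling five-year share of months in which the absolute monthly change stays inside a ±2 percent band); the inputs and names are placeholders, not the authors' code.

def within_band_share(exchange_rate, band_pct=2.0, window=60):
    # exchange_rate: monthly pandas Series of a currency against its anchor.
    # Returns the rolling five-year (60-month) share of months, in percent,
    # in which the absolute monthly percent change stays inside the band.
    pct_change = exchange_rate.pct_change() * 100.0
    inside = (pct_change.abs() <= band_pct).astype(float)
    return inside.rolling(window).mean() * 100.0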
The dashed lines, which show plus/minus one standard deviation around the regional averages, highlight the differences between the group of floaters and the regional averages. It is evident for all regions (this applies the least to Africa) that the monthly percent variation in the exchange rate has typically been kept to a minimum—there is a great deal of smoothing of exchange rate fluctuations in all regions when compared with the usual monthly variations of the committed floaters. The smoothing is most evident in Asia, where the index hovers around 90 percent for most of the period, versus 60–70 percent for the floaters. Hence, over time, the nature of the classification problem has evolved from labeling something as a peg when it is not, to labeling something as floating when the degree of exchange rate flexibility has in fact been very limited.

32. See Bordo [1993] on Bretton Woods and Bordo [2003] for a historical perspective on the evolution of exchange rate arrangements.
33. The country-by-country figures in "The Country Chronologies and Chartbook, Background Material to A Modern History of Exchange Rate Arrangements: A Reinterpretation" at http://www.puaf.umd.edu/faculty/papers/reinhart/reinhart.htm are particularly revealing in this regard.

FIGURE IX
Absolute Monthly Percent Change in the Exchange Rate: Percent of Observations within a ±2 Percent Band (five-year moving average)
Sources: International Monetary Fund, Annual Report on Exchange Arrangements and Exchange Restrictions and International Financial Statistics; Pick and Sédillot [1971]; International Currency Analysis, World Currency Yearbook, various issues. The solid line represents the average for the group, while the dashed lines show plus/minus one standard deviation. The regional averages are calculated excluding the freely falling episodes.

IV.F. Does the Exchange Rate Regime Matter?

The question of whether the exchange rate arrangement matters for various facets of economic activity has, indeed, been a far-reaching issue over the years in the literature on international trade and finance, and is beyond the scope of this paper. In this subsection we present a few simple exercises that do not speak to possible causal patterns between exchange rate regimes and economic performance, but are meant to illustrate the potential usefulness of our classification. First, consider Table VIII, which separates dual/parallel markets from all the other regimes where the "exchange rate is unitary," to employ the language of the IMF. The top row shows average inflation rates and real per capita GDP growth for the period 1970–2001 for dual arrangements separately from all other regimes. This two-way split drastically alters the picture presented by the IMF's classification in the top and fourth rows of Table IX, which does not treat dual markets as a separate category.

TABLE VIII
INFLATION AND PER CAPITA REAL GDP GROWTH: A COMPARISON OF DUAL (OR MULTIPLE) AND UNIFIED EXCHANGE RATE SYSTEMS, 1970–2001
  Regime                              Average annual inflation rate   Average per capita real GDP growth
  Unified exchange rate               19.8                            1.8
  Dual (or multiple) exchange rates   162.5                           0.8
Sources: International Monetary Fund, Annual Report on Exchange Arrangements and Exchange Restrictions and International Financial Statistics; Pick and Sédillot [1971]; International Currency Analysis, World Currency Yearbook, various issues. The averages for the two regime types (unified and dual) are calculated on a country-by-country and year-by-year basis. Thus, if a country has a unified exchange rate for most of the year, the observation for that year is included in the averages for unified rates; if in the following year that same country introduces a dual market (or multiple rate) for most of the year, the observation for that year is included in the average for dual rates. This treatment allows us to deal with transitions across regime types over time.
Dual (or multiple) exchange rate episodes are associated with an average inflation rate of 163 percent, versus 20 percent for unified exchange markets—and growth is one percentage point lower for dual arrangements. The explanation for this gap between the outcomes shown in Table VIII and the IMF's in Table IX is twofold. First, 62 percent of the freely falling cases during 1970–2001 were associated with parallel markets or dual or multiple exchange rates. Second, the high-inflation cases classified by the IMF as freely floating were moved to the freely falling category in the Natural classification. Again, we caution against overinterpreting the results in Table VIII as evidence of causality, as exchange controls and dual markets are often introduced amid political and economic crises—as the recent controls in Argentina (2001) and Venezuela (2003) attest.

As Table IX highlights, according to the IMF, only limited flexibility cases record moderate inflation. On the other hand, freely floating cases record the best inflation performance (9 percent) in the Natural classification. Freely falling regimes exhibit an average annual inflation rate of 443 percent, versus an inflation average in the 9 to 17 percent range for the other categories (Table IX).

TABLE IX
DO CLASSIFICATIONS MATTER? GROWTH, INFLATION, AND TRADE ACROSS REGIMES, 1970–2001
Classification scheme:   Peg    Limited flexibility   Managed floating   Freely floating   Freely falling
Average annual inflation rate
  IMF Official           38.8   5.3                   74.8               173.9             n.a.
  Natural                15.9   10.1                  16.5               9.4               443.3
Average annual per capita real GDP growth
  IMF Official           1.4    2.2                   1.9                0.5               n.a.
  Natural                1.9    2.4                   1.6                2.3               −2.5
Exports plus imports as a percent of GDP
  IMF Official           69.9   81.0                  65.8               60.6              n.a.
  Natural                78.7   80.3                  61.2               44.9              57.1
Source: International Monetary Fund, World Economic Outlook. An n.a. denotes not available. The averages for each regime type (peg, limited flexibility, etc.) are calculated on a country-by-country and year-by-year basis. Thus, if a country has a pegged exchange rate for most of the year, the observation for that year is included in the averages for pegs; if in the following year that same country has a managed float for most of the year, the observation for that year is included in the average for managed floats. This treatment allows us to deal with transitions across regime types over time.

The contrast is also significant both in terms of the level of per capita GDP (Figure X) and per capita growth (Figure XI and Table IX). Freely falling has the lowest per capita income (US $3,476) of any category—highlighting that the earlier parallel to the HIPC debtor is an apt one—while freely floating has the highest (US $13,602). In the official IMF classification, limited flexibility, which was almost entirely comprised of European countries, shows the largest per capita income. Growth is negative for the freely falling cases (−2.5 percent), versus growth rates in the 1.6–2.4 percent range for the other categories.
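For concreteness, the regime-by-regime averages in Tables VIII and IX amount to a simple grouped mean over country-year observations, each carrying the regime that prevailed for most of that year. A toy sketch with hypothetical column names (not the authors' code):

```python
import pandas as pd

def regime_averages(panel):
    """panel: country-year data with columns
    ['country', 'year', 'regime', 'inflation', 'growth'], where 'regime' is the
    arrangement in place for most of that year, so a country that switches
    regimes contributes to different regime averages in different years."""
    return panel.groupby("regime")[["inflation", "growth"]].mean()
```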
Once freely falling is a separate category, the differences between our other classifications pale relative to the differences between freely falling and all others (Table VIII). In the official IMF classification, the independently floating cases show a meager average growth rate of 0.5 percent. For the Natural classification, the average growth rate quadruples for the floaters, to 2.3 percent. Clearly, this exercise highlights the importance of treating the freely falling episodes separately.

FIGURE X
PPP-Adjusted GDP per Capita across Regime Types: 1970–2001 (averaging over all regions)

FIGURE XI
Real per Capita GDP Growth across Regime Types: 1970–2001 (averaging over all regions)
Sources: International Monetary Fund, Annual Report on Exchange Arrangements and Exchange Restrictions and International Financial Statistics; Pick and Sédillot [1971]; International Currency Analysis, World Currency Yearbook, various issues. The averages for each regime type (peg, limited flexibility, etc.) are calculated on a country-by-country and year-by-year basis. Thus, if a country has a pegged exchange rate for most of the year, the observation for that year is included in the averages for pegs; if in the following year that same country has a managed float for most of the year, the observation for that year is included in the average for managed floats. This treatment allows us to deal with transitions across regime types over time.

V. CONCLUDING REMARKS

According to our Natural classification, across all countries for 1970–2001, 45 percent of the observations officially labeled as a "peg" should, in fact, have been classified as limited flexibility, managed or freely floating—or worse, "freely falling." Post-Bretton Woods, a new type of misclassification problem emerged, and the odds of being officially labeled a "managed float" when there was a de facto peg or crawling peg were about 53 percent. We thus find that the official and other histories of exchange rate arrangements can be profoundly misleading, as a striking number of pegs are much better described as floats, and vice versa. These misclassification problems may cloud our view of history along some basic dimensions. Using the IMF's classification for the period 1970 to 2001, for instance, one would conclude that a freely floating exchange rate is not a very attractive option—it produces an average annual inflation rate of 174 percent and a paltry average per capita growth rate of 0.5 percent. This is the worst performance of any arrangement. Our classification presents a very different picture: free floats deliver average inflation of less than 10 percent (the lowest of any exchange rate arrangement) and an average per capita growth rate of 2.3 percent. Equally importantly, we find that unified exchange rate regimes vastly outperform dual or multiple exchange rate arrangements, although one cannot necessarily interpret these differences as causal. While we have focused in this paper on the exchange rate arrangement classification issue, the country histories and data provided in this paper may well have consequences for theory and empirics going forward, especially on the issue of accounting for dual and parallel markets.
In her classic history of the IMF, de Vries [1969] looked back at the early years of the Bretton Woods regime and noted:

Multiple exchange rates were one of the first problems that faced the Fund in 1946, and have probably been its most common problem in the field of exchange rates. An impressive number and diversity of countries in the last twenty years have experimented with one form or another of what the Fund has called multiple currency practices, at least for a few if not most of their transactions . . . The problem of multiple rates, then, never seems entirely at an end.

Thirty-four years have passed since this history was written, and multiple exchange rate practices are showing no signs of becoming passé. In December 2001 Argentina suspended convertibility and, in so doing, segmented the market for foreign exchange, while on February 7, 2003, Venezuela introduced strict new exchange controls—de facto creating a multiple exchange rate system. Some things never change.

APPENDIX: THE DETAILS OF THE "NATURAL" CLASSIFICATION

This appendix describes the details of our classification algorithm, which is outlined in Section III of the paper. We concentrate on the description of the fine grid as shown in Table V.

A. Exchange Rate Flexibility Indices and Probability Analysis

Our judgment about the appropriate exchange rate classification is shaped importantly by the time series of several measures of exchange rate variability, based on monthly observations and averaged over two-year and five-year rolling windows. The first of these measures is the absolute percent change in the monthly nominal exchange rate. We prefer the mean absolute change to the variance in order to minimize the impact of outliers. These outliers arise when, for example, there are long periods in which the exchange rate is fixed but, nonetheless, subject to rare but large devaluations.

To assess whether exchange rate changes are kept within a band, we calculate the probabilities that the exchange rate remains within a plus/minus 1, 2, and 5 percent-wide band over any given period. Two percent seems a reasonable cutoff to distinguish between the limited flexibility cases and more flexible arrangements, as even in the Exchange Rate Mechanism arrangement in Europe, ±2¼ percent bands were allowed. As with the mean absolute deviation, these probabilities are calculated over two-year and five-year rolling windows. Unless otherwise noted in the chronologies, we use the five-year rolling windows as our primary measure for the reasons discussed in Section III of the paper. These rolling probabilities are especially useful for detecting implicit unannounced pegs and bands.

B. De Jure and de Facto Pegs and Bands

Where the chronologies show the authorities explicitly announcing a peg, we shortcut the de facto dating scheme described below and zero in on the date announced as the start of the peg. We then confirm (or not) the peg by examining the mean absolute monthly change over the period following the announcement. The chronologies we develop, which give the day, month, and year when a peg becomes operative, are essential to our algorithm. There are two circumstances where we need to go beyond simply verifying the announced peg. The first case is where our chronologies indicate that the peg applies only to an official rate and that there is an active parallel (official or illegal) market.
As shown in Figure III, in these cases we apply the same battery of tests to the parallel market exchange rate as we do to the official rate in a unified market. Second, there are the cases where the official policy is a peg to an undisclosed basket of currencies. In these cases, we verify whether the "basket" peg is really a de facto peg to a single dominant currency (or to the SDR). If no dominant currency can be identified, we do not label the episode as a peg. Potentially, of course, we may be missing some de facto basket pegs, though in practice this is almost certainly not a major issue.

We now describe our approach toward detecting de facto pegs. If there is no officially announced peg, we test for a "de facto" peg in two ways. First, we examine the monthly absolute percent changes. If the absolute monthly percent change in the exchange rate is equal to zero for four consecutive months or more, that episode is classified (for however long it lasts) as a de facto peg if there are no dual or multiple exchange rates. This allows us to identify short-lived de facto pegs as well as those with a longer duration. For instance, this filter allowed us to identify the Philippines' de facto peg to the US dollar during 1995–1997 in the run-up to the Asian crisis, as well as the numerous European de facto pegs to the DM well ahead of the introduction of the euro. Second, we compute the probability that the monthly exchange rate change remains within a 1 percent band over a rolling five-year period: 34

P(ε < 1%),

where ε is the monthly absolute percentage change in the exchange rate. If this probability is 80 percent or higher, then the regime is classified as a de facto peg or crawling peg over the entire five-year period. If the exchange rate has no drift, it is classified as a fixed parity; if a positive drift is present, it is labeled a crawling peg; and if the exchange rate also goes through periods of both appreciation and depreciation, it is dubbed a "noncrawling" peg. Our choice of an 80 percent threshold is not accidental; we chose this value because it appears to do a very good job of detecting regimes one would want to label as pegs, without drawing in a significant number of "false positives."

Our approach regarding preannounced and de facto bands follows exactly the same process as that for detecting preannounced and de facto pegs; we simply replace the ±1% band with a ±2% band in the algorithm. If a band is announced and the chronologies show a unified exchange market, we label the episode as a band unless it had already been identified as a de facto peg by the criteria described earlier. But, importantly, we also verify whether the announced and de facto bands coincide, especially as there are numerous cases where the announced (de jure) band is much wider than the de facto band. 35 To detect such cases, we calculate the probability that the monthly exchange rate change remains within a ±2% band over a rolling five-year period:

P(ε < 2%).

If this probability is 80 percent or higher, then the regime is classified as a de facto narrow horizontal, crawling, or noncrawling band (which allows for both a sustained appreciation and depreciation) over the period through which it remains continuously above the 80 percent threshold.

34. There are a handful of cases where a two-year window is used. In such instances, it is noted in the chronologies.
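A rough sketch of the two de facto peg filters just described, assuming monthly percent changes are supplied and dual/multiple-rate episodes have been screened out separately (the function names are ours; the 4-month run, ±1 percent band, and 80 percent cutoff are the values quoted in the text):

```python
import pandas as pd

def zero_change_peg_months(monthly_pct_change, min_months=4):
    """First filter: flag months belonging to a run of at least `min_months`
    consecutive months with a zero monthly change in the exchange rate."""
    is_zero = monthly_pct_change == 0.0
    run_id = (is_zero != is_zero.shift()).cumsum()       # label consecutive runs
    run_len = is_zero.groupby(run_id).transform("size")  # length of each run
    return is_zero & (run_len >= min_months)

def de_facto_peg_windows(monthly_pct_change, band=1.0, window=60, threshold=0.80):
    """Second filter: a rolling five-year window is treated as a de facto peg or
    crawling peg when the share of months with |change| < `band` percent is at
    least `threshold`.  The split into fixed parity, crawling peg, or noncrawling
    peg is then made from the drift of the exchange rate, as in the text."""
    prob = (monthly_pct_change.abs() < band).rolling(window).mean()
    return prob >= threshold
```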
In the case where the preannounced bands are wide (meaning equal to or greater than ±5%), we also verify ±5% bands. The specifics for each case are discussed in the country chronologies. For instance, as shown earlier in Table IV, in the case of Chile we found that the de facto band during 1992–1998 was narrower (±5%) than the band announced at the time (±10% and ±12.5%). In the case of Libya, which had an announced 77 percent wide band along a fixed central parity pegged to the SDR over March 1986–December 2001, we detected a ±5% crawling band to the US dollar.

C. Freely Falling

As we emphasize in the text, there are situations, almost invariably due to high inflation or hyperinflation, in which there are mega-depreciations in the exchange rate on a routine and sustained basis. We have argued that it is inappropriate and misleading to lump these cases—which is what all previous classifications (IMF or otherwise) do—with floating rate regimes. We label episodes freely falling on the basis of two criteria. First, periods where the twelve-month rate of inflation equals or exceeds 40 percent are classified as freely falling unless they have been identified as some form of preannounced peg or preannounced narrow band by the above criteria. 36 The 40 percent inflation threshold is not entirely arbitrary, as it has been identified as an important benchmark in the literature on the determinants of growth (see Easterly [2001]). As a special subcategory of freely falling, we dub as hyperfloats those episodes that meet Cagan's [1956] classic definition of hyperinflation (50 percent or more inflation per month).

The second situation in which we classify an exchange rate regime as freely falling is the six months immediately following a currency crisis—but only for those cases where the crisis marks a transition from a fixed or quasi-fixed regime to a managed or independently floating regime. 37 Such episodes are typically characterized by exchange rate overshooting. This is another situation where a large change in the exchange rate does not owe to a deliberate policy; it is the reflection of a loss of credibility and recurring speculative attacks. To date these crisis episodes, we follow a variant of the approach suggested by Frankel and Rose [1996]. Namely, any month where the depreciation exceeds or equals 12½ percent and also exceeds the preceding month's depreciation by at least 10 percent is identified as a crisis. 38 To make sure that this approach yields plausible crisis dates, we supplement the analysis with our extensive country chronologies, which also shed light on balance of payments difficulties. 39

35. Mexico's exchange rate policy prior to the December 1994 crisis is one of numerous examples of this pattern. Despite the fact that the band was widening over time, as the floor of the band was fixed and the ceiling was crawling, the peso remained virtually pegged to the US dollar for extended periods of time.
36. It is critical that the peg criteria supersede the high inflation criteria in the classification strategy, since historically a majority of inflation stabilization efforts have used the exchange rate as the nominal anchor, and in many of these episodes inflation rates at the outset of the peg were well above our 40 percent threshold.
37. This rules out cases where there was a devaluation and a repeg, and cases where the large exchange rate swing occurred in the context of an already floating rate.
38. Frankel and Rose [1996] do not date the specific month of the crisis but the year; their criteria call for a 25 percent (or higher) depreciation over the year.
39. For instance, the Thai crisis of July 1997 does not meet the modified Frankel-Rose criteria. While the depreciation in July exceeded that of the preceding month by more than 10 percent, the depreciation of the Thai baht in that month did not exceed 25 percent. For these cases, we rely on the chronologies of events.
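The two freely-falling criteria and the modified Frankel-Rose crisis dating could be sketched roughly as follows (not the authors' code; the override for preannounced pegs is passed in as a flag, and the additional requirement that a crisis mark a transition out of a fixed or quasi-fixed regime is only indicated in a comment):

```python
import pandas as pd

def freely_falling_flags(monthly_inflation, depreciation, preannounced_peg):
    """monthly_inflation: monthly inflation rate in percent.
    depreciation: monthly percent depreciation of the currency (positive = weaker).
    preannounced_peg: boolean Series, True while a preannounced peg or
    preannounced narrow band is in force (which supersedes the inflation rule)."""
    # Criterion 1: twelve-month inflation of 40 percent or more.
    gross = 1.0 + monthly_inflation / 100.0
    twelve_month = (gross.rolling(12).apply(lambda x: x.prod()) - 1.0) * 100.0
    high_inflation = (twelve_month >= 40.0) & ~preannounced_peg
    # Hyperfloats: Cagan's definition, 50 percent or more inflation per month.
    hyperfloat = monthly_inflation >= 50.0
    # Criterion 2: crisis months under the modified Frankel-Rose rule
    # (depreciation of at least 12.5 percent that also exceeds the previous
    # month's depreciation by at least 10 percentage points) ...
    crisis = (depreciation >= 12.5) & (depreciation - depreciation.shift(1) >= 10.0)
    # ... and the six months immediately following such a crisis, provided the
    # crisis marks a transition from a fixed or quasi-fixed regime to a float
    # (that regime-transition check is not implemented in this sketch).
    post_crisis = (crisis.shift(1, fill_value=False).astype(float)
                   .rolling(6, min_periods=1).max().astype(bool))
    return pd.DataFrame({"freely_falling": high_inflation | post_crisis,
                         "hyperfloat": hyperfloat})
```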
Since, as a rule, freely falling is not typically an explicit arrangement of choice, our chronologies also provide, for all the freely falling cases, the underlying de jure or de facto arrangement (for example, dual markets, independently floating, etc.).

D. Managed and Freely Floating

Our approach toward identifying managed and freely floating episodes is basically to create these classes out of the residual pool of episodes that, after comprehensive application of our algorithm, have not been identified as an explicit or implicit peg or some form of band, and that are not included in the freely falling category. To proxy the degree of exchange rate flexibility under freely floating and managed floats, we construct a composite statistic,

ε / P(ε < 1%),

where the numerator is the mean absolute monthly percent change in the exchange rate over a rolling five-year period, while the denominator flags the likelihood of small changes. For de jure or de facto pegs, this index will be very low (close to or equal to zero), while for the freely falling cases it will be very large. As noted, we only focus on this index for those countries and periods that are candidates for freely or managed floating. We tabulate the frequency distribution of our index for the currencies that are most transparently floating; these include US dollar/DM-euro, US dollar/yen, US dollar/UK pound, US dollar/Australian dollar, and US dollar/New Zealand dollar, beginning on the date on which the float was announced. We pool the observations (the ratio for rolling five-year averages) for all the floaters. So, for example, since Brazil floated the real in January 1999, we would calculate the ratio only from that date forward. If Brazil's ratio falls inside the 99 percent confidence interval (the null hypothesis is freely floating, and hence the rejection region is located at the lower tail of the distribution of the floaters' group), the episode is characterized as freely floating. If that ratio falls in the lower 1 percent tail, the null hypothesis of freely floating is rejected in favor of the alternative hypothesis of a managed float. It is important to note that "managed" by this definition does not necessarily imply active or frequent foreign exchange market intervention—it refers to the fact that, for whatever reason, our composite exchange rate variability index, ε/P(ε < 1%), does not behave like the indices for the freely floaters.
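A compact sketch of the composite flexibility index and the one-sided test against the pooled distribution for the committed floaters (the naming is ours; the 1 percent tail and five-year window follow the text):

```python
import pandas as pd

def flexibility_index(monthly_pct_change, window=60):
    """Mean absolute monthly percent change divided by the probability of
    small (<1 percent) changes, over a rolling five-year window.
    Near zero for pegs; very large for freely falling episodes."""
    mean_abs = monthly_pct_change.abs().rolling(window).mean()
    prob_small = (monthly_pct_change.abs() < 1.0).rolling(window).mean()
    return mean_abs / prob_small

def is_freely_floating(candidate_index, pooled_floater_indices, alpha=0.01):
    """Null hypothesis: freely floating.  Reject (i.e., call it a managed float)
    only when the candidate's index falls in the lower `alpha` tail of the pooled
    distribution of indices for the committed floaters."""
    cutoff = pooled_floater_indices.quantile(alpha)
    return candidate_index >= cutoff
```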
E. Dual or Multiple Exchange Rate Regimes and Parallel Markets

Dual rates are essentially a hybrid arrangement. There are cases or periods in which the premium is nil and stable, so that the official rate is representative of the underlying monetary policy. The official exchange rate could be pegged, crawling, or maintained within some bands, or in a few cases allowed to float. But there are countless episodes where the divergence between the official and parallel rate is so large that the picture is incomplete without knowledge of what the parallel market rate is doing. The country chronologies are critical in identifying these episodes. In the cases where dual or multiple rates are present or parallel markets are active, we focus on the market-determined rates instead of the official exchange rates. As shown in Figure III, we subject the market-determined exchange rate (dual, multiple, or parallel) to the battery of tests described above. 40 This particular category will especially reshape how we view the 1940s through the 1960s, when about half the cases in the sample involved dual markets.

40. There are a few such cases in the sample, where only government transactions take place at the official rate.

UNIVERSITY OF MARYLAND, COLLEGE PARK
HARVARD UNIVERSITY

REFERENCES

Bahmani-Oskooee, Mohsen, Ilir Miteza, and A. B. M. Nasir, "The Long-Run Relationship between Black Market and Official Exchange Rates: Evidence from Panel Cointegration," Economics Letters, LXXVI (2002), 397–404.
Baxter, Marianne, and Alan Stockman, "Business Cycle and Exchange Rate Regime: Some International Evidence," Journal of Monetary Economics, XXIII (1989), 377–400.
Bordo, Michael, "The Bretton Woods International Monetary System: A Historical Overview," in A Retrospective on the Bretton Woods System, Michael Bordo and Barry Eichengreen, eds. (Chicago, IL: University of Chicago Press, 1993), pp. 3–98.
——, "Exchange Rate Regimes in Historical Perspective," National Bureau of Economic Research Working Paper No. 9654, 2003.
Cagan, Philip, "The Monetary Dynamics of Hyperinflation," in Studies in the Quantity Theory of Money, Milton Friedman, ed. (Chicago, IL: University of Chicago Press, 1956), pp. 25–117.
Calvo, Guillermo A., and Carmen M. Reinhart, "Fear of Floating," Quarterly Journal of Economics, CXVII (2002), 379–408.
Chen, Yu-chin, and Kenneth S. Rogoff, "Commodity Currencies," Journal of International Economics, LX (2003), 133–160.
Claessens, Stijn, "Estimates of Capital Flight and Its Behavior," Revista de Análisis Económico, XII (1997), 3–34.
Cottarelli, C., and C. Giannini, "Credibility Without Rules? Monetary Frameworks in the Post Bretton-Woods Era," IMF Occasional Paper No. 154 (Washington, DC: International Monetary Fund, 1998).
de Vries, Margaret G., "Multiple Exchange Rates," in The International Monetary Fund 1945–1965, Margaret de Vries and J. Keith Horsefield, eds. (Washington, DC: International Monetary Fund, 1969), pp. 122–151.
Easterly, William, The Elusive Quest for Growth (Cambridge, MA: MIT Press, 2001).
Frankel, Jeffrey A., and Andrew K. Rose, "Currency Crashes in Emerging Markets: An Empirical Treatment," Journal of International Economics, XLI (1996), 351–368.
Ghei, Nita, Miguel A. Kiguel, and Stephen A. O'Connell, "Parallel Exchange Rates in Developing Countries: Lessons from Eight Case Studies," in Parallel Exchange Rates in Developing Countries, Miguel Kiguel, J. Saul Lizondo, and Stephen O'Connell, eds. (New York, NY: Saint Martin's Press, 1997), pp. 17–76.
Ghosh, Atish, Anne-Marie Gulde, Jonathan Ostry, and Holger Wolf, "Does the Nominal Exchange Rate Regime Matter?" National Bureau of Economic Research Working Paper No. 5874, 1997.
International Currency Analysis, World Currency Yearbook (New York, NY: International Currency Analysis, 1983–1998), various issues.
International Monetary Fund, Annual Report on Exchange Restrictions (Washington, DC: International Monetary Fund, 1949–1978), various issues.
International Monetary Fund, Annual Report on Exchange Arrangements and Exchange Restrictions (Washington, DC: International Monetary Fund, 1979–2001), various issues.
Kiguel, Miguel, J. Saul Lizondo, and Stephen A. O'Connell, eds., Parallel Exchange Rates in Developing Countries (New York, NY: Saint Martin's Press, 1997).
Levy-Yeyati, Eduardo, and Federico Sturzenegger, "Classifying Exchange Rate Regimes: Deeds versus Words," mimeo, Universidad Torcuato Di Tella, 2002.
Marion, Nancy P., "Dual Exchange Rates in Europe and Latin America," World Bank Economic Review, VIII (1994), 213–245.
Pick, Franz, World Currency Reports (New York, NY: Pick Publishing Corporation, 1945–1955), various issues.
——, Black Market Yearbook (New York, NY: Pick Publishing Corporation, 1951–1955), various issues.
——, Pick's Currency Yearbook (New York, NY: Pick Publishing Corporation, 1955–1982), various issues.
——, World Currency Reports (New York, NY: International Currency Analysis Inc., 1983–1998), various issues.
Pick, Franz, and René Sédillot, All the Monies of the World: A Chronicle of Currency Values (New York, NY: Pick Publishing Corporation, 1971).
Reinhart, Carmen M., and Kenneth S. Rogoff, "A Modern History of Exchange Rate Arrangements: A Reinterpretation," National Bureau of Economic Research Working Paper No. 8963, 2001.
Reinhart, Carmen M., and Kenneth S. Rogoff, "Parts I and II. Background Material to a Modern History of Exchange Rate Arrangements: A Reinterpretation," mimeo, International Monetary Fund, Washington, DC, 2003, at http://www.puaf.umd.edu/faculty/papers/reinhart/reinhart.htm.
Reinhart, Carmen M., Kenneth S. Rogoff, and Miguel A. Savastano, "Addicted to Dollars," National Bureau of Economic Research Working Paper No. 10015, 2003.
Reinhart, Carmen M., Kenneth S. Rogoff, and Antonio Spilimbergo, "When Hard Shocks Hit Soft Pegs," mimeo, International Monetary Fund, Washington, DC, 2003.
United Nations, United Nations Yearbook (New York: United Nations, 1946–1960), various issues.

Inflation Bets or Deflation Hedges
Inflation Bets or Deflation Hedges? The Changing Risks of Nominal Bonds
John Y. Campbell, Adi Sunderam, and Luis M. Viceira 1
First draft: June 2007
This version: March 21, 2011

1. Campbell: Department of Economics, Littauer Center, Harvard University, Cambridge MA 02138, USA, and NBER. Email john_campbell@harvard.edu. Sunderam: Harvard Business School, Boston MA 02163. Email asunderam@hbs.edu. Viceira: Harvard Business School, Boston MA 02163 and NBER. Email lviceira@hbs.edu. We acknowledge the extraordinarily able research assistance of Johnny Kang. We are grateful to Geert Bekaert, Andrea Buraschi, Jesus Fernandez-Villaverde, Wayne Ferson, Javier Gil-Bazo, Pablo Guerron, John Heaton, Ravi Jagannathan, Jon Lewellen, Monika Piazzesi, Pedro Santa-Clara, George Tauchen, and seminar participants at the 2009 Annual Meeting of the American Finance Association, Bank of England, European Group of Risk and Insurance Economists 2008 Meeting, Sixth Annual Empirical Asset Pricing Retreat at the University of Amsterdam Business School, Harvard Business School Finance Unit Research Retreat, Imperial College, Marshall School of Business, NBER Fall 2008 Asset Pricing Meeting, Norges Bank, Society for Economic Dynamics 2008 Meeting, Stockholm School of Economics, Tilburg University, Tuck Business School, and Universidad Carlos III in Madrid for helpful comments and suggestions. This material is based upon work supported by the National Science Foundation under Grant No. 0214061 to Campbell, and by Harvard Business School Research Funding.

Abstract

The covariance between US Treasury bond returns and stock returns has moved considerably over time. While it was slightly positive on average in the period 1953–2009, it was unusually high in the early 1980s and negative in the 2000s, particularly in the downturns of 2001–2 and 2008–9. This paper specifies and estimates a model in which the nominal term structure of interest rates is driven by four state variables: the real interest rate, temporary and permanent components of expected inflation, and the "nominal-real covariance" of inflation and the real interest rate with the real economy. The last of these state variables enables the model to fit the changing covariance of bond and stock returns. Log bond yields and term premia are quadratic in these state variables, with term premia determined by the nominal-real covariance. The concavity of the yield curve—the level of intermediate-term bond yields, relative to the average of short- and long-term bond yields—is a good proxy for the level of term premia. The nominal-real covariance has declined since the early 1980s, driving down term premia.

1 Introduction

Are nominal government bonds risky investments, which investors must be rewarded to hold? Or are they safe investments, whose price movements are either inconsequential or even beneficial to investors as hedges against other risks? US Treasury bonds performed well as hedges during the financial crisis of 2008–9, but the opposite was true in the early 1980s. The purpose of this paper is to explore such changes over time in the risks of nominal government bonds. To understand the phenomenon of interest, consider Figure 1, an update of a similar figure in Viceira (2010).
The figure shows the history of the realized beta (regression coefficient) of 10-year nominal zero-coupon Treasury bonds on an aggregate stock index, calculated using a rolling three-month window of daily data. This beta can also be called the "realized CAPM beta", as its forecast value would be used to calculate the risk premium on Treasury bonds in the Capital Asset Pricing Model (CAPM) that is often used to price individual stocks. Figure 1 displays considerable high-frequency variation, much of which is attributable to noise in the realized beta. But it also shows interesting low-frequency movements, with values close to zero in the mid-1960s and mid-1970s, much higher values averaging around 0.4 in the 1980s, a spike in the mid-1990s, and negative average values in the 2000s. During the two downturns of 2001–3 and 2008–9, the average realized beta of Treasury bonds was about -0.2. These movements are large enough to cause substantial changes in the Treasury bond risk premium implied by the CAPM.

Nominal bond returns respond both to expected inflation and to real interest rates. A natural question is whether the pattern shown in Figure 1 is due to the changing beta of inflation with the stock market, or of real interest rates with the stock market. Figure 2 summarizes the comovement of inflation shocks with stock returns, using a rolling three-year window of quarterly data and a first-order quarterly vector autoregression for inflation, stock returns, and the three-month Treasury bill yield to calculate inflation shocks. Because high inflation is associated with high bond yields and low bond returns, the figure shows the beta of realized deflation shocks (the negative of inflation shocks), which should move in the same manner as the bond return beta reported in Figure 1. Indeed, Figure 2 shows a similar history for the deflation beta as for the nominal bond beta.

Real interest rates also play a role in changing nominal bond risks. In the period since 1997, when long-term Treasury inflation-protected securities (TIPS) were first issued, Campbell, Shiller, and Viceira (2009) report that TIPS have had a predominantly negative beta with stocks. Like the nominal bond beta, the TIPS beta was particularly negative in the downturns of 2001–3 and 2008–9. Thus not only the stock-market covariances of nominal bond returns, but also the covariances of two proximate drivers of those returns, inflation and real interest rates, change over time.

In the CAPM, assets' risk premia are fully explained by their covariances with the aggregate stock market. Other modern asset pricing models allow for other influences on risk premia, but still generally imply that stock-market covariances have considerable explanatory power for risk premia. Time-variation in the stock-market covariances of bonds should then be associated with variation in bond risk premia, and therefore in the typical shape of the Treasury yield curve. Yet the enormous literature on Treasury bond prices has paid little attention to this phenomenon.
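As an illustration of the realized beta series in Figure 1 (a sketch under our own assumptions about the inputs, not the authors' code), the rolling three-month regression coefficient can be computed from aligned daily return series:

```python
import pandas as pd

def realized_bond_beta(bond_returns, stock_returns, window=63):
    """Rolling realized CAPM beta of 10-year nominal zero-coupon Treasury bond
    returns on aggregate stock returns, using roughly a three-month window of
    daily data (63 trading days).  Inputs are aligned daily return Series."""
    cov = bond_returns.rolling(window).cov(stock_returns)
    var = stock_returns.rolling(window).var()
    return cov / var
```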
This paper begins to fill this gap in the literature. We make three contributions. First, we write down a simple term structure model that captures time-variation in the covariances of inflation and real interest rates, and therefore of nominal bond returns, with the real economy and the stock market. Importantly, the model allows these covariances, and the associated risk premia, to change sign. It also incorporates more traditional influences on nominal bond prices, specifically real interest rates and both transitory and permanent components of expected inflation. Second, we estimate the parameters of the model using postwar quarterly US time series for nominal and inflation-indexed bond yields, stock returns, realized and forecast inflation, and realized second moments of bond and stock returns calculated from daily data within each quarter. The use of realized second moments, unusual in the term structure literature, forces our model to fit the phenomenon of interest. Third, we use the estimated model to describe how the changing stock-market covariance of bonds should have altered bond risk premia and the shape of the Treasury yield curve.

The organization of the paper is as follows. Section 2 reviews the related literature. Section 3 presents our model of the real and nominal term structures of interest rates. Section 4 describes our estimation method and presents parameter estimates and historical fitted values for the unobservable state variables of the model. Section 5 discusses the implications of the model for the shape of the yield curve and the movements of risk premia on nominal bonds. Section 6 concludes. An Appendix to this paper, available online (Campbell, Sunderam, and Viceira 2010), presents details of the model solution and additional empirical results.

2 Literature Review

Nominal bond risks can be measured in a number of ways. A straightforward approach is to measure the covariance of nominal bond returns with some measure of the marginal utility of investors. According to the Capital Asset Pricing Model (CAPM), for example, marginal utility can be summarized by the level of aggregate wealth. It follows that the risk of bonds can be measured by the covariance of bond returns with returns on the market portfolio, often proxied by a broad stock index. Alternatively, one can measure the risk premium on nominal bonds, either from average realized excess bond returns or from variables that predict excess bond returns such as the yield spread (Shiller, Campbell, and Schoenholtz 1983, Fama and Bliss 1987, Campbell and Shiller 1991) or a more general linear combination of forward rates (Stambaugh 1988, Cochrane and Piazzesi 2005). If the risk premium is large, then presumably investors regard bonds as risky. This approach can be combined with the first one by estimating an empirical multifactor model that describes the cross-section of both stock and bond returns (Fama and French 1993).

These approaches are appealingly direct. However, the answers they give depend sensitively on the sample period that is used. The covariance of nominal bond returns with stock returns, in particular, is extremely unstable over time and even switches sign (Li 2002, Guidolin and Timmermann 2006, Christiansen and Ranaldo 2007, David and Veronesi 2009, Baele, Bekaert, and Inghelbrecht 2010, Viceira 2010). The average level of the nominal yield spread is also unstable over time, as pointed out by Fama (2006) among others. An intriguing fact is that the movements in the average yield spread seem to line up to some degree with the movements in the CAPM beta of bonds. The average yield spread, like the CAPM beta of bonds, was lower in the 1960s and 1970s than in the 1980s and 1990s. Viceira (2010) shows that both the short-term nominal interest rate and the yield spread forecast the CAPM beta of bonds over the period 1962–2007.
On the other hand, during the 2000s the CAPM beta of bonds was unusually low while the yield spread was fairly high on average.

Another way to measure the risks of nominal bonds is to decompose their returns into several components arising from different underlying shocks. Nominal bond returns are driven by movements in real interest rates, inflation expectations, and the risk premium on nominal bonds over short-term bills. Several papers, including Barsky (1989), Shiller and Beltratti (1992), and Campbell and Ammer (1993), have estimated the covariances of these components with stock returns, assuming the covariances to be constant over time. The literature on affine term structure models also proceeds by modelling state variables that drive interest rates and estimating prices of risk for each one. Many papers in this literature allow the volatilities and risk prices of the state variables to change over time, and some allow risk prices and hence risk premia to change sign. 2 Several recent affine term structure models, including Dai and Singleton (2002) and Sangvinatsos and Wachter (2005), are highly successful at fitting the moments of nominal bond yields and returns. Some papers have also modelled stock and bond prices jointly, but no existing models allow bond-stock covariances to change sign. 3

The contributions of our paper are, first, to write down a simple term structure model that allows for bond-stock covariances that can move over time and change sign, and second, to confront this model with historical US data. The purpose of the model is to fit new facts about bond returns in relation to the stock market, not to improve on the ability of affine term structure models to fit bond market data considered in isolation. Our introduction of a time-varying covariance between state variables and the stochastic discount factor, which can switch sign, means that we cannot write log bond yields as affine functions of macroeconomic state variables; our model, like those of Beaglehole and Tenney (1991), Constantinides (1992), Ahn, Dittmar and Gallant (2002), and Realdon (2006), is linear-quadratic. 4 To solve our model, we use a general result on the expected value of the exponential of a non-central chi-squared distribution, which we take from the Appendix to Campbell, Chan, and Viceira (2003).

2. Dai and Singleton (2002), Bekaert, Engstrom, and Grenadier (2005), Sangvinatsos and Wachter (2005), Wachter (2006), Buraschi and Jiltsov (2007), and Bekaert, Engstrom, and Xing (2009) specify term structure models in which risk aversion varies over time, influencing the shape of the yield curve. These papers take care to remain in the essentially affine class described by Duffee (2002).
3. Bekaert et al. (2005) and other recent authors including Mamaysky (2002) and d'Addona and Kind (2006) extend affine term structure models to price stocks as well as bonds. Bansal and Shaliastovich (2010), Eraker (2008), and Hasseltoft (2008) price both stocks and bonds in the long-run risks framework of Bansal and Yaron (2004). Piazzesi and Schneider (2006) and Rudebusch and Wu (2007) build affine models of the nominal term structure in which a reduction of inflation uncertainty drives down the risk premia on nominal bonds towards the lower risk premia on inflation-indexed bonds. Similarly, Backus and Wright (2007) argue that declining uncertainty about inflation explains the low yields on nominal Treasury bonds in the mid-2000s.
4. Duffie and Kan (1996) point out that linear-quadratic models can often be rewritten as affine models if we allow the state variables to be bond yields rather than macroeconomic fundamentals. Buraschi, Cieslak, and Trojani (2008) also expand the state space to obtain an affine model in which correlations can switch sign.
To estimate the model, we use a nonlinear filtering technique, the unscented Kalman filter, proposed by Julier and Uhlmann (1997), reviewed by Wan and van der Merwe (2001), and recently applied in finance by Binsbergen and Koijen (2008).

3 A Quadratic Bond Pricing Model

We now present a term structure model that allows for time variation in the covariances between real interest rates, inflation, and the real economy. In the model, both real and nominal bond yields are linear-quadratic functions of the vector of state variables and, consistent with the empirical evidence, the conditional volatilities and covariances of excess returns on real and nominal assets are time varying.

3.1 The SDF and the real term structure

We start by assuming that the log of the real stochastic discount factor (SDF), m_{t+1} = log(M_{t+1}), follows the process

−m_{t+1} = x_t + σ_m²/2 + ε_{m,t+1},   (1)

whose drift x_t follows an AR(1) process subject to a heteroskedastic shock ψ_t ε_{x,t+1} and a homoskedastic shock ε_{X,t+1}:

x_{t+1} = μ_x (1 − φ_x) + φ_x x_t + ψ_t ε_{x,t+1} + ε_{X,t+1}.   (2)

The innovations ε_{m,t+1}, ε_{x,t+1}, and ε_{X,t+1} are normally distributed, with zero means and constant variance-covariance matrix. We allow these shocks to be cross-correlated and adopt the notation σ_i² to describe the variance of shock ε_i, and σ_{ij} to describe the covariance between shock ε_i and shock ε_j. To reduce the complexity of the equations that follow, we assume that the shocks to x_t are orthogonal to each other; that is, σ_{xX} = 0. The state variable x_t is the short-term log real interest rate. The price of a single-period zero-coupon real bond satisfies P_{1,t} = E_t[exp{m_{t+1}}], so that its yield

Capitalizing On Innovation: The Case of Japan
Robert Dujarric and Andrei Hagiu

Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.

Capitalizing On Innovation: The Case of Japan
Robert Dujarric
Andrei Hagiu
Working Paper 09-114

Capitalizing On Innovation: The Case of Japan 1
By Robert Dujarric 2 and Andrei Hagiu 3

Abstract

Japan's industrial landscape is characterized by hierarchical forms of industry organization, which are increasingly inadequate in modern sectors, where innovation relies on platforms and horizontal ecosystems of firms producing complementary products. Using three case studies - software, animation and mobile telephony - we illustrate two key sources of inefficiencies that this mismatch can create, all the while recognizing that hierarchical ecosystems have played a major role in Japan's success in manufacturing-driven industries (e.g. Toyota in automobiles and Nintendo with videogames). First, hierarchical industry organizations can "lock out" certain types of innovation indefinitely by perpetuating established business practices. For example, the strong hardware and manufacturing bias and hierarchical structures of Japan's computer and electronics firms are largely responsible for the virtual non-existence of a standalone software sector. Second, even when the vertical hierarchies produce highly innovative sectors in the domestic market, the exclusively domestic orientation of the "hierarchical industry leaders" can entail large missed opportunities for other members of the ecosystem, who are unable to fully exploit their potential in global markets. For example, Japan's advanced mobile telecommunications systems (services as well as handsets) suffer from a "Galapagos effect": like the unique fauna of these remote islands, they are only found in the Japanese archipelago. Similarly, while Japanese anime is renowned worldwide for its creativity, there is no global Japanese anime content producer comparable to Disney or Pixar. Instead, anime producers are locked into a highly fragmented domestic market, dominated by content distributors (TV stations and DVD companies) and advertising agencies. We argue that Japan has to adopt legislation in several areas in order to address these inefficiencies and capitalize on its innovation: strengthening antitrust and intellectual property rights enforcement; improving the legal infrastructure (e.g. producing more corporate lawyers); lowering barriers to entry for foreign investment and facilitating the development of the venture capital sector.

1. The authors would like to thank Mayuka Yamazaki from the Harvard Business School Japan Research Center for her assistance throughout the project; Curtis Milhaupt (discussant) and participants at the Columbia Law School conference on Business Law and Innovation for very helpful comments on the first version of this paper. They are also grateful to the Research Institute for Economy Trade and Industry (RIETI), where they were visiting fellows, and (for Robert Dujarric) Temple University, Japan Campus and the Council on Foreign Relations/Hitachi Fellowship in Japan.
2. Temple University, Japan Campus. robertdujarric@gmail.com
3. Harvard Business School. ahagiu@hbs.edu

1. Introduction

Japan faces two interconnected challenges.
The first one is common to all advanced economies: the rising competition from lower-cost countries with the capacity to manufacture mid-range and in some cases advanced industrial products. For Japan this includes not only China but also South Korea. Though South Korea is by no means a low-wage nation, the combination of lower costs (not only labor but also land and a lower cost of living) than Japan with a very advanced industrial base makes it a formidable competitor in some sectors. Unlike – or to a significantly greater extent than – other advanced economies e.g. the United States, Japan also confronts a challenge posed by the global changes in the relative weights of manufacturing and services, including soft goods, which go against the country’s longstanding comparative advantage and emphasis on manufacturing. A growing share of global value chains is now captured by services and soft goods, such as software, while the percentage which accrues to manufacturing is declining. Many of the new industries that have been created or grown rapidly in the past twenty years have software and information platforms at their core: PCs (operating systems such as Windows); the Internet (web browser such as Firefox, Internet Explorer, Safari); online search, information and e-commerce (Amazon, Bloomberg, eBay, Facebook); digital media (Apple’s iPod and iTunes combination); etc. In this context, it is striking that, as Japan has become more economically advanced, its strengths have continued to be in manufacturing. . When it comes to services and soft goods (software, content), it has either failed to produce competitive companies, or, when it has, these companies have failed to establish themselves in foreign markets. There are, for example, no truly global Japanese hotel chains, nor do any Japanese corporations compete internationally with DHL, FedEx and UPS; there are no Japanese global information services companies comparable to Bloomberg, Google and Thomson Reuters, nor is there any international Japanese consulting or accounting firm. Even more strikingly, Japanese companies are also absent from international markets in sectors which are very strong at home, such as mobile telecommunications and anime production.The principal thesis we lay out in the current paper is that these weaknesses can be attributed to Japan’s hierarchical, vertically integrated and manufacturing-driven forms of industry organization, which are increasingly inadequate in modern sectors, where innovation relies on platforms and horizontal ecosystems of firms producing complementary products. Using three case studies - software, animation and mobile telephony - we illustrate two key sources of inefficiencies that this mismatch can create, all the while recognizing that hierarchical ecosystems have played a major part in Japan’s success in manufacturing-driven industries (e.g. Toyota in automobiles, Nintendo and Sony in videogames). First, hierarchical industry organizations can “lock out” certain types of innovation indefinitely by perpetuating established business practices. For example, the strong hardware and manufacturing bias of Japan’s computer and electronics firms is largely responsible for the virtual non-existence of a standalone software sector. 
Second, even when the vertical hierarchies produce highly innovative sectors in the domestic market, the exclusively domestic orientation of the “hierarchical industry leaders” can entail large missed opportunities for other members of the ecosystem, who are unable to fully exploit their potential in global markets. For example, Japan’s advanced mobile telecommunications systems (services as well as handsets) suffer from a “Galapagos effect”: like the unique fauna of these remote islands they are only found in the Japanese archipelago. Similarly, while Japanese anime is renowned worldwide for its creativity, there is no global Japanese anime content producer comparable to Disney or Pixar. Instead, anime producers are locked into a highly fragmented domestic market, dominated by content distributors (TV stations and DVD companies) and advertising agencies. Consequently, Japan is facing the challenge of creating a post-industrial exporting base. This in turns requires an environment conducive to innovation. Japanese policymakers are aware of the issue. Many have called for efforts to replicate Silicon Valley, while others hope that the next Microsoft will be Japanese. These ideas, as interesting as they are, can only come to fruition decades from now. Silicon Valley is the product of over half a century of development. Its foundations include massive levels of highskilled immigration, well-funded, cosmopolitan, dynamic and competitive private and public universities, a very liquid labor market, a vibrant venture capital industry, an enormous Pentagon R&D budget, and the common law. Japan’s chances of duplicating another Silicon Valley are therefore rather low. There are however soft good and service industries in which Japan is already very strong, such as mobile telephony and anime. These are “low hanging fruits,” which offer far better prospects for Japanese industry internationally than competing with Silicon Valley. We argue that Japan has to adopt legislation in several areas in order to address the inefficiencies described above and capitalize on its innovation capabilities in these sectors: strengthening antitrust and intellectual property rights enforcement; improving the legal infrastructure (e.g. producing more business law attorneys); lowering barriers to entry for foreign investment and facilitating the development of the venture capital sector. The rest of the paper is organized as follows. In the next section we provide a brief overview and background on the fundamental shift spearheaded by computer-based industries from vertically integrated to horizontal, platform-driven industrial structures. Section 3 describes the historical characteristics of Japanese innovative capabilities. In section 4 we use three industry case studies (software, animation and mobile telecommunications) to illustrate how Japan’s manufacturing-inspired modes of industrial organization are preventing the country from taking advantage of its innovative power. Finally, in section 5 we lay out some possible solutions and we conclude in section 6. 2. The new order of industrial innovation: ecosystems and platf orms The rapid development of computer-based industries since the second half of the twentieth century has spearheaded and accelerated the shift from vertically integrated, hierarchical industry structures (e.g. mainframes) to horizontal structures, composed of platform-centered ecosystems (e.g. PCs). 
While this change has been pervasive throughout most sectors of the economy, it has been most salient in technology industries with short product life-cycles. As a result, the nature of competition and competitive advantage has shifted away from pursuing quality through tightly integrated vertical “stacks” of components and towards building scalable “multi-sided platforms” (cf. Evans Hagiu and Schmalensee (2006)), connecting various types of interdependent complementors and end-users (e.g. videogame consoles - game developers; Windows - software application developers and hardware manufacturers). Personal Computers (PCs): the quintessential ecosystem Ecosystems are most simply defined as constellations of firms producing complementary products or essential components of the same system. Today’s PC industry is the archetype of modern ecosystems. There are two critical components, the operating system and the microprocessor, which are controlled by two companies – Microsoft and Intel. The other ecosystem participants “gravitate” around the two “ecosystem leaders” (cf. Gawer and Cusumano 2002): hardware manufacturers (OEMs) like Dell, HP, Toshiba and Sony, independent software developers such as Intuit and Adobe Systems, third party suppliers of hardware accessories and, last but not least, end users. Ecosystem leadership is defined by three elements: i) control of the key standards and interfaces which allow the components supplied by various ecosystem participants to work with each other (e.g. the application programming interfaces - APIs - controlled by Windows); ii) control of the nature and timing (pace) of innovation throughout the industry (e.g. Intel’s successive generations of microprocessors and Microsoft’s successive versions of Windows) and iii) ability to appropriate a large share of the value created by the entire ecosystem. Microsoft in particular has positioned Windows as the multi-sided platform at the center of the PC ecosystem. Its power comes from generating network effects through the interdependence between the participations of the other ecosystem members: the value to users increases with the number and quality of independent application developers which support Windows and vice versa, third-party software vendors are drawn to Windows in proportion to the latter’s installed base of users. One source of restraint (today more so than in the 1990s) on Microsoft and Intel abusing their eco-system leadership is the existence of second-tier players in their respective markets, who could provide alternatives. Thus Linux, Google’s office suite, AMD, and Apple act as brakes on the possible misuse of ecosystem leadership on the part of the Microsoft and Intel. The fear of anti-trust action further restrains Microsoft and Intel from aggressive behavior against the other members of the ecosystem. These factors (competition and anti-trust regulations) are essential. Without them the ecosystem might degenerate into a slow moving institution, more preoccupied with extracting economic rent from consumers than with innovation and price competition. It is important to emphasize that the horizontal PC ecosystem that we know today has little to do with the structure of the PC industry at its beginning in the early 1980s. And even less to do with the structure of the computer industry in the early 1950s. At that time, each computer was on its own island. 
Only large corporations, government agencies, and universities bought mainframe computers, and they did so from a few large companies like Burroughs, UNIVAC, NCR, Control Data Corporation, Honeywell and IBM. Customers were buying vertically integrated hardware-software systems. IBM emerged as the clear leader from this pack by being the first to adopt a modular and ecosystem-based approach with its System 360: it adopted standardized interfaces and allowed outside companies to supply select parts of the computer system (e.g. external hard drives). Nevertheless, this remained largely a vertically integrated approach, as the main components – hardware, processor and operating system – were done in house. The radical change occurred in 1980, when IBM decided that the only way to get ahead of its competitors in the PC business (Apple, Commodore and Tandy) was to outsource the operating system and the microprocessor to Microsoft and Intel in order to speed up the innovation cycle. The strategy worked in that the IBM PC became the dominant personal computer. It backfired when Microsoft and Intel took control of the PC ecosystem and licensed their platforms to other OEMs such as Compaq, HP and Dell, which eventually relegated IBM to “one of the crowd.” IBM’s original PC business, including the ThinkPad line, was eventually sold to the Chinese computer manufacturer Lenovo.

Economic drivers of vertical disintegration and ecosystem structures

While at first glance it may seem that every step of vertical disintegration in the computer industry was a strategic decision involving real tradeoffs (e.g. giving up some control vs. accelerating investment throughout the ecosystem) that could have gone either way, there is a clear sense in which the process of vertical disintegration was inevitable due to technological and economic factors beyond the control of any single actor. And this process has occurred (or is occurring) in many other technology industries: videogames, smart mobile phones, wireless mobile services, home entertainment devices, etc. There are three fundamental forces driving vertical disintegration. First, rapid technological progress leads to economies of specialization. Except in the very early stages of an industry, vertically integrated firms cannot move the innovation frontier in all segments of the value chain. As industries grow, there is scope for specializing in some layers (a key strategic decision then becomes which layers to keep in-house and which to open to third parties) and bringing other firms on board in order to develop the others. The second important factor in the evolution of technology-based industries is modularity and the emergence of standards (cf. Baldwin and Clark 1999). The drive to increase productivity throughout the value chain naturally leads firms to design their products and services in a modular fashion, with well-specified interfaces, which can be used by different production units within the same company or by third-party suppliers if applicable (this is related to the first factor mentioned above). The third and final driver of vertical disintegration is increasing consumer demand for product variety. The vertically integrated model works well for one-size-fits-all solutions. As soon as customers demand horizontally differentiated products, it becomes hard for one integrated firm to satisfy the entire spectrum of customer demands.
This tension was famously captured by Henry Ford’s quip that customers could have a car painted any color they wanted, so long as it was black. Therefore, vertical disintegration is more likely to occur in industries with a large number of consumers with diverse needs than in markets with a small number of clients with similar needs. Thus, ecosystems are the natural consequence of vertical disintegration. They have become the most efficient market-based solution to the problem of producing complex systems in a large variety of technology-intensive industries, satisfying a large variety of end-user demands and maintaining a sufficiently high rate of innovation throughout the system.

It is important to emphasize, however, that not every industry will move towards horizontal, platform-centered ecosystems. For example, Airbus and Boeing, the two biggest players in the commercial airliner business, have increasingly relied on outsourcing and risk-sharing partners. Boeing’s latest jetliner, the 787, relies on risk-sharing partners involved in key R&D decisions, and much of the plane is actually not made by Boeing itself. Still, neither Airbus nor Boeing has created an ecosystem similar to the PC industry’s. Both companies sit at the apex of the industrial pyramid, make the key decisions, and sell the product directly to the customer (in contrast to Microsoft and Intel, in whose ecosystem PCs are actually sold by manufacturers such as Lenovo or Dell, which assemble the computers). This can be explained, among other factors, by the small number of customers (airlines and governments) for products with extremely high unit costs; the need to maintain extremely demanding and well-documented safety standards; and the direct involvement of governments in a sector with close links to national defense.[4]

[4] It should also be noted that some of the outsourcing by Airbus and Boeing is motivated by the need to find foreign industrial partners in order to increase the likelihood of sales to the airlines of those countries.

In light of our argument in this paper it may seem surprising that the best description of the necessity of relying on ecosystems that we have encountered comes from a senior executive at a Japanese high-technology firm – NTT DoCoMo, Japan’s leading mobile operator. In discussing the reasons behind the success of NTT DoCoMo’s i-mode mobile Internet service, he explained: “In today’s IT industries, no major service can be successfully created by a single company.” In the three case studies below, we will see that, despite the success of a few remarkable ecosystem leaders in a few sectors (Nintendo, NTT DoCoMo, Sony and Toyota come to mind), these were exceptions in Japan’s broader industrial landscape. Most of Japan’s ecosystems remain strikingly similar to vertical hierarchies, and the ecosystem leaders (i.e. the companies at the top of these hierarchies) are predominantly domestically focused, which makes it hard for everyone in the subordinate layers to compete globally. These ecosystems recreate, to some extent, a corporate hierarchy. It is not rare for the ecosystem leader (say Toyota) to have equity stakes in some of the subordinate members. In the case of Toyota, however, this hierarchical system has produced a highly competitive international business. This is mainly because value in Toyota’s sector (automobiles) still comes largely from manufacturing rather than from services and soft goods.
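Before turning to Japan’s historical background, the cross-side network effects described earlier in this section can be made concrete with a small numerical sketch. The adoption functions and all parameter values below are illustrative assumptions of ours, not estimates for Windows or any actual platform; the only point is that each side’s participation feeds the other’s until adoption settles at a much higher level than either side would reach alone.

# Illustrative sketch of cross-side network effects on a two-sided platform
# (users <-> application developers). All numbers are hypothetical.

def users_given_devs(devs, max_users=100_000_000, half_saturation=5_000):
    """User adoption rises with the number of applications available."""
    return max_users * devs / (devs + half_saturation)

def devs_given_users(users, max_devs=50_000, half_saturation=20_000_000):
    """Developer support rises with the installed base of users."""
    return max_devs * users / (users + half_saturation)

def iterate_adoption(seed_users, rounds=8):
    """Let the two sides respond to each other repeatedly and report each round."""
    users = seed_users
    for r in range(1, rounds + 1):
        devs = devs_given_users(users)
        users = users_given_devs(devs)
        print(f"round {r}: {devs:>8,.0f} developers, {users:>12,.0f} users")
    return users, devs

if __name__ == "__main__":
    # Starting from a modest installed base, the feedback loop drives both sides up
    # until adoption saturates -- the platform leader then sits at the center of
    # a large ecosystem whose value it can partly appropriate.
    iterate_adoption(seed_users=1_000_000)

Under these assumed curves the same feedback also works in reverse: a platform that loses developers loses users, which is why control of the key interfaces confers so much leverage on the ecosystem leader.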
3. Historical background on Japan’s innovativeness

In order to achieve a better understanding of Japan’s modes of innovation, it is helpful to provide a short historical perspective on their evolution.

Opening to foreign trade

Britain, as the leader of the Industrial Revolution, entered the industrial age on its own terms. Japan had a radically different experience. To preserve their hegemony over the country, the House of Tokugawa, which established the Edo shogunate (1600-1868), banned almost all foreign trade after the 1630s. Despite its isolation,[5] the country was not backward. It possessed a well-functioning bureaucracy and a good transportation network; there was no banditry, and literacy was high by the standards of the age. Commercial activity was modern for the era: Japanese merchants devised some of the world’s first futures trading instruments for Osaka’s commodities exchanges. But isolation froze Japanese technology at a 17th-century level. There were improvements here and there during the two centuries of shogunal power, but nothing on the scale of what occurred in Europe. Whereas Europe embraced innovation, the shogunate was fundamentally committed to a static posture, at least compared to European societies. Therefore, when western gunboats breached Japan’s seclusion in the 1850s, the country did not have a single railroad track, whereas Britain, smaller than Japan, already had 10,000 kilometers of railways in 1851.[6] Nor did Japan have any modern industrial base comparable to the ones being developed in Europe and North America. Japan lacked not only hardware, but also the “software” necessary to succeed during the Industrial Revolution. There was no effective civil law system. “Law” meant government edicts; there was no formal concept of civil arbitration with the state acting as a referee by providing both courts and enforcement mechanisms.[7] In fact, Japan did not have a bar with lawyers until the late 19th century.[8] As long as Japan was cut off from other countries, it could live in peace with its 17th-century palanquins in a 19th-century world of steam engines. Unfortunately for Japan’s shoguns, once the Europeans, Russians, and Americans approached the country’s shores, its industrial immaturity put the very existence of the nation in jeopardy, as the westerners imposed trade agreements on Japan which gave themselves unilateral advantages in commerce and investment (what are known as the “unequal treaties”).

[5] Japan did have some overseas trade through the Ryukyus (Okinawa) and through Chinese and Dutch merchants in Japan, but foreign commerce was minuscule compared to island nations of similar size such as Britain.

Modernization during the Meiji era and intellectual heritage

Japan succeeded in escaping the stagnation of the Edo era through a program of rapid modernization that transformed the country into an industrialized society (though it remained much less industrialized, especially in heavy industry, than the West until the 1930s).
Still, as noted by Katz (1998), although Meiji Japan welcomed the intellectual contributions of free traders as well as protectionists, the Japanese economy developed along lines that were more restrictive of free trade than Britain’s and more tolerant of oligopolies and monopolies than the United States (after the adoption of US antitrust legislation). By the 1930s, due to the deterioration of the international climate and the beginning of the war in Asia (1931 in Manchuria), Japan moved towards more government involvement in the economy. The post-war economic system did retain important aspects of the semi-controlled economy, especially in the 1940s and 1950s, when the government controlled access to foreign exchange. In later years, many of these controls were removed, but the ruling Liberal Democratic Party, in order to ensure social stability and its own political survival, followed economic policies that often favored oligopolies and protectionism and hindered foreign investment. Moreover, the combination of the influence of Marxian thought (at least until the 1970s) and anti-liberal conservatism meant that economic liberalism has been on the defensive since 1945. Thus Japanese economic DNA is far less liberal than America’s.

[6] Encyclopedia Britannica Online, “History > Great Britain, 1815–1914 > Social cleavage and social control in the early Victorian years > The pace of economic change,” http://www.britannica.com/eb/article-44926/United-Kingdom, accessed 6 November 2006.

[7] See John Owen Haley, Authority without Power: Law and the Japanese Paradox (New York: Oxford University Press, 1991; 1995 Oxford UP paperback).

[8] See Mayumi Itoh, The Hatoyama Dynasty (New York: Palgrave Macmillan, 2003), p. 21ff.

The consequences of this intellectual heritage for innovation are threefold. First, it has fostered a strong manufacturing bias, based on the idea that a nation without production facilities is a weak country. Unfortunately for Japan, many of the innovations of the last 20 years which have increased productivity and made possible the development of new industries are unrelated to manufacturing. New ways of organizing ecosystems and platform-based industries, legal developments in intellectual property rights (IPR), and new financial instruments (admittedly a field currently enjoying a rather negative reputation) are fundamentally tied to the service and soft goods sectors. Japan has been ill-equipped to deal with them. Second, besides a continued focus on industry, some form of hostility towards outsiders survives. When a foreign takeover beckons, Japanese corporate leaders’ first reflex is often, though not always, to band together against the alien rather than seek a way to profit from the new investor. The merger of Nissin and Myojo, both leaders in instant noodles, orchestrated to prevent Steel Partners of the US from acquiring Myojo, is an illustrative example. It kept the foreigners at bay but deprived Myojo’s shareholders of the higher price offered by the Americans. There are, of course, cases of successful foreign investment into Japan (e.g. Renault’s acquisition of a controlling stake in Nissan) but overall, among the major developed economies, Japan is the least hospitable to foreign capital, with foreign direct investment (FDI) stock estimated at 4.1% of gross domestic product (GDP) vs. an average for developed countries of 24.7%.[9] This form of “business xenophobia” has slowed down innovation by preventing foreign ideas and managers from playing a bigger role in the Japanese economy.
Third, Japan, like some continental European states from which its economic ideology is derived, has historically been far more tolerant of monopolies and oligopolies. Though antitrust enforcement has gained ground somewhat in recent years, it remains deficient by Anglo-American standards. This can have a particularly nefarious impact on innovation. Companies that are already actively involved in international markets will continue to innovate, even if they enjoy monopolistic (or oligopolistic) advantages in their home market, in order to remain competitive abroad. But businesses which are not international and benefit from economic rents derived from monopolistic or oligopolistic arrangements domestically will have fewer innovation incentives.

Industrial structures

The US Occupation authorities dismantled the zaibatsu (財閥 – “financial cliques,” written with the same ideographs as the word “chaebol,” used to denote Korea’s family-controlled conglomerates). These were large financial-industrial family conglomerates that controlled Japanese industry and finance. But in the decades following the war, partly as a way to prevent foreign takeovers, Japan developed a complex form of cross-shareholdings known as “keiretsu” (系列), or “affiliated companies,” in opposition to the family-owned zaibatsu. In some cases these keiretsu were vertical, with one large corporation at the top and affiliates in a subordinate position. In other cases, there was no real center, with several corporations linked by cross-shareholdings and informally coordinated by their top managers.[10]

[9] 16.0% for the US, but as a larger economy, the US should, ceteris paribus, have a lower percentage of FDI stock than Japan, whose economy is roughly three times smaller. Source: UNCTAD, http://www.unctad.org/sections/dite_dir/docs/wir09_fs_jp_en.pdf (accessed 29 September 2009).

[10] On corporate governance, see Gilson, Ronald and Curtis J. Milhaupt, “Choice as Regulatory Reform: The Case of Japanese Corporate Governance,” Columbia University Law School Center for Law and Economic Studies Working Paper No. 251 and Stanford Law School John M. Olin Program in Law and Economics Working Paper No. 282, 2004; Hoshi, Takeo and Anil K. Kashyap, Corporate Financing and Governance in Japan: The Road to the Future (Cambridge, MA: The MIT Press, 2001); Jackson, Gregory, “Toward a comparative perspective on corporate governance and labour” (Tokyo: Research Institute of Economy, Trade and Industry, 2004; RIETI Discussion Paper Series 04-E-023); Milhaupt, Curtis J., “A Lost Decade for Japanese Corporate Governance Reform?: What’s Changed, What Hasn’t, and Why,” Columbia Law School, The Center for Law and Economic Studies, Working Paper No. 234, July 2003; Miyajima, Hideaki and Fumiaki Kuroki, “Unwinding of Cross-shareholding: Causes, Effects, and Implications” (paper prepared for the forthcoming Masahiko Aoki, Gregory Jackson and Hideaki Miyajima, eds., Corporate Governance in Japan: Institutional Change and Organizational Diversity), October 2004; Patrick, Hugh, “Evolving Corporate Governance in Japan,” Columbia Business School, Center on Japanese Economy and Business, Working Paper 220 (February 2004).

In the decades which followed the Showa War (1931-45[11]), Japanese industry showed a great capacity to innovate, both in the area of manufacturing processes and also with the development of new products. Moreover, by breaking the stranglehold of trading companies (sogo shosha 総合商社), Japanese businesses such as Toyota, Sony, and Nintendo were able to conquer international markets. In particular, Toyota displayed some of the key strengths of Japanese industry. Its constant focus on product improvement and quality control gave it the credibility to win foreign market share and make its brand, unknown overseas until the 1970s, synonymous with quality. Moreover, Toyota was able to export its industrial ecosystem. As it built factories overseas, many of its Japanese suppliers followed suit, establishing their own plants in foreign countries. In a way, Toyota functioned as a sort of trading company for its suppliers by opening the doors to foreign markets which on their own they would not have been able to access.

[11] To use the term which the Yomiuri Shimbun chose among several (Great East Asia War, Pacific War, etc.) to denote the decade and a half of fighting which ended with Japan’s capitulation on 15 August 1945.

Legal systems

A second factor with a significant bearing on innovation is the legal system. “One of the principal advantages of common law legal systems,” wrote John Coffee of Columbia University Law School, “is their decentralized character, which encourages self-regulatory initiatives, whereas civil law systems may monopolize all law-making initiatives.”[12] This is especially true in new industries, where the absence of laws governing businesses leads officials to veto new projects on the grounds that they are not specifically authorized by existing regulations. In the United States, innovative legal developments based on the jurisprudence of courts and on new types of contracts have facilitated the development of new industries, something that is harder in Japan and in other code law jurisdictions. For example, some analysts have noted how U.S. law gives more leeway to create innovative contractual arrangements than German law,[13] on which most of Japan’s legal system is built. Thus entrepreneurs, and businesses in general, are more likely to face legal and regulatory hurdles in code law jurisdictions, where adapting the law to new technologies, new financial instruments, and other innovations is more cumbersome.

[12] Coffee, “Convergence and Its Critics,” 1 (abstract).

4. Three industry case studies

The following case studies are designed to illustrate the two key types of inefficiencies which result from the mismatch between Japan’s prevailing forms of industrial structures (vertically integrated and hierarchical) and the nature of innovation in new economy industries such as software and the Internet, where building horizontal platforms and ecosystems is paramount. First, the vertical structures can stifle some forms of innovation altogether (e.g. software). Second, they can limit valuable innovations to the domestic market (e.g. anime and mobile telephony). From these case studies, we can draw some lessons on the steps which Japan could take to enhance its ability to harness its strong innovative capabilities.

4.1. Software

Given the degree of high-technology penetration in the Japanese economy and the international competitiveness of the hardware part of its consumer electronics sector, the weakness (indeed, the non-existence) of Japan’s packaged software industry looks puzzling.
Indeed, software production in Japan has historically suffered from chronic fragmentation among incompatible platforms provided by large systems integrators (Hitachi, Fujitsu, NEC) and from the domination of customized software. Despite efforts by the Ministry of Economy, Trade and Industry (METI, formerly MITI), there are very few small to medium-size software companies in Japan compared to the United States or even Europe. As a result, even the domestic market is dominated by foreign software vendors such as Microsoft, Oracle, Salesforce.com and SAP. Needless to add, there are virtually no standalone software exports from Japan to speak of. There is of course the videogame exception, which we do not include in our discussion here because the videogame market has a dynamic of its own, largely independent of the evolution of the rest of the software industry.

There are two root causes for this peculiar situation: a strong preference for customized computer systems by both suppliers and customers, and a long-standing bias (also on both sides) in favor of hardware over software. These two factors have perpetuated a highly fragmented, vertically integrated and specialized computer industry structure, precluding the emergence of modular systems and popular software platforms (e.g. Windows). In turn, the absence of such platforms has thwarted the economies of scale needed to offer sufficient innovation incentives to independent software developers, which have played a critical role in the development of the IT industry in the United States.

The prevalence of customized computer systems and its origins

In the early 1960s MITI orchestrated licensing agreements that paired each major Japanese computer system developer with a U.S. counterpart. Hitachi went with RCA and then IBM, NEC with Honeywell, Oki with Sperry Rand, Toshiba with GE, and Mitsubishi with TRW, while Fujitsu went on its own before joining IBM. The intent was to make sure Japan embarked on the computer revolution and that it competed effectively with the then-almighty IBM. Since each of Japan’s major computer system suppliers had a different U.S. partner, however, each had a different antecedent for its operating system. In fact, even IBM-compatible producers only had the instruction set licensed from IBM in common; their operating systems were incompatible among themselves. Very rapidly, each of the Japanese companies found it profitable to lock in its customers by supplying highly customized software, often free of charge, which meant that clients had only one source of upgrades, support and application development. Over time, many of the former U.S. partners were forced to exit the industry due to intense global competition from IBM. However, their Japanese licensees remained and perpetuated their incompatible systems.

Meanwhile, in the United States, following a highly publicized antitrust suit, IBM was forced to unbundle its software and hardware in 1969. The IBM System/360 was the first true multi-sided platform in the computer industry, in that it was the first to support third-party suppliers of software applications and hardware add-ons. It marked the beginning of the vertical disintegration and modularization of the computer industry.

[13] Steven Casper, “The Legal Framework for Corporate Governance: The Influence of Contract Law on Company Strategies in Germany and the United States,” in Hall and Soskice, eds., Varieties of Capitalism, 329.
Computer systems were no longer solely provided as fully vertically integrated products; instead, users could mix and match a variety of complementary hardware and software products from independent suppliers. This led to the development of an immensely successful software industry. The new industry became prominent with the workstation and PC revolutions in the early 1980s, which brought computing power into the mainstream through smaller, cheaper, microprocessor-based machines. An important consequence was the great potential created for software/hardware platforms, which a handful of companies understood and used to achieve preeminence in their respective segments: Sun Microsystems in the workstation market, Apple and Microsoft in the PC market.

By contrast, in Japan there was no catalyst for such a sweeping modularization and standardization process. Despite the adoption of a US-inspired Anti-Monopoly Law in 1949, enforcement of antitrust in Japan has been weak by US and EU standards (cf. Miwa and Ramseyer (2005)) – no one required the large systems makers to unbundle software from hardware. There were also no incentives to achieve compatibility. During the last three decades, the customized software strategies became entrenched. Clients were increasingly locked into proprietary computer systems and had to set up their own software divisions to further customize these systems, thus increasing sunk costs and reducing the likelihood of switching to newer systems. This vicious cycle essentially locked out any would-be standalone software vendor in the mainframe and minicomputer markets. Japanese computer manufacturers tried to extend the same strategy to the workstation and PC markets, but failed due to competitive pressure from foreign (especially American) suppliers. The best-known example is NEC, which until around 1992 held a virtual monopoly on the Japanese PC market with its “PC-98.” Its hardware platform architecture was closed (like Apple’s) and its operating system, though based on DOS, remained incompatible with the popular MS-DOS PC operating system. In the end, however, NEC’s monopoly was broken by Dell, Compaq and low-cost Taiwanese PC makers (1991-92).

There also seems to have been a preference for customized computing systems and software on the demand side of the market. In Japan, like everywhere else in the world, the first private-sector users of computer systems (mainframes in the beginning) were large corporations. However, Japanese corporations have traditionally been strongly committed to adhering to internal business procedures, leading to a “how can we modify the software to fit our operations?” mindset, rather than the “how can we adapt our operations in order to take advantage of this software?” reasoning that prevailed in the U.S. For this reason, Japanese companies preferred to develop long-term relationships with their hardware suppliers and to depend on those suppliers, or on vertically related[14] software developers, for highly customized software solutions. As major Japanese companies have generally relied on professionals hired straight out of college who stayed with the same employer for their entire professional lives, each Japanese conglomerate has developed its own corporate culture to a greater extent than in the United States, where a liquid labor market means there is a much greater level of cross-fertilization between firms and consequently less divergence in corporate cultures than in Japan.
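The lock-in dynamic described above can be illustrated with a deliberately simple calculation. All figures are hypothetical (say, millions of yen over a system’s remaining life) and are not drawn from any actual Japanese firm; the sketch only shows how years of sunk, vendor-specific customization raise switching costs until even a substantially cheaper rival system is not worth adopting.

# Illustrative sketch of vendor lock-in through sunk customization costs.
# All numbers are hypothetical (e.g. millions of yen over the system's remaining life).

def should_switch(incumbent_running_cost, rival_running_cost,
                  migration_cost, retraining_cost):
    """Switch only if the rival's cost advantage outweighs the one-off switching costs.
    Past customization spending on the incumbent is sunk, but it is what drives
    migration and retraining costs up -- which is how lock-in operates."""
    savings = incumbent_running_cost - rival_running_cost
    switching_costs = migration_cost + retraining_cost
    return savings > switching_costs

if __name__ == "__main__":
    # Year after year of in-house customization makes migration ever more expensive,
    # so an ever-larger cost advantage is needed before the customer defects.
    for years_of_customization in (1, 5, 10, 20):
        migration = 50 + 40 * years_of_customization   # rewriting bespoke software
        retraining = 20 + 10 * years_of_customization  # staff trained on the proprietary system
        decision = should_switch(incumbent_running_cost=1_000,
                                 rival_running_cost=700,
                                 migration_cost=migration,
                                 retraining_cost=retraining)
        print(f"{years_of_customization:>2} years of customization -> switch? {decision}")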
The prevalence of closed, proprietary strategies prevented the economies of scale necessary for the emergence of a successful, standalone Japanese software industry. No single computing platform became popular enough with users to provide sufficient innovation incentives for packaged application software.[15]

[14] That is, belonging to the same keiretsu.

[15] Even at its height, the standardized NEC PC-98 platform commanded a market roughly four times smaller than its U.S. counterpart, for a population half the size of the U.S. Furthermore, it was incompatible with the MS-DOS PC standard platform, which isolated Japanese PC software developers from the worldwide PC market.

Government policies and the hardware bias

The second important factor which has shaped the evolution of Japan’s software industry is the longstanding bias in favor of hardware over software. Japanese computer companies’ business strategy had always involved giving away software for free along with their hardware systems as a tool to lock in customers. Ironically, this bias was probably inherited from IBM, whose success they were seeking to emulate. IBM itself remained convinced that hardware was the most valuable part of computer systems, which led to its fateful (and, with today’s benefit of hindsight, strategically misguided) 1981 decision to outsource its PC operating system to Microsoft, whose subsequent rise to power signaled the beginning of the software platform era. This development was lost on Japanese computer makers, however, for several years. And MITI, which still viewed IBM as Japan’s main competitor, was at that time immersed in a highly ambitious “Fifth Generation Project,” a consortium that aimed to build a new type of computer with large-scale parallel-processing capabilities, thus departing from the traditional von Neumann model. The drawback, however, was that the project focused everyone’s attention on building highly specialized machines (basically mainframes), whereas the computer industry was moving towards smaller, general-purpose machines, based either on open and non-proprietary architectures (Unix workstations) or on proprietary but very popular operating system platforms (PCs), which greatly expanded the computer market. MITI and the member companies of the Fifth Generation consortium realized only later the potential of making a common, jointly developed software platform available to the general public rather than concentrating on systems designed for a handful of specialized machines. This led to MITI’s next initiative, The Real-time Operating-system Nucleus (TRON). The main idea of TRON was to build a pervasive and open (i.e. non-proprietary) software/hardware platform in response to the market dominance of Intel and Microsoft. TRON was supposed to be a cross-device platform: computers and all sorts of other devices everywhere would be linked by the same software, thus finally providing a popular platform for Japanese software developers. Although TRON was a promising platform concept, it unfortunately received little support from the major industrial players, in particular NEC, which viewed it as a direct threat to its PC monopoly. More importantly, it could not break into the crucial education market[16] precisely because it was incompatible with both the NEC PC-98 DOS and the IBM PC DOS standards, both of which had sizable advantages in terms of installed bases of users and applications.
Thus, TRON was too little, too late: the big winners of the PC and workstation revolutions had already been determined, and none of them were Japanese computer companies. Most importantly, the intended creation of an independent Japanese software industry did not materialize.

Other factors

Comparative studies of the U.S. and Japanese software industries also mention several other factors that further explain the phenomenon described above. One is the relative underdevelopment of the venture capital market for technology-oriented start-up companies in Japan compared to the United States, where venture capital has widely supported the emergence of successful small and medium-size software companies. This gap, however, has recently narrowed due to METI policies designed to improve the availability of venture capital to technology firms. Another factor is the Japanese system of “lifetime employment” for regular employees of large businesses, which results in low labor mobility and is quite compatible with the “closed garden” approach to technological innovation. By contrast, high labor mobility has been a crucial driving force behind the “Silicon Valley model” of technological innovation, which is based on spillovers, transfers, cumulative inventions and a high degree of modularity. The latter model seems to have been more appropriate for creating a vibrant software industry. “Lifetime employment” is losing ground, but the top managerial ranks of large Japanese corporations remain dominated, and often monopolized, by those who have been with the company since they joined the labor market.

[16] Callon (1995) contains an informative account of the conflict between METI and the Ministry of Education regarding the adoption of TRON by public educational institutions.

4.2. Animation[17]

Few Japanese industries are as specific to Japan and as creative as animation – or “anime.”[18] Japanese anime has gained global popularity: it was estimated to account for 60% of TV anime series worldwide (Egawa et al. 2006). And it has had significant influence over many creators outside Japan: the setting of Terminator 2 was influenced by Akira, a classic Japanese anime film; the director of Lilo & Stitch (Disney’s 2002 animation film) acknowledged that it was inspired by Hayao Miyazaki’s My Neighbor Totoro; The Matrix movies owed the starting point of their story to Ghost in the Shell, a Japanese anime movie created by Production I.G; and Disney’s immensely popular Lion King (released in 1994) was based on Kimba the White Lion, a 1964 Japanese TV anime series. Yet despite the global influence of Japanese animation, Japanese anime production companies have never been able to capitalize on the popularity of their creations. The industry is highly fragmented (there are about 430 animation production companies) and dominated by distributors (TV stations, movie distributors, DVD distributors and advertising agencies), which control funding and hold most of the copyrights on content. As a result, most animation producers are small companies laboring in obscurity. No Japanese animation production company comes even close to the size of Walt Disney Co. or Pixar. In 2005 Disney had revenues of $32 billion, whereas Toei Animation, the largest animation production company in Japan, had revenue of only ¥21 billion ($175 million at the average 2005 exchange rate).
Whereas Disney and Pixar spend in excess of ¥10 billion to produce one animated movie, Japanese anime production companies’ average budget is ¥0.2-0.3 billion (Hayao Miyazaki’s Studio Ghibli is an exception: it invests ¥1-3 billion in one production). And while Japanese anime is omnipresent in global markets, Japanese anime production companies have virtually no international business presence. Their lack of business and financial strength can be traced back to the inefficient mode of organization of the Japanese anime “ecosystem.”

[17] This subsection draws heavily on Egawa et al. (2006).

[18] In this case study “anime” refers to animation motion pictures, as opposed to manga cartoons.

Background on Japanese anime

The first animation in Japan was created in 1917, in the form of ten-minute add-ons to action films. Thereafter, short animation films were produced for educational and advertising purposes. In the early 1950s, Disney’s animation and its world of dreams became very popular in the aftermath of defeat in World War II. In 1956, Toei Doga (now Toei Animation) was established as a subsidiary of Toei, a major film distributor, with the stated objective of becoming “the Disney of the Orient.” Some anime industry experts trace the current plight of Japanese anime production companies back to the 1963 release of Astro Boy, the first TV anime series. Its creator and producer was Osamu Tezuka, a successful manga (comic book) writer. More concerned with making Astro Boy popular than with turning it into a financial success, Tezuka accepted the low price offered by a TV station in exchange for distributing the series. In order to keep the production cost to a minimum, he reduced the number of illustrations to a third of the Disney standard (from 24 images per second to 8 images). He felt that Disney’s stories were too simplistic and lacked depth, and believed that the complexity of the Astro Boy story would compensate for the inferior animation quality. Astro Boy became the first big hit in the history of Japanese TV animation, reaching a viewership of over 40% of households. However, due to intensified competition and lack of business acumen, Tezuka’s anime production company (Mushi Production) subsequently ran into financial difficulties and filed for bankruptcy in 1973.

From the early days, the majority of anime productions derived their content from manga. In 2005, roughly 60% of anime content was based on manga; the rest was based on novels or original stories created by the production companies themselves. Sales of manga (comic books and magazines) in 2004 were ¥505 billion and accounted for 22% of all published goods. This was twice as much as the anime industry’s revenues, which stood at ¥234 billion in 2005. Contrary to popular perception in the West, Japanese anime extends far beyond cartoons for children: “to define anime simply as Japanese cartoons gives no sense of the depth and variety that make up the medium. Essentially, anime works include everything that Western audiences are accustomed to seeing in live-action films—romance, comedy, tragedy, adventure, even psychological probing of a kind seldom attempted in recent mass-culture Western film or television.” (Napier 2005)

Production committees

The structure of the anime industry has not evolved much since its beginnings.
The approximately 430 production companies work essentially as contractors for the powerful distribution companies: TV stations, movie distributors, DVD distributors and advertising agencies. Only 30–40 of the producers have the capacity to act as main contractors; the rest work as subcontractors for the main contractors. Main contractors are responsible for delivering the end products to TV stations or movie distributors and take charge of the majority of the production processes. Subcontracting companies can only handle one or two processes. It usually takes 4–5 months to produce one 30-minute TV episode. Production of anime movies is even more labor intensive and time consuming: a 60-minute anime movie usually takes over one and a half years. In both TV anime series and anime movies, the labor-intensive process of drawing and coloring animations is often outsourced to Asian countries including China, Korea, Taiwan, the Philippines, Thailand, Vietnam and India.

Most anime projects in Japan are done by “production committees,” an institution specific to the Japanese market, which provides financing and coordinates the distribution of the resulting content through various channels. These committees were created in the mid-1980s in order to alleviate the scarcity of funding sources for animation. Indeed, Japanese banks had traditionally been reluctant to lend to businesses which were exclusively focused on “soft” goods (content, software, etc.), particularly when they involved a high degree of risk.[19] As a result, TV stations often had to fund the production cost of TV anime series, since production companies were small and financially weak. Similarly, movie distributors used to fund the production of anime movies. As production costs increased and new distribution channels appeared, however, production committees emerged as the standard funding vehicles for both TV series and movies. At the same time, they also took control of the creative process, as well as the marketing and distribution of the final products. Several types of companies come together in a production committee: TV broadcasting stations, the powerful advertising agencies (Dentsu and Hakuhodo), sponsors (e.g. merchandising companies), movie distributors, video/DVD publishers, and the publisher of the original manga (comic book) whenever the content is based on one. The production committee funds the anime projects and shares the revenues and profits from the investments. Each member of the committee makes an investment and in exchange receives: (a) a share of the copyrights (and the associated licensing revenues) linked to the anime, in proportion to its initial investment; and (b) the right to distribute the resulting content through that member’s particular channel—broadcasting rights for TV stations, video/DVD distribution rights for video/DVD publishers. All committee members contribute to some part of the value chain, but TV stations often lead the committee because television is the primary distribution channel.

[19] Indeed, as for most creative content businesses (movies, novels), only 10 out of every 100 animations make any profit.

Production committees contract out the production of anime works to anime production companies. In most cases, anime producers receive only a fixed payment (about ¥10–¥15 million), which oftentimes is barely sufficient to cover the production cost.
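The revenue-sharing rule just described can be reduced to a few lines. The committee composition, investment amounts, fixed production fee and licensing revenue below are hypothetical illustrations, not data from any actual production committee; the point is simply that copyright income flows to the investing members in proportion to their stakes, while the production company, which is not a member, receives only its fixed contract fee regardless of how successful the anime turns out to be.

# Illustrative sketch of a production committee's pro-rata revenue split
# (figures in millions of yen, all hypothetical).

def split_licensing_revenue(investments, licensing_revenue):
    """Distribute licensing revenue in proportion to each member's initial investment."""
    total = sum(investments.values())
    return {member: licensing_revenue * amount / total
            for member, amount in investments.items()}

if __name__ == "__main__":
    committee = {          # hypothetical committee members and stakes
        "TV station": 100,
        "Advertising agency": 60,
        "DVD publisher": 50,
        "Manga publisher": 40,
    }
    production_fee = 12            # fixed payment to the (non-member) anime studio
    licensing_revenue = 600        # revenue later earned from the copyrights

    payouts = split_licensing_revenue(committee, licensing_revenue)
    for member, payout in payouts.items():
        print(f"{member:<20} invested {committee[member]:>4} -> receives {payout:7.1f}")
    print(f"{'Anime studio':<20} invested    0 -> receives {production_fee:7.1f} (fixed fee only)")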
Due to the lack of financial resources, production companies have to rely on production committees for funding and in exchange give up the copyrights to their own work to the production committees. They are usually not members of the production committees and as a result do not have access to licensing revenue and cannot share in the upside of successful projects. (By contrast, in the United States, the Financial Interest and Syndication Rules (Fin-Syn Rules) established in 1970 by the Federal Communications Commission (FCC) stated that copyrights belong to production companies.[20]) When the anime is an original creation of the anime producer, the producer does become a member of the production committee, but typically owns a very small stake. Original creations therefore result in higher profits for anime production companies, but they are also riskier, and it is harder to persuade production committee members to undertake such projects. This system creates a vicious cycle for animation production companies, which keeps them weak and subordinate to the production committees. Most importantly, the production committee members (advertising agencies, TV stations and DVD distributors) are inherently domestic businesses, which therefore also limits the anime producers to the Japanese market, even though their productions might have global appeal.

Recent developments

Recently, several initiatives have emerged to strengthen the rights of animation production companies and to create funding alternatives for anime projects. First, the Association of Japanese Animation was established in May 2002 under the leadership of the Ministry of Economy, Trade and Industry (METI) to strengthen the position of anime producers. Second, a legal change in December 2004 made it possible to place intellectual property rights in trust arrangements, and Mizuho Bank (one of the Japanese megabanks) initiated the securitization of profits deriving from anime copyrights.[21] This allowed Mizuho to extend financing to anime production companies such as Production I.G, which do not have tangible assets suited for collateral. In turn, production companies can invest the proceeds in production committees. To date, Mizuho has financed over 150 anime titles in this way. Third, the funding sources for anime production companies have diversified. Mizuho raised a ¥20 billion fund to invest in new movies, including anime. And GDH, a recently founded animation production company, created its own fund for retail investors to finance its new TV series.[22]

[20] The Ministry of Economy, Trade and Industry, Research on Strengthening Infrastructure for Contents Producer Functions: Animation Production, p. 27, http://www.meti.go.jp/policy/media_contents/.

[21] “Mega Banks Expanding Intellectual Property Finance,” Nihon Keizai Shimbun, April 17, 2004.

[22] “Rakuten Securities, JDC, and Others Raise Funds from Individual Investors to Produce Anime,” Nikkei Sangyo Shimbun, July 28, 2004.

4.3. Mobile telephony

Like animation, mobile telephony provides another illustration of a highly innovative Japanese industry which has not been able to export its domestic success. Unlike animation, however, one needs to travel to Japan in order to observe the tremendous unexploited opportunities of Japan’s mobile phone industry.

The Galapagos of mobile phones

Japanese owners of cell phones have long enjoyed access to the world’s most advanced handsets and services – years ahead of users anywhere else in the world.
Mobile email has been offered since 1999; it only took off in the United States and Western Europe around 2004-2005 with RIM’s BlackBerry devices. Sophisticated e-commerce and other non-voice services were rolled out in Japan starting with the introduction of i-mode in 1999. i-mode was the world’s first proprietary mobile Internet service and to this day remains the most successful one. Launched by NTT DoCoMo, Japan’s largest mobile operator (or carrier), it has spawned a diverse ecosystem of over 100,000 content providers, offering i-mode handset users everything from games and news to mobile banking, restaurant guides and dating services. KDDI and Softbank, the other two major Japanese carriers, have also introduced similar services. All of them were subsequently enhanced by third-generation networks in 2001 – meanwhile, the first functional 3G services in the rest of the world started to appear only in 2004. Since 2004, again thanks to NTT DoCoMo’s leadership, Japanese mobile phone users can simply wave their handsets in front of contactless readers to pay for purchases in convenience stores, at subway turnstiles and in many other places. These payment systems include both debit (pre-paid) and credit (post-paid) functionalities. Finally, since 2005, Japanese mobile customers also have access to digital television on their handsets. These last two services have yet to materialize in the rest of the world (with the sole exception of South Korea).

Given the Japanese telecommunications industry’s innovative prowess, one would expect to see Japanese handsets occupying leading positions in most international markets (especially in developed economies). Strikingly enough, not only are they far from leading, they are in fact nowhere to be found (as anyone who has tried to buy a Japanese mobile handset in the United States can attest). More precisely, in 2007, Nokia had a 38% market share of worldwide cell phone shipments, followed by Samsung with 14.3% and Motorola with 14.1%. No Japanese companies were in the top 5; altogether, they made up a meager 5% of the global handset market[23] (Sharp, the largest one, barely made it to 1%).[24] Some observers (in Japan) have coined a term for this situation: the Galapagos syndrome.[25] Just as the Galapagos archipelago hosts animal species which do not exist anywhere else in the world, so does Japan host an extremely innovative mobile phone industry completely isolated from the rest of the world.

Origins of the Galapagos syndrome

What accounts for this isolation and for Japanese handset makers’ inability to build significant presences in international markets? The answer is found in a combination of self-reinforcing factors, the central one of which is a mobile phone industry structure very different from those prevailing in other major markets. Specifically, in Japan, the mobile operators (DoCoMo, KDDI, and Softbank) hold most of the power in the industry and are able to dictate specifications to the other participants – handset makers in particular. By contrast, carriers in other countries have much less leverage in their relationships with handset makers and are willing to make significant compromises in exchange for exclusive rights to highly popular handsets – e.g. Apple’s iPhone or Motorola’s Razr. On the one hand, the centralized, top-down leadership of Japanese mobile carriers has been immensely successful in producing domestic innovation, as described above. It enabled the rapid roll-out and market adoption of complex technologies, such as mobile payments, which require the coordination of many actors in the ecosystem. On the other hand, however, the subservience to operators meant that everyone in the ecosystem – including handset makers – ended up focused on serving the domestic market. Indeed, mobile carriers operate in a fundamentally domestic business: telecommunication regulations around the world have always made it difficult for carriers to expand abroad. The only exceptions are Vodafone and T-Mobile, who have managed to build meaningful presences outside of their home countries – although these are few and far between, and have met with mixed results. Japan’s NTT DoCoMo, creator of i-mode, the world’s leading mobile Internet service, has repeatedly failed in its attempts to export the service to international markets on a significant scale. Today, there are only 6.5 million overseas users of i-mode, roughly 10% of the Japanese total, while DoCoMo’s corresponding overseas revenues in 2007 were less than 2% of total sales. Moreover, the majority of these “international” customers and sales were in fact made up of Japanese users roaming while traveling abroad.[26]

[23] Economisto, 14 October 2008, “Mega competition in mobile phones,” pp. 32-35.

[24] Economisto, 14 October 2008, “Mega competition in mobile phones,” p. 42.

[25] Ekonomisto, February 26, 2008, “Japan’s economic system losing competitiveness due to ‘Galapagos phenomenon’.”

[26] “iMode to retry it in Europe, a simple version developed by DoCoMo,” Fuji Sankei Business, 4 December 2008.

The “home bias” of the ecosystem leaders – the mobile operators – was unfortunately transplanted to the Japanese handset manufacturers. The latter ended up focusing most of their R&D resources on integrating the numerous Japan-specific hardware features demanded by the operators (contactless mobile payment systems, two-dimensional bar-code scanners, digital TV capability, etc.) into their phones. They developed virtually no standalone market research, marketing and sales capabilities, which are critical for competing in international markets (in Japan that was done for them by the operators). Three additional factors have exacerbated the competitive disadvantage of handset makers in overseas markets.

First, Japan’s large domestic market and the fast growth of its mobile phone sector during the late 1990s and early 2000s was a curse disguised as a blessing. During that period the handset makers perceived no serious incentives (nor urgency) to seek expansion opportunities abroad. The contrast with South Korea is noteworthy here: the domestic Korean mobile phone industry is also largely dominated by the operators (SK Telecom in particular) and has also produced tremendous growth and very advanced services. The difference was that the Korean market was too small (less than half the size of Japan’s) for the domestic handset manufacturers to be satisfied serving it, which led Samsung, LG and others to seek opportunities in international markets from early on – today both Samsung and LG are among the top 5 global cell-phone makers. Second, in the late 1990s the Japanese operators chose a second-generation standard for wireless telecommunications which was subsequently rejected in the rest of the world. The early choice allowed the operators to roll out advanced services far ahead of the rest of the world, without having to worry about interoperability (given their inherent domestic focus).
For the handset makers, this choice raised further technological barriers to their international expansion, as they became dependent on a technology (through specific investments and resource allocation) which could not be leveraged abroad. Third and perhaps most important, Japanese handset makers have had a longstanding bias in favor of hardware and “monozukuri” (manufacturing)-driven innovation over software-driven innovation – the same bias as their counterparts in the computer industry, which prevented the development of a Japanese software sector (cf. section 4.1 above). Indeed, most Japanese phones are customized for a specific carrier (DoCoMo or KDDI or Softbank) and manufactured “from scratch,” with little concern for creating standardized interfaces and software platforms, which might have enabled the makers to spread development costs across multiple phone models and create some cost advantage. Japanese handset makers have neither embraced widely used smart-phone software platforms such as Nokia’s Symbian, Microsoft’s Windows Mobile or Google’s Android, nor created any such platforms of their own. Given that hardware design is the part of a mobile phone which varies the most across international markets (unlike the underlying software platforms, which can remain virtually unchanged), it is no wonder that Japanese cell-phone makers are poorly positioned to adapt their phones to different market needs overseas.

The monozukuri bias also explains why, despite their technical prowess, Japanese phone manufacturers have been unable to create a universally appealing device like Apple’s iPhone – which they are now desperately (and unsuccessfully) trying to emulate. In fact, this marks the third time in less than a decade that Apple or another US innovator has come up with a successful product way ahead of Japanese electronics manufacturers, even though the latter had the technological capabilities required to produce it long before Apple. The first episode was Sony’s inability to bring to market a successful digital music player (a category which everyone expected Sony to own, as a natural extension of its widely successful Walkman), largely because of an inadequate content business model. This left the gate wide open for Apple’s iPod/iTunes combination to take over the market starting in 2001. The second episode also involved Sony, this time in the market for electronic book readers. Although Sony was the first to commercialize a device based on the underlying electronic ink technology, its eBook reader (launched in 2005) was largely a failure due – yet again – to an inadequate content business model. Instead, it was Amazon’s Kindle – launched two years later – that has come to dominate the category.

There is a common and simple lesson here, which seems to have repeatedly eluded Japanese electronics manufacturers in general and handset makers in particular. Hardware and monozukuri have become subordinate to software when it comes to most digital devices: the latter are no longer pure products but in large part services, in which software plays the key role. It is worth noting that more than 90% of the hardware parts in Apple’s iPods and iPhones come from Asia – the most sophisticated components from Japan. Apple’s only – but essential – innovations are in the user interface and underlying software (QuickTime and iTunes), which allow it to extract most of the value.
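The cost-spreading advantage of a shared software platform, which the discussion above argues Japanese handset makers forgo, can be shown with back-of-the-envelope arithmetic. The development cost figures below are assumptions chosen purely for illustration: building every model’s software from scratch scales linearly with the size of the line-up, whereas a shared platform concentrates most of the cost in a one-off investment plus a thin per-model adaptation layer.

# Back-of-the-envelope comparison of per-model software cost with and without
# a shared platform. All cost figures are hypothetical (e.g. millions of dollars per year).

def custom_stack_cost(models, per_model_software_cost=40):
    """Every model is built 'from scratch': cost scales linearly with the line-up."""
    return models * per_model_software_cost

def shared_platform_cost(models, platform_cost=120, per_model_adaptation_cost=8):
    """One shared platform plus a thin adaptation layer per model."""
    return platform_cost + models * per_model_adaptation_cost

if __name__ == "__main__":
    for models in (3, 10, 30):
        custom = custom_stack_cost(models)
        shared = shared_platform_cost(models)
        print(f"{models:>2} models: custom stacks = {custom:>5}, "
              f"shared platform = {shared:>5}, "
              f"per-model gap = {(custom - shared) / models:+.1f}")

With only a handful of models the custom approach can even look cheaper, which may help explain why the bias persisted; under these assumptions the gap opens up as the portfolio and the number of target markets grow.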
Although Sony and other Japanese companies clearly understand the importance of content (most visible in the recent Blu-ray vs. HD-DVD format war), they still have not matched Apple, Amazon and others in the ability to merge services, manufacturing and content. It is thus an unsettling paradox (and presumably a frustrating one for the handset makers themselves) that Japanese cell phone manufacturers do so poorly in international markets, where phones are so basic compared to those sold in Japan. The explanation is, however, straightforward: it is not deep technical expertise that matters most; instead, the key capabilities required are brand power, the ability to adapt in order to serve local preferences (sales and marketing savvy), and cost competitiveness. Those are the attributes that have made Nokia, Samsung and Motorola so successful in international markets – and those are the ones which Japanese manufacturers lack the most. It is more important to obtain economies of scale in standardized parts – through outsourcing and reliance on widely available software platforms – than to build ultra-sophisticated, customized phones.

Some observers argue that the peculiar demands of Japanese consumers drew handset makers into making products that do not sell well in the rest of the world. In our view, this is not a convincing excuse: Nokia, Motorola and Samsung were all able to conquer international markets with very different demand characteristics than the ones they faced in their respective home markets. Take the Chinese market for instance: one could argue that Japanese manufacturers should have an advantage over their Western rivals in China, given their experience with ideogram-based characters and the common cultural roots. But even there, Japanese cell-phone makers have struggled mightily. Today, the top three cell-phone makers in China are Nokia with a 30% market share, Motorola with 18.5%, and Samsung with 10.8%. None of the Japanese makers has more than 1%, and they are behind a number of domestic Chinese manufacturers.

Present situation

Unfortunately, it took the current economic recession, combined with the saturation of the domestic mobile user market, for Japan’s cell-phone manufacturers to realize that their competitive position is profoundly vulnerable and unsustainable. New mobile phone sales in Japan were down 20% in 2008 (compared to 2007) and are expected to decrease even further in 2009. The new government policy requiring operators to clearly distinguish the price of the handset from the price of the service plan has significantly contributed to the drop in new phone sales. Having become aware of the high prices of the handsets, Japanese consumers have naturally reduced the frequency with which they upgrade to new phones. The Japanese mobile phone industry faces two additional challenges: the decline in the number of teenagers and young adults (down 6.6% for ages 15-24 from 2010 to 2020) due to low fertility, and the arrival of high-performance foreign products, such as the iPhone, Android-powered devices, and BlackBerries. The slowdown in domestic sales has had two effects. One is a much needed consolidation and shakeout among handset manufacturers: NEC, Hitachi and Casio have merged their mobile phone units as of September 2009, while Sanyo and Mitsubishi are exiting the business altogether. The second is a much stronger urgency to seek opportunities abroad.
Sharp and Panasonic, the domestic market leaders, have both embarked on ambitious plans to expand their business in China, a market where Japanese handset makers have been notoriously unsuccessful (as mentioned above). These setbacks might turn out to be a welcome wake-up call for Japan's handset makers by providing sufficient incentives (and urgency) to develop competitive advantage in serving markets other than Japan's. That requires breaking free from the subservience to mobile operators and from a model which has worked well (too well) in Japan.

4. Discussion and policy implications

"Inefficient" and self-sustaining industry structures

As we have noted, Japanese industry is surely capable of innovation, but it operates in an environment that is not conducive to mobilizing the innovative capabilities of soft goods and service sector businesses, especially in the international arena. Fundamentally, this stems from a mismatch between the country's vertical and hierarchical industrial organizations and the horizontal, ecosystem-based structures prevailing in "new economy" sectors. The former have proven very efficient in pursuing manufacturing perfection ("kaizen monozukuri") – a domain in which Japan has excelled. As we have argued in section 2, however, the latter have been the far more effective form of "industry architecture" for driving innovation in most of today's technology industries, on which services and soft goods rely.

This mismatch makes the current organization and performance of some Japanese sectors appear stuck in inefficient equilibria. Indeed, one important common denominator across the three industry case studies presented above is the prevalence of self-reinforcing mechanisms which have locked the corresponding sectors into highly path-dependent structures. The weakness (or, more precisely, virtual absence) of Japan's software industry has been perpetuated by large computer system suppliers which locked their customers early on into proprietary and incompatible hardware-software systems; as a result, these customers have always found it in their best interest to deepen the customization and rely on the same suppliers for more proprietary systems. Absent any external shock (or public policy intervention), it is hard to see a market opportunity for potential Japanese software companies. In animation, production committees have established a bottleneck over the financing of animation projects, which allows them to obtain most of the copyrights, which in turn deprives anime production companies of the revenues that would enable them to invest in producing their own projects and acquire the corresponding intellectual property rights. Of course, this bottleneck has been perpetuated by the absence of alternative forms of financing: bank loans (Japanese financial institutions have had a long-standing reluctance to invest in businesses with only "soft" collateral) and venture capital (an industry which remains strikingly underdeveloped in Japan). Finally, the wireless communications sector in Japan has developed a top-down way of innovating, in which the mobile operators control end-customers and dictate terms to handset manufacturers, which in turn have never had sufficient incentives to develop their own marketing and independent R&D capabilities.27

27 I.e., R&D at the mobile service level, as opposed to R&D that simply pushes handset technology while taking the level of innovation in services and the corresponding standards as exogenously given.

The second aspect that needs to be emphasized is that the hierarchical forms of industrial organization that prevail in some Japanese sectors are not uniformly less innovative than the more horizontal modes of organization.
By subordinating everyone to the "ecosystem leaders" (i.e., the companies at the top of the industry structure), however, hierarchical structures can create large inefficiencies by preventing companies at lower levels of the hierarchy from capitalizing on their innovations outside of the vertical structure – in particular, in global markets. Indeed, while software has clearly been the Achilles' heel of Japan's high-tech and service sectors, animation and mobile telephony are two industries in which Japan has innovated arguably more than any other country in the world. The problem there is that the "ecosystem leaders" – production committee members such as TV stations and, respectively, mobile operators – have Japan-centric interests (television stations and mobile phone service are essentially local businesses due to regulations). This ends up confining the other members of the ecosystems to the domestic market, when in fact their relevant markets are (or should be) global. Of course, in contexts in which the leader is a globally minded company, such as Sony or Toyota, all members of the ecosystem benefit. But those situations are the exception rather than the norm.

Policy measures to break from inefficient industry structures

Extrapolating from the three case studies above, there are several initiatives which Japanese policy-makers could take to remedy the issue of inefficient industry structures.

First, despite recent improvements, Japan remains deficient in antitrust enforcement. Monopolies and oligopolies are particularly harmful in industries where there is a need for constant and fast innovation. The self-reinforcing mechanisms we described earlier (augmented by the importance of established, long-term relationships in Japan) create high barriers to entry in most Japanese industries, which protect incumbents and make it harder for Japanese innovators to succeed. Related to the question of oligopolies and monopolies is the issue of ease of entry and exit. If there is one lesson from Silicon Valley which Japanese policy-makers should take to heart, it is that both the birth and the death rate of businesses there are extremely high – as they should be in innovative sectors. This requires not only effective bankruptcy procedures, but also financing mechanisms that accept high rates of failure, liquid employment markets (for those who lose their jobs when their employer goes out of business), and a socio-cultural environment that favors risk-taking without denigrating those who have failed – sometimes several times – in their quest for entrepreneurial success. For example, in the US, one essential catalyst of the PC era and the rise of Microsoft and other software platforms was the unbundling of IBM – the result of antitrust intervention. There was no such intervention in Japan to break the stranglehold of the large computer system manufacturers and enable the entry of smaller, innovative software companies. Similarly, as we noted earlier in this paper, antitrust has placed significant constraints on Microsoft's ability to extend its PC OS monopoly power to the Internet and/or mobile telecommunications. The objective was to make sure the emergence of new software ecosystems and platforms was not stifled.
As it has grown more dominant, Google must now also take into account the risk of antitrust prosecution. This forces it to tread more carefully in its dealings with partners and potential competitors in online search and advertising than it might otherwise do if the antitrust regime were weaker.

Second, the development of new industries based on ecosystems which are not defined by hierarchical relationships requires a strengthening of the legal system in fields other than antitrust. In hierarchical keiretsu systems, the controlling corporation (or corporations) sitting at the top of the pyramid performs arbitration and enforcement functions for the entire ecosystem. Since what is good for the ecosystem is – usually – good for them, they have a built-in incentive to make good decisions, though in some cases the interests of smaller players might be at risk. However, this cannot be a sustainable substitute for developing a legal infrastructure which supports and encourages innovation and entrepreneurship. In the more flexible and non-hierarchical ecosystems which define many of the innovative industries we have discussed, there is a need for effective third-party enforcement. In the United States, this is performed by civil courts which can adjudicate contractual disputes, and in some cases may involve criminal law, for example in the case of antitrust violations. In Japan, these mechanisms are less well developed. Despite changes to the regulations pertaining to the bar exam, there is still a shortage of attorneys. Moreover, the entire economy has historically been less reliant on legal remedies, leaving the legal system underdeveloped in this area. There is, both in the United States and abroad, a mistaken view that the US system breeds too many lawyers and too much litigation. While it may be true that frivolous class action lawsuits hurt the economy, it is America's rich legal infrastructure that lubricates the wheels of its innovation industry.

Third, and also part of the legal-system remedies, is the enforcement of intellectual property rights (IPRs). This is perhaps the key institutional ingredient for innovation, especially in the soft goods sector. For many businesses in these industries, IPRs are their main asset, in some cases their only one. Japan's weak IPR regime undermines the balance sheet of innovative companies, makes it harder for them to obtain financing, and diminishes their bargaining power. Animation is a case in point: the production committees have emerged to fill the institutional gap left by weak recognition and enforcement of copyrights, protections that would otherwise enable anime production companies to finance themselves and develop their own projects.

Fourth, venture capital markets, despite some efforts, remain underdeveloped in Japan, which presents an additional hurdle for small companies trying to break away from constraining industry organizations (e.g., animation). Unlike antitrust and IPRs, this is an area where government action in itself cannot resolve the entire problem. However, the regulatory regime can be altered to make it easier for the venture capital industry to grow faster in Japan.

Finally, a necessary policy measure is to further open the country to foreign investment. The difficulty which foreign investors face in Japan deprives innovative Japanese companies of equity and business partners, further locking them into domestic ecosystems which may stifle their development.
It also makes it harder for Japanese companies to succeed overseas, since foreign investors could help them capture markets outside of Japan.

5. Conclusions

Japan presents a unique case of industrial structures which have produced remarkable innovations in certain sectors, but which seem increasingly inadequate to produce innovation in modern technology industries, which rely essentially on horizontal ecosystems of firms producing complementary products. As our three case studies of software, animation and mobile telephony illustrate, there are two potential sources of inefficiency that this mismatch can create. First, the Japanese hierarchical industry organizations can simply "lock out" certain types of innovation indefinitely by perpetuating established business practices: this is the case with software, an industry from which Japan is almost entirely absent. Second, even when the vertical hierarchies produce highly innovative sectors in the domestic market – as is the case with animation and wireless mobile communications – the exclusively domestic orientation of the "hierarchical industry leaders" can entail large missed opportunities for other members of the ecosystem, who are unable to fully exploit their potential in global markets.

We have argued that improving Japan's ability to capitalize on its innovations will require certain policy measures aimed at altering the legislation and incentives that stifle innovation: strengthening the enforcement of antitrust and intellectual property rights, strengthening the legal infrastructure (e.g., related to contractual disputes), and lowering barriers to entry for foreign investment. Private sector initiative is also critical, which requires the development of the venture capital sector, a key and necessary ingredient for stimulating innovation in modern industries. Understanding the nature of the new innovation-producing ecosystems which have developed in industries associated with the new economy (software, the Internet and mobile communications) will help Japanese policy-makers and managers develop better ways for Japanese business to take advantage of its existing strengths and to expand innovation beyond the industrial sphere into the realm of internationally competitive service and soft goods enterprises.
Rogelio Oliva and Noel Watson. Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.

Managing Functional Biases in Organizational Forecasts: A Case Study of Consensus Forecasting in Supply Chain Planning

Rogelio Oliva, Mays Business School, Texas A&M University, College Station, TX 77843-4217. Ph 979-862-3744 | Fx 979-845-5653. roliva@tamu.edu
Noel Watson, Harvard Business School, Soldiers Field Rd., Boston, MA 02163. Ph 617-495-6614 | Fx 617-496-4059. nwatson@hbs.edu

Draft: December 14, 2007. Do not quote or cite without permission from the authors.

Abstract

To date, little research has been done on managing the organizational and political dimensions of generating and improving forecasts in corporate settings. We examine the implementation of a supply chain planning process at a consumer electronics company, concentrating on the forecasting approach around which the process revolves. Our analysis focuses on the forecasting process and how it mediates and accommodates the functional biases that can impair forecast accuracy. We categorize the sources of functional bias into intentional, driven by misalignment of incentives and the disposition of power within the organization, and unintentional, resulting from informational and procedural blind spots. We show that the forecasting process, together with the supporting mechanisms of information exchange and elicitation of assumptions, is capable of managing the potential political conflict and the informational and procedural shortcomings. We also show that the creation of an independent group responsible for managing the forecasting process, an approach that we distinguish from generating forecasts directly, can stabilize the political dimension sufficiently to enable process improvement to be steered. Finally, we find that while a coordination system—the relevant processes, roles and responsibilities, and structure—can be designed to address existing individual and functional biases in the organization, the new coordination system will in turn generate new individual and functional biases. The introduced framework of functional biases (whether those biases are intentional or not), the analysis of the political dimension of the forecasting process, and the idea of a coordination system are new constructs to better understand the interface between operations management and other functions.

Keywords: forecasting, marketing/operations interface, sales and operations planning, organizational issues, case/field study.

1. Introduction

The importance of forecasting for operations management cannot be overstated.
Within the firm, forecast generation and sharing are used by managers to guide the distribution of resources (Antle and Eppen, 1985; Stein, 1997), to provide targets for organizational efforts (Hamel and Prahalad, 1989; Keating et al., 1999), and to integrate the operations management function with the marketing (Crittenden et al., 1993; Griffin and Hauser, 1992), sales (Lapide, 2005; Mentzer and Bienstock, 1998), and product development (Griffin and Hauser, 1996; Wheelwright and Clark, 1992) functions. Errors in forecasting often cross the organizational boundary and translate into misallocation of resources that can impact shareholders' return on investment (Copeland et al., 1994) and affect customers' perception of service quality (Oliva, 2001; Oliva and Sterman, 2001). Across the supply chain, forecast sharing is a prevalent practice for proactively aligning capacity and managing supply (Cachon and Lariviere, 2001; Terwiesch et al., 2005).

Over the past five years, demand/supply planning processes for planning horizons in the intermediate range have been receiving increasing attention, especially as the information technology originally intended to facilitate this planning has achieved limited success. Cross-functional coordination among groups such as sales, operations, and finance is needed to ensure the effectiveness of some of these planning processes and the forecasting that supports them. Such processes have been referred to in the managerial literature as sales and operations planning (S&OP) processes (Bower, 2005; Lapide, 2005). Forecasts within this multi-functional setting that characterizes many organizations cannot be operationalized or analyzed in an organizational and political vacuum. However, to date, little research has been done on managing the organizational and political dimensions of generating and improving forecasts in corporate settings, dimensions which significantly determine the overall effectiveness of the forecasting process (Bretschneider and Gorr, 1989, p. 305).

We present a case study that illustrates the implementation of an S&OP process, concentrating in detail on the forecasting approach around which the planning process revolves. Our study describes how individuals and functional areas (whether intentionally or not) biased the organizational forecast and how the forecasting process implemented managed those biases in a supply chain setting that requires responsive planning. We define biases broadly here to include those occasioned by functional and individual incentives, and informational or procedural shortcomings. Our analysis reveals that the forecasting process, together with the supporting mechanisms of information exchange and elicitation of assumptions, is capable of managing the political conflict and the informational and procedural shortcomings that arise from organizational differentiation. We show that the creation of an independent group responsible for managing the forecasting process can stabilize the political dimension sufficiently to enable process improvement to be steered. The deployment of a new system, however, introduces entirely new dynamics in terms of influence over forecasts and active biases. The recognition that the system both needs to account for, and is in part responsible for, partners' biases introduces a level of design complexity not currently acknowledged in the literature or by practitioners.
The rest of this paper is structured as follows. In section 2, we review the relevant forecasting literature, motivating the need for our case study and articulating hypotheses for findings in our research setting. Our research site and methodological design are described in section 3. In section 4 we report the conditions that triggered the deployment of the forecasting process, assess its impact on the organization, and describe the process, its actors, and dynamics in detail. Section 5 contains the core of our analysis: we analyze the organizational and process changes that were deployed, and assess how intentional and unintentional biases in the organization were managed through these mechanisms. Some of the challenges the organization faces under the new forecasting process are explored in section 6, which also provides a framework for understanding the need to continuously monitor and adapt the processes. The paper concludes with an evaluation of the implications of our findings for practitioners and researchers.

2. Research Motivation

Most organizations use forecasts as input to comprehensive planning processes—such as financial planning, budgeting, sales planning, and finished goods inventory planning—that are charged with accomplishing particular goals. This implies that the forecast needs not only to be accepted by external parties, but also to guide the efforts of the organization. Thus, an important measure of forecast effectiveness is how well forecasts support these planning needs. The fit between forecasting and planning is an under-studied relationship in the literature, but at a minimum the forecasting process needs to match the planning process in terms of the frequency and speed with which the forecast is produced. The forecasting horizon and the accuracy of the forecast should be such that they allow the elaboration and execution of plans that take advantage of the forecast (Makridakis et al., 1998; Mentzer and Bienstock, 1998). For example, a planning approach such as Quick Response (Hammond, 1990) requires as input a sense of the uncertainty surrounding the forecasts in order to manage production. Thus, the forecasting process complementing such a planning process should have a means of providing a relative measure of uncertainty (Fisher et al., 1994; Fisher and Raman, 1996).

Nevertheless, forecasting is not an exact science. In an organizational setting, the forecasting process requires information from multiple sources (e.g., intelligence about competitors, marketing plans, channel inventory positions, etc.) and in a variety of formats, not always amenable to integration and manipulation (Armstrong, 2001b; Fildes and Hastings, 1994; Lawrence et al., 1986; Makridakis et al., 1998). Existing case studies in the electronics and financial industries (e.g., Hughes, 2001; Watson, 1996) emphasize the informational deficiency in creating organizational forecasts as a result of poor communication across functions. The multiplicity of data sources and formats creates two major challenges for a forecasting process. First, since not all information can be accurately reflected in a statistical algorithm, judgment calls are a regular part of forecasting processes (Armstrong, 2001a; Sanders and Manrodt, 1994; Sanders and Ritzman, 2001). The judgmental criteria used to make, adjust, and evaluate forecasts can result in individual and functional limitations and biases that potentially compromise the quality of the forecasts.
Second, since the vast majority of the information providers and the makers of those judgment calls are also the users of the forecast, there are strong political forces at work explicitly attempting to bias the outcome of the process. Thus the forecasting process, in addition to fitting the organization's planning requirements, needs to explicitly manage the biases (whether individual or functional) that might affect the outcome of the process. We recognize two potential sources of bias in the organization — intentional and unintentional — that incorporate the judgmental, informational, and political dynamics that affect forecasting performance. In the following subsections, we provide analytical context from the relevant literature to articulate frameworks and expectations that will help the reader assimilate the case details along these two dimensions.

2.1 Managing Biases due to Incentive Misalignment and Dispositions of Power

Intentional sources of bias (i.e., an inherent interest and ability to maintain a level of misinformation in the forecasts) are created by incentive misalignment across functions coupled with a particular disposition of power within the organization. Local incentives will drive different functional groups to want to influence the forecast process in directions that might benefit their own agenda. For example, a sales department — compensated through sales commissions — might push to inflate the forecast to ensure ample product availability, while the operations group — responsible for managing suppliers, operating capacity, and inventories — might be interested in a forecast that smooths demand and eliminates costly production swings (Shapiro, 1977). Power is the ability of a functional group to influence the forecast, and is normally gained through access to a resource (e.g., a skill, information) that is scarce and valued as critical by the organization; the ability to leverage such resources is contingent on the degree of uncertainty surrounding the organizational decision-making process (Salancik and Pfeffer, 1977). For example, the power that a sales organization could extract from intimate knowledge of customer demand diminishes as that demand becomes stable and predictable to the rest of the organization. Mahmoud et al. (1992), in discussing the gap between forecasting theory and practice, refer in particular to the effects of disparate functional agendas and incentives as the political gap, while according to Hanke and Reitsch (1995) the most common source of bias in a forecasting context is political pressure within a company. Thus, forecasts within a multi-functional setting cannot be operationalized or analyzed in an organizational and political vacuum. As sources of incentive misalignment and contributors to the dispositions of power within the organization, disparate functional agendas and incentives, standardized organizational decision-making processes, and shared norms and values all have an impact on the forecasting process and forecast accuracy (Bromiley, 1987). However, most of the academic literature examines only the individual and group unintentional biases that can affect forecasting ex situ (Armstrong, 2001a), with little research directed at managing the multi-objective and political dimensions of forecast generation and improvement in corporate settings (Bretschneider and Gorr, 1989; Deschamps, 2004).
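To make the notion of intentional functional bias concrete, the following sketch shows one simple way such bias could be surfaced from historical data: computing the signed mean percentage error of the forecasts each function has submitted. The function names and numbers are hypothetical and are not drawn from the Leitax case; they merely illustrate the inflation and smoothing patterns described above.

    def mean_percentage_error(forecasts, actuals):
        """Signed bias measure: positive values indicate systematic over-forecasting."""
        return sum((f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals)

    # Hypothetical quarterly actuals and the forecasts submitted by each function
    actuals = [1000, 1100, 950, 1200]
    submitted = {
        "sales":      [1150, 1250, 1100, 1380],  # inflated to ensure product availability
        "operations": [1020, 1040, 1030, 1060],  # smoothed to avoid costly production swings
        "finance":    [1010, 1120, 960, 1190],
    }

    for function, forecasts in submitted.items():
        print(f"{function:10s} bias = {mean_percentage_error(forecasts, actuals):+.1%}")

Run over a hypothetical history like this, a persistently positive bias for sales and a flattened pattern for operations would corroborate, rather than prove, the kind of intentional distortion discussed above.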
Research on organizational factors and intentional sources of bias in forecasting has been done in the public sector, where political agendas are explicit. This research suggests that directly confronting differences in goals and assumptions increases forecast accuracy. Bretschneider and Gorr (1987) and Bretschneider et al. (1989) found that a state's forecast accuracy improved if forecasts were produced independently by the legislature and the executive, and then combined through a formal consensus procedure that exposed political positions and forecast assumptions. Deschamps (2004) found forecast accuracy to be improved by creating a neutral negotiation space and an independent political agency with dedicated forecasters to facilitate the learning of technical and consensus forecasting skills. As different organizational functions have access to diverse commodities of power (e.g., sales has unique access to current customer demand), we recognize that each group will have its own ways to influence the outcome of the forecasting process. The process through which groups with different interests reach accommodation ultimately rests on this disposition of power and is referred to in the political science and management literatures as a political process (Crick, 1962; Dahl, 1970; Pfeffer and Salancik, 1974; Salancik and Pfeffer, 1977). In forecasting, a desirable outcome of well-managed political contention would be a process that enables the known positive influences on forecast accuracy while weakening the negative ones. That is, a politically savvy process should take into consideration the commodities of power owned by the different functional areas and the impact that they might have on forecast accuracy, and explicitly manage the disposition of power to minimize negative influences on forecast accuracy.

2.2 Abating Informational and Procedural Blind Spots

Although functional goals and incentives can translate into intentional efforts to bias a forecast, other factors can affect forecasts in ways of which managers might not be aware. Thus, we recognize unintentional, but systematic, sources of forecast error resulting from what we term blind spots: ignorance in specific areas which negatively affects an individual's or group's forecasts. Blind spots can be informational — related to an absence of otherwise feasibly collected information on which a forecast should be based — or procedural — related to the algorithms and tasks used to generate forecasts given the information available. This typology is an analytic one; the types are not always empirically distinct. Some informational blind spots could result from naiveté in forecasting methodology (a procedural blind spot) that does not allow the forecaster to use the available information. Yet, while the two types may intermingle in an empirical setting, they tend to derive from different conditions and require different countermeasures. We expect, then, that a forecasting process should try to manage the informational and procedural blind spots that may exist for the process. Some individual biases that have been shown to affect subjective forecasting include overconfidence, availability, anchoring and adjustment, and optimism (Makridakis et al., 1998). Forecasters, even when provided with statistical forecasts as guides, have difficulty assigning less weight to their own forecasts (Lim and O'Connor, 1995).
Cognitive information-processing limitations and other biases related to the selection and use of information can also compromise the quality of plans. Gaeth and Shanteau (1984), for example, showed that irrelevant information adversely affected judgment, and Beach et al. (1986) showed that when the information provided is poor, forecasters might expend little effort to ensure that forecasts are accurate. Such individual biases can affect both the quality of the information collected and used to infer forecasts (informational blind spots) and the rules of inference themselves (procedural blind spots).

Research suggests process features and processing capabilities that might potentially mitigate the effect of individual biases. For example, combining forecasts with other judgmental or statistical forecasts tends to improve forecast accuracy (Lawrence et al., 1986). Goodwin and Wright (1993) summarize the research and empirical evidence that supports six strategies for improving judgmental forecasts: using decomposition, improving forecasters' technical knowledge, enhancing data presentation, mathematically correcting biases, providing feedback to forecasters to facilitate learning, and combining forecasts or using groups of forecasters. Group forecasting is thought to contribute two important benefits to judgmental forecasting: (1) broad participation in the forecasting process maximizes group diversity, which reduces political bias and the tendency to cling to outmoded assumptions, assumptions that can contribute to both procedural and informational blind spots (Voorhees, 2000), and (2) the varied people in groups enrich the contextual information available to the process, reducing informational blind spots and thereby improving the accuracy of forecasts (Edmundson et al., 1988; Sanders and Ritzman, 1992). Some researchers maintain that such variety is even useful for projecting the expected accuracy of forecasts (Gaur et al., 2007; Hammond and Raman, 1995).

Group dynamics can, however, have unwanted effects on the time to achieve consensus, the quality of consensus (whether true agreement or acquiescence), and thus the quality of the forecasts. Kahn and Mentzer (1994), who found that a team approach led to greater satisfaction with the forecasting process, also reported mixed results regarding the benefits of group forecasting. Dysfunctional group dynamics reflect group characteristics such as the participants' personal dynamics, politics, information asymmetries, differing priorities, and varying information assimilation and processing capabilities. Group processes can vary in terms of the degree of interaction afforded participants and the structure of the rules for interaction. The most popular structured, non-interacting group forecasting approach is the Delphi method, wherein a group's successive individual forecasts elicit anonymous feedback in the form of summary statistics (Rowe and Wright, 2001). Structured interacting groups, those with rules governing interaction, have not been found to perform significantly worse than groups that use the Delphi method (Rowe and Wright, 1999). However, Ang and O'Connor (1991) found that modified consensus (in which an individual's forecast was the basis for the group's discussion) outperformed forecasts based on the group mean, consensus, and the Nominal Group Technique (Delphi with some interaction).
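Two of the strategies summarized by Goodwin and Wright (1993), mathematically correcting biases and combining forecasts, are straightforward to illustrate. The sketch below is a minimal example rather than a method described in the literature reviewed here: it rescales a judgmental forecast by a multiplicative bias factor estimated from past performance and then averages it with a statistical forecast using equal weights. The numbers, the equal weighting, and the multiplicative form of the correction are all assumptions made for illustration.

    def bias_factor(past_forecasts, past_actuals):
        """Multiplicative correction; values below 1 indicate past over-forecasting."""
        return sum(past_actuals) / sum(past_forecasts)

    def combined_forecast(statistical, judgmental, past_judgmental, past_actuals):
        corrected = judgmental * bias_factor(past_judgmental, past_actuals)
        return 0.5 * statistical + 0.5 * corrected  # simple equal-weight combination

    # Hypothetical history for the judgmental source and next-period forecasts
    past_judgmental = [1150, 1250, 1100, 1380]
    past_actuals = [1000, 1100, 950, 1200]
    print(combined_forecast(statistical=1080, judgmental=1300,
                            past_judgmental=past_judgmental, past_actuals=past_actuals))

Even this crude combination captures the logic behind the research cited above: the corrected judgmental input contributes contextual knowledge while its systematic distortion is dampened before the two sources are pooled.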
2.3 Conclusions from Review

The above review suggests that while the current academic literature recognizes the need for an understanding of the organizational and political context in which the forecasting process takes place, it still lacks the operational and organizational frameworks for analyzing the generation of organizational forecasts. Our research aims to address this shortcoming by developing insights into managing the impact of the organizational and political dimensions of forecasting. The literature does lead us to expect a forecasting process to be attuned to the organizational and political context in which it operates, to be based on a group process, to combine information and forecasts from multiple sources, and to be deliberate about the way it allows different interests to affect forecast accuracy. We opted to explore this set of issues through a case study, since the forecasting process has not been analyzed previously from this perspective and our interest is to develop the constructs to understand its organizational and political context (Meredith, 1998). We consequently focus our analysis not on the forecast method (the specific technique used to arrive at a forecast), but on the forecasting process, that is, the way the organization has systematized information gathering, decision-making, and communication activities, and the organizational structure that supports that process.

3. Research Methodology

3.1 Case Site

The case site is a northern California-headquartered consumer electronics firm called Leitax (the name has been disguised) that sold its products primarily through retailers such as Best Buy and Target and operated distribution centers (DCs) in North America, Europe, and the Far East. The Leitax product portfolio consisted of seven to nine models, each with multiple SKUs, that were produced by contract manufacturers with plants in Asia and Latin America. The product life across the models, which was contracting, ranged from nine to fifteen months, with high-end, feature-packed products tending to have the shortest product lives. The site was chosen because, prior to the changes in the forecasting process, the situation was characterized by shortcomings along the two dimensions described above. That is, the forecasting process was characterized by informational and procedural blind spots and was marred by intentional manipulation of information to advance functional agendas. The case site represents an exemplar for the study of the management of these dimensions, and constitutes a unique opportunity to test the integration of the two strands of theory that make explicit predictions about unintentional and intentional biases (Yin, 1984). The forecasting approach introduced was considered at least reasonably successful by many of the organizational participants, and its forecasting accuracy, along with accompanying improvements in operational indicators (e.g., inventory turns, obsolescence), corroborates this assessment. The issues and dynamics addressed by the implementation of the participatory forecasting process are not unique to Leitax, but characterize a significant number of organizations. Thus, the site provides a rich setting in which to seek to understand the dynamics involved in managing an organizational forecasting process and from which we expect to provoke theory useful for academics and practitioners alike.
Our case study provides one reference for managing these organizational forecasts within an evolving business and operations strategy. As such, it does more to suggest potential relationships, dynamics, and solutions than to definitively define or propose them.
3.2 Research Design
Insights were derived primarily from an intensive case study (Eisenhardt, 1989; Yin, 1984) with the following protocol: the research was retrospective; the primary initiative studied, although evolving, was fully operational at the time the research was undertaken. Data were collected through 25 semi-structured, 45- to 90-minute interviews conducted with leaders, analysts, and participants from all functional areas involved in the forecasting process, as well as with heads of other divisions affected by the process. The interviews were supplemented with extensive reviews of archival data, including internal and external memos and presentations, and direct observation of two planning and forecasting meetings. The intent of the interviews was to understand the interviewees' role in the forecasting process and their perception of the process, and to explore explicitly the unintentional biases due to blind spots as well as the political agendas of the different actors and functional areas. To assess the political elements of the forecasting process, we explicitly asked interviewees about their incentives and goals. We then triangulated their responses with answers from other actors and asked for explanations for observed behavior during the forecasting meetings. When appropriate, we asked interviewees about their own and other parties' sources of power, i.e., the commodity through which they obtained the ability to influence an outcome—e.g., formal authority, access to important information, external reputation (Checkland and Scholes, 1990). Most interviews were conducted in the organization's northern California facility, with some follow-up interviews done by telephone. Given the nature of the research, interviewees were not required to stay within the standard questions; interviewees perceived to be exploring fruitful avenues were permitted to continue in that direction. All interviews were recorded. Several participants were subsequently contacted and asked to elaborate on issues they had raised or to clarify comments. The data are summarized in the form of a detailed case study that relates the story of the initiative and current challenges (Watson and Oliva, 2005). Feedback was solicited from the participants, who were asked to review their quotations and the case for accuracy. The analysis of the data was driven by three explicit goals: first, to understand the chronology of the implemented changes and the motivation behind those changes (this analysis led to the realization of the mistrust across functional areas and the perceived biases that hampered the process); second, to understand and document the implemented forecasting process, the roles that different actors took within the process, and the agreed values and norms that regulated interactions within the forecasting group; and third, to assess how different elements of the process addressed or mitigated the individual or functional biases identified.
4. Forecasting at Leitax
The following description of the consensus forecasting process at Leitax was summarized from the interviews with the participants of the process.
The description highlights the political dimension of the situation at Leitax by describing the differing priorities of the different functional groups and how power to influence the achievement of those priorities was expressed.
4.1 Historical and Organizational Context
Prior to 2001, demand planning at Leitax was ill-defined, with multiple private forecasts the norm. For new product introductions and mid-life product replenishment, the sales directors (Leitax employed sales directors for three geographical regions—the Americas; Europe, the Middle East, and Africa; and Asia Pacific—and separate sales directors for Latin America and Canada) made initial forecasts that were informally distributed to the operations and finance groups, sometimes via discussions in hallways. These shared forecasts were intended to be used by the operations group as guides for communicating build or cancel requests to the supply chain. The finance group, in turn, would use these forecasts to guide financial planning and monitoring. These sales forecasts, however, were often mistrusted or second-guessed when they crossed into other functional areas. For example, with inventory shortages as its primary responsibility, the operations group would frequently generate its own forecasts to minimize the perceived exposure to inventory discrepancies, and marketing would do likewise when it anticipated that promotions might result in deviations from sales forecasts. While the extent of bias in the sales forecast was never clearly determined, the mere perception that sales had an incentive to maintain high inventory positions in the channel was sufficient to compromise the credibility of its forecasts. Sales might well have intended to communicate accurate information to the other functions, but incentives to achieve higher sell-in rates tainted the objectivity of its forecasting, which occasioned the other functions' distrust and consequent generation of independent forecasts. Interviewees, furthermore, suspected executive forecasts to be biased by goal-setting pressures, operational forecasts to be biased by inventory liability and utilization policies, and finance forecasts to be biased by market expectations and profitability thresholds. These biases stem from what are believed to be naturally occurring priorities of these functions. Following two delayed product introductions that resulted in an inventory write-off of approximately 10% of FY01-02 revenues, major changes were introduced during the fall of 2001, including the appointment of a new CEO and five new vice presidents for product development, product management, marketing, sales, and operations. In April 2002, the newly hired director of planning and fulfillment launched a project with the goal of improving the velocity and accuracy of planning information throughout the supply chain. Organizationally, management and ownership of the forecasting process fell to the newly created Demand Management Organization (DMO), which had responsibility for managing, synthesizing, challenging, and creating demand projections to pace Leitax's operations worldwide. The three analysts who comprised the group, which reported to the director of planning and fulfillment, were responsible not only for preparing statistical forecasts but also for supporting all the information and coordination requirements of the forecasting process.
By the summer of 2003, a stable planning and coordination system was in place, and by the fall of 2003, Leitax had realized dramatic improvements in forecasting accuracy. Leitax defined forecast accuracy as one minus the ratio of the absolute deviation of sales from forecast to the forecast (FA = 1 - |sales - forecast|/forecast). Three-month-ahead sell-through (sell-in) forecast accuracy improved from 58% (49%) in the summer of 2002 to 88% (84%) by fall 2003 (see Figure 1). Sell-in forecasts refer to expected sales from Leitax's DCs into its resellers, and sell-through forecasts refer to expected sales from the resellers. Forecast accuracy through '05 was sustained at an average of 85% for sell-through. Better forecasts translated into significant operational improvements: Inventory turns increased to 26 in Q4 '03 from 12 the previous year, and average on-hand inventory decreased from $55M to $23M. Excess and obsolescence costs decreased from an average of $3M for fiscal years 2000-2002 to practically zero in fiscal year 2003. The different stages of the forecasting process are described in detail in the next section.
4.2 Process Description
By the fall of 2003, a group that included the sales directors and the VPs of marketing, product strategy, finance, and product management was consistently generating a monthly forecast. The process, depicted in Figure 2, begins with the creation of an information package, referred to as the business assumptions package, from which functional forecasts are created. These forecasts are combined and discussed at consensus forecasting meetings until there is a final forecast upon which there is agreement.
Business Assumptions Package
The starting point for the consensus forecasting process, the business assumptions package (BAP), contained price plans for each SKU, intelligence about market trends and competitors' products and marketing strategies, and other information of relevance to the industry. The product planning and strategy, marketing, and DMO groups guided assessments of the impact of the information on future business performance entered into the BAP (an Excel document with multiple tabs for different types of information and an accompanying PowerPoint presentation). These recommendations were carefully labeled as such and generally made in quite broad terms. The BAP generally reflected a one-year horizon, and was updated monthly and discussed and agreed upon by the forecasting group. The forecasting group generally tried not to exclude information deemed relevant from the BAP even when there were differences in opinion about the strength of the relevance. The general philosophy was that of an open exchange of information that at least one function considered relevant.
Functional Forecasts
Once the BAP was discussed, the information in it was used by three groups—product planning and strategy, sales, and the DMO—to elaborate functional forecasts at the family level, leaving the breakdown of that forecast into specific SKU demand to the sales and packing schedules. The three functional forecasts were made for sell-through sales and without any consideration of potential supply chain capacity constraints. Product planning and strategy (PPS), a three-person group that supported all aspects of the product life cycle from launch to end-of-life and assessed competitive products and the effects of price changes on demand, prepared a top-down forecast of global expected demand.
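As an aside on the accuracy metric defined above: the following minimal sketch is our own illustration, with hypothetical numbers rather than Leitax data, of how the per-period metric FA = 1 - |sales - forecast|/forecast might be computed and averaged across periods.

def forecast_accuracy(forecast, sales):
    # Fractional forecast accuracy for one product-period:
    # FA = 1 - |sales - forecast| / forecast
    return 1.0 - abs(sales - forecast) / forecast

# Hypothetical three-month-ahead forecasts vs. actual sell-through (units)
history = [
    (12000, 10100),  # (forecast, actual sales)
    (9500, 9900),
    (15000, 13200),
]

per_period = [forecast_accuracy(f, s) for f, s in history]
print("per-period FA:", [round(fa, 2) for fa in per_period])
print("average FA:   ", round(sum(per_period) / len(per_period), 2))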
The PPS forecast reflected a worldwide estimate of product demand derived from product- and region-specific forecasts based on historical and current trends of market share and the current portfolio of products being offered by Leitax and its competitors. The PPS group relied on external market research groups to spot current trends, and used appropriate history as precedent in assessing competitive situations and price effects. The sales directors utilized a bottom-up approach to generate their forecast. Sales directors from all regions aggregated their own knowledge and that of their account managers about channel holdings, current sales, and expected promotions to develop a forecast based on information about what was happening in the distribution channel. The sales directors' bottom-up forecast was first stated as a sell-in forecast. Since incentives for the sales organization were based on commissions on sell-in, this was how account managers thought of the business. The sell-in forecast was then translated into a sell-through forecast that reflected the maximum level of channel inventory (inventory at downstream DCs and at resellers). The sales directors' bottom-up forecast, being based on orders and retail and distribution partner feedback, was instrumental in determining the first 13 weeks of the master production schedule. The DMO group prepared, on the basis of statistical inferences from past sales, a third forecast of sell-through by region intended primarily to provide a reference point for the other two forecasts. Significant deviations from the statistical forecast would require that the other forecasting groups investigate and justify their assumptions. The three groups' forecasts were merged into a proposed consensus forecast using a formulaic approach devised by the DMO that gave more weight to the sales directors' forecast in the short term.
Consensus Forecast Meetings
The forecasting group met monthly to evaluate the three independent forecasts and the proposed consensus forecast. The intention was that all parties at the meeting would understand the assumptions that drove each forecast and agree to the consensus forecast based on their understanding of these assumptions and their implications. Discussion tended to focus on the nearest two quarters. In addition to some detailed planning for new and existing products, the consensus forecast meetings were also a source of feedback on forecasting performance. In measuring performance, the DMO estimated the 13-week (the longest lead time for a component in the supply chain) forecasting accuracy based on the formula that reflected the fractional forecast error (FA = 1 - |sales - forecast|/forecast).
Finalizing Forecasts
The agreed-upon final consensus forecast (FCF) was sent to the finance department for financial roll-up. Finance combined the FCF with pricing and promotion information from the BAP to establish expected sales and profitability. Forecasted revenues were compared with the company's financial targets; if gaps were identified, an attempt was made to ensure that the sales department was not underestimating market potential. If revisions made at this point did not result in satisfactory financial performance, the forecasting group would return to the business assumptions and, together with the marketing department, revise the pricing and promotion strategies to meet financial goals and analyst expectations.
These gap-filling exercises, as they were called, usually occurred at the end of each quarter and could result in significant changes to forecasts. The approved FCF was released and used to generate the master production schedule. Operations validation of the FCF was ongoing. The FCF was used to generate consistent and reliable production schedules for Leitax's contract manufacturers and distributors. Suppliers responded by improving the accuracy and timeliness of information flows regarding the status of the supply chain and their commitment to produce received orders. More reliable production schedules also prepared suppliers to meet future expected demand. Capacity issues were communicated and discussed in the consensus meetings, and potential deviations from forecasted sales were incorporated in the BAP.
5. Analysis
In this section we examine how the design elements of the implemented forecasting process addressed potential unintentional functional biases (i.e., informational and procedural blind spots) and resolved conflicts that emerged from misalignments of functional incentives. We first take a process perspective and analyze how each stage worked to minimize functional and collective blind spots. In the second subsection, we present an analysis of how the process managed the commodities of power to improve forecast accuracy. Table 1 summarizes the sources of intentional and unintentional biases addressed by each stage of the consensus forecasting process.
5.1 Process Analysis
Business Assumptions Package
The incorporation of diverse information sources is one of the main benefits reported for group forecasting (Edmundson et al., 1988; Sanders and Ritzman, 1992). The BAP document explicitly incorporated and assembled information in a common, sharable format that facilitated discussion by the functional groups. The sharing of information not only eliminated some inherent functional blind spots, but also provided a similar starting point for, and thereby improved the accuracy of, the individual functional forecasts (Fildes and Hastings, 1994). The guidance and recommendations provided by the functional groups' assessments of the impact of information in the BAP on potential demand represented an additional point of convergence for assimilating diverse information. The fact that the functions making these assessments were expected to have greater competencies for determining such assessments helped to address potential procedural blind spots for the functions that used these assessments. The fact that these assessments and interpretations were explicitly labeled as such made equally explicit their potential for bias. Finally, the generation of the BAP in the monthly meetings served as a warm-up to the consensus forecasting meeting inasmuch as it required consensus about the planning assumptions.
Functional Forecasts
The functional forecasts that were eventually combined into the proposed consensus forecast were generated by the functional groups, each following a different methodological approach. Although the BAP was shared, each group interpreted the information it contained according to its own motivational or psychological biases. Moreover, there existed private information that had not been economical or feasible to include in, or that had been strategically withheld from, the BAP (e.g., actual customer intended orders, of which only sales was cognizant).
The combination of the independently generated forecasts using even a simple average would yield a forecast that captured some of the unique and relevant information in the constituent forecasts and thereby improved on their accuracy (Lawrence et al., 1986). At Leitax, the functional forecasts were combined into the proposed consensus forecast using an algorithm more sophisticated than a simple average, based, as the literature recommends (Armstrong, 2001b), on the track record of the individual forecasts. By weighting the sales directors' forecast more heavily in the short term and the PPS's forecast more heavily in the long term, the DMO recognized each function's different level of intimacy with different temporal horizons, thereby reducing the potential impact of functional blind spots. Through this weighting, the DMO also explicitly managed each group's degree of influence on the forecasting horizon, which could have served as political appeasement.
Consensus Forecasting Meetings
The focus of the forecasting process on sell-through potentially yielded a clearer signal of market demand, as sell-in numbers tended to be a distorted signal of demand; the sales force was known to have an incentive to influence sell-in in the short term, and different retailers had time-varying appetites for product inventory. Discussion in the monthly consensus forecasting meetings revolved mainly around objections to the proposed consensus forecast. In this context, the proposed consensus forecast provided an anchoring point that was progressively adjusted to arrive at the final consensus forecast (FCF). Anchoring on the proposed consensus forecast not only reduced the cognitive effort required of the forecasting team members, but also eliminated their psychological biases and reduced the functional biases that might still be present in the functional forecasts. There is ample evidence in the literature that an anchoring and adjustment heuristic improves the accuracy of a consensus approach to forecasting (Ang and O'Connor, 1991). Discussion of objections to the proposed consensus forecast was intended to surface the private information or private interpretation of public information that motivated the objections. These discussions also served to reveal differences in the inference rules that functions used to generate forecasts. Differences might result from information that was not revealed in the BAP, from incomplete rules of inference (i.e., rules that do not consider all information), or from faulty rules of inference (i.e., rules that exhibited inconsistencies in logic). Faulty forecast assumptions were corrected and faulty rules of inference refined over time. The consensus meetings were also a source of feedback to the members of the forecasting group on forecasting performance. The feedback rendered observable not only unique and relevant factors that affected the accuracy of the overall forecasting process but, through the three independent functional forecasts, also other factors such as functional or psychological biases. For example, in early 2004 the DMO presented evidence that sales' forecasts tended to over-estimate near-term and under-estimate long-term sales. Fed back to the functional areas, these assessments of the accuracy of their respective forecasts created awareness of potential blind spots. The functional forecasts' historical accuracy also served to guide decision-making under conditions that demanded precision, such as allocation under constrained capacity or inventory.
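The paper does not spell out the DMO's actual weighting formula; the sketch below is one plausible reading of a track-record-based combination, in which each functional forecast is weighted at a given horizon in proportion to the inverse of its historical absolute error, so the function with the better record at that horizon carries more weight. All source names, error figures, and forecasts are hypothetical, not Leitax data.

# Hypothetical historical mean absolute errors (units) by horizon and source.
hist_mae = {
    "short": {"sales": 400.0, "pps": 900.0, "dmo_stat": 700.0},   # sales strongest near term
    "long":  {"sales": 1200.0, "pps": 500.0, "dmo_stat": 800.0},  # PPS strongest long term
}

def inverse_error_weights(mae_by_source):
    # Weight each source in proportion to 1/MAE, normalized to sum to 1.
    inv = {src: 1.0 / mae for src, mae in mae_by_source.items()}
    total = sum(inv.values())
    return {src: w / total for src, w in inv.items()}

def proposed_consensus(forecasts, horizon):
    # Weighted average of the functional forecasts for the given horizon.
    weights = inverse_error_weights(hist_mae[horizon])
    return sum(weights[src] * forecasts[src] for src in forecasts)

# Hypothetical functional forecasts for one product family (units).
forecasts = {"sales": 10500, "pps": 12000, "dmo_stat": 11000}
print("short-horizon proposed consensus:", round(proposed_consensus(forecasts, "short")))
print("long-horizon proposed consensus: ", round(proposed_consensus(forecasts, "long")))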
The director of planning and fulfillment's selection of a measure of performance to guide these discussions is also worthy of note. Some considered this measure of accuracy, which compared forecasts to actual sales as if actual sales represented true demand, simplistic. Rather than a detailed, complex measure of forecast accuracy, he opted to use a metric that in its simplicity was effective only in providing a directional assessment of forecast quality (i.e., is forecast accuracy improving over time?). Tempering the pursuit of improvement of this accuracy metric, the director argued that more sophisticated metrics (e.g., considering requested backlog to estimate final demand) would be more uncertain, convey less information, and prevent garnering sufficient support to drive improvement of the forecasting process.
Supporting Financial and Operational Planning
Leitax's forecasting process, having the explicit goal of supporting financial and operational planning, allowed these functions to validate the agreed-upon consensus forecast by transforming it into a revenue forecast and a master production schedule. Note, however, the manner in which exceptions to the forecast were treated: if the financial forecast was deemed unsatisfactory or the production schedule not executable because of unconsidered supply chain issues, a new marketing and distribution plan was developed and incorporated in the BAP. Also note that this approach was facilitated by the process ignoring capacity constraints in estimating demand. It was common before the implementation of the forecasting process for forecasts to be affected by perceptions of present and future supply chain capacity, which resulted in a subtle form of self-fulfilling prophecy; even if manufacturing capacity became available, deflated forecasts would have positioned lower quantities of raw materials and components in the supply chain. By reflecting financial goals and operational restrictions in the BAP and asking the forecasting group (and functional areas) to update their forecasts based on the new set of assumptions, instead of adjusting the final consensus forecast directly, Leitax embedded the forecasting process in the planning process. Reviewing the new marketing and product development plans reflected in the BAP, and validating them through the lenses of different departments via the functional and consensus forecasts, essentially ensured that all of the functional areas involved in the process were re-aligned with the firm's needs and expectations. Separation of the forecasting and decision-making processes has been found to be crucial to forecast accuracy (Fildes and Hastings, 1994). We discuss the contributions of this process to cross-functional coordination and organizational alignment in a separate paper (Oliva and Watson, 2006).
5.2 Political Analysis
As shown in Table 1, certain components of the forecasting process dealt directly with the biases created by incentive misalignment. However, the implementation of the forecasting process was accompanied by significant structural additions, which we examine here via a political analysis. As mentioned in section 2, we expect the forecasting process to create a social and procedural context that enables, through the use of commodities of power, the positive influences on forecast accuracy, while weakening the influence of functional biases that might reduce forecast accuracy. The most significant component of this context is the creation of the DMO.
Politically, the DMO was an independent group with responsibility for managing the forecasting process. The introduction of an additional group and its intrinsic political agenda might increase the complexity of the forecasting process and thereby reduce its predictability or complicate its control. However, the DMO, albeit neutral, was by no means impotent. Through its mandate to manage the forecasting process and its accountability for forecast accuracy, the DMO had the ability to determine the impact of different functions on forecast accuracy and to enforce procedural changes to mediate their influence. Specifically, with respect to biases due to incentive misalignment, because the DMO managed all exchanges of information associated with the process, it determined how other functions' power and influence would be expressed in the forecasts and could enforce the expression of this influence in production requests and inventory allocation decisions. The direct empowerment of the DMO group at Leitax resulted from its relationship with the planning function that made actual production requests and inventory allocations. The planning function, in turn, derived its power from the corporate mandate for a company turnaround. While the particular means of empowerment of the DMO group are not consequential—alternative sources of power could have been just as effective—the fact that the DMO was empowered was crucial for the creation and the success of the forecasting process. The empowerment of the DMO may seem antithetical to a consensual approach. In theory, the presence of a neutral body has been argued to be important for managing forecasting processes vulnerable to political influence (Deschamps, 2004), as a politically neutral actor is understood to have a limited desire to exercise power and is more easily deferred to for arbitration. In practice, an empowered entity such as the DMO needs to be careful in using this power so as to maintain the perception of neutrality. In particular, the perception of neutrality was reinforced by the DMO's mandate to manage the forecasting process (as opposed to actual forecasts), the simplicity and transparency of the information exchanges (basic Excel templates), and the performance metrics (recall the director's argument for the simplest measure of forecast accuracy). The forecasting process is itself an example of the empowerment of a positive influence on forecasting performance. The feasibility of the implemented forecasting process derived from the creation of the DMO and the director's ability to assure the attendance and participation of the VPs in the consensus forecasting meetings. While the forecasting process might have been initially successful because of this convening power, the process later became self-sustaining when it achieved credibility among the participants and the users of the final consensus. At that point in time, the principal source of power (ability to influence the forecast) became expertise and internal reputation as recognized by the forecasting group based on past forecasting performance. Interestingly, this historical performance also reinforced the need for a collaborative approach to forecasting, as no function had distinguished itself as possessing the ability to manage the process single-handedly. Nevertheless, since the forecasting approach accommodated some influence by functional groups, the DMO could be criticized for not fully eliminating opportunities for incentive misalignment.
Functional groups represent stakeholders with information sets and goals relevant to the organization's viability; thus, it is important to listen to those interests. It is, however, virtually impossible to determine a priori whether the influence of any function will increase or decrease forecast accuracy. Furthermore, its own blind spots precluded the DMO from fully representing these stakeholders. Consequently, it is conceivably impossible to eliminate incentive misalignment entirely if stakeholder interests are to be represented in the process. Summarizing, the DMO managed the above complicating factors in its development of the forecasting process by generating the proposed consensus forecast and having groups react to, or account for, major differences with it. The process implemented by the DMO shifted the conversation from functional groups pushing for their respective agendas to justifying the sources of the forecasts and explicitly recognizing areas of expertise or dominant knowledge (e.g., sales in the short term, PPS in the long term). The participatory process and the credibility that accrued to the forecasting group consequent to improvements in forecast accuracy made the final consensus forecast more acceptable to the rest of the organization and increased its effectiveness in coordinating procurement, manufacturing, and sales (Hagdorn-van der Meijden et al., 1994).
6. Emerging Challenges
The deployment of a new system can introduce entirely new dynamics in terms of influence over forecasts and active biases. Here, we describe two missteps suffered in 2003, relate performance feedback from participants in the consensus forecasting process, and then explore the implications for the design of the process and the structure that supports it.
6.1 Product Forecasting Missteps
The first misstep occurred when product introduction and early sales were being planned for a new product broadly reviewed and praised in the press for its innovative features. Although the forecasting process succeeded in dampening to some degree the specialized press's enthusiasm, the product was nevertheless woefully over-forecasted, and excess inventory resulted in a write-off of more than 1% of lifetime volume materials cost. The second misstep occurred when Leitax introduced a new product that was based on a highly successful model currently being sold to the professional market. Leitax considered the new product inferior in quality since it was cheaper to manufacture and targeted it at "prosumers," a marketing segment considered to be between the consumer and professional segments. Despite warnings from the DMO suggesting the possibility of cannibalization, the consensus forecast had the existing product continuing its impressive sales rate throughout the introduction of the new product. The larger-than-expected cannibalization resulted in an obsolescence write-off for the existing product of 3% of lifetime volume materials cost. These two missteps suggest a particular case of "groupthink" (Janis, 1972), whereby optimism, initially justified, withstands contradictory data or logic as functional (or individual) biases common to all parties tend to be reinforced. Since the forecasting process seeks agreement, when the input perspectives are similar but inaccurate, as in the case of the missteps described above, the process can potentially reinforce the inaccurate perceptions.
In response to these missteps, the DMO group considered changing the focus of the consensus meetings from the next two quarters towards life-cycle quantity forecasts for product families, allowing the allocation to quarters to be more historically driven. This would serve to add another set of forecasts to the process to help improve accuracy. This focus on expected sales over the life of the product would also help mediate the intentional biases driven by the natural interest in immediate returns that would surface when the two nearest quarters were instead the focus. The DMO group, however, had to be careful about how the changes were introduced so as to maintain its neutral stance and not create the perception that it was generating forecasts rather than managing the forecasting process.
6.2 Interview Evaluations
General feedback from interviewees reported lingering issues with process compliance. For instance, more frequently than the DMO expected, the process yielded a channel inventory level greater than the desired 7 to 8 weeks. This was explained by overly optimistic forecasts from sales and by sales' over-selling into the channel in response to its incentives. Some wondered about the appropriate effect of the finance group on the process. Sales, for example, complained that finance used the consensus meetings to push sales for higher revenues. Gap-filling exercises, channeling feedback from finance back into the business assumptions, sometimes effected significant changes to forecasts that seemed inappropriate. The inappropriate effects of sales and finance described above can be compared with the dynamics that existed before implementation to reveal emerging challenges associated with the forecasting process. For example, under the DMO's inventory allocation policies, the only line of influence for sales was its forecasts—the process had eliminated the other sources of influence that sales had. Thus, sales would explicitly bias its forecasts in an attempt to swing regional sales in the preferred direction. For finance, the available lines of influence were the gap-filling exercises and the interaction within the consensus forecasting meetings. Given that the incentives and priorities of these functions had not changed, the use of lines of influence in this manner is not unexpected. However, it is not easy to predict exactly how these lines of influence will be used.
6.3 Implications for Coordination System Design
The consensus forecasting process occasioned lines of influence on forecasts to be used in ways that were not originally intended, and did not always dampen justifiable optimism regarding product performance. The latter dynamic can be characterized as a group bias whereby functional (individual) biases and beliefs common to all parties tend to be reinforced. Since the process seeks agreement, when the input perspectives are similar but inaccurate, as in the case of the missteps described above, the process can potentially reinforce the inaccurate perceptions. Both dynamics illustrate how, in response to a particular set of processes, responsibilities, and structures—what we call a coordination system (Oliva and Watson, 2004)—new behavioral dynamics outside of those intended by the process might develop, introducing weaknesses (and conceivably strengths) not previously observed in the process. In principle, a coordinating system should be designed to account and compensate for individual and functional biases of supply chain partners.
But coordination system design choices predispose individual partners to certain problem spaces, simplifications, and heuristics. Because the design of a coordinating system determines the complexity of each partner's role, it is also, in part, responsible for the biases exhibited by the partners. In other words, changes attendant on a process put in place to counter particular biases might unintentionally engender a different set of biases. The recognition that a coordinating system both needs to account for, and is in part responsible for, partners' biases introduces a level of design complexity not currently acknowledged. Managers need to be aware of this possibility and monitor the process in order to identify unintended adjustments, recognizing that neither unintended behavioral adjustments nor their effects are easily predicted given the many process interactions that might be involved. This dual relationship between the coordination system and the associated behavioral schema (see Figure 3), although commonly remarked upon in the organizational theory literature (e.g., Barley, 1986; Orlikowski, 1992), has not previously been examined in the forecasting or operations management literatures.
7. Conclusion
The purpose of case studies is not to argue for specific solutions, but rather to develop explanations (Yin, 1984). By categorizing potential sources of functional biases into a typology—intentional, that is, driven by incentive misalignment and dispositions of power, and unintentional, that is, related to informational and procedural blind spots—we address a range of forecasting challenges that may not show up as specifically as they do at Leitax, but are similarly engendered. By a complete mapping of the steps of the forecasting process, its accompanying organizational structure, and its role within the planning processes of the firm, we detail the relevant elements of an empirically observed phenomenon occurring within its context. By capturing the political motivations and exchanges and exploring how the deployed process and structure mitigated the existing biases, we assess the effectiveness of the process in a dimension that has largely been ignored by the forecasting literature. Finally, through the assessment of new sources of biases after the deployment of the coordination system, we identify the adaptive nature of the political game played by the actors. Through the synthesis of our observations on these relevant elements of this coordinated forecasting system, previous findings from the forecasting literature, and credible deductions linking the coordination system to the mitigation of the intentional and unintentional biases identified and the emergence of new ones, we provide sufficient evidence for the following propositions concerning the management of organizational forecasts (Meredith, 1998):
Proposition I: Consensus forecasting, together with the supporting elements of information exchange and assumption elicitation, can prove a sufficient mechanism for constructively managing the influence of both types of biases on forecasts while being adequately responsive to managing a fast-paced supply chain.
Proposition II: The creation of an independent group responsible for managing the consensus forecasting process, an approach that we distinguish from generating forecasts directly, provides an effective way of managing the political conflict and informational and procedural shortcomings occasioned by organizational differentiation.
Proposition III: While a coordination system—the relevant processes, roles and responsibilities, and structure—can be designed to address existing individual and functional biases in the organization, the new coordination system will in turn generate new individual and functional biases.
The empirical and theoretical grounding of our propositions suggests further implications for practitioners and researchers alike. The typology of functional biases into intentional and unintentional highlights managers' need to be aware that better and more integrated information may not be sufficient for a good forecast, and that attention must be paid as well to designing the process so that the social and political dimensions of the organization are effectively managed. Finally, new intentional and unintentional biases can emerge directly from newly implemented processes. This places a continuous responsibility on managers to monitor implemented systems for emerging biases, to understand the principles for dealing with different types of biases, and to make changes to these systems to maintain operational and organizational gains. Generating forecasts may involve an ongoing process of iterative coordination system improvement. For researchers in operations management and forecasting methods, the process implemented by Leitax might be seen, at a basic level, as a "how-to" for implementing in the organization many of the lessons from the research in forecasting and behavioral decision-making. More important, the case illustrates the organizational and behavioral context of forecasting, a context that, to our knowledge, had not been fully addressed. Given the role of forecasting in the operations management function, and as argued in the introduction, future research is needed to continue to build frameworks for managing forecasting along the organizational and political dimensions in operational settings. Such research should be primarily empirical, including both exploratory and theory-building methodologies that can draw heavily from the current forecasting literature, which has uncovered many potential benefits for forecasting methods ex situ.
References
Ang, S., M.J. O'Connor, 1991. The effect of group-interaction processes on performance in time series extrapolation. Int. J. Forecast. 7 (2), 141-149.
Antle, R., G.D. Eppen, 1985. Capital rationing and organizational slack in capital budgeting. Management Sci. 31 (2), 163-174.
Armstrong, J.S. (ed.), 2001a. Principles of Forecasting. Kluwer Academic Publishers, Boston.
Armstrong, J.S., 2001b. Combining forecasts. In: J.S. Armstrong (Ed), Principles of Forecasting. Kluwer Academic Publishers, Boston, pp. 417-439.
Barley, S., 1986. Technology as an occasion for structuring: Evidence from observations of CT scanners and the social order of radiology departments. Adm. Sci. Q. 31, 78-108.
Beach, L.R., V.E. Barnes, J.J.J. Christensen-Szalanski, 1986. Beyond heuristics and biases: A contingency model of judgmental forecasting. J. Forecast. 5, 143-157.
Bower, P., 2005. 12 most common threats to sales and operations planning process. J. Bus. Forecast. 24 (3), 4-14.
Bretschneider, S.I., W.L. Gorr, 1987. State and local government revenue forecasting. In: S. Makridakis and S.C. Wheelwright (Eds), The Handbook of Forecasting: A Manager's Guide. Wiley, New York, pp. 118-134.
Bretschneider, S.I., W.L. Gorr, 1989. Forecasting as a science. Int. J. Forecast. 5 (3), 305-306.
Bretschneider, S.I., W.L. Gorr, G. Grizzle, E. Klay, 1989.
Political and organizational influences on the accuracy of forecasting state government revenues. Int. J. Forecast. 5 (3), 307-319.
Bromiley, P., 1987. Do forecasts produced by organizations reflect anchoring and adjustment? J. Forecast. 6 (3), 201-210.
Cachon, G.P., M.A. Lariviere, 2001. Contracting to assure supply: How to share demand forecasts in a supply chain. Management Sci. 47 (5), 629-646.
Checkland, P.B., J. Scholes, 1990. Soft Systems Methodology in Action. Wiley, Chichester, UK.
Copeland, T., T. Koller, J. Murrin, 1994. Valuation: Measuring and Managing the Value of Companies, 2nd ed. Wiley, New York.
Crick, B., 1962. In Defence of Politics. Weidenfeld and Nicolson, London.
Crittenden, V.L., L.R. Gardiner, A. Stam, 1993. Reducing conflict between marketing and manufacturing. Ind. Market. Manag. 22 (4), 299-309.
Dahl, R.A., 1970. Modern Political Analysis, 2nd ed. Prentice Hall, Englewood Cliffs, NJ.
Deschamps, E., 2004. The impact of institutional change on forecast accuracy: A case study of budget forecasting in Washington State. Int. J. Forecast. 20 (4), 647-657.
Edmundson, R.H., M.J. Lawrence, M.J. O'Connor, 1988. The use of non-time series information in sales forecasting: A case study. J. Forecast. 7, 201-211.
Eisenhardt, K.M., 1989. Building theories from case study research. Acad. Manage. Rev. 14 (4), 532-550.
Fildes, R., R. Hastings, 1994. The organization and improvement of market forecasting. J. Oper. Res. Soc. 45 (1), 1-16.
Fisher, M.L., A. Raman, 1996. Reducing the cost of demand uncertainty through accurate response to early sales. Oper. Res. 44 (1), 87-99.
Fisher, M.L., J.H. Hammond, W.R. Obermeyer, A. Raman, 1994. Making supply meet demand in an uncertain world. Harvard Bus. Rev. 72 (3), 83-93.
Gaeth, G.J., J. Shanteau, 1984. Reducing the influence of irrelevant information on experienced decision makers. Organ. Behav. Hum. Perf. 33, 263-282.
Gaur, V., S. Kesavan, A. Raman, M.L. Fisher, 2007. Estimating demand uncertainty using judgmental forecast. Man. Serv. Oper. Manage. 9 (4), 480-491.
Goodwin, P., G. Wright, 1993. Improving judgmental time series forecasting: A review of guidance provided by research. Int. J. Forecast. 9 (2), 147-161.
Griffin, A., J.R. Hauser, 1992. Patterns of communication among marketing, engineering and manufacturing: A comparison between two new product teams. Management Sci. 38 (3), 360-373.
Griffin, A., J.R. Hauser, 1996. Integrating R&D and Marketing: A review and analysis of the literature. J. Prod. Innovat. 13 (1), 191-215.
Hagdorn-van der Meijden, L., J.A.E.E. van Nunen, A. Ramondt, 1994. Forecasting—bridging the gap between sales and manufacturing. Int. J. Prod. Econ. 37, 101-114.
Hamel, G., C.K. Prahalad, 1989. Strategic intent. Harvard Bus. Rev. 67 (3), 63-78.
Hammond, J.H., 1990. Quick response in the apparel industry. Harvard Business School Note 690-038. Harvard Business School, Boston.
Hammond, J.H., A. Raman, 1995. Sport Obermeyer Ltd. Harvard Business School Case 695-002. Harvard Business School, Boston.
Hanke, J.E., A.G. Reitsch, 1995. Business Forecasting, 5th ed. Prentice Hall, Englewood Cliffs, NJ.
Hughes, M.S., 2001. Forecasting practice: Organizational issues. J. Oper. Res. Soc. 52 (2), 143-149.
Janis, I.L., 1972. Victims of Groupthink. Houghton Mifflin, Boston.
Kahn, K.B., J.T. Mentzer, 1994. The impact of team-based forecasting. J. Bus. Forecast. 13 (2), 18-21.
Keating, E.K., R. Oliva, N. Repenning, S.F. Rockart, J.D. Sterman, 1999. Overcoming the improvement paradox. Eur. Mgmt. J. 17 (2), 120-134.
Lapide, L., 2005. An S&OP maturity model. J. Bus. Forecast. 24 (3), 15-20.
Lawrence, M.J., R.H. Edmundson, M.J. O'Connor, 1986. The accuracy of combining judgmental and statistical forecasts. Management Sci. 32 (12), 1521-1532.
Lim, J.S., M.J. O'Connor, 1995. Judgmental adjustment of initial forecasts: Its effectiveness and biases. J. Behav. Decis. Making 8, 149-168.
Mahmoud, E., R. DeRoeck, R. Brown, G. Rice, 1992. Bridging the gap between theory and practice in forecasting. Int. J. Forecast. 8 (2), 251-267.
Makridakis, S., S.C. Wheelwright, R.J. Hyndman, 1998. Forecasting: Methods and Applications, 3rd ed. Wiley, New York.
Mentzer, J.T., C.C. Bienstock, 1998. Sales Forecasting Management. Sage, Thousand Oaks, CA.
Meredith, J., 1998. Building operations management theory through case and field research. J. Oper. Manag. 16, 441-454.
Oliva, R., 2001. Tradeoffs in responses to work pressure in the service industry. California Management Review 43 (4), 26-43.
Oliva, R., J.D. Sterman, 2001. Cutting corners and working overtime: Quality erosion in the service industry. Management Sci. 47 (7), 894-914.
Oliva, R., N. Watson, 2004. What drives supply chain behavior? Harvard Bus. Sch., June 7, 2004. Available from: http://hbswk.hbs.edu/item.jhtml?id=4170&t=bizhistory.
Oliva, R., N. Watson, 2006. Cross functional alignment in supply chain planning: A case study of sales & operations planning. Working Paper 07-001. Harvard Business School, Boston.
Orlikowski, W., 1992. The duality of technology: Rethinking the concept of technology in organizations. Organ. Sci. 3 (3), 398-427.
Pfeffer, J., G.R. Salancik, 1974. Organizational decision making as a political process: The case of a university budget. Adm. Sci. Q. 19 (2), 135-151.
Rowe, G., G. Wright, 1999. The Delphi technique as a forecasting tool: Issues and analysis. Int. J. Forecast. 12 (1), 73-92.
Rowe, G., G. Wright, 2001. Expert opinions in forecasting: The role of the Delphi technique. In: J.S. Armstrong (Ed), Principles of Forecasting. Kluwer Academic Publishers, Norwell, MA, pp. 125-144.
Salancik, G.R., J. Pfeffer, 1977. Who gets power – and how they hold on to it: A strategic-contingency model of power. Org. Dyn. 5 (3), 3-21.
Sanders, N.R., L.P. Ritzman, 1992. Accuracy of judgmental forecasts: A comparison. Omega 20, 353-364.
Sanders, N.R., K.B. Manrodt, 1994. Forecasting practices in U.S. corporations: Survey results. Interfaces 24, 91-100.
Sanders, N.R., L.P. Ritzman, 2001. Judgmental adjustment of statistical forecasts. In: J.S. Armstrong (Ed), Principles of Forecasting. Kluwer Academic Publishers, Boston, pp. 405-416.
Shapiro, B.P., 1977. Can marketing and manufacturing coexist? Harvard Bus. Rev. 55 (5), 104-114.
Stein, J.C., 1997. Internal capital markets and the competition for corporate resources. Journal of Finance 52 (1), 111-133.
Terwiesch, C., Z.J. Ren, T.H. Ho, M.A. Cohen, 2005. An empirical analysis of forecast sharing in the semiconductor equipment supply chain. Management Sci. 51 (2), 208-220.
Voorhees, W.R., 2000. The impact of political, institutional, methodological, and economic factors on forecast error. PhD dissertation, Indiana University.
Watson, M.C., 1996. Forecasting in the Scottish electronics industry. Int. J. Forecast. 12 (3), 361-371.
Watson, N., R. Oliva, 2005. Leitax (A). Harvard Business School Case 606-002. Harvard Business School, Boston.
Wheelwright, S.C., K.B. Clark, 1992. Revolutionizing Product Development. Wiley, New York.
Yin, R., 1984. Case Study Research. Sage, Beverly Hills, CA.
Figure 1. Forecast Accuracy Performance.†
[Figure: sell-through and sell-in forecast accuracy (0%-100%) plotted against the accuracy goal for the quarters Dec-Feb 2002 through Sep-Nov 2003, with the project redesign and go-live dates marked.]
† The dip in forecasting performance in Sep-Nov 2003 was a result of the relocation of a distribution center.
Figure 2. Consensus Forecasting Process.
[Figure: industry, historical, and sales information feed the business assumptions package, from which the statistical forecast (DMO), the top-down forecast (PPS), and the bottom-up forecast (SD) are developed and merged into the consensus forecast that supports joint planning.]
Figure 3. Dual Relationship between Coordination System and Behavioral Dynamics.
[Figure: individual or functional biases influence the design of the coordination system (processes, roles, structure, values), and the coordination system in turn creates/generates new individual or functional biases.]
Table 1: Process Steps and Biases Mitigated.
[Table: for each element of the consensus forecasting process, marks indicate whether it addresses procedural blind spots, informational blind spots, or incentive misalignment. Business assumptions package: multiple sources; multiple interpretations; interpretation source explicitly labeled. Functional forecasts: private info not in BAP; functional interpretation of assumptions; aggregate forecasts at family level; ignoring planning expectations and supply chain constraints. Proposed consensus forecast: weighted average of functional forecasts; weights in terms of past proven performance; initial anchoring for consensus process. Final consensus meeting: resolution of diverging forecasts; uncovering private information used in functional forecasts; uncovering private interpretation of public information. Forecast review: financial and operational review; BAP revision.]
Rogelio Oliva and Noel Watson. Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.
Managing Functional Biases in Organizational Forecasts: A Case Study of Consensus Forecasting in Supply Chain Planning
Rogelio Oliva, Mays Business School, Texas A&M University, College Station, TX 77843-4217. Ph 979-862-3744 | Fx 979-845-5653. roliva@tamu.edu
Noel Watson, Harvard Business School, Soldiers Field Rd., Boston, MA 02163. Ph 617-495-6614 | Fx 617-496-4059. nwatson@hbs.edu
Draft: December 14, 2007. Do not quote or cite without permission from the authors.
Abstract
To date, little research has been done on managing the organizational and political dimensions of generating and improving forecasts in corporate settings. We examine the implementation of a supply chain planning process at a consumer electronics company, concentrating on the forecasting approach around which the process revolves. Our analysis focuses on the forecasting process and how it mediates and accommodates the functional biases that can impair forecast accuracy. We categorize the sources of functional bias into intentional, driven by misalignment of incentives and the disposition of power within the organization, and unintentional, resulting from informational and procedural blind spots. We show that the forecasting process, together with the supporting mechanisms of information exchange and elicitation of assumptions, is capable of managing the potential political conflict and the informational and procedural shortcomings. We also show that the creation of an independent group responsible for managing the forecasting process, an approach that we distinguish from generating forecasts directly, can stabilize the political dimension sufficiently to enable process improvement to be steered. Finally, we find that while a coordination system—the relevant processes, roles and responsibilities, and structure—can be designed to address existing individual and functional biases in the organization, the new coordination system will in turn generate new individual and functional biases. The introduced framework of functional biases (whether those biases are intentional or not), the analysis of the political dimension of the forecasting process, and the idea of a coordination system are new constructs to better understand the interface between operations management and other functions.
Keywords: forecasting, marketing/operations interface, sales and operations planning, organizational issues, case/field study.
1. Introduction
The importance of forecasting for operations management cannot be overstated.
Within the firm, forecast generation and sharing is used by managers to guide the distribution of resources (Antle and Eppen, 1985; Stein, 1997), to provide targets for organizational efforts (Hamel and Prahalad, 1989; Keating et al., 1999), and to integrate the operations management function with the marketing (Crittenden et al., 1993; Griffin and Hauser, 1992), sales (Lapide, 2005; Mentzer and Bienstock, 1998), and product development (Griffin and Hauser, 1996; Wheelwright and Clark, 1992) functions. Errors in forecasting often cross the organizational boundary and translate into misallocation of resources that can impact shareholders' return on investment (Copeland et al., 1994), and affect customers' perception of service quality (Oliva, 2001; Oliva and Sterman, 2001). Across the supply chain, forecast sharing is a prevalent practice for proactively aligning capacity and managing supply (Cachon and Lariviere, 2001; Terwiesch et al., 2005). Over the past five years, demand/supply planning processes for planning horizons in the intermediate range have been receiving increasing attention, especially as the information technology originally intended to facilitate this planning has achieved limited success. Cross-functional coordination among groups such as sales, operations, and finance is needed to ensure the effectiveness of some of these planning processes and the forecasting that supports them. Such processes have been referred to in the managerial literature as sales and operations planning (S&OP) processes (Bower, 2005; Lapide, 2005). Forecasts within this multi-functional setting that characterizes many organizations cannot be operationalized or analyzed in an organizational and political vacuum. However, to date, little research has been done on managing the organizational and political dimensions of generating and improving forecasts in corporate settings, dimensions which significantly determine the overall effectiveness of the forecasting process (Bretschneider and Gorr, 1989, p. 305). We present a case study that illustrates the implementation of an S&OP process, concentrating in detail on the forecasting approach around which the planning process revolves. Our study describes how individuals and functional areas (whether intentionally or not) biased the organizational forecast and how the forecasting process implemented managed those biases in a supply chain setting that requires responsive planning. We define biases broadly here to include those occasioned by functional and individual incentives, and informational or procedural shortcomings. Our analysis reveals that the forecasting process, together with the supporting mechanisms of information exchange and elicitation of assumptions, is capable of managing the political conflict and the informational and procedural shortcomings that accrue to organizational differentiation. We show that the creation of an independent group responsible for managing the forecasting process can stabilize the political dimension sufficiently to enable process improvement to be steered. The deployment of a new system, however, introduces entirely new dynamics in terms of influence over forecasts and active biases. The recognition that the system both needs to account for, and is in part responsible for, partners' biases introduces a level of design complexity not currently acknowledged in the literature or by practitioners.
The rest of this paper is structured as follows: In section 2, we review the relevant forecasting literature, motivating the need for our case study and articulating hypotheses for findings in our research setting. Our research site and methodological design are described in section 3. In section 4 we report the conditions that triggered the deployment of the forecasting process, assess its impact on the organization, and describe the process, its actors, and dynamics in detail. Section 5 contains the core of our analysis: we analyze the organizational and process changes that were deployed, and assess how intentional and unintentional biases in the organization were managed through these mechanisms. Some of the challenges the organization faces under the new forecasting process are explored in section 6, which also provides a framework for understanding the need to continuously monitor and adapt the process. The paper concludes with an evaluation of the implications of our findings for practitioners and researchers.

2. Research Motivation
Most organizations use forecasts as input to comprehensive planning processes—such as financial planning, budgeting, sales planning, and finished goods inventory planning—that are charged with accomplishing particular goals. This implies that the forecast needs not only to be accepted by external parties, but also to guide efforts of the organization. Thus, an important measure of forecast effectiveness is how well forecasts support these planning needs. The fit between forecasting and planning is an under-studied relationship in the literature, but at a minimum level, the forecast process needs to match the planning process in terms of the frequency and speed with which the forecast is produced. The forecasting horizon and accuracy of the forecast should be such that they allow the elaboration and execution of plans to take advantage of the forecast (Makridakis et al., 1998; Mentzer and Bienstock, 1998). For example, a planning approach such as Quick Response (Hammond, 1990) requires as input a sense of the uncertainty surrounding the forecasts in order to manage production. Thus, the forecasting process complementing such a planning process should have a means of providing a relative measure of uncertainty (Fisher et al., 1994; Fisher and Raman, 1996). Nevertheless, forecasting is not an exact science. In an organizational setting, the forecasting process requires information from multiple sources (e.g., intelligence about competitors, marketing plans, channel inventory positions, etc.) and in a variety of formats, not always amenable to integration and manipulation (Armstrong, 2001b; Fildes and Hastings, 1994; Lawrence et al., 1986; Makridakis et al., 1998). Existing case studies in the electronic and financial industries (e.g., Hughes, 2001; Watson, 1996) emphasize the informational deficiency in creating organizational forecasts as a result of poor communication across functions. The multiplicity of data sources and formats creates two major challenges for a forecasting process. First, since not all information can be accurately reflected in a statistical algorithm, judgment calls are a regular part of forecasting processes (Armstrong, 2001a; Sanders and Manrodt, 1994; Sanders and Ritzman, 2001). The judgmental criteria used to make, adjust, and evaluate forecasts can result in individual and functional limitations and biases that potentially compromise the quality of the forecasts.
Second, since the vast majority of the information providers and the makers of those judgment calls are also the users of the forecast, there are strong political forces at work explicitly attempting to bias the outcome of the process. Thus the forecasting process, in addition to fitting with the organization's planning requirements, needs to explicitly manage the biases (whether individual or functional) that might affect the outcome of the process. We recognize two potential sources of biases in the organization — intentional and unintentional — that incorporate the judgmental, informational, and political dynamics that affect forecasting performance. In the following subsections, we provide analytical context from relevant literature to articulate frameworks and expectations that will help the reader to assimilate the case details in these two dimensions.

2.1 Managing Biases due to Incentive Misalignment and Dispositions of Power
Intentional sources of bias (i.e., an inherent interest and ability to maintain a level of misinformation in the forecasts) are created by incentive misalignment across functions coupled with a particular disposition of power within the organization. Local incentives will drive different functional groups to want to influence the forecast process in directions that might benefit their own agenda. For example, a sales department — compensated through sales commissions — might push to inflate the forecast to ensure ample product availability, while the operations group — responsible for managing suppliers, operating capacity, and inventories — might be interested in a forecast that smooths demand and eliminates costly production swings (Shapiro, 1977). Power is the ability of the functional group to influence the forecast, and is normally gained by access to a resource (e.g., skill, information) that is scarce and valued as critical by the organization; the ability to leverage such resources is contingent on the degree of uncertainty surrounding the organizational decision-making process (Salancik and Pfeffer, 1977). For example, the power that a sales organization could extract from intimate knowledge of customer demand diminishes as that demand becomes stable and predictable to the rest of the organization. Mahmoud et al. (1992), in discussing the gap between forecasting theory and practice, refer in particular to the effects of the disparate functional agendas and incentives as the political gap, while according to Hanke and Reitsch (1995) the most common source of bias in a forecasting context is political pressure within a company. Thus, forecasts within a multi-functional setting cannot be operationalized or analyzed in an organizational and political vacuum. As sources of incentive misalignment and contributors to the dispositions of power within the organization, disparate functional agendas and incentives, standardized organizational decision-making processes, and shared norms and values all have an impact on the forecasting process and forecast accuracy (Bromiley, 1987). However, most of the academic literature only examines the individual and group unintentional biases that can affect forecasting ex situ (Armstrong, 2001a), with little research directed at managing the multi-objective and political dimensions of forecast generation and improvement in corporate settings (Bretschneider and Gorr, 1989; Deschamps, 2004).
Research on organizational factors and intentional sources of biases in forecasting has been done in the public sector, where political agendas are explicit. This research suggests that directly confronting differences in goals and assumptions increases forecast accuracy. Bretschneider and Gorr (1987) and Bretschneider et al. (1989) found that a state's forecast accuracy improved if forecasts were produced independently by the legislature and executive, and then combined through a formal consensus procedure that exposed political positions and forecast assumptions. Deschamps (2004) found forecast accuracy to be improved by creating a neutral negotiation space and an independent political agency with dedicated forecasters to facilitate the learning of technical and consensus forecasting skills. As different organizational functions have access to diverse commodities of power (e.g., sales has unique access to current customer demand), we recognize that each group will have unique ways to influence the outcome of the forecasting process. The process through which groups with different interests reach accommodation ultimately rests on this disposition of power, and is referred to in the political science and management literatures as a political process (Crick, 1962; Dahl, 1970; Pfeffer and Salancik, 1974; Salancik and Pfeffer, 1977). In forecasting, a desirable outcome of a well-managed political contention would be a process that enables the known positive influences on forecast accuracy while weakening the negative influences on forecast accuracy. That is, a politically savvy process should take into consideration the commodities of power owned by the different functional areas and the impact that they might have on forecast accuracy, and explicitly manage the disposition of power to minimize negative influences on forecast accuracy.

2.2 Abating Informational and Procedural Blind Spots
Although functional goals and incentives can translate into intentional efforts to bias a forecast, other factors can affect forecasts in ways of which managers might not be aware. Thus, we recognize unintentional, but systematic, sources of forecast error resulting from what we term blind spots: ignorance in specific areas that negatively affects an individual's or group's forecasts. Blind spots can be informational — related to an absence of otherwise feasibly collected information on which a forecast should be based — or procedural — related to the algorithms and tasks used to generate forecasts given the information available. This typology is an analytic one; the types are not always empirically distinct. Some informational blind spots could result from naiveté in forecasting methodology (a procedural blind spot) that does not allow the forecaster to use the available information. Yet, while the two types may intermingle in an empirical setting, they tend to derive from different conditions and require different countermeasures. We expect, then, that a forecasting process should try to manage the informational and procedural blind spots that may exist for the process. Some individual biases that have been shown to affect subjective forecasting include over-confidence, availability, anchoring and adjustment, and optimism (Makridakis et al., 1998). Forecasters, even when provided with statistical forecasts as guides, have difficulty assigning less weight to their own forecasts (Lim and O'Connor, 1995).
Cognitive information processing limitations and other biases related to the selection and use of information can also compromise the quality of plans. Gaeth and Shanteau (1984), for example, showed that irrelevant information adversely affected judgment, and Beach et al. (1986) showed that when the information provided is poor, forecasters might expend little effort to ensure that forecasts are accurate. Such individual biases can affect both the quality of the information collected and used to infer forecasts (informational blind spots), and the rules of inference themselves (procedural blind spots). Research suggests process features and processing capabilities that might potentially mitigate the effect of individual biases. For example, combining forecasts with other judgmental or statistical forecasts tends to improve forecast accuracy (Lawrence et al., 1986). Goodwin and Wright (1993) summarize the research and empirical evidence that supports six strategies for improving judgmental forecasts: using decomposition, improving forecasters' technical knowledge, enhancing data presentation, mathematically correcting biases, providing feedback to forecasters to facilitate learning, and combining forecasts or using groups of forecasters. Group forecasting is thought to contribute two important benefits to judgmental forecasting: (1) broad participation in the forecasting process maximizes group diversity, which reduces political bias and the tendency to cling to outmoded assumptions, assumptions that can contribute to both procedural and informational blind spots (Voorhees, 2000), and (2) the varied people in groups enrich the contextual information available to the process, reducing informational blind spots and thereby improving the accuracy of forecasts (Edmundson et al., 1988; Sanders and Ritzman, 1992). Some researchers maintain that such variety is even useful for projecting the expected accuracy of forecasts (Gaur et al., 2007; Hammond and Raman, 1995). Group dynamics can, however, have unwanted effects on the time to achieve consensus, the quality of consensus (whether true agreement or acquiescence), and thus the quality of the forecasts. Kahn and Mentzer (1994), who found that a team approach led to greater satisfaction with the forecasting process, also reported mixed results regarding the benefits of group forecasting. Dysfunctional group dynamics reflect group characteristics such as the participants' personal dynamics, politics, information asymmetries, differing priorities, and varying information assimilation and processing capabilities. Group processes can vary in terms of the degree of interaction afforded participants and the structure of the rules for interaction. The most popular structured, non-interacting group forecasting approach is the Delphi method, wherein a group's successive individual forecasts elicit anonymous feedback in the form of summary statistics (Rowe and Wright, 2001). Structured interacting groups, those with rules governing interaction, have not been found to perform significantly worse than groups that use the Delphi method (Rowe and Wright, 1999). However, Ang and O'Connor (1991) found that modified consensus (in which an individual's forecast was the basis for the group's discussion) outperformed forecasts based on group mean, consensus, and Nominal Group Technique (Delphi with some interaction).
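To make the combination mechanisms cited above concrete, the following sketch contrasts an equal-weight average of independent judgmental forecasts with a combination weighted by each source's track record. It is an illustration only; the source names, unit figures, and weights are hypothetical and are not drawn from any of the studies cited.

from statistics import mean

def combine_simple_average(forecasts):
    # Equal-weight combination of independent judgmental forecasts
    # (the mechanism studied by Lawrence et al., 1986).
    return mean(forecasts.values())

def combine_weighted(forecasts, weights):
    # Combination weighted by each source's historical track record
    # (cf. the weighting principle in Armstrong, 2001b).
    total = sum(weights[source] for source in forecasts)
    return sum(forecasts[source] * weights[source] for source in forecasts) / total

# Hypothetical unit forecasts for one product family from three sources.
functional_forecasts = {"sales": 120_000, "product_strategy": 95_000, "statistical": 105_000}
track_record_weights = {"sales": 0.5, "product_strategy": 0.2, "statistical": 0.3}

print(combine_simple_average(functional_forecasts))                  # ~106,667 units
print(combine_weighted(functional_forecasts, track_record_weights))  # 110,500 units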
2.3 Conclusions from Review The above review suggests that while the current academic literature recognizes the need for an understanding of the organizational and political context in which the forecasting process takes place, the literature still lacks the operational and organizational frameworks for analyzing the 9 generation of organizational forecasts. Our research aims to address this shortcoming by developing insights into managing the impact of the organizational and political dimensions of forecasting. The literature does lead us to expect a forecasting process that is attuned to the organizational and political context in which it operates, to be based on a group process, to combine information and forecasts from multiple sources, and to be deliberate about the way it allows different interests to affect forecast accuracy. We opted to explore this set of issues through a case study since the forecasting process has not been analyzed previously from this perspective, and our interest is to develop the constructs to understand its organizational and political context (Meredith, 1998). We consequently focus our analysis not on the forecast method (the specific technique used to arrive at a forecast), but on the forecasting process, that is, the way the organization has systematized information gathering, decision-making, and communication activities, and the organizational structure that supports that process. 3. Research Methodology 3.1 Case Site The case site is a northern California-headquartered consumer electronics firm called Leitax (name has been disguised) that sold its products primarily through retailers such as Best Buy and Target and operated distribution centers (DCs) in North America, Europe, and the Far East. The Leitax product portfolio consisted of seven to nine models, each with multiple SKUs that were produced by contract-manufacturers with plants in Asia and Latin America. The product life across the models, which was contracting, ranged from nine to fifteen months, with high-end, feature-packed, products tending to have the shortest product lives. The site was chosen because prior to the changes in the forecasting process, the situation was characterized by having shortcomings along the two dimensions described above. That is, the forecasting process was characterized by informational and procedural blind spots and was marred by intentional manipulation of information to advance functional agendas. The case site represents 10 an exemplar for the study of the management of these dimensions, and constitutes a unique opportunity to test the integration of the two strands of theory that make explicit predictions about unintentional and intentional biases (Yin, 1984). The forecasting approach introduced was considered at least reasonably successful by many of the organizational participants and its forecasting accuracy, and accompanying improvements of operational indicators (e.g., inventory turns, obsolescence), corroborates this assessment. The issues and dynamics addressed by the implementation of the participatory forecasting process are issues that are not unique to Leitax, but characterize a significant number of organizations. Thus, the site provides a rich setting in which to seek to understand the dynamics involved in managing an organizational forecasting process and from which we expect to provoke theory useful for academics and practitioners alike. 
Our case study provides one reference for managing these organizational forecasts within an evolving business and operations strategy. As such, it does more to suggest potential relationships, dynamics, and solutions, than to definitively define or propose them. 3.2 Research Design Insights were derived primarily from an intensive case study research (Eisenhardt, 1989; Yin, 1984) with the following protocol: the research was retrospective; the primary initiative studied, although evolving, was fully operational at the time the research was undertaken. Data were collected through 25 semi-structured, 45- to 90-minute interviews conducted with leaders, analysts, and participants from all functional areas involved in the forecasting process, as well as with heads of other divisions affected by the process. The interviews were supplemented with extensive reviews of archival data including internal and external memos and presentations, and direct observation of two planning and forecasting meetings. The intent of the interviews was to understand the interviewees’ role in the forecasting process, their perception of the process, and to explore explicitly the unintentional biases due to blind spots as well as the political agendas of the different 11 actors and functional areas. To assess the political elements of the forecasting process, we explicitly asked interviewees about their incentives and goals. We then triangulated their responses with answers from other actors and asked for explanations for observed behavior during the forecasting meetings. When appropriate, we asked interviewees about their own and other parties’ sources of power, i.e., the commodity through which they obtained the ability to influence an outcome—e.g., formal authority, access to important information, external reputation (Checkland and Scholes, 1990). Most interviews were conducted in the organization’s northern California facility, with some follow-up interviews done by telephone. Given the nature of the research, interviewees were not required to stay within the standard questions; interviewees perceived to be exploring fruitful avenues were permitted to continue in that direction. All interviews were recorded. Several participants were subsequently contacted and asked to elaborate on issues they had raised or to clarify comments. The data is summarized in the form of a detailed case study that relates the story of the initiative and current challenges (Watson and Oliva, 2005). Feedback was solicited from the participants, who were asked to review their quotations, and the case, for accuracy. The analysis of the data was driven by three explicit goals: First, to understand the chronology of the implemented changes and the motivation behind those changes (this analysis led to the realization of mistrust across functional areas and the perceived biases that hampered the process). Second, to understand and to document the implemented forecasting process, the roles that different actors took within the process, and the agreed values and norms that regulated interactions within the forecasting group; and third, to assess how different elements of the process addressed or mitigated the individual or functional biases identified. 4. Forecasting at Leitax The following description of the consensus forecasting process at Leitax was summarized from the interviews with the participants of the process. 
The description highlights the political dimension of the situation at Leitax by describing the differing priorities of the different functional groups and how power to influence the achievement of those priorities was expressed.

4.1 Historical and Organizational Context
Prior to 2001, demand planning at Leitax was ill-defined, with multiple private forecasts the norm. For new product introductions and mid-life product replenishment, the sales directors (Leitax employed sales directors for three geographical regions—the Americas; Europe, the Middle East, and Africa; and Asia Pacific—and separate sales directors for Latin America and Canada) made initial forecasts that were informally distributed to the operations and finance groups, sometimes via discussions in hallways. These shared forecasts were intended to be used by the operations group as guides for communicating build or cancel requests to the supply chain. The finance group, in turn, would use these forecasts to guide financial planning and monitoring. These sales forecasts, however, were often mistrusted or second-guessed when they crossed into other functional areas. For example, with inventory shortages as its primary responsibility, the operations group would frequently generate its own forecasts to minimize the perceived exposure to inventory discrepancies, and marketing would do likewise when it anticipated that promotions might result in deviations from sales forecasts. While the extent of bias in the sales forecast was never clearly determined, the mere perception that sales had an incentive to maintain high inventory positions in the channel was sufficient to compromise the credibility of its forecasts. Sales might well have intended to communicate accurate information to the other functions, but incentives to achieve higher sell-in rates tainted the objectivity of its forecasting, which occasioned the other functions' distrust and consequent generation of independent forecasts. Interviewees, furthermore, suspected executive forecasts to be biased by goal-setting pressures, operational forecasts to be biased by inventory liability and utilization policies, and finance forecasts to be biased by market expectations and profitability thresholds. These biases stem from what are believed to be naturally occurring priorities of these functions. Following two delayed product introductions that resulted in an inventory write-off of approximately 10% of FY01-02 revenues, major changes were introduced during the fall of 2001, including the appointment of a new CEO and five new vice-presidents for product development, product management, marketing, sales, and operations. In April 2002, the newly hired director of planning and fulfillment launched a project with the goal of improving the velocity and accuracy of planning information throughout the supply chain. Organizationally, management and ownership of the forecasting process fell to the newly created Demand Management Organization (DMO), which had responsibility for managing, synthesizing, challenging, and creating demand projections to pace Leitax's operations worldwide. The three analysts who comprised the group, which reported to the director of planning and fulfillment, were responsible not only for preparing statistical forecasts but also for supporting all the information and coordination requirements of the forecasting process.
By the summer of 2003, a stable planning and coordination system was in place and by the fall of 2003, Leitax had realized dramatic improvements in forecasting accuracy. Leitax defined forecast accuracy as one minus the ratio of the absolute deviation of sales from forecast to the forecast (FA=1-|sales-forecast|/forecast). Three-month ahead sell-through (sell-in) forecast accuracy improved from 58% (49%) in the summer of 2002 to 88% (84%) by fall 2003 (see Figure 1). Sell-in forecasts refer to expected sales from Leitax’s DCs into their resellers, and sell-through forecasts refer to expected sales from the resellers. Forecast accuracy through ’05 was sustained at an average of 85% for sell-through. Better forecasts translated into significant operational improvements: Inventory turns increased to 26 in Q4 ’03 from 12 the previous year, and average on hand inventory decreased from $55M to $23M. Excess and obsolescence costs decreased from an average of $3M 14 for fiscal years 2000-2002 to practically zero in fiscal year 2003. The different stages of the forecasting process are described in detail in the next section. 4.2 Process Description By the fall of 2003, a group that included the sales directors and VPs of marketing, product strategy, finance, and product management, were consistently generating a monthly forecast. The process, depicted in Figure 2, begins with the creation of an information package, referred to as the business assumptions package, from which functional forecasts are created. These forecasts are combined and discussed at consensus forecasting meetings until there is a final forecast upon which there is agreement. Business Assumptions Package The starting point for the consensus forecasting process, the business assumptions package (BAP), contained price plans for each SKU, intelligence about market trends and competitors’ products and marketing strategies, and other information of relevance to the industry. The product planning and strategy, marketing, and DMO groups guided assessments of the impact of the information on future business performance entered into the BAP (an Excel document with multiple tabs for different types of information and an accompanying PowerPoint presentation). These recommendations were carefully labeled as such and generally made in quite broad terms. The BAP generally reflected a one-year horizon, and was updated monthly and discussed and agreed upon by the forecasting group. The forecasting group generally tried not to exclude information deemed relevant from the BAP even when there were differences in opinion about the strength of the relevance. The general philosophy was that of an open exchange of information that at least one function considered relevant. Functional Forecasts Once the BAP was discussed, the information in it was used by three groups: product planning and strategy, sales, and the DMO, to elaborate functional forecasts at the family level, leaving the 15 breakdown of that forecast into specific SKU demand to the sales and packing schedules. The three functional forecasts were made for sell-through sales and without any consideration to potential supply chain capacity constraints. Product planning and strategy (PPS), a three-person group that supported all aspects of product life cycle from launch to end-of-life, and assessed competitive products and effects of price changes on demand, prepared a top-down forecast of global expected demand. 
The PPS forecast reflected a worldwide estimate of product demand derived from product and region specific forecasts based on historical and current trends of market-share and the current portfolio of products being offered by Leitax and its competitors. The PPS group relied on external market research groups to spot current trends, and used appropriate history as precedent in assessing competitive situations and price effects. The sales directors utilized a bottom-up approach to generate their forecast. Sales directors from all regions aggregated their own knowledge and that of their account managers about channel holdings, current sales, and expected promotions to develop a forecast based on information about what was happening in the distribution channel. The sales directors’ bottom-up forecast was first stated as a sell-in forecast. Since incentives for the sales organization were based on commissions on sell-in, this was how account managers thought of the business. The sell-in forecast was then translated into a sell-through forecast that reflected the maximum level of channel inventory (inventory at downstream DC’s and at resellers). The sales directors’ bottom-up forecast, being based on orders and retail and distribution partner feedback, was instrumental in determining the first 13 weeks of the master production schedule. The DMO group prepared, on the basis of statistical inferences from past sales, a third forecast of sell-through by region intended primarily to provide a reference point for the other two forecasts. Significant deviations from the statistical forecast would require that the other forecasting groups investigate and justify their assumptions. 16 The three groups’ forecasts were merged into a proposed consensus forecast using a formulaic approach devised by the DMO that gave more weight to the sales directors’ forecast in the short term. Consensus Forecast Meetings The forecasting group met monthly to evaluate the three independent forecasts and the proposed consensus forecast. The intention was that all parties at the meeting would understand the assumptions that drove each forecast and agree to the consensus forecast based on their understanding of these assumptions and their implications. Discussion tended to focus on the nearest two quarters. In addition to some detail planning for new and existing products, the consensus forecast meetings were also a source of feedback on forecasting performance. In measuring performance, the DMO estimated the 13-week (the longest lead-time for a component in the supply chain) forecasting accuracy based on the formula that reflected the fractional forecast error (FA=1-|sales-forecast|/forecast). Finalizing Forecasts The agreed upon final consensus forecast (FCF) was sent to the finance department for financial roll up. Finance combined the FCF with pricing and promotion information from the BAP to establish expected sales and profitability. Forecasted revenues were compared with the company’s financial targets; if gaps were identified, an attempt was made to ensure that the sales department was not under-estimating market potential. If revisions made at this point did not result in satisfactory financial performance, the forecasting group would return to the business assumptions and, together with the marketing department, revise the pricing and promotion strategies to meet financial goals and analyst expectations. 
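A minimal sketch of this financial roll-up and gap check is given below. All SKU names, prices, volumes, and the revenue target are hypothetical; the case does not disclose Leitax's actual figures or the system used for the roll-up.

# Hypothetical figures; the case does not report Leitax's prices, volumes, or targets.
fcf_units = {"SKU_A": 40_000, "SKU_B": 25_000}      # final consensus forecast, units per quarter
bap_prices = {"SKU_A": 299.0, "SKU_B": 449.0}       # price plans carried in the BAP, USD
revenue_target = 25_000_000.0                        # quarterly financial target, USD

forecast_revenue = sum(fcf_units[sku] * bap_prices[sku] for sku in fcf_units)
gap = revenue_target - forecast_revenue

if gap > 0:
    # A shortfall triggers a gap-filling exercise: revisit the pricing and
    # promotion assumptions in the BAP and re-forecast, rather than editing
    # the final consensus forecast directly.
    print(f"Revenue gap of ${gap:,.0f}: revisit BAP assumptions and re-forecast")
else:
    print("Forecasted revenue meets the financial target")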
These gap-filling exercises, as they were called, usually occurred at the end of each quarter and could result in significant changes to forecasts. The approved FCF was released and used to generate the master production schedule. Operations validation of the FCF was ongoing. The FCF was used to generate consistent and 17 reliable production schedules for Leitax’s contract manufacturers and distributors. Suppliers responded by improving the accuracy and opportunity of information flows regarding the status of the supply chain and their commitment to produce received orders. More reliable production schedules also prepared suppliers to meet future expected demand. Capacity issues were communicated and discussed in the consensus meetings and potential deviations from forecasted sales incorporated in the BAP. 5. Analysis In this section we examine how the design elements of the implemented forecasting process addressed potential unintentional functional biases (i.e., informational and procedural blind spots), and resolved conflicts that emerge from misalignments of functional incentives. We first take a process perspective and analyze how each stage worked to minimize functional and collective blind spots. In the second subsection, we present an analysis of how the process managed the commodities of power to improve forecast accuracy. Table 1 summarizes the sources of intentional and unintentional biases addressed by each stage of the consensus forecasting process. 5.1 Process Analysis Business Assumptions Package The incorporation of diverse information sources is one of the main benefits reported for group forecasting (Edmundson et al., 1988; Sanders and Ritzman, 1992). The BAP document explicitly incorporated and assembled information in a common, sharable format that facilitated discussion by the functional groups. The sharing of information not only eliminated some inherent functional blind spots, but also provided a similar starting point for, and thereby improved the accuracy of, the individual functional forecasts (Fildes and Hastings, 1994). The guidance and recommendations provided by the functional groups’ assessments of the impact of information in the BAP on potential demand represented an additional point of convergence for assimilating diverse information. The fact that the functions making these assessments were expected to have greater 18 competencies for determining such assessments, helped to address potential procedural blind spots for the functions that used these assessments. The fact that these assessments and interpretations were explicitly labeled as such made equally explicit their potential for bias. Finally, the generation of the BAP in the monthly meetings served as a warm-up to the consensus forecasting meeting inasmuch as it required consensus about the planning assumptions. Functional Forecasts The functional forecasts that were eventually combined into the proposed consensus forecast were generated by the functional groups, each following a different methodological approach. Although the BAP was shared, each group interpreted the information it contained according to its own motivational or psychological biases. Moreover, there existed private information that had not been economical or feasible to include in, or that had been strategically withheld from, the BAP (e.g., actual customer intended orders, of which only sales was cognizant). 
The combination of the independently generated forecasts using even a simple average would yield a forecast that captured some of the unique and relevant information in, and thereby improved the accuracy of, the constituent forecasts (Lawrence et al., 1986). At Leitax, the functional forecasts were combined into the proposed consensus forecast using an algorithm more sophisticated than a simple average, based, as the literature recommends (Armstrong, 2001b), on the track record of the individual forecasts. By weighting the sales directors' forecast more heavily in the short term and the PPS's forecast more heavily in the long term, the DMO recognized each function's different level of intimacy with different temporal horizons, thereby reducing the potential impact of functional blind spots. Through this weighting, the DMO also explicitly managed each group's degree of influence on the forecasting horizon, which could have served as political appeasement.

Consensus Forecasting Meetings
The focus of the forecasting process on sell-through potentially yielded a clearer signal of market demand, as sell-in numbers tended to be a distorted signal of demand; the sales force was known to have an incentive to influence sell-in in the short term, and different retailers had time-varying appetites for product inventory. Discussion in the monthly consensus forecasting meetings revolved mainly around objections to the proposed consensus forecast. In this context, the proposed consensus forecast provided an anchoring point that was progressively adjusted to arrive at the final consensus forecast (FCF). Anchoring on the proposed consensus forecast not only reduced the cognitive effort required of the forecasting team members, but also eliminated their psychological biases and reduced the functional biases that might still be present in the functional forecasts. There is ample evidence in the literature that an anchoring and adjustment heuristic improves the accuracy of a consensus approach to forecasting (Ang and O'Connor, 1991). Discussion of objections to the proposed consensus forecast was intended to surface the private information or private interpretation of public information that motivated the objections. These discussions also served to reveal differences in the inference rules that functions used to generate forecasts. Differences might result from information that was not revealed in the BAP, from incomplete rules of inference (i.e., rules that do not consider all information), or from faulty rules of inference (i.e., rules that exhibited inconsistencies in logic). Faulty forecast assumptions were corrected and faulty rules of inference refined over time. The consensus meetings were also a source of feedback to the members of the forecasting group on forecasting performance. The feedback rendered observable not only unique and relevant factors that affect the accuracy of the overall forecasting process, but, through the three independent functional forecasts, other factors such as functional or psychological biases. For example, in early 2004 the DMO presented evidence that sales' forecasts tended to overestimate near-term and underestimate long-term sales. Fed back to the functional areas, these assessments of the accuracy of their respective forecasts created awareness of potential blind spots. The functional forecasts' historical accuracy also served to guide decision-making under conditions that demanded precision, such as allocation under constrained capacity or inventory.
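The following sketch illustrates, with hypothetical numbers, the two mechanics described above: the horizon-dependent weighting of the three functional forecasts into a proposed consensus, and the fractional-error accuracy feedback. The specific weights, monthly volumes, and the split of the residual weight between PPS and the statistical forecast are assumptions; the paper does not report the DMO's actual formula.

def proposed_consensus(sales_fc, pps_fc, stat_fc, sales_weight_by_month):
    # Blend the three functional forecasts month by month. The weight on the
    # sales directors' forecast declines with the horizon; the remaining weight
    # is split 2:1 between PPS and the statistical forecast (an assumption).
    blended = []
    for m, w_sales in enumerate(sales_weight_by_month):
        w_pps = (1.0 - w_sales) * 2.0 / 3.0
        w_stat = (1.0 - w_sales) * 1.0 / 3.0
        blended.append(w_sales * sales_fc[m] + w_pps * pps_fc[m] + w_stat * stat_fc[m])
    return blended

def forecast_accuracy(actual_sales, forecast):
    # Leitax's reported metric: FA = 1 - |sales - forecast| / forecast.
    return 1.0 - abs(actual_sales - forecast) / forecast

# Hypothetical three-month forecasts (units) and weights.
sales_fc = [50_000, 45_000, 40_000]
pps_fc = [46_000, 44_000, 48_000]
stat_fc = [48_000, 46_000, 45_000]
weights = [0.7, 0.5, 0.3]  # sales directors dominate the nearest month

consensus = proposed_consensus(sales_fc, pps_fc, stat_fc, weights)
print([round(x) for x in consensus])                      # [49000, 44833, 44900]
print(round(forecast_accuracy(41_800, consensus[0]), 2))  # 0.85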
The director of planning and fulfillment’s selection of a measure of performance to guide these discussions is also worthy of note. Some considered this measure of accuracy, which compared forecasts to actual sales as if actual sales represented true demand, simplistic. Rather than a detailed, complex measure of forecast accuracy, he opted to use a metric that in its simplicity was effective only in providing a directional assessment of forecast quality (i.e., is forecast accuracy improving over time?). Tempering the pursuit of improvement of this accuracy metric, the director argued that more sophisticated metrics (e.g., considering requested backlog to estimate final demand) would be more uncertain, convey less information, and prevent garnering sufficient support to drive improvement of the forecasting process. Supporting Financial and Operational Planning Leitax’s forecasting process, having the explicit goal of supporting financial and operational planning, allowed these functions to validate the agreed upon consensus forecast by transforming it into a revenue forecast and a master production schedule. Note, however, the manner in which exceptions to the forecast were treated: if the financial forecast was deemed unsatisfactory or the production schedule not executable because of unconsidered supply chain issues, a new marketing and distribution plan was developed and incorporated in the BAP. Also, note that this approach was facilitated by the process ignoring capacity constraints in estimating demand. It was common before the implementation of the forecasting process for forecasts to be affected by perceptions of present and future supply chain capacity, which resulted in a subtle form of self-fulfilling prophecy; even if manufacturing capacity became available, deflated forecasts would have positioned lower quantities of raw materials and components in the supply chain. By reflecting financial goals and operational restrictions in the BAP and asking the forecasting group (and functional areas) to update their forecasts based on the new set of assumptions, instead of adjusting the final consensus forecast directly, Leitax embedded the forecasting process in the 21 planning process. Reviewing the new marketing and product development plans reflected in the BAP, and validating it through the lenses of different departments via the functional and consensus forecast, essentially ensured that all of the functional areas involved in the process were re-aligned with the firm’s needs and expectations. Separation of the forecasting and decision-making processes has been found to be crucial to forecast accuracy (Fildes and Hastings, 1994). We discuss the contributions of this process to cross-functional coordination and organizational alignment in a separate paper (Oliva and Watson, 2006). 5.2 Political Analysis As shown in Table 1, certain components of the forecasting process dealt directly with the biases created by incentive misalignment. However, the implementation of the forecasting process was accompanied with significant structural additions, which we examine here via a political analysis. As mentioned in the section 2, we expect the forecasting process to create a social and procedural context that enables, through the use of commodities of power, the positive influences on forecast accuracy, while weakening the influence of functional biases that might reduce the forecast accuracy. The most significant component of this context is the creation of the DMO. 
Politically, the DMO was an independent group with responsibility for managing the forecasting process. The introduction of an additional group and its intrinsic political agenda might increase the complexity of the forecasting process and thereby reduce its predictability or complicate its control. However, the DMO, albeit neutral, was by no means impotent. Through its mandate to manage the forecasting process and its accountability for the process's accuracy, the DMO had the ability to determine the impact of different functions on forecast accuracy and to enforce procedural changes to mediate their influence. Specifically, with respect to biases due to incentive misalignment, because the DMO managed all exchanges of information associated with the process, it determined how other functions' power and influence would be expressed in the forecasts and could enforce the expression of this influence in production requests and inventory allocation decisions. The direct empowerment of the DMO group at Leitax resulted from its relationship with the planning function that made actual production requests and inventory allocations. The planning function, in turn, derived its power from the corporate mandate for a company turnaround. While the particular means of empowerment of the DMO group are not consequential — alternative sources of power could have been just as effective — the fact that the DMO was empowered was crucial for the creation and the success of the forecasting process. The empowerment of the DMO may seem antithetical to a consensual approach. In theory, the presence of a neutral body has been argued to be important for managing forecasting processes vulnerable to political influence (Deschamps, 2004), as a politically neutral actor is understood to have a limited desire to exercise power and is more easily deferred to for arbitration. In practice, an empowered entity such as the DMO needs to be careful to use this power to maintain the perception of neutrality. In particular, the perception of neutrality was reinforced by the DMO's mandate to manage the forecasting process (as opposed to actual forecasts), the simplicity and transparency of the information exchanges (basic Excel templates), and the performance metrics (recall the director's argument for the simplest measure of forecast accuracy). The forecasting process is itself an example of the empowerment of a positive influence on forecasting performance. The feasibility of the implemented forecasting process derived from the creation of the DMO and the director's ability to assure the attendance and participation of the VPs in the consensus forecasting meetings. While the forecasting process might have been initially successful because of this convening power, the process later became self-sustaining when it achieved credibility among the participants and the users of the final consensus. At that point in time, the principal source of power (ability to influence the forecast) became expertise and internal reputation as recognized by the forecasting group based on past forecasting performance. Interestingly, this historical performance also reinforced the need for a collaborative approach to forecasting, as no function had distinguished itself as possessing the ability to manage the process single-handedly. Nevertheless, since the forecasting approach accommodated some influence by functional groups, the DMO could be criticized for not fully eliminating opportunities for incentive misalignment.
Functional groups represent stakeholders with information sets and goals relevant to the organization’s viability, thus, it is important to listen to those interests. It is, however, virtually impossible to determine a priori whether the influence of any function will increase or decrease forecast accuracy. Furthermore, its own blind spots precluded the DMO from fully representing these stakeholders. Consequently, it is conceivably impossible to eliminate incentive misalignment entirely if stakeholder interests are to be represented in the process. Summarizing, the DMO managed the above complicating factors in its development of the forecasting process by generating the proposed consensus forecast and having groups react to, or account for, major differences with it. The process implemented by the DMO shifted the conversation from functional groups pushing for their respective agendas, to justifying the sources of the forecasts and explicitly recognizing areas of expertise or dominant knowledge (e.g., sales in the short-term, PPS in the long term). The participatory process and credibility that accrued to the forecasting group consequent to improvements in forecast accuracy made the final consensus forecast more acceptable to the rest of the organization and increased its effectiveness in coordinating procurement, manufacturing, and sales (Hagdorn-van der Meijden et al., 1994). 6. Emerging Challenges The deployment of a new system can introduce entirely new dynamics in terms of influence over forecasts and active biases. Here, we describe two missteps suffered in 2003 and relate performance feedback from participants in the consensus forecasting process and then explore the implications 24 for the design of the process and the structure that supports it. 6.1 Product Forecasting Missteps The first misstep occurred when product introduction and early sales were being planned for a new product broadly reviewed and praised in the press for its innovative features. Although the forecasting process succeeded in dampening to some degree the specialized press’ enthusiasm, the product was nevertheless woefully over-forecasted and excess inventory resulted in a write-off of more than 1% of lifetime volume materials cost. The second misstep occurred when Leitax introduced a new product that was based on a highly successful model currently being sold to the professional market. Leitax considered the new product inferior in quality since it was cheaper to manufacture and targeted it at “prosumers,” a marketing segment considered to be between the consumer and professional segments. Despite warnings from the DMO suggesting the possibility of cannibalization, the consensus forecast had the existing product continuing its impressive sales rate throughout the introduction of the new product. The larger-than-expected cannibalization resulted in an obsolescence write off for the existing product of 3% of lifetime volume materials cost. These two missteps suggest a particular case of “groupthink” (Janis, 1972), whereby optimism, initially justified, withstands contradictory data or logic as functional (or individual) biases common to all parties tend to be reinforced. Since the forecasting process seeks agreement, when the input perspectives are similar but inaccurate, as in the case of the missteps described above, the process can potentially reinforce the inaccurate perceptions. 
In response to these missteps, the DMO group considered changing the focus of the consensus meetings from the next two quarters towards the life-cycle quantity forecasts for product families and allowing the allocation to quarters to be more historically driven. This would serve to add another set of forecasts to the process to help improve accuracy. This focus on expected sales over the life of the product would also help mediate the intentional biases driven by natural interest in 25 immediate returns that would surface when the two nearest quarters were instead the focus. The DMO group, however, had to be careful about how the changes were introduced so as to maintain its neutral stance and not create the perception of generating forecasts rather than the forecasting process. 6.2 Interview Evaluations General feedback from interviewees reported lingering issues with process compliance. For instance, more frequently than the DMO expected, the process yielded a channel inventory level greater than the desired 7 to 8 weeks. This was explained by overly optimistic forecasts from sales and sales’ over selling into the channel in response to its incentives. Some wondered about the appropriate effect of the finance group on the process. Sales, for example, complained that finance used the consensus meetings to push sales for higher revenues. Gap-filling exercises channeling feedback from finance back into the business assumptions, sometimes effected significant changes to forecasts that seemed inappropriate. The inappropriate effects of sales and finance described above can be compared with the dynamics that existed before implementation to reveal emerging challenges associated with the forecasting process. For example, under DMO’s inventory allocation policies, the only line of influence for sales is its forecasts — the process had eliminated the other sources of influence that sales had. Thus, sales would explicitly bias its forecasts in an attempt to swing regional sales in the preferred direction. For finance, the available lines of influence are the gap-filling exercises and the interaction within the consensus forecasting meetings. Given that the incentives and priorities of these functions had not changed, the use of lines of influence in this manner is not unexpected. However, it is not easy to predict exactly how these lines of influence will be used. 6.3 Implications for Coordination System Design The consensus forecasting process occasioned lines of influence on forecasts to be used in ways that were not originally intended, and did not always dampen justifiable optimism regarding product 26 performance. The latter dynamic can be characterized as a group bias whereby functional (individual) biases/beliefs common to all parties tend to be reinforced. Since the process seeks agreement, when the input perspectives are similar but inaccurate, as in the case of the missteps described above, the process can potentially reinforce the inaccurate perceptions. Both dynamics illustrate how, in response to a particular set of processes, responsibilities, and structures — what we call a coordination system (Oliva and Watson, 2004) — new behavioral dynamics outside of those intended by the process might develop, introducing weaknesses (and conceivably strengths) not previously observed in the process. In principle, a coordinating system should be designed to account and compensate for individual and functional biases of supply chain partners. 
But coordination system design choices predispose individual partners to certain problem space, simplifications, and heuristics. Because the design of a coordinating system determines the complexity of each partner's role, it is also, in part, responsible for the biases exhibited by the partners. In other words, changes attendant on a process put in place to counter particular biases might unintentionally engender a different set of biases. The recognition that a coordinating system both needs to account, and is in part responsible, for partners’ biases, introduces a level of design complexity not currently acknowledged. Managers need to be aware of this possibility and monitor the process in order to identify unintended adjustments, recognizing that neither unintended behavioral adjustments nor their effects are easily predicted given the many process interactions that might be involved. This dual relationship between the coordination system and associated behavioral schema (see Figure 3), although commonly remarked in the organizational theory literature (e.g., Barley, 1986; Orlikowski, 1992), has not previously been examined in the forecasting or operations management literatures. 7. Conclusion The purpose of case studies is not to argue for specific solutions, but rather to develop explanations 27 (Yin 1984). By categorizing potential sources of functional biases into a typology—intentional, that is, driven by incentive misalignment and dispositions of power, and unintentional, that is, related to informational and procedural blind spots—we address a range of forecasting challenges that may not show up as specifically as they do at Leitax, but are similarly engendered. By a complete mapping of the steps of the forecasting process, its accompanying organizational structure and its role within the planning processes of the firm, we detail the relevant elements of an empirically observed phenomenon occurring within its contexts. By capturing the political motivations and exchanges and exploring how the deployed process and structure mitigated the existing biases, we assess the effectiveness of the process in a dimension that has largely been ignored by the forecasting literature. Finally, through the assessment of new sources of biases after the deployment of the coordination system, we identify the adaptive nature of the political game played by the actors. Through the synthesis of our observations on these relevant elements of this coordinated forecasting system, previous findings from the forecasting literature, and credible deductions linking the coordination system to the mitigation of intentional and unintentional biases identified and the emergence of new ones, we provide sufficient evidence for the following propositions concerning the management of organizational forecasts (Meredith 1998): Proposition I: Consensus forecasting, together with the supporting elements of information exchange and assumption elicitation, can prove a sufficient mechanism for constructively managing the influence of both biases on forecasts while being adequately responsive to managing a fast-paced supply chain. Proposition II: The creation of an independent group responsible for managing the consensus forecasting process, an approach that we distinguish from generating forecasts directly, provides an effective way of managing the political conflict and informational and procedural shortcomings occasioned by organizational differentiation. 
Proposition III: While a coordination system—the relevant processes, roles and responsibilities, and structure—can be designed to address existing individual and functional biases in the organization, the new coordination system will in turn generate new individual and functional biases. 28 The empirical and theoretical grounding of our propositions suggest further implications for practitioners and researchers alike. The typology of functional biases into intentional and unintentional highlights managers’ need to be aware that better and more integrated information may not be sufficient for a good forecast, and that attention must be paid as well to designing the process so that the social and political dimensions of the organization are effectively managed. Finally, new intentional and unintentional biases can emerge directly from newly implemented processes. This places a continuous responsibility on managers monitoring implemented systems for emerging biases and understanding the principles for dealing with different types of biases, to make changes to these systems to maintain operational and organizational gains. Generating forecasts may involve an ongoing process of iterative coordination system improvement. For researchers in operations management and forecasting methods, the process implemented by Leitax might be seen, at a basic level, as a “how to” for implementing in the organization many of the lessons from the research in forecasting and behavioral decision-making. More important, the case illustrates the organizational and behavioral context of forecasting, a context that, to our knowledge, had not been fully addressed. Given the role of forecasting in the operations management function, and as argued in the introduction, future research is needed to continue to build frameworks for managing forecasting along the organizational and political dimensions in operational settings. Such research should be primarily empirical, including both exploratory and theory building methodology that can draw heavily from the current forecasting literature, which has uncovered many potential benefits for forecasting methods ex situ. References Ang, S., M.J. O'Connor, 1991. The effect of group-interaction processes on performance in timeseries extrapolation. Int. J. Forecast. 7 (2), 141-149. Antle, R., G.D. Eppen, 1985. Capital rationing and organizational slack in capital-budgeting. Management Sci. 31 (2), 163-174. 29 Armstrong, J.S. (ed.), 2001a. Principles of Forecasting. Kluwer Academic Publishers, Boston. Armstrong, J.S., 2001b. Combining forecasts. In: J.S. Armstrong (Ed), Principles of Forecasting. Kluwer Academic Publisher, Boston, pp. 417-439. Barley, S., 1986. Technology as an occasion for structuring: Evidence from observations of CT scanners and the social order of radiology departments. Adm. Sci. Q. 31, 78-108. Beach, L.R., V.E. Barnes, J.J.J. Christensen-Szalanski, 1986. Beyond heuristics and biases: A contingency model of judgmental forecasting. J. Forecast. 5, 143-157. Bower, P., 2005. 12 most common threats to sales and operations planning process. J. Bus. Forecast. 24 (3), 4-14. Bretschneider, S.I., W.L. Gorr, 1987. State and local government revenue forecasting. In: S. Makridakis, and S.C. Wheelwright (Eds), The Handbook of Forecasting: A Manager's Guide. Wiley, New York, pp. 118-134. Bretschneider, S.I., W.L. Gorr, 1989. Forecasting as a science. Int. J. Forecast. 5 (3), 305-306. Bretschneider, S.I., W.L. Gorr, G. Grizzle, E. Klay, 1989. 
Political and organizational influences on the accuracy of forecasting state government revenues. Int. J. Forecast. 5 (3), 307-319. Bromiley, P., 1987. Do forecasts produced by organizations reflect anchoring and adjustment? J. Forecast. 6 (3), 201-210. Cachon, G.P., M.A. Lariviere, 2001. Contracting to assure supply: How to share demand forecasts in a supply chain. Management Sci. 47 (5), 629-646. Checkland, P.B., J. Scholes, 1990. Soft Systems Methodology in Action. Wiley, Chichester, UK. Copeland, T., T. Koller, J. Murrin, 1994. Valuation: Measuring and Managing the Value of Companies, 2nd ed. Wiley, New York. Crick, B., 1962. In Defence of Politics. Weidenfeld and Nicolson, London. Crittenden, V.L., L.R. Gardiner, A. Stam, 1993. Reducing conflict between marketing and manufacturing. Ind. Market. Manag. 22 (4), 299-309. Dahl, R.A., 1970. Modern Political Analysis, 2nd ed. Prentice Hall, Englewood Cliffs, NJ. Deschamps, E., 2004. The impact of institutional change on forecast accuracy: A case study of budget forecasting in Washington State. Int. J. Forecast. 20 (4), 647-657. Edmundson, R.H., M.J. Lawrence, M.J. O'Connor, 1988. The use of non-time series information in sales forecasting: A case study. J. Forecast. 7, 201-211. Eisenhardt, K.M., 1989. Building theories from case study research. Acad. Manage. Rev. 14 (4), 532-550. Fildes, R., R. Hastings, 1994. The organization and improvement of market forecasting. J. Oper. Res. Soc. 45 (1), 1-16. Fisher, M.L., A. Raman, 1996. Reducing the cost of demand uncertainty through accurate response to early sales. Oper. Res. 44 (1), 87-99. Fisher, M.L., J.H. Hammond, W.R. Obermeyer, A. Raman, 1994. Making supply meet demand in an uncertain world. Harvard Bus. Rev. 72 (3), 83-93. Gaeth, G.J., J. Shanteau, 1984. Reducing the influence of irrelevant information on experienced decision makers. Organ. Behav. Hum. Perf. 33, 263-282. Gaur, V., S. Kesavan, A. Raman, M.L. Fisher, 2007. Estimating demand uncertainty using judgmental forecasts. Man. Serv. Oper. Manage. 9 (4), 480-491. Goodwin, P., G. Wright, 1993. Improving judgmental time series forecasting: A review of guidance provided by research. Int. J. Forecast. 9 (2), 147-161. Griffin, A., J.R. Hauser, 1992. Patterns of communication among marketing, engineering and manufacturing: A comparison between two new product teams. Management Sci. 38 (3), 360-373. Griffin, A., J.R. Hauser, 1996. Integrating R&D and Marketing: A review and analysis of the literature. J. Prod. Innovat. 13 (1), 191-215. Hagdorn-van der Meijden, L., J.A.E.E. van Nunen, A. Ramondt, 1994. Forecasting—bridging the gap between sales and manufacturing. Int. J. Prod. Econ. 37, 101-114. Hamel, G., C.K. Prahalad, 1989. Strategic intent. Harvard Bus. Rev. 67 (3), 63-78. Hammond, J.H., 1990. Quick response in the apparel industry. Harvard Business School Note 690-038. Harvard Business School, Boston. Hammond, J.H., A. Raman, 1995. Sport Obermeyer Ltd. Harvard Business School Case 695-002. Harvard Business School, Boston. Hanke, J.E., A.G. Reitsch, 1995. Business Forecasting, 5th ed. Prentice Hall, Englewood Cliffs, NJ. Hughes, M.S., 2001. Forecasting practice: Organizational issues. J. Oper. Res. Soc. 52 (2), 143-149. Janis, I.L., 1972. Victims of Groupthink. Houghton Mifflin, Boston. Kahn, K.B., J.T. Mentzer, 1994. The impact of team-based forecasting. J. Bus. Forecast. 13 (2), 18-21. Keating, E.K., R. Oliva, N. Repenning, S.F. Rockart, J.D. Sterman, 1999. Overcoming the improvement paradox. Eur. Mgmt. J. 17 (2), 120-134.
Lapide, L., 2005. An S&OP maturity model. J. Bus. Forecast. 24 (3), 15-20. Lawrence, M.J., R.H. Edmundson, M.J. O'Connor, 1986. The accuracy of combining judgmental and statistical forecasts. Management Sci. 32 (12), 1521-1532. Lim, J.S., M.J. O'Connor, 1995. Judgmental adjustment of initial forecasts: Its effectiveness and biases. J. Behav. Decis. Making 8, 149-168. Mahmoud, E., R. DeRoeck, R. Brown, G. Rice, 1992. Bridging the gap between theory and practice in forecasting. Int. J. Forecast. 8 (2), 251-267. Makridakis, S., S.C. Wheelwright, R.J. Hyndman, 1998. Forecasting: Methods and Applications, 3rd ed. Wiley, New York. Mentzer, J.T., C.C. Bienstock, 1998. Sales Forecasting Management. Sage, Thousand Oaks, CA. Meredith, J., 1998. Building operations management theory through case and field research. J. Oper. Manag. 16, 441-454. Oliva, R., 2001. Tradeoffs in responses to work pressure in the service industry. California Management Review 43 (4), 26-43. Oliva, R., J.D. Sterman, 2001. Cutting corners and working overtime: Quality erosion in the service industry. Management Sci. 47 (7), 894-914. Oliva, R., N. Watson, 2004. What drives supply chain behavior? Harvard Bus. Sch., June 7, 2004. Available from: http://hbswk.hbs.edu/item.jhtml?id=4170&t=bizhistory. Oliva, R., N. Watson, 2006. Cross functional alignment in supply chain planning: A case study of sales & operations planning. Working Paper 07-001. Harvard Business School, Boston. Orlikowski, W., 1992. The duality of technology: Rethinking the concept of technology in organizations. Organ. Sci. 3 (3), 398-427. Pfeffer, J., G.R. Salancik, 1974. Organizational decision making as a political process: The case of a university budget. Adm. Sci. Q. 19 (2), 135-151. Rowe, G., G. Wright, 1999. The Delphi technique as a forecasting tool: Issues and analysis. Int. J. Forecast. 12 (1), 73-92. Rowe, G., G. Wright, 2001. Expert opinions in forecasting: The role of the Delphi technique. In: J.S. Armstrong (Ed), Principles of Forecasting. Kluwer Academic Publishers, Norwell, MA, pp. 125-144. Salancik, G.R., J. Pfeffer, 1977. Who gets power – and how they hold on to it: A strategic-contingency model of power. Org. Dyn. 5 (3), 3-21. Sanders, N.R., L.P. Ritzman, 1992. Accuracy of judgmental forecasts: A comparison. Omega 20, 353-364. Sanders, N.R., K.B. Manrodt, 1994. Forecasting practices in U.S. corporations: Survey results. Interfaces 24, 91-100. Sanders, N.R., L.P. Ritzman, 2001. Judgmental adjustment of statistical forecasts. In: J.S. Armstrong (Ed), Principles of Forecasting. Kluwer Academic Publishers, Boston, pp. 405-416. Shapiro, B.P., 1977. Can marketing and manufacturing coexist? Harvard Bus. Rev. 55 (5), 104-114. Stein, J.C., 1997. Internal capital markets and the competition for corporate resources. Journal of Finance 52 (1), 111-133. Terwiesch, C., Z.J. Ren, T.H. Ho, M.A. Cohen, 2005. An empirical analysis of forecast sharing in the semiconductor equipment supply chain. Management Sci. 51 (2), 208-220. Voorhees, W.R., 2000. The impact of political, institutional, methodological, and economic factors on forecast error. PhD dissertation, Indiana University. Watson, M.C., 1996. Forecasting in the Scottish electronics industry. Int. J. Forecast. 12 (3), 361-371. Watson, N., R. Oliva, 2005. Leitax (A). Harvard Business School Case 606-002. Harvard Business School, Boston. Wheelwright, S.C., K.B. Clark, 1992. Revolutionizing Product Development. Wiley, New York. Yin, R., 1984. Case Study Research. Sage, Beverly Hills, CA.
Figure 1. Forecast Accuracy Performance. Plot of forecast accuracy against the accuracy goal for the sell-through and sell-in forecasts, by quarter from Dec-Feb 2002 through Sep-Nov 2003, with the project redesign and go-live dates marked. † The dip in forecasting performance in Sep-Nov 2003 was a result of the relocation of a distribution center.
Figure 2. Consensus Forecasting Process. Elements shown: industry, historical, and sales information; the business assumptions package; the statistical forecast (DMO), top-down forecast (PPS), and bottom-up forecast (SD); the consensus forecast; and joint planning.
Figure 3. Dual Relationship between Coordination System and Behavioral Dynamics. Individual or functional biases influence the design of the coordination system (its processes, roles, structure, and values), and the coordination system in turn creates or generates such biases.
Table 1. Process Steps and Biases Mitigated. The table maps the elements of the consensus forecasting process (the business assumptions package, with multiple sources, multiple interpretations, and explicitly labeled interpretation sources; the functional forecasts, drawing on private information not in the BAP, functional interpretations of assumptions, aggregate family-level forecasts, and ignoring of planning expectations and supply chain constraints; the proposed consensus forecast, built as a weighted average of the functional forecasts with weights based on past proven performance and used as the initial anchor for the consensus process; the final consensus meeting, which resolves diverging forecasts and uncovers private information and private interpretations of public information; and the financial and operational forecast review, with BAP revision) to the types of bias each step mitigates: procedural blind spots, informational blind spots, and incentive misalignment.
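As a rough illustration of the weighted-average consensus step summarized in Table 1, the sketch below combines three functional forecasts using accuracy-based weights. The function names, forecast figures, and error rates are illustrative assumptions, not details taken from the Leitax case.

    # Illustrative sketch (not the Leitax implementation): combine three
    # functional forecasts into a proposed consensus forecast, weighting each
    # source by its past forecast accuracy, as summarized in Table 1.

    def accuracy_weights(past_errors):
        """Turn past error rates into normalized weights (lower error -> higher weight)."""
        inverse = {src: 1.0 / err for src, err in past_errors.items()}
        total = sum(inverse.values())
        return {src: w / total for src, w in inverse.items()}

    def proposed_consensus(forecasts, weights):
        """Weighted average of the functional forecasts (units for one product family)."""
        return sum(weights[src] * forecasts[src] for src in forecasts)

    # Hypothetical numbers for one product family.
    forecasts = {"statistical_DMO": 10200, "top_down_PPS": 11800, "bottom_up_SD": 9400}
    past_mape = {"statistical_DMO": 0.12, "top_down_PPS": 0.20, "bottom_up_SD": 0.15}

    weights = accuracy_weights(past_mape)
    print(round(proposed_consensus(forecasts, weights)))  # anchor for the consensus meeting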
Perspectives on Psychological Science 6(1) 9–12. DOI: 10.1177/1745691610393524. http://pps.sagepub.com/content/6/1/9. Published by SAGE Publications on behalf of the Association for Psychological Science.
Building a Better America—One Wealth Quintile at a Time
Michael I. Norton 1 and Dan Ariely 2
1 Harvard Business School, Boston, MA, and 2 Department of Psychology, Duke University, Durham, NC
Abstract Disagreements about the optimal level of wealth inequality underlie policy debates ranging from taxation to welfare. We attempt to insert the desires of "regular" Americans into these debates, by asking a nationally representative online panel to estimate the current distribution of wealth in the United States and to "build a better America" by constructing distributions with their ideal level of inequality. First, respondents dramatically underestimated the current level of wealth inequality. Second, respondents constructed ideal wealth distributions that were far more equitable than even their erroneously low estimates of the actual distribution. Most important from a policy perspective, we observed a surprising level of consensus: All demographic groups—even those not usually associated with wealth redistribution such as Republicans and the wealthy—desired a more equal distribution of wealth than the status quo.
Keywords inequality, fairness, justice, political ideology, wealth, income
Most scholars agree that wealth inequality in the United States is at historic highs, with some estimates suggesting that the top 1% of Americans hold nearly 50% of the wealth, topping even the levels seen just before the Great Depression in the 1920s (Davies, Sandstrom, Shorrocks, & Wolff, 2009; Keister, 2000; Wolff, 2002). Although it is clear that wealth inequality is high, determining the ideal distribution of wealth in a society has proven to be an intractable question, in part because differing beliefs about the ideal distribution of wealth are the source of friction between policymakers who shape that distribution: Proponents of the "estate tax," for example, argue that the wealth that parents bequeath to their children should be taxed more heavily than do those who refer to this policy as a burdensome "death tax." We took a different approach to determining the ideal level of wealth inequality: Following the philosopher John Rawls (1971), we asked Americans to construct distributions of wealth they deem just. Of course, this approach may simply add to the confusion if Americans disagree about the ideal wealth distribution in the same way that policymakers do. Thus, we had two primary goals.
First, we explored whether there is general consensus among Americans about the ideal level of wealth inequality, or whether differences—driven by factors such as political beliefs and income—outweigh any consensus (see McCarty, Poole, & Rosenthal, 2006). Second, assuming sufficient agreement, we hoped to insert the preferences of "regular Americans" regarding wealth inequality into policy debates. A nationally representative online sample of respondents (N = 5,522, 51% female, mean age = 44.1), randomly drawn from a panel of more than 1 million Americans, completed the survey in December 2005. 1 Respondents' household income (median = $45,000) was similar to that reported in the 2006 United States census (median = $48,000), and their voting pattern in the 2004 election (50.6% Bush, 46.0% Kerry) was also similar to the actual outcome (50.8% Bush, 48.3% Kerry). In addition, the sample contained respondents from 47 states. We ensured that all respondents had the same working definition of wealth by requiring them to read the following before beginning the survey: "Wealth, also known as net worth, is defined as the total value of everything someone owns minus any debt that he or she owes. A person's net worth includes his or her bank account savings plus the value of other things such as property, stocks, bonds, art, collections, etc., minus the value of things like loans and mortgages." Corresponding authors: Michael I. Norton, Harvard Business School, Soldiers Field Road, Boston, MA 02163, or Dan Ariely, Duke University, One Towerview Road, Durham, NC 27708. E-mail: mnorton@hbs.edu or dandan@duke.edu. Americans Prefer Sweden For the first task, we created three unlabeled pie charts of wealth distributions, one of which depicted a perfectly equal distribution of wealth. Unbeknownst to respondents, a second distribution reflected the wealth distribution in the United States; in order to create a distribution with a level of inequality that clearly fell in between these two charts, we constructed a third pie chart from the income distribution of Sweden (Fig. 1). 2 We presented respondents with the three pairwise combinations of these pie charts (in random order) and asked them to choose which nation they would rather join given a "Rawls constraint" for determining a just society (Rawls, 1971): "In considering this question, imagine that if you joined this nation, you would be randomly assigned to a place in the distribution, so you could end up anywhere in this distribution, from the very richest to the very poorest." As can be seen in Figure 1, the (unlabeled) United States distribution was far less desirable than both the (unlabeled) Sweden distribution and the equal distribution, with some 92% of Americans preferring the Sweden distribution to the United States. In addition, this overwhelming preference for the Sweden distribution over the United States distribution was robust across gender (females: 92.7%, males: 90.6%), preferred candidate in the 2004 election (Bush voters: 90.2%; Kerry voters: 93.5%), and income (less than $50,000: 92.1%; $50,001–$100,000: 91.7%; more than $100,000: 89.1%).
In addition, there was a slight preference for the distribution that resembled Sweden relative to the equal distribution, suggesting that Americans prefer some inequality to perfect equality, but not to the degree currently present in the United States. Building a Better America Although the choices among the three distributions shed some light on preferences for distributions of wealth in the abstract, we wanted to explore respondents' specific beliefs about their own society. In the next task, we therefore removed Rawls' "veil of ignorance" and assessed both respondents' estimates of the actual distribution of wealth and their preferences for the ideal distribution of wealth in the United States. For their estimates of the actual distribution, we asked respondents to indicate what percent of wealth they thought was owned by each of the five quintiles in the United States, in order starting with the top 20% and ending with the bottom 20%. For their ideal distributions, we asked them to indicate what percent of wealth they thought each of the quintiles ideally should hold, again starting with the top 20% and ending with the bottom 20%. To help them with this task, we provided them with the two most extreme examples, instructing them to assign 20% of the wealth to each quintile if they thought that each quintile should have the same level of wealth, or to assign 100% of the wealth to one quintile if they thought that one quintile should hold all of the wealth. Figure 2 shows the actual wealth distribution in the United States at the time of the survey, respondents' overall estimate of that distribution, and respondents' ideal distribution. These results demonstrate two clear messages. First, respondents vastly underestimated the actual level of wealth inequality in the United States, believing that the wealthiest quintile held about 59% of the wealth when the actual number is closer to 84%. More interesting, respondents constructed ideal wealth distributions that were far more equitable than even their erroneously low estimates of the actual distribution, reporting a desire for the top quintile to own just 32% of the wealth. These desires for more equal distributions of wealth took the form of moving money from the top quintile to the bottom three quintiles, while leaving the second quintile unchanged, evincing a greater concern for the less fortunate than the more fortunate (Charness & Rabin, 2002). We next explored how demographic characteristics of our respondents affected these estimates. Figure 3 shows these estimates broken down by three levels of income, by whether respondents voted for George W. Bush (Republican) or John Kerry (Democrat) for United States president in 2004, and by gender. Males, Kerry voters, and wealthier individuals estimated that the distribution of wealth was relatively more unequal than did women, Bush voters, and poorer individuals. For estimates of the ideal distribution, women, Kerry voters, and the poor desired relatively more equal distributions than did their counterparts. Despite these (somewhat predictable) differences, what is most striking about Figure 3 is its demonstration of much more consensus than disagreement among these different demographic groups. All groups—even the wealthiest respondents—desired a more equal distribution of wealth than what they estimated the current United States level to be, and all groups also desired some inequality—even the poorest respondents. In addition, all groups
agreed that such redistribution should take the form of moving wealth from the top quintile to the bottom three quintiles. In short, although Americans tend to be relatively more favorable toward economic inequality than members of other countries (Osberg & Smeeding, 2006), Americans' consensus about the ideal distribution of wealth within the United States appears to dwarf their disagreements across gender, political orientation, and income.
Fig. 1. Relative preference among all respondents for three distributions: Sweden (upper left), an equal distribution (upper right), and the United States (bottom). Pie charts depict the percentage of wealth possessed by each quintile; for instance, in the United States, the top wealth quintile owns 84% of the total wealth, the second highest 11%, and so on.
Fig. 2. The actual United States wealth distribution plotted against the estimated and ideal distributions across all respondents. Because of their small percentage share of total wealth, both the "4th 20%" value (0.2%) and the "Bottom 20%" value (0.1%) are not visible in the "Actual" distribution.
Fig. 3. The actual United States wealth distribution plotted against the estimated and ideal distributions of respondents of different income levels, political affiliations, and genders. Because of their small percentage share of total wealth, both the "4th 20%" value (0.2%) and the "Bottom 20%" value (0.1%) are not visible in the "Actual" distribution.
Overall, these results demonstrate two primary messages. First, a large nationally representative sample of Americans seems to prefer to live in a country more like Sweden than like the United States. Americans also construct ideal distributions that are far more equal than they estimated the United States to be—estimates which themselves were far more equal than the actual level of inequality. Second, there was much more consensus than disagreement across groups from different sides of the political spectrum about this desire for a more equal distribution of wealth, suggesting that Americans may possess a commonly held "normative" standard for the distribution of wealth despite the many disagreements about policies that affect that distribution, such as taxation and welfare (Kluegel & Smith, 1986). We hasten to add, however, that our use of "normative" is in a descriptive sense—reflecting the fact that Americans agree on the ideal distribution—but not necessarily in a prescriptive sense. Although some evidence suggests that economic inequality is associated with decreased well-being and health (Napier & Jost, 2008; Wilkinson & Pickett, 2009), creating a society with the precise level of inequality that our respondents report as ideal may not be optimal from an economic or public policy perspective (Krueger, 2004). Given the consensus among disparate groups on the gap between an ideal distribution of wealth and the actual level of wealth inequality, why are more Americans, especially those with low income, not advocating for greater redistribution of wealth? First, our results demonstrate that Americans appear to drastically underestimate the current level of wealth inequality, suggesting they may simply be unaware of the gap.
Second, just as people have erroneous beliefs about the actual level of wealth inequality, they may also hold overly optimistic beliefs about opportunities for social mobility in the United States (Benabou & Ok, 2001; Charles & Hurst, 2003; Keister, 2005), beliefs which in turn may drive support for unequal distributions of wealth. Third, despite the fact that conservatives and liberals in our sample agree that the current level of inequality is far from ideal, public disagreements about the causes of that inequality may drown out this consensus (Alesina & Angeletos, 2005; Piketty, 1995). Finally, and more broadly, Americans exhibit a general disconnect between their attitudes toward economic inequality and their self-interest and public policy preferences (Bartels, 2005; Fong, 2001), suggesting that even given increased awareness of the gap between ideal and actual wealth distributions, Americans may remain unlikely to advocate for policies that would narrow this gap. Acknowledgments We thank Jordanna Schutz for her many contributions; George Akerlof, Lalin Anik, Ryan Buell, Zoë Chance, Anita Elberse, Ilyana Kuziemko, Jeff Lee, Jolie Martin, Mary Carol Mazza, David Nickerson, John Silva, and Eric Werker for their comments; and surveysampling.com for their assistance administering the survey. Declaration of Conflicting Interests The authors declared that they had no conflicts of interest with respect to their authorship or the publication of this article. Notes 1. We used the survey organization Survey Sampling International (surveysampling.com) to conduct this survey. As a result, we do not have direct access to panelist response rates. 2. We used Sweden's income rather than wealth distribution because it provided a clearer contrast to the other two wealth distribution examples; although more equal than the United States' wealth distribution, Sweden's wealth distribution is still extremely top heavy. References Alesina, A., & Angeletos, G.M. (2005). Fairness and redistribution. American Economic Review, 95, 960–980. Bartels, L.M. (2005). Homer gets a tax cut: Inequality and public policy in the American mind. Perspectives on Politics, 3, 15–31. Benabou, R., & Ok, E.A. (2001). Social mobility and the demand for redistribution: The POUM hypothesis. Quarterly Journal of Economics, 116, 447–487. Charles, K.K., & Hurst, E. (2003). The correlation of wealth across generations. Journal of Political Economy, 111, 1155–1182. Charness, G., & Rabin, M. (2002). Understanding social preferences with simple tests. Quarterly Journal of Economics, 117, 817–869. Davies, J.B., Sandstrom, S., Shorrocks, A., & Wolff, E.N. (2009). The global pattern of household wealth. Journal of International Development, 21, 1111–1124. Fong, C. (2001). Social preferences, self-interest, and the demand for redistribution. Journal of Public Economics, 82, 225–246. Keister, L.A. (2000). Wealth in America. Cambridge, England: Cambridge University Press. Keister, L.A. (2005). Getting rich: America's new rich and how they got that way. Cambridge, England: Cambridge University Press. Kluegel, J.R., & Smith, E.R. (1986). Beliefs about inequality: Americans' views of what is and what ought to be. New York: Aldine de Gruyter. Krueger, A.B. (2004). Inequality, too much of a good thing. In J.J. Heckman & A.B. Krueger (Eds.), Inequality in America: What role for human capital policies (pp. 1–75). Cambridge, MA: MIT Press. McCarty, N., Poole, K.T., & Rosenthal, H. (2006). Polarized America: The dance of ideology and unequal riches.
Cambridge, MA: MIT Press. Napier, J.L., & Jost, J.T. (2008). Why are conservatives happier than liberals? Psychological Science, 19, 565–572. Osberg, L., & Smeeding, T. (2006). "Fair" inequality? Attitudes to pay differentials: The United States in comparative perspective. American Sociological Review, 71, 450–473. Piketty, T. (1995). Social mobility and redistributive politics. Quarterly Journal of Economics, 110, 551–584. Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press. Wilkinson, R., & Pickett, K. (2009). The spirit level: Why greater equality makes societies stronger. New York: Bloomsbury. Wolff, E.N. (2002). Top heavy: The increasing inequality of wealth in America and what can be done about it. New York: New Press.
Payout Taxes and the Allocation of Investment
Bo Becker (Harvard University and NBER, bbecker@hbs.edu), Marcus Jacob (EBS European Business School, marcus.jacob@ebs.edu), Martin Jacob (WHU – Otto Beisheim School of Management, martin.jacob@whu.edu). Harvard Business School Working Paper 11-040. This draft: September 27, 2011.
ABSTRACT. When corporate payout is taxed, internal equity (retained earnings) is cheaper than external equity (share issues). High taxes will favor firms who can finance internally. If there are no perfect substitutes for equity finance, payout taxes may thus change the investment behavior of firms. Using an international panel with many changes in payout taxes, we show that this prediction holds well. Payout taxes have a large impact on the dynamics of corporate investment and growth. Investment is "locked in" in profitable firms when payout is heavily taxed. Thus, apart from any aggregate effects, payout taxes change the allocation of capital. JEL No. G30, G31, H25.
* We thank Chris Allen and Baker Library Research Services for assistance with data collection. We are grateful to James Poterba, Raj Chetty, Fritz Foley, Jochen Hundsdoerfer, Richard Sansing, Kristian Rydqvist and seminar participants at European Business School, Harvard Business School, Harvard Economics Department, the UNC Tax Symposium, the Nordic Workshop on Tax Policy and Public Economics, and the Stockholm Institute for Financial Research (SIFR) for helpful comments.
1. Introduction
Corporate payout, in the form of dividends or as repurchases of shares, is subject to taxation in most countries. Such taxes on corporate payout drive a wedge between the cost of internal and external equity (retained earnings and equity issues, respectively). 1 Therefore, higher payout taxes are expected to "lock in" investment in profitable firms, at the expense of firms with good investment opportunities which would require external equity financing to undertake. The empirical relevance of this simple prediction has not been well tested. Despite the large amount of theoretical and empirical research about the effect of dividend taxes on the level of investment and on the valuation of firms (see, e.g., Auerbach 1979a, Bradford 1981, Chetty and Saez 2010, Feldstein 1970, Guenther and Sansing 2006, Harberger 1962, King 1977, Korinek and Stiglitz 2009, Poterba and Summers 1984 and 1985), little is known about the effects of such taxes on the allocation of investment across firms. Yet, the theoretical prediction is very clear: higher payout taxes will increase the wedge between the cost of internal and external equity, and firms with more costly external financing will exhibit greater investment cash flow sensitivities. Put differently, payout taxes favor investment financed by retained earnings over investment financed by equity issues. 2 This can matter for the productivity and nature of investment if a) debt finance is an imperfect substitute for equity (in other words, if the Miller Modigliani propositions do not hold), b) different firms have different investment opportunities, c) the marginal investor is subject to taxation, and d) firms make equity payouts while the tax is in effect. All these conditions have some empirical support. 3
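The wedge described above (and derived in footnote 1 below) can be illustrated with a minimal numerical sketch. The tax rates, the return symbol, and the function names below are illustrative assumptions, not values taken from the paper.

    # Illustrative sketch of the payout-tax wedge between internal and external
    # equity: a firm financing with newly issued equity invests only if
    # alpha * (1 - t) > r, i.e. alpha > r / (1 - t), while a firm investing
    # retained earnings invests whenever alpha > r. Higher payout taxes t thus
    # raise the hurdle rate only for the externally financed firm.

    def hurdle_external_equity(r, t):
        # required pre-tax return when the marginal dollar is newly issued equity
        return r / (1.0 - t)

    def hurdle_internal_equity(r, t):
        # required pre-tax return when the marginal dollar is retained earnings
        return r  # the payout tax cancels out of this comparison

    r = 0.05                      # assumed tax-free alternative return
    for t in (0.15, 0.30, 0.45):  # assumed payout tax rates
        wedge = hurdle_external_equity(r, t) - hurdle_internal_equity(r, t)
        print(f"t={t:.2f}: external hurdle={hurdle_external_equity(r, t):.3f}, "
              f"internal hurdle={hurdle_internal_equity(r, t):.3f}, wedge={wedge:.3f}")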
But are such frictions important enough for this to matter in practice for investment levels? This paper aims to test the extent to which the "lock in" effect of payout taxes matters empirically.
1 To see the tax difference, consider a firm facing a dividend tax rate of t and which has the opportunity to invest one dollar now in order to receive α in the future. If the firm issues equity, it can pay a dividend of 1+α. The initial investment is paid-in capital and not subject to dividend taxes, so the shareholders will receive 1+α(1-t) in after-tax payoff. Alternatively, investors can invest the dollar at a tax-free return (1+r). This firm should invest if α(1-t)>r. Now consider another firm, which has retained earnings, so that it faces the choice between paying out one dollar, producing (1-t) in after-tax payoff to investors today, which will be worth (1-t)(1+r) tomorrow, or investing, producing (1+α)(1-t) in after-tax dividends for investors. This firm should invest if α>r. The tax wedge is the difference between the two firms' investment criteria. Put differently, the after-tax cost of capital is lower for firms with inside equity. Lewellen and Lewellen (2006) develop this intuition and further results in a richer model. We sometimes refer to this prediction as the tax wedge theory.
2 The debate about the impact of payout taxes on the level of investment between the "old view" (Harberger 1962, 1966, Feldstein 1970, Poterba and Summers 1985) and the "new view" (Auerbach 1979a, Bradford 1981, King 1977) can be understood in terms of different assumptions about the marginal source of investment financing. To simplify, the old view assumes that marginal investment is financed by equity issues, so that payout taxes raise the cost of capital and reduce investment. The new view assumes that marginal investment is financed by retained earnings, so that payout taxes do not reduce investment. In practice, firms are likely to differ in their ability to finance investment with internal resources (e.g. Lamont 1997). If they do, the tax rate will affect the allocation of investment. Auerbach (1979b) makes a related point about how firms with and without internal funds should respond differently to dividend taxes.
3 Regarding the imperfect substitutability between debt and equity, see e.g. Myers (1977), Jensen and Meckling (1976). Regarding the variation in investment opportunities across firms, see e.g. Coase (1937) and Zingales (2000). Firms with limited access to internal equity may include entrepreneurial firms and firms with strong growth opportunities. Regarding the taxability of the marginal investor, see e.g. our Section 4.4, and note also that in many countries outside the U.S. and the U.K. (for example, in Germany and Austria) investment funds managing private investors' money are ultimately taxed like private investors. Regarding payout, many firms pay dividends or repurchase shares every year. Others may plan to do so in the future. Korinek and Stiglitz (2010) consider firms' ability to time their payout around tax changes.
There are several challenges in testing how payout taxes affect the cross-firm allocation of investment. First, large changes in the US tax code are rare. The 2003 tax cut has provided a suitable natural experiment for testing how dividend levels responded to taxes (see Chetty and Saez 2005 and Brown, Liang, and Weisbenner 2007), but investment is a more challenging dependent variable than dividends, so the experiment may not provide sufficient statistical power for examining investment responses. First, unlike dividends, investment is imperfectly measured by accounting data which, for example, leaves out many types of intangible investment such as that in brands and human capital. This means that available empirical proxies (e.g. capital expenditures) are noisy estimates of the true variable of interest. Second, much investment is lumpy and takes time to build, so any response to tax changes is likely slow and more difficult to pinpoint in time.
This suggests that a longer time window may be necessary (the payout studies used quarters around the tax change). Third, however, investment is affected by business cycles and other macro-economic trends, so extending the window around a single policy change introduces more noise from other sources, and may not provide better identification. We address these challenges by using an international dividend and capital gains tax data set covering 25 countries over the 19-year period 1990-2008 (Jacob and Jacob 2011). This data set contains fifteen substantial tax reforms and 67 discrete changes in the dividend or capital gains tax rate. With so many tax changes, we have sufficient variation to study the effects of payout taxes on the investment allocation. 4 We use this tax database to test if the allocation of investment across firms with and without access to internal equity depends on payout taxes. 5 We first run non-parametric tests that contrast the investment by the two groups of firms around tax reforms. We focus on events where payout taxes changed by at least three percentage points and compare the five years preceding the tax change with the two years following it. There are fifteen events with payout tax reductions. The mean tax drop is 9.8 percentage points (median 5.5). There are fourteen tax increase events with a mean tax change of 8.4 percentage points (median 5.6). 6
4 Because dividends and share repurchases are treated very differently for tax purposes, we construct a measure of the overall tax burden on payout. We do this by weighting the tax rates on dividends and on capital gains by the observed quantity of each in a country (using amounts of dividends and repurchases from our sample firms over the sample period). We also report results using the dividend tax and using an average payout tax measure adjusted for effective capital gains taxation. We vary assumptions about the amount of taxable capital gains caused by repurchases. Variations of the measurement of taxes produce similar results. See Section 3 (Data) for details.
5 As discussed in detail in the empirical section below, we use a range of variables to classify firms into those with and without access to internal equity, including net income, operating cash flow, and even cash holdings. None of these measures is perfect, since a firm's perceived access to internal equity must depend on (unobservable) expectations about future years.
We sort firms into quintiles of the ratio of cash flow to assets in each country-year cell. We then calculate average investment over lagged assets for each quintile. There is no trend in investment for any of the quintiles during the five-year period preceding the tax events. After the tax cuts, we observe a significant convergence of the investment rates of high and low cash flow firms (top and bottom quintiles). In other words, firms with limited internal equity increase their investment relative to firms with plenty of internal equity.
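A schematic version of this sorting exercise, assuming a pandas data frame with hypothetical column names (country, year, cash_flow, assets_lag, investment), might look as follows; it is a sketch of the described procedure, not the authors' code.

    import pandas as pd

    # Schematic non-parametric test: within each country-year, sort firms into
    # cash-flow quintiles, then compare the average investment rate of the top
    # and bottom quintiles before and after a payout tax change.

    def cf_quintiles(df):
        df = df.copy()
        df["cf_ratio"] = df["cash_flow"] / df["assets_lag"]
        df["inv_rate"] = df["investment"] / df["assets_lag"]
        df["cf_q"] = (
            df.groupby(["country", "year"])["cf_ratio"]
              .transform(lambda s: pd.qcut(s, 5, labels=False, duplicates="drop"))
        )
        return df

    def investment_gap(df, country, event_year, pre=5, post=2):
        """Difference in mean investment rate between top and bottom cash-flow
        quintiles, before versus after a payout tax change in `country`."""
        d = cf_quintiles(df[df["country"] == country])
        pre_w = d[d["year"].between(event_year - pre, event_year - 1)]
        post_w = d[d["year"].between(event_year, event_year + post - 1)]
        gap = lambda w: (w.loc[w["cf_q"] == 4, "inv_rate"].mean()
                         - w.loc[w["cf_q"] == 0, "inv_rate"].mean())
        return gap(pre_w), gap(post_w)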
This is consistent with the tax wedge theory, and suggests that low taxes favor firms with limited access to internal equity. In contrast, following increases in payout taxes there is a divergence of investment of high and low cash flow firms. The estimated effects appear large in both sets of tax reforms. On average, the difference in investment between low and high cash flow firms increases from 5.33% (of assets) to 7.59% following a payout tax increase – a 42% increase. When payout taxes are cut, the difference in investment falls from 7.27% to 5.54% – a decrease of 31%. In other words, for the typical large tax change, a large quantity of investment is estimated to get displaced (when taxes go up, investment flows from firms with limited access to internal equity to those with more internal equity, and vice versa for tax reductions). These non-parametric results are consistent with the predictions of the tax wedge theory: tax increases raise the cost-of-capital wedge between firms with and without access to internal equity financing, and thereby increase the investment of internally funded firms relative to firms that have limited access to internal equity. Because the panel data set contains multiple tax change events, we can estimate not just the mean treatment effect of a tax change, but also ranges. Only two (three) of the fifteen (fourteen) tax decreases (increases) have difference-in-difference effects that are in conflict with our hypothesis. The other estimates agree with the tax wedge hypothesis, and many point estimates are large: one third of tax decrease events reduce the difference in the investment rate of high and low cash flow firms by at least 2.5 percentage points. About 40% of the tax increases are associated with a point estimate for the increased wedge between high and low cash flow firms of more than 2.5 percentage points. In other words, the effect of tax changes on the relative investment of firms varies quite a bit across events, and is sometimes large. 7 We next turn to parametric tests in the form of linear regressions. The regressions use data from all years, and can integrate both tax increases and decreases in the same specifications. 8
6 We report results for the country-average payout tax rate here, but results are similar with alternative measures, described below.
7 We can also use the individual diff-in-diff point estimates to do non-parametric tests. For example, a sign test of the frequencies with which estimates are positive and negative suggests that we can reject that an increase and a decrease of the investment rate difference are equally likely after a tax increase (decrease) at the 5% (1%) level of statistical significance.
8 The weights placed on different observations also differ between linear regression tests and non-parametric tests. Because of the many differences, it is useful to verify that both methods deliver similar results.
For our baseline tests, we regress investment on firm controls, fixed effects for firms and for country-year cells, and the interaction of the payout tax rate with cash flow (a schematic version of this specification is sketched below). Thanks to the panel structure of the data set, we can allow the coefficient on cash flow to vary across countries and years, in essence replicating the identification strategy of the many studies exploiting the 2003 tax cut in the US, but for the whole panel of 25 countries times 19 years. The estimated coefficient for the tax-cash flow interaction variable is consistently positive and significant.
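The baseline specification just described can be sketched roughly as below, using statsmodels and hypothetical column names; the actual estimation includes additional firm-level controls and differs in implementation details.

    import statsmodels.formula.api as smf

    # Schematic baseline regression: investment rate on cash flow, the payout
    # tax interacted with cash flow, firm fixed effects, and country-year fixed
    # effects. The payout tax main effect is absorbed by the country-year fixed
    # effects, so only cash flow and the interaction enter the formula.
    # Column names (inv_rate, cash_flow, payout_tax, firm_id, country_year,
    # country) are hypothetical.

    def baseline_regression(panel):
        model = smf.ols(
            "inv_rate ~ cash_flow + cash_flow:payout_tax"
            " + C(firm_id) + C(country_year)",
            data=panel,
        )
        res = model.fit(cov_type="cluster", cov_kwds={"groups": panel["country"]})
        # A positive interaction coefficient means investment is more tied to
        # internal cash flow when payout taxes are high (the lock-in prediction).
        return res.params["cash_flow:payout_tax"], res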
In other words, the higher payout taxes are, the stronger is the tendency for investment to occur where cash flows are high. As predicted by the tax wedge theory, payout taxes "lock in" investment in firms generating earnings and cash flow. The estimated magnitudes are large. For example, going from the 25th percentile of payout tax (15.0%) to the 75th percentile (32.2%) implies that the effective coefficient on cash flow increases by 0.029, an increase of 33% over the conditional estimate at the 25th percentile. Like the non-parametric results, this implies that payout taxes have an important effect on the allocation of capital across firms. We report extensive robustness tests for our results. For most tests, we report regression results with three alternative tax rates, with similar results. The results also hold for alternative measures of the ability to finance out of internal resources (e.g. net income instead of cash flow), as well as when controlling for the corporate income tax rate and its interaction with cash flow. We also collect economic policy controls from the World Development Indicators (World Bank 2010). This is to address endogeneity concerns, i.e. to ensure that tax changes are not just fragments of wider structural changes in an economy that change firms' investment behavior around tax reforms. This test shows that payout tax changes appear to have their own distinct and economically significant effect on the allocation of investment (assuming we have identified the relevant set of policy variables). We next examine in greater detail the predictions of the old and new view. A key distinguishing feature of models belonging to the old and new view is whether the marginal source of investment funds is assumed to be internal cash flow or external equity. We hypothesize that both these assumptions may be valid for a subset of firms at any given time. Some firms behave as predicted by the old view, and reduce investment when payout taxes increase. Others behave more like the new view predicts, and respond less. This has two implications. First, this difference in responsiveness to taxes generates the within-country, within-year, cross-firm prediction our paper focuses on. By comparing different firms in the same country and at the same time, we get rid of concerns about omitted aggregate time-series variables. This prediction is what we examine with all our main tests (regressions and non-parametric tests). A second implication is that it becomes interesting to try to identify the relevant groups of firms in the data, and to test their responses. We go about this by differentiating between firms based on three alternative measures (a sketch of these classifications follows below). First, we define firms as old view firms if predicted equity sales are above 2% of lagged assets. Second, we look at historical equity issuance by firms. We exploit the fact that such issuance is persistent, so that classifying firms by recent equity issuance likely indicates their ability to issue in the future. 9 Firms with recent equity issuance activity, which are more likely to consider external equity their marginal source of investment funds, correspond most closely to the assumptions of the old view. Third, we classify firms as new view firms if the Kaplan and Zingales (1997) index of financial constraints is above 0.7, and as old view firms otherwise. For all three classifications, there is a sizable difference in the effect of taxation on the marginal source of funds for investment between old view firms and new view firms.
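A compact sketch of the three classification rules just described is given below; the thresholds follow the text, while the data frame column names are hypothetical.

    import pandas as pd

    # Sketch of the three alternative old view / new view classifications; the
    # thresholds (2% of lagged assets, KZ index of 0.7) follow the text, while
    # the column names are hypothetical placeholders.

    def classify_old_view(df):
        out = pd.DataFrame(index=df.index)
        # 1) predicted equity sales above 2% of lagged assets
        out["old_view_predicted"] = df["pred_equity_issue"] > 0.02 * df["assets_lag"]
        # 2) recent equity issuance (issuance is persistent, so recent issuers
        #    are likely to treat external equity as the marginal source of funds)
        out["old_view_recent_issue"] = df["equity_issued_last_year"] > 0
        # 3) Kaplan-Zingales financial constraints index: constrained firms
        #    (KZ index above 0.7) are treated as new view firms, the rest as old view
        out["old_view_kz"] = df["kz_index"] <= 0.7
        return out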
For old view firms, the cash flow coefficient is always sensitive to tax rates, as predicted. For new view firms, the coefficient estimate is positive but smaller and insignificant in all specifications. This suggests that both the old and the new view have predictive power, and exactly for the set of firms which match the critical assumptions of the two views. This confirms the mechanism behind the differential responses of investment to tax rates that we have documented earlier: high tax rates drive a wedge between the cost of internal and external equity. We also examine the effect of governance. Chetty and Saez (2010) predict that a dividend tax cut will not affect poorly governed firms in the same way it will well governed firms. In poorly governed firms with much cash, investment is inflated by CEOs who derive private benefits from investment (or from firm size). A tax cut reduces the incentive for cash-rich firms to (inefficiently) over-invest in pet projects because it becomes more attractive for the CEO to get dividends from his shareholdings. It is important to note that the same result does not apply to well governed firms in the model: a tax cut raises equity issues and productive (as well as unproductive) investment by such firms. If Chetty and Saez's mechanism is important, the pattern we have established in the data between taxes, cash flow and investment will in fact be driven by the set of well governed firms. 10 To proxy for governance across multiple countries, where laws, practices, and financial development vary substantially, we use the ownership stake of insiders (i.e., corporate directors and officers). This is based on the notion that managers and directors with large stakes have both the power and the incentive to make sure the firm is maximizing value (Shleifer and Vishny 1986, Jensen and Murphy 1990). The insider ownership variable is available for many of our sample firms, and measured fairly consistently across countries. When sorting by insider ownership, we find that firms with very low insider ownership show a less significant response to taxes, whereas firms with strong insider ownership have larger and more significant responses to taxes. This is consistent with the Chetty and Saez predictions. Since individual owners (such as insiders) are more likely to be taxable than owners in general (which include tax exempt institutions), this result also highlights that where the marginal shareholder is more likely to be a taxable investor, the tax effects are stronger.
9 In our data, firms that issued any equity in the previous year are 3.9 times as likely to issue again next year. Firms issuing more than 5% of assets over the last year are 7.7 times as likely to do so again this year. These numbers probably reflect capital needs as well as access to the market. There are several possible reasons for this. Issuing costs are high for equity (see Asquith and Mullins, 1986, and Chen and Ritter 2000). However, some firms find it less costly to issue equity, for example because they have a favorable stock valuation (see Baker, Stein, and Wurgler, 2003).
10 The tests of the US tax cut in 2003 have found that governance variables have strong predictive power for firms' responses to the tax cuts. See, e.g. Chetty and Saez (2005) and Brown, Liang, and Weisbenner (2007).
Finally, we examine how quantities of equity raised respond to taxes.
If our identifying assumptions are valid, and if we have identified real variation in the effective taxation as perceived by firms, we would expect to see a drop in equity issuance when taxes go up. We find exactly this: when taxes are high, equity issuance tends to be low. This supports the interpretation that the tax variation we pick up is meaningful. Our results have three main implications. First, it appears that payout taxes influence the allocation of capital across firms. High taxes lock in capital in those firms that generate internal cash flows, ahead of those firms that need to raise outside equity. If firms have different investment opportunities, this means that tax rate changes alter the type of investments being made. For example, high payout taxes may favor established industries. 11 Second, the effect of payout taxes is related to both access to the equity market and governance. Firms which can access the equity market, "old view" firms, are the most affected by tax changes. Firms whose only source of equity finance is internal are less affected by taxes, as predicted by the "new view". A final source of heterogeneity is governance. Firms where decision makers have low financial stakes are less affected by tax changes, reflecting their propensity to make investment decisions for reasons unrelated to the cost of capital. 12 Third, the relation between cash flow and investment (see e.g. Fazzari, Hubbard, and Petersen 1988, Kaplan and Zingales 1997) appears to partially reflect the difference in the after-tax cost of capital between firms with and without access to inside equity.
11 We consider the allocation across firms an important topic in itself, but there may also be some suggestive implications for aggregate investment. While we do not estimate the impact of taxes on the level of corporate investment directly, our main result is inconsistent with a standard new view model of payout taxes. Hence, our results generally point to the relevance of payout taxes for investment.
12 Although, to be precise, our findings do not necessarily support an empire building agency problem. See e.g. Malmendier and Tate (2005) for other possibilities.
2. Taxes on corporate payout across countries
2.1 Tax systems
The prerequisite for a useful study of the relationship between payout tax policies and the allocation of investment across countries is a sufficient degree of identifying variation in dividend and capital gains tax regimes and tax rates both across countries and within countries across time. Tables 1 and 2, and Figures 1, 2, and 3 illustrate that this is the case for the 25 countries scrutinized in this study. We count five major tax systems in our data set: classical corporate tax systems, shareholder relief systems, dividend tax exemption systems, and full and partial imputation systems. Classical corporate taxation systems (for example, currently used in Ireland, and previously in the Netherlands or Spain) are characterized by double taxation of corporate profits; that is, income, before it is distributed as dividends, is taxed at the corporate level, and later taxed again as dividend income at the individual shareholder level. This contrasts with shareholder relief systems (for example, currently used in the US, Japan, and Spain) which aim to reduce the full economic burden of double taxation that applies under a pure classical system.
For example, at the individual shareholder level, reduced tax rates on dividends received or exclusion of a proportion of dividend income from taxation are common forms of shareholder tax relief. Under an imputation system (for example, used currently in Australia and Mexico, and previously in France), taxes paid by a corporation are considered as paid on behalf of its shareholders. As a result, shareholders are entitled to a credit (the "imputation credit") for taxes already paid at the corporate level. That is, shareholders are liable only for the difference between their marginal income tax rate and the imputation rate. Full and partial imputation systems are distinguished by the nature of the imputation credit, which may be the full corporate tax or only a fraction thereof. In dividend tax exemption systems (currently only Greece in our sample) dividend income is generally not taxed. 13 Table 1 shows that there have been many changes in payout tax systems over the last two decades. While in the first half of our sample period the classical corporate tax system dominates, from 2005 the shareholder relief system is the most widespread tax system. While there are only five shareholder relief systems in place in 1990, shareholder relief systems can be found in almost 70% of the countries (17) in our sample at the end of the sample period. The reduction in the prevalence of full and partial imputation systems from 11 in 1990 to only 6 in 2008 is largely due to the harmonization of European tax laws that necessitated an abolition of differences in the availability of imputation credits for domestic and foreign investors across EU member states.
13 See La Porta, Lopez-de-Silanes, Shleifer, and Vishny (2000) for additional information on characteristics of the various tax systems.
2.2 Tax rates
The significant trend from imputation systems and classical corporate tax systems to shareholder relief systems naturally coincides with the development of the absolute taxation of dividend income and capital gains. Yet, as Tables 1 and 2 illustrate, tax reforms are not necessarily accompanied by changes in the effective taxation of dividends and capital gains. Rather, much of the dynamics in dividend and capital gains taxation relates to pure rate changes. Changes occur frequently absent any tax system reforms. In this study, we are interested in the effective tax burden on dividend income and capital gains faced by individual investors. One concern with our analysis is that the tax rates we measure do not have a sufficiently close correspondence with the actual share ownership of our sample firms. Rydqvist, Spizman and Strebulaev (2010) point to the reduced role of taxable investors in recent decades. They suggest that the influence of private investors' taxes has likely been falling through time. In the extreme, if the marginal investor for every firm is a (tax neutral) institution, individual shareholder taxation should not matter. If this is true for our sample firms, we would find no effect. To the extent that we identify an effect of payout taxes, we can conclude taxable investors have some impact on firm prices (at least for a subset of firms). 14 Similarly, the increasing role of cross-country stock holdings might affect our ability to isolate the true tax rates faced on payout by equity owners through the tax rules for domestic investors. Our data do not allow us to identify the fraction of foreign ownership in a company.
However, since there is strong evidence of a substantial home bias in national investment portfolios (see, for example, French and Poterba 1991, Mondria and Wu 2010), we believe domestic tax rules are likely the most important source of time-series variation in tax rates. The tax rates applicable to domestic investors are the most plausible approximation for the typical investor's tax burden, especially for smaller firms, where international ownership is likely lower. The first, immediate, observation from Table 2 is that the level of taxation on dividends and share repurchases varies considerably across countries and time. As we report in Panel A of Table 2, the highest average tax rates on dividend income over the sample period can be observed in the Netherlands, Denmark, Switzerland, France, and Ireland. Peak values range from 66.2% in Sweden (1990), to 60.9% in Denmark (1990), to 60.0% in the Netherlands (1990-2000), to 47.3% in Korea (1990-1993), to 46% in Spain (1990/1991, 1993/1994). Over the same period investors faced the lowest average tax burden in Greece – a dividend tax exemption country and the only mandatory dividend country in our sample – and in Mexico, Finland, New Zealand, and Norway. The within-country standard deviation ranges from 10.8% to 20.5%, and the within-country differences between maximum and minimum tax rates from 25% to 38%, for Norway, Sweden, the Netherlands, Japan, the US and Finland, which provide the most variation in dividend tax rates over the sample period (Table 2, Panel A, and Figure 1). In contrast, we observe the most stable tax treatment of dividends in Greece, Mexico, Austria, Poland, and Portugal, where the personal income tax rate fluctuates within a narrow band of at most 5 percentage points difference between peak and lowest taxation over the sample period. On average, the difference between the maximum and minimum dividend tax rate in our sample countries in 1990-2008 is 19.9%, thus underpinning the substantial time-variant differences in dividend tax rates.
14 The Rydqvist et al. prediction seems to be borne out in US dividend policy: Chetty and Saez (2005) and Perez-Gonzalez (2003) show that firms with a large share of institutional (tax exempt) ownership exhibit smaller changes in policy after the 2003 tax cut. For our sample, which contains many non-US firms, tax exempt investors may be a smaller factor. Unfortunately, we lack the requisite ownership data to test whether there is a similar pattern in our sample.
Capital gains taxation across countries is special in many respects and often strongly intertwined with the legal treatment of share repurchases. For example, in some European countries share repurchases were either difficult to implement (for example, France) or illegal (for example, Germany and Sweden) until the turn of the 3rd millennium (Rau and Vermaelen 2002, DeRidder 2009). Moreover, in some countries with high taxes on dividends and low capital gains taxes (such as in Belgium, in the Netherlands before 2001, and in Switzerland since 1998), specific tax provisions existed to discourage share repurchases. In Japan, restrictions on corporate share repurchases prevented corporations from buying back their own shares until enactment of a special law in 1995. Since the mid-1990s, the Japanese government has gradually relaxed and removed restrictions on share repurchases, originally as a part of emergency economic measures to revitalize the economy and its tumbling stock market (Hashimoto 1998).
In Panel B of Table 2 we report capital gains tax rates across our sample countries that take these effects into consideration. The tax rates are applicable to investors with non-substantial shareholdings and holding periods that qualify as long-term investments in accordance with country-specific tax legislation. We show that over the sample period, on average, the most unfavorable tax environment for capital gains prevailed in Denmark, the UK, Australia, the Netherlands, and Canada, while in eight countries capital gains are generally tax exempt. We observe peak capital gains tax rates in the Netherlands (1990-2000), Australia (1990-1999), Poland (1994-1996), and Switzerland (1998-2007). The range of capital gains tax rates is substantial – from 0.0% to 60.0%. With standard deviation greater than 14.5% and differences between maximum and minimum tax rate of 31% to 60%, the Netherlands, Switzerland, Belgium, and Poland exhibit the largest within-country variation in capital gains tax rates across countries (Table 2, Panel B, and Figure 2). In contrast, capital gains taxation is constant in 1990-2008 in Austria, Germany, Greece, Korea, Mexico, New Zealand, and Portugal. On average, the within-country difference between maximum and minimum capital gains tax rate in our sample countries in 1990-2008 is 18.7%, thus providing further ample identifying variation in corporate payout taxation. 3. Data sample 3.1 Firm data We source our firm-level data from the July 2009 edition of the WorldScope database and restrict our analysis to those countries for which conclusive tax data for the full sample period could be obtained. To ensure a meaningful basis for the calculation of our country-level statistics we also exclude from our sample firms from countries for which we have less than 10 observations after the below sample 10 adjustments. The start year of our analysis is 1990. 15 Since accounting data are often reported and collected with a delay, we use data through 2008. We collect data on active as well as dead and suspended listings that fulfill our data requirements to avoid survivorship bias. Table 3 Panel A summarizes the composition of our sample. Financial and utility firms have motives to pay out cash that are different from non-financial firms (see e.g., Dittmar 2000 and Fama and French 2001). We therefore restrict our sample to non-financial and also non-utility firms, defined as firms with SIC codes outside the intervals of 4,900-4,949 and 6,000-6,999. We also exclude firms without an SIC code. We further restrict our sample to firms with non-missing values for dividends to common and preferred shareholders, net income, sales, and total assets for at least 4 consecutive years in the 1988- 2008 period. From the original set of firms, we finally eliminate the following firms: firms with erroneous or missing stock price, dividends, or share repurchase information, firms whose dividends exceed sales, firms with an average weekly capital gain of over 1,000% in one year and finally, firms with closely held shares exceeding 100% or falling short of 0%. To prevent extreme values and outliers from distorting our results we further eliminate, when appropriate, observations of our dependent and independent variables that are not within the 1st and the 99th percentile of observations, and we also drop firm observations with total assets less than USD 10 million (see Baker, Stein, and Wurgler 2003). This returns our basic sample of 7,661 companies (81,222 firm-year observations) from 25 countries. 
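To make the sample construction concrete, the following is a minimal sketch of the filters described above, assuming a firm-year WorldScope extract already loaded into a pandas DataFrame. All column names (sic, firm_id, year, dividends, net_income, sales, total_assets, investment) are illustrative placeholders rather than actual WorldScope item codes, and total assets are assumed to be expressed in millions of US dollars.

```python
import pandas as pd

REQUIRED = ["dividends", "net_income", "sales", "total_assets"]  # placeholder names

def longest_consecutive_run(years) -> int:
    """Length of the longest run of consecutive fiscal years."""
    y = sorted(set(years))
    best = run = 1
    for prev, cur in zip(y, y[1:]):
        run = run + 1 if cur == prev + 1 else 1
        best = max(best, run)
    return best

def build_sample(df: pd.DataFrame) -> pd.DataFrame:
    # Drop utilities (SIC 4900-4949), financials (SIC 6000-6999), and missing SIC codes.
    df = df.dropna(subset=["sic"])
    df = df[~df["sic"].between(4900, 4949) & ~df["sic"].between(6000, 6999)]

    # Keep firms with the required items non-missing for at least four
    # consecutive years within 1988-2008.
    complete = df.dropna(subset=REQUIRED)
    complete = complete[complete["year"].between(1988, 2008)]
    runs = complete.groupby("firm_id")["year"].apply(longest_consecutive_run)
    df = df[df["firm_id"].isin(runs[runs >= 4].index)]

    # Screen out implausible records and very small firms
    # (total assets assumed to be in millions of USD, so 10.0 = USD 10 million).
    df = df[(df["dividends"] <= df["sales"]) & (df["total_assets"] >= 10.0)]

    # Trim a dependent/independent variable at the 1st and 99th percentiles
    # (illustrated here for an 'investment' column).
    lo, hi = df["investment"].quantile([0.01, 0.99])
    return df[df["investment"].between(lo, hi)]
```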
We obtain annual personal income tax and capital gains tax data for the 25 countries in our sample from Jacob and Jacob (2010). This comprehensive tax data set allows a heretofore unavailable, thorough analysis of payout taxes and the allocation of investment within a multi-country, multi-year framework. We also cross-check our tax classifications and rates against those reported in Rydqvist, Spizman, and Strebulaev (2010), who examine the effect of equity taxation on the evolution of stock ownership patterns in many countries. As in this paper, Rydqvist et al. use the top statutory tax rate on dividends and the tax rate on capital gains that qualify as long-term to conduct their analysis.

3.2 Investment variables

Table 3 Panel B presents summary statistics for our investment variables. Our proxies for firm investment are threefold. First, we create the variable Investment, defined as additions to fixed assets other than those associated with acquisitions 16 (capital expenditure) normalized by total assets. Second, we include PPE Growth, the growth in plant, property, and equipment from t-1 to t divided by end-of-year t-1 assets. Our final measure of investment intensity is Asset Growth, the growth in total assets normalized by the firm's total assets. The numerator in our investment variables is measured one year after our total assets variable, the denominator. Before computing investment, we translate capital expenditures, PPE, and total assets in US dollars into real terms (base year 2000) by using the US GNP deflator (World Development Indicators, World Bank 2010). In our sample, firms on average have capital expenditures amounting to 5.9% of the value of their prior year total assets. The average growth rate in plant, property, and equipment is 8.1%, and the average growth rate in total assets is 7.9%. The range of values of investment is considerable – from 0.8% (10th percentile) to 12.7% (90th percentile) (Investment), -13.8% to 29.0% (PPE Growth), or -17.0% to 30.8% (Asset Growth).

15 We start our analysis in 1990 for two reasons. First, WorldScope provides less than comprehensive coverage of individual data items for non-U.S. firms before 1990. An earlier start may thus have biased our results for earlier sub-periods away from international evidence towards evidence from North America. Second, 1990 is a historically logical year to begin. With the transformation into capitalist, democratic systems in 1990, many former communist countries had only begun to incorporate dividend and capital gains taxation into their tax laws. 16 Investment includes additions to property, plant and equipment, and investments in machinery and equipment.

3.3 Tax variables

Summary statistics for tax variables and controls are presented in Panel C of Table 3. All tax rates that we employ apply to investors with non-substantial shareholdings and holding periods that qualify as long-term investments in accordance with country-specific tax legislation. We construct three tax variables. Dividend Tax is the personal income tax rate on dividends in a country and year (in %). 17 Its range of values is wide, from 0% to 66.2%, with a mean dividend tax burden of 27.8% and a standard deviation of 12.6%, reflecting the considerable variation of payout taxes across countries and over time. Effective Tax C is the country-specific weighted effective corporate payout tax rate (in %).
It is calculated by weighting the effective tax rate on dividends and share repurchases by the importance of dividends and share repurchases as payout channels in a country over the 1990-2008 period. With this measure, we follow prior analyses of effective capital gains taxation and assume the effective tax rate on capital gains from share repurchases to be one-fourth of the statutory tax rate (see La Porta et al. 2000 and Poterba 1987). This way, we control for the fact that capital gains are taxed only at realization and that the effective capital gains tax rate may thus be significantly lower than the statutory rate. 18 The importance weight of dividends in a country is calculated by averaging the dividend-to-assets ratio across firms and years, and then dividing by the average total payout ratio (sum of dividends and share repurchases normalized by total assets) across firms and years. The share repurchase weight is calculated analogously. 19 Average Tax C, the country-weighted average tax, is an alternative measure of the average corporate payout tax rate (in %). It is obtained by weighting each year's dividend and statutory capital gains tax rate by the relative importance of dividends and share repurchases as payout channels in a country over the sample period. 20 In principle, there are reasons to prefer either of the measures. The dividend tax rate disregards the tax burden of repurchases, but requires no assumptions about the capital gains taxes incurred when firms retain earnings (i.e., retaining earnings makes the share price higher, thereby increasing current capital gains taxes for sellers of shares and reducing future capital gains taxes for buyers). The country-average tax rate may be unrepresentative if the mix of payout varies a lot, but raises fewer endogeneity concerns. We have therefore also rerun all our regressions with a weighted average of tax rates in which we allow the weights to vary not only by country but also by year (i.e., there is one set of weights for each country-year, applied to tax rates that may also vary by country-year). In practice, country-average tax rates and country-year average tax rates are very similar, and the regression results are very close, so we do not report results for the latter.

17 Imputation credits and country-specific tax exemptions available to investors have been taken into account when calculating this "effective" rate. For example, as per the definition of imputation systems above, if the tax rate on dividend income is 50% and the available imputation credit is 20%, then the "effective" rate we employ is 30%. If, as for example in Germany from 2001-2008, 50% of dividend income is tax exempt, then the effective rate is half the statutory tax rate. 18 The assumption that the true tax rate is a quarter of the stated rate is not important to our conclusions. We get very similar magnitudes using other assumptions (including anything in the [0,1] range).

The mean values of our Effective Tax C and Average Tax C variables are 18.3% and 24.5%, with standard deviations of 9.1% and 10.3%. Figure 3 illustrates the inverse cumulative distribution function (CDF) of tax rates across observations in our sample. As is evident, the variation in tax rates is considerable by any of our three tax measures, reflecting the substantial tax experimentation taking place during our sample period. Because of the uneven number of firms across countries, long-lived tax systems in large countries (the US and Japan) account for a large share of the observations.
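As an illustration of these definitions, the following sketch computes the effective dividend tax rate under imputation credits or partial exemptions and the weighted payout tax measures. The function and parameter names are ours, not part of the data set, and the numerical checks simply reuse the examples given in footnote 17.

```python
def effective_dividend_tax(statutory, imputation_credit=0.0, exempt_fraction=0.0):
    """Effective personal tax rate on dividends after an imputation credit
    and/or a partial exemption (all rates expressed as decimals)."""
    return max(statutory * (1.0 - exempt_fraction) - imputation_credit, 0.0)

def weighted_payout_tax(div_tax, cg_tax, div_weight, cg_discount=0.25):
    """Country-weighted payout tax: with cg_discount=0.25 this mirrors
    Effective Tax C (effective capital gains rate assumed to be one-fourth of
    the statutory rate); with cg_discount=1.0 it mirrors Average Tax C.
    div_weight is the dividend share of total payout in the country."""
    return div_weight * div_tax + (1.0 - div_weight) * cg_discount * cg_tax

# The examples from footnote 17: a 50% statutory dividend tax with a 20%
# imputation credit gives an effective rate of 30%; with half of dividend
# income exempt (Germany 2001-2008), the effective rate is 25%.
assert round(effective_dividend_tax(0.50, imputation_credit=0.20), 4) == 0.30
assert round(effective_dividend_tax(0.50, exempt_fraction=0.50), 4) == 0.25
```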
3.4 Other variables Our firm-level variables measure internal funds, capital structure, Tobin’s Q, and growth. The availability of internal funds for investment is measured with three alternative variables: a) Cash Flow is the funds from operations of the company measured as the ratio of cash flow relative to total assets, b) Cash is defined cash holdings over total assets, and c) EBITDA measures earnings before interest, tax, and depreciation as a fraction of total assets. Unlike cash flow, EBITDA does not include tax payments, or increases in working capital. 19 Throughout we use cash dividends only, to avoid that differences in the tax treatment of cash and stock dividends infect our results. Our share repurchase variable is measured by the actual funds used to retire or redeem common or preferred stock and comes from the cash flow statement. 20 Weighing the capital gains tax by the prevalence of repurchases has the important advantage of automatically dealing with limitations on repurchases. If a country has high taxes on dividends and low taxes on repurchases, but severely restricts repurchases through laws and regulations, it is not fair to say that payout faces low taxes. Because we weight by actual quantities, we will put a small weight on the low payout tax rate. 13 We measure capital structure through leverage, defined as total book debt over total book assets. We include Tobin’s Q, the ratio between the market value and replacement value of the physical assets of a firm (Q). This variable can measure future profitability, that is, the quality of investment opportunities, as well as measurement error arising from accounting discrepancies between book capital and economic replacement costs. We include the natural logarithm of growth in sales from year t-2 to t (Sales Growth) and the relative size of a firm (Size) to control for the fact that smaller, high growth firms have greater profitable investment opportunities than bigger and more mature companies. We measure the relative size of a firm as the percentage of sample firms smaller than the firm for each country in each year. The numerator in our firm-level controls is measured one year after our total assets variable, the denominator. All values for these control variables in US dollars are converted into real terms (base year 2000) by using the US GNP deflator. 4. Tests and results 4.1 Internal resources and investment under different taxes: non-parametric results The simplest way of testing how payout taxes impact investment of firms with and without access to internal equity is to track firm investment around tax reforms. We do this in our panel sample by sorting firms in each country-year into quintiles based on the ratio of cash flow to assets. This is meant to capture firms’ ability to finance investment internally. 21 We then calculate average investment over assets for each group in each country-year cell. We demean these ratios by country-year, to account for crosscountry and time variation in average investment levels. Next we identify tax changes, using the countryweighted average payout tax rate (Average Tax C, results are similar with the two alternative measures). We focus on events where payout taxes changed by at least three percentage points. We exclude any events with fewer than thirty observations (firms) in the first year of the tax change. 
To avoid overlapping periods, and following Korinek and Stiglitz (2009), we further exclude events where a substantial tax cut (increase) is followed by a tax increase (cut) within two years of the original reform (Sweden 1994/1995, Australia 2000/2001, Norway 2001/2002, and Korea 1999/2001). As Korinek and Stiglitz show, where firms perceive tax changes as only temporary, tax changes may generate smaller effects. Since tax reform is often debated extensively, it seems possible that these tax reversals can be predicted by some firms and investors. We further exclude an event where the effects of the payout tax change overlap with a substantial corporate tax reform (Korea 1994). The remaining 29 events include fifteen events with an average tax drop of 9.8 percentage points (median 5.5) and fourteen events with an average tax increase of 8.4 percentage points (median 5.6). 21 Sorting on related variables such as Net Income/Assets gives very similar results.

For every event, we track the average ratio of investment to lagged assets for firms in each quintile in the three years leading up to the tax change, the first year when the new rules apply, and the two years following the tax change. Average differences in investment between high and low cash flow firms around the tax events are shown in Figure 4. This graph shows the difference between the average investment of the high and low cash flow quintiles. The point estimate is positive in all years, i.e., firms with high internal cash flows tend to invest more. There is no apparent trend in the investment rate difference prior to a tax reform. After a tax reform, however, the investment difference follows the direction of the tax change (i.e., the difference increases when taxes are raised and falls when taxes are reduced). In Table 4, we provide a detailed analysis of the relative investment of high and low cash flow firms. The table shows average investment (demeaned by country-year) for both pre- and post-reform periods, and for the two groups of firms. The difference and difference-in-difference estimates are shown as well. The time period analyzed around tax events is from four years before to two years after the reform. The effects are in line with the hypothesis that higher taxes should be associated with relatively higher investment in those firms that have access to internal cash (Column 3, Panels A and B). After payout tax increases (decreases), the importance of the availability of internal resources for high investment increases (decreases) significantly. On average, the difference in investment between low and high cash flow firms increases from 5.33% to 7.59% following a payout tax increase. When payout taxes are cut, the difference in investment falls from 7.27% to 5.54%. These results are consistent with the prediction that corporate payout taxes drive a wedge between the cost of inside and outside equity and that such high taxes favor investment by firms with internal resources. The tax-based theory of the cost of capital wedge suggests that firms with inside funding should not respond to tax incentives (they are "new view" firms). Nevertheless, there is movement in the high cash flow group of firms in Table 4 (after a tax increase, they increase investment relative to the median firm), contrary to this prediction. There are four possible explanations for the investment changes observed for high cash flow firms.
First, countercyclical fiscal policy could generate patterns in aggregate investment consistent with Table 4. In principle, forces of political economy could produce endogeneity in either direction: tax increases may be more likely in contractions, when the government budget is in deficit, or in expansions, when there is less political pressure to stimulate the economy with fiscal expansion. Investment tends to fall after tax reductions and rise after tax increases, which might be due to countercyclical tax policy (i.e., taxes are raised at times when investment is temporarily low and can be expected to increase). This type of endogeneity is a key motivator for our approach of using difference-in-difference tests with demeaned investment. By looking at relative cross-firm differences in investment within a country and year, we difference out aggregate level effects. 22 A second possibility is that agency problems are a driver of investment in our sample firms in a way consistent with Chetty and Saez (2010): when tax rates go up, pressure to pay out cash is reduced, permitting managers to undertake excessive investment. Unlike the new view, this theory predicts that cash rich firms will respond to tax changes, and that aggregate investment may respond perversely to payout taxes. Third, cash rich firms may experience increased investment opportunities when cash poor firms withdraw. Finally, the aggregate patterns may be related to the permanence of tax changes. Korinek and Stiglitz (2009) predict that a tax cut which is expected (by firms) to be temporary can lead to inter-temporal tax arbitrage: firms want to take advantage of the temporarily low tax by paying out more cash, and do so in part by reducing investment. This tax arbitrage is done by mature (i.e., cash rich) firms, which generate the bulk of payout. Thus, there are four reasons why the investment of cash rich firms is correlated with tax changes in the direction evident in Table 4. Importantly, under all four scenarios, our inferences based on the relative investment of high and low cash flow firms remain valid, i.e., the difference-in-difference result tells us that low payout taxes favor cash poor firms in a relative sense. Interpreting aggregate correlations is much more complicated, and we do not attempt to tell the possible explanations of the aggregate pattern apart. We believe the lessons learned from the cross-sectional differences are less ambiguous and of great potential importance for understanding corporate investment and for setting public policy. The difference-in-difference estimates vary considerably across events. Figure 5 plots the empirical densities of difference-in-difference estimates for tax decrease and increase events. Two (three) of the fifteen (fourteen) tax decreases (increases) have difference-in-difference effects that are in conflict with our hypothesis. In contrast, one third of the tax decreases reduce the difference in the ratio of investment to assets between high and low cash flow firms by more than 2.5 percentage points – more than one third of the pre-tax change differences. Forty percent of the tax increases widen the wedge in investment between high and low cash flow firms by more than 2.5 percentage points, i.e., more than 50% of the pre-tax change differences.
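A minimal sketch of the non-parametric procedure described in this section – demeaning investment by country-year, sorting firms into cash flow quintiles, and computing the difference-in-difference around a reform – might look as follows. Column names ('country', 'year', 'investment', 'cash_flow') and the event windows are placeholders standing in for our actual data items.

```python
import pandas as pd

def high_low_investment_gap(df: pd.DataFrame) -> pd.Series:
    """For each country-year, the difference in mean (country-year demeaned)
    investment between the top and bottom cash flow quintiles."""
    df = df.copy()
    g = df.groupby(["country", "year"])
    df["inv_dm"] = df["investment"] - g["investment"].transform("mean")
    df["cf_q"] = g["cash_flow"].transform(
        lambda x: pd.qcut(x, 5, labels=False, duplicates="drop"))
    by_q = (df.groupby(["country", "year", "cf_q"])["inv_dm"]
              .mean().unstack("cf_q"))
    return by_q.iloc[:, -1] - by_q.iloc[:, 0]   # top quintile minus bottom quintile

def did_estimate(gap: pd.Series, country: str, event_year: int,
                 pre: int = 3, post: int = 2) -> float:
    """Difference-in-difference around a reform: the mean gap from the event
    year through +post minus the mean gap over the pre-reform years."""
    g = gap.loc[country].sort_index()
    return (g.loc[event_year:event_year + post].mean()
            - g.loc[event_year - pre:event_year - 1].mean())
```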
4.2 Internal resources and investment under different taxes: OLS results 22 We expect that endogeneity between payout tax changes and the dispersion of investment (as opposed to the level) is much less likely to be important. The correlation table in Appendix A.IX supports this expectation. It also highlights that tax changes are at best weakly related to other macroeconomic determinants that affect the level of investment in an economy. Tax changes are only weakly correlated with current and prior year GDP growth and not significantly related to other macroeconomic variables with the potential to influence investment: inflation, and cost for setting up businesses (see e.g., Djankov et al. 2010), and government spending measured by subsidies, military expenditures and R&D expenditures. We also implement several robustness tests to control for government policy in various ways (see Section 5). 16 Compared to the non-parametric tests, the regressions have several advantages. They use more of the variation in the data, and can easily integrate both tax increases and decreases in the same specifications. They also allow for more detailed controls of firm heterogeneity. However, it is harder to study the detailed time patterns in the regression tests. By construction, regressions put more weight on those events that happen in countries with many firms (i.e., Japan and the US), 23 although in principle that can be changed by using GLS (we do not do this, although we always cluster errors by country-year, so that we properly take into account the amount of statistical power we have). 24 The regressions exploit all of the variation in tax rates that is visible in Figure 3. For our baseline tests, we regress investment on firm controls, fixed effects for firms and for country-year cells, and the interaction of the payout tax rate with cash flow (we do not include the level of the tax, since this is absorbed by the country-year fixed effects). 25 We control for relative size, Tobin’s q, cash flow, and leverage. We include firm and country-year fixed effects in all our regressions. These help control for business cycles and other macro-economic factors. The main variable of interest is the interaction of internal resources (cash flow) and taxes. If taxes raise the relative cost of external equity, we expect high taxes to coincide with a stronger effect of cash flow on investment (since high cash flow means a firm can finance more investment with cheap internal equity). We therefore predict that the interaction coefficient should be positive. Regression results are reported in Table 5, for each of the three tax variables. The estimated coefficient for the tax-equity interaction variable is consistently positive and significant. In other words, the higher payout taxes are, the stronger is the tendency for investment to occur where retained earnings are high. As predicted by the tax wedge theory, payout taxes “lock in” investment in firms generating earnings and cash flow. The estimated magnitudes are large. For example, going from the 25 th percentile of the country-weighted average tax rate (15.0%) to the 75 th percentile (32.2%) implies that the effective coefficient on cash flow increases by 0.029, an increase by 32.8% over 23 We get similar results when excluding Japanese and U.S. firms (Table A.I of the Appendix). 24 We also test the robustness of our results to regression specifications in which we cluster standard errors at the country level and at the country-industry level. 
Standard errors for the cash flow*tax interactions obtained from these additional specifications are very similar to those in our baseline tests. They are reported in Table A.II of the Appendix. 25 For brevity, in what follows we only discuss the results obtained by using our Investment dependent variable. The results using our alternative measures of investment, PPE Growth and Asset Growth, align very closely with the results reported in this section. The results are displayed in Table A.III of the Appendix. Of the six coefficient estimates for the interaction of the payout tax rate with cash flow, five are significantly different from zero. We also ensure robustness of our results to alternative ways of scaling our measures of investment. In what follows, we use book assets to scale investment. As our sample includes smaller and nonmanufacturing firms with modest fixed assets and varying degrees of intangible assets this appeared the logical approach (cf. Baker, Stein, and Wurgler 2003). Nevertheless, following Fazzari, Hubbard, and Petersen (1988) and Kaplan and Zingales (1997) we also investigate robustness of our results to using the alternative denominators property, plant, and equipment (PPE) and the book value of fixed assets to scale investment. The estimated coefficients for the tax-cash flow interaction variable are again consistently positive and significant when we use these alternative scale variables for investment. 17 the conditional estimate at the 25 th percentile. Using the country-weighted effective tax rate, the effect is slightly larger. Going from the 25 th percentile (7.8%) to the 75 th percentile (25.2%) implies that the effective coefficient on cash flow increases by 0.037, 36.6% more than the baseline estimate in Table 5. One implication of this is that it appears a large part of the cash flow coefficient in investment regressions may reflect the differential cost of capital for firms with and without access to internal funds (the literature has mainly focused on financial constraints and varying investment opportunities as explanations of such coefficients). The high R-squared in the regressions in Table 5 stems largely from the many firm fixed effects included. On their own, these explain about 52% of the variation in investment rates. This suggests that they may be important to include, and we maintain them in all regressions. In fact, their inclusion does not change our estimates for the tax-cash flow interaction noticeably. We next use alternative measures of internal equity to check the robustness of our results thus far. We use the ratio of EBITDA to lagged assets as an alternative flow measure, and cash to lagged assets as a stock measure. Conceptually, a stock measure may be more natural than a flow measure, but cash may be financed on the margin by debt, in which case this becomes less informative about whether the firm has internal equity. In Table 6, both measures are interacted with all three tax variables. Of the six coefficient estimates, five are significantly different from zero. The magnitudes are smaller than those reported for cash flow in Table 5. We have also used further measures of internal resources, such as net income, or operating income. Results are similar (Table A.IV of the Appendix). In a next step, we consider more flexible econometric specifications. 
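Before turning to those more flexible specifications, the baseline interaction regression described above can be sketched as follows. This is only an illustration: the variable names are placeholders, and with thousands of firms an absorbed fixed-effects estimator would be used in practice rather than explicit firm and country-year dummies.

```python
import statsmodels.formula.api as smf

def baseline_regression(df):
    """Sketch of the baseline specification: investment on the cash flow *
    payout tax interaction plus firm controls, with firm and country-year
    fixed effects entered as dummies and standard errors clustered by
    country-year. The tax level itself is omitted because it is absorbed by
    the country-year effects."""
    model = smf.ols(
        "investment ~ cash_flow:payout_tax + cash_flow + q + leverage"
        " + sales_growth + size + C(firm_id) + C(country_year)",
        data=df,
    )
    return model.fit(cov_type="cluster",
                     cov_kwds={"groups": df["country_year"]})

# Hypothetical usage on a firm-year panel 'panel':
# res = baseline_regression(panel)
# res.params["cash_flow:payout_tax"]   # predicted to be positive
```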
Thanks to the panel structure of the data set, we can allow the coefficient on cash flow to vary across countries and years, in essence replicating the identification strategy of the many studies exploiting the 2003 tax cut in the US (for seventy nine changes across 25 countries). In Table 7 we report regressions including interactions of cash flow with both country and year indicator variables. Allowing the slope on cash flow to vary by country, we can rule out any time-invariant differences in the relation between payout taxes and the allocation of investment in different countries. For example, accounting differences could make cash flow less precisely measured (reported) in some countries, where we would therefore see a smaller slope on cash flow due to attenuation bias. As long as such issues are time-invariant, we can eliminate any effect on our results by including the interaction of country fixed effects with cash flow. The coefficient estimates for the cash flow-payout tax interaction remain statistically significant, and are somewhat large across the board (the firm controls have coefficients that are very similar to base line specifications). In fact, allowing these extra controls the estimated magnitudes are larger than those estimated in Table 5. The effective coefficient on the cash flow*tax interaction increases by 0.0002 (dividend tax), 0.0006 (Effective Tax C), and 0.0004 (Average Tax C) when compared to the coefficients reported in Table 4. 18 The R-squared increases by about twenty-five basis points. Thus, a more conservative estimation technique gives a more precise result in line with the predictions of the tax wedge theory. With the more demanding flexible specifications we address one additional concern. We want to repeat our analysis using cash flow percentile ranks rather than the raw cash flow measure. This addresses concerns that despite our eliminating extreme observations of our key independent variables our results may be sensitive to outliers or to cross-country variation in the standard deviation of cash flow. 26 The results using cash flow percentile ranks are reported in Table 8. Coefficient estimates are more significant than those for the raw CF variables. T-statistics for the coefficients on our cash flow * tax interactions are very high. An auxiliary prediction of the theory of tax-induced cost differences between internal and external equity is that high taxes reduce the need to reallocate resources from profitable to unprofitable firms. Therefore, high taxes should reduce the amount of equity issues. 27 This provides an additional falsification test. We test this by using firm-level data on payout tax and quantities of equity raised. If we cannot see a negative correspondence between payout tax and amount of equity issues, it becomes less plausible that our tax measure properly captures variation in the cost of equity. Table 9 presents tests of the predicted negative relation between taxes and equity issues in our sample. To help control for market timing (as opposed to payout tax timing), we control for recent stock return in the equity issues regressions. As predicted, the coefficient estimate is negative for all three measures of taxes. A ten percentage point increase in the dividend tax rate (the country average payout tax rate) predicts a drop in equity issuance by 9% (12%) of the unconditional mean. High payout taxes are associated with both low investment and low equity issuance among firms with low profits. 
This is consistent with taxes as a driver of the cost of capital. It also suggests one channel through which the differential investment responses to taxes come about: with lower taxes, domestic stock markets reallocate capital to firms without access to internal cash. 4.3 Difference-in-difference analysis: old view firms vs. new view firms We next sort firms by their likely access to the equity market. This is an important distinguishing feature between new view and old view models. According to the new view, all firms finance internally (on the margin), and therefore do not respond to taxes on payout. According to the old view, all firms finance their investment externally (again, on the margin), and therefore respond to taxes on payout (their 26 Dependent variables are truncated, so to some extent this is already addressed. 27 The same prediction applies to payout: lower taxes should be associated with more payout. However, this prediction is less unique. If firms perceive tax changes as predictable, they may attempt to time payout to times when taxes are low (e.g., Korinek and Stiglitz 2009). It therefore seems that testing equity issues provides better discrimination among theories than testing payout volumes. 19 cost of capital increases in such taxes). We hypothesize that the two assumptions fit different firms. By sorting firms by access to the equity market, we may be able to test the two theories. We attempt to sort firms into those that can source funds in the equity markets (old view) and firms that have to rely more on internal resources to finance investment (new view). To classify firms, we use three methods: predicted equity issues, actual equity issues in preceding years, and the KZ index of financial constraints (Kaplan and Zingales 1997). 28 We estimate the effect of taxation on the cash flow sensitivity of investment separately for the groups of firms. In Table 10, Panel A, we sort firms based on the predicted probability that a firm issues shares using common share free float, share turnover, sales growth, leverage, market capitalization and market-to-book. We define firms as old view firms if predicted equity sales are above 2% of lagged assets. In Panel B, we define firms as old view firms if the sum of the net proceeds from the sale/issue of common and preferred stock over the preceding year exceeded zero, and as new view firms otherwise 29 . In Panel C, we classify firms as new view if the KZ index of financial constraints is above 0.7, and otherwise as old view firms. For all three classifications, there is a sizable difference in the effect of taxation on the marginal source of funds for investment between old view firms and new view firms. The differences between the coefficients are statistically significant at the 5% level or better in each pair of regressions. For old view firms, the cash flow coefficient is always sensitive to tax rates, as predicted. For new view firms, the coefficient estimate is positive but smaller and insignificant in all cases. 4.4 Governance and the impact of taxation on the cash flow sensitivity of investment Studies of the 2003 US tax cut found that governance variables tended to have a large impact on firm responses to the tax cut (Brown et al 2007 and Chetty and Saez 2005). Chetty and Saez (2010) model this, and suggest that poorly governed firms have CEOs who invest for reasons unrelated to the marginal cost and value of investment (i.e., they are unresponsive to the cost of capital). 
When taxes fall, such CEOs switch from excessive investment to payout, and so lower taxes have important welfare benefits. One prediction of their model is that poorly governed firms will not respond as much to tax changes as well-governed firms. To identify governance, we look at directors' ownership stakes (including officers') in the company. This is based on the notion that only owners with large stakes have both the power and the incentive to make sure the firm is maximizing value (Shleifer and Vishny 1986, Jensen and Murphy 1990). Additionally, the measure seems plausibly institution-independent, i.e., we expect it to be meaningful across countries and time. Our sample countries vary substantially in terms of legal institutions, ownership structure, and other factors. Finally, this measure can be calculated for many of our sample firms (about three quarters of observations). To calculate the fraction of shares held by insiders we use the sum of the outstanding shares of a company held by directors and officers (if above the local legal disclosure requirement) relative to total shares outstanding. 30 The median ownership stake held by insiders is 4.4% for the firms in our sample. With a standard deviation of 18.9% and an interquartile range of 16.6%, the variation of insider ownership across firms and years is substantial. Particularly low insider ownership stakes are observed, for example, for Johnson & Johnson (US, 0.1%), Samsung Fine Chemicals (KOR, 0.1%), and Rentokil (UK, 0.1%). High ownership concentration is observed, for example, for Archon (US, 89%), Grupo Embotella (MEX, 72.4%), and Maxxam (US, 65%). As a comparison, currently over 12% of shares in Microsoft are held by corporate insiders. We observe the lowest insider ownership stakes in Austria (median value of 0.2%), the Netherlands (0.4%), and Japan (0.4%). High ownership concentration is found in Greece (42.6%), Italy (35.9%), and Belgium (11.1%). In the U.S., approximately 8.9% of a company's shares are held by directors and officers in our sample. We sort firms into quartiles, with respective averages of 0.27%, 2.5%, 10.7%, and 41.8% insider ownership. 31 When sorting by insider ownership, and running separate regressions for each subsample, we find that firms with very low insider ownership show much less response to taxes (Table 11). The coefficient estimate is insignificant for the three groups of firms with the lowest ownership and significant for the group with high insider ownership. 32 This is consistent with the Chetty and Saez (2010) prediction that CEOs with incentives more in line with those of investors make decisions that are more responsive to tax incentives. More generally, the results may suggest that some firms are more responsive to changes in the cost of capital. However, the differences in the coefficient estimates across groups are not statistically significant, and are therefore only suggestive. Since insiders are individuals, this result also highlights that where the marginal shareholder is more likely to be a taxable investor, the tax effects may be stronger.

28 Note that we cannot condition on payout to distinguish financially constrained vs. unconstrained firms, since payout may be determined simultaneously with investment, which is our dependent variable. 29 Our results are robust to using the dividend tax rate and the country-weighted effective tax rate instead of the country-weighted average tax rate for this analysis (Tables A.V and A.VI of the Appendix). 30 We obtain insider ownership data from the September 2010 version of the Worldscope database. The disadvantage of Worldscope is that it reports current insider ownership at any given time (or latest available) only. Thus, we have to assume that the fraction of shares held by directors and officers at the time we accessed the data is informative about the fraction of shares historically held by insiders. Prior evidence in the literature suggests that this aspect of the ownership structure usually changes slowly (Zhou, 2001). 31 We get similar results when sorting for each country separately. 32 We get very similar results when we use the dividend tax rate and the country-weighted effective tax rate instead of the country-weighted average tax rate for this analysis (Tables A.VII and A.VIII of the Appendix).
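As a rough illustration of the Table 11 exercise, the split by insider ownership could be implemented as follows; 'insider_own' is a placeholder for the directors-and-officers ownership fraction, and baseline_regression refers to the regression sketch in Section 4.2.

```python
import pandas as pd

def insider_ownership_splits(df: pd.DataFrame):
    """Sort firm-year observations into insider ownership quartiles and
    re-estimate the baseline interaction regression within each group.
    Ranking before pd.qcut keeps the bins from collapsing when many firms
    have near-zero insider stakes."""
    df = df.copy()
    df["own_q"] = pd.qcut(df["insider_own"].rank(method="first"),
                          4, labels=False)
    return {q: baseline_regression(sub) for q, sub in df.groupby("own_q")}
```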
5. Robustness to endogeneity concerns about payout taxes

We next turn to several important additional robustness tests. One central concern about our results is that tax changes are just fragments of larger policy changes in an economy that coincide with tax reforms and alter firms' investment behavior. After all, governments are unlikely to set their tax policies completely independently of other developments in an economy. In particular, our tests (regressions and non-parametric tests) might be biased if tax changes were motivated by factors related to the relative investment of cash-rich and cash-poor firms. If, for example, taxation, cash flow, and investment all change simultaneously in response to other macroeconomic determinants or government policies, then we need to be concerned about endogeneity. Throughout our analyses we have used a number of checks to ensure robustness of our results to endogeneity concerns. For example, in our non-parametric test we have relied on differences in investment across firms instead of investment levels. Similarly, in all regressions we include country-year dummies to ensure that average investment is taken out (and, likewise, any particular government investment initiative that may inflate investment in a given year). Nevertheless, we turn to several important additional robustness checks below. They address concerns that tax rates change in response to policy variables or macroeconomic determinants that might also affect the allocation of investment across firms (thus causing false positive conclusions about taxation). We now consider further features of the tax system. We first want to control for the corporate tax rate. Corporate taxes may be connected to payout taxes for many reasons, including government budget trade-offs and political preferences (for example, a pro-business stance). Corporate taxes might also affect how important internal resources are for firms. 33 Therefore, if different features of the tax code are correlated, an empirical link between payout taxes and relative investment across firms might be reflective of a true relationship between corporate taxes and relative investment. To make sure our results are not biased in either direction, we include an interaction of the corporate tax rate and firm cash flow in our regressions. Here, we need to make a distinction between imputation systems and other tax regimes. In imputation systems, corporate and payout taxes are particularly strongly intertwined, as corporate tax at the firm level is "pre-paid" on behalf of shareholders and can be credited against payout taxes at the individual shareholder level.
Thus, the corporate tax rate is in some way a measure of investor taxes. To distinguish tax systems, we thus also add an interaction of cash flow*corporate tax with the dummy variable Imp, which takes the value of one for imputation systems and zero otherwise. The results are reported in Table 12. The interaction of corporate tax with cash flow is insignificant in all specifications, suggesting that outside of imputation systems, 34 the corporate tax rate is not related to our findings. The triple interaction with the imputation system dummy is positive and significant, suggesting that in imputation systems, 35 internal cash flow is a stronger predictor of investment when taxes are high. In other words, internal resources appear to matter more when corporate taxes are high. One interpretation of this coefficient is that when taxes are high, financial constraints bind more than at other times (see, e.g., Rauh 2006). Importantly for our purposes, the interaction of cash flow and payout tax is not much affected. The coefficient estimates remain significant (although the significance is somewhat lower for the dividend tax rate), and very close to the baseline regressions in magnitude.

33 For example, if many firms are financially constrained, they may be unable to respond to lower corporate tax rates by investing more. In that case, lower tax rates may coincide with lower coefficients on internal resources.

Apart from corporate income taxes, we are also concerned about other features of the tax system. Changes to payout taxes may coincide with modifications to the tax code apart from the corporate tax rate. We therefore introduce as covariates a set of broad measures of public sector policy that may make investment more profitable. More generally, this way we can address legislative endogeneity concerns: if firms with little internal equity increase investment following a payout tax reduction, is that because of the tax cut, or did these firms just lobby to make the investment they were planning to do anyway more profitable? We collect alternative indicators of policy preferences for the economies in our sample from the World Development Indicators (World Bank, 2010). We opt for four indicators that measure government policy in three distinct dimensions: government stimulus, consumption climate, and legal environment. We sequentially include each policy control and its interaction with cash flow. To control for the effect of government stimulus programs that may affect investment we use the control variables Subsidies, Grants, Social Benefits and Military Expenditure. The former measures government transfers on current account to private and public enterprises, and social security benefits in cash and in kind (relative to total government expense) (Table 13, Panel A). The latter includes all current and capital expenditures on the armed forces (relative to GDP) (Panel B). We measure governments' stance on consumption through the control variable Sales and Turnover Tax. It measures the tax burden on goods and services relative to the value added of industry and services (Panel C). 36 Finally, we measure public spending on research through R&D Expenditures as a fraction of GDP. It measures expenditures on basic research, applied research, and experimental development (Panel D). We use the more demanding flexible specifications to perform this additional check. Coverage for the World Development Indicators is generally poorer than for our tax variables over the sample period.
In three of the four additional specifications the number of observations is at best half compared to our baseline specifications. Results 34 Austria, Belgium, Denmark, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Netherlands, New Zealand, Norway, Poland, Portugal, Spain, Sweden, Switzerland, the United States (all in 2008). 35 Australia, Canada, Japan, Korea, Mexico, UK (all in 2008). 36 It includes general sales and turnover or value added taxes, selective taxes on services, taxes on the use of goods or property, taxes on extraction and production of minerals, and profits of fiscal monopolies. 23 are reported in Table 13. Despite the reduction in sample size and the additional policy controls the coefficient for the cash flow*tax interaction remains strong and significant in all but two specifications. 37 6. Conclusions Our results have three main implications. First, it appears that payout taxes drive the allocation of capital across firms. High taxes lock in capital in those firms that generate internal cash flows, ahead of those firms that need to raise outside equity. If firms have different investment opportunities, this means that tax rates change the type of investments being made. For example, high payout taxes may favor established industries. Taxes on payout may be as important for investment decisions and the cost of capital as the corporate income tax. 38 Second, the effect of payout taxes is related to both access to the equity market and governance. Firms which can access the equity market, “old view” firms, are the most affected by tax changes. Firms whose only source of equity finance is internal are less affected by taxes, as predicted by the “new view”. A final source of heterogeneity is governance. Firms where decision makers have low financial stakes are less affected by tax changes, reflecting their propensity to make investment decisions for reasons unrelated to the cost of capital. Third, the relation between cash flow and investment (see e.g. Fazzari, Hubbard, Petersen 1988, Kaplan and Zingales 1997, Lamont 1997) appears to partially reflect the difference in the cost of capital between firms with and without access to inside equity. Firms invest more if they have easy access to more resources (see e.g. Lamont 1997 and Rauh 2006), especially internal cash flows. There is a potentially important tax channel through which internal resources affect investment: having internal cash flows implies a lower after-tax cost of equity capital. Thus, tax policy offers one important potential channel for affecting the access to investment resources by firms without retained earnings. 37 When we include all four policy controls the reduction in the number of observations is immense – 77%. Nevertheless, for two of our three tax variables the influence of taxation on the cash flow sensitivity of investment remains statistically significant. 38 In fact, US tax receipts data suggest that payout taxes are quite relevant. From 1960 to 2009, the share of corporate income taxes in U.S. Federal tax receipts fell from 24% to 10% (IRS 2009). A study by the Department of the Treasury, Office of Tax Analysis suggested that individual income taxes on dividends were 13% of Federal tax receipts in 2005. In other words, payout-related taxes may currently raise more revenue than corporate income taxes. 24 References Asquith, Paul and David W. Mullins, 1986, “Equity issues and offering dilution”, Journal of Financial Economics, 15 (1-2): 61–89. 
Auerbach, Alan J., 1979a, “Wealth maximization and the cost of capital”, Quarterly Journal of Economics, 93 (3): 433–446. Auerbach, Alan, 1979b, “Share Valuation and Corporate Equity Policy,” Journal of Public Economics, 11 (3): 291-305, Baker, Malcolm P., Jeremy C. Stein, and Jeffrey A. Wurgler, 2003, “When Does the Market Matter? Stock Prices and the Investment of Equity-Dependent Firms”, Quarterly Journal of Economics, 118 (3): 969–1006. Becker, Bo, Zoran Ivkovic, and Scott Weisbenner, 2011, ”Local Dividend Clienteles”, Journal of Finance, 66 (2), April. Bernheim, B. Douglas, 1991, “Tax Policy and the Dividend Puzzle”, RAND Journal of Economics, 22 (4): 455–476. Bradford, David F., 1981, “The incidence and allocation effects of a tax on corporate distributions”, Journal of Public Economics, 15 (1): 1–22. Brown, Jeffrey R., Nellie Liang, and Scott Weisbenner, 2007, “Executive Financial Incentives and Payout Policy: Firm Responses to the 2003 Dividend Tax Cut”, Journal of Finance, 62 (4): 1935–1965. Chen, Hsuan-Chi, and Jay Ritter, 2000, “The Seven Percent Solution”, Journal of Finance, 55 (3): 1105– 1131. Chetty, Raj and Emmanuel Saez, 2005, “Dividend Taxes and Corporate Behavior: Evidence from the 2003 Dividend Tax Cut”, Quarterly Journal of Economics, 120 (3): 791–833. Chetty, Raj and Emmanuel Saez, 2010, “Dividend and Corporate Taxation in an Agency Model of the Firm”, American Economic Journal: Economic Policy, 2 (3): 1–31. Coase, Ronald H., 1937, “The Nature of the Firm”, Economica, 4 (16): 386–405. DeRidder, Adri, 2009, “Share Repurchases and Firm Behaviour”, International Journal of Theoretical and Applied Finance, 12 (5): 605–631. Dittmar, Amy, 2000, “Why do Firms Repurchase Stock?”, Journal of Business, 73 (3): 331–355. Djankov, Simeon, Tim Ganser, Caralee McLiesh, Rita Ramalho, and Andrei Shleifer, 2010, “The Effect of Corporate Taxes on Investment and Entrepreneurship”, American Economic Journal: Macroeconomics, 2 (July): 31-64. Fama, Eugene F. and Kenneth R. French, 2001, ”Disappearing dividends: changing firm characteristics or lower propensity to pay?”, Journal of Financial Economics, 60 (1): 3–43. Fazzari, Steven M., R. Glenn Hubbard, and Bruce Petersen, 1988, “Finance Constraints and Corporate Investment”, Brookings Papers on Economic Activity, 1: 141–195. Feldstein, Martin S., 1970, “Corporate Taxation and Dividend Behaviour”, Review of Economic Studies, 37 (1): 57–72. French, Kenneth, and James Poterba, 1991, “Investor Diversification and International Equity Markets”, American Economic Review, 81 (2): 222–226. 25 Gordon, Roger and Martin Dietz, 2006, “Dividends and Taxes”, NBER Working Paper No.12292, forthcoming in Alan J. Auerbach and Daniel Shaviro, editors, Institutional Foundations of Public Finance: Economic and Legal Perspectives, Harvard University Press, Cambridge, MA. Guenther, David A., and Richard Sansing, 2006, “Fundamentals of shareholder tax capitalization”, Journal of Accounting and Economics, 42 (3), 371-383. Harberger, Arnold C., 1962, “The Incidence of the Corporation Income Tax”, Journal of Political Economy, 70 (3): 215–240. Harberger, Arnold C., 1966, “Efficiency effects of taxes on income from capital", in: Marian Krzyzaniak, editor, Effects of corporation income tax, Wayne State University Press, Detroit. Hashimoto, Masanori, 1998, “Share Repurchases and Cancellation”, Capital Market Trend Report 1998- 17, Capital Market Research Group, Nomura Research Institute. Internal Revenue Service, 2009, IRS Data Book 2009. 
Jacob, Marcus and Martin Jacob, 2011, “Taxation, Dividends, and Share Repurchases: Taking Evidence Global“, SSRN Working Paper. Jensen, Kevin J. and Michael C. Jensen, 1990, “CEO Incentives: It's Not How Much You Pay, But How”, Harvard Business Review, 3 (3): 138–153. Jensen, Michael C., and William H. Meckling, 1976, “Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure”, Journal of Financial Economics, 3 (4):305–360. Jensen, Michael C. and Kevin J. Murphy, 1990, “Performance Pay and Top-Management Incentives”, Journal of Political Economy, 98 (2): 225–264. Kaplan, Steven N. and Luigi Zingales, 1997, “Do Investment-Cash Flow Sensitivities Provide Useful Measures of Financing Constraints?”, Quarterly Journal of Economics, 112 (1): 169–215. King, Mervyn A., 1977, Public Policy and the Corporation. Chapman and Hall, London. Korinek, Anton and Joseph E. Stiglitz, 2009, “Dividend Taxation and Intertemporal Tax Arbitrage”, Journal of Public Economics, 93 (1-2): 142–159. La Porta, Rafael, Florencio Lopez-de-Silanes, Andrei Shleifer, and Robert W. Vishny, 2000, “Agency Problems and Dividend Policies around the World”, Journal of Finance, 55 (1): 1–33. Lamont, Owen, 1997, ”Cash Flow and Investment: Evidence from Internal Capital Markets”, Journal of Finance, 52 (1): 83–109. Lewellen, Jonathan and Katharina Lewellen, 2006, “Internal Equity, Taxes, and Capital Structure”, Working Paper, Dartmouth. Malmendier, Ulrike and Geoffrey Tate, 2005, “CEO Overconfidence and Corporate Investment”, Journal of Finance, 60 (6): 2661–2700. Mondria, Jordi, and Thomas Wu. 2010, “The puzzling evolution of the home bias, information processing and financial openness”, Journal of Economic Dynamics and Control, 34(5): 875–896. Myers, Steward C., 1977, “Determinants of Corporate Borrowing”, Journal of Financial Economics, 5 (2): 147–175. Perez-Gonzalez, Francisco, 2003, “Large Shareholders and Dividends: Evidence From U.S. Tax Reforms”, working paper, Columbia University. Poterba, James M., 1987, “Tax Policy and Corporate Savings”, Brookings Papers on Economic Policy, 2: 455–503. 26 Poterba, James M., 2004, “Taxation and Corporate Payout Policy”, American Economic Review, 94 (2): 171–175. Poterba, James M. and Lawrence H. Summers, 1984, “New Evidence That Taxes Affect the Valuation of Dividends”, Journal of Finance, 39 (5): 1397–1415. Poterba, James M. and Lawrence H. Summers, 1985, “The Economic Effects of Dividend Taxation”, In Edward Altman and Marti Subrahmanyam, editors, Recent advances in corporate finance: 227–284. Dow Jones-Irwin Publishing: Homewood, IL. Rau, P. Raghavendra and Theo Vermaelen, 2002, “Regulation, Taxes, and Share Repurchases in the United Kingdom”, Journal of Business, 75 (2): 245–282. Rauh, Joshua, 2006, “Investment and Financing Constraints: Evidence from the Funding of Corporate Pension Plans”, Journal of Finance, 61 (1): 33–71. Rydqvist, Kristian, Joshua Spizman, and Ilya Strebulaev. 2010, “The Evolution of Aggregate Stock Ownership.” Working Paper. Shleifer, Andrei and Robert W. Vishny, 1986, “Large Shareholders and Corporate Control”, Journal of Political Economy, 94 (3): 461–88. Zhou, Xianming, 2001, “Understanding the determinants of managerial ownership and the link between ownership and performance: Comment”, Journal of Financial Economics, 62 (3): 559-571. Zingales, Luigi, 2000, “In Search of New Foundations”, Journal of Finance, 55 (4): 1623–1653. 
Figure 1 Personal Tax Rates on Dividend Income – High Variation Countries
This figure shows dividend tax rates for the six countries in our sample with the largest within-country variation in personal income tax rates on dividend income over the 1990-2008 period. [Line chart: tax rate (%) by year, 1990-2008, for Finland, Japan, the Netherlands, Norway, Sweden, and the United States.]

Figure 2 Capital Gains Tax Rates – High Variation Countries
This figure shows taxation of share repurchases for the six countries in our sample with the largest within-country variation in tax rates on capital gains over the 1990-2008 period. [Line chart: tax rate (%) by year, 1990-2008, for Canada, the Netherlands, Australia, Spain, Poland, and Switzerland.]

Figure 3 Tax Rates – Distribution over Sample
This figure illustrates the distribution of tax rates across 81,222 observations in our sample over the 1990-2008 period. The graph is a transposed cumulative distribution function with number of observations on the x-axis and tax rates on the y-axis. Dividend Tax is the personal income tax rate on dividends (in %). Effective Tax C is the country-weighted effective corporate payout tax rate (in %). It is obtained by weighting each year's dividend and effective capital gains tax rates by the relative importance of dividends and share repurchases as payout channels (relative to total corporate payout) in a country over the sample period. The effective tax rate on share repurchases equals one-fourth of the statutory capital gains tax rate. Average Tax C is an alternative measure of the average corporate payout tax rate (in %). It is calculated by weighting each year's dividend and statutory capital gains tax rates by the relative importance of dividends and share repurchases as payout channels (relative to total corporate payout) in a country over the sample period. [Chart: tax rate (%) against observations (0-80,000) for Dividend Tax, Effective Tax C, and Average Tax C; annotated points include US 2003-2008: 15%, US 1993-2000: 39.6%, and Japan 2004-2008: 10%.]

Figure 4 Average Investment by High and Low Cash Flow Firm Quintiles Around Payout Tax Changes of at Least 3 Percentage Points, 1992-2006
This figure shows the average investment by cash flow group for three years around 15 payout tax decreases and 14 payout tax increases in 1992-2006 with at least 30 observations in the country-year. We measure investment by capital expenditures normalized by prior-year total assets (CapEx/A) and demean investment by country-year cell. We then sort firms in each country-year cell into five quintiles according to their cash flow, and calculate average investment for each quintile. The 14 payout tax increase events are Australia 1993, Canada 1993, Denmark 1993, Denmark 2001, Germany 1994, Germany 1995, Finland 2005, Finland 2006, France 1997, Japan 2000, Norway 2006, Poland 2004, Switzerland 1998, and the US 1993. The 15 tax decrease events include Belgium 2002, Canada 1996, Canada 2001, Canada 2006, Germany 2001, France 2002, Italy 1998, Japan 2004, Netherlands 2001, Poland 2001, Spain 1996, Spain 1999, Spain 2003, US 1997, and the US 2003.
Figure 4
Average Investment by High and Low Cash Flow Firm Quintiles Around Payout Tax Changes of at Least 3 Percentage Points, 1992-2006
This figure shows the average investment by cash flow group for three years around 15 payout tax decreases and 14 payout tax increases in 1992-2006 with at least 30 observations in the country-year. We measure investment by capital expenditures normalized by prior-year total assets (CapEx/A) and demean investment by country-year cell. We then sort firms in each country-year cell into five quintiles according to their cash flow, and calculate average investment for each quintile. The 14 payout tax increase events are Australia 1993, Canada 1993, Denmark 1993, Denmark 2001, Germany 1994, Germany 1995, Finland 2005, Finland 2006, France 1997, Japan 2000, Norway 2006, Poland 2004, Switzerland 1998, and the US 1993. The 15 tax decrease events include Belgium 2002, Canada 1996, Canada 2001, Canada 2006, Germany 2001, France 2002, Italy 1998, Japan 2004, Netherlands 2001, Poland 2001, Spain 1996, Spain 1999, Spain 2003, US 1997, and the US 2003.
[Line chart: average investment (roughly 0.04–0.08) against the year relative to the tax change (-3 to +2), shown separately for tax increase events and tax decrease events.]

Figure 5
Difference-in-Difference Estimates, Empirical Distribution
This figure presents the empirical distribution of difference-in-difference estimates around tax increase and decrease events. Events are included if they represent a change of 3 percentage points or more in the tax rate, if there are at least 30 firm observations for each year around the change, and if they occur during 1992-2006. For each event, we sort firms in each year into five groups based on cash flows. For each year, the difference in the average investment to lagged assets between the firm quintiles with the highest and lowest cash flows is calculated. The difference-in-difference estimate for each event is defined as the change in this difference from the three years before to the three years after the tax change. The graph presents tax decreases and increases separately.
[Histogram: number of tax events by the investment rate difference in percentage points, in bins from -7.5 to 7.5. Tax decreases (< -3%): mean -1.42, median -1.98, std. dev. 2.40, N = 15. Tax increases (> 3%): mean 1.68, median 1.36, std. dev. 2.53, N = 14.]
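In formula form (again a sketch in our own notation; the exact pre- and post-reform windows follow the definitions given in the figure and table notes): for event \(e\), let \(\bar{I}^{Q5}_{e,t}\) and \(\bar{I}^{Q1}_{e,t}\) denote average investment to lagged assets for the highest and lowest cash-flow quintiles in event year \(t\). The quintile spread and the difference-in-difference estimate are then

\[
\Delta_{e,t} \;=\; \bar{I}^{Q5}_{e,t} - \bar{I}^{Q1}_{e,t},
\qquad
DD_e \;=\; \frac{1}{3}\sum_{t=0}^{2}\Delta_{e,t} \;-\; \frac{1}{3}\sum_{t=-3}^{-1}\Delta_{e,t},
\]

with the pre-reform average taken over the years before the change and the post-reform average over the years from the change onward.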
Table 1
Tax Regimes Across 25 Countries (1990-2008)
This table reports prevailing tax regimes across 25 countries over the 1990-2008 period. CL, FI, PI, SR, and TE abbreviate classical corporate taxation system, full imputation system, partial imputation system, shareholder relief system, and dividend tax exemption system, respectively. (1) Split-rate system for distributed and retained earnings. (2) Individuals had the option to accumulate the dividend grossed up by a factor of 1.82, combined with a tax credit of 35% on the grossed-up dividend. This mechanism is similar to a full imputation system (Source: OECD).

Country 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008
Australia FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI
Austria SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR
Belgium SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR
Canada PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI
Denmark CL CL CL CL CL CL CL CL CL CL CL CL CL CL CL SR SR SR SR
Finland PI PI PI FI FI FI FI FI FI FI FI FI FI FI FI SR SR SR SR
France FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI SR SR SR SR
Germany FI(1) FI(1) FI(1) FI(1) FI(1) FI(1) FI(1) FI(1) FI(1) FI(1) FI(1) SR SR SR SR SR SR SR SR
Greece - - TE TE TE TE TE TE TE TE TE TE TE TE TE TE TE TE TE
Hungary SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR
Ireland PI PI PI PI PI PI PI PI PI PI CL CL CL CL CL CL CL CL CL
Italy FI FI FI FI FI FI FI FI SR SR SR SR SR SR SR SR SR SR SR
Japan CL CL CL CL CL CL CL CL SR SR SR SR SR SR SR SR SR SR SR
Korea PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI
Mexico FI(2) FI(2) TE TE TE TE TE TE TE FI FI FI FI FI FI FI FI FI FI
Netherlands CL CL CL CL CL CL CL CL CL CL CL SR SR SR SR SR SR SR SR
New Zealand FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI
Norway SR SR FI FI FI FI FI FI FI FI FI PI FI FI FI FI SR SR SR
Poland - - - SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR
Portugal SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR
Spain CL CL CL CL CL PI PI PI PI PI PI PI PI PI PI PI PI SR SR
Sweden CL SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR
Switzerland CL CL CL CL CL CL CL CL CL CL CL CL CL CL CL CL CL SR SR
United Kingdom PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI
United States CL CL CL CL CL CL CL CL CL CL CL CL CL SR SR SR SR SR SR

Table 2
Personal Income Tax Rates and Capital Gains Tax Rates Across 25 Countries (1990-2008)
This table shows effective corporate payout tax rates across 25 countries over the 1990-2008 period. Panel A reports personal income tax rates on dividend income (in %). Panel B reports capital gains tax rates (in %). All capital gains tax rates reported are effective rates incurred by investors with non-substantial shareholdings and holding periods that qualify as long-term investments in accordance with country-specific tax legislation. For example, in Denmark, Germany, or the United States, capital gains from long-term shareholdings are taxed at the lower rate reported in Panel B. Austria, Italy, and the Netherlands are examples of countries where capital gains from substantial shareholdings are taxed at higher rates. A shareholding qualifies as substantial if it exceeds a certain threshold of share capital (for example, 5% in the Netherlands). See Jacob and Jacob (2010) for a detailed description of the applied tax rates.
Panel A: Personal Income Tax Rates on Dividend Income (in %) Country 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 Australia 15.2 15.2 15.2 23.0 23.0 19.5 19.5 19.5 19.5 19.5 22.0 26.4 26.4 26.4 26.4 26.4 23.6 23.6 23.6 Austria 25.0 25.0 25.0 25.0 22.0 22.0 22.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 Belgium 25.0 25.0 25.0 25.0 25.0 15.0 15.0 15.0 15.0 15.0 15.0 15.0 15.0 15.0 15.0 15.0 15.0 15.0 15.0 Canada 38.3 39.1 40.1 43.5 44.6 44.6 37.0 35.8 34.6 33.6 33.2 31.9 31.9 31.9 31.9 31.9 24.4 24.1 23.6 Denmark 60.9 45.0 45.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 43.0 43.0 43.0 43.0 43.0 43.0 43.0 45.0 Finland 59.5 55.6 55.9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 16.0 19.6 19.6 19.6 France 39.9 39.9 39.9 41.8 41.8 42.6 39.0 43.4 41.9 41.9 40.8 40.1 35.6 33.5 33.9 32.3 32.7 32.7 32.7 Germany 26.6 29.7 29.7 26.6 32.9 38.5 38.5 38.5 37.0 37.0 34.0 25.6 25.6 25.6 23.7 22.2 22.2 23.7 26.4 Greece - - 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Hungary 20.0 20.0 10.0 10.0 10.0 10.0 10.0 10.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0 25.0 25.0 10.0 10.0 Ireland 35.8 35.7 32.0 30.7 30.7 32.0 32.5 34.4 39.3 39.3 44.0 42.0 42.0 42.0 42.0 42.0 42.0 41.0 41.0 Italy 21.9 21.9 23.4 23.4 23.4 23.4 22.2 22.2 12.5 12.5 12.5 12.5 12.5 12.5 12.5 12.5 12.5 12.5 12.5 Japan 35.0 35.0 35.0 35.0 35.0 35.0 35.0 35.0 35.0 35.0 43.6 43.6 43.6 43.6 10.0 10.0 10.0 10.0 10.0 Korea 47.3 47.3 47.3 47.3 38.4 37.0 33.4 33.4 33.4 22.7 22.7 33.4 28.1 28.1 28.1 31.1 31.1 31.1 31.1 Mexico 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Netherlands 60.0 60.0 60.0 60.0 60.0 60.0 60.0 60.0 60.0 60.0 60.0 25.0 25.0 25.0 25.0 25.0 25.0 22.0 25.0 New Zealand 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 8.9 8.9 8.9 8.9 8.9 8.9 9.0 8.9 12.9 Norway 25.5 23.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 11.0 0.0 0.0 0.0 0.0 28.0 28.0 28.0 Poland - - - 20.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0 15.0 15.0 15.0 19.0 19.0 19.0 19.0 19.0 Portugal 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0 Spain 46.0 46.0 43.0 46.0 46.0 38.4 38.4 38.4 38.4 27.2 27.2 27.2 27.2 23.0 23.0 23.0 23.0 18.0 18.0 Sweden 66.2 30.0 30.0 30.0 0.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 Switzerland 40.9 40.9 41.5 42.4 42.4 42.4 42.4 42.4 42.4 42.4 42.1 41.5 41.0 40.4 40.4 40.4 40.4 40.4 25.7 United Kingdom 20.0 20.0 20.0 22.6 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 United States 28.0 31.0 31.0 39.6 39.6 39.6 39.6 39.6 39.6 39.6 39.6 39.1 38.6 15.0 15.0 15.0 15.0 15.0 15.0 33 Panel B: Capital Gains Tax Rates (in %) Country 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 Australia 48.5 48.5 48.5 48.5 48.5 48.5 48.5 48.5 48.5 48.5 24.3 24.3 24.3 24.3 24.3 24.3 23.3 23.3 23.3 Austria 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Belgium 41.0 39.0 39.0 40.2 40.2 40.2 40.2 40.2 40.2 40.2 40.2 40.2 10.0 10.0 10.0 10.0 10.0 10.0 10.0 Canada 35.1 35.7 36.3 38.6 39.3 39.3 39.0 37.1 36.3 35.9 31.9 23.2 23.2 23.2 23.2 23.2 23.2 23.2 23.2 Denmark 0.0 0.0 0.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 43.0 43.0 43.0 43.0 43.0 43.0 43.0 45.0 Finland 23.8 27.8 27.9 25.0 25.0 25.0 28.0 28.0 28.0 28.0 29.0 29.0 29.0 29.0 29.0 28.0 28.0 28.0 28.0 France 19.4 19.4 19.4 19.4 19.4 19.4 19.4 19.9 19.9 26.0 26.0 26.0 26.0 26.0 26.0 27.0 27.0 27.0 30.1 Germany 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Greece - - - 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Hungary 20.0 20.0 20.0 20.0 20.0 10.0 10.0 10.0 20.0 20.0 20.0 20.0 20.0 20.0 0.0 0.0 20.0 20.0 20.0 Ireland 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0 Italy 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 12.5 12.5 12.5 12.5 12.5 12.5 12.5 12.5 12.5 12.5 12.5 Japan 35.0 35.0 35.0 35.0 35.0 26.0 26.0 26.0 26.0 26.0 26.0 26.0 26.0 26.0 10.0 10.0 10.0 10.0 10.0 Korea 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Mexico 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Netherlands 60.0 60.0 60.0 60.0 60.0 60.0 60.0 60.0 60.0 60.0 60.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 New Zealand 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Norway 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 28.0 28.0 28.0 Poland - - 40.0 40.0 45.0 45.0 45.0 44.0 40.0 0.0 0.0 0.0 0.0 0.0 19.0 19.0 19.0 19.0 19.0 Portugal 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Spain 11.2 11.2 10.6 37.3 37.3 37.3 20.0 20.0 20.0 20.0 18.0 18.0 18.0 18.0 15.0 15.0 15.0 18.0 18.0 Sweden 33.1 30.0 25.0 25.0 12.5 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 Switzerland 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 42.4 42.4 42.1 41.5 41.0 40.4 40.4 40.4 40.4 40.4 25.7 United Kingdom 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 18.0 United States 28.0 28.0 28.0 28.0 28.0 28.0 28.0 20.0 20.0 20.0 20.0 20.0 20.0 15.0 15.0 15.0 15.0 15.0 15.0 34 Table 3 Sample Overview and Summary Statistics The sample consists of 7,661 firms in 25 countries for 1990-2008 presented in Panel A. Summary statistics for investment variables are presented in Panel B. Investment refers to capital expenditure in year t divided by the endof-year t-1 assets. PPE Growth refers to growth in plant, property, and equipment from t-1 to t divided by the endof-year t-1 assets, and Asset Growth is defined as the growth rate of assets over the prior year. Summary statistics for independent variables are presented in Panel C. Dividend Tax is the personal income tax rate on dividends (in %). Effective Tax C is the country-weighted effective corporate payout tax rate (in %). It is obtained by weighting each year’s dividend and effective capital gains tax rates by the relative importance of dividends and share repurchases as payout channels (relative to total corporate payout) in a country over the sample period. The effective tax rate on share repurchases equals one-fourth of the statutory capital gains tax rate. Average Tax C is an alternative measure of the average corporate payout tax rate (in %). It is calculated by weighting each year’s dividend and statutory capital gains tax rates by the relative importance of dividends and share repurchases as payout channels (relative to total corporate payout) in a country over the sample period. Cash Flow is the ratio of cash flow in year t relative to prior year total assets. Cash is defined as cash holdings over prior year assets. EBITDA measures earnings before interest, tax, and depreciation in year t as a fraction of t-1 total assets. Q is defined as the market-to-book ratio, that is, the market value divided by the replacement value of the physical assets of a firm. Sales Growth is the logarithm of the growth rate of sales from t-2 to t. 
Leverage is the ratio of year t total debt to prior year total assets, and Size is the relative firm size measured as the percentage of firms in the sample that are smaller than this firm. All variables are in real USD (base year 2000). Panel A: Sample Overview Country N(Firms) N(Obs) Country N(Firms) N(Obs) Country N(Firms) N(Obs) Australia 261 1,879 Hungary 13 111 Poland 70 403 Austria 26 332 Ireland 18 252 Portugal 28 269 Belgium 38 463 Italy 66 925 Spain 41 577 Canada 320 2,525 Japan 2,071 22,347 Sweden 100 1,112 Denmark 65 867 Korea 477 4,528 Switzerland 85 1,136 Finland 57 727 Mexico 39 401 UK 470 6,054 France 212 2,608 Netherlands 68 894 USA 2,720 28,439 Germany 245 3,067 New Zealand 31 272 Total 7,661 81,222 Greece 99 519 Norway 41 515 Panel B: Summary Statistics for Investment N Mean Standard Deviation 10 th Percentile Median 90 th Percentile Investment 81,222 0.0594 0.0676 0.0083 0.0398 0.1271 PPE Growth 77,626 0.0805 0.2364 -0.1377 0.0514 0.2898 Asset Growth 81,222 0.0785 0.3128 -0.1702 0.0338 0.3079 Panel C: Summary Statistics for Independent Variables N Mean St. Dev. 10 th % Median 90 th % Dividend Tax 81,222 27.7640 12.5679 10.0000 30.0000 43.6000 Effective Tax C 81,222 18.2530 9.1225 7.6536 17.5143 31.9932 Average Tax C 81,222 24.1584 10.3002 10.0000 26.9082 38.0938 Cash Flow 81,222 0.0696 0.1043 -0.0217 0.0720 0.1767 Cash 81,222 0.1480 0.1883 0.0127 0.0922 0.3409 EBITDA 81,222 0.0957 0.1139 -0.0066 0.1008 0.2138 Q 81,222 2.1270 2.9255 0.7524 1.2183 4.0391 Sales Growth 81,222 0.1114 0.3924 -0.2719 0.0896 0.5080 Leverage 81,222 0.2607 0.2345 0.0031 0.2276 0.5313 Size 81,222 0.6306 0.2404 0.2800 0.6571 0.9363 35 Table 4 Average Investment and Cash Flow around Payout Tax Changes Panel A of this table shows the average investment for bottom and top quintiles of cash flow to assets around 14 payout tax increases (Average Tax C) in 1990-2008 of at least 3 percentage points and with at least 30 observations in the country-year. Panel B illustrates the difference in investment between top and bottom cash flow quintiles around 15 payout tax decreases. We measure investment by capital expenditure in year t divided by the end-of-year t-1 assets. The table also shows the difference between groups and periods, and the difference-in-difference estimate. Standard errors are in parentheses. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. The 31 tax events are listed in Figure 4. Panel A: 14 Tax Increase Events Low Cash Flow Firms High Cash Flow Firms Difference between Groups (1) (2) (3) Pre-reform Periodt-4;t-1 -0.0230*** 0.0307*** 0.0533*** (0.0015) (0.0038) (0.0046) Post-reform Period t;t+2 -0.0278** 0.0481*** 0.0759*** (0.0025) (0.0037) (0.0051) Difference between Periods -0.0048* 0.0173*** 0.0226*** (0.0029) (0.0053) (0.0069) Panel B: 15 Tax Decrease Events Low Cash Flow Firms High Cash Flow Firms Difference between Groups (1) (2) (3) Pre-reform Periodt-4;t-1 -0.0232*** 0.0495*** 0.0727*** (0.0024) (0.0035) (0.0046) Post-reform Period t;t+2 -0.0163*** 0.0390*** 0.0554*** (0.0029) (0.0030) (0.0042) Difference between Periods 0.0068* -0.0105** -0.0173*** (0.0038) (0.0046) (0.0062) 36 Table 5 Firm Investment and Internal Resources under Various Tax Regimes This table reports linear regression results for firm investment behavior, estimated over the 1990-2008 period. The dependent variable is Investment, defined as capital expenditure in year t divided by the end-of-year t-1 assets. 
We use Cash Flow as a measure of firm’s availability of internal resources for investment. Cash Flow is the ratio of cash flow in year t relative to prior year total assets. See Table 3 for a description of the other independent variables included in the regressions. In column (1) we measure firms’ tax burden on corporate payouts (Tax) as the personal income tax rate on dividends (Dividend Tax). Column (2) uses the country-weighted effective tax rate (Effective Tax C), and column (3) employs the country-weighted average tax rate (Average Tax C). Country-year interaction indicator variables are included in all specifications. Standard errors (shown in parentheses) allow for heteroskedasticity and are clustered by country-years. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Dividend Tax Rate Country-Weighted Effective Tax Rate Country-Weighted Average Tax Rate (1) (2) (3) Cash Flow*Tax 0.0009** 0.0021*** 0.0017*** (0.0004) (0.0006) (0.0005) Cash Flow 0.0749*** 0.0644*** 0.0599*** (0.0115) (0.0101) (0.0123) Sales Growth 0.0157*** 0.0156*** 0.0156*** (0.0011) (0.0011) (0.0011) Leverage 0.0374*** 0.0373*** 0.0373*** (0.0029) (0.0029) (0.0029) Size 0.0025 0.0031 0.0030 (0.0040) (0.0040) (0.0040) Q 0.0011*** 0.0011*** 0.0010*** (0.0001) (0.0001) (0.0001) Firm FE Yes Yes Yes Country-year FE Yes Yes Yes Observations 81,222 81,222 81,222 R-squared 0.5779 0.5781 0.5781 37 Table 6 Firm Investment and Internal Resources under Various Tax Regimes – Alternative Measures This table reports linear regression results for firm investment behavior, estimated over the 1990-2008 period. The dependent variable is Investment, defined as capital expenditure in year t divided by the end-of-year t-1 assets. We use two alternative measures of firm’s availability of internal resources for investment. Cash is defined as cash holdings over prior year assets (columns (1), (3), (5)). EBITDA measures earnings before interest, tax, and depreciation in year t as a fraction of t-1 total assets (columns (2), (4), (6)). See Table 3 for a description of the other independent variables included in the regressions. In columns (1) and (2) we measure firms’ tax burden on corporate payouts (Tax) as the personal income tax rate on dividends (Dividend Tax). Columns (3) and (4) use the country-weighted effective tax rate (Effective Tax C), and columns (5) and (6) employ the country-weighted average tax rate (Average Tax C). Countryyear interaction indicator variables are included in all specifications. Standard errors (shown in parentheses) allow for heteroskedasticity and are clustered by country-years. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. 
Dividend Tax Rate Country-Weighted Effective Tax Rate Country-Weighted Average Tax Rate (1) (2) (3) (4) (5) (6) Cash*Tax 0.0005** 0.0006* 0.0005* (0.0002) (0.0003) (0.0002) EBITDA*Tax 0.0003 0.0010** 0.0009** (0.0003) (0.0004) (0.0003) Cash 0.0014 0.0060 0.0028 (0.0060) (0.0054) (0.0063) EBITDA 0.0395*** 0.0319*** 0.0283*** (0.0085) (0.0075) (0.0089) Sales Growth 0.0213** 0.0188*** 0.0213** 0.0188*** 0.0213** 0.0188*** (0.0011) (0.0012) (0.0011) (0.0012) (0.0011) (0.0012) Leverage 0.0331** 0.0366*** 0.0331** 0.0366*** 0.0332** 0.0365*** (0.0030) (0.0031) (0.0029) (0.0030) (0.0029) (0.0030) Size 0.0062 0.0038 0.0060 0.0042 0.0062 0.0041 (0.0041) (0.0040) (0.0041) (0.0040) (0.0041) (0.0040) Q 0.0013** 0.0013*** 0.0013** 0.0013*** 0.0013** 0.0013*** (0.0001) (0.0001) (0.0001) (0.0001) (0.0001) (0.0001) Firm FE Yes Yes Yes Yes Yes Yes Country-year FE Yes Yes Yes Yes Yes Yes Observations 81,222 81,222 81,222 81,222 81,222 81,222 R-squared 0.5688 0.5707 0.5687 0.5708 0.5687 0.5708 38 Table 7 Firm Investment and Internal Resources under Various Tax Regimes – Flexible Specifications This table reports linear regression results for firm investment behavior, estimated over the 1990-2008 period. The dependent variable is Investment, defined as capital expenditure in year t divided by the end-of-year t-1 assets. We use Cash Flow to measure firms’ availability of internal resources for investment. Cash Flow is the ratio of cash flow in year t relative to prior year total assets. See Table 3 for a description of the other independent variables included in the regressions. In column (1) we measure firms’ tax burden on corporate payouts (Tax) as the personal income tax rate on dividends (Dividend Tax). Column (2) uses the country-weighted effective tax rate (Effective Tax C), and column (3) employs country-weighted average tax rate (Average Tax C). Country-year interaction indicator variables are included in all three specifications. We also include the interaction of Cash Flow with both country and year indicator variables. Standard errors (shown in parentheses) allow for heteroskedasticity and are clustered by country-years. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Dividend Tax Rate Country-Weighted Effective Tax Rate Country-Weighted Average Tax Rate (1) (2) (3) Cash Flow*Tax 0.0011** 0.0027*** 0.0021*** (0.0005) (0.0008) (0.0006) Sales Growth 0.0158*** 0.0157*** 0.0157*** (0.0011) (0.0011) (0.0011) Leverage 0.0373*** 0.0372*** 0.0372*** (0.0029) (0.0029) (0.0029) Size 0.0035 0.0040 0.0038 (0.0040) (0.0040) (0.0040) Q 0.0009*** 0.0009*** 0.0009*** (0.0001) (0.0001) (0.0001) Firm FE Yes Yes Yes Country-year FE Yes Yes Yes Year FE*CashFlow Yes Yes Yes Country FE*CashFlow Yes Yes Yes Observations 81,222 81,222 81,222 R-squared 0.5803 0.5805 0.5804 39 Table 8 Firm Investment and Internal Resources under Various Tax Regimes – Cash Flow Percentile Ranks This table reports linear regression results for firm investment behavior, estimated over the 1990-2008 period. The dependent variable is Investment, defined as capital expenditure in year t divided by the end-of-year t-1 assets. We use the interaction of payout tax with the cash flow percentile rank (CF Rank) as explanatory variable. See Table 3 for a description of the other independent variables included in the regressions. Country-year interaction indicator variables are included in all specifications. 
In columns (2), (4), and (6) we also include the interaction of Cash Flow with both country and year indicators for the more demanding flexible specifications. Standard errors (shown in parentheses) allow for heteroskedasticity. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Dividend Tax Rate Country-Weighted Effective Tax Rate Country-Weighted Average Tax Rate (1) (2) (3) (4) (5) (6) CF Rank*Tax 0.0008*** 0.0008*** 0.0012*** 0.0013*** 0.0010*** 0.0010*** (0.0001) (0.0001) (0.0002) (0.0002) (0.0001) (0.0001) Baseline Controls Yes Yes Yes Yes Yes Yes Firm FE Yes Yes Yes Yes Yes Yes Country-year FE Yes Yes Yes Yes Yes Yes Year FE*CashFlow No Yes No Yes No Yes Country FE*CashFlow No Yes No Yes No Yes Observations 81,222 81,222 81,222 81,222 81,222 81,222 R-squared 0.5795 0.5818 0.5795 0.5817 0.5796 0.5818 40 Table 9 External Equity Financing and Tax Regimes This table presents linear regression results for external financing behavior, estimated over the 1990-2008 period. The dependent variable is the value of new equity issues to start-of-year book value of assets. Observations where the dependent variable exceeds 0.15 are excluded. See Table 3 for a description of the independent variables included in the regressions. In column (1) we measure firms’ tax burden on corporate payouts (Tax) as the personal income tax rate on dividends (Dividend Tax). Column (2) uses the country-weighted effective tax rate (Effective Tax C), and column (3) employs the country-weighted average tax rate (Average Tax C). Coefficient estimates are based on baseline specifications with country-fixed effects and year-fixed effects. Standard errors (shown in parentheses) are heteroskedasticity-robust and clustered by country-years. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Dividend Tax Rate Country-Weighted Average Tax Rate Country-Weighted Average Tax Rate (1) (2) (3) Tax -0.0001*** -0.0002*** -0.0002*** (0.0000) (0.0001) (0.0001) Cash Flow -0.0088*** -0.0089*** -0.0088*** (0.0031) (0.0031) (0.0031) Stock Price Appreciation 0.0112*** 0.0112*** 0.0112*** (0.0009) (0.0009) (0.0009) Sales Growth 0.0048*** 0.0047*** 0.0047*** (0.0006) (0.0006) (0.0006) Leverage 0.0085*** 0.0085*** 0.0085*** (0.0017) (0.0017) (0.0017) Size 0.0073*** 0.0072*** 0.0072*** (0.0025) (0.0025) (0.0025) Q 0.0006*** 0.0006*** 0.0006*** (0.0001) (0.0001) (0.0001) Year FE Yes Yes Yes Firm FE Yes Yes Yes Observations 33,280 33,280 33,280 R-squared 0.3819 0.3815 0.3819 41 Table 10 Old and New View Firms and the Link between Payout Taxes and Cash Flow Table 11 Corporate Governance and the Link between Payout Taxes and Cash Flow This table presents coefficient estimates for Cash Flow*Tax interaction using the country-weighted average tax rate (Average Tax C). Firms are sorted into quartiles of insider ownership, and regressions are estimated separately for each quartile. b is the coefficient estimate, (se) is the heteroskedasticity-robust standard error clustered by country-years, tstat is the t-statistic of the significance of coefficient b, and n is the number of observations.***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. This table presents coefficient estimates for Cash Flow*Tax interaction using the country-weighted average tax rate (Average Tax C). 
We define firms as old view firms if predicted net proceeds from the sale/issue of common and preferred stock to lagged assets exceeds 2% (Panel A) or if previous years’ sales of shares divided by lagged book assets exceeded zero (Panel B) or if the firm has low financial constraints (using the KZ Index of financial constraints, with a cutoff of 0.7, see text for detail). We predict issues of common stock by common share free float, share turnover, sales growth, leverage, market capitalization and Tobin's q. b is the coefficient estimate, (se) is the heteroskedasticity-robust standard error clustered by country-years, t-stat is the t-statistic of the significance of coefficient b, and n is the number of observations. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Panel A: Predicted Equity Issues Category b (se) [t-stat] N New view firms; predicted equity issues < 2% 0.1012 (0.0847) [1.19] 21,614 Old view firms; predicted equity issues > 2% 0.2042** (0.0952) [2.14] 13,770 Panel B: Previous year Equity Issues Category B (se) [t-stat] n New view firms; last year equity issues = 0 0.1159 (0.0764) [1.52] 24,734 Old view firms; last year equity issues > 0 0.2588*** (0.0879) [2.94] 32,663 Panel C: KZ Index of Financial Constraints Category b (se) [t-stat] n New view firms; low financial constraints 0.0787 (0.0733) [1.07] 25,004 Old view firms; high financial constraints 0.1991*** (0.0671) [2.97] 25,003 Quartile of insider ownership Range of ownership B (se) [t-stat] n Low ownership 0-0.8% 0.0012 (0.0010) [1.19] 15,338 2 0.8%-5.0% 0.0016 (0.0010) [1.62] 14,942 3 5.0%-19.4% 0.0014 (0.0009) [1.55] 14,011 High ownership 19.4%- 0.0021** (0.0009) [2.46] 12,657 42 Table 12 Firm Investment and Internal Resources under Various Tax Regimes – Control for Corporate Income Tax This table replicates regressions for investment behavior from Table 4, estimated over the 1990-2008 period, but features the corporate tax rate as an additional explanatory variable for investment. Corporate Tax is the statutory tax rate on corporate income. We additionally interact CashFlow, CashFlow*CorporateTax, and CorporateTax with the indicator variable Imp, which is equal to 1 for imputation tax systems and zero otherwise. Baseline regression controls are as in Table 4. Country-year interaction indicator variables and interactions between the corporate tax rate and cash flow are included in all specifications. Standard errors (shown in parentheses) allow for heteroskedasticity and are clustered by country-years. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Dividend Tax Rate Country-Weighted Average Tax Rate Country-Weighted Average Tax Rate Cash Flow*Tax 0.0007* 0.0012** 0.0015*** (0.0004) (0.0006) (0.0005) CashFlow* CorporateTax 0.0016 0.0016 0.0017 (0.0013) (0.0014) (0.0014) CashFlow*Imp* CorporateTax 0.0048** 0.0045** 0.0044** (0.0019) (0.0020) (0.0020) Baseline Controls Yes Yes Yes Firm FE Yes Yes Yes Country-year FE Yes Yes Yes Observations 81,222 81,222 81,222 R-squared 0.5788 0.5788 0.5788 43 Table 13 Impact of Taxation on the Cash Flow Sensitivity of Investment – Robustness to Other Macroeconomic Determinants of Investment This table reports coefficients for the cash flow*tax interaction in the linear regressions for firm investment behavior, estimated over the 1990-2008 period. Regression specifications are as in Table 8 but additional macroeconomic determinants of investment are included as controls. 
Those are Subsidies, Grants, Social Benefits, which include all government transfers on current account to private and public enterprises, and social security benefits in cash and in kind (Panel A); Military Expenditure as a fraction of GDP, which includes all current and capital expenditures on the armed forces (Panel B), Sales and Turnover Tax, which measure taxes on goods and services as a fraction of value added of industry and services (Panel C); and the R&D Expenditure as a fraction of GDP, which includes all expenditures for research and development covering basic research, applied research, and experimental development (Panel D). Standard errors (shown in parentheses) allow for heteroskedasticity and are clustered by country-years. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Dividend Tax Rate Country-Weighted Effective Tax Rate Country-Weighted Average Tax Rate (1) (2) (3) Panel A: Subsidies, Grants, Social Benefits Cash Flow *Tax 0.0012 0.0026*** 0.0018** (0.0007) (0.0007) (0.0007) Observations 41,577 41,577 41,577 R-squared 0.6044 0.6048 0.6045 Panel B: Military Expenditure Cash Flow *Tax 0.0008** 0.0021*** 0.0016*** (0.0004) (0.0006) (0.0005) Observations 81,222 81,222 81,222 R-squared 0.5780 0.5781 0.5781 Panel C: Sales and Turnover Tax Cash Flow *Tax 0.0009 0.0024** 0.0012* (0.0007) (0.0010) (0.0007) Observations 39,608 39,608 39,608 R-squared 0.6019 0.6021 0.6019 Panel D: R&D Expenditure Cash Flow *Tax 0.0004 0.0011* 0.0009* (0.0003) (0.0005) (0.0005) Observations 61,963 61,963 61,963 R-squared 0.6128 0.6128 0.6128 44 Appendix Table A.I Firm Investment and Internal Resources under Various Tax Regimes – Tests without U.S. and Japan This table replicates regressions for investment behavior from Table 4, estimated over the 1990-2008 period, but excludes firms from U.S. and Japan. Baseline regression controls are as in Table 4. Country-year interaction indicator variables are included in all specifications. In columns (2), (4), and (6) we also include the interaction of cash flow with both country and year indicator variables. Standard errors (shown in parentheses) allow for heteroskedasticity and are clustered by country-years. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Dividend Tax Rate Country-Weighted Effective Tax Rate Country-Weighted Average Tax Rate (1) (2) (3) (4) (5) (6) Cash Flow *Tax 0.0017** 0.0044*** 0.0021** 0.0055*** 0.0013* 0.0040*** (0.0007) (0.0010) (0.0009) (0.0011) (0.0007) (0.0010) Baseline Controls Yes Yes Yes Yes Yes Yes Firm FE Yes Yes Yes Yes Yes Yes Country-year FE Yes Yes Yes Yes Yes Yes Year*CashFlow No Yes No Yes No Yes Country*CashFlow No Yes No Yes No Yes Observations 30,436 30,436 30,436 30,436 30,436 30,436 R-squared 0.5214 0.5262 0.5213 0.5262 0.5212 0.5261 Table A.II Firm Investment and Internal Resources under Various Tax Regimes – Different Clusters This table replicates regressions for investment behavior from Table 4, estimated over the 1990-2008 period, but with different clusters. Baseline regression controls are as in Table 4. Country-year interaction indicator variables and interactions between the corporate tax rate and cash flow are included in all specifications. Standard errors (shown in parentheses) allow for heteroskedasticity. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. 
25 Country Clusters 220 Country-Industry Clusters (1) (2) (3) (4) (5) (6) DivTax EffTaxC AvgTaxC DivTax EffTaxC AvgTaxC Cash Flow*Tax 0.0011 0.0027** 0.0021** 0.0011* 0.0027*** 0.0021*** (0.0006) (0.0011) (0.0009) (0.0006) (0.0009) (0.0008) Baseline Controls Yes Yes Yes Yes Yes Yes Firm FE Yes Yes Yes Yes Yes Yes Country-year FE Yes Yes Yes Yes Yes Yes Year*CashFlow Yes Yes Yes Yes Yes Yes Country*CashFlow Yes Yes Yes Yes Yes Yes Observations 81,222 81,222 81,222 81,222 81,222 81,222 R-squared 0.5803 0.5805 0.5804 0.5803 0.5805 0.5804 45 Table A.III Firm Investment and Internal Resources under Various Tax Regimes – Alternative Measures of Investment This table replicates regressions for investment behavior from Table 4, estimated over the 1990-2008 period, but uses growth in plant, property, and equipment from t-1 to t as dependent variable (columns (1) to (3), Panel A). In Column (4) to (6), Panel A assets growth from t-1 to t is the dependent variable. Regressions in columns (1) to (3), Panel B use capital expenditure in year t divided by the end-of-year t-1 plant, property, and equipment (Capex/PPE) as dependent variable. In Column (4) to (6), Panel B, capital expenditure in year t divided by the end-of-year t-1 fixed assets (Capex/FA) is the dependent variable. Baseline regression controls are as in Table 4. Country-year interaction indicator variables and interactions between the corporate tax rate and cash flow are included in all specifications. Standard errors (shown in parentheses) allow for heteroskedasticity and are clustered by country-years. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Panel A: PPE Growth and Assets Growth PPE Growth Assets Growth (1) (2) (3) (4) (5) (6) DivTax EffTaxC AvgTaxC DivTax EffTaxC AvgTaxC Cash Flow*Tax 0.0041* 0.0097*** 0.0081*** 0.0043 0.0118** 0.0097** (0.0022) (0.0036) (0.0030) (0.0033) (0.0052) (0.0044) Baseline Controls Yes Yes Yes Yes Yes Yes Firm FE Yes Yes Yes Yes Yes Yes Country-year FE Yes Yes Yes Yes Yes Yes Year*CashFlow Yes Yes Yes Yes Yes Yes Country*CashFlow Yes Yes Yes Yes Yes Yes Observations 77,626 77,626 77,626 81,222 81,222 81,222 R-squared 0.4392 0.4394 0.4394 0.5501 0.5502 0.5502 Panel B: Capex/PPE and Capex/FA Capex/PPE Capex/FA (1) (2) (3) (4) (5) (6) DivTax EffTaxC AvgTaxC DivTax EffTaxC AvgTaxC Cash Flow*Tax 0.2605** 0.6234*** 0.5105*** 0.0039* 0.0079** 0.0061** (0.1189) (0.1626) (0.1346) (0.0022) (0.0031) (0.0025) Baseline Controls Yes Yes Yes Yes Yes Yes Firm FE Yes Yes Yes Yes Yes Yes Country-year FE Yes Yes Yes Yes Yes Yes Year*CashFlow Yes Yes Yes Yes Yes Yes Country*CashFlow Yes Yes Yes Yes Yes Yes Observations 78,911 78,911 78,911 80,969 80,969 80,969 R-squared 0.4350 0.4351 0.4351 0.4490 0.4491 0.4491 46 Table A.IV Firm Investment and Internal Resources under Various Tax Regimes – Alternative Measures of Internal Resources This table reports linear regression results for firm investment behavior, estimated over the 1990-2008 period. The dependent variable is Investment, defined as capital expenditure in year t divided by the end-of-year t-1 assets. We use another alternative measure of firm’s availability of internal resources for investment. NetIncome is defined as net income over prior year assets. OpIncome is defined as operating income over prior year assets. See Table 3 for a description of the other independent variables included in the regressions. Country-year interaction indicator variables are included in all specifications. 
We additionally include the interaction of NetIncome and OpIncome respectively with both country and year indicator variables. Standard errors (shown in parentheses) allow for heteroskedasticity and are clustered by country-years. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Dividend Tax Rate Country-Weighted Effective Tax Rate Country-Weighted Average Tax Rate (1) (2) (3) (4) (5) (6) NetIncome *Tax 0.0005 0.0012** 0.0010** (0.0003) (0.0006) (0.0005) OpIncome *Tax 0.0005 0.0014** 0.0011** (0.0004) (0.0006) (0.0005) Baseline Controls Yes Yes Yes Yes Yes Yes Firm FE Yes Yes Yes Yes Yes Yes Country-year FE Yes Yes Yes Yes Yes Yes Year* Income Yes Yes Yes Yes Yes Yes Country*Income Yes Yes Yes Yes Yes Yes Observations 81,188 81,120 81,188 81,120 81,188 81,120 R-squared 0.5723 0.5747 0.5723 0.5747 0.5723 0.5747 47 Table A.V Old and New View Firms and the Link between Payout Taxes and Cash Flow – Dividend Tax Rate This table presents coefficient estimates for Cash Flow*Tax interaction using the dividend tax rate (Dividend Tax C). We define firms as old view firms if predicted net proceeds from the sale/issue of common and preferred stock to lagged assets exceeds 2% (Panel A) or if previous years’ sales of shares divided by lagged book assets exceed zero (Panel B) or if the firm has low financial constraints (using the KZ Index of financial constraints, with a cutoff of 0.7, see text for details). We predict issues of common stocks by past issuances, free float, stock turnover, sales growth, leverage, size and Tobin's q. b is the coefficient estimate, (se) is the heteroskedasticity-robust standard error clustered by country-years, tstat is the t-statistic of the significance of coefficient b, and n is the number of observations. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Panel A: Predicted Equity Issues Category b (se) [t-stat] N New view firms; predicted equity issues < 2% 0.0893 (0.0589) [1.52] 21,614 Old view firms; predicted equity issues > 2% 0.1215* (0.0625) [1.94] 13,770 Panel B: Previous year Equity Issues Category B (se) [t-stat] n New view firms; last year equity issues = 0 0.1029 (0.0682) [1.51] 24,734 Old view firms; last year equity issues > 0 0.1138 (0.0700) [1.63] 32,663 Panel C: KZ Index of Financial Constraints Category b (se) [t-stat] n New view firms; low financial constraints 0.0315 (0.0689) [0.46] 25,004 Old view firms; high financial constraints 0.1261** (0.0509) [2.48] 25,003 48 Table A.VI Old and New View Firms and the Link between Payout Taxes and Cash Flow – Country-Weighted Effective Tax Rate Table A.VII Corporate Governance and the Link between Payout Taxes and Cash Flow– Dividend Tax Rate This table presents coefficient estimates for Cash Flow*Tax interaction using the statutory dividend tax rate (Dividend Tax). Firms are sorted into quartiles of insider ownership, and regressions are estimated separately for each quartile. b is the coefficient estimate, (se) is the heteroskedasticity-robust standard error clustered by country-years, t-stat is the tstatistic of the significance of coefficient b, and n is the number of observations.***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. This table presents coefficient estimates for Cash Flow*Tax interaction using the country-weighted effective tax rate (Effective Tax C). 
We define firms as old view firms if predicted net proceeds from the sale/issue of common and preferred stock to lagged assets exceeds 1% (Panel A) or if precious years’ sales of shares divided by lagged book assets exceed zero (Panel B) or if the firm has low financial constraints (using the KZ Index of financial constraints, with a cutoff of 0.7, see text for details). We predict issues of common stocks by past issuances, free float, stock turnover, sales growth, leverage, size and Tobin's q. b is the coefficient estimate, (se) is the heteroskedasticity-robust standard error clustered by country-years, t-stat is the t-statistic of the significance of coefficient b, and n is the number of observations. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Panel A: Predicted Equity Issues Category b (se) [t-stat] N New view firms; predicted equity issues < 2% 0.1125 (0.0945) [1.19] 21,614 Old view firms; predicted equity issues > 2% 0.1899* (0.1114) [1.70] 13,770 Panel B: Previous year Equity Issues Category b (se) [t-stat] n New view firms; last year equity issues = 0 0.1698* (0.0976) [1.74] 24,734 Old view firms; last year equity issues > 0 0.2759*** (0.0878) [3.14] 32,663 Panel C: KZ Index of Financial Constraints Category b (se) [t-stat] n New view firms; low financial constraints 0.1188 (0.0799) [1.49] 25,004 Old view firms; high financial constraints 0.2330*** (0.0786) [2.96] 25,003 Quartile of insider ownership Range of ownership B (se) [t-stat] n Low ownership 0-0.8% 0.0009 (0.0009) [1.0296] 15,338 2 0.8%-5.0% 0.0013* (0.0007) [1.7725] 14,942 3 5.0%-19.4% 0.0005 (0.0007) [0.6666] 14,011 High ownership 19.4%- 0.0009 (0.0006) [1.5839] 12,657 49 Table A.VIII Corporate Governance and the Link between Payout Taxes and Cash Flow– Country-Weighted Effective Tax Rate This table presents coefficient estimates for Cash Flow*Tax interaction using the country-weighted effective tax rate (Effective Tax C). Firms are sorted into quartiles of insider ownership, and regressions are estimated separately for each quartile. b is the coefficient estimate, (se) is the heteroskedasticity-robust standard error clustered by country-years, tstat is the t-statistic of the significance of coefficient b, and n is the number of observations.***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Quartile of insider ownership Range of ownership b (se) [t-stat] n Low ownership 0-0.8% 0.0009 (0.0012) [0.78] 15,338 2 0.8%-5.0% -0.0001 (0.0011) [-0.10] 14,942 3 5.0%-19.4% 0.0018* (0.0010) [1.91] 14,011 High ownership 19.4%- 0.0031*** (0.0009) [3.50] 12,657 50 Table A.IX Correlation between Tax Changes and Macroeconomic Factors This table reports correlation coefficients for 444 country-year observations. ?DivTax is the change in the dividend tax rate from t-1 to t. ?AvgTax (?EffTax) represents the change in country-weighted average (effective) payout tax rate. As macroeconomic variables we include GDP Growth, subsidies, cost for startups (Cost Startup), inflation, military expenditures and R&D expenditures by the government. P-values are shown in parentheses. Insignificant correlations (p = 0.1) are reported in italics. 
Variables (column order): ΔDivTax, ΔAvgTax, ΔEffTax, GDP Growth (t), GDP Growth (t-1), Subsidies, Cost Startup, Inflation, Military Expenditures, R&D Expenditures. Each entry reports the correlation coefficient followed by its p-value in parentheses.
ΔDivTax: 1
ΔAvgTax: 0.936 (0.000), 1
ΔEffTax: 0.985 (0.000), 0.970 (0.000), 1
GDP Growth (t): 0.112 (0.018), 0.094 (0.048), 0.117 (0.014), 1
GDP Growth (t-1): 0.153 (0.001), 0.116 (0.015), 0.145 (0.002), 0.516 (0.000), 1
Subsidies: -0.023 (0.685), -0.011 (0.849), -0.016 (0.778), -0.238 (0.000), -0.263 (0.000), 1
Cost Startup: -0.022 (0.785), -0.022 (0.790), -0.043 (0.603), 0.236 (0.004), 0.158 (0.054), 0.088 (0.311), 1
Inflation: 0.019 (0.688), 0.010 (0.826), 0.015 (0.749), -0.108 (0.019), -0.055 (0.243), -0.201 (0.000), 0.164 (0.045), 1
Military Expenditures: -0.024 (0.617), -0.021 (0.667), -0.022 (0.652), -0.029 (0.535), -0.056 (0.235), -0.150 (0.009), 0.086 (0.293), 0.067 (0.143), 1
R&D Expenditures: -0.020 (0.746), -0.003 (0.968), -0.001 (0.987), -0.218 (0.000), -0.165 (0.007), 0.336 (0.000), -0.568 (0.000), -0.515 (0.000), 0.038 (0.541), 1

Exploring the Duality between Product and Organizational Architectures: A Test of the "Mirroring" Hypothesis
Copyright © 2007, 2008, 2011 by Alan MacCormack, John Rusnak, and Carliss Baldwin. Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.

Exploring the Duality between Product and Organizational Architectures: A Test of the "Mirroring" Hypothesis
Alan MacCormack, John Rusnak, Carliss Baldwin
Working Paper 08-039

Corresponding Author:
Alan MacCormack
MIT Sloan School of Management
50 Memorial Drive E52-538
Cambridge MA 02142
alanmac@mit.edu

John Rusnak, Carliss Baldwin
Harvard Business School
Soldiers Field Park
Boston, MA 02163
cbaldwin@hbs.edu; jrusnak@hbs.edu

Abstract
A variety of academic studies argue that a relationship exists between the structure of an organization and the design of the products that this organization produces. Specifically, products tend to "mirror" the architectures of the organizations in which they are developed. This dynamic occurs because the organization's governance structures, problem solving routines and communication patterns constrain the space in which it searches for new solutions. Such a relationship is important, given that product architecture has been shown to be an important predictor of product performance, product variety, process flexibility and even the path of industry evolution. We explore this relationship in the software industry. Our research takes advantage of a natural experiment, in that we observe products that fulfill the same function being developed by very different organizational forms. At one extreme are commercial software firms, in which the organizational participants are tightly-coupled, with respect to their goals, structure and behavior. At the other are open source software communities, in which the participants are much more loosely-coupled by comparison. The mirroring hypothesis predicts that these different organizational forms will produce products with distinctly different architectures. Specifically, loosely-coupled organizations will develop more modular designs than tightly-coupled organizations. We test this hypothesis using a sample of matched-pair products. We find strong evidence to support the mirroring hypothesis. In all of the pairs we examine, the product developed by the loosely-coupled organization is significantly more modular than the product from the tightly-coupled organization. We measure modularity by capturing the level of coupling between a product's components. The magnitude of the differences is substantial – up to a factor of eight, in terms of the potential for a design change in one component to propagate to others. Our results have significant managerial implications, in highlighting the impact of organizational design decisions on the technical structure of the artifacts that these organizations subsequently develop.

Keywords: Organizational Design, Product Design, Architecture, Modularity, Open Source Software.

1. Introduction
The architecture of a product can be defined as the scheme by which the functions it performs are allocated to its constituent components (Ulrich, 1995).
Much prior work has highlighted the critical role of architecture in the successful development of a firm's new products, the competitiveness of its product portfolio and the evolution of its organizational capabilities (e.g., Eppinger et al, 1994; Ulrich, 1995; Sanderson and Uzumeri, 1995; Sanchez and Mahoney, 1996; Schilling, 2000; Baldwin and Clark, 2000; MacCormack, 2001). For any given set of functional requirements however, a number of different architectures might be considered viable. These designs will possess differing performance characteristics, in terms of important attributes such as cost, quality, reliability and adaptability. Understanding how architectures are chosen, how they are developed and how they evolve are therefore critical topics for academic research.

A variety of studies have examined the link between a product's architecture and the characteristics of the organization that develops it (Conway, 1968; Henderson and Clark, 1990; Brusoni and Prencipe, 2001; Sosa et al, 2004; Cataldo et al, 2006). Most examine a single project, focusing on the need to align team communications to the technical interdependencies in a design. In many situations however, these interdependencies are not predetermined, but are the product of managerial choices. Furthermore, how these choices are made can have a direct bearing on a firm's success. For example, Henderson and Clark (1990) show that leading firms in the photolithography industry stumbled when faced with innovations that required radical changes to the product architecture. They argue that these dynamics occur because designs tend to reflect the organizations that develop them. Given organizations are slow to change, the designs they produce can quickly become obsolete in a changing marketplace. Empirical evidence of such a relationship however, has remained elusive.

In this study, we provide evidence to support the hypothesis that a relationship exists between product and organizational designs. In particular, we use a network analysis technique called the Design Structure Matrix (DSM) to compare the design of products developed by different organizational forms. Our analysis takes advantage of the fact that software is an information-based product, meaning that the design comprises a series of instructions (or "source code") that tell a computer what tasks to perform. Given this feature, software products can be processed automatically to identify the dependencies that exist between their component elements (something that cannot be done with physical products). These dependencies, in turn, can be used to characterize a product's architecture, by displaying the information visually and by calculating metrics that capture the overall level of coupling between elements in the system.

We chose to analyze software because of a unique opportunity to examine two distinct organizational forms. Specifically, in recent years there has been a growing interest in open source (or "free") software, which is characterized by: a) the distribution of a program's source code along with the binary version of the product [1] and; b) a license that allows a user to make unlimited copies of and modifications to this product (DiBona et al, 1999).
Successful open source software projects tend to be characterized by large numbers of volunteer contributors, who possess diverse goals, belong to different organizations, work in different locations and have no formal authority to govern development activities (Raymond, 2001; von Hippel and von Krogh, 2003). In essence, they are "loosely-coupled" organizational systems (Weick, 1976). This form contrasts with the organizational structures of commercial firms, in which smaller, collocated teams of individuals sharing common goals are dedicated to projects full-time, and given formal decision-making authority to govern development. In comparison to open source communities, these organizations are much more "tightly-coupled." The mirroring hypothesis suggests that the architectures of the products developed by these contrasting forms of organization will differ significantly: In particular, open source software products are likely to be more modular than commercial software products. Our research seeks to examine the magnitude and direction of these differences.

Our paper proceeds as follows. In the next section, we describe the motivation for our research and prior work in the field that pertains to understanding the link between product and organizational architectures. We then describe our research design, which involves comparing the level of modularity of different software products by analyzing the coupling between their component elements. Next, we discuss how we construct a sample of matched product pairs, each consisting of one open source and one commercially developed product. Finally, we discuss the results of our analysis, and highlight the implications for practitioners and the academy.

[1] Commercial software is distributed in a binary form (i.e., 1's and 0's) that is executed by the computer.

2. Research Motivation
The motivation for this research comes from work in organization theory, where it has long been recognized that organizations should be designed to reflect the nature of the tasks that they perform (Lawrence and Lorsch, 1967; Burns and Stalker, 1961). In a similar fashion, transaction cost economics predicts that different organizational forms are required to solve the contractual challenges associated with tasks that possess different levels of interdependency and uncertainty (Williamson, 1985; Teece, 1986). To the degree that different product architectures require different tasks to be performed, it is natural to assume that organizations and architectures must be similarly aligned. To date however, there has been little systematic empirical study of this relationship.

Research seeking to examine this topic has followed one of two approaches. The first explores the need to match patterns of communication within a development project to the interdependencies that exist between different parts of a product's design. For example, Sosa et al (2004) examined a single jet engine project, and found a strong tendency for communications to be aligned with key design interfaces. The likelihood of "misalignment" was shown to be greater when dependencies spanned organizational and system boundaries. Similarly, Cataldo et al (2006) explored the impact of misalignment in a single software development project, and found tasks were completed more rapidly when the patterns of communication between team members were congruent with the patterns of interdependency between components.
Finally, Gokpinar et al (2006) explored the impact of misalignment in a single automotive development project, and found subsystems of higher quality were associated with teams that had aligned their communications to the technical interfaces with other subsystems.

The studies above begin with the premise that team communication must be aligned to the technical interdependencies between components in a system, the latter being determined by the system's functionality. A second stream of work however, adopts the reverse perspective. It assumes that an organization's structure is fixed in the short-term, and explores the impact of this structure on the technical designs that emerge. This idea was first articulated by Conway who stated, "any organization that designs a system will inevitably produce a design whose structure is a copy of the organization's communication structure" (Conway, 1968). The dynamics are best illustrated in Henderson and Clark's study of the photolithography industry, in which they show that market leadership changed hands each time a new generation of equipment was introduced (Henderson and Clark, 1990). These observations are traced to the successive failure of leading firms to respond effectively to architectural innovations, which involve significant changes in the way that components are linked together. Such innovations challenge existing firms, given they destroy the usefulness of the architectural knowledge embedded in their organizing structures and information-processing routines, which tend to reflect the current "Dominant Design" (Utterback, 1996). When this design is no longer optimal, established firms find it difficult to adapt.

The contrast between the two perspectives can be clarified by considering the dynamics that occur when two distinct organizational forms develop the same product. Assuming the product's functional requirements are identical, the first stream of research would assume that the patterns of communication between participants in each organization should be similar, driven by the nature of the tasks to be performed. In contrast, the second stream of research would predict that the resulting designs would be quite different, each reflecting the architecture of the organization from which it came. We define the latter phenomenon as "mirroring." A test of the mirroring hypothesis can be conducted by comparing the designs of "matched-pair" products – products that fulfill the same function, but that have been developed by different organizational forms. To conduct such a test, we must characterize these different forms, and establish a measure by which to compare the designs of products that they produce.

2.1 Organizational Design and "Loosely-Coupled" Systems
Organizations are complex systems comprising individuals or groups that coordinate actions in pursuit of common goals (March and Simon, 1958). Organization theory describes how the differing preferences, information, knowledge and skills of these organizational actors are integrated to achieve collective action. Early "classical" approaches to organization theory emphasized formal structure, authority, control, and hierarchy (i.e., the division of labor and specialization of work) as distinguishing features of organizations, building upon work in the fields of scientific management, bureaucracy and administrative theory (Taylor, 1911; Fayol, 1949; Weber, 1947; Simon, 1976).
Later scholars however, argued that organizations are best analyzed as social systems, given they comprise actors with diverse motives and values that do not always behave in a rational economic manner (Mayo, 1945; McGregor, 1960). As this perspective gained popularity, it was extended to include the link between an organization and the environment in which it operates. With this lens, organizations are seen as open systems, comprising "interdependent activities linking shifting coalitions of participants" (Scott, 1981). A key assumption is that organizations can vary significantly in their design; the optimal design for a specific mission is established by assessing the fit between an organization and the nature of the tasks it must accomplish (Lawrence and Lorsch, 1967).

Weick was the first to introduce the concept that organizations can be characterized as complex systems, comprising many elements with different levels of coupling between them (Weick, 1976; Orton and Weick, 1990). Organizational coupling can be analyzed along a variety of dimensions, however the most important of these fall into three broad categories: Goals, structure and behavior (Orton and Weick, 1990). Organizational structure, in turn, can be further decomposed to capture important differences in terms of membership, authority and location. All these dimensions represent a continuum along which organizations vary in the level of coupling between participants. When aligned, they generate two distinct organizational forms, representing opposite ends of this continuum (see Table 1). While prior work had assumed that the elements in organizational systems were coupled through dense, tight linkages, Weick argued that some organizations (e.g., educational establishments) were only loosely-coupled. Although real-world organizations typically fall between these "canonical types," they remain useful constructs for characterizing the extent to which organizations resemble one extreme or the other (Brusoni et al, 2001).

Table 1: Characterizing Different Organizational Forms
              Tightly-Coupled            Loosely-Coupled
Goals         Shared, Explicit           Diverse, Implicit
Membership    Closed, Contracted         Open, Voluntary
Authority     Formal, Hierarchy          Informal, Meritocracy
Location      Centralized, Collocated    Decentralized, Distributed
Behavior      Planned, Coordinated       Emergent, Independent

The software industry represents an ideal context within which to study these different organizational forms, given the wide variations in structure observed in this industry. At one extreme, we observe commercial software firms, which employ smaller, dedicated (i.e., full-time), collocated development teams to bring new products to the marketplace. These teams share explicit goals, have a closed membership structure, and rely on formal authority to govern their activities. At the other, we observe open source (or "free" software) communities, which rely on the contributions of large numbers of volunteer developers, who work in different organizations and in different locations (von Hippel and von Krogh, 2003). The participants in these communities possess diverse goals and have no formal authority to govern development, instead relying on informal relationships and cultural norms (Dibona et al, 1999). These forms of organization closely parallel the canonical types described above, with respect to the level of coupling between participants. They provide for a rich natural experiment, in that we observe products that perform the same function being developed in each.
2.2 Product Design, Architecture and Modularity

Modularity is a concept that helps us to characterize different designs. It refers to the way that a product's architecture is decomposed into different parts or modules. While there are many definitions of modularity, authors tend to agree on the concepts that lie at its heart: the notion of interdependence within modules and independence between modules (Ulrich, 1995). The latter concept is often called "loose coupling." Modular designs are loosely coupled in that changes made to one module have little impact on the others. Just as there are degrees of coupling, there are degrees of modularity.

The costs and benefits of modularity have been discussed in a stream of research that has sought to examine its impact on the management of complexity (Simon, 1962), product line architecture (Sanderson and Uzumeri, 1995), manufacturing (Ulrich, 1995), process design (MacCormack, 2001), process improvement (Spear and Bowen, 1999) and industry evolution (Baldwin and Clark, 2000). Despite the appeal of this work, however, few studies have used robust empirical data to examine the relationship between measures of modularity, the organizational factors assumed to influence this property, or the outcomes that it is thought to impact (Schilling, 2000; Fleming and Sorenson, 2004). Most studies are conceptual or descriptive in nature.

Studies that attempt to measure modularity typically focus on capturing the level of coupling that exists between different parts of a design. In this respect, the most promising technique comes from the field of engineering, in the form of the Design Structure Matrix (DSM). A DSM highlights the inherent structure of a design by examining the dependencies that exist between its constituent elements in a square matrix (Steward, 1981; Eppinger et al, 1994; Sosa et al, 2003). These elements can represent design tasks, design parameters or the actual components. Metrics that capture the degree of coupling between elements have been calculated from a DSM, and used to compare different architectures (Sosa et al, 2007). DSMs have also been used to explore the degree of alignment between task dependencies and project team communications (Sosa et al, 2004). Recent work extends this methodology to show how design dependencies can be automatically extracted from software code and used to understand architectural differences (MacCormack et al, 2006). In this paper, we use this method to compare designs that come from different forms of development organization.

2.3 Software Design

The measurement of modularity has gained most traction in the software industry, given that the information-based nature of the product lends itself to analytical techniques that are not possible with physical products. The formal study of software modularity began with Parnas (1972), who proposed the concept of information hiding as a mechanism for dividing code into modular units. Subsequent authors built on this work, proposing metrics to capture the level of "coupling" between modules and "cohesion" within modules (e.g., Selby and Basili, 1988; Dhama, 1995). This work complemented studies that sought to measure the complexity of software, to examine its effect on development productivity and quality (e.g., McCabe, 1976; Halstead, 1976). Whereas measures of software complexity focus on characterizing the number and nature of the elements in a design, measures of modularity focus on the patterns of dependencies between these elements.
Software can be complex (i.e., have many parts) and yet modular (i.e., have few dependencies between these parts). In prior work, this distinction is not always clear.2

Efforts to measure software modularity generally follow one of two approaches. The first focuses on identifying specific types of dependency between components in a system, for example, the number of non-local branching statements (Banker et al, 1993); global variables (Schach et al, 2002); or function calls (Banker and Slaughter, 2000; Rusovan et al, 2005). The second infers the presence of dependencies by assessing which components tend to be changed concurrently. For example, Eick et al (1999) show that code decays over time, by looking at the number of files that must be altered to complete a modification request; while Cataldo et al (2006) show that modifications involving files that tend to change along with others take longer to complete. While the inference approach avoids the need to specify the type of dependency being examined, it requires access to maintenance data that is not always captured consistently across projects. In multi-project research, extracting dependencies directly from source code is therefore preferred.

With the rise in popularity of open source software, interest in the topic of modularity has received further stimulus. Some authors argue that open source software is inherently more modular than commercial software (O'Reilly, 1999; Raymond, 2001). Others have suggested that modularity is a required property for this method of development to succeed (Torvalds, as quoted in DiBona, 1999). Empirical work to date, however, yields mixed results. Some studies criticize the number of dependencies between critical components in systems such as Linux (Schach et al, 2002; Rusovan et al, 2005). Others provide quantitative and qualitative evidence that open source products are easier to modify (Mockus et al, 2002; Paulsen et al, 2004) or have fewer interdependencies between components (MacCormack et al, 2006). None of these studies, however, conducts a rigorous apples-to-apples comparison between open source and commercially developed software; the results may therefore be driven by idiosyncrasies of the systems examined.

In this paper, we explore whether organizations with distinctly different forms – as captured by the level of coupling between participants – develop products with distinctly different architectures – as captured by the level of coupling between components. Specifically, we conduct a test of the "mirroring" hypothesis, which can be stated as follows: Loosely-coupled organizations will tend to develop products with more modular architectures than tightly-coupled organizations. We use a matched-pair design to control for differences in architecture that are related to differences in product function. We build upon recent work that highlights how DSMs can be used to visualize and measure software architecture (Lopes and Bajracharya, 2005; MacCormack et al, 2006).

3. Research Methods

There are two choices to make when applying DSMs to a software product: the unit of analysis and the type of dependency.3
With regard to the former, there are several levels at which a DSM can be built: the directory level, which corresponds to a group of source files that pertain to a specific subsystem; the source file level, which corresponds to a collection of related processes and functions; and the function level, which corresponds to a set of instructions that perform a specific task. We analyze designs at the source file level for a number of reasons. First, source files tend to contain functions with a similar focus. Second, tasks and responsibilities are allocated to programmers at the source file level, allowing them to maintain control over all the functions that perform related tasks. Third, software development tools use the source file as the unit of analysis for version control. And finally, prior work on design uses the source file as the primary unit of analysis (e.g., Eick et al, 1999; Rusovan et al, 2005; Cataldo et al, 2006).4

There are many types of dependency between source files in a software product.5 We focus on one important dependency type – the "Function Call" – used in prior work on design structure (Banker and Slaughter, 2000; Rusovan et al, 2005). A Function Call is an instruction that requests a specific task to be executed. The function called may or may not be located within the source file originating the request. When it is not, this creates a dependency between two source files, in a specific direction. For example, if FunctionA in SourceFile1 calls FunctionB in SourceFile2, then we note that SourceFile1 depends upon (or "uses") SourceFile2. This dependency is marked in location (1, 2) in the DSM. Note that this does not imply that SourceFile2 depends upon SourceFile1; the dependency is not symmetric unless SourceFile2 also calls a function in SourceFile1.

To capture function calls, we input a product's source code into a tool called a "Call Graph Extractor" (Murphy et al, 1998). This tool is used to obtain a better understanding of system structure and interactions between parts of the design.6 Rather than develop our own extractor, we tested several commercial products that could process source code written in both procedural and object-oriented languages (e.g., C and C++), capture indirect calls (dependencies that flow through intermediate files), run in an automated fashion, and output data in a format that could be input to a DSM. A product called Understand C++7 was selected given that it best met all these criteria.

The DSM of a software product is displayed using the Architectural View. This groups each source file into a series of nested clusters defined by the directory structure, with boxes drawn around each successive layer in the hierarchy. The result is a map of dependencies, organized by the programmer's perception of the design. To illustrate, the Directory Structure and Architectural View for Linux v0.01 are shown in Figure 1. Each "dot" represents a dependency between two particular components (i.e., source files).
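To make the bookkeeping concrete, the sketch below shows one way extracted call data could be turned into a DSM and summarized. It is a minimal illustration under stated assumptions, not the authors' implementation: the input format, the file names, and the simple density metric are ours, and a real analysis would consume the output of a call-graph extractor such as the one described above and use the paper's own measures.

    # Hypothetical input: (calling file, called file) pairs, as a call-graph
    # extractor might report them after mapping functions to source files.
    calls = [
        ("boot.c", "kernel.c"),
        ("kernel.c", "sched.c"),
        ("sched.c", "kernel.c"),
        ("fs.c", "kernel.c"),
    ]

    # Index each source file so it gets a row and column in the square matrix.
    files = sorted({f for pair in calls for f in pair})
    index = {name: i for i, name in enumerate(files)}
    n = len(files)

    # Build the DSM: dsm[i][j] = 1 means file i depends on (calls into) file j.
    dsm = [[0] * n for _ in range(n)]
    for caller, callee in calls:
        if caller != callee:                      # ignore within-file calls
            dsm[index[caller]][index[callee]] = 1

    # A simple coupling summary: the share of possible off-diagonal cells that
    # contain a dependency (a lower value suggests a more modular design).
    possible = n * (n - 1)
    density = sum(map(sum, dsm)) / possible if possible else 0.0

    for name, row in zip(files, dsm):
        print(f"{name:10s} {row}")
    print(f"dependency density = {density:.2f}")

The essential step is the same one the text describes: a square matrix with a mark at location (i, j) whenever file i uses file j, from which coupling metrics can then be computed.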
2 In some fields, complexity is defined to include inter-element interactions (Rivkin and Siggelkow, 2007).
3 The methods we describe here build on prior work in this field (see MacCormack et al, 2006; 2007).
4 Metaphorically, source files are akin to the physical components in a product, whereas functions are akin to the nuts and bolts that comprise these components.
5 Several authors have developed comprehensive categorizations of dependency types (e.g., Shaw and Garlan, 1996; Dellarocas, 1996). Our work focuses on one important type of dependency.
6 Function calls can be extracted statically (from the source code) or dynamically (when the code is run). We use a static call extractor because it uses source code as input, does not rely on program state (i.e., what the system is doing at a point in time), and captures the system structure from the designer's perspective.
7 Understand C++ is distributed by Scientific Toolworks, Inc.

Reinventing Savings Bonds
Harvard Business School Working Paper Series, No. 06-017
Copyright © 2005

Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.

Reinventing Savings Bonds
Peter Tufano, Harvard Business School and NBER and D2D Fund
Daniel Schneider, Harvard Business School

Reinventing Savings Bonds*

Savings Bonds have always served multiple objectives: funding the U.S. government, democratizing national financing, and enabling families to save. Increasingly, this last goal has been ignored. A series of efficiency measures introduced in 2003 make these bonds less attractive and less accessible to savers. Public policy should go in the opposite direction: U.S. savings bonds should be reinvigorated to help low and moderate income (LMI) families build assets. More and more, these families' saving needs are ignored by private sector asset managers and marketers. With a few relatively modest changes, the Savings Bond program can be reinvented to help these families save, while still increasing the efficiency of the program as a debt management device. Savings bonds provide market-rate returns, with no transaction costs, and are a useful commitment savings device. Our proposed changes include (a) allowing Federal taxpayers to purchase bonds with tax refunds; (b) enabling LMI families to redeem their bonds before twelve months; (c) leveraging private sector organizations to market savings bonds; and (d) contemplating a role for savings bonds in the life cycles of LMI families.

Peter Tufano
Harvard Business School and NBER and D2D Fund
Soldiers Field
Boston, MA 02163
ptufano@hbs.edu

Daniel Schneider
Harvard Business School
Soldiers Field
Boston, MA 02163
dschneider@hbs.edu

* We would like to thank officials at the Bureau of Public Debt (BPD) for their assistance locating information on the Savings Bonds program. We would also like to thank officials from BPD and the Department of the Treasury, Fred Goldberg, Peter Orszag, Anne Stuhldreher, Bernie Wilson, Lawrence Summers, Jim Poterba and participants at the New America Foundation/Congressional Savings and Ownership Caucus and the Consumer Federation of America/America Saves Programs for useful comments and discussions. Financial support for this research project was provided by the Division of Research of the Harvard Business School. Any opinions expressed are those of the authors and not those of any of the organizations above. For the most up to date version of this paper, please visit http://www.people.hbs.edu/ptufano.

I. Introduction

In a world in which financial products are largely sold and not bought, savings bonds are a quaint oddity. First offered as Liberty Bonds to fund World War I and then as Baby Bonds 70 years ago, savings bonds seem out of place in today's financial world. While depository institutions and employers nominally market these bonds, they have few incentives to actively sell them. As financial institutions move to serve up-market clients with higher profit margin products, savings bonds receive little if any marketing or sales attention. Even the Treasury seems uninterested in marketing them.
In 2003, the Treasury closed down the 41 regional marketing offices for savings bonds and has zeroed-out the budget for the marketing office, staff, and ad buys from $22.4 million to $0. (Block (2003)). No one seems to have much enthusiasm for selling savings bonds. Maybe this lack of interest is sensible. After all, there are many financial institutions selling a host of financial products in a very competitive financial environment. The very name “Savings Bonds” is out of touch; it is unfashionable to think of ourselves as “savers.” We are now “investors.” We buy investment products and hold our “near cash” in depository institutions or money market mutual funds. Saving is simply passé, and American families’ savings rate has dipped to its lowest point in recent history. Even if we put aside the macro-economic debate on the national savings rate, there is little question that lower income Americans would be well served with greater savings. Families need enough savings to withstand temporary shocks to income, but a shockingly large fraction don’t even have enough savings to sustain a few months of living expenses (see Table I). Financial planners often advise that families have sufficient liquid assets to replace six months of household income in the event of an emergency. Yet, only 22% of households, and only 19% of LMI households, meet this standard. Fewer than half (47%) of US households, and only 29% of LMI households, have sufficient liquid assets to meet their own stated emergency savings goals. Families do somewhat better when financial assets in retirement accounts are included, but even then more than two-thirds of households do not have sufficient savings to replace six months of income. And while the financial landscape may be generally competitive, there are low-profit pockets where competition cannot be counted upon to solve all of our problems. While it may be profitable to sell low income families credit cards, sub-prime loans, payday loans or check cashing services, there is no rush to offer them savings products. A not insubstantial number of them may have prior credit records that lead depository institutions to bar them from opening even savings accounts. Many do not have the requisite minimum balances of $2500 or $3000 that most money market mutual funds demand. Many of them are trying to build assets, but their risk profile 3 cannot handle the potential principal loss of equities or equity funds. Many use alternative financial services, or check cashing outlets, as their primary financial institution, but these firms do not offer asset building products. For these families, old-fashioned U. S. savings bonds offer an investment without any risk of principal loss due to credit or interest rate moves, while providing a competitive rate of return with no fees. Bonds can be bought in small denominations, rather than requiring waiting until the saver has amassed enough money to meet some financial institution’s minimum investment requirements. And finally, bonds have an “out-of-sight and out-of-mind” quality, which fits well with the mental accounting consumers use to artificially separate spending from saving behavior. Despite all of these positives, we feel the savings bond program needs to be reinvigorated to enhance its role in supporting family saving. In the current environment, the burden is squarely on these families to find and buy the bonds. Financial institutions and employers have little or no incentives to encourage savers to buy bonds. 
The government has eliminated its bond marketing program. Finally, by pushing the minimum holding period up to twelve months, the program is discouraging low-income families, who might face a financial emergency, from investing in them. We feel these problems can and should be solved, so that savings bonds can once again become a strong part of families’ savings portfolios. At one point in American history, savings bonds were an important tool for families to build assets to get ahead. They were “designed for the small investor – that he may be encouraged to save for the future and receive a fair return on his money” (US Department of the Treasury (1935)). While times have changed, this function of savings bonds may be even more important now. Our set of recommendations is designed to make savings bonds a viable asset building device for low to moderate income Americans, as well as reduce the cost to sell them to families. The proposal reflects an important aspect of financial innovation. Often financial innovations from a prior generation are reinvented by a new generation. The convertible preferred stock that venture capitalists use to finance high tech firms was used to finance railroads in the nineteenth century. Financiers of these railroads invented income bonds, which have been refined to create trust preferred securities, a popular financing vehicle. The “derivatives revolution” began centuries ago, when options were bought and sold on the Amsterdam Stock Exchange. Wise students of financial innovation realize that old products can often be re-invented to solve new problems. Here, we lay out a case for why savings bonds, an invention of the 20 th century, can and should be re-imagined to help millions of Americans build assets now. In section 2, we briefly describe why LMI families might not be fully served by private sector savings opportunities. In section 3, we briefly recount the history of savings bonds and fast forward to discuss their role in 4 the current financial services world. In section 4, we discuss our proposal to reinvent savings bonds as a legitimate device for asset building for American families. An important part of our proposal involves the tax system, but our ideas do not involve any new tax provisions or incentives. Rather, we make proposals about how changes to the “plumbing” of the tax system can help revitalize the savings bond program and support family savings. 2. An Unusual Problem: Nobody Wants My Money! 1 In our modern world, where many of us are bombarded by financial service firms seeking our business, why would we still need or want a seventy year old product like savings bonds? To answer this question, we have to understand the financial services landscape of low and moderate income Americans, which for our discussion includes the 41 million American households who earn under $30,000 a year or the 24 million households with total financial assets under $500 or the more than 18 million US households making less than $30,000 a year and holding less than $500 in financial assets (Survey of Consumer Finances (2001)) and Current Population Survey (2002)). In particular, we need to understand asset accumulation strategies for these families, their savings goals, and their risk tolerances. But we also need to understand the motives of financial service firms offering asset-building products. In generic terms, asset gatherers and managers must master a simple profit equation: revenues must exceed costs. 
Costs include customer acquisition, customer servicing, and the expense of producing the investment product. Customer acquisition and servicing costs are not necessarily any less for a small account than for a large one. Indeed, if the smaller accounts are sufficiently "different" they can be quite costly: if held by people who speak different languages, require more explanations, or are not well understood by the financial institution. The costs of producing the product would include the investment management expenses for a mutual fund or the costs of running a lending operation for a bank. On the revenue side, the asset manager could charge the investor a fixed fee for its services. However, industry practice is to charge a fee that is a fraction of assets under management (as in the case of a mutual fund, which charges an expense ratio) or to give the investor only a fraction of the investment return (in the classic "spread banking" practiced by depository institutions). The optics of the financial service business are to take the fee out of the return earned by the investor as an "implicit fee," to avoid the sticker shock of having to charge an explicit fee for services. Financial services firms can also earn revenues if they can subsequently sell customers other high margin products and services, the so-called "cross-sell."

At the risk of oversimplifying, our asset manager can earn a profit on an account if:

Size of Account x (Implicit Fee in %) – Marginal Costs to Serve > 0

Because implicit fees are netted from the gross investment returns, they are limited by the size of these returns (because otherwise investors would suffer certain principal loss). If an investor is risk averse and chooses to invest in low-risk/low-return products, fees are constrained by the size of the investment return. For example, when money market investments are yielding less than 100 bp, it is infeasible for a money market mutual fund to charge expenses above 100 bp. Depository institutions like banks or credit unions face a less severe problem, as they can invest in high risk projects (loans) while delivering low risk products to investors by virtue of government supplied deposit insurance. Given even relatively low fixed costs per client and implicit fees that must come out of revenue, the importance of having large accounts (or customers who can purchase a wide range of profitable services) is paramount. At a minimum, suppose that statements, customer service costs, regulatory costs, and other "sundries" cost $30 per account per year. A mutual fund that charges 150 bp in expense ratios would need a minimum account size of $30/.015 = $2,000 just to break even. A bank that earns a net interest margin between lending and borrowing activities of 380 bp would need a minimum account size of $30/.038 = $790 to avoid a loss (Carlson and Perli (2004)).
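The break-even arithmetic above is simple enough to check directly. The short sketch below only restates that calculation; the $30 servicing cost, 150 bp expense ratio, and 380 bp net interest margin are the paper's illustrative figures, while the function name is ours.

    def breakeven_account_size(annual_cost_per_account: float,
                               implicit_fee_rate: float) -> float:
        """Smallest account size at which implicit fees cover servicing costs.

        Derived from the condition in the text:
            size x implicit fee (%) - marginal cost to serve > 0
        """
        return annual_cost_per_account / implicit_fee_rate

    # Illustrative figures used in the text: $30 per account per year,
    # a 150 bp expense ratio for a mutual fund, a 380 bp margin for a bank.
    mutual_fund = breakeven_account_size(30.0, 0.015)   # about $2,000
    bank        = breakeven_account_size(30.0, 0.038)   # about $790

    print(f"mutual fund break-even: ${mutual_fund:,.0f}")
    print(f"bank break-even:        ${bank:,.0f}")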
Acquisition costs make having large and sticky accounts even more necessary. The cost per new account appears to vary considerably across companies, but is substantial. The industry-wide average for traditional banks is estimated at $200 per account (Stone (2004)). Individual firms have reported lower figures. TD Waterhouse spent $109 per new account in the fourth quarter of 2001 (TD Waterhouse (2001)). T Rowe Price spent an estimated $195 for each account it acquired in 2003.2 H&R Block, the largest retail tax preparation company in the United States, had acquisition costs of $130 per client (Tufano and Schneider (2004)). One can justify this outlay only if the account is large, will purchase other follow-on services, or will be in place for a long time.

Against this backdrop, an LMI family that seeks to build up its financial assets faces an uphill battle. Given the risks that these families face and the thin margin of financial error they perceive, they seem to prefer low risk investments, which have more constrained fee opportunities for financial service vendors. By definition, their account balances are likely to be small. With respect to cross-sell, financial institutions might be leery of selling LMI families profitable products that might expose the financial institutions to credit risk. Finally, what constitute inconveniences for wealthier families (e.g., a car breakdown or a water heater failure) can constitute emergencies for LMI families that deplete their holdings, leading to less sticky assets.

1 Portions of this section are adapted from an earlier paper, Schneider and Tufano, 2004, "New Savings from Old Innovations: Asset Building for the Less Affluent," New York Federal Reserve Bank, Community Development Finance Research Conference.
2 The cost per new account estimate is based on a calculation using data on the average size of T Rowe Price accounts, the amount of new assets in 2003, and annual marketing expenses. Data are drawn from T Rowe Price (2003), Sobhani and Shteyman (2003), and Hayashi (2004).

These assertions about LMI financial behavior are borne out with scattered data. Table II and Table III report various statistics about U.S. financial services activity by families sorted by income. The preference of LMI families for low-risk products is corroborated by their revealed investment patterns, as shown by their substantially lower ownership rates of equity products. Low income families were less likely to hold every type of financial asset than high income families. However, the ownership rate for transaction accounts among families in the lowest income quintile was 72% of that of families in the highest income decile, while the ownership rate among low-income families for stocks was only 6%, and for mutual funds just 7%, of the rate for high-income families. The smaller size of financial holdings by the bottom income quintile of the population is quite obvious. Even if they held all of their financial assets in one institution, the bottom quintile would have a median balance of only $2,000 (after excluding the 25.2% with no financial assets of any kind).

The likelihood that LMI family savings will be drawn down for emergency purposes has been documented by Schreiner, Clancy, and Sherraden (2002) in their national study of Individual Development Accounts (matched savings accounts intended to encourage asset building through savings for homeownership, small business development, and education). They find that 64% of participants made a withdrawal to use funds for a non-asset building purpose, presumably one pressing enough that it was worth foregoing matching funds. In our own work (Beverly, Schneider, and Tufano (2004)), we surveyed a selected set of LMI families about their savings goals. Savings for "emergencies" was the second most frequent savings goal (behind unspecified savings), while long horizon saving for retirement was a goal for only 5% of households. A survey of the 15,000 participants in the America Saves program found similar results, with 40% of respondents listing emergency savings as their primary savings goal (American Saver (2004)).
The lower creditworthiness of LMI families is demonstrated by the lower credit scores of LMI individuals and the larger shares of LMI families reporting having past due bills. 3 Given the economics of LMI families and of most financial services firms, a curious equilibrium has emerged. With a few exceptions, firms that gather and manage assets are simply not very interested in serving LMI families. While their “money is as green as anyone else’s,” the 3 Bostic, Calem, and Wachter (2004) use data from the Federal Reserve and the Survey of Consumer Finances (SCF) to show that 39% of those in the lowest income quintile were credit constrained by their credit scores (score of less than 660) compared with only 2.8% of families in the top quintile and only 10% of families in the fourth quintile. A report from Global Insight (2003) also using data from the SCF finds that families in the bottom two quintiles of income were more than three times as likely to have bills more than 60 days past due than families in the top two quintiles of income. 7 customers are thought too expensive to serve, their profit potential too small, and, as a result, the effort better expended elsewhere. While firms don’t make public statements to this effect, the evidence is there to be seen. • Among the top ten mutual funds in the country, eight impose minimum balance restrictions upwards of $250. Among the top 500 mutual funds, only 11% had minimum initial purchase requirements of less than $100 (Morningstar (2004)). See Table IV. • Banks routinely set minimum balance requirements or charge fees on low balances, in effect discouraging smaller savers. Nationally, minimum opening balance requirements for statement savings accounts averaged $97, and required a balance of at least $158 to avoid average yearly fees of $26. These fees were equal to more than a quarter of the minimum opening balance, a management fee of 27%. Fees were higher in the ten largest Metropolitan Statistical Areas (MSAs), with average minimum opening requirements of $179 and an average minimum balance to avoid fees of $268 (Board of Governors of the Federal Reserve (2003)). See Table V. While these numbers only reflect minimum opening balances, what we cannot observe is the level of marketing activity (or lack thereof) directed to raising savings from the poor. • Banks routinely use credit scoring systems, like ChexSystems to bar families from becoming customers, even from opening savings accounts which pose minimal, if any, credit risks. Over 90% of bank branches in the US use the system, which enables banks to screen prospective clients for problems with prior bank accounts and to report current clients who overdraw accounts or engage in fraud (Quinn (2001)). Approximately seven million people have ChexSystems records (Barr (2004)). While ChexSystems was apparently designed to prevent banks from making losses on checking accounts, we understand that it is not unusual for banks to use it to deny customers any accounts, including savings accounts. Conversations with a leading US bank suggest that policy arises from the inability of bank operational processes to restrict a customer’s access to just a single product. In many banks, if a client with a ChexSystems record were allowed to open a savings account, she could easily return the next day and open a checking account. 
• Banks and financial services firms have increasingly been going “up market” and targeting the consumer segment known as the “mass affluent,” generally those with over $100,000 in investible assets. Wells Fargo’s Director of investment consulting noted that “the mass affluent are very important to Wells Fargo” (Quittner (2003) and American Express Financial Advisors’ Chief Marketing Officers stated that, “Mass affluent clients have special investment needs… Platinum and Gold Financial Services (AEFA products) 8 were designed with them in mind” (“Correcting and Replacing” (2004)). News reports have detailed similar sentiments at Bank of America, Citi Group, Merrill Lynch, Morgan Stanley, JP Morgan, Charles Schwab, Prudential, and American Express. • Between 1975 and 1995 the number of bank branches in LMI neighborhoods declined by 21%. While declining population might explain some of that reduction (per capita offices declined by only 6.4%), persistently low-income areas, those that that were poor over the period of 1975 -1995, experienced the most significant decline; losing 28% of offices, or a loss of one office for every 10,000 residents. Low income areas with relatively high proportions of owner-occupied housing did not experience loss of bank branches, but had very few to begin with (Avery, Bostic, Calem, and Caner (1997)). • Even most credit unions pay little attention to LMI families, focusing instead on better compensated occupational groups. While this tactic may be profitable, credit unions enjoy tax free status by virtue of provisions in the Federal Credit Union Act, the text of which mandates that credit unions provide credit “to people of small means” (Federal Credit Union Act (1989)). Given their legislative background, it is interesting that the median income of credit union members is approximately $10,000 higher than that of the median income of all Americans (Survey of Consumer Finances (2001)) and that only 10% of credit unions classify themselves as “low income,” defined as half members having incomes of less than 80% of the area median household income (National Credit Union Administration (2004) and Tansey (2001)). • Many LMI families have gotten the message, and prefer not to hold savings accounts citing high minimum balances, steep fees, low interest rates, problems meeting identification requirements, denials by banks, and a distrust of banks (Berry (2004)). • Structurally, we have witnessed a curious development in the banking system. The traditional payment systems of banks (e.g., bill paying and check cashing) have been supplanted by non-banks in the form of alternative financial service providers such as check cashing firms. These same firms have also developed a vibrant set of credit products in the form of payday loans. However, these alternative financial service providers have not chosen to offer asset building or savings products. Thus, the most active financial service players in many poor communities do not offer products that let poor families save and get ahead. This stereotyping of the financial service world obviously does not do justice to a number of financial institutions that explicitly seek to serve LMI populations’ asset building needs. This includes Community Development Credit Unions, financial institutions like ShoreBank in 9 Chicago, and the CRA-related activities of the nation’s banks. 
However, we sadly maintain that these are exceptions to the rule, and the CRA-related activities, while real, are motivated by regulations and not intrinsically by the financial institutions. We are reminded about one subtle—but powerful—piece of evidence about the lack of interest of financial institutions in LMI asset building each year. At tax time, many financial institutions advertise financial products to help families pay less in taxes: IRAs, SEP-IRAs, and KEOGHs. These products are important—for taxpayers. However, LMI families are more likely refund recipients, by virtue of the refundable portions of the Earned Income Tax Credit (EITC), the Child Tax Credit (CTC), and refunds from other sources which together provided over $78 billion in money to LMI families in 2001, mostly early in the year around February (refund recipients tend to file their taxes earlier than payers) (Internal Revenue Service (2001)). With the exception of H&R Block, which has ongoing pilot programs to help LMI families save some of this money, financial institutions seem unaware—and uninterested—in the prospect of gathering some share of a $78 billion flow of assets (Tufano and Schneider (2004)). “Nobody wants my money” may seem like a bit of an exaggeration, but it captures the essential problem of LMI families wanting to save. “Christmas Club” accounts, where families deposited small sums regularly, have all but disappeared. While they are not barred from opening bank accounts or mutual fund accounts, LMI families could benefit from a low risk account with low fees, which delivers a competitive rate of return, with a small minimum balance and initial purchase price, and which is available nationally and portable if the family moves from place to place. The product has to be simple, the vendor trustworthy, and the execution easy—because the family has to do all the work. Given these specifications, savings bonds seem like a good choice. 3. U. S. Savings Bonds: History and Recent Developments A. A Brief History of Savings Bonds Governments, including the U.S. government, have a long tradition of raising monies by selling bonds to the private sector, including large institutional investors and small retail investors. U.S. Treasury bonds fall into the former group and savings bonds the latter. The U. S. is not alone in selling small denomination bonds to retail investors; since the 1910s, Canada has offered its residents a form of Canada Savings Bonds. 4 Generally, huge demands for public debt, occasioned by wartime, have given rise to the most concerted savings bond programs. The earliest bond issue by the US was conducted in 1776 to finance the revolutionary war. Bonds were issued 10 again to finance the War of 1812, the Civil War, the Spanish American War, and with the onset of World War I, the Treasury Department issued Liberty Bonds, mounting extensive marketing campaigns to sell the bonds to the general public (Cummings (1920)). The bond campaign during World War II is the best known of these efforts, though bonds were also offered in conjunction with the Vietnam War and, soon after the terrorist attacks in 2001, the government offered the existing EE bonds as “Patriot Bonds” in order to allow Americans to “express their support for anti-terrorism efforts” (US Department of the Treasury (2002)). During these war-time periods, bond sales have been tied to patriotism. 
World War I campaigns asked Americans to “buy the “Victorious Fifth” Liberty Bonds the way our boys fought in France – to the utmost” (Liberty Loan Committee (1919)). World War II era advertisements declared, “War bonds mean bullets in the bellies of Hitler’s hordes” (Blum (1976)). The success of these mass appeals to patriotism was predicated on bonds being accessible and affordable to large numbers of Americans. Both the World War I and World War II bond issues were designed to include small savers. While the smallest denomination Liberty Bond was $100, the Treasury also offered Savings Stamps for $5, as well as the option to purchase “Thrift Stamps” in increments of 25 cents that could then be redeemed for a Savings Stamp (Zook (1920)). A similar system was put in place for the World War II era War Bonds. While the smallest bond denomination was $25, Defense Stamps were sold through Post Offices and schools for as little as 10 cents and were even given as change by retailers (US Department of the Treasury (1984), US Department of the Treasury (1981)). Pasted in albums, these stamps were redeemable for War Bonds. The War Bonds campaign went further than Liberty Bonds to appeal to small investors. During World War II, the Treasury Department oriented its advertising to focus on small savers, choosing popular actors and musicians that the Treasury hoped would make the campaign “pluralistic and democratic in taste and spirit” (Blum (1976)). In addition to more focused advertising, changes to the terms of War Bonds made them more appealing to these investors. The bonds were designed to be simple. Unlike all previous government bond issues, they were not marketable and were protected from theft (US Department of the Treasury (1984)). Many of these changes to the bond program had actually been put in place before the war. In 1935, the Treasury had introduced the “Savings Bond” (the basis for the current program) with the intention that it “appeal primarily to individuals with small amounts to invest” (US Department of the Treasury (1981)). The Savings Bond was not the first effort by the Treasury to encourage small investors to save during a peace time period. Following World War I and the Liberty Bond 4 Brennan and Schwartz (1979) provide an introduction to Canadian Savings Bonds as well as the savings bond offerings of a number of European countries. For current information on Canadian Savings Bonds see 11 campaigns, the Treasury decided to continue its promotion of bonds and stamps. It stated that in order to: Make war-taught thrift and the practice of saving through lending to the Government a permanent and happy habit of the American people, the United States Treasury will conduct during 1919 an intensive movement to promote wise spending, intelligent saving, and safe investment (US Department of the Treasury (1918)). The campaign identified seven principal reasons to encourage Americans to save including: (1) “Advancement” which was defined as savings for “a definite concrete motive, such as buying a home…an education, or training in trade, profession or art, or to give children educational advantages,” (2) “Motives of self interest” such as “saving for a rainy day,” and (3) “Capitalizing part of the worker’s earnings,” by “establishing the family on ‘safety lane’ if not on ‘easy street’” (US Department of the Treasury (1918)). 
Against this background, it seems clear that the focus of savings bonds on the “small saver” was by no means a new idea, but rather drew inspiration from the earlier “thrift movement” while attempting to tailor the terms of the bonds more precisely to the needs of small savers. However, even on these new terms, the new savings bonds (also called “baby bonds”) did not sell quickly. In his brief, but informative, summary of the 1935 bond introduction, Blum details how: “At first sales lagged, but they picked up gradually under the influence of the Treasury’s promotional activities, to which the Secretary gave continual attention. By April 18, 1936, the Department had sold savings bonds with a maturity value of $400 million. In 1937 [Secretary of the Treasury] Morgenthau enlisted the advertising agency of Sloan and Bryan, and before the end of that year more than 1,200,000 Americans had bought approximately 4 1/2 million bonds with a total maturity value of over $1 billion” (Blum (1959)). Americans planned to use these early savings bonds for much the same things that low-income Americans save for now, first and foremost, for emergencies (Blum (1959)). The intent of the program was not constrained to just providing a savings vehicle. The so-called “Baby-bond” allowed all Americans the opportunity to invest even small amounts of money in a governmentbacked security, which then-Secretary of the Treasury Morgenthau saw as a way to: “Democratize public finance in the United States. We in the Treasury wanted to give every American a direct personal stake in the maintenance of sound Federal Finance. Every man and woman who owned a Government Bond, we believed, would serve as a bulwark against the constant threats to Uncle Sam’s pocketbook from pressure blocs and special-interest groups. In short, we wanted to the ownership of America to be in the hands of the American people” (Morgenthau, (1944)). In theory, the peacetime promotion of savings bonds as a valuable savings vehicle with both public and private benefits continues. From the Treasury’s web site, we can gather its “pitch” to would-be buyers of bonds focuses on the private benefits of owning bonds: http://www.csb.gc.ca/eng/resources_faqs_details.asp?faq_category_ID=19 (visited September 26, 2004). 12 “There's no time like today to begin saving to provide for a secure tomorrow. Whether you're saving for a new home, car, vacation, education, retirement, or for a rainy day, U.S. Savings Bonds can help you reach your goals with safety, market-based yields, and tax benefits” (US Department of the Treasury (2004a)). But the savings bond program, as it exists today, does not seem to live up to this rhetoric, as we discuss below. Recent policy decisions reveal much about the debate over savings bonds as merely one way to raise money for the Treasury versus their unique ability to help families participate in America and save for their future. As we keep score, the idea that savings bonds are an important tool for family savings seems to be losing. B. Recent debates around the Savings Bond program and program changes Savings bonds remain an attractive investment for American families. In Appendix A we provide details on the structure and returns of bonds today. 
In brief, the bonds offer small investors the ability to earn fairly competitive tax advantage returns on a security with no credit risk and no principal loss due to interest rate exposure, in exchange for a slightly lower yield relative to large denomination bonds and possible loss of some interest in the event the investor needs to liquidate her holdings before five years. As we argue below and discuss in Appendix B, the ongoing persistence of the savings bond program is testimony to their attractiveness to investors. As we noted, both current and past statements to consumers about savings bonds suggest that Treasury is committed to making them an integral part of household savings. Unfortunately, the changes to the program over the past two years seem contrary to this goal. Three of these changes may make it more difficult for small investors and those least well served by the financial service community to buy bonds and save for the future. More generally, the structure of the program seems to do little to promote the sale of the bonds. On January 17 th , 2003, the Department of the Treasury promulgated a rule that amends section 31 of the CFR to increase the minimum holding period before redemption for Series EE and I Bonds from 6 months to 12 months for all newly issued bonds (31 CFR part 21 (2003)). In rare cases, savings bonds may be redeemed before 12 months, but generally only in the event of a natural disaster (US Department of the Treasury (2004b)). This increase in the minimum holding period essentially limits the liquidity of a bondholder’s investment, which is most important for LMI savers who might be confronted with a family emergency that requires that they liquidate their bonds within a year. By changing the minimum initial holding periods, the Department of the Treasury makes it bonds less attractive for low-income families. 13 The effect this policy change seems likely to have on small investors, particularly those with limited means, appears to be unintended. Rather, this policy shift arises out of concern over rising numbers of bondholders keeping their bonds for only the minimum holding period in order to maximize their returns in the short term. Industry observers have noted that given the low interest rates available on such investment products as CDs or Money market funds, individuals have been purchasing Series EE bonds and I bonds, holding them for 6 months, paying the interest penalty for cashing out early, but still clearing a higher rate of interest than they might find elsewhere (Pender (2003)). The Department of the Treasury cited this behavior as the primary factor in increasing the minimum holding period. Officials argue that this amounts to “taking advantage of the current spread between savings bond returns and historically low short-term interest rates,” an activity which they believe contravenes the nature of the savings bond as a long term investment vehicle (US Department of the Treasury (2003a)). Second, marketing efforts for savings bonds have been eliminated. Congress failed to authorize $22.4 million to fund the Bureau of Public Debt’s marketing efforts and on September 30, 2003, the Treasury closed all 41 regional savings bond marketing offices and cut 135 jobs. This funding cut represents the final blow to what was once a large and effective marketing strategy. 
Following the Liberty Bond marketing campaign, as part of the “thrift movement” the Treasury continued to advertise bonds, working through existing organizations such as schools, “women’s organizations,” unions, and the Department of Agriculture’s farming constituency (Zook (1920)). Morgenthau’s advertising campaign for Baby Bonds continued the marketing of bonds through the 1930’s, preceding the World War II era expansion of advertising in print and radio (Blum (1959)). Much of this war-time advertising was free to the government, provided as a volunteer service through the Advertising Council beginning in 1942. Over the next thirty years, the Ad Council arranged for contributions of advertising space and services worth hundreds of millions of dollars (US Department of the Treasury, Treasury Annual Report (1950-1979)). In 1970, the Treasury discontinued the Savings Stamps program, which it noted was one of “the Bond program’s most interesting (and promotable) features” (US Department of the Treasury (1984)). The Advertising Council ended its affiliation with the Bond program in 1980, leaving the job of marketing bonds solely to the Treasury (Advertising Council (2004)). In 1999, the Treasury began a marketing campaign for the newly introduced I bonds. However, that year the Bureau spent only $2.1 million on the campaign directly and received just $13 million in donated advertising, far short of the $73 million it received in donated advertising in 1975 (James (2000) and US Department of the Treasury, Treasury Annual Report (1975)). Third, while not a change in policy, the current program provides little or no incentive for banks or employers to sell bonds. Nominally, the existing distribution outlets for bonds are quite 14 extensive, including financial institutions, employers, and the TreasuryDirect System. There are currently more than 40,000 financial institutions (banks, credit unions and other depositories) eligible to issue savings bonds (US Department of the Treasury (2004b)). In principle, someone can go up to a teller and ask to buy a bond. As anecdotal evidence, one of us tried to buy a savings bond in this way, and had to go to a few different bank branches before the tellers could find the necessary forms, an experience similar to that detailed by James T. Arnold Consultants (1999) in their report on the Savings Bonds program. This lack of interest in selling bonds may reflect the profit potential available to a bank selling bonds. The Treasury pays banks fees of $.50 - $.85 per purchase to sell bonds and the bank receives no other revenue from the transaction. 5 In off-therecord discussions, bank personnel have asserted that these payments cover less than 25% of the cost of processing a savings bond purchase transaction. The results of an in-house evaluation at one large national bank showed that there were 22 steps and four different employees involved with the processing of a bond purchase. Given these high costs and miniscule payments, our individual experience is hardly surprising, as are banks’ disinterest in the bond program. In addition, savings bonds can be purchased via the Payroll Savings Plan, which the Treasury reports as available through some 40,000 employer locations (US Department of the Treasury (2004c)). 6 Again, by way of anecdote, one of us called our employer to ask about this program and waited weeks before hearing back about this option. 
Searching the University intranet, the term “savings bonds” yielded no hits, even though the program was officially offered. Fourth, while it is merely a matter of taste, we may not be alone in thinking that the “front door” to savings bonds, the U.S. Treasury’s Saving Bond web site 7 is complicated and confusing for consumers (though the BPD has now embarked on a redesign of the site geared toward promoting the online TreasuryDirect system). This is particularly important in light of the fact that the Treasury has eliminated its marketing activities for these bonds. Financial service executives are keenly aware that cutting all marketing from a product, even an older product, does not encourage its growth. Indeed, commercial firms use this method to quietly “kill” products. Fifth, on May 8 th , 2003 the Department of the Treasury published a final rule on the “New Treasury Direct System.” This rule made Series EE bonds available through the TreasuryDirect System (Series I bonds were already available) (31 CFR part 315 (2003)). This new system 5 Fees paid to banks vary depending on the exact role the bank plays in the issuing process. Banks which process savings bond orders electronically receive $.85 per bond while banks which submit paper forms receive only $.50 per purchase (US Department of the Treasury (2000), Bureau of Public Debt, 2005, Private Correspondence with Authors. 6 This option allows employees to allocate a portion of each paycheck towards the purchase of savings bonds. Participating employees are not required to allocate sufficient funds each pay period for the purchase of an entire bond but rather, can allot smaller amounts that are held until reaching the value of the desired bond (US Department of the Treasury (1993) and US Department of the Treasury (2004d). 7 http://www.publicdebt.treas.gov/sav/sav.htm15 represents the latest incarnation of TreasuryDirect, which was originally used for selling marketable Treasury securities (US GAO (2003)). In essence, the Treasury proposes that a $50 savings bond investor follow the same procedures as a $1 million investor in Treasury Bills. The Department of the Treasury aims to eventually completely phase out paper bonds (Block (2003)) and to that end have begun closing down certain aspects of the Savings Bond program, such as promotional give-aways of bonds, which rely on paper bonds. The Treasury also recently stopped the practice of allowing savers to buy bonds using credit cards. These changes seem to have the impact of reducing the access of low-income families to savings bonds or depress demand of their sale overall. By moving towards an only-on-line system of savings bonds distribution, the Department of the Treasury risks closing out those individuals without Internet access. Furthermore, in order to participate in TreasuryDirect, the Treasury Department requires users to have a bank account and routing number. This distribution method effectively disenfranchises the people living in the approximately 10 million unbanked households in the US (Azicorbe, Kennickell, and Moore (2003) and US Census (2002)). While there have been a few small encouraging pilot programs in BPD to experiment with making Treasury Direct more user-friendly for poorer customers, the overall direction of current policy seems makes bonds less accessible to consumers. 
8 Critics of the Savings Bonds program, such as Representative Ernest Istook (R-OK), charge that the expense of administering the US savings bond program is disproportionate to the amount of federal debt covered by the program. These individuals contend that while savings bonds represent only 3% of the Federal debt that is owned by the public, some three quarters of the budget of the Bureau of Public Debt is dedicated to administering the program (Berry (2003)). Thus they argue that the costs of the savings bond program must be radically reduced. Representative Istook (R-OK) summed up this perspective with the statement: 8 Working with a local bank partner in West Virginia, the Bureau has rolled out “Over the Counter Direct” (OTC Direct). The program is designed to allow Savings Bond customers to continue to purchase bonds through bank branches, while substantially reducing the processing costs for banks. Under the program, a customer arrives at the bank and dictates her order to a bank employee who enters it into the OTC Direct website. Clients receive a paper receipt at the end of the transaction and then generally are mailed their bonds (in paper form) one to two weeks later. In this sense, OTC Direct represents an intermediate step; the processing is electronic, while the issuing is paper-based. While not formally provided for in the system, the local bank partner has developed protocols to accommodate the unbanked and those who lack web access. For instance, the local branch manager will accept currency from an unbanked bond buyer, set up a limited access escrow account, deposit the currency into the account, and affect the debit from the escrow account to the BPD. In cases where bond buyers lack an email address, the branch manager has used his own. A second pilot program, with Bank of America, placed kiosks that could be used to buy bonds in branch lobbies. The kiosks were linked to the Treasury Direct website, and thus enabled bond buyers without their own method of internet access to purchase bonds. However, the design of this initiative was such that the unbanked were still precluded from purchasing bonds. 16 “Savings Bonds no longer help Uncle Sam; instead the cost him money…Telling citizens that they help America by buying Savings Bonds, rather than admitting they have become the most expensive way for our government to borrow, is misplaced patriotism” (Block (2003)). However, some experts have questioned this claim. In testimony, the Commissioner of the Public Debt described calculations that showed that series EE and I savings bonds were less costly than Treasury marketable securities. 9 However, the BPD itself seems to have ascribed to this cost focused perspective with Treasury’s debt financing objective to borrow the money needed to operate the federal government at the lowest cost. In May 2005, the Treasury substantially changed the terms of EE bonds. Instead of having interest on these bonds float with the prevailing five year treasury, they became fixed-rate bonds, with their interest rate set for the life of the bond at the time of purchase. 10 While this may be prudent debt management policy from the perspective of lowering the government’s cost of borrowing, consumers have responded negatively to this news. 11 We would hope that policy makers took into consideration the impact this decision this might have in the usefulness of bonds to help families meet their savings goals. 
Focusing decisions of this sort solely on the cost of debt to the federal government misses a larger issue: the Savings Bond program was not created only to provide a particularly low-cost means of financing the federal debt. Rather, the original rationale for the savings bond program was to provide a way for individuals of limited means to invest small amounts of money and to allow more Americans to become financially invested in government. While this is not to say that the cost of the Savings Bonds program should be disregarded, the current debate seems to overlook one real public policy purpose of savings bonds: helping families save. And so while none of these recent developments (a longer holding period, elimination of marketing, and changes to the bond buying process) or the ongoing problems of few incentives to sell bonds and a lackluster public image seems intentionally designed to discourage LMI families from buying bonds, their likely effect is to make the bonds less attractive to own, more difficult to learn about, and less easy to buy. These decisions about bonds were made on the basis of the costs of raising money through savings bonds versus through large-denomination Treasury bills, notes, and bonds.12 This discussion, while appropriate, seems to lose sight of the fact that savings bonds also have served—and can serve—another purpose: to help families save. The proposals we outline below are intended to reinvigorate this purpose, in a way that may make savings bonds even more efficient to run and administer.

9 See testimony by Van Zeck (Zeck (2002)); however, a recent GAO study requested by Rep. Istook cast doubt on the calculations that the Treasury used to estimate the costs of the program (US GAO (2003)).

10 See http://www.publicdebt.treas.gov/com/comeefixedrate.htm.

11 See http://www.bankrate.com/brm/news/sav/20050407a1.asp for one set of responses.

4. Reinventing the Savings Bond

The fundamental savings bond structure is sound. As a "brand," it is impeccable. The I-bond experience has shown that tinkering with the existing savings bond structure can broaden its appeal while serving a valuable public policy purpose. Our proposals are designed to make the savings bond a valuable tool for low- and moderate-income families, while making savings bonds a more efficient debt management tool for the Treasury. Our goal is not to have savings bonds substitute for or crowd out private investment vehicles, but rather to provide a convenient, efficient, portable, national savings platform available to all families.

1. Reduce the Required Holding Period for Bondholders Facing Financial Emergencies

While the Treasury legitimately lengthened the savings bond holding period to discourage investors seeking to arbitrage the differential between savings bond rates and money market rates, the lengthening of the holding period makes bonds less attractive to LMI families. The current minimum required holding period of 12 months is a substantial increase from the original 60 days required of baby bond holders. This longer period essentially requires investors to commit to saving for at least one year. A new Bureau of Public Debt program suggests that this may not be a problem for some investors. In an effort to encourage bond holders to redeem savings bonds that have passed maturity, the Bureau of Public Debt is providing a search service (called "Treasury Hunt") to find the holders of these 33 million bonds worth $13.5 billion (Lagomarsino (2005)).
The program reveals that bonds are an extremely efficient mechanism to encourage long-term saving because they have an "out of sight, out of mind" quality—perhaps too much so. So, while many small investors may intend to save for the long term, and many may have no trouble doing so, this new extended commitment could still be particularly difficult for LMI families in that they would be prohibited from drawing on these funds even if faced with a financial emergency. If we want to encourage bond-savings by LMI families, Treasury could either (a) exempt small withdrawals from the required holding periods or (b) publicize the existing simple emergency withdrawal rules. Under the first option, Treasury could allow a holder to redeem some amount (say $5,000 per year) earlier than twelve months, with or without an interest penalty. While this design would most precisely address the need for emergency redemption, it could be difficult to enforce, as redeeming banks do not have a real-time link to BPD records, and so a determined bond holder could conceivably "game the system" by redeeming $5,000 bundles of bonds at several different banks. Alternatively, while current rules allow low-income bondholders who find themselves in a natural disaster or financial emergency to redeem their bonds early, this latter provision receives virtually no publicity. BPD does publicize the rule that allows bond holders who have been affected by natural disasters to redeem their bonds early. Were the BPD to provide a similar level of disclosure of the financial emergency rules, LMI savers might be encouraged to buy savings bonds. Whether by setting some low limit of allowable early redemptions for all, or merely by publicizing existing emergency withdrawal rules, it seems possible to meet the emergency needs of LMI savers while continuing to discourage arbitrage activity.

12 For a cost-based view of the Savings Bond program from the perspective of the Bureau of Public Debt see US Department of the Treasury (2002). For an opposing view also from this cost-based perspective see GAO (2003).

2. Make Savings Bonds Available to Tax Refund Recipients

The IRS allows filers to direct nominal sums toward funding elections through the Federal Election Campaign Fund and permits refund recipients to direct their refunds to pay future estimated taxes. We propose that taxpayers be able to direct that some of their refunds be invested in savings bonds. The simplest implementation of this system—merely requiring one additional line on the 1040 form—would permit the refund recipient to select the Series (I or EE) and the amount; the bonds would be issued in the primary filer's name. Slightly more elaborate schemes might allow the filer to buy multiple series of bonds, buy them for other beneficiaries (e.g., children), or allow taxpayers not receiving refunds to buy bonds at the time of paying their taxes.13

The idea of letting refund recipients take their refund in the form of savings bonds is not a radical idea, but rather an old one. Between 1962 and 1968 the IRS allowed refund recipients to purchase savings bonds with their refunds. Filers directed less than 1% of refunds to bond purchase during this period (Internal Revenue Service (1962-1968)). On its face, it might appear that allowing filers to purchase savings bonds with their refunds has little potential, but we feel this historical experience may substantially underestimate the opportunity to build savings at tax time via our refund-based bond sales for two reasons.
First, the size of low-income filers' tax refunds has increased from an average of $636 in 1964 (in 2001 dollars) to $1,415 in 2001, allowing more filers to put a part of their refund aside as savings (Internal Revenue Service (2001, 1964)).14 These refunds tend to be concentrated among low-income families, where we would like to stimulate savings. Second, the historical experiment was an all-or-nothing program; it did not allow refund recipients to direct only a portion of their refunds to bonds. We expect our proposal will be more appealing since filers would be able to split their refunds, directing only a portion toward savings bonds while receiving the remainder for current expenses. By allowing this option, the Department of the Treasury would enable low-income filers to couple a large cash infusion with the opportunity to invest in savings bonds.

13 Our proposal would allow taxpayers to purchase bonds with after-tax dollars, so it would have no implications for tax revenues.

Perhaps the largest single pool of money on which low-income families can draw for asset building and investment is the more than $78 billion in refundable tax credits made available through federal and state government each year (Internal Revenue Service (2001)). Programs across the country have helped low-income taxpayers build assets by allowing filers to open savings accounts and Individual Development Accounts when they have their taxes prepared. A new program in Tulsa, Oklahoma, run by the Community Action Project of Tulsa County and D2D, has allowed tax filers to split their refund, committing some to savings and receiving the remainder as a check. This program allowed families to precommit to saving their refunds, instead of having to make a saving decision when the refund was in hand and the temptation to spend it was strong. While these small-sample results are difficult to extrapolate, the program seemed to increase savings initially, and families reported that the program helped them meet their financial goals.

Since the short-lived bond-buying program in the 1960s, the BPD has introduced other initiatives to encourage tax refund recipients to purchase bonds. The first of these, beginning in the 1980s, inserted marketing materials along with the refund checks sent to refund recipients. Though only limited data has been collected, it appears that these mailings were sent at random points throughout the tax season (essentially depending on availability, as the BPD competed for "envelope space" with other agencies) and that no effort was made to segment the market, with all refund recipients (low income and higher income) receiving the materials. In all, the BPD estimates that between 1988 and 1993 it sent 111,000,000 solicitations, with a response rate of a little less than 0.1%. While this rate may appear low, it is comparable to the 0.4% response rate on credit card mailings, and some program managers at BPD deemed the mailings cost-effective (Anonymous (2004)). Considering that the refund recipient had to take a number of steps to effect the bond transaction (cash the refund, etc.), these results are in some sense fairly encouraging.

A second related venture was tried for the first time in tax season 2004. The BPD partnered with a Volunteer Income Tax Assistance (VITA) site in West Virginia to try to interest low-income refund recipients in using the Treasury Direct System. The tax site was located in a public library and was open for approximately 12 hours per week during tax season.
In 2004, the site served approximately 500 people. The program consisted of playing a PowerPoint presentation in the waiting area of the free tax preparation site and making available brochures describing the Treasury Direct system. Informal evaluation by tax counselors who observed the site suggests that tax filer interest was extremely limited and that most filers were preoccupied with ensuring that they held their place in line and were able to get their taxes completed quickly.

14 We define LMI filers as those with incomes of less than $30,000 in 2001 or less than $5,000 in the period from 1962-1968 (which is approximately $30,000 in 2001 dollars).

While both of these programs attempt to link tax refunds with savings, they do so primarily through advertising, not through any mechanism that would make such savings easier. The onus is still on the tax refund recipient to receive the funds, convert them to cash (or personal check), fill out a purchase order, and obtain the bonds. In the case of the 2004 experiment, the refund recipient had to set up a Treasury Direct account, which would involve having a bank account, etc. These programs remind tax filers that saving is a good idea, but they do not make saving simple.

We remain optimistic, in part based on data collected during the Tulsa experiment described above. While the experiment did not offer refund recipients the option of receiving savings bonds, we surveyed them on their interest in various options. Roughly 24% of participants expressed an interest in savings bonds, and nearly three times as large a fraction were interested when the terms of savings bonds were explained (Beverly, Schneider, and Tufano (2004)). Our sample is too small to draw a reliable inference from these data, but it certainly suggests that the concept of offering savings bonds is not completely ungrounded. Currently, a family wanting to use their refund to buy savings bonds would have to receive their refund, possibly pay a check casher to convert the refund to cash, make an active decision to buy the bond, and go online or to a bank to complete the paperwork. Under our proposal, the filer would merely indicate the series and amount, the transaction would be completed, and the money would be safely removed from the temptation of spending. Most importantly, since the government does not require savings bond buyers to pass a ChexSystems hurdle, this would open up savings to possibly millions of families excluded from opening bank accounts.

While we would hope that refund recipients could enjoy a larger menu of savings products than just bonds, offering savings bonds seamlessly on the tax form has practical advantages over offering other products at tax time. By putting a savings option on the tax form, all filers—including self-filers—could be reminded that tax time is potentially savings time. Paid and volunteer tax sites wishing to offer other savings options on site would face a few practical limitations. First, certain products (like mutual funds) could only be offered by licensed broker-dealers, which would either require on-site integration of a sales force or putting the client in touch, via phone or other means, with an appropriately licensed agent. More generally, tax preparers—especially volunteer sites—would be operationally challenged by the prospect of opening accounts on site. However, merely asking the question, "How much of your refund—if any—would you like in savings bonds?" could be incorporated relatively easily into the process flow.
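The potential scale of this refund channel can be made concrete with a back-of-the-envelope calculation. The $7.9 billion sales figure and the $78 billion refund pool are the numbers cited in the text (see footnote 16 below); the rest is simple arithmetic.

# Back-of-the-envelope scale of the refund channel (figures from footnote 16).
lmi_refunds = 78e9     # refundable credits flowing to LMI filers, 2001
bond_sales = 7.9e9     # EE and I bond sales through payroll and over-the-counter, 2004

captured_share = 0.01  # suppose 1% of LMI refunds were directed into savings bonds
increase = captured_share * lmi_refunds / bond_sales
print(f"{increase:.1%} increase in annual savings bond sales")  # ~9.9%, in line with the 9.8% cited in the text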
Not only would a refund-driven savings bond program make saving easier for families, it would likely reduce the cost of marketing and administering the savings bond program for the Treasury. All of the information needed to purchase a bond is already on the filer's tax return, so there would be less likelihood of error. It should not require substantial additional forms, but merely a single additional line or two on the 1040. The Treasury would not need to pay banks fees of $.50 to $.85 per purchase to sell bonds.15 Furthermore, the refund monies would never leave the federal government. If subsequent investigation uncovered some tax compliance problem for a refund recipient, some of the contested funds would be easily traceable. Given annual LMI refunds of $78 billion, savings bond sales could increase by 9.8% for each 1% of these refunds captured.16 Ultimately, whether or not refund recipients are interested in buying bonds will only be known if one makes a serious attempt to market to them at refund time. We are attempting to launch an experiment this coming tax season that will test this proposition.

15 Fees paid to banks vary depending on the exact role the bank plays in the issuing process. Banks that only accept bond orders and payment from customers but send those materials to regional Federal Reserve Banks for final processing are paid $.50 per purchase. Banks that do this final level of processing (inscription) themselves receive $.85 per bond issue (US Department of the Treasury (2000)).

16 Savings Bonds sales of EE and I bonds through payroll and over-the-counter were $7.9 billion in 2004. Total refunds to LMI filers in 2001 were $78 billion. Each $780 million in refunds captured would be a 9.8% increase in Savings Bonds sales.

3. Enlist private sector social marketing for savings bonds

Right now, banks and employers have little incentive to market savings bonds. If an account is likely to be profitable, a bank would rather open the account than sell the person a savings bond. If an account is unlikely to be profitable, the bank is not likely to expend much energy selling bonds to earn $.50 or $.85. With a reinvented Savings Bond program, the Treasury could leverage other private sector marketing. First, one can imagine a very simple advertising program for the tax-based savings bond program focusing its message on the simplicity of buying bonds at tax time and the safety of savings bond investments. We envision a "RefundSaver" program. Groups like the Consumer Federation of America and America Saves might be enlisted to join in the public service effort if the message were sufficiently simple.17 With a tax-centered savings bond marketing program, the IRS could leverage paid and volunteer tax preparers to market bonds. If these tax preparers could enhance the "value proposition" that they have with their clients by offering them a valuable asset-building service at tax time, they might have a strong incentive to participate, possibly without any compensation. If the Treasury paid preparers the same amount that it offered to banks selling bonds, this would create even greater incentives for the preparers to offer the bonds, although this might create some perverse incentives for preparers as well.

4. Consider savings bonds in the context of a family's financial life cycle

As they are currently set up, savings bonds are the means and end of household savings. Bonds are bought and presumably redeemed years (if not decades) later.
Data from the Treasury Department partially bears out this assumption. Of the bonds redeemed between 1950 and 1980, roughly half were redeemed prior to maturity. Through the mid-1970s, redemptions of unmatured bonds made up less than half of all bond redemptions (41% on average); however, in the late 1970s this ratio changed, with unmatured bonds making up an increasingly large share of redemptions (up to 74% in 1981, the last year for which the data is reported). However, even without this increase in redemptions (perhaps brought on by the inflationary environment of the late 1970s), early redemptions seem to have been quite frequent. This behavior is in line with the use of bonds as described by the Treasury in the 1950s, as a means of "setting aside liquid savings out of current income" (US Department of the Treasury, Treasury Annual Report (1957)).

Under our proposal, savings bonds would be a savings vehicle for LMI families who have small balances and low risk tolerances. Over time, these families might grow to have larger balances and greater tolerance for risk; in addition, their investing horizons might lengthen. At this time, our savings bond investors might find that bonds are no longer the ideal investment vehicle, and our reinvented savings bonds should recognize this eventuality. We propose that the Treasury study the possibility of allowing Savings Bond holders to "roll over" their savings bonds to other investment vehicles. In the simplest form, the Treasury would allow families to move their savings bonds directly into other investments. These investments might be products offered by the private sector (mutual funds, certificates of deposit, etc.). If the proposals to privatize Social Security were to become reality, these "rollovers" could be into the new private accounts. Finally, it might be possible to roll over savings bond amounts into other tax-deferred accounts, although this concept would add complexity, as one would need to consider the ramifications of mixing after-tax and pre-tax investments. The proposal for Retirement Savings Bonds (R-Bonds) takes a related approach. These bonds would allow employers to set aside small amounts of retirement savings for employees at a lower cost than would be incurred through using traditional pension systems. R-bonds would be specifically earmarked for retirement and could only be rolled over into an IRA (Financial Services Roundtable (2004)).

17 The Bureau of Public Debt commissioned Arnold Consultants to prepare a report on marketing strategy in 1999. They also cite the potential for a relationship between the BPD and non-profit private sector groups dedicated to encouraging savings (James T. Arnold Consultants (1999)).

5. Make the process of buying savings bonds more user-friendly

There has been a shift in the type of outlets used to distribute US Savings Bonds. While there are still more than 40,000 locations at which individuals can purchase savings bonds, these are now exclusively financial institutions. Post Offices, the original distribution mode for baby bonds, no longer retail bonds. This shift is of particular concern to low-income small investors. Over the past 30 years a number of studies have documented the relationship between bank closings and the racial and economic make-up of certain neighborhoods.
In a study of five large US cities, Caskey (1994) finds that neighborhoods with large African American or Hispanic populations are less likely to have a bank branch and that in several of the cities, "low-income communities are significantly less likely to have a local bank than are other communities." Post Offices, on the other hand, remain a ubiquitous feature of most neighborhoods and could again serve as an ideal location for the sale of savings bonds. Our tax-intermediated bond program should make savings bonds more accessible for most Americans. In addition, just as the Treasury allows qualified employers to offer savings bonds, retailers like Wal-Mart or AFS providers like ACE might prove to be effective outlets to reach LMI bond buyers. Further, the Department of the Treasury could work with local public libraries and community-based organizations to facilitate access to TreasuryDirect for the millions of Americans without Internet access.

* * * * * * * * * * *

Our proposals are very much in the spirit of RE-inventing the savings bond. As a business proposition, one never wants to kill a valuable brand. We suspect that savings bonds – conjuring up images of old-fashioned savings – may be one of the government's least recognized treasures. It was—and can be again—a valuable device to increase household savings while simultaneously becoming a more efficient debt management tool. The U.S. Savings Bond program, when first introduced in the early twentieth century, was a tremendous innovation that created a new class of investors and enabled millions of Americans to buy homes and durable goods and pursue higher education (Samuel (1997)). In the same way, a revitalized Savings Bond program, aimed squarely at serving LMI families, can again become a pillar of family savings.

In mid-September of 2005, Senators Mary Landrieu and David Vitter proposed a renewed savings bond marketing effort, aimed at raising funds for the reconstruction of areas damaged by Hurricane Katrina (Stone (2005)). The Senators alluded to the success of the 1940s War Bond program as inspiration. We think that they should focus on bonds not only to raise funds to rebuild infrastructure and homes, but also to use the opportunity to help families rebuild their financial lives. These "rebuilding-bonds" could be used to help families affected by the hurricane to save and put their finances in order, perhaps by offering preferred rates on the bonds or by offering matching on all bond purchases. Non-affected families could simply use the occasion to save for their futures or emergencies. A national bond campaign might emphasize that bond purchasers can rebuild not only critical infrastructure, homes, and businesses, but also families' savings.

Sources

31 CFR Part 21 et al., United States Savings Bonds, Extension of Holding Period; Final Rule, Federal Register, 17 January, 2003.

31 CFR Part 315, et al: Regulations Governing Treasury Securities, New Treasury Direct System; Final Rule, 2003, Federal Register, 8 May, 2003.

Advertising Council, 2004, Historic Campaigns: Savings Bonds, http://www.adcouncil.org/campaigns/historic_savings_bonds/ (last accessed October 12th, 2004).

America Saves, 2004, Savings strategies: The importance of emergency savings, The American Saver, http://www.americasaves.org/back_page/winter2004.pdf (last accessed October 12th, 2004).
Anonymous, “Behind 2003’s Direct-Mail-Numbers,” Credit Card Management, 17(1), April 2004, ABI/INFORM Global. Aizcorbe, Ana M., Arthur B. Kennickell, and Kevin B. Moore, 2003, Recent Changes in U.S. Family Finances: Evidence from the 1998 and 2001 Survey of Consumer Finances, Federal Reserve Bulletin, 1 -32 http://www.federalreserve.gov/pubs/oss/oss2/2001/bull0103.pdf Avery, Robert B., Raphael W. Bostic, Paul S. Calem, and Glenn B. Canner, 1997, Changes in the distribution of banking offices, Federal Reserve Bulletin http://www.federalreserve.gov/pubs/bulletin/1997/199709LEAD.pdf (last accessed October 6th, 2004). Avery, Robert B., Gregory Elliehausen, and Glenn B. Canner, 1984, Survey of Consumer Finances, 1983, Federal Reserve Bulletin, 89; 679-692, http://www.federalreserve.gov/pubs/oss/oss2/83 /bull0984.pdf Bankrate.com, 2005, Passbook/Statement Savings Rates, http://www.bankrate.com/brm/publ/passbk. asp. Barr, Michael S., 2004, Banking the poor, Yale J. on Reg. 21(1). Berry, Christopher, 2004, To bank or not to bank? A survey of low-income households, Harvard University, Joint Center for Housing Studies, Working Paper BABC 04-3 http://www.jchs. harvard.edu/publications/finance/babc/babc_04-3.pdf (last accessed March 12th, 2004). Berry, John M., 2003, Savings Bonds under siege, The Washington Post, 19 January 2003, http://global.factiva.com/ene/Srch/ss_hl .asp (last accessed October 12, 2004). Beverly, Sondra, Daniel Schneider, and Peter Tufano, 2004, Splitting tax refunds and building savings: An empirical test, Working Paper. Blum, John Morton, 1959, From the Morgenthau Diaries: Years of Crisis, 1928-1938 (Houghton Mifflin Company, Boston, MA). Blum, John Morton, 1976, V was for Victory: Politics and American Culture During World War II (Harvest/HBJ, San Diego, CA). 26 Block, Sandra, 2003, An American tradition too unwieldy?, USA Today, September 8 th , 2003 http://global.factiva.com/ene/Srch/ss_hl .asp (last accessed September 28, 2004). Board of Governors of the Federal Reserve, 2003, Annual Report to Congress on Retail Fees and Services of Depository Institutions, http://www.federalreserve.gov/boarddocs/rptcongress/2003 fees.pdf Bostic, Raphael W., Paul S. Calem, and Susan M. Wachter, 2004, Hitting the wall: Credit as an impediment to homeownership, Harvard University, Joint Center for Housing Studies, Working Paper BABC 04-5 http://www.jchs.harvard.edu/publications/finance/babc/babc_04-5.pdf (last accessed September 29th, 2004). Brennan, Michael J. and Eduardo S. Schwartz, 1979, Savings Bonds: Theory and Empirical Evidence, New York University Graduate School of Business Administration, Monograph Series in Finance and Economics, Monograph 1979-4. Caskey, John, 1994, “Bank Representation in Low-Income and Minority Urban Communities,” Urban Affairs Review 29, 4 (June 1994): 617. Carlson, Mark and Roberto Perli, 2004, Profits and balance sheet development at US commercial banks in 2003, Federal Reserve Bulletin, Spring 2004, 162-191, http://www.federalreserve.gov /pubs/bulletin/2004/spring04profit.pdf (last accessed September 9th, 2004). Correcting and replacing: New ad campaign from American Express Financial Advisors speaks from investor’s point of view, 2004, Business Wire, September 27 th , 2004, http://global.factiva.com/ene/Srch/ss_hl .asp (last accessed October 7 th , 2004). Cummings, Joseph, 1920, United States government bonds as investments, in The New American Thrift, ed. Roy G. Blakey, Annals of the American Academy of Political and Social Science, vol. 87. 
Current Population Survey, 2002 March Supplement to the Current Population Survey Annual Demographic Survey, http://ferret.bls.census.gov/macro/032003/hhinc/new06_000.htm Federal Credit Union Act, 12 U.S.C. §1786, http://www.ncua.gov/Regulations OpinionsLaws/fcu_act/fcu_act.pdf (last accessed October 8th, 2004). Federal Deposit Insurance Corporation (FDIC), 2004, Historical Statistics on Banking, Table CB15, Deposits, FDIC- -Insured Commercial Banks, United States and Other Areas, Balances at Year End, 1934 – 2003, http://www2.fdic.gov/hsob/HSOBRpt.asp?state=1&rptType=1&Rpt_Num=15. Federal Reserve Board, 2005, Federal Reserve Statistics, Selected Interest Rates, Historical Data, http://www.federalreserve.gov/releases/h15/data.htm. Financial Services Roundtable, 2004, The Future of Retirement Security in America, http://www.fsround.org/pdfs/RetirementSecurityFuture12-20-04.pdf (last accessed March 3rd, 2004) Global Insight, 2003, Predicting Personal Bankruptcies: A Multi-Client Study, http://www.globalinsight.com/publicDownload/genericContent/10-28-03_mcs.pdf (last accessed 10/12/04). 27 Hanc, George, 1962, The United States Savings Bond Program in the Postwar Period, Occasional Paper 81 (National Bureau of Economic Research, Cambridge, MA). Hayashi, Yuka, 2004, First-quarter earnings for T. Rowe Price nearly double, Dow Jones Newswires, April 27 th , 2004, http://global.factiva.com/ene/Srch/ss_hl .asp (last accessed October13th, 2004). Imoneynet.com, 2005, Money Market Mutual Funds Data Base, data file in possession of authors. Internal Revenue Service Statistics of Income, 2001, Individual income tax statistics – 2001, Table 3.3 – 2001 Individual income tax, all returns: Tax liability, tax credits, tax payments, by size of adjusted gross income, http://www.irs.gov/pub/irs-soi/01in33ar.xls. Internal Revenue Service Statistics of Income, 1960-1969, Individual income tax statistics, Table 4 – Individual income tax, all returns: Tax liability, tax credits, tax payments, by size of adjusted gross income, http://www.irs.gov/pub/irs-soi/01in33ar.xls. Internal Revenue Service, 2003, Investment Income and Expenses (including Capital Gains and Losses), Publication 550, (Department of the Treasury, Internal Revenue, Washington, D.C.). James, Dana, 2000, Marketing bonded new life to “I” series, Marketing News, 34(23). James E. Arnold Consultants, (1999), Marketing Strategy Development for the Retail Securities Programs of the Bureau of Public Debt, Report to the Bureau of Public Debt, on file with the authors. Kennickell, Arthur B., Martha Starr-McLuer, and Brian J. Surette, 2000, Recent changes in US family finances: Results from the 1998 Survey of Consumer Finances, Federal Reserve Bulletin, 88; 1-29, http://www.federalreserve.gov/pubs/oss/oss2/98/bull0100.pdf Kennickell, Arthur and Janice Shack-Marquez, 1992, Changes in family finances from 1983 to 1989: Evidence from the Survey of Consumer Finances, Federal Reserve Bulletin, 78; 1-18, http://www.federalreserve.gov/pubs/oss/oss2/89/bull0192.pdf. Kennickell, Arthur B., Martha Starr-McLuer, 1994, Canges in US family finances from 1989 to 1992: Evidence from the Survey of Consumer Finances, Federal Reserve Bulletin, 80; 861-882, http://www.federalreserve.gov/pubs/oss/oss2/92/bull1094.pdf Kennickell, Arthur B., Martha Starr-McLuer, and Annika E. 
Sunden, 1997, Family finances in the US: Recent evidence from the Survey of Consumer Finances, Federal Reserve Bulletin, 83;, 1-24, http://www.federalreserve.gov/pubs/oss/oss2/95/bull01972.pdf Deborah Lagomarsino, “Locating Lost Bonds Only a ‘Treasury Hunt’ Away,” The Wall Street Journal, September 20, 2005, http://global.factiva.com (accessed September 23, 2005). Liberty Loan Committee of New England, 1919, Why Another Liberty Loan (Liberty Loan Committee of New England, Boston, MA). Morningstar Principia mutual funds advanced, 2004, CD-ROM Data File (Morningstar Inc., Chicago, Ill.). Morgenthau, Henry, 1944, War Finance Policies: Excerpts from Three Addresses by Henry Morgenthau, (US Government Printing Office, Washington D.C.). 28 National Credit Union Administration, 2004, NCUA Individual Credit Union Data http://www.ncua.gov/indexdata.html (last accessed October 12th, 2004). Pender, Kathleen, 2003, Screws Tightened on Savings Bonds,” San Francisco Chronicle, 16 January 2003, B1 Projector, Dorothy S., Erling T. Thorensen, Natalie C. Strader, and Judith K. Schoenberg, 1966, Survey of Financial Characteristics of Consumers (Board of the Federal Reserve System, Washington DC). Quinn, Jane Bryant, 2001, Checking error could land you on blacklist, The Washington Post, September 30 th , 2001, http://global.factiva.com/ene/Srch/ss_hl .asp (last accessed March 12, 2004). Quittner, Jeremy, 2003, Marketing separate accounts to the mass affluent, American Banker, January 8 th , 2003, http://global.factiva.com/ene/Srch/ss_hl .asp (last accessed October 7 th , 2004). Samuel, Lawrence R., 1997, Pledging Allegiance: American Identity and the Bond Drive of WWII (Smithsonian Institution Press, Washington DC). Schneider, Daniel and Peter Tufano, 2004, “New Savings from Old Innovations: Asset Building for the Less Affluent,” New York Federal Reserve Bank, Community Development Finance Research Conference, http://www.people.hbs.edu/ptufano/New_from_old.pdf. Schreiner, Mark, Margaret Clancy, and Michael Sherraden, 2002, “Final report: Saving performance in the American Dream Demonstration, a national demonstration of Individual Development Accounts 9 (Washington University in St. Louis, Center for Social Development, St. Louis, MO). Sobhani, Robert and Maryana D. Shteyman, 2003, T. Rowe Price Group, Inc. (TROW): Initiating Coverage with a Hold; Waiting for an Entry Point, http://onesource.com (last accessed October 7 th , 2004) (Citigroup Smith Barney, New York, NY). Stone, Adam, 2004, After some well-placed deposits in media, bank campaign shows positive returns, PR News, March 1 st , 2004, http://global.factiva.com/ene/Srch/ss_hl .asp (last accessed October 13 th , 2004). Stone, Andrea, “Republicans Offer Spending Cuts,” USA Today, September 20, 2005 available online at www.usatoday.com (last accessed September 23, 2005). Survey of Consumer Finances, 2001, Federal Reserve Board, 2003, Electronic Data File, http://www.federalreserve.gov/pubs/oss/oss2/2001/scf2001home.html#scfdata2001 (last accessed June, 2003). T.D. Waterhouse, 2001, TD Waterhouse Group, Inc. Reports Cash Earnings of $.01 per Share for the Fiscal Quarter Ended October 31, 2001 www.tdwaterhouse.com (last accessed October 13 th , 2004). T. Rowe Price, 2003, T. Rowe Price 2003 Annual Report: Elements of Our Success, www.troweprice.com (last accessed October 13 th , 2004). 
Tansey, Charles D., 2001, Community development credit unions: An emerging player in low income communities, Capital Xchange, Brookings Institution Center on Urnabn and Metropolitan Policy 29 and Harvard University Joint Center for Housing Studies http://www.brook.edu/metro /capitalxchange/article6.htm (last accessed October 1st, 2004). Tufano, Peter and Daniel Schneider, 2004, H&R Block and “Everyday Financial Services,” Harvard Business School Case no. 205-013 (Harvard Business School Press, Boston, MA). US Census, 2002, http://ferret.bls.census.gov/macro/032002/hhinc/new01_001.htm. United States Department of the Treasury, 1915-1980, Annual Report of the Secretary of the Treasury on the State of the Finances for the Year (Department of the Treasury, Washington, DC). United States Department of the Treasury, 1918, To Make Thrift a Happy Habit (US Treasury, Washington D.C.). United States Department of the Treasury, 1935-2003, Treasury Bulletin (Department of the Treasury, Washington, DC). United States Department of the Treasury, 1935, United States Savings Bonds (US Department of Treasury, Washington, DC). United States Department of the Treasury, 1981, United States Savings Bond Program, A study prepared for the Committee on Ways and Means, US House of Representatives (US Government Printing Office, Washington, DC). United States Department of the Treasury, U.S. Savings Bonds Division, 1984, A History of the United States Savings Bond Program (US Government Printing Office, Washington, DC). United States Department of the Treasury, 1993, Help Your Coworkers Secure Their Future Today, Take Stock in America, U.S. Savings Bonds, Handbook for Volunteers, (United States Department of the Treasury, Washington, D.C.). United States Department of the Treasury, 2000, Statement: Payment of Fees for United States Savings Bonds, ftp://ftp.publicdebt.treas.gov/forms/sav4982.pdf (last accessed October 7th, 2004). United States Department of the Treasury, 2002, Terrorist attack prompts sale of Patriot Bond, The Bond Teller, 31(1). United States Department of the Treasury, 2003a, Minimum holding period for EE/I bonds extended to 12 months, The Bond Teller, January 31 st , 2003, http://www.publicdebt.treas.gov/sav/ savbtell.htm (last accessed October 12th, 2004). United States Department of the Treasury, Fiscal Service, Bureau of the Public Debt, July 2003b, Part 351- Offering of United States Savings Bonds, Series EE, Department Circular, Public Debt Series 1-80. United States Department of the Treasury, Fiscal Service, Bureau of the Public Debt, July 2003c, Part 359- Offering of United States Savings Bonds, Series I, Department Circular, Public Debt Series 1-98. United States Department of the Treasury, 2004a, 7 Great Reasons to Buy Series EE bonds, http://www.publicdebt.treas.gov/sav/savbene1.htm#easy (last visited September 26 th , 2004). United States Department of the Treasury, 2004b, The U.S. Savings Bonds Owner’s Manual, ftp://ftp.publicdebt.treas.gov/marsbom.pdf (last accessed March 12th, 2004). 30 United States Department of the Treasury, 2004c, http://www.publicdebt.treas.gov/mar/marprs.htm (last accessed October 12th, 2004). United States Department of the Treasury Bureau of Public Debt, 2004d, FAQs: Buying Savings Bonds Through Payroll Savings, www.publicdebt.treas.gov. United States Department of the Treasury, Bureau of Public Debt, 2005, Current Rates (through April 2005), http://www.publicdebt.treas.gov/sav/sav.htm. 
United States Department of the Treasury, Bureau of Public Debt, 2005, “EE Bonds Fixed Rate Frequently Asked Questions,” available online at http://www.treasurydirect.gov/indiv /research/indepth/eefixedrate faqs.htm, last accessed June 23 rd , 2005. Unites States Department of the Treasury, Bureau of Public Debt, 2005, Private Correspondence with Authors, on file with authors. United States Government Accounting Office, 2003, Savings Bonds: Actions Needed to Increase the Reliability of Cost-Effectiveness Measures (United States Government Accounting Office, Washington, D.C.). Zeck, Van, 2002, Testimony before House subcommittee on Treasury, Postal Service, and General Government Appropriations, March 20, 2002. Zook, George F., 1920, Thrift in the United States, in The New American Thrift, ed. Roy G. Blakey, Annals of the American Academy of Political and Social Science, vol. 87. 31 Table I Fraction of U.S. Households Having “Adequate” Levels of Emergency Savings[1] Financial Assets (Narrow) [2] Financial Assets (Broad) [3] All Households; Savings adequate to Replace six months of income 22% 44% Replace three months of income 32% 54% Meet emergency saving goal [4] 47% 63% Household Income < $30,000; Savings adequate to Replace six months of income 19% 28% Replace three months of income 25% 35% Meet stated emergency saving goal 29% 39% Source: Author’s tabulations from the 2001 Survey of Consumer Finances (SCF (2001)) Notes: [1] This chart compares different levels of financial assets to different levels of precautionary savings goals. If a household’s financial assets met or exceed the savings goals, it was considered adequate. The analysis was conducted for all households and for households with incomes less than $30,000 per year. [2] Financial Assets (Narrow) includes checking, saving, and money market deposits, call accounts, stock, bond, and combination mutual funds, direct stock holdings, US savings bonds, Federal, State, Municipal, corporate, and foreign bonds. [3] Financial Assets (Broad) includes all assets under Financial Assets (Narrow) as well as certificates of deposit, IRA and Keogh accounts, annuities and trusts, and the value of all 401(k), 403 (b), SRA, Thrift, Savings pensions plans as well as the assets of other plans that allow for emergency withdrawals of borrowing. [4] Respondents were asked how much they felt it was necessary to have in emergency savings. This row reports the percentage of respondents with financial assets greater than or equal to that emergency savings goal.32 Table II Percent Owning Select Financial Assets, by Income and Net Worth (2001) Savings Bonds Certificates of Deposit Mutual Funds Stocks Transaction Accounts All Financial Assets Percentile of Income Less than 20 3.8% 10.0% 3.6% 3.8% 70.9% 74.8% 20 - 39.9 11.0% 14.7% 9.5% 11.2% 89.4% 93.0% 40 - 59.9 14.1% 17.4% 15.0% 16.4% 96.1% 98.3% 60 - 79.9 24.4% 16.0% 20.6% 26.2% 99.8% 99.6% 80 - 89.9 30.3% 18.3% 29.0% 37.0% 99.7% 99.8% 90 – 100 29.7% 22.0% 48.8% 60.6% 99.2% 99.7% Lowest quintile ownership rate as a 12.8% 45.5% 7.4% 6.3% 71.5% 75.0% percent of top decile ownership rate Percentile of net worth Less than 25 4.3% 1.8% 2.5% 5.0% 72.4% 77.2% 25 - 49.9 12.8% 8.8% 7.2% 9.5% 93.6% 96.5% 50 - 74.9 23.5% 23.2% 17.5% 20.3% 98.2% 98.9% 75 - 89.9 25.9% 30.1% 35.9% 41.2% 99.6% 90.8% 90 – 100 26.3% 26.9% 54.8% 64.3% 99.6% 100.0% Lowest quintile ownership rate as a 16.3% 6.7% 4.6% 7.8% 72.7% 77.2% percent of top decile ownership rate Source: Aizcorbe, Kennickell, and Moore (2003). 
33 Table III Median value of Select Financial Assets among Asset Holders, by Income and Net Worth (2001) Savings Bonds Certificates of Deposit Mutual Funds Stocks Transaction Accounts All Financial Assets Percentile of Income Less than 20 $1,000 $10,000 $21,000 $7,500 $900 $2,000 20 - 39.9 $600 $14,000 $24,000 $10,000 $1,900 $8,000 40 - 59.9 $500 $13,000 $24,000 $7,000 $2,900 $17,100 60 - 79.9 $1,000 $15,000 $30,000 $17,000 $5,300 $55,500 80 - 89.9 $1,000 $13,000 $28,000 $20,000 $9,500 $97,100 90 – 100 $2,000 $25,000 $87,500 $50,000 $26,000 $364,000 Percentile of net worth Less than 25 $200 $1,500 $2,000 $1,300 $700 $1,300 25 - 49.9 $500 $500 $5,000 $3,200 $2,200 $10,600 50 - 74.9 $1,000 $11,500 $15,000 $8,300 $5,500 $53,100 75 - 89.9 $2,000 $20,000 $37,500 $25,600 $13,700 $201,700 90 – 100 $2,000 $40,000 $140,000 $122,000 $36,000 $707,400 Source: Aizcorbe, Kennickell, and Moore (2003). Medians represent holdings among those with non-zero holdings. 34 Table IV Minimum Initial Purchase Requirements among Mutual Funds in the United States. Min = $0 Min =< $100 Min =< $250 Among all Funds listed by Morningstar Number allowing 1,292 1,402 1,785 Percent allowing 8% 9% 11% Among the top 500 mutual funds by net assets Number allowing 49 55 88 Percent allowing 10% 11% 18% Among the top 100 index funds by net assets Number allowing 30 30 30 Percent allowing 30% 30% 30% Among the top 100 domestic stock funds by net assets Number allowing 11 13 24 Percent allowing 11% 13% 24% Among the top 100 money market funds by net assets Number allowing 6 6 6 Percent allowing 6% 6% 6% Source: Morningstar (2004) and imoneynet.com (2005). Table V Average Savings Account Fees and Minimum Balance Requirements, Nationally and in the Ten Largest Consolidated Metropolitan Statistical Areas (CMSAs) (2001) Monthly Fee Annual Fee Minimum Balance to Open Account Minimum Balance to Avoid Monthly Fee Annual Fee as a Percent of Min Balance Requirement All Respondent Banks $97 $2.20 $158 $26 27% New York $267 $3.10 $343 $37 14% Los Angeles $295 $2.80 $360 $34 11% Chicago $122 $3.50 $207 $43 35% District of Columbia $100 $3.20 $152 $38 38% San Francisco $275 $2.80 $486 $34 12% Boston $44 $2.70 $235 $33 75% Dallas $147 $3.20 $198 $38 26% Average 10 Largest CMSAs $179 $2.90 $268 $35 20% Source: Board of Governors of the Federal Reserve (2002) 35 Table VI Attributes of Common Savings Vehicles, February 9 th , 2005 Sources: bankrate.com, imoneynet.com, US Department of the Treasury (2005). * Rate assuming early redemption in month 12 (first redemption date) and penalty of loss of three months of interest. Savings Bonds Savings Accounts Certificates of Deposit Money Market Mutual Funds Yield Series EE: 3.25% (2.44%*) Series I: 3.67% (2.75%*) 1.59% 1-month: 1.16% 3-month: 1.75% 6-month: 2.16% Taxable: 1.75% Non-table: 1.25% Preferential Tax Treatment Federal taxes deferred until time of redemption. State and local tax exempt None None None Liquidity Required 12 month holding period. Penalty for redemption before 5 years equal to loss of prior three months of interest. On demand Penalties for early withdrawal vary: all interest on 30 day CD, 3 months on 18 month CD, 6 months on 2 year or longer CD. On demand, but fees are assessed upon exit from fund. 
Risk "Full faith and credit of US" No principal risk FDIC insurance to $100,000 FDIC insurance to $100,000 Risk to principal, although historically absent for Money Market Funds Minimum Purchase $25 Minimum opening deposit average $100 Generally, $500 Generally, $250 or more Credit Check None ChexSystems sometimes used ChexSystems sometimes used None 36 Table VII Savings Bonds (all series) Outstanding as a Percent of Total Domestic Deposits and Total Domestic Savings Deposits at Commerical Banks 0% 20% 40% 60% 80% 100% 120% 140% 160% 180% Fiscal Year 1937 1940 1943 1946 1949 1952 1955 1958 1961 1964 1967 1970 1973 1976 1979 1982 1985 1988 1991 1994 1997 2000 Savings Bonds Outstanding as a Percent of Commercial Bank Deposits Total Domestic Deposits Total Savings Deposits Source: US Treasury Department, Treasury Bulletin (1936-2003), FDIC (2004) 37 Table VIII Ownership of Select Financial Assets (1946 – 2001) 1946 1951 1960 1963 1970 1977 1983 1989 1992 1995 1998 2001 Checking Accounts 34% 41% 57% 59% 75% 81% 79% 81% 84% 85% 87% 87% Savings Accounts 39% 45% 53% 59% 65% 77% 62% n/a n/a n/a n/a 55% Transaction Account n/a n/a n/a n/a n/a n/a n/a 85% 88% 87% 91% 91% Savings Bonds 63% 41% 30% 28% 27% 31% 21% 24% 23% 23% 19% 17% Corporate Stock n/a n/a 14% 14% 25% 25% 19% 16% 18% 15% 19% 21% Mutual Funds n/a n/a n/a 5% n/a n/a n/a 7% 11% 12% 17% 18% Source: Aizcorbe, Kennickell, and Moore, (2003); Avery, Elliehausen, and Canner, (1984); Kennickell, Starr McLuer, and Surette, (2000); Kennickell and Shack -Marquez, (1992); Kennickell and Starr-McLuer, (1994); Kennickell, Starr-McLuer, and Sunden, (1997); Projector, Thorensen, Strader, and Schoenberg, (1966). Table IX Savings Bond Ownership by Income Quintile, 1957 and 2001. 1957 2001 Percent Decrease Bottom 20 12.8% 3.8% 70.3% Second 21.3% 11.0% 48.4% Third 27.4% 14.1% 48.5% Fourth 35.9% 17.4% 51.5% Top 20 44.9% 17.2% 61.8% Source: Hanc (1962) and Aizcorbe, Kennickell, and Moore (2003) 38 Appendix A: Savings Bonds Today Series EE and the Series I bonds are the two savings bonds products now available (Table VI summarizes the key features of the bonds in comparison to other financial products). 18 Both are accrual bonds; interest payments accumulate and are payable on redemption of the bond. Series EE bonds in paper form are sold at 50% of their face value (a $100 bond sells for $50) and, until May of 2005, accumulated interest at a variable “market rate” reset semiannually as 90% of the five-year Treasury securities yield on average over the prior 6 month period. However, as of May, the interest rate structure for EE bonds changed. Under the new rules, EE bonds earn a fixed rate of interest, set bi-annually in May and October. The rate is based on the 10 year Treasury bond yield, but the precise rate will be set “administratively” taking into account the tax privlidges of savings bonds and the early redemption option. 19 EE bonds are guaranteed to reach face value after 20 years, but continue to earn interest for an additional 10 years before the bond reaches final maturity (US Department of the Treasury (2003b)). Inflation-indexed I Bonds are sold at face value and accumulate interest at an inflation-adjusted rate for 30 years (Treasury (2003c)). 20 In terms of their basic economic structure of delivering fixed rates, EE savings bonds resemble fixed rate certificates of deposit (CDs). Backed by the “full faith and credit of the United States Government,” savings bonds have less credit risk than any private sector investment. 
(Bank accounts are protected by the FDIC only up to $100,000 per person.) Holders face no principal loss, as rises in rates do not lead to a revaluation of principal, because the holder may redeem the bonds without penalty (after a certain point). Also, interest on I bonds is indexed to inflation rates. The holder faces substantial short-term liquidity risk, as current rules do not allow a bond to be redeemed earlier than 1 year from the date of purchase (although this requirement may be waived in rare circumstances involving natural disasters or, on a case-by-case basis, individual financial problems). Bonds redeemed less than 5 years from the date of purchase are subject to a penalty equal to 3 months of interest. In terms of liquidity risk, savings bonds are more similar to certificates of deposit than to MMMF or MMDA accounts. Interest earnings on EE and I Bonds are exempt from state and local taxes, but federal taxes must be paid either 1) when the bond is redeemed, 2) 30 years from the date of purchase, or 3) yearly.21 With respect to tax treatment, savings bonds are attractive relative to many private sector products.

Comparing the actual yields of savings bonds with those of other savings products is not simple. The rates of return on savings bonds vary, as do those on short-term CDs. Further, the true yield of savings bonds is influenced by their partially tax-exempt status as well as the penalties associated with early redemption. To estimate yields as accurately as possible, we model realized returns over a five-year period with various assumptions regarding early redemption, yields, and taxes. Generally, EE bonds' performance is on par with that of average certificates of deposit with a 6-month, 2.5-year, or 5-year term or a NOW account.22 While their returns are 10% less than the Treasury securities to which they are pegged, savings bond holders do not face the interest rate exposure and principal risk that holders of Treasury securities face and are able to buy them in small, convenient denominations. It is more difficult to evaluate the Series I bonds, as U.S. private sector analogues for these instruments are scarce.

18 The current income bond, the Series HH, was discontinued in August of 2004 (United States Department of the Treasury Bureau of Public Debt, Series HH/H Bonds, available online at www.publicdebt.treas.gov).

19 United States Department of the Treasury, Bureau of Public Debt, "EE Bonds Fixed Rate Frequently Asked Questions," available online at http://www.treasurydirect.gov/indiv/research/indepth/eefixedratefaqs.htm, last accessed June 23rd, 2005.

20 This inflation-adjusted rate is determined by a formula which is essentially the sum of a fixed real rate (set on the date of the bond issue) and the lagging rate of CPI inflation.

21 Under the Education Savings Bond Program, bondholders making qualified higher education expenditures may exclude some or all of the interest earned on a bond from their federal taxes. This option is income-tested and is only available to joint filers making less than $89,750 and to single filers making less than $59,850 (IRS (2003)).

22 Historically, when they were offered in the 1940s to support World War II, Savings Bonds earned better rates than bank deposits (Samuel (1997)). Savings Bonds retained this advantage over savings accounts and over corporate AAA bonds (as well as CDs following their introduction in the early 1960s) through the late 1960s.
However, while rates on CDs and corporate bonds rose during the inflationary period of the late 1970s, yields on Savings Bonds did not keep pace, and even by the late 1990s they had not fully recovered their competitive position (Federal Reserve (2005)).

Appendix B: Patterns and Trends in Bond Ownership

Patterns of bond ownership and purchase have changed over time. Savings Bond sales rose rapidly in the early 1940s with the onset of World War II but then slowed substantially in the postwar period. Savings bond holdings benchmarked against domestic deposits in US commercial banks fell from 39% of total domestic deposits in 1949 to 5% of domestic deposits in 2002 (Table VII). In 1946, 63% of households held savings bonds. Over the next 60 years, savings bond ownership declined steadily, dropping to around 40% of households in the 1950s, to approximately 30% through the 1960s and 1970s, and then to near 20% for much of the 1980s and 1990s. The 16.7% ownership rate in 2001 appears to be the lowest since World War II. See Table VIII for savings bond and other savings product ownership rates over time.

In 2001, high-income and high-wealth households were far more likely to hold savings bonds than low-income and low-wealth households. While gaps nearly as wide or wider appear between income and wealth quintiles for stocks, mutual funds, and CDs (transaction account ownership is closer), the gap for savings bonds is of particular note given the product's original purpose of appealing to the "small-saver." Historically, ownership of savings bonds was far more equal. While savings bond ownership is down from its 1950s levels across households of all incomes, this shift is most pronounced among lower-income households. Table IX summarizes ownership rates by income in 1957 and 2001. Overall, in 2001 savings bond ownership was down 42% from 1957. For those in the lowest income quintile, savings bond ownership declined by 70%. Interestingly, savings bond ownership was off 62% in the highest income quintile. However, while large shares of high-income households now own stocks and mutual funds, ownership rates for these products are quite low (3-4%) among low-income households. If low-income households have moved savings from savings bonds to other products, it has most likely been into transaction accounts, not the more attractive investment vehicles more common among high-income households.
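As a small illustration of the early-redemption penalty discussed in Appendix A and reflected in the starred rates of Table VI, the sketch below reproduces that arithmetic. The 3.25% EE rate is the one quoted in Table VI; the calculation uses simple (non-compounded) rates and ignores taxes, so it is illustrative only.

# Sketch of the early-redemption penalty arithmetic behind Table VI: bonds
# redeemed before five years forfeit the last three months of interest.
def realized_annual_rate(stated_rate, months_held, penalty_months=3, penalty_horizon=60):
    if months_held < 12:
        raise ValueError("EE/I bonds cannot be redeemed within 12 months of purchase")
    earned = months_held - (penalty_months if months_held < penalty_horizon else 0)
    return stated_rate * earned / months_held

print(realized_annual_rate(0.0325, 12))   # ~0.0244, the 2.44% starred rate in Table VI
print(realized_annual_rate(0.0325, 60))   # full 3.25% once the penalty no longer applies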
10% de réduction sur vos envois d'emailing --> CLIQUEZ ICI Retour à l'accueil, cliquez ici Copyright © President & Fellows of Harvard College Social Enterprise Initiative Kick-Off Tuesday, September 6, 2011A multi-disciplinary approach to addressing societal issues through a managerial lens ? Applying innovative business practices ? Driving sustained, high-impact social change ? Grounded in the mission of HBS 2 What is Social Enterprise at HBS?3 Supporting a Dynamic Community grounded in practice Faculty and Administrative leadership Alumni engagement Student engagement4 Key Ingredient: Faculty Engagement Faculty Co-Chairs Herman B. “Dutch” Leonard V. Kasturi “Kash Rangan SE Faculty Group Extended SE Faculty Group Allen Grossman Warren McFarlan Joshua Margolis Michael Chu Forest Reinhardt Nava Ashraf Julie Battilana Joseph Bower Dennis Campbell Shawn Cole Bill Sahlman Amy Edmondson Stephen Greyser Andre Hagiu Regina Herzlinger James Heskett Robert Higgins Rosabeth Kanter Rob Kaplan Christopher Marquis Alnoor Ebrahim Karthik Ramanna Luis Viceira Arthur Segel Andreas Nilsson John J-H Kim Michael Toffel Youngme Moon Eric Werker Other Faculty Engaged in Specific SE Activities Nicolas Retsinas Jim Austin Bob Eccles Ray Goldberg Kathleen McGinn Ramana Nanda Rebecca Henderson Bob Kaplan George SerafeimSince 1993, HBS faculty members have published more than 500 cases, 100 articles, and several books including: • Joining a Nonprofit Board: What You Need to Know (2011, McFarlan, Epstein) • Leading for Equity: The Pursuit of Excellence in the Montgomery County Public Schools (2009, Childress, Doyle, and Thomas) • SuperCorp: How Vanguard Companies Create Innovation, Profits, Growth, and Social Good (2009, Kanter) • Business Solutions for Global Poor (2007, Rangan, Quelch, Herrero, Barton) • Entrepreneurship in the Social Sector (2007, Wei-Skillern, Austin, Leonard, Stevenson) • Managing School Districts for High Performance (2007, Childress, Elmore, Grossman, Moore Johnson) 5 Knowledge Generation6 Key Ingredient: Administrative Engagement SE Administrative Group Director: Laura Moon Director of Programs: Margot Dushin Assistant Director: Keri Santos Coordinator: Liz Cavano Key Administrative Partners Knowledge & Library Services MBA Program Office Admissions/ Financial Aid Executive Education Registrar Services Student and Academic Services Donor Relations Alumni Relations Other HBS Administrative Departments MBA Career & Professional Development Other Initiatives (BEI, HCI, Entrepreneurship, Global, Leadership)A Little Bit About You • Approximately 12% of the Class of 2013 has prior experience working in the nonprofit or public sectors (with about two-thirds coming to HBS directly from these sectors) • You and your colleagues represent a breadth of experience • Including entrepreneurial ventures, for-profit efforts focused on social impact, funding organizations, government agencies, nonprofit organizations • In issue areas including arts, education, economic development, environment, healthcare, human services, international development • In countries and regions around the world • Colleagues in the Class of 2012, reflect a similar profile • Approximately 8% of the class pursued Social Enterprise Summer Fellowships with organizations in 20+ countries around the world 7 Key Ingredient: Student Engagement8 Catalyzing Student Involvement9 SEI MBA Career Support Programs Private Sector Social Entrepreneurship Nonprofit Sector Public Sector Goldsmith Fellowship Bplan Contest RC Year Summer EC Year Post 
SEI MBA Career Support Programs
• Programs span the private sector, social entrepreneurship, the nonprofit sector, and the public sector, and are organized by stage (RC year, summer, EC year, and post-HBS)
• Programs include HBS Summer Fellowships, the Goldsmith Fellowship, the Bplan Contest, Independent Projects, Leadership Fellows, the Social Entrepreneurship Fellowship, Loan Repayment Assistance, and loan support

Social Enterprise Focused Student Clubs
• Social Enterprise Club
• Social Enterprise Conference
• Board Fellows
• Harbus Foundation
• Volunteer Consulting Organization
• Volunteers

Information, Resources and Staying Connected
• SEI website, a gateway to information: www.hbs.edu/socialenterprise
• Periodic email announcements for SEI
• Mainstream HBS communications: MBA Events Calendar, MyHBS
• Student clubs: SEC weekly e-newsletter and other club communications
• Follow us on Twitter: HBSSEI

Next Month and Beyond
• Student Club Fair, September 8
• CPD Super Day: Social Enterprise Industry 101, September 23
• Social Enterprise Professional Perspectives Session, September 27
• Club kick-offs and events
• Social Enterprise Community Engagement Lunches
• …and more
• And now, join us for an ice-cream reception!

The Consequences of Entrepreneurial Finance: A Regression Discontinuity Analysis
The Consequences of Entrepreneurial Finance: A Regression Discontinuity Analysis
William R. Kerr, Josh Lerner, and Antoinette Schoar*
Working Paper 10-086

Copyright © 2010 by William R. Kerr, Josh Lerner, and Antoinette Schoar. Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.

Abstract: This paper documents the role of angel funding for the growth, survival, and access to follow-on funding of high-growth start-up firms. We use a regression discontinuity approach to control for unobserved heterogeneity between firms that obtain funding and those that do not. This technique exploits the fact that a small change in the collective interest levels of the angels can lead to a discrete change in the probability of funding for otherwise comparable ventures. We first show that angel funding is positively correlated with higher survival, additional fundraising outside the angel group, and faster growth measured through growth in web site traffic. The improvements typically range between 30% and 50%. When using the regression discontinuity approach, we still find a strong, positive effect of angel funding on the survival and growth of ventures, but not on access to additional financing. Overall, the results suggest that the bundle of inputs that angel investors provide has a large and significant impact on the success and survival of start-up ventures.

* Harvard University; Harvard University; and MIT. All three authors are affiliates of the National Bureau of Economic Research. We thank James Geshwiler of CommonAngels, Warren Hanselman and Richard Sudek of Tech Coast Angels, and John May of the Washington Dinner Club for their enthusiastic support of this project and willingness to share data. We also thank the many entrepreneurs who responded to our inquiries. Harvard Business School's Division of Research and the Kauffman Foundation supported this research. Andrei Cristea provided excellent research assistance. All errors and omissions are our own.

One of the central and most enduring questions in the entrepreneurial finance literature is the extent to which early stage financiers such as angels or venture funds have a real impact on the firms in which they invest. An extensive theoretical literature suggests the combination of intensive monitoring, staged investments, and powerful control rights in these types of deals should alleviate agency problems between entrepreneurs and institutional investors (examples include Admati and Pfleiderer, 1994; Berglöf, 1994; Bergemann and Hege, 1998; Cornelli and Yosha, 2003; and Hellmann, 1998). This bundle of inputs, the works suggest, can lead to improved governance and operations in the portfolio firms, lower capital constraints, and ultimately stronger firm growth and performance. But the empirical documentation of this claim has been challenging. Hellmann and Puri (2000) provide a first detailed comparison of the growth path of venture-backed versus non-venture-backed firms. (A similar approach is taken in Puri and Zarutskie (2008) and Chemmanur et al. (2009), who employ comprehensive Census Bureau records of private firms to form more detailed control groups based on observable characteristics.)
This approach, however, faces the natural challenge that unobserved heterogeneity across entrepreneurs, such as ability or ambition, might drive the growth path of the firms as well as the venture capitalists' decision to invest. These problems are particularly acute for evaluating early-stage investments. An alternative approach is to find exogenous shocks to the level of venture financing. Examples of such exogenous shocks are public policy changes (Kortum and Lerner, 2000), variations in endowment returns (Samila and Sorenson, 2010), and differences in state pension funding levels (Mollica and Zingales, 2007). These studies, however, can only examine the impact of entrepreneurial finance activity at an aggregate level. Given the very modest share that high-potential growth firms represent of all entrepreneurial ventures and economic activity overall, these studies face a "needle in the haystack" type of challenge to detect any results. This paper takes a fresh look at the question of whether entrepreneurial financiers affect the success and growth of new ventures. We focus on a neglected segment of entrepreneurial finance: angel investments. Angel investors have received much less attention than venture capitalists, despite the fact that some estimates suggest that these investors are as significant a force for high-potential start-up investments as venture capitalists, and even more significant investors elsewhere (Shane, 2008; Goldfarb et al., 2007; Sudek et al., 2008). Angel investors are increasingly structured as semi-formal networks of high net worth individuals, often former entrepreneurs themselves, who meet at regular intervals (usually once a month for breakfast or dinner) to hear aspiring entrepreneurs pitch their business plans. The angels then decide whether to conduct further due diligence and ultimately whether to invest in some of these deals, either individually or in subgroups of the members. Similarly to traditional venture capital investments, angel investment groups often adopt a very hands-on role in the deals they get involved in and provide entrepreneurs with advice and contacts to potential business partners. In addition to their inherent interest as funders of early stage companies, angel investment groups are distinguished from the majority of traditional venture capital organizations by the fact that they make their investment decisions through well-documented collections of interest and, in some cases, formal votes. By way of contrast, the venture firms that we talked to all employ a consensual process, in which controversial proposals are withdrawn before coming up for a formal vote or disagreements are resolved in conversations before the actual voting takes place. In addition, venture firms also rarely document the detailed voting behind their decisions. Angel investors, in contrast, express their interest in deals independently from one another and based upon personal assessment. This allows us to observe the level of support, or lack thereof, for the different deals that come before the angel group. These properties allow us to undertake a regression discontinuity design using data from two angel investment groups.
This approach, while today widely used in program evaluations by economists (Lee and Lemieux, 2009), remains underutilized in financial economics (exceptions include Rauh, 2006; and Cherenko and Sunderam, 2009). We essentially compare firms that fall just above and those that fall just below the criteria for funding by the angel group. The underlying identification relies on the idea that firms that fall just around the cut-off level have very similar ex ante characteristics, which allows us to estimate the causal effect of obtaining angel financing. After showing the ex ante comparability of the ventures in the border region, we examine differences in their long-run performance. In this way, we can employ micro-data on firm outcomes while minimizing the problem of unobserved heterogeneity between the funded and rejected transactions. Several clear patterns emerge from our analysis. First, and maybe not surprisingly, companies that elicit higher interest in initial voting at the angel meeting are far more likely to be ultimately funded by the angel groups. More importantly, angel groups display break points or discontinuities where a small change in the collective interest levels of the angels leads to a discrete change in the probability of funding among otherwise comparable ventures. This provides a powerful empirical foothold for overcoming quality differences and selection bias between funded and unfunded ventures. Second, we look at the impact of angel funding on performance and access to follow-on financing for firms that received angel funding compared to those that did not. We first compare the outcomes for the full sample of firms that pitched to the angels and then narrow our identification strategy to the firms that fall just above and below the funding breakpoint we identified. We show that funded firms are significantly more likely to survive for at least four years (or until 2010) and to raise additional financing outside the angel group. They are also more likely to show improved venture performance and growth as measured through growth in web site traffic and web site rankings. The improvement gains typically range between 30% and 50%. An analysis of ventures just above and below the threshold, which removes the endogeneity of funding and many omitted variable biases, confirms the importance of receiving angel investments for the survival and growth of the venture. However, we do not see an impact of angel funding on accessing additional financing using this regression discontinuity approach. This may suggest that access to additional financing might often be a by-product of how angel-funded firms grow but that this path may not be essential for venture success as we measure it. In addition, the result on follow-on venture funding might underline that in the time period we study, prior angel financing was not an essential prerequisite to accessing follow-on funding. However, the results overall suggest that the bundle of inputs that angel investors provide has a large and significant impact on the success and survival of the firms. Finally, we also show that the impact of angel funding on firm outcomes would be overstated if we looked at the full distribution of ventures that approach the angel groups, since there is a clear correlation between the quality of the start-up and the level of interest. Simply restricting the treatment and control groups to a narrow range around the border discontinuity reduces the measured effects by a quarter from the raw correlations.
This result reinforces the need to focus on the regression discontinuity approach we follow in this paper. Thus, this paper provides a fresh look at, and new evidence on, an essential question in entrepreneurial finance. It quantifies the positive impact that angel investors have on the companies that they fund in a way that simultaneously exploits novel, rich micro-data and addresses concerns about unobserved heterogeneity. Our work is closest in spirit to the papers in the entrepreneurial finance literature that focus on the investment process of venture capitalists. For example, Sorensen (2007) assesses the returns to being funded by different tiers of investors. Our work instead focuses on the margin of obtaining initial funding or not. Kaplan and Strömberg (2004) and Kaplan et al. (2009) examine the characteristics and dimensions that venture capitalists rely on when making investment decisions. The plan of this paper is as follows. Section 1 reviews the angel group investment process. Section 2 introduces our angel investment data and describes our methodology. Section 3 introduces our outcomes data. Section 4 presents the analysis. The final section concludes the paper.

1. The Angel Group Investment Process

Angel investments—or equity investments by individuals into high-risk ventures—are among the oldest of human commercial activities, dating back at least as far as the investment agreements recorded in the Code of Hammurabi circa 1790 B.C. For most of American economic history, angels represented the primary way in which entrepreneurs obtained high-risk capital for start-up businesses (e.g., Lamoreaux, Levenstein, and Sokoloff, 2004), whether directly through individuals or through the offices that managed the wealth of high net worth individuals beginning in the last decades of the nineteenth century. Wealthy families such as the Phippses, Rockefellers, Vanderbilts, and Whitneys invested in and advised a variety of business enterprises, including the predecessor entities to AT&T, Eastern Airlines, McDonnell Douglas, and W.R. Grace. The first formal venture capital firm, however, was not established until after World War II: American Research and Development (ARD) was formed by MIT President Karl Compton, Harvard Business School Professor Georges F. Doriot, and Boston business leaders in 1946. Over time, a number of the family offices transformed as well into stand-alone venture firms, including such groups as Bessemer, Venrock, and J.H. Whitney. While angel investors have a long history, angel investment groups are a quite recent phenomenon. Beginning in the mid-1990s, angels began forming groups to collectively evaluate and invest in entrepreneurial ventures. These groups are seen by the angels as having several advantages. First, angels can pool their capital to make larger investments than they could otherwise. Second, each angel can invest smaller amounts in individual ventures, allowing participation in more opportunities and diversification of investment risks. Third, they can undertake costly due diligence of prospective investments as a group, reducing the burdens for individual members. Fourth, these groups are generally more visible to entrepreneurs and thus receive a superior deal flow. Finally, the groups frequently include some of the most sophisticated and active angel investors in the region, which results in superior decision-making. The Angel Capital Association (ACA) lists 300 American groups in its database.
The average ACA member angel group had 42 member angels and invested a total of $1.94 million in 7.3 deals in 2007. Between 10,000 and 15,000 angels are believed to belong to angel groups in the U.S. (these statistics are based on http://www.angelcapitalassociation.org/, accessed February 15, 2010). Most groups follow a template that is more or less similar. Entrepreneurs typically begin the process by submitting to the group an application that may also include a copy of their business plan or executive summary. The firms, after an initial screening by the staff, are then invited to give a short presentation to a small group of members, followed by a question-and-answer session. Promising companies are then invited to present at a monthly meeting (often a weekday breakfast or dinner). The presenting companies that generate the greatest interest then enter a detailed due diligence process, although the extent to which due diligence and screening leads or follows the formal presentation varies across groups. A small group of angel members conducts this additional, intensive evaluation. If all goes well, this process results in an investment one to three months after the presentation. Figure 1 provides a detailed template for Tech Coast Angels (Sudek et al., 2008).

2. Angel Group Data and Empirical Methodology

This section jointly introduces our data and empirical methodology. The discussion is organized around the two groups from which we have obtained large datasets. The unique features of each investment group, their venture selection procedures, and their data records require that we employ conceptually similar, but operationally different, techniques for identifying group-specific discontinuities. We commence with Tech Coast Angels, the larger of our two investment groups, and we devote extra time in this first data description to also convey our empirical approach and the biases it is meant to address. We then describe our complementary approach with CommonAngels and how we ultimately join the two groups together to analyze their joint behavior.

2.1. Tech Coast Angels

Tech Coast Angels is a large angel investment group based in southern California. They have over 300 angels in five chapters seeking high-growth investments in a variety of high-tech and low-tech industries. The group typically looks for funding opportunities of $1 million or less. Additional details on this venture group are available at http://www.techcoastangels.com/. Tech Coast Angels kindly provided us with access to their database regarding prospective ventures under explicit restrictions that the confidentiality of individual ventures and angels remain secure. For our study, this database was exceptional in that it allowed us to fully observe the deal flow of Tech Coast Angels. Our analysis considers ventures that approached Tech Coast Angels between 2001 and 2006. We thus mainly build upon data records that existed in early 2007. At this time, there were over 2,500 ventures in the database. The database is also exceptional in that it has detailed information about many of the companies that are not funded by Tech Coast Angels. We first document in Table 1 the distribution of interest from the angel investors across the full set of potential deals. This description sets the stage for identifying a narrower group of firms around a funding discontinuity that offers a better approach for evaluating the consequences of angel financing.
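The kind of selection funnel summarized in Table 1 can be tabulated with a few lines of pandas. The sketch below is only illustrative: the file name and the columns interest_count and funded are hypothetical stand-ins for the deal-level records described in the text, not the authors' actual data or code; the bin edges mirror Table 1.

```python
import pandas as pd

# Hypothetical deal-level records: one row per venture that approached the group.
# 'interest_count' = number of angels expressing interest; 'funded' = 1 if funded.
deals = pd.read_csv("tca_deals.csv")  # assumed file; column names are illustrative

# Interest-level bins mirroring Table 1 (0, 1-4, 5-9, ..., 35+).
bins = [-1, 0, 4, 9, 14, 19, 24, 29, 34, float("inf")]
labels = ["0", "1-4", "5-9", "10-14", "15-19", "20-24", "25-29", "30-34", "35+"]
deals["interest_bin"] = pd.cut(deals["interest_count"], bins=bins, labels=labels)

# Number of ventures, share funded, and cumulative share of deals per bin.
funnel = (deals.groupby("interest_bin", observed=True)
               .agg(ventures=("funded", "size"), share_funded=("funded", "mean")))
funnel["cumulative_share"] = funnel["ventures"].cumsum() / funnel["ventures"].sum()
print(funnel)
```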
Table 2 then evaluates the ex ante comparability of deals around the border, which is essential for the identification strategy. The central variable for the Tech Coast Angels analysis is a count of the number of angels expressing interest in a given deal. This indication of interest does not represent a financial commitment, but instead expresses a belief that the venture should be pursued further by the group. The decision to invest ultimately depends upon a few angels taking the lead and championing the deal. While this strength of conviction is unobserved, we can observe how funding relates to obtaining a critical mass of interested angels. Table 1 documents the distribution of deals and angel interest levels. The first three columns of Table 1 show that 64% of ventures receive no interest at all. Moreover, 90% of all ventures receive interest from fewer than ten angels. This narrowing funnel continues until the highest bracket, where there are 44 firms that receive interest from 35 or more angels. The maximum observed interest is 191 angels. This funnel shares many of the anecdotal traits of venture funding—such as selecting a few worthy ventures out of thousands of business plans—but it is exceptionally rare to have the interest level documented consistently throughout the distribution and independent of actual funding outcomes. The shape of this funnel has several potential interpretations. It may reflect heterogeneity in quality among companies that are being pitched to the angels. It could also reflect simple industry differences across ventures. For example, the average software venture may receive greater interest than a medical devices company if there are more angels within the group involved in the software industry. There could also be an element of herding around "hot deals." But independent of what exactly drives this investment behavior of angels, we want to explore whether there are discontinuities in interest levels such that small changes in angels expressing interest among otherwise comparable deals result in material shifts in funding probability. The central idea behind this identification strategy is that angel interest in ventures does not map one-to-one into quality differences across ventures, which we verify empirically below. Instead, there is some randomness or noise in why some firms receive n votes and others receive n+1. It is reasonable to believe that there are enough idiosyncrasies in the preferences and beliefs of angels so that the interest count does not present a perfect ranking of the quality of the underlying firms. Certainly, the 2% of ventures with 35 or more interested angels are not comparable to the 64% of ventures with zero interest. But we will show that ventures with 18 votes and 22 votes are much more comparable, except that the latter group is much more likely to be funded. We thus need to demonstrate two pieces. First, we need to identify where in the distribution small changes in interest level lead to a critical mass of angels, and thus a substantial increase in funding probability. As Tech Coast Angels does not have explicit funding rules that yield a mandated cut-off, we must identify from observed behavior where de facto breaks exist. We then need to show that deals immediately above and below this threshold appear similar at the time that they approached Tech Coast Angels.
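As a concrete illustration of the first step, the funded shares reported in Table 1 can be scanned for the largest jump between adjacent interest-level bins. The sketch below does this using the published Table 1 figures; it is an illustration of the idea, not the authors' code.

```python
import pandas as pd

# Funded share by interest-level bin, taken from Table 1 of the paper.
share_funded = pd.Series(
    [0.000, 0.007, 0.037, 0.120, 0.173, 0.381, 0.303, 0.286, 0.409],
    index=["0", "1-4", "5-9", "10-14", "15-19", "20-24", "25-29", "30-34", "35+"],
)

# The de facto break is where funding probability jumps most between adjacent bins:
# here it is the move from the 15-19 bin to the 20-24 bin.
jump = share_funded.diff()
print(jump.idxmax(), round(jump.max(), 3))  # -> 20-24, 0.208
```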
To investigate the first part, the last column of Table 1 documents the fraction of ventures in each interest group that are ultimately funded by Tech Coast Angels. None of the ventures with zero interest are funded, whereas over 40% of deals in the highest interest category are. The rise in funding probability is monotonic with interest level, excepting some small fluctuations at high interest levels. There is a very stark jump in funding probability between interest levels of 15-19 angels and 20-24 angels, where the funded share increases from 17% to 38%. This represents a distinct and permanent shift in the relationship between funding and interest levels. We thus identify this point as our discontinuity for Tech Coast Angels. In most of what follows, we discard deals that are far away from this threshold, and instead look around the border. We specifically drop the 90% of deals with fewer than ten interested angels, and we drop the 44 deals with very high interest levels. We designate our "above border" group as those ventures with interest levels of 20-34; our "below border" group is defined as ventures with interest levels of 10-19. Having identified the border discontinuity from the data, we now verify the second requirement: that ventures above and below the border look ex ante comparable except that they received funding from Tech Coast Angels. This step is necessary to assert that we have identified a quasi-exogenous component to angel investing that is not merely reflecting underlying quality differences among the firms. Once established, a comparison of the outcomes of above border versus below border ventures will provide a better estimate of the role of angel financing in venture success, as the quality differences inherent in Table 1's distribution will be removed. Before assessing this comparability, we make two sample adjustments. First, to allow us to later jointly analyze our two investment groups, we restrict the sample to ventures that approached Tech Coast Angels in the 2001-2006 period. This restriction also allows us a minimum horizon of four years for measuring outcomes. Second, we remove cases where the funding opportunity is withdrawn from consideration by the venture itself. These withdrawn deals are mainly due to ventures being funded by venture capital firms (i.e., the venture was courting multiple financiers simultaneously). As these deals do not fit well into our conceptual experiment of the benefits and costs of receiving or being denied angel funding, it is best to omit them from the sample. Our final sample includes 87 firms from Tech Coast Angels, with 46 ventures above the border and 41 below. Forty-five of the 87 ventures are funded by Tech Coast Angels. Table 2 shows that the characteristics of ventures above and below the funding threshold are very similar to one another ex ante. If our empirical approach is correct, the randomness in how localized interest develops will result in the observable characteristics of firms immediately above and below the threshold not being statistically different. Table 2 documents this comparability across a number of venture characteristics. Columns 2 and 3 present the means of the above border and below border groups, respectively. The fourth column tests for the equality of the means, and the t-tests allow for unequal variance. The two border groups are very comparable in terms of venture traits, industries, and venture stages.
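A sketch of the kind of equality-of-means check reported in Table 2 follows. It uses Welch t-tests, which allow unequal variances across the two groups as the table notes describe; the file and column names are assumptions made for illustration, not the authors' data extract.

```python
import pandas as pd
from scipy import stats

# Hypothetical border-region sample (ventures with 10-34 interested angels).
border = pd.read_csv("tca_border_deals.csv")  # assumed file and columns
border["above_border"] = (border["interest_count"] >= 20).astype(int)

above = border[border["above_border"] == 1]
below = border[border["above_border"] == 0]

# Welch's t-test (equal_var=False) for each ex ante venture characteristic.
for trait in ["financing_sought", "documents_from_company",
              "management_team_size", "employee_count"]:
    t_stat, p_val = stats.ttest_ind(above[trait].dropna(), below[trait].dropna(),
                                    equal_var=False)
    print(f"{trait}: above={above[trait].mean():.1f}, "
          f"below={below[trait].mean():.1f}, p={p_val:.3f}")
```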
The first four rows show that basic characteristics like the amount of funding 13 requested, the documents provided by the venture to the angels, and the firm‘s number of managers and employees are not materially different for the firms above and below the discontinuity. The same is true for industry composition and stage of the business (e.g., is the firm in the idea stage, in its initial marketing and product development stage, or already revenue generating). For all of these traits, the null hypothesis that the two groups are similar is not rejected. While there are no observable differences in the characteristics of the ventures in the first three panels, the fourth panel of Table 2 shows that there are significant differences in how angels engage with ventures above and below the cut-off. With just a small adjustment in interest levels, angels assemble many more documents regarding the venture (evidence of due diligence), have more discussion points in their database about the opportunity, and ultimately are 60% more likely to fund the venture. All of these differences are statistically significant. 2.2. CommonAngels CommonAngels is the leading angel investment group in Boston, Massachusetts. They have over 70 angels seeking high-growth investments in high-tech industries. The group typically looks for funding opportunities between $500 thousand and $5 million. Additional details on this venture group are available at http://www.commonangels.com. CommonAngels kindly provided us with access to their database regarding prospective ventures under explicit restrictions that the confidentiality of individual ventures and angels remain secure. The complete database for CommonAngels as of early 2007 contains over 2000 ventures. The funnel process is again such that a small fraction of ventures receive funding. 14 Unlike the Tech Coast Angels data, however, CommonAngels does not record interest for all deals. We thus cannot explicitly construct a distribution similar to Table 1. CommonAngels does, however, conduct a paper-based poll of members following pitches at its monthly breakfast meetings. Most importantly, attending angels give the venture an overall score. Angels also provide comments about ventures and potential investments they might make in the company. Figure 2 provides a recent evaluation sheet. We focus on the overall score provided by angels for the venture as this metric is collected on a consistent basis throughout the sample period. CommonAngels provided us with the original ballots for all pitches between 2001 and 2006. After dropping two poor quality records, our sample has 63 pitches in total. One potential approach would be to order deals by the average interest levels of angels attending the pitch. We find, however, that the information content in this measure is limited. Instead, the data strongly suggest that the central funding discontinuity exists around the share of attending angels that award a venture an extremely high score. During the six years covered, CommonAngels used both a five and ten point scale. It is extremely rare that an angel awards a perfect score to a pitch. The breaking point for funding instead exists around the share of attending angels that award the pitch 90% or more of the maximum score (that is, 4.5 out of 5, 9 out of 10). This is close in spirit to the dichotomous expression of interest in the Tech Coast Angels database. Some simple statistics describe the non-linear effect. 
Of the 63 pitches, 14 ventures receive a 90% or above score from at least one angel; no deal receives such a score from more than 40% of attending angels. Of these 14 deals, 7 deals are ultimately funded by CommonAngels. Of the 49 other deals, only 11 are funded. This stark discontinuity is not present when looking at lower cut-offs for interest levels. For example, all but 12 ventures 15 receive at least one vote that is 80% of the maximum score (that is, 4 out of 5, 8 out of 10). There is further no material difference in funding probability based upon receiving more or fewer 80% votes. The same applies to lower cut-offs for interest levels. We restrict the sample to the 43 deals that have at least 20% of the attending angels giving the presentation a score that is 80% of the maximum possible score or above. As a specific example, a venture is retained after presenting to a breakfast meeting of 30 angels if at least six of those angels score the venture as 8 out of 10 or higher. This step removes the weakest presentations and ventures. We then define our border groups based upon the share of attending angels that give the venture a score greater than or equal to 90% of the maximum possible score. To continue our example, a venture is considered above border if it garners six or more angels awarding the venture 9 out of 10 or better. A venture with only five angels at this extreme value is classified as below border. While distinct, this procedure is conceptually very similar to the sample construction and culling undertaken with the Tech Coast Angels data. We only drop 20 Common Angel pitches that receive low scores, but that is because the selection into providing a formal pitch to the group itself accomplishes much of the pruning. With Tech Coast Angels, we drop 90% of the potential deals due to low interest levels. We implicitly do the same with CommonAngels by focusing only on the 63 pitches out of over 2000 deals in the full database. Our formal empirical analyses jointly consider Tech Coast Angels and CommonAngels. To facilitate this merger, we construct simple indicator variables for whether a venture is funded or not. We likewise construct an indicator variable for above and below the border discontinuity. We finally construct uniform industry measures across the groups. This pooling produces a regression sample of 130 ventures.16 3. Outcome Data This section documents the data that we collect on venture outcomes. This is the most significant challenge for this type of project as we seek comparable data for both funded and unfunded ventures. In many cases, the prospective deals are small and recently formed, and may not even be incorporated. We develop three broad outcomes: venture survival, venture growth and performance as measured by web site traffic data, and subsequent financing events. 3.1. Venture Survival Our simplest measure is firm survival as of January 2010. This survival date is a minimum of four years after the potential funding event with the angel group. We develop this measure through several data sources. We first directly contacted as many ventures as possible to learn their current status. Second, we looked for evidence of the ventures‘ operations in the CorpTech and VentureXpert databases. Finally, we examined every venture‘s web site if one exists. Existence of a web site is not sufficient for being alive, as some ventures leave a web site running after closing operations. We thus based our measurement on how recent various items like press releases were. 
In several cases, ventures were acquired prior to 2010. We coded whether the venture was alive or not through a judgment of the size of the acquisition. Ventures are counted as alive if the acquisition or merger was a successful exit that included major announcements or large dollar amounts. If the event was termed an "asset sale" or similar, we code the venture as not having survived. The results below are robust to simply dropping these cases.

3.2. Venture Performance and Web Site Traffic

Our second set of metrics quantifies whether ventures are growing and performing better in the period after the potential financing event. While we would ideally consider a range of performance variables like employment, sales, and product introductions, obtaining data on private, unfunded ventures is extremely challenging. A substantial number of these ventures do not have employees, which limits their coverage even in comprehensive datasets like the Census Bureau surveys. We are able to gain traction, however, through web traffic records. To our knowledge, this is the first time that this measure has been employed in an entrepreneurial finance study. We collected web traffic data from www.alexa.com, one of the largest providers of this type of information. Alexa collects its data primarily by tracking the browsing patterns of web users who have installed the Alexa Toolbar, a piece of software that attaches itself to a user's Internet browser and records the user's web use in detail. According to the company, there are currently millions of such users. The statistics are then extrapolated from this user subset to the Internet population as a whole. The two "building block" pieces of information collected by the toolbar are web reach and page views. Web reach is a measure of what percentage of the total number of Internet users visit the website in question, and page views measures how many pages, on average, they visit on that website. Multiple page views by the same user in the same day only count as one entry in the data. The two usage variables are then combined to produce a variable known as site rank, with the most visited sites like Yahoo and Google having lower ranks. We collected web traffic data in the summer of 2008 and in January 2010. We identify 91 of our 130 ventures in one of the two periods, and 58 ventures in both periods. The absolute level of web traffic and its rank are very dependent upon the specific traits and business models of ventures. This is true even within broad industry groups, as degrees of customer interaction vary. Some ventures may also wish to remain "under the radar" for a few years until they are ready for product launch or have obtained intellectual property protection for their work. Moreover, the collection method used by Alexa may introduce biases for certain venture types. We thus consider the changes in web performance for the venture between the two periods. These improvements or declines are more generally comparable across ventures. One variable simply compares the log ratio of the web rank in 2010 to that in 2008. This variable is attractive in that it measures the magnitudes of improvements and declines in traffic. A limitation, however, is that it is only defined for ventures whose web sites are active in both periods. We thus also define a second outcome measure as an indicator variable for improved venture performance on the web.
If we observe the web ranks of a company in both 2008 and 2010, the indicator variable takes a value of one if the rank in 2010 is better than that in 2008. If we only observe the company on the web in 2008, we deem its web performance to have declined by 2010. Likewise, if we only observe a company in 2010, we deem its web performance to have improved. This technique allows us to consider all 91 ventures for which we observe web traffic at some point, while sacrificing the granularity of the other measure.

3.3. Subsequent Financing Events

Our final measures describe whether the venture received subsequent financing external to the angel group. We define this measure through data collected from CorpTech and VentureXpert, cross-checked directly with as many ventures as possible. We consider a simple indicator variable for a subsequent, external financing and a count of the number of financing rounds.

4. Results

This section documents our empirical results. We first more closely examine the relationship between border investments and angel funding. We then compare the subsequent outcomes of funded ventures with non-funded ventures; we likewise compare above border ventures with those below the discontinuity.

4.1. Border Discontinuities and Angel Funding

Table 3 formally tests that there is a significant discontinuity in funding around the thresholds for the ventures considered by Tech Coast Angels and CommonAngels. The dependent variable is an indicator variable that equals one if the firm received funding and zero otherwise. The primary explanatory variable is an indicator variable for the venture being above or below the interest discontinuity. Column 1 controls for angel group fixed effects, year fixed effects, and industry fixed effects. Year fixed effects are for the year that the venture approached the angel group. These regressions combine data from the two angel groups. Across these two groups, we have 130 deals that are evenly distributed above and below the discontinuity. We find that there is a statistically and economically significant relationship between funding likelihood and being above the border: being above the border increases funding likelihood by about 33%. Clearly, the border designation is not an identity or a perfect rule, but it does signify a very strong shift in funding probability among ventures that are ex ante comparable, as shown in Table 2. Column 2 shows similar results when we add year*angel group fixed effects. These fixed effects control for the secular trends of each angel group. The funding jump also holds for each angel group individually. Column 3 repeats the regression controlling for deal characteristics like firm size and number of employees at the time of the pitch. The sample size shrinks to 87 as we only have this information for Tech Coast Angels deals. But despite the smaller sample size, we still find a significant difference in funding probability. The magnitude of the effect is comparable to the full sample at 29%. Unreported regressions find a group-specific elasticity for CommonAngels of 0.45 (0.21). These patterns further hold in a variety of unreported robustness checks. These results suggest that the identified discontinuities provide a reasonable identification strategy.

4.2. The Impact of Funding on Firm Outcomes

We now look at the relationship between funding and firm outcomes.
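The specifications described in this section are linear probability models with angel group, year, and industry fixed effects and robust standard errors. Below is a minimal statsmodels sketch of that type of specification; the data file and every variable name are hypothetical placeholders rather than the authors' dataset or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pooled border-region sample from both angel groups.
sample = pd.read_csv("pooled_border_sample.csv")  # assumed file and columns

# Table 4-style regression: survival on an angel-funding indicator, with angel group,
# year, and industry fixed effects and heteroskedasticity-robust (HC1) standard errors.
lpm = smf.ols("alive_2010 ~ funded + C(angel_group) + C(year) + C(industry)",
              data=sample).fit(cov_type="HC1")
print(lpm.params["funded"], lpm.bse["funded"])

# Web-performance outcome: log ratio of 2010 to 2008 Alexa ranks (negative values
# are improvements), here regressed on the above-border indicator in the spirit of
# the Table 6 comparison.
sample["log_rank_ratio"] = np.log(sample["rank_2010"] / sample["rank_2008"])
rd = smf.ols("log_rank_ratio ~ above_border + C(angel_group) + C(year) + C(industry)",
             data=sample).fit(cov_type="HC1")
print(rd.params["above_border"], rd.bse["above_border"])
```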
In the first column of Table 4, we regress a dummy variable for whether the venture was alive in 2010 on the indicator for whether the firm received funding from the angel group. We control for angel group, year, and industry fixed effects. The coefficient on indicator variable is 0.27 and is statistically significant at the 1% level. Firms that received angel funding are 27% more likely to survive for at least 4 years. Columns 2 through 5 repeat this regression specification for the other outcomes variables. Funded companies show improvements in web traffic performance. Funded ventures are 16% more likely to have improved performance, but this estimate is not precisely measured. On the other hand, our intensive measure of firm performance, the log ratio of web site ranks, finds a more powerful effect. Funded ventures show on average 39% stronger improvements in web rank than unfunded ventures.21 Finally, we estimate whether angel funding promotes future funding opportunities. We only look at venture funding external to the angel group in question. Column 4 finds a very large effect: angel funding increases the likelihood of subsequent venture investment by 44%. This relationship is very precisely measured. Column 5 also shows a positive relationship to a count of additional venture rounds. Funded firms have about 3.8 more follow-on funding rounds than those firms that did not get angel funding in the first place. Of course, we cannot tell from this analysis whether angel-backed companies pursue different growth or investment strategies and thus have to rely on more external funding. Alternatively, the powerful relationships could reflect a supply effect where angel group investors and board members provide networks, connections, and introductions that help ventures access additional funding. We return this issue below after viewing our border discontinuity results. 4.3. The Role of Sample Construction The results in Table 4 suggest an important association between angel funding and venture performance. In describing our data and empirical methodology, we noted several ways that our analysis differed from a standard regression. We first consider only ventures that approach our angel investors, rather than attempting to draw similar firms from the full population of business activity to compare to funded ventures. This step helps ensure ex ante comparable treatment and control groups in that all the ventures are seeking funding. Second, we substantially narrow even this distribution of prospective deals (illustrated in Table 1) until we have a group of funded and unfunded companies that are ex ante comparable (show in Table 2). 22 This removes heterogeneous quality in the ventures that approach the angel investors. Finally, we introduce the border discontinuity to bring exogenous variation in funding outcomes. Before proceeding to the border discontinuity, it is useful to gauge how much the second step— narrowing the sample of ventures to remove quality differences inherent in the selection funnel—influences our regression estimates. Table 5 presents this analysis for one outcome variable and the Tech Coast Angels data. We are restricted to only one outcome variable by the intense effort to build any outcomes data for unfunded ventures. The likelihood of receiving subsequent venture funding is the easiest variable to extend to the full sample. The first column repeats a modified, univariate form of Column 4 in Table 4 with just the Tech Coast Angels sample. 
The elasticities are very similar. The second column expands the sample to include 2385 potential ventures in the Tech Coast Angels database. The elasticity increases 25% to 0.56. The difference in elasticities between the two columns demonstrates the role of sample construction in assessing angel funding and venture performance. The narrower sample provides a more comparable control group. Our rough estimate of the bias due to not controlling for heterogeneous quality is thus about a quarter of the true association. 4.4. Border Discontinuities and Firm Outcomes Table 6 considers venture outcomes and the border discontinuity. Even with eliminating observable heterogeneity through sample selection, the results in Table 4 are still subject to the criticism that ventures are endogenously funded. Omitted variables may also be present. Looking above and below the funding discontinuity helps us to evaluate whether the ventures that looked ex ante comparable, except in their probability of being funded, are now performing differently. 23 This test provides a measure of exogeneity to the relationship between angel financing and venture success. Table 6 has the same format as Table 5; the only difference is that the explanatory variable is the indicator variable for being above the funding border. The results are similar in direction and magnitude for the first three outcomes, although the coefficients in Tables 5 and 6 are not directly comparable in a strict sense. Being above the border is associated with stronger chances for survival and better operating performance as measured by web site traffic. This comparability indicates that endogeneity in funding choices and omitted variable biases are not driving these associations for the impact of angel financing. On the other hand, the last two columns show no relationship between being above the border discontinuity and improved funding prospects in later years. Our experiment thus does not confirm that angel financing leads to improved future investment flows to portfolio companies. This may indicate the least squares association between current financing and future financing reflects the investment and growth strategies of the financiers, but that this path is not necessary for venture success as measured by our outcome variables. This interpretation, however, should be treated with caution as we are not able to measure a number of outcomes that would be of interest (e.g., the ultimate value of the venture at exit). 5. Conclusions and Interpretations The results of this study, and our border analysis in particular, suggest that angel investments improve entrepreneurial success. By looking above and below the discontinuity in a restricted sample, we remove the most worrisome endogeneity problems and the sorting between ventures and investors. We find that the localized increases in interest by angels at break points, 24 which are clearly linked to obtaining critical mass for funding, are associated with discrete jumps in future outcomes like survival and stronger web traffic performance. Our evidence regarding the role of angel funding for access to future venture financing is more mixed. The latter result could suggest that start-up firms during that time period had a number of funding options and thus could go to other financiers when turned down by our respective angel groups. Angel funding per se was not central in whether the firm obtained follow-on financing at a later point. 
However, angel funding by one of the groups in our sample does positively affect the long run survival and web traffic of the start-ups. We do not want to push this asymmetry too far, but one might speculate that access to capital per se is not the most important value added that angel groups bring. Our results suggest that some of the ?softer? features, such as their mentoring or business contacts, may help new ventures the most. Overall we find that the interest levels of angels at the stages of the initial presentation and due diligence are predictive of investment success. However, additional screening and evaluation do not substantially improve the selection and composition of the portfolio further. These findings suggest that the selection and screening process is efficient at sorting proposals into approximate bins: complete losers, potential winners, and so on. The process has natural limitations, however, in further differentiating among the potential winners (e.g., Kerr and Nanda, 2009). At the same time, this paper has important limitations. Our experiment does not allow us to identify the costs to ventures of angel group support (e.g., Hsu, 2004), as equity positions in the counterfactual, unfunded ventures are not defined. We thus cannot evaluate whether taking the money was worth it from the entrepreneur‘s perspective after these costs are considered. On a similar note, we have looked at just a few of the many angel investment groups that are active in 25 the US. Our groups are professionally organized and managed, and it is important for future research to examine a broader distribution of investment groups and their impact for venture success. This project demonstrates that angel investments are important and also offer an empirical foothold for analyzing many important questions in entrepreneurial finance 26 References Admati, A., and Pfleiderer, P. 1994. Robust financial contracting and the role for venture capitalists. Journal of Finance 49, 371–402. Berglöf, E. 1994. A control theory of venture capital finance. Journal of Law, Economics, and Organizations 10, 247–67. Bergemann, D., and Hege, U. 1998. Venture capital financing, moral hazard, and learning. Journal of Banking and Finance 22, 703-35. Chemmanur, T., Krishnan, K., and Nandy, D. 2009. How does venture capital financing improve efficiency in private firms? a look beneath the surface. Unpublished working paper, Center for Economic Studies. Cherenko, S., and Sunderam, A. 2009. The real consequences of market segmentation. Unpublished working paper, Harvard University. Cornelli, F., and Yosha, O. 2003. Stage financing and the role of convertible debt. Review of Economic Studies 70, 1–32. Goldfarb, B., Hoberg, G., Kirsch, D., and Triantis, A. 2007. Are angels preferred series a investors? Unpublished working paper, University of Maryland. Hellmann, T. 1998. The allocation of control rights in venture capital contracts. RAND Journal of Economics 29, 57–76. Hellmann, T., and Puri, M. 2000. The interaction between product market and financing strategy: the role of venture capital. Review of Financial Studies 13, 959–84. Hsu, D. 2004. What do entrepreneurs pay for venture capital affiliation? Journal of Finance 59, 1805–44. Kaplan, S., and Strömberg, P. 2004. Characteristics, contracts, and actions: evidence from venture capitalist analyses. Journal of Finance 59, 2177–210. Kaplan, S., Sensoy, B., and Strömberg, P. 2009. Should investors bet on the jockey or the horse? 
evidence from the evolution of firms from early business plans to public companies. Journal of Finance 64, 75–115. Kerr, W., and Nanda, R. 2009. Democratizing entry: banking deregulations, financing constraints, and entrepreneurship. Journal of Financial Economics 94, 124–49. Kortum, S., and Lerner, J. 2000. Assessing the contribution of venture capital to innovation. RAND Journal of Economics 31, 674–92. 27 Lamoreaux, N., Levenstein, M., and Sokoloff, K. 2004. Financing invention during the second industrial revolution: Cleveland, Ohio, 1870-1920. Working paper no. 10923, National Bureau of Economic Research. Lee, D., and Lemieux, T. 2009. Regression discontinuity designs in economics. Working paper no. 14723, National Bureau of Economic Research. Mollica, M., and Zingales, L. 2007. The impact of venture capital on innovation and the creation of new businesses. Unpublished working paper, University of Chicago. Puri, M., and Zarutskie, R. 2008. On the lifecycle dynamics of venture-capital- and non-venturecapital-financed firms. Unpublished working paper, Center for Economic Studies. Rauh, J. 2006. Investment and financing constraints: evidence from the funding of corporate pension plans. Journal of Finance 61, 31–71. Samila, S., and Sorenson, O. 2010. Venture capital, entrepreneurship and economic growth. Review of Economics and Statistics, forthcoming. Shane, S. 2008. The importance of angel investing in financing the growth of entrepreneurial ventures. Unpublished working paper, U.S. Small Business Administration, Office of Advocacy. Sorensen, M. 2007. How smart is the smart money? a two-sided matching model of venture capital. Journal of Finance 62, 2725–62. Sudek, R., Mitteness, C., and Baucus, M. 2008. Betting on the horse or the jockey: the impact of expertise on angel investing. Academy of Management Best Paper Proceedings.28 Figure 1: Tech Coast Angels Investment Process29 Figure 2: CommonAngels Pitch Evaluation SheetAngel group Number of Cumulative Share funded interest level ventures share of ventures by angel group 0 1640 64% 0.000 1-4 537 84% 0.007 5-9 135 90% 0.037 10-14 75 93% 0.120 15-19 52 95% 0.173 20-24 42 96% 0.381 25-29 33 97% 0.303 30-34 21 98% 0.286 35+ 44 100% 0.409 Table 1: Angel group selection funnel Notes: Table documents the selection funnel for Tech Coast Angels. The vast majority of ventures proposed to Tech Coast Angels receive very little interest, with 90% of plans obtaining the interest of fewer than ten angels. A small fraction of ventures obtain extremely high interest levels with a maximum of 191 angels expressing interest. We identify an interest level of 20 angels as our border discontinuity. Our "below border" group consists of ventures receiving 10-19 interested angels. 
Our "above border" group consists of ventures receiving 20-34 interested angels.Traits of ventures above and Above border Below border Two-tailed t-test below border discontinuity ventures ventures for equality of means Basic characteristics Financing sought ($ thousands) 1573 1083 0.277 Documents from company 3.0 2.5 0.600 Management team size 5.8 5.4 0.264 Employee count 13.4 11.2 0.609 Primary industry (%) Biopharma and healthcare 23.9 29.3 0.579 Computers, electronics, and measurement 15.2 17.1 0.817 Internet and e-commerce 39.1 39.0 0.992 Other industries 21.7 14.6 0.395 Company stage (%) Good idea 2.2 2.4 0.936 Initial marketing and product development 34.8 46.3 0.279 Revenue generating 63.0 51.2 0.272 Angel group decisions Documents by angel members 10.5 5.1 0.004 Discussion items by angel members 12.0 6.7 0.002 Share funded 63.0 39.0 0.025 Table 2: Comparison of groups above and below border discontinuity Notes: Table demonstrates the ex ante comparability of ventures above and below the border discontinuity. Columns 2 and 3 present the means of the above border and below border groups, respectively. The fourth column tests for the equality of the means, and the t-tests allow for unequal variance. The first three panels show that the two groups are very comparable in terms of venture traits, industries, and venture stage. The first row tests equality for log value of financing sought. For none of these ex ante traits are the groups statistically different from each other. The two groups differ remarkably, however, in the likelihood of receiving funding. This is shown in the fourth panel. Comparisons of the subsequent performance of these two groups thus offers a better estimate of the role of angel financing in venture success as the quality heterogeneity of ventures inherent in the full distribution of Table 1 is removed.(1) (2) (3) (0,1) indicator variable for venture being 0.328 0.324 0.292 above the funding border discontinuity (0.089) (0.094) (0.110) Angel group, year, and industry fixed effects Yes Yes Yes Year x angel group fixed effects Yes Additional controls Yes Observations 130 130 87 Table 3: Border discontinuity and venture funding by angel groups Notes: Regressions employ linear probability models to quantify the funding discontinuity in the border region. Both Tech Coast Angels and CommonAngels data are employed excepting Column 3. Additional controls in Column 3 include stage of company and employment levels fixed effects. A strong, robust increase in funding probability of about 30% exists for ventures just above the border discontinuity compared to those below. Robust standard errors are reported. (0,1) indicator variable for being funded by angel group(0,1) indicator (0,1) indicator Log ratio of (0,1) indicator Count variable for variable for 2010 web rank variable for of subsequent venture being improved web to 2008 rank receiving later venture financing alive in January rank from 2008 (negative values funding external rounds external 2010 to 2010 are improvements) to angel group to angel group (1) (2) (3) (4) (5) (0,1) indicator variable for venture 0.276 0.162 -0.389 0.438 3.894 funding being received from angel group (0.082) (0.107) (0.212) (0.083) (1.229) Angel group, year, and industry fixed effects Yes Yes Yes Yes Yes Observations 130 91 58 130 130 Table 4: Analysis of angel group financing and venture performance Notes: Linear regressions quantify the relationship between funding and venture outcomes. 
Both Tech Coast Angels and CommonAngels data for 2001-2006 are employed in all regressions. Differences in sample sizes across columns are due to the availability of outcome variables. The first column tests whether the venture is alive in 2010. The second and third columns test for improved venture performance through web site traffic data from 2008 to 2010. Column 2 is an indicator variable for improved performance, while Column 3 gives log ratios of web traffic (a negative value indicates better performance). The last two columns test whether the venture received subsequent financing outside of the angel group by 2010. Across all of these outcomes, funding by an angel group is associated with stronger subsequent venture performance. Robust standard errors are reported.Outcome variable is (0,1) indicator Simple TCA Full TCA variable for receiving later funding univariate univariate external to angel group regression with regression with (see Column 4 of Table 4) border sample complete sample (1) (2) (0,1) indicator variable for venture 0.432 0.562 funding being received from angel group (0.095) (0.054) Observations 87 2385 Table 5: Border samples versus full samples Notes: Linear regressions quantify the role of sample construction in the relationship between funding and venture outcomes. Column 1 repeats a modified, univariate form of the Column 4 in Table 4 with just the Tech Coast Angels sample. Column 2 expands the sample to include all of the potential ventures in the Tech Coast Angels database, similar to Table 1. The difference in elasticities between the two columns demonstrates the role of sample construction in assessing angel funding and venture performance. The narrower sample provides a more comparable control group. Robust standard errors are reported.(0,1) indicator (0,1) indicator Log ratio of (0,1) indicator Count variable for variable for 2010 web rank variable for of subsequent venture being improved web to 2008 rank receiving later venture financing alive in January rank from 2008 (negative values funding external rounds external 2010 to 2010 are improvements) to angel group to angel group (1) (2) (3) (4) (5) (0,1) indicator variable for venture being 0.229 0.232 -0.382 0.106 -0.318 above the funding border discontinuity (0.094) (0.120) (0.249) (0.100) (1.160) Angel group, year, and industry fixed effects Yes Yes Yes Yes Yes Observations 130 91 58 130 130 Table 6: Analysis of border discontinuity and venture performance Notes: See Table 4. Linear regressions quantify the relationship between the border discontinuity and venture outcomes. Companies above the border are more likely to be alive in 2010 and have improved web performance relative to companies below the border. These results are similar to the funding relationships in Table 4. The border discontinuity in the last two columns, however, is not associated with increased subsequent financing events.The Cycles of Theory Building in Management Researc
05-057
Copyright © Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.

The Cycles of Theory Building in Management Research
Paul R. Carlile, School of Management, Boston University, Boston, MA 02215, carlile@bu.edu
Clayton M. Christensen, Harvard Business School, Boston, MA 02163, cchristensen@hbs.edu
October 27, 2004. Version 5.0

Theories thus become instruments, not answers to enigmas, in which we can rest. We don't lie back upon them, we move forward, and, on occasion, make nature over again by their aid. (William James, 1907: 46)

Some scholars of organization and strategy expend significant energy disparaging and defending various research methods. Debates about deductive versus inductive theory-building, and about the objectivity of information from field observation versus large-sample numerical data, are dichotomies that surface frequently in our lives and those of our students. Despite this focus, some of the most respected members of our research profession (e.g., Simon (1976), Solow (1985), Staw and Sutton (1995), and Hayes (2002)) have continued to express concerns that the collective efforts of business academics have produced a paucity of theory that is intellectually rigorous, practically useful, and able to stand the tests of time and changing circumstances. The purpose of this paper is to outline a process of theory building that links questions about data, methods, and theory. We hope that this model can provide a common language about the research process that helps scholars of management spend less time defending the style of research they have chosen, and build more effectively on each other's work. Our analysis operates at two levels: the individual research project and the iterative cycles of theory building in which a community of scholars participates. The model synthesizes the work of others who have studied how communities of scholars cumulatively build valid and reliable theory, such as Kuhn (1962), Campbell & Stanley (1963), Glaser & Strauss (1967), and Yin (1984). It has normative and pedagogical implications for how we conduct research, evaluate the work of others, and train our doctoral students. While many feel comfortable in their own understanding of these perspectives, it has been our observation that those who have written about the research process and those who think they understand it do not yet share even a common language. The same words are applied to very different phenomena and processes, and the same phenomena can be called by many different words. Papers published in reputable journals often violate rudimentary rules for generating cumulatively improving, reliable, and valid theory.
While recognizing that research progress is hard to achieve at a collective level, we assert here that if scholars and practitioners of management shared a sound understanding of the process by which theory is built, we could be much more productive in doing research that doesn't just get published, but meets the standards of rigorous scholarship and helps managers know what actions will lead to the results they seek, given the circumstances in which they find themselves. We first describe a three-stage process by which researchers build theory that is at first descriptive, and ultimately normative. Second, we discuss the role that discoveries of anomalies play in the building of better theory, and describe how scholars can build theory whose validity can be verified. Finally, we suggest how scholars can define research questions, execute projects, and design student coursework that lead to the building of good theory.

The Theory Building Process

The building of theory occurs in two major stages – the descriptive stage and the normative stage. Within each of these stages, theory builders proceed through three steps. The theory-building process iterates through these stages again and again.1 In the past, management researchers have quite carelessly applied the term theory to research activities that pertain to only one of these steps. Terms such as "utility theory" in economics and "contingency theory" in organization design, for example, actually refer only to an individual stage in the theory-building process in their respective fields. We propose that it is more useful to think of the term "theory" as a body of understanding that researchers build cumulatively as they work through each of the three steps in the descriptive and normative stages. In many ways, the term "theory" might better be framed as a verb as much as a noun – because the body of understanding is continuously changing as scholars who follow this process work to improve it.

The Building of Descriptive Theory

The descriptive stage of theory building is a preliminary stage because researchers must pass through it in order to develop normative theory. Researchers who are building descriptive theory proceed through three steps: observation, categorization, and association.

Step 1: Observation

In the first step researchers observe phenomena and carefully describe and measure what they see. Careful observation, documentation, and measurement of the phenomena in words and numbers is important at this stage because if subsequent researchers cannot agree upon the descriptions of phenomena, then improving theory will prove difficult. Early management research such as The Functions of the Executive (Barnard, 1939) and the Harvard Business School cases written in the 1940s and 50s was primarily descriptive work of this genre – and was very valuable. This stage of research is depicted in Figure 1 as the base of a pyramid because it is a necessary foundation for the work that follows. The phenomena being explored in this stage include not just things such as people, organizations, and technologies, but processes as well. Without insightful description to subsequently build upon, researchers can find themselves optimizing misleading concepts. As an example: For years, many scholars of inventory policy and supply chain systems used the tools of operations research to derive ever-more-sophisticated optimizing algorithms for inventory replenishment.
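One familiar member of this family of optimizing rules is the classic economic order quantity (EOQ) model. The brief sketch below is illustrative only – it is not drawn from the paper, and the figures are invented – but it shows the kind of calculation such algorithms automate, and it quietly presumes that the current inventory position is known with accuracy.

```python
# Illustrative sketch only: the EOQ model as one simple example of an
# optimizing replenishment rule; all numbers below are invented.
from math import sqrt

def eoq(annual_demand: float, order_cost: float, holding_cost: float) -> float:
    """Order quantity minimizing ordering plus holding cost: sqrt(2DS/H)."""
    return sqrt(2 * annual_demand * order_cost / holding_cost)

# Reorder this quantity whenever stock runs low -- assuming the recorded
# stock level actually matches what is on the shelf.
print(eoq(annual_demand=12_000, order_cost=50.0, holding_cost=2.5))  # ~693 units
```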
Most were based on an assumption that managers know what their levels of inventory are. Ananth Raman’s pathbreaking research of the phenomena, however, obviated much of this research when he showed that most firms’ computerized inventory records were broadly inaccurate – even when they used state-of-the-art automated tracking systems (Raman 199X). He and his colleagues have carefully described how inventory replenishment systems work, and what variables affect the accuracy of those processes. Having laid this foundation, supply chain scholars have now begun to build a body of theories and policies that reflect the real and different situations that managers and companies face. 1 This model is a synthesis of models that have been developed by scholars of this process in a range of fields and scholars: Kuhn (1962) and Popper (1959) in the natural sciences; Kaplan (1964), Stinchcombe (1968), Roethlisberger (1977) Simon (1976), Kaplan (1986), Weick (1989),Eisenhardt (1989) and Van de Ven (2000) in the social sciences. 3 Researchers in this step often develop abstractions from the messy detail of phenomena that we term constructs. Constructs help us understand and visualize what the phenomena are, and how they operate. Joseph Bower’s Managing the Resource Allocation Process (1970) is an outstanding example of this. His constructs of impetus and context, explaining how momentum builds behind certain investment proposals and fails to coalesce behind others, have helped a generation of policy and strategy researchers understand how strategic investment decisions get made. Economists’ concepts of “utility” and “transactions cost” are constructs – abstractions developed to help us understand a class of phenomena they have observed. We would not label the constructs of utility and transactions cost as theories, however. They are part of theories – building blocks upon which bodies of understanding about consumer behavior and organizational interaction have been built. Step 2: Classification With the phenomena observed and described, researchers in the second stage then classify the phenomena into categories. In the descriptive stage of theory building, the classification schemes that scholars propose typically are defined by the attributes of the phenomena. Diversified vs. focused firms, and vertically integrated vs. specialist firms are categorization examples from the study of strategy. Publicly traded vs. privately held companies is a categorization scheme often used in research on financial performance. Such categorization schemes attempt to simplify and organize the world in ways that highlight possibly consequential relationships between the phenomena and the outcomes of interest. Management researchers often refer to these descriptive categorization schemes as frameworks or typologies. Burgelman (1986), for example, built upon Bower’s (1970) construct of context by identifying two different types of context – organizational and strategic. Step 3: Defining Relationships In the third step, researchers explore the association between the category-defining attributes and the outcomes observed. In the stage of descriptive theory building, researchers recognize and make explicit what differences in attributes, and differences in the magnitude of those attributes, correlate most strongly with the patterns in the outcomes of interest. Techniques such as regression analysis typically are useful in defining these correlations. Often we refer to the output of studies at this step as models. 
Descriptive theory that quantifies the degree of correlation between the category-defining attributes of the phenomena and the outcomes of interest are generally only able to make probabilistic statements of association representing average tendencies. For example, Hutton, Miller and Skinner (2000) have examined how stock prices have responded to earnings announcements that were phrased or couched in various terms. They coded types of words and phrases in the statements as explanatory variables in a regression equation, with the ensuing change in equity price as the dependent variable. This analysis enabled the researchers then to assert that, on average across the entire sample of companies and announcements, delivering earnings announcements in a particular way would lead to the most favorable (or least unfavorable) reaction in stock price. Research such as this is important descriptive theory. However, at this point it can only assert on average what attributes are associated with the best 4 results. A specific manager of a specific company cannot know whether following that average formula will lead to the hoped-for outcome in her specific situation. The ability to know what actions will lead to desired results for a specific company in a specific situation awaits the development of normative theory in this field, as we will show below. The Improvement of Descriptive Theory When researchers move from the bottom to the top of the pyramid in these three steps – observation, categorization and association – they have followed the inductive portion of the theory building process. Theory begins to improve when researchers cycle from the top back to the bottom of this pyramid in the deductive portion of the cycle – seeking to “test” the hypothesis that had been inductively formulated. This most often is done by exploring whether the same correlations exist between attributes and outcomes in a different set of data than the data from which the hypothesized relationships were induced. When scholars test a theory on a new data set (whether the data are numbers in a computer, or are field observations taken in a new context), they might find that the attributes of the phenomena in the new data do indeed correlate with the outcomes as predicted. When this happens, this “test” confirms that the theory is of use under the conditions or circumstances observed. 2 However, the researcher returns the model to its place atop the pyramid tested but unimproved. It is only when an anomaly is identified – an outcome for which the theory can’t account – that an opportunity to improve theory occurs. As Figure 1 suggests, discovery of an anomaly gives researchers the opportunity to revisit the categorization scheme – to cut the data in a different way – so that the anomaly and the prior associations of attributes and outcomes can all be explained. In the study of how technological innovation affects the fortunes of leading firms, for example, the initial attribute-based categorization scheme was radical vs. incremental innovation. The statements of association that were built upon it concluded that the leading established firms on average do well when faced with incremental innovation, but they stumble in the face of radical change. But there were anomalies to this generalization – established firms that successfully implemented radical technology change. To account for these anomalies, Tushman & Anderson (1986) offered a different categorization scheme, competence-enhancing vs. 
competence-destroying technological changes. This scheme resolved many of the anomalies to the prior scheme, but subsequent researchers uncovered new ones for which the Tushman-Anderson scheme could not account. Henderson & Clark's (1990) categories of modular vs. architectural innovations, Christensen's (1997) categories of sustaining vs. disruptive technologies, and Gilbert's (2001) threat-vs.-opportunity framing each uncovered and resolved anomalies for which the work of prior scholars could not account. This body of understanding has improved and become remarkably useful to practitioners and subsequent scholars (Adner, 2003; Daneels, 2005) because these scholars followed the process in a disciplined way: they uncovered anomalies, sliced the phenomena in different ways, and articulated new associations between the attributes that defined the categories and the outcome of interest.

2 Popper asserts that a researcher in this phase, when the theory accurately predicted what he observed, can only state that his test or experiment of the theory "corroborated" or "failed to dis-confirm" the theory.

[Figure 1: The Process of Building Theory – a pyramid whose base is "observe, describe & measure the phenomena (constructs)," whose middle layer is "categorization based upon attributes of phenomena (frameworks & typologies)," and whose top is "statements of association (models)," traversed by an inductive process moving up and a deductive process moving down, with the labels predict, confirm, and anomaly on the cycle.]

Figure 1 suggests that there are two sides to every lap around the theory-building pyramid: an inductive side and a deductive side. In contrast to either/or debates about the virtues of deductive and inductive approaches to theory, this suggests that any complete cycle of theory building includes both.3 Descriptive theory-building efforts typically categorize by the attributes of the phenomena because they are easiest to observe and measure. Likewise, correlations between attributes and outcomes are easiest to hypothesize and quantify through techniques such as regression analysis. Kuhn (1962) observed that confusion and contradiction typically are the norm during descriptive theory-building. This phase is often characterized by a plethora of categorization schemes, as in the sequence of studies of technology change cited above, because the phenomena generally have many different attributes. Often, no model is irrefutably superior: each seems able to explain anomalies to other models, but suffers from anomalies of its own.

The Transition from Descriptive to Normative Theory

The confusion and contradiction that often accompany descriptive theory become resolved when careful researchers – often through detailed empirical and ethnographic observation – move beyond statements of correlation to define what causes the outcome of interest. As depicted in Figure 2, they leap across to the top of the pyramid of causal theory. With their understanding of causality, researchers then work to improve theory by following the same three steps that were used in the descriptive stage.

3 Kant, Popper, Feyerabend and others have noted that all observations are shaped, consciously or unconsciously, by cognitive structures, previous experience or some theory-in-use. While it is true that individual researchers might start their work at the top of the pyramid, we believe that the hypotheses that deductive theorists test generally had been derived consciously or unconsciously, by themselves or others, from an inductive source. There are few blue-sky hypotheses that were formulated in the complete absence of observation.
Hypothesizing that their statement of causality is correct, they cycle deductively to the bottom of the pyramid to test the causal statement: if we observe these actions being taken, these should be the outcomes that we observe. When they encounter an anomaly, they then delve into the categorization stage. Rather than using schemes based on attributes of the phenomena, however, they develop categories of the different situations or circumstances in which managers might find themselves. They do this by asking, when they encounter an anomaly, "What was it about the situation in which those managers found themselves that caused the causal mechanism to yield a different result?" By cycling up and down the pyramid of normative theory, researchers will ultimately define the set of the situations or circumstances in which managers might find themselves when pursuing the outcomes of interest. This allows researchers to make contingent statements of causality – to show how and why the causal mechanism results in a different outcome in the different situations. A theory completes the transition from descriptive to normative when it can give a manager unambiguous guidance about what actions will and will not lead to the desired result, given the circumstance in which she finds herself.

[Figure 2: The Transition from Descriptive Theory to Normative Theory – two pyramids. The descriptive-theory pyramid runs from "observe, describe & measure the phenomena" through "categorization by the attributes of the phenomena" to "preliminary statements of correlation." The normative-theory pyramid runs from "observe, describe & measure the phenomena" through "categorization of the circumstances in which we might find ourselves" to "statement of causality," with careful field-based research marking the transition between the two. Each pyramid is traversed by inductive (confirm) and deductive (predict, anomaly) processes.]

The history of research into manned flight is a good way to visualize how this transition from descriptive to normative theory occurs, and how it is valuable. During the Middle Ages, would-be aviators did their equivalent of best-practices research and statistical analysis. They observed the many animals that could fly well, and compared them with those that could not. The vast majority of the successful fliers had wings with feathers on them; and most of those that couldn't fly had neither. This was quintessential descriptive theory. Pesky outliers like ostriches had feathered wings but couldn't fly; bats had wings without feathers and were very good at it; and flying squirrels had neither and got by. But the R² was so high that aviators of the time copied the seemingly salient characteristics of the successful fliers in the belief that the visible attributes of the phenomena caused the outcome. They fabricated wings, glued feathers on them, jumped off cathedral spires, and flapped hard. It never worked. For centuries they assumed that the prior aviators had failed because they had bad wing designs, hadn't bulked up their muscles enough, or hadn't flapped hard enough. There were substantial disagreements about which of the birds' attributes truly enabled flight. For example, Roger Bacon in about 1285 wrote an influential paper asserting that the differentiating attribute was birds' hollow bones (Clegg, 2003). Because man had solid bones, Bacon reasoned, we could never fly. He then proposed several machine designs that could flap their wings with sufficient power to overcome the disadvantage of solid bones. But it still never worked.
Armed with the correlative statements of descriptive theory, aviators kept killing themselves. Then, through his careful study of fluid dynamics, Daniel Bernoulli identified a shape that we call an airfoil – a shape that, when it cuts through air, creates a mechanism that we call lift. Understanding this causal mechanism, which we call Bernoulli's Principle, made flight possible. But it was not yet predictable. In the language of this paper, the theory predicted that aviators would fly successfully when they built machines with airfoils to harness lift. But while they sometimes flew successfully, occasionally they did not. Crashes were anomalies that Bernoulli's theory could not explain. Discovery of these anomalies, however, allowed the researchers to revisit the categorization scheme. But this time, instead of slicing up the world by the attributes of the good and bad fliers, researchers categorized their world by circumstance – asking the question, "What was it about the circumstance that the aviator found himself in that caused the crash?" This then enabled them to improve equipment and techniques that were based upon circumstance-contingent statements of causality: "This is how you should normally fly the plane. But when you get in this situation, you need to fly it differently in order to get the desired outcome. And when you get in that situation, don't even try to fly. It is impossible." When their careful studies of anomalies allowed researchers to identify the set of circumstances in which aviators might find themselves, and to modify the equipment or develop piloting techniques appropriate to each circumstance, manned flight became not only possible, but predictable. Hence, it was the discovery of the fundamental causal mechanism that made flight possible. And it was the categorization of the salient circumstances that made flight predictable. This is how this body of understanding about human flight transitioned from descriptive to normative theory. Disciplined scholars can achieve the same transition in management research. The discovery of the fundamental causal mechanisms makes it possible for managers purposefully to pursue desired outcomes successfully and predictably. When researchers categorize managers' world according to the circumstances in which they might find themselves, they can make circumstance-contingent statements of cause and effect, of action and result.

Circumstance-based categories and normative theory

Some cynical colleagues despair of any quest to develop management theories that make success possible and predictable – asserting that managers' world is so complex that there are an infinite number of situations in which they might find themselves. Indeed, this is very nearly true in the descriptive theory phase. But normative theory generally is not so confusing. Researchers in the normative theory phase resolve confusion by abstracting up from the detail to define a few categories – typically two to four – that comprise the salient circumstances. Which boundaries between circumstances are salient, and which are not? Returning to our account of aviation research, the boundaries that defined the salient categories of circumstance were determined by the necessity to pilot the plane differently. If a different circumstance does not require different methods of piloting, then it is not a meaningful category. The same principle defines the salience of category boundaries in management theory.
If managers find themselves in a circumstance where they must change actions or organization in order to achieve the outcome of interest, then they have crossed a salient boundary. Several prominent scholars have examined the improvement in predictability that accompanies the transition from the attribute-based categorization of descriptive theory, to the circumstance-based categorization of normative theory. Consider, for example, the term “Contingency Theory” – a concept born of Lawrence & Lorsch’s (1967) seminal work. They showed that the best way to organize a company depended upon the circumstances in which the company was operating. In our language, contingency is not a theory per se. Rather, contingency is a crucial element of every normative theory – it is the categorization scheme. Rarely do we find one-size-fits-all answers to every company’s problem. The effective course of action will generally “depend” on the circumstance. Glaser and Strauss’s (1967) treatise on “grounded theory” actually is a book about categorization. Their term substantive theory corresponds to the attribute-defined categories in descriptive theory. And their concept of formal theory matches our definition of normative theory that employs categories of circumstance.. Thomas Kuhn (1962) discussed in detail the transition of understanding from descriptive to normative theory in his study of the emergence of scientific paradigms. He described a preliminary period of confusion and debate in theory building, which is an era of descriptive theory. His description of the emergence of a paradigm corresponds to the transition to normative theory described above. We agree with Kuhn that even when a normative theory achieves the status of a broadly believed paradigm, it continues to be improved through the process of discovering anomalies, as we describe above. Indeed, the emergence of new phenomena – which probably happens more frequently in competitive, organizational and social systems than in the natural sciences – ensures that there will always be additional productive laps up and down the theory pyramid that anomaly-seeking researchers can run. The observation that management research is often faddish has been raised enough that it no longer seems shocking (Micklethwait and Wooldridge, 1996; Abrahamson, 1998). Fads come and go when a researcher studies a few successful companies, finds that they share certain characteristics, concludes that he has seen enough, and then skips the categorization step entirely by writing a book asserting that if all managers would imbue their companies with those same characteristics, they would be similarly successful. When managers then apply the formula and find that it doesn’t work, it casts a pall on the idea. Some faddish theories aren’t uniformly bad. It’s just that their authors were so eager for their theory to apply to everyone that they never took the care to distinguish correlation from causality, or to figure out the circumstances in which their 9 statement of causality would lead to success, and when it would not. Efforts to study and copy “the best practices of successful companies” almost uniformly suffer from this problem. Unfortunately, it is not just authors-for-profit of management books that contribute to the problem of publishing theory whose application is uncertain. Many academics contribute to the problem by taking the other extreme – articulating tight “boundary conditions” outside of which they claim nothing. 
Delimiting the applicability of a theory to the specific time, place, industry and/or companies from which the conclusions were drawn in the first place is a mutation of one of the cardinal sins of research – sampling on the dependent variable. In order to be useful to managers and to future scholars, researchers need to help managers understand the circumstance that they are in. Almost always, this requires that they also be told about the circumstances that they are not in. The Value of Anomalies As indicated before, when researchers in both the descriptive and normative stages use statements of association or causality to predict what they will see, they often observe something that the theory did not lead them to expect; thus identifying an anomaly—something the theory could not explain. This discovery forces theory builders to cycle back into the categorization stage with a puzzle such as “there’s something else going on here” or “these two things that we thought were different, really aren’t.” The results of this effort typically can include: 1) more accurately describing and measuring what the phenomena are and are not; 2) changing the definitions by which the phenomena or the circumstances are categorized – adding or eliminating categories or defining them in different ways; and/or 3) articulating a new theoretical statement of what is associated with, or causes what, and why, and under what circumstances. The objective of this process is to revise theory so that it still accounts for both the anomalies identified and the phenomena as previously explained. Anomalies are valuable in theory building because the discovery of an anomaly is the enabling step to identifying and improving the categorization scheme in a body of theory – which is the key to being able to apply the theory with predictable results. Researchers whose goal is to “prove” a theory’s validity are likely to view discovery of an anomaly as failure. Too often they find reasons to exclude outlying data points in order to get more significant measures of statistical fit. There typically is more information in the points of outlying data than in the ones that fit the model well, however, because understanding the outliers or anomalies is generally the key to discovering a new categorization scheme. This means that journal editors and peer reviewers whose objective is to improve theory should embrace papers that seek to surface and resolve anomalies. Indeed, productive theory-building research is almost invariably prompted or instigated by an anomaly or a paradox (Poole & Van de Ven, 1989). The research that led to Michael Porter’s (1991) Competitive Advantage of Nations is an example. Before Porter’s work, the theory of international trade was built around the notion of comparative advantage. Nations with inexpensive electric power, for example, would have a competitive advantage in those products in which the cost of energy was high; those with low labor costs would enjoy an advantage in making and selling products with high labor content; and so on. Porter saw anomalies for which this theory could not account. Japan, with little iron ore and coal, became a successful steel 10 producer. Italy became the world’s dominant producer of ceramic tile even though it had high electricity costs and had to import much of the clay used in making the tile. Porter’s work categorized the world into two circumstances – situations in which a factor-based advantage exists, and those in which it does not. 
In the first situation the reigning theory of comparative advantage still has predictive power. But in the latter circumstance, Porter’s theory of competitive industrial clusters explained the phenomena that had been anomalous to the prior theory. Porter’s theory is normative because it gives planners clear guidance about what they should do, given the circumstance in which they find themselves. The government of Singapore, for example, attributes much of that country’s prosperity to the guidance that Porter’s theory has provided. Yin (1984) distinguishes between literal replications of a theory, versus theoretical replications. A literal replication occurs when the predicted outcome is observed. A theoretical replication occurs when an unusual outcome occurs, but for reasons that can be explained by the model. Some reviewers cite “exceptions” to a theory’s predictions as evidence that it is invalid. We prefer to avoid using the word “exception” because of its imprecision. For example, the observation that airplanes fly is an exception to the general assertion that the earth’s mass draws things down toward its core. Does this exception disprove the theory of gravity? Of course not. While falling apples and flaming meteors are literal replications of the theory, manned flight is a theoretical replication. It is a different outcome than we normally would expect, but Bernoulli’s Principle explains why. An anomaly is an outcome that is neither a literal or theoretical replication of a theory. How to Design Anomaly-Seeking Research Although some productive anomalies might be obvious from the outset, often the task of theory-building scholars is to design their research to maximize the probability that they will be able to identify anomalies. Here we describe how to define research questions that focus on anomalies, and outline three ways to design anomaly-seeking research. We conclude this section by describing how literature reviews might be structured to help readers understand how knowledge has accumulated in the past, and position the present paper in the stream of scholarship. Anomaly-Seeking Research Questions Anomaly-seeking research enables new generations of researchers to pick up even wellaccepted theories, and to run the theory-building cycle again – adding value to research that already has earned broad praise and acceptance. Consider Professor Porter’s (1991) research mentioned above. In Akron, Ohio there was a powerful cluster of tire manufacturers whose etiologies and interactions could be explained well by Porter’s theory. That group subsequently vaporized – in part because of the actions of a company, Michelin, that operated outside of this cluster (Sull, 2000). This anomaly suggests that there must situations in time or space in which competing within a cluster is competitively important; in other situations it must be less important. When an improved categorization scheme emerges from Sull’s and others’ work, the community of scholars and policy makers will have an even clearer sense for when the competitive crucible of clusters is critical for developing capabilities, when it is not, and why. 11 In this spirit, we outline below some examples of “productive” questions that could be pursed by future researchers that potentially challenge many current categories used in management research: • When might process re-engineering or lean manufacturing be bad ideas? • When could sourcing from a partner or supplier something that is not your core competence lead to disaster? 
• Are there circumstances in which pencil-on-paper methods of vendor management yield better results than using supply-chain management software? • When and why is a one-stop-shopping or “portal” strategy effective and when would we expect firms using focused specialist strategies to gain the upper hand? • When are time-based competition and mass customization likely to be critical and when might they be competitively meaningless? • Are SIC codes the right categories for defining “relatedness” in diversification research? • When should acquiring companies integrate a firm they have just purchased into the parent organization, and when should they keep it separate? Much published management research is of the half-cycle, terminal variety – hypotheses are defined and “tested.” Anomaly-seeking research always is focused on the categorization step in the pyramid. Many category boundaries (such as SIC codes) seem to be defined by the availability of data, rather than their salience to the underlying phenomena or their relation to the outcome – and questioning their sufficiency is almost always a productive path for building better theory. “When doesn’t this work?” and “Under what conditions might this gospel be bad news?” are simple questions that can yield breakthrough insights – and yet too few researchers have the instinct to ask them. The Lenses of Other Disciplines One of Kuhn’s (1962) most memorable observations was that the anomalies that led to the toppling of a reigning theory or paradigm almost invariably were observed by researchers whose backgrounds were in different disciplines than those comprising the traditional training of the leaders in the field. The beliefs that adherents to the prior theory held about what was and was not possible seemed to shape so powerfully what they could and could not see that they often went to their graves denying the existence or relevance of the very anomalous phenomena that led to the creation of improved theory. Researchers from different disciplines generally use different methods and have different interests toward their object of study. Such differences often allow them to see things that might not be recognized or might appear inconsequential to an insider. It is not surprising, therefore, that many of the most important pieces of breakthrough research in the study of management, organization and markets have come from scholars who stood astride two or more academic disciplines. Porter’s (1980, 1985, 1991) work in strategy, for 12 example, resulted from his having combined insights from business policy and industrial organization economics. The insights that Robert Hayes and his colleagues (1980, 1984, 1985, 1988) derived about operations management combined insights from process research, strategy, cost accounting and organizational behavior. Baldwin & Clark’s (2000) insights about modularity were born at the intersection of options theory in finance with studies of product development. Clark Gilbert ((2001) looked at Christensen’s (1997) theory of disruptive innovation through the lenses of prospect theory and risk framing (Kahnemann & Tversky 1979, 1984), and saw explanations of what had seemed to be anomalous behavior, for which Christensen’s model could not account. Studying the Phenomena within the Phenomena The second method to increase the probability that researchers will identify anomalies is to execute nested research designs that examine different levels of phenomena. 
Rather than study just industries or companies or divisions or groups or individuals, a nested research design entails studying how individuals act and interact within groups; and how the interaction amongst groups and the companies within which they are embedded affect the actions of individuals. Many anomalies will only surface while studying second-order interactions across levels within a nested design. The research reported in Johnson & Kaplan’s Relevance Lost (1987) which led to the concept of activity-based costing, is a remarkable example of the insights gained through nested research designs. Most prior researchers in managerial accounting and control had conducted their research at a single level—the numbers printed in companies’ financial statements. Johnson and Kaplan saw that nested beneath each of those printed numbers was a labyrinth of political, negotiated, judgmental processes that could systematically yield inaccurate numbers. Spear and Bowen (1999) developed their path-breaking insights of the Toyota Production System through a nested research design. Researchers in the theory’s descriptive stage had studied Toyota’s production system at single levels. They documented visible artifacts such as minimal inventories, kanban scheduling cards and rapid tool changeovers. After comparing the performance of factories that did and did not possess these attributes, early researchers asserted that if other companies would use these same tools, they could achieve similar results (see, for example, Womack et.al., 1990). The anomaly that gripped Spear and Bowen was that when other firms used these artifacts, they still weren’t able to achieve Toyota’s levels of efficiency and improvement. By crawling inside to study how individuals interacted with individuals, in the context of groups interacting with other groups, within and across plants within the company and across companies, Spear and Bowen were able to go beyond the correlative statements of descriptive theory, to articulate the fundamental causal mechanism behind the Toyota system’s self-improving processes – which they codified as four “rules-in-use” that are not written anywhere but are assiduously followed when designing processes of all sorts at Toyota. Spear is now engaged in search of anomalies on the deductive side of the cycle of building normative theory. Because no company besides Toyota has employed this causal mechanism, Spear cannot retrospectively study other companies. Like Johnson & Kaplan did when they used 13 “action research” to study the implementation problems of activity-based costing, Spear is helping companies in very different circumstances to use his statements of causality, to see whether the mechanism of these four rules yields the same results. To date, companies in industries as diverse as aluminum smelting, hospitals, and jet engine design have achieved the results that Spear’s theory predicts – he has not yet succeeded in finding an anomaly. The categorization step of this body of normative theory still has no salient boundaries within it. Observing and Comparing a Broad Range of Phenomena The third mechanism for maximizing the probability of surfacing an anomaly is to examine, in the deductive half of the cycle, a broader range of phenomena than prior scholars have done. 
As an example, Chesbrough’s (1999) examination of Japanese disk drive makers (which Christensen had excluded from his study) enabled Chesbrough to surface anomalies for which Christensen’s theory of disruptive technology could not account—leading to an even better theory that then explains a broader range of phenomena. The broader the range of outcomes, attributes and circumstances that are studied at the base of the pyramid, the higher the probability that researchers will identify the salient boundaries among the categories. Anomaly-Seeking Research and the Cumulative Structure of Knowledge When interviewing new faculty candidates who have been trained in methods of modeling, data collection and analysis as doctoral students, we observe that many seem almost disinterested in the value of questions that their specialized techniques are purporting to answer. When asked to position their work upon a stream of scholarship, they recite long lists of articles in “the literature,” but then struggle when asked to diagram within that body of work which scholar’s work resovles anomalies to prior scholars’ theories; whose results contradicted whose, and why. Most of these lists of prior publications are simply lists, sometimes lifted from prior authors’ lists of prior articles. They are listed because of their relatedness to the topic. Few researchers have been taught to organize citations in a way that describes the laps that prior researchers have taken, to give readers a sense for how theory has or has not been built to date. Rather, after doffing the obligatory cap to prior research, they get busy testing their hypotheses in the belief that if nobody has tested these particular ones before, using novel analytical methods on a new data set, it breaks new ground. Our suggestion is that in the selection of research questions and the design of research methods, authors physically map the literature on a large sheet of paper in the format of Figure 2 above, and then answer questions like these: • Is this body of theory in the descriptive or normative stage? • What anomalies have surfaced in prior authors’ work, and which pieces of research built on those by resolving the anomaly? In this process, how have the categorization schemes in this field improved? • At what step am I positioning my work? Am I at the base of the pyramid defining constructs to help others abstract from the detail of the phenomena what really is going on? Am I strengthening the foundation by offering better ways to examine and measure 14 the phenomena more accurately? Am I resolving an anomaly by suggesting that prior scholars haven’t categorized things correctly? Am I running half a lap or a complete cycle, and why? Similarly, in the “suggestions for future research” section of the paper, we suggest that scholars be much more specific about where future anomalies might be buried. “Who should pick up the baton that I am setting down at the end of my lap, and in what direction should they run?” We have attempted to construct such maps in several streams of research with which we are familiar (See, for example, Gilbert 2005). It has been shocking to see how difficult it is to map how knowledge has accumulated within a given sub-field. In many cases, it simply hasn’t summed up to much, as the critics cited in our first paragraph have observed. We suggest that the pyramids of theory building might constitute a generic map, of sorts, to bring organization to the collective enterprises within each field and sub-field. 
The curriculum of doctoral seminars might be organized in this manner, so that students are brought through the past into the present in ways that help them visualize the next steps required to build better theory. Literature reviews, if constructed in this way at the beginning of papers, would help readers position the work in the context of this stream, in a way that adds much more value than listing articles that are topically related. Here’s just one example of how this might be done. Alfred Chandler’s (1977, 1990) landmark studies essentially proposed a theory: that the “visible hand” of managerial capitalism was a crucial enabling factor that led not just to rapid economic growth between 1880 and 1930, but led to the dominance of industry after industry by large, integrated corporations that had the scale and scope to pull everything together. In recent years, much has been written about “virtual” corporations and “vertical dis-integration;” indeed, some of today’s most successful companies such Dell are specialists in just one or two slices of the vertical value-added chain. To our knowledge, few of the studies that focus on these new virtual forms of industrial organization have even hinted that the phenomena they are focusing upon actually is an anomaly for which Chandler’s theory of capitalism’s visible hand cannot adequately account. If these researchers were to build their work on this anomaly, it would cause them to delve back into the categorization process. Such an effort would define the circumstances in which technological and managerial integration of the sort that Chandler observed are crucial to building companies and industries, while identifying other circumstances in which specialization and market-based coordination are superior structures. A researcher who structured his or her literature review around this puzzle, and then executed that research, would give us a better contingent understanding of what causes what and why. Establishing the Validity of Theory A primary concern of every consumer of management theory is to understand where it applies, and where it does not apply. Yin (1984) helps us with these concerns by defining two types of validity for a theory – internal and external validity – which are the dimensions of a body of understanding that help us guage whether and when we can trust it. In this section we’ll discuss how these concepts relate to our model of theory building, and describe how researchers can make their theories valid on both of these dimensions. 15 Internal Validity Yin asserts that a theory’s internal validity is the extent to which: 1) its conclusions are logically drawn from its premises; and 2) the researchers have ruled out all plausible alternative explanations that might link the phenomena with the outcomes of interest. The best way we know to ensure the internal validity of a theory is to examine the phenomena through the lenses of as many disciplines and parts of the company as possible – because the plausible alternative explanations almost always are found in the workings of another part of the company, as viewed through the lenses of other academic disciplines. We offer here two illustrations. Intel engineered a remarkable about-face in the early 1980s, as it exited the industry it built – Dynamic Random Access Memories (DRAMs) – and threw all of its resources behind its microprocessor strategy. 
Most accounts of this impressive achievement attribute its success to the leadership and actions of its visionary leaders, Gordon Moore and Andy Grove (see, for example, Yoffie et.al. 2002). Burgelman’s careful ethnographic reconstruction of the resource allocation process within Intel during those years of transition, however, reveals a very different explanation of how and why Intel was able to make this transition. As he and Grove have shown, it had little to do with the decisions of the senior-most management (Burgelman, 2002). One of the most famous examples of research that strengthens its internal validity by examining a phenomenon through the lenses of several disciplines is Graham Allison’s (1971) The Essence of Decision. Allison examined the phenomena in a single situation—the Cuban missile crisis—using the assumptions of three different theoretical lenses (e.g., rational actor, organizational, & bureaucratic). He surfaced anomalies in the current understanding of decision making that could not have been seen had he only studied the phenomenon from a single disciplinary perspective. Through the use of multiple lenses he contributed significantly to our understanding of decision making in bureaucratic organizations. As long as there’s the possibility that another researcher could say, “Wait a minute. There’s a totally different explanation for why this happened,” then we cannot be assured of a theory’s internal validity. If scholars will patiently examine the phenomena and outcomes of interest through the lenses of these different perspectives, they can incorporate what they learn into their explanations of causality. And one-by-one, they can rule out other explanations so that theirs is the only plausible one left standing. It can then be judged to be internally valid. External Validity The external validity of a theory is the extent to which a relationship that was observed between phenomena and outcomes in one context can be trusted to apply in different contexts as well. Many researchers have come to believe that a theory’s external validity is established by “testing” it on different data sets. This can never conclusively establish external validity, however – for two reasons. First, researchers cannot test a theory on every conceivable data set; and second, data only exists about the past. How can we be sure a model applies in the future, when there is no data to test it on? Consider, for illustration, Christensen’s experience after publishing the theory of disruptive innovation in The Innovator’s Dilemma (Christensen, 1997). This book presented in its first two chapters a normative theory, built upon careful empirical descriptions of the history of the disk drive industry. It asserted that there are two circumstances 16 – sustaining and disruptive situations – in which innovating managers might find themselves. Then it defined a causal mechanism – the functioning of the resource allocation process in response to the demands of customers and financial markets – that caused leading incumbent firms and entrants to succeed or fail at different types of innovations in those circumstances. Christensen’s early papers summarized the history of innovation in the disk drive industry, from which the theory was inductively derived. 
Those who read these papers instinctively wondered, “Does this apply outside the disk drive industry?” In writing The Innovator’s Dilemma, Christensen sought to establish the generalizability or external validity of the theory by “testing” it on data from as disparate a set of industries as possible – including hydraulic excavators, steel, department stores, computers, motorcycles, diabetes care, accounting software, motor controls and electric vehicles. Despite the variety of industries in which the theory seemed to have explanatory power, executives from industries that weren’t specifically studied kept asking, “Does it apply to health care? Education? Financial services?” When Christensen published additional papers that applied the model to these industries, the response was, “Does it apply to telecommunications? Relational database software? Does it apply to Germany” The killer question, from an engineer in the disk drive industry, was, “It clearly applies to the history of the disk drive industry. But does it apply to its future as well? Things are very different now.” As these queries illustrate, it is simply impossible to establish the external validity of a theory by testing it on data sets – because there will always be another one upon which it hasn’t yet been tested, and the future will always lie just beyond the reach of data. When researchers have defined what causes what, and why, and show how the result of that causal mechanism differs by circumstance, then the scope of the theory, or its external validity, is established. In the limit, we could only say that a theory is externally valid when the process of seeking and resolving anomaly after anomaly results in a set of categories that are collectively exhaustive and mutually exclusive. Mutually exclusive categorization would allow managers to say, “I am in this circumstance and not that one.” And collectively exhaustive categorization would assure us that all situations in which managers might find themselves with respect to the phenomena and outcomes of interest, are accounted for in the theory. No theory’s categorization is likely to achieve the ultimate status of mutually exclusive and collectively exhaustive, of course. But the accumulation of insights and improvements from cycles of anomaly-seeking research can improve theory asymptotically towards that goal. This raises an interesting paradox for large sample-size research that employs “mean” analyses to understand ways to achieve the optimum result or best performance. One would think that a theory derived from a large data set representing an entire population of companies would have greater external validity than a theory derived from case studies of a limited number of situations within that population. However, when the unit of analysis is a population of companies, the researcher can be specific only about the entire population of companies – the population comprises one category, and other sources of variance or differences that exist in that population become potentially lost as an explanation. Some managers will find that following the formula that works best on average, works best in their situation as well, of course. However, sometimes the course of action that is optimal on average will not yield the best outcome in a specific situation. Hence, researchers who derive a theory from statistics about a population still need to establish external validity through circumstance-based categorization. 
Some large-sample, quantitative studies in strategy research have begun to turn to analyses that estimate simultaneously the expected value (a mean analysis) and the variance associated with performance-oriented dependent variables, using a "variance decomposition" approach (Fleming and Sorensen, 2001; Sorensen and Sorensen, 2001). The simultaneous nature of this methodological approach allows a deeper understanding of the mean as well as the variance associated with a firm over time (Sorensen, 2002) or a population of firms (Hunter, 2002). What such analysis suggests is that when there is significant heterogeneity in a given strategic environment, not only will there be variance in firm performance, but what a firm needs to do to be successful will also differ based on the niche it pursues. This reminds us that explanations for strategic questions are not only contingent, but, more importantly, are based on an understanding of what sources of variance, and what relations across different variables, matter most and why. From a methodological point of view, this also reminds us of how our abilities (i.e., tools and methods) to represent data shape how we are able to describe what "strategic action" is possible. The value of progressing from descriptive to normative theory can be illustrated in the case of Jim Collins' (2001) popular book, Good to Great. Collins and his research team found 15 companies that had gone from a period of mediocre performance to a period of strong performance. They then found a matching set of companies in similar industries that had gone from mediocre performance to another period of mediocre performance, identified attributes that the "good-to-great" companies shared in common, and found that the "good-to-good" companies did not share these attributes. Greater success is associated with the companies that possess these attributes. They have done a powerful piece of descriptive theory-building, built on a categorization scheme of companies that share these attributes vs. companies that do not. The research in this book has been very helpful to many executives and academics. As descriptive theory, however, there is still uncertainty about whether a specific company in a specific situation will succeed if it acquires the attributes of the good-to-great, because the theory has not yet gone through the process of circumstance-based categorization. For example, one of those attributes is that the good-to-great companies were led by relatively humble CEOs who generally have shunned the limelight, whereas the mediocre companies tended to be led by more ego-centric, hired-in "superstar" executives. There might indeed be situations in which an egocentric superstar executive is crucial to success, however. Such a precise, situation-specific statement will be possible – and the theory can be judged to be externally valid – only when this body of understanding has progressed to the normative theory stage.

What is Good Data?

The dichotomy between subjectivity and objectivity is often used as a cleavage point to judge the scientific quality of data – with many seeing objective data as more legitimate than subjective data. Case- or field-derived data versus large-sample data sets is a parallel dichotomy that often surfaces in academic discourse. Much like theory, the only way we can judge the value of data is by their usefulness in helping us understand how the world works, identifying categories, making predictions, and surfacing anomalies.
Research that employs a nested design often reveals how illogical these dichotomies are. Christensen’s (1997) research, for example, was built upon a history of the disk drive industry derived from analysis of tens of thousands of data points about markets, technologies and products that were reported in Electronic Business and Disk/Trend Report. In the context of the industry’s history, the study then recounted the histories of individual companies, which were assembled partially from published statistics and partially from interviews with company managers. The study also included histories of product development projects within these companies, based upon a few numbers and extensive personal interviews. Finally, the study included many accounts of individuals’ experiences in developing and launching new products, composed exclusively of information drawn from interviews – with no numbers included whatsoever.

So what is a case study? Because a case is a description and assessment of a situation over a defined period of time, every level in Christensen’s study was a case – industry, company, group and individual. And what is data? Each level of this study involved lots of data of many sorts. Each of these descriptions – from the industry’s history to the individuals’ histories – captured but a fraction of the richness in each of the situations. Indeed, the “hardest” numbers on product performance, company revenues and competitors’ market shares really were after-the-fact proxy manifestations of all the processes, prioritizations and decisions amongst the groups and individuals that were observed in the nested, “subjective” portions of the study.

Let’s drill more deeply into the question of where much quantitative data comes from. For example, the data used in many research projects come directly or indirectly from the reported financial statements of publicly traded companies. Are these objective data? Johnson & Kaplan (1987) showed quite convincingly that the numbers representing revenues, costs and profits that appear in companies’ financial statements are typically the result of processes of estimation, allocation, debate and politics that can produce grossly inaccurate reflections of true cost and profit. The subjective nature of financial statement data, and the skills and methods used by those who made those judgments, however, are hidden from the view of researchers who use the published numbers.

The healthiest and probably the most accurate mindset for researchers is that nearly all research – whether presented in the form of large-sample data analysis, a mathematical optimization model, or an ethnographic description of behavior – is a description of a situation and is, therefore, a case. And all data are subjective. Each form of data is a higher-level abstraction from a much more complex reality, out of which the researcher attempts to pull the most salient variables or patterns for examination. Generally, the subjectivity of data is glaringly apparent in field-based, ethnographic research, whereas it tends to be hidden behind numerical data. Researchers of every persuasion ought always to strive to examine phenomena not just through the lenses of different academic or functional disciplines, but through the lenses of multiple forms of data as well. And none of us ought to be defensive or offensive about the extent to which the data in our or others’ research are subjective.
We are all in the same boat, and are obligated to do our best to be humble and honest with ourselves and our colleagues as we participate individually within and collectively across the theory-building cycle. 4

Footnote 4: An excellent account that has helped us understand how pervasive the exercise of subjectivity is in the generation of “facts” is E.H. Carr’s (1961) treatise, What Is History? Carr describes that even the most complete historical accounts simply summarize what those who recorded events decided were important or interesting enough to record. In most processes that generate numerical data, the subjectivity that was exercised in the process of recording or not recording lies hidden.

Implications for Course Design

Schools of management generally employ two methods of classroom instruction: case-based classes and lecture-based classes. These are descriptive categorizations of the phenomena. Attempts to assess which method of instruction is associated with the best outcomes are fraught with anomaly. We suggest that there is a different, circumstance-based categorization scheme that may constitute a better foundation for a theory of course design: whether the instructor is using the course to develop theory, or to help students practice the use of theory.

When designing a course on a subject about which normative theory has not yet emerged, designing the course to move up the inductive side of the theory pyramid can be very productive. For example, Harvard Business School professor Kent Bowen decided several years ago that because a significant portion of HBS graduates end up running small businesses, he ought to create a course that prepares students to do that. He then discovered that the academic literature was amply stocked with studies of how to structure deals and start companies, but that there wasn’t much written about how to run plain old low-tech, slow-growth companies. Bowen tackled the problem with an inductive course-design strategy. He first wrote a series of cases that simply described what managers in these sorts of companies worry about and do. In each class Bowen led the students in case discussions whose purpose was to understand the phenomena thoroughly. After a few classes, Bowen paused and orchestrated a discussion through which the class sought to define patterns in the phenomena – to begin categorizing by type of company, type of manager, and type of problem. Finally, they explored the association between these types and the outcomes of interest. In other words, Bowen’s course had an inductive architecture that moved up the theory pyramid. Then, armed with their preliminary body of theory, Bowen and his students cycled down the deductive side of the pyramid to examine more companies in a broader range of circumstances. This allowed them to discover things that their initial theory could not explain, and to improve their constructs, refine their classification scheme, and improve their understanding of what causes what, and why.

There is another circumstance – one in which well-researched theories pertaining to a field of management already exist. In this situation, a deductive course architecture can work effectively. For example, Clayton Christensen’s case-based course, Building a Sustainable Enterprise, is designed deductively. For each class, students read a paper that summarizes a normative theory about a dimension of a general manager’s job. The students also study a case about a company. They then look through the lenses of the theory to see if it accurately explains what historically happened in the company.
They also use the theory to discuss what management actions will and will not lead to the desired outcomes, given the situation the company is in. Because the cases are complicated, students often discover an anomaly that then enables the class to revisit the categorization scheme and the associated statement of causality. Students follow this process, theory after theory, class after class, for the semester – and in the process, learn not just how to use theory, but how to improve it. 5

Footnote 5: At one point Christensen attempted to teach his course through an inductive architecture. Case by case, he attempted to lead his students to discover well-documented theories that prior scholars already had discovered. The course was a disaster – the wrong architecture for the circumstance. Students could tell that Christensen already had the answer, and his attempts to orchestrate a case discussion seemed like the professor was asking the students to guess what was on his mind. The next year, Christensen revised his course to the deductive architecture described above, and students reacted very positively to the same material.

As the experiences of Professors Bowen and Christensen suggest, the dichotomy that many see between teaching and research need not create conflict. It may be better to view developing and teaching courses as course research. And there are two circumstances in which professors might find themselves. When a body of theory has not yet coalesced, an inductive architecture is productive. When useful theory already has emerged, then a deductive architecture can make sense. In both circumstances, however, instructors whose interest is to build theory and help students learn how to use theory can harness the brainpower of their students by leading them through cycles up and down the theory-building pyramid.

Implications: Theory as Method

Building theory in management research is how we define and measure our value and usefulness as a research community to society. We have focused on specific examples from management research to illustrate how our approaches to the empirical world shape what we can represent and value and, more broadly, how theory collectively shapes the field of management research. This reminds us that building theory at an individual or collective level – handing off or picking up the baton – is not a detached or neutral process, yet the model developed here gives us a method to guide these efforts. From this model we recognize, first, the importance of both the inductive and deductive sides of the pyramid; second, how subsequent cycles move us from attributes and substantive categories toward a circumstance-based understanding and more formal theory; and third, how they eventually lead to an understanding of the relational properties that are of consequence and that define the boundary conditions wherein the theory is of value.

This is our ultimate aim: As students of business we readily accept that if employees in manufacturing and service companies follow robust processes they can predictably produce outputs of quality and value. When reliable processes are followed, success and failure in producing the desired result become less dependent upon the capabilities of individual employees, because they are embedded in the process. We assert that the same can be true for management researchers. If we follow a robust, reliable process, even the most “average” of us can produce and publish research that is of high value to academics and practitioners.
Parking Lot for Important ideas that need to go somewhere:

So a major question that arises in conducting research is: how do we know we are categorizing or measuring the best things to help us understand the phenomena of interest? Glaser and Strauss state that the elements of theory are, first, the conceptual categories with their conceptual properties and, second, the generalized relations among categories and their properties (1967: 35-43). A way to proceed with combining these elements is to emphasize a “relational” approach to theorizing (Bourdieu and Wacquant, 1992: 224-233) rather than just a substantialist approach. As already alluded to, a substantialist approach emphasizes “things” to be counted and categorized, such as people, groups, products, or organizations. A relational approach, however, emphasizes the properties between things in a given area of interest, or what determines the relative positions of force or power between people, groups or organizations. The reason that most research follows a substantialist approach is that most methodological tools are focused on, and best suited to, identifying convenient sources of data that can be easily counted and categorized, more readily than the relational properties that exist between individuals, groups or organizations in a given social space over time (Bourdieu, 1989). Given this methodological focus on convenient sources of data to collect, it is not surprising that a substantialist approach dominates most of management research, as well as the social sciences.

For example, the concept of “core competency” (Selznick, 1957) was developed to account for organizations that were successful in their environments. It became a very useful concept in the field of strategy in the late 1980s and the 1990s (Prahalad and Hamel, 1990). However, the limitation of this category is that it was used to identify only successful companies; less successful companies were seen as lacking a core competency. The field of strategy did not begin to look more closely at the concept until Dorothy Leonard’s research (1992; 1995) focused on the processes and outcomes that reveal how a core competency can turn into a source of core rigidity. Leonard found that changes in a firm’s “relations” to its suppliers and customers determine whether the firm can remain competitive. The corollary is that a core competency can become a core rigidity, diminishing competitive strength. By identifying these consequential “relations,” Leonard not only provided a deeper formalization of “competency,” but also helped managers see how to apply their firm’s resources so as to avoid this competency-rigidity tendency.

While a relational approach can push research to a deeper level of formalization, it raises methodological challenges. Because relations among individuals, groups or organizations are most telling as they change over time, a relational approach requires both the means of collecting data over time and a method of analyzing and representing the insights that such data can reveal. In one of the most influential ethnographic studies of technology implementation in management research, Barley’s careful ethnographic analysis (1986; 1988; 1990) provided a comparative and temporal window into the implementation of the same technology in similar hospital settings.
Despite these similarities, Barley documented very different outcomes in how radiologists and technicians jointly used the CT scanning technology that was implemented. Based on these different outcomes, he asserted that technological and social structures mutually adapted, in different ways, over time. Barley’s observations over time helped to replace the either/or debate between the static view of technological determinism and the situated view of technology.

Using Barley’s empirical documentation, Black, Carlile and Repenning (2003) formalized his observations at a more specific causal level through the use of a system dynamics method. This allowed them to specify the relation between radiologists and technicians and how their relative expertise in using the technology explains the different outcomes that Barley documented. Even though Barley recognized the importance of the “distribution of expertise” (Barley, 1986) between the two groups, he lacked a methodology to represent how, over time, the relative accumulations of expertise accounted for the different outcomes he observed. With this more formalized approach, Black et al. could state that a balance in “relative expertise” in using the new technology was essential in developing collaboration around it. The specification of these relational properties was an improvement upon Barley’s managerial suggestion that a more decentralized organization is better able to successfully implement a new technology than a centralized one. This more formalized theory and relational understanding provides specific guidance to a practitioner about what to do when faced with the challenge of implementing a new technology when collaboration is desired.

This relational approach goes farther than a “contingency theory” approach (Lawrence and Lorsch, 1967) – because it recognizes not only that things are contingent, but that in any situation some things – some relations – matter more than others in explaining the contingent (different) outcomes that are possible. The development of contingency theory has provided significant insight into the field of organizational behavior and design because it has identified that circumstances do affect outcomes. However, the fact that contingency theory is viewed by many as a stand-alone theory, rather than as a further reason to search for the particular sources of contingency, limits the theory-building effort. This points to the proclivity of many researchers to leap directly from phenomena to theory and back again. If we continue around the theory-building cycle, what we at first call contingent (e.g., decentralization versus centralization) upon further analysis reveals the underlying relational properties and why those relations are most consequential (e.g., how and why relative expertise matters).

References

Allison, G. (1971), The Essence of Decision. Glenview, IL: Scott, Foresman & Co.
Argyris, C. (1993), On Organizational Learning. Cambridge, MA: Blackwell.
Argyris, C. & Schon, D. (1976), Theory in Practice. San Francisco: Jossey-Bass.
Baldwin, C. and Clark, K.B. (2000), Design Rules: The Power of Modularity. Cambridge, MA: MIT Press.
Barley, S.R. (1986), “Technology as an occasion for structuring: Evidence from observations of CT scanners and the social order of radiology departments.” Administrative Science Quarterly, 31, 1: 78-108.
Black, L., Repenning, N. and Carlile, P.R. (2002), “Formalizing theoretical insights from ethnographic evidence: Revisiting Barley’s study of CT-scanning implementations.” Under revision, Administrative Science Quarterly.
Bourdieu, P. (1989/1998), Practical Reason. Stanford: Stanford University Press.
Bourdieu, P. and Wacquant, L. (1992), An Invitation to Reflexive Sociology. Chicago: University of Chicago Press.
Bower, Joseph (1970), Managing the Resource Allocation Process. Englewood Cliffs, NJ: Irwin.
Bower, J.L. and Gilbert, C.G., eds. (2005), From Resource Allocation to Strategy. Oxford: Oxford University Press.
Burgelman, Robert & Leonard Sayles (1986), Inside Corporate Innovation. New York: The Free Press.
Burgelman, Robert (2002), Strategy Is Destiny. New York: The Free Press.
Campbell, D.T. and Stanley, J.C. (1963), Experimental and Quasi-experimental Design for Research. Boston: Houghton Mifflin.
Carlile, P.R. (2003), “Transfer, translation and transformation: Integrating approach in sharing and assessing knowledge across boundaries.” Under revision, Organization Science.
Carr, E.H. (1961), What Is History? New York: Vintage Books.
Chandler, A.D. Jr. (1977), The Visible Hand: The Managerial Revolution in American Business. Cambridge, MA: Belknap Press.
Chandler, A.D. Jr. (1990), Scale and Scope: The Dynamics of Industrial Capitalism. Cambridge, MA: The Belknap Press.
Christensen, C.M. (1997), The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. Boston: Harvard Business School Press.
Chesbrough, H.W. (1999), “The differing organizational impact of technological change: A comparative theory of institutional factors.” Industrial and Corporate Change, 8: 447-485.
Clegg, Brian (2003), The First Scientist: A Life of Roger Bacon. New York: Carroll & Graf Publishers.
Daneels, Erwin (2005), “The Effects of Disruptive Technology on Firms and Industries,” Journal of Product Innovation Management (forthcoming special issue that focuses on this body of theory).
Gilbert, C.G. (2001), A Dilemma in Response: Examining the Newspaper Industry’s Response to the Internet. Unpublished DBA thesis, Harvard Business School.
Gilbert, C.G. and Christensen, C.M. (2005), “Anomaly Seeking Research: Thirty Years of Development in Resource Allocation Theory.” In Bower, J.L. and Gilbert, C.G., eds., From Resource Allocation to Strategy. Oxford: Oxford University Press, forthcoming.
Fleming, L. and Sorensen, O. (2001), “Technology as a complex adaptive system: Evidence from patent data.” Research Policy, 30: 1019-1039.
Glaser, B. & Strauss, A. (1967), The Discovery of Grounded Theory: Strategies of Qualitative Research. London: Weidenfeld and Nicolson.
Hayes, R. (1985), “Strategic Planning: Forward in Reverse?” Harvard Business Review, November-December: 111-119.
Hayes, R. (2002), “The History of Technology and Operations Research.” Harvard Business School working paper.
Hayes, R. and Abernathy, W. (1980), “Managing our Way to Economic Decline.” Harvard Business Review, July-August: 7-77.
Hayes, R. and Wheelwright, S.C. (1984), Restoring our Competitive Edge. New York: John Wiley & Sons.
Hayes, R., Wheelwright, S. and Clark, K. (1988), Dynamic Manufacturing. New York: The Free Press.
Henderson, R.M. & Clark, K.B. (1990), “Architectural Innovation: The Reconfiguration of Existing Systems and the Failure of Established Firms.” Administrative Science Quarterly, 35: 9-30.
Hunter, S.D. (2002), “Information Technology, Organizational Learning and Firm Performance.” MIT/Sloan Working Paper.
Hutton, A., Miller, G. and Skinner, D. (2000), “Effective Voluntary Disclosure.” Harvard Business School working paper.
James, W. (1907), Pragmatism. New York: The American Library.
Johnson, H.T. & Kaplan, R. (1987), Relevance Lost. Boston: Harvard Business School Press.
Kaplan, A. (1964), The Conduct of Inquiry: Methodology for Behavioral Research. Scranton, PA: Chandler.
Kaplan, R. (1986), “The Role for Empirical Research in Management Accounting.” Accounting, Organizations and Society, 4: 429-452.
Kuhn, T. (1962), The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Lawrence, P.R. and Lorsch, J.W. (1967), Organization and Environment. Boston: Harvard Business School Press.
Leonard, D. (1995), Wellsprings of Knowledge. Boston: Harvard Business School Press.
Poole, M. & Van de Ven, A. (1989), “Using Paradox to Build Management and Organization Theories.” Academy of Management Review, 14: 562-578.
Popper, K. (1959), The Logic of Scientific Discovery. New York: Basic Books.
Porter, M. (1980), Competitive Strategy. New York: The Free Press.
Porter, M. (1985), Competitive Advantage. New York: The Free Press.
Porter, M. (1991), The Competitive Advantage of Nations. New York: The Free Press.
Raman, Ananth, (need citation)
Roethlisberger, F. (1977), The Elusive Phenomena. Boston: Harvard Business School Press.
Rumelt, Richard P. (1974), Strategy, Structure and Economic Performance. Cambridge, MA: Harvard University Press.
Selznick, P. (1957), Leadership in Administration: A Sociological Interpretation. Berkeley: University of California Press.
Simon, H. (1976), Administrative Behavior (3rd edition). New York: The Free Press.
Solow, R.M. (1985), “Economic History and Economics.” The American Economic Review, 75: 328-331.
Sorensen, O. and Sorensen, J. (2001), “Research Note – Finding the right mix: Franchising, organizational learning, and chain performance.” Strategic Management Journal, 22: 713-724.
Sorensen, J. (2002), “The Strength of Corporate Culture and the Reliability of Firm Performance.” Administrative Science Quarterly, 47: 70-91.
Spear, S.C. and Bowen, H.K. (1999), “Decoding the DNA of the Toyota production system.” Harvard Business Review, September-October.
Stinchcombe, Arthur L. (1968), Constructing Social Theories. New York: Harcourt, Brace & World.
Sull, D.N. (2000), “Industrial Clusters and Organizational Inertia: An Institutional Perspective.” Harvard Business School working paper.
Van de Ven, A. (2000), “Professional Science for a Professional School.” In Beer, M. and Nohria, N. (Eds), Breaking the Code of Change. Boston: Harvard Business School Press.
Weick, K. (1989), “Theory Construction as Disciplined Imagination.” Academy of Management Review, 14: 516-532.
Womack, J.P., Jones, D.T. & Roos, D. (1990), The Machine that Changed the World. New York: Rawson Associates.
Yin, R. (1984), Case Study Research. Beverly Hills: Sage Publications.
Yoffie, David, Sasha Mattu & Ramon Casadesus-Masanell (2002), “Intel Corporation, 1968-2003,” Harvard Business School case #9-703-427.
Ian Larkin, Lamar Pierce, and Francesca Gino

Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.

The Psychological Costs of Pay-for-Performance: Implications for the Strategic Compensation of Employees

Ian Larkin, Lamar Pierce, Francesca Gino
Working Paper 11-056

Running Head: STRATEGIC COMPENSATION

The Psychological Costs of Pay-for-Performance: Implications for the Strategic Compensation of Employees

Ian Larkin 1, Lamar Pierce 2, and Francesca Gino 1
Forthcoming, Strategic Management Journal

1 Harvard Business School, Soldiers Field Road, Boston, MA 02163; ilarkin@hbs.edu, 617-495-6884; fgino@hbs.edu, 617-495-0875
2 Olin Business School, Washington University in St. Louis, One Brookings Drive, Box 1133, St. Louis, MO 63130; pierce@wustl.edu, 314-935-5205

Abstract

Most research linking compensation to strategy relies on agency theory economics and focuses on executive pay. We instead focus on the strategic compensation of nonexecutive employees, arguing that while agency theory provides a useful framework for analyzing compensation, it fails to consider several psychological factors that increase costs from performance-based pay. We examine how psychological costs from social comparison and overconfidence reduce the efficacy of individual performance-based compensation, building a theoretical framework predicting more prominent use of team-based, seniority-based, and flatter compensation. We argue that compensation is strategic not only in motivating and attracting the worker being compensated, but also in its impact on peer workers and the firm’s complementary activities. The paper discusses empirical implications and possible theoretical extensions of the proposed integrated theory.

Keywords: compensation; pay; incentives; principal-agent models; motivation; psychology

The Psychological Costs of Pay-for-Performance: Implications for the Strategic Compensation of Employees

Compensation is a critical component of organizational strategy, influencing firm performance by motivating employee effort and by attracting and retaining high-ability employees. Compensation is the largest single cost for the average company (Gerhart, Rynes and Fulmer, 2009), with employee wages accounting for 60 to 95 percent of average company costs excluding a firm’s physical cost of goods sold (Bureau of Labor Statistics, 2009). Although literatures across disciplines including economics, social psychology and human resource management take different approaches to studying compensation, the strategy literature on compensation is dominated by one theory and one focus: the use of agency theory and a focus on executive compensation. Indeed, by our count, over 80 percent of recent papers on compensation in leading strategy journals explicitly or implicitly use agency theory as the dominant lens of analysis. 3 Nearly three-quarters of these papers also examine executive compensation, rather than focusing on compensation for “non-boardroom” employees.
The impact of executive compensation on firm strategy is undeniable (e.g., Dalton et al., 2007; Wowak and Hambrick, 2010), given the importance of attracting top executive talent and financially motivating strong effort and profitable choices. Yet pay for top executives averages only a few percentage points of the total compensation costs of the firm (Whittlesey, 2006), meaning the bulk of a company’s wage bill represents pay to non-executives. Furthermore, employee compensation is intimately tied to firm decisions regarding technology, diversification, market position, and human capital (Balkin and Gomez-Mejia, 1990; Nickerson and Zenger, 2008), and has widespread implications for organizational performance (Gomez-Mejia, 1992). Non-executive compensation therefore remains an important but under-explored topic in the strategy literature.

Footnote 3: Between 2004 and 2009, one hundred fifty-two papers in five of the leading strategy journals – Strategic Management Journal; Organization Science; Management Science; Academy of Management Journal; and Academy of Management Review – contained the word “compensation” in the topic listed in the Social Sciences Citation Index. 82 of these explicitly used the lens of agency theory, and a further 45 clearly used the basic predictions of agency theory in the research. Over 83 percent of the papers on compensation therefore rested on agency theory. In contrast, only 16 of the papers, or just more than 10 percent, discussed any concepts from social psychology or behavioral decision research. Similarly, a recent review article on compensation by Gerhart, Rynes and Fulmer (2009) contained over 220 citations, 60 of which were in strategy journals. Of these 60 articles, 52 explicitly or implicitly used agency theory as the dominant lens of analysis, and only three discussed social psychology in a significant way. Across these two data sources, 72 percent of compensation papers in strategy journals focused on executive pay.

In this paper, we examine the strategic implications of compensation choices for non-executive employees. We argue that agency theory falls short in providing fully accurate predictions of strategic compensation choices by firms for non-executive employees. 4 The prominent use of agency theory by strategy scholars 35 years after its introduction by Jensen and Meckling (1976) and Holmstrom (1979) suggests that this theoretical approach has substantial merit. Yet most firms’ compensation strategies for non-executive employees do not fully align with the predictions of agency theory. As detailed below, in fact, agency theory predicts the use of individualized performance-based pay far more frequently than is actually observed for non-executive employees. We argue that the predictions of agency theory often fail because performance-based pay is less effective than the theory predicts.

Footnote 4: The question of the extent to which agency theory is an adequate framework for explaining strategic executive compensation is outside the scope of this paper. We believe, however, that the theory developed in the paper will prove useful in examining executive compensation choices as well.

We propose a more realistic theory of strategic compensation for non-executive employees that uses the basic framework of agency theory but incorporates important insights from social psychology and behavioral decision research. We argue that while these insights impact compensation strategy in many ways, two main factors are of first-order importance: social comparison processes and overconfidence. We concentrate on these factors because they most dramatically affect the underlying differences in the objectives and information on which agency theory is based.
Also, these factors strongly influence firm performance due to their impact not only on the behavior of the employee being compensated, but also on the decisions and actions of other employees. We first incorporate these factors into an agency theory framework, and then argue that the true costs of individual performance-based systems are far greater than predicted by agency theory. We use our theory to derive a set of testable propositions regarding how psychological factors, economic factors, and information influence both the efficacy and prevalence of certain strategic compensation choices. Our main argument is that psychological factors raise the cost of individual pay-for-performance, leading firms to rely on team-based, seniority-based and flatter compensation strategies such as hourly wages or salaries.

Although several notable studies in the management literature have examined the effect of individuals’ psychology on compensation (e.g., Gerhart and Rynes, 2003; Gerhart, Rynes and Fulmer, 2009), to the best of our knowledge our paper is the first to integrate economic and psychological factors into a theory of how strategic employee compensation impacts firm strategy and performance. The role psychology plays in compensation choice is by no means a new topic. Gerhart, Rynes and Fulmer (2009) cite 42 articles in psychology journals that examined compensation issues, yet most of these studies ignore or even dismiss the relevance of economic theory, in our opinion making the same mistake as agency theory research in neglecting relevant factors from other disciplines. Additionally, these studies do not attempt to fully assess the costs and benefits to firms of different compensation choices, and tend to be more narrowly focused on partial effects. Similarly, while some economists acknowledge the importance of psychological factors such as fairness in wages (Akerlof and Yellen, 1990; Fehr and Gachter, 2000; Card et al., 2010) and the non-pecuniary costs and benefits such as shame (Mas and Moretti, 2009), social preferences (Bandiera, Barankay, and Rasul, 2005), and teamwork (Hamilton, Nickerson, and Owan, 2003), these papers primarily focus on social welfare or on individual or team performance. Only Nickerson and Zenger (2008) discuss the strategic implications of psychological processes for employee compensation but, unlike the current paper, they focus exclusively on the effects of employee envy on the firm.

Our work seeks to build theory that integrates the predictions of agency theory and insights from the psychology literature in a comprehensive way. Agency theory is a natural lens by which to study strategic compensation because it approaches the setting of compensation from a cost-benefit viewpoint, with the firm’s principals, or owners, as the fundamental unit of analysis. By using agency theory as a base, our integrated framework leads to a rich set of testable predictions around the methods by which firms strategically set compensation policy. We further seek to illustrate the impact of non-executive compensation on the broader strategy of the firm, explaining how our framework can inform other complementary activities and choices made by the firm. The paper is laid out as follows.
In the next section, we briefly introduce the approach we take to building an integrated theory of strategic compensation. We then review agency theory as well as the literatures on social psychology and behavioral decision making for relevant and empirically supported insights regarding social comparison processes and overconfidence. Next, we combine insights from these literatures into an integrated theory of strategic compensation. We end the paper by examining the implications of our theory for strategic compensation decisions by firms, and by discussing empirical implications, testable propositions and next steps.

The Implications of the Infrequency of Individual Pay-for-Performance

Our research is primarily motivated by the disconnect between the broad effectiveness of individual pay-for-performance predicted by agency theory and the relative infrequency with which it is observed. 5 We hold that agency theory is correct in broadly equating the effectiveness of different compensation regimes with their prevalence. Compensation systems that tend to be more effective will be used more often. Although firms often deviate from the most efficient systems and can make mistakes, in general the prevalence of systems and decisions is highly correlated with efficiency and effectiveness (Nelson, 1991; Rumelt and Schendel, 1995). We note that the theory we propose in this paper is focused on effectiveness, but due to the above correlation we will often make reference to the prevalence of certain schemes as prima facie evidence of effectiveness.

Footnote 5: Note that “pay-for-performance” includes pay based on subjective measures of performance as well as objective ones. Agency theory holds that even when output is not observable or measurable, firms will often use performance-based, subjective measures of performance (e.g., Baker, 1992).

Indeed, the infrequent use of individual performance-based pay for non-executives casts doubt on its overall efficacy (Zenger, 1992). A 2010 international survey of 129,000 workers found that only 40 percent received pay tied to performance at any level (individual, team, firm) (Kelly, 2010), and over half of Fortune 1000 companies report using individual performance-based pay for “some,” “almost none” or “none” of their work force (Lawler, 2003). Even when performance-based pay is used, the proportion contingent on performance is typically low. The median bonus for MBA graduates, whose employment skews toward professional services that frequently use performance pay, represents only 20 percent of base salary (VanderMay, 2009). Performance pay based on team metrics – such as division profitability, product market share, or other non-individual measures – is far more common than individual performance-based pay.

This unexpectedly low prevalence suggests higher costs or lower performance from individual incentives than agency theory predicts. Still, this discrepancy does not mean that agency theory fails to garner empirical support. Many of the core predictions of agency theory have been empirically validated in experimental and real-world settings (Gerhart, Rynes and Fulmer, 2009; Prendergast, 1999). Our theory takes the insights from agency theory that have received strong empirical support and integrates them with empirically validated insights from social psychology. We argue that only by using an integrated cost-benefit lens can accurate predictions around compensation be made at the level of the firm.
Agency Theory and Strategic Compensation

At its core, agency theory posits that compensation is strategic in that firms will use the compensation program that maximizes profits based on its unique costs and benefits. In agency theory, costs arise due to differences between firms and employees in two crucial areas: objectives and information. Two potential costs arise from these differences: an employee may not exert maximum effort (or effort may be inefficiently allocated), and the firm may pay workers more than they are worth (i.e., their expected marginal product). In this section we detail the key differences between employees and firms in objectives and information, and the resulting predictions from agency theory about a firm’s compensation strategy. Figure 1 summarizes the arguments described below.

*** Insert Figure 1 here ***

Objectives

The fundamental tension in agency theory arises from differences in the objectives of firms and employees. Firms seek to maximize profits, and increased compensation affects profitability by motivating employee effort (+) and attracting more highly skilled employees (+) while increasing wage costs (-) (Prendergast, 1999). Employees, on the other hand, seek to maximize utility. Increased compensation affects utility by increasing income (+), yet employees must balance utility from income with the disutility (or cost) of increasing effort (-). Agency theory argues that effort is costly to employees at the margin; employees may intrinsically enjoy effort at small or moderate levels, but dislike increases in effort at higher levels (Lazear and Oyer, 2011). Agency theory further argues that firms must pay workers a premium for taking on any risk in pay uncertainty, since employees are risk averse. This creates distortion with risk-neutral firm owners, who can use financial markets to optimally hedge against risk (Jensen and Meckling, 1976). However, we limit our discussion of risk in this paper for the sake of brevity, and because agency theory’s predictions on risk have demonstrated very little if any empirical support (Prendergast, 1999). In contrast, agency theory’s prediction on the relationship between effort and pay has been largely supported in the empirical literature (Prendergast, 1999; Lazear and Oyer, 2011).

Information

Two information asymmetries, where the worker knows more than the firm, drive compensation choices in agency theory. Workers know their own effort exertion and skill level, while firms have imperfect information about both. Agency theory holds that firms overcome these asymmetries by providing incentives for workers to exert effort and self-select by skill level. For example, by offering a low guaranteed wage with a large performance element, a firm can incentivize higher effort from all workers, but it can also attract and retain workers with high skills, while “sorting away” those with low skills (Lazear, 1986; Lazear and Oyer, 2011).

Predictions of standard agency theory

The basic tradeoffs in agency theory are around effort (good for the firm but bad for the employee) and pay (bad for the firm but good for the employee). Given the information problems described above, and ignoring psychological factors, firms should pay employees for performance if the productivity gains from the effort it motivates are greater than the cost of the pay.
Secondarily, pay-for-performance systems separate skilled employees, who earn more under such schemes, from unskilled ones, who are better off in settings where performance does not matter. Basic agency theory holds that there are two basic alternatives firms take when setting pay: paying a flat wage, or paying for performance. The most obvious way to pay for performance is to base pay on some observed output of the worker or company, but firms can also base pay on subjective measures not tied to observed output. 6

Footnote 6: Agency theory holds that firms are more likely to use subjective measures as the correlation between observed output and effort is lower (Baker, 1992).

The tradeoffs noted above lead to three fundamental insights on information and individual pay-for-performance that emerge from agency theory:

Insight 1: Employees work harder when their pay is based on performance.

Insight 2: Firms are more likely to use performance-based pay (vs. flat pay) when they have less information about actual employee effort.

Insight 3: Firms are more likely to use performance-based pay (vs. flat pay) as they have less information about employee skill level, and/or as employee skill level is more heterogeneous.

Team-based compensation

Agency theory also approaches team-based compensation with a cost-benefit lens; team-based compensation improves performance when benefits from coordination outweigh costs from the reduced effort of free-riding (Bonin et al., 2007). Notably, standard agency theory views team-based compensation as important only when the firm chooses a production process requiring close integration across a team to internalize production externalities from individual workers. Consequently, when coordination is unnecessary, team-based incentives are unlikely to be efficient, and firms set compensation strategy largely based on the observability of output, effort, and skill. If high-powered incentives are particularly important but individual effort is not observable, firms may use team-based compensation, although the costs of free-riding make this an exception rather than the rule. Furthermore, team-based pay on average may attract lower-skilled or less productive workers than individual-based pay due to lower earning potential and lower costs of shirking. 7 This leads to a fourth insight from standard agency theory:

Insight 4: Firms are more likely to use team-based performance pay (vs. individual-based pay) when coordination across workers is important, when free-riding is less likely, or when monitoring costs are low.

Footnote 7: Results from Hamilton, Nickerson, and Owan’s (2003) study of garment factory workers cast some doubt on these predictions. They found that high-ability workers prefer to work in teams, despite earning lower wages. This is consistent with recent work on how the social preferences of workers can overwhelm financial incentives (Bandiera, Barankay, and Rasul, 2005).

Basic predictions of agency theory

Given these four insights from agency theory, we present the likely compensation choices of firms under an agency theory model in Figure 2, where coordination by employees is not required and the primary determinants of pay are the observability of output, effort, and ability. As noted in the left-hand figure, when ability is observable, individual performance-based pay is more likely to be used as a firm better observes individual output but is less able to observe actual effort. When both effort and output are highly observable, firms prefer to use a set salary, where an employee is given a set wage regardless of performance. 8 It is important to note that with effort and output both observable, this salary is inherently based on average performance.
While the worker can reduce effort for short periods, the observability of this effort means that the firm can adjust compensation or terminate the employee in response to observed output.

Footnote 8: This prediction also stems from the assumed risk aversion of employees.

*** Insert Figure 2 here ***

As noted in the right-hand figure, the situation changes dramatically when individual skill is not observable. In such cases, compensation not only motivates employees, but also attracts types of employees to the firm. Individual performance-based pay is more likely across both margins of the graph: at a given level of output or effort observability, firms are more likely to use performance-based pay when employee skills are not observable than when they are.

When it is important for employees to coordinate effort across tasks, a third compensation strategy comes into play: team performance-based pay. This refers to a pay system that measures and rewards performance at a level other than the individual, such as the division, product line or company. As depicted in Figure 3, assuming imperfect (but not zero) observability of individual output, team performance-based pay is more likely as coordination across employees increases and observability of individual effort decreases. Finally, as individual effort observability increases, firms again prefer salaries, as they are the most efficient form of compensation. As before, individual performance-based pay becomes more important as the need for sorting due to skill unobservability grows.

*** Insert Figure 3 here ***

Agency theory provides a compact, plausible theory that predicts the profitability and use of performance-based pay in a wide number of settings. It is therefore surprising that individual performance-based pay is used so little (Camerer et al., 2004; Baker, Jensen and Murphy, 1988), given the strong empirical evidence of its impact on employee effort (e.g., Lazear, 1986; Paarsch and Shearer, 2000). Part of this inconsistency may be due to the fact that the induced effort is directed toward non-productive or detrimental activities (Kerr, 1975; Oyer, 1998; Larkin, 2007). However, even considering these “gaming costs,” the magnitude of the performance differences in the above empirical studies makes it difficult to believe that gaming alone explains the dearth of performance-based pay. 9

Footnote 9: Note that the existence of costs from performance-based pay, as demonstrated in the studies above, does not mean that these pay systems are suboptimal. Agency theory would hold that the net benefits of the system, even including the identified costs, must be greater than the net benefits of any other system.

Incorporating Insights from Psychology and Decision Research into Agency Theory

We argue that the low prevalence of individual performance-based pay in firms reflects several important relationships between the psychology of employees and their pay, utility, and resulting actions. In each case, the psychological mechanism we suggest to be at work makes performance-based pay more costly for firms, which may help explain why performance-based pay is less common than agency theory predicts. However, we also argue that the basic structure of agency theory is still a useful lens for examining how insights from psychology and behavioral decision research affect compensation predictions.
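Before the psychological costs are layered in, the baseline agency-theory logic summarized in Insights 1-4 and Figures 2 and 3 can be caricatured as a simple decision rule. The sketch below is ours and purely illustrative; the function name, inputs, and thresholds are assumptions rather than anything specified by the paper.

```python
# Illustrative sketch (ours, not the authors'): the baseline agency-theory
# predictions of Insights 1-4, caricatured as a decision rule. All names and
# the ordering of the checks are assumptions made for illustration only.

def predicted_pay_scheme(output_observable: bool,
                         effort_observable: bool,
                         skill_observable: bool,
                         coordination_needed: bool,
                         free_riding_risk: str = "low") -> str:
    """Return the compensation form standard agency theory would lean toward."""
    # Insight 4: coordination favors team-based pay when free-riding is manageable.
    if coordination_needed and free_riding_risk == "low":
        return "team performance-based pay"
    # Insights 2-3: poor information about effort or skill pushes toward
    # individual performance-based pay (motivation plus sorting).
    if output_observable and (not effort_observable or not skill_observable):
        return "individual performance-based pay"
    # With effort and output both observable, a set salary (implicitly based
    # on average performance) is predicted to be the most efficient form.
    if effort_observable and output_observable:
        return "salary"
    # When output is not observable, subjective performance measures are used
    # (cf. the discussion around Baker, 1992, in the footnotes above).
    return "subjective performance-based pay"

if __name__ == "__main__":
    print(predicted_pay_scheme(output_observable=True, effort_observable=False,
                               skill_observable=False, coordination_needed=False))
    # -> "individual performance-based pay"
```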
Like agency theory, our framework decomposes the strategic element of compensation into differences between firms and employees in objectives and information, and recognizes that there is a “work-shirk” tradeoff for the average employee. Integrating psychological insights into this agency-based framework allows us to put forward an integrated theory of strategic compensation that considers both economic and psychological factors, and a testable set of propositions. As with all models, we abstract away from many variables that are relevant to compensation, and focus on two psychological factors which, in our view, create the largest impact on the methods by which firms compensate workers: overconfidence and social comparison processes. In this section, we discuss how these psychological factors add costs to performance-based compensation systems, using the framework developed in Section 2. These additions are depicted in Figure 4. Throughout the section, we will refer back to this figure to explain clearly how the consideration of these psychological costs modifies some of the main predictions of standard agency theory.

*** Insert Figure 4 here ***

Performance-based pay and social comparison

Social comparison theory (Festinger, 1954) introduces considerable costs associated with individual pay-for-performance systems because it argues that individuals evaluate their own abilities and opinions in comparison to referent others. Psychologists have long suggested that individuals have an innate desire to self-evaluate by assessing their abilities and opinions. Because objective, nonsocial standards are commonly lacking for most such assessments, people typically look to others as a standard. Generally, individuals seek and are affected by social comparisons with people who are similar to them (Festinger, 1954), gaining information about their own performance.

As noted in Figure 4, social comparison theory adds a fourth information set to the three studied in agency theory: firms’ and employees’ knowledge about the pay of other employees. When deciding how much effort to exert, workers not only respond to their own compensation, but also respond to pay relative to their peers as they socially compare. In individual pay-for-performance systems, pay will inevitably vary across employees, generating frequent pay comparisons between peers. As suggested by equity theory (Adams, 1965), workers are not necessarily disturbed by such differences, since they consider information about both the inputs (performance) and outputs (pay) in such comparisons. If workers were to rationally perceive pay inequality to be fairly justified by purely objective and easily observable performance differences, then such pay differences would generate few (if any) psychological costs. Yet pay comparisons can lead to distress resulting from perceptions of inequity if inputs or performance are either unobservable or perceptions of those inputs are biased. For example, employees might believe they are working longer hours or harder than referent coworkers, and if their pay level is relatively low, they will likely perceive inequity.
Theoretical work in economics and strategy has followed psychology in arguing that such comparisons can lead to reduced effort (Solow, 1979; Akerlof and Yellen, 1990) and to behavior grounded in envy, attrition, and the tendency to sabotage other workers within the same organization (Nickerson and Zenger, 2008; Bartling and von Siemens, 2010). 10 Empirical studies show that social comparisons are indeed important to workers (Blinder and Choi, 1990; Campbell and Kamlani, 1990; Agell and Lundborg, 2003), and can hurt morale (Mas, 2008), stimulate unethical behavior (Cropanzano et al., 2003; Pruitt and Kimmel, 1977; Gino and Pierce, 2010; Edelman and Larkin, 2009), and reduce effort (Greenberg, 1988; Cohn et al., 2011; Nosenzo, 2011). Perceived inequity can also increase turnover and absenteeism and lower commitment to the organization (Schwarzwald et al., 1992). While this negative impact is typically stronger when the employee is disadvantaged (Bloom, 1999), costly behavior can also occur when the employee is advantaged and feels compelled to help others (Gino and Pierce, 2009).

Footnote 10: Social psychology’s work on equity and social comparison has slowly disseminated into the economics literature, having a profound impact on experimental economics (Rabin, 1996), particularly in the literature on fairness (e.g., Camerer, 2003; Fehr & Gachter, 2000; Fehr & Schmidt, 1999).

Perceived inequity in pay can furthermore have a costly asymmetric effect. Recent evidence suggests that below-median earners suffer lower job satisfaction and are more likely to search for a new job, while above-median earners generate no productivity benefits from superior pay (Card et al., 2010) and may even engage in costly actions to assuage guilt (Gino and Pierce, 2009). While not all below-median earners perceive unfairness, this evidence is certainly consistent with a substantial frequency of inequity perception, and may also reflect dissatisfaction with the procedures used to allocate pay across workers. Social comparison across firms by CEOs has also been shown to lead to costly escalations in executive salaries, a phenomenon that can also occur between employees in the same firm (Faulkender and Yang, 2007; DiPrete, Eirich, and Pittinsky, 2008). As noted in Figure 4, social comparison theory adds two insights to the costs of performance-based pay:

Insight 5a: Perceived inequity through wage comparison reduces the effort benefits of individual pay-for-performance compensation systems.

Insight 5b: Perceived inequity through wage comparison introduces additional costs from sabotage and attrition in individual pay-for-performance compensation systems.

Furthermore, employees may perceive “random shocks” to performance-based pay as unfair, especially if these shocks do not occur to other workers. If a regional salesperson’s territory suffers an economic downturn, for example, this may reduce the salesperson’s pay despite no change in effort or ability. Other shocks, such as weather, equipment malfunctions, customer bankruptcies, or changing consumer preferences, may negatively impact worker compensation outside the employee’s control. Resulting perceptions of unfairness can lead to the same problems noted above: lack of effort, sabotage and attrition.
As noted in Figure 4, this generates an additional insight:

Insight 6: Perceived inequity arising through random shocks in pay introduces additional costs from reduced effort, sabotage, and attrition in individual pay-for-performance compensation systems.

Therefore, social comparison theory essentially adds another information set to agency theory: the pay of others. The firm, of course, knows everyone’s pay, but the effects of social comparison on pay are greater as workers have more information about the pay of referent others. The psychology literature has until recently placed less emphasis on tying the importance of social comparisons to employee actions that benefit or cost firms, and the strategy literature has, with the exception of Nickerson and Zenger (2008), not yet integrated this construct into studies of organizational strategy. As we show in a later section, the failure of agency theory to include social comparison costs means that many of the firm-wide costs of performance-based pay are missed.

Overconfidence and performance-based pay

Psychologists and decision research scholars have long noted that people tend to be overconfident about their own abilities and too optimistic about their futures (e.g., Weinstein, 1980; Taylor and Brown, 1988). Overconfidence is thought to take at least three forms (Moore and Healy, 2008). First, individuals consistently express unwarranted subjective certainty in their personal and social predictions (e.g., Dunning et al., 1990; Vallone et al., 1990). Second, they commonly overestimate their own ability; and finally, they tend to overestimate their ability relative to others (Christensen-Szalanski and Bushyhead, 1981; Russo and Schoemaker, 1991; Zenger, 1992; Svenson, 1981). Recent research has shown that overconfidence is less an individual personality trait than a bias that affects most people, depending on the task at hand (e.g., Moore and Healy, 2008). People tend to be overconfident about their ability on tasks they perform very frequently, find easy, or are familiar with. Conversely, people tend to be underconfident on difficult tasks or those they seldom carry out (e.g., Moore, 2007; Moore and Kim, 2003). This tendency has large implications for overconfidence in work settings, since work inherently involves tasks with which employees are commonly very familiar. 11 We suggest that overconfidence changes the informational landscape by which firms determine compensation structure, as noted in Figure 4.

When overconfident, employees’ biased beliefs about their own ability and effort alter the cost-benefit landscape of performance-based pay. First and foremost, performance-based pay may fail to efficiently sort workers by skill level, reducing one of the fundamental benefits of performance-based pay. Overconfident workers will tend to select into performance-based compensation systems, particularly preferring individual-based pay-for-performance (Cable and Judge, 1994; Larkin and Leider, 2011). This implies that workers may no longer accurately self-select into optimal workplaces based on the incentives therein. Instead, overestimating their ability, they may select into performance-based positions that are suboptimal for their skill set. If workers overestimate the speed with which they can complete tasks (Buehler et al., 1994), for instance, they may expect a much higher compensation than they will ultimately receive, leading to repeated turnover as workers seek their true avocation.
While this sorting problem may impact some firms less due to superior capability to identify talent, considerable evidence suggests that hiring lower-ability workers is a widespread problem (Bertrand and Mullainathan, 2004). A similar sorting problem may occur when overconfident workers are promoted more frequently under a tournament-based promotion system, exacerbating problems as they rise to managerial positions (Goel and Thakor, 2008). These overconfident managers may in turn attract similarly overconfident employees, amplifying future problems (Van den Steen, 2005). Based on this reasoning, we propose the following insight:

Insight 7: Overconfidence bias reduces the sorting benefits of individual pay-for-performance compensation systems.

Overconfidence not only has immediate implications for the optimal sorting of workers across jobs, but it also may lead to reduced effort when combined with social comparison. A worker, believing himself one of the most skilled (as in Zenger, 1992), will perceive lower pay than a peer as inequitable, despite that peer's truly superior performance. This perceived inequity would be particularly severe when there is imperfect information equating effort and ability to measurable and thus compensable performance. We thus suggest that:

Insight 8a: Overconfidence bias increases perceived inequity in wage comparison and thereby decreases the effort benefits of individual pay-for-performance compensation systems.

Insight 8b: Overconfidence bias increases perceived inequity in wage comparison and thereby aggravates costs from sabotage and attrition in individual pay-for-performance compensation systems.

Reducing Psychological Costs through Team-Based and Scaled Compensation

Although psychological costs of social comparison and overconfidence make individual pay-for-performance systems less attractive than under a pure agency theory model, firms may still wish to harness the effort improvements from performance-based pay. We argue that firms frequently use intermediate forms of compensation that combine some level of pay-for-performance with the flatter wages of fixed salaries. In this section we use an integrated agency and psychology lens to analyze the costs and benefits of two of these intermediate forms: team-based and scale-based wages. While both team-based and scale-based systems can be costly due to decreased effort, they present clear psychological benefits.

Under a team-based system, an employee is compensated based on the performance of multiple employees, not just their individual performance. The primary psychological benefit of team-based performance pay is that it reduces the costs of social comparison, making it relatively more attractive than predicted by agency theory, which holds that team-based pay will be used only when the benefits to coordination across employees are greater than the costs of free-riding. Under a scaled wage system, employees are compensated in relatively tight "bands" based largely on seniority.
As with team-based systems, scaled wages result in lower costs from social comparison and overconfidence, and are therefore more attractive than standard agency theory would predict, even if effort is somewhat attenuated due to weakened incentives. Reducing social comparison costs through intermediate forms of compensation In team-based compensation systems, the firm retains performance-based incentives, but instead of tying them to individual performance they link them with the performance of teams of employees. These teams may be extremely large, such as at the business unit or firm level, or may be based in small work groups. In general, smaller groups present higher-powered incentives and reduce free-riding, while larger groups present weaker incentives. Team-based compensation can reduce one dimension of social comparison: wage comparison. By equalizing earnings across workers within teams, team-based compensation removes discrepancies in income among immediate coworkers that might be perceived as sources for inequity or unfairness. Employees, however, examine the ratio of inputs to outcomes when judging equity (Adams, 1965). The evening of wages within teams reduces social comparison on wages (outcomes) and not comparisons of contribution through perceived ability or effort (inputs). Team members will therefore perceive equivalent pay among members as truly equitable only if they perceive each employee’s contribution to the team to be equal, so some problems of social comparison remain. Although overconfidence may magnify perceptions of own contributions, existing studies, while limited, suggest that perceptions of fairness depend Strategic Compensation 21 much more on outcomes than inputs (Oliver and Swan, 1989; Siegel et al., 2008; Kim et al., 2009), with employees more focused on compensation than inputs (Gomez-Mejia and Balkin, 1992). 12 Team-based compensation would best resolve the social comparison problem in teams where contribution is homogeneous, but given the lesser weight of inputs in equity evaluations, even widely heterogeneous differences in ability or effort are unlikely to produce the social comparison costs that wage inequality will. This reasoning leads to the following proposition: Proposition 1: Team-based compensation reduces costs of social comparison when individual contribution is not highly heterogeneous within the team. Team-based compensation fails to reduce an additional social comparison costs, however: it cannot address wage comparisons across teams. Workers in some teams may believe earnings in higher-paid teams are inequitable, which may lead to psychological costs similar to individual-based systems. This problem may be exacerbated by workers’ perception that their team assignment was inherently unfair, and thereby may create a new dimension for comparison. Firms can reduce this potential social comparison cost by implementing scaled wages. Scaled wages will severely reduce equity and envy-based problems associated with wage comparisons across teams by creating uniformity throughout the firm for given job and seniority levels. While workers may still perceive outcome and effort to be unfair, this perception will be less personal given the firm’s consistent policy of scale-based wages. The worker may view the policy as unfair, but will not feel personally affronted by a managerial decision to underpay them. Costs from inequity and envy will therefore be reduced, reducing psychological costs relative to performance-based pay. 
Scaled wages will of course motivate the highest-ability workers to leave the firm, because their contribution will not be adequately remunerated, but this is a cost already accounted for in economic theories of agency. Similarly, scaled wages may also involve larger administrative and bureaucratic costs, since firms must determine and communicate the basis on which the scaled system rests. These administrative costs, however, may actually deepen employee trust in the fairness of the system. We thus propose that:

Proposition 2: Scaled wages have lower social comparison costs than team-based and individual-based compensation systems.

[12] Gächter, Nosenzo, and Sefton (2010) find that laboratory participants socially compare on effort, and that this reduces the efficacy of increases in flattened financial incentives in inducing effort. This suggests team-based compensation may be less effective relative to flat wages in motivating effort.

We illustrate our model's predicted impact of social comparison on the likely compensation choices of the firm in Figure 5. For reference, the left-hand box shows the standard predictions of agency theory, based on Figure 3 and the assumption of a moderate degree of task coordination across employees. As noted in the figure, agency theory assumes that compensation choice does not depend on the ability of employees to observe the pay of peers. The right-hand box shows how the incorporation of social comparison costs changes the model's predicted compensation choice. As seen in the figure, individual-performance-based pay is predicted far less often when social comparison is present, and team-based and salary-based pay are predicted more often. Also, scale-based pay is predicted with social comparison, but not under agency theory. The model's predictions therefore change dramatically with the incorporation of psychology.

*** Insert Figure 5 here ***

At high levels of pay observability by peers, performance-based pay is very costly, and firms are predicted to turn towards scale-based pay or flat salaries. As employee observability of peer pay goes down, pay based on team performance becomes more likely as the motivational benefits of pay for performance begin to outweigh the costs of social comparison. Still, if peers have some view of peer pay, the model holds that firms are unlikely to base pay primarily on individual performance. Hence, team-based pay is used far more frequently than predicted in agency theory because of its lower social comparison costs. Finally, individual-based performance pay is predicted only when peers have very poor visibility of others' pay, and when effort cannot be perfectly observed. This is analogous to the prediction of standard agency theory, which does not take social comparison costs into consideration.

Reducing overconfidence costs through flattening compensation

Overconfidence creates considerable problems for individual-based compensation through its aggravation of social comparison and its undermining of efficient sorting processes. It creates similar problems in team-based compensation. Overconfident employees, unless they can observe the actual contribution of teammates, will usually interpret underperformance by the team as reflective of other workers' deficiencies, while attributing strong team performance to themselves.
These biased conclusions, which result from biases in attribution of performance, will create erroneous perceptions of inequity that may lead to reduced effort, attrition, and reduced cooperation. Similarly, overconfident workers will perceive assignments to lower quality teams as unfair, because they will perceive their teammates as below their own ability. This can result in workers constantly trying to switch into better teams of the level they observe themselves to be. Thus, we introduce the following proposition: Proposition 3: Team-based compensation only resolves problems of overconfidence in individual pay-for-performance systems if the actual contribution of teammates is observable. Introducing scaled compensation within teams may not completely alleviate costs of overconfidence, but scaled wage systems can prove much less costly when overconfidence is present. With flatter wages across the firm, workers are less likely to socially compare with peers in other teams, and are less likely to expend political effort attempting to transfer into other team. Strategic Compensation 24 Instead, overconfident workers under scale-based wages will potentially observe workers at other firms earning higher wages and attempt to leave a firm in order to restore perceived inequality. The most overconfident workers are unlikely to even sort into the firm, given their perception that they will never be paid what they are truly worth. Scale-based wages therefore solve psychological costs of overconfidence by sorting out the most overconfident workers. This reasoning leads to our next proposition: Proposition 4: Scale-based wages reduce costs of overconfidence in individual- and team-based pay-for-performance. We present the impact of overconfidence on likely pay choices in Figure 6, which shows our model’s predictions about how a firm’s compensation policy changes when employees are overconfident. For comparison, Figure 6 on the left repeats the right-hand box in Figure 5, where overconfidence is not considered. As noted in the figure, overconfidence increases the need for team-based and scale-based wages because they sort out overconfident workers who are more likely to perceive inequity in pay. Correspondingly, firms are less likely to use salaries even when individual effort is observable, because employees do not have unbiased views on their own or others’ effort. Even when employees cannot see one another’s pay, firms are more likely to use team-based pay because an overconfident employee has biased views about her own contributions and effort and overestimates the pay of peers (Lawler, 1965; Milkovich and Anderson, 1972). A team performance-based system can provide positive effort motivation while weeding out highly overconfident workers. Therefore, when overconfidence is most severe, scale-based and team performance-based wages will drive out the most overconfident and potentially destructive workers, and are much more likely to be used than salaries or individual performance-based wages. Compared to the predictions from standard agency theory shown in the left-hand side of Figure 5, which does not take into account the costs of social comparison or Strategic Compensation 25 overconfidence, our model shows that scale- and team- performance-based pay are far more likely than agency theory predicts. 
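As an illustrative aside (not part of the original analysis), the following minimal Python sketch encodes a stylized reading of the qualitative predictions discussed around Figures 5 and 6. The categories, thresholds, and the helper name predicted_pay_scheme are assumptions introduced purely for illustration, not a formal restatement of the model.

```python
def predicted_pay_scheme(peer_pay_visibility: str,
                         effort_observable: bool,
                         workforce_overconfident: bool) -> str:
    """Stylized reading of the Figure 5/6 discussion (illustrative only).

    peer_pay_visibility: "high", "moderate", or "low" -- how well employees can
    observe referent others' pay. The labels and cutoffs are hypothetical; the
    text states these predictions only qualitatively.
    """
    if peer_pay_visibility == "high":
        # Social comparison costs dominate: flatten pay via scales or salaries.
        # With an overconfident workforce, salaries are less attractive because
        # self-assessments of effort and contribution are biased.
        return "scale-based wages" if workforce_overconfident else "flat salary or scale-based wages"
    if peer_pay_visibility == "moderate":
        # Motivational benefits start to outweigh comparison costs,
        # but individual pay-for-performance remains unlikely.
        return "team-based performance pay"
    # Low visibility of peers' pay:
    if workforce_overconfident:
        # Team- or scale-based pay helps sort out the most overconfident workers.
        return "team-based performance pay"
    if not effort_observable:
        # Analogous to the standard agency-theory prediction.
        return "individual pay-for-performance"
    return "flat salary"


if __name__ == "__main__":
    for visibility in ("high", "moderate", "low"):
        for overconfident in (False, True):
            for effort_obs in (False, True):
                scheme = predicted_pay_scheme(visibility, effort_obs, overconfident)
                print(f"visibility={visibility:8s} overconfident={overconfident!s:5s} "
                      f"effort_observable={effort_obs!s:5s} -> {scheme}")
```

The sketch simply makes the comparative logic easy to trace; any of the branch outcomes could be adjusted as the figures themselves dictate.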
*** Insert Figure 6 here *** Implications for Firm Strategy Reflecting agency theory, strategic compensation has almost exclusively focused on improved effort and sorting that firms enjoy when using optimal compensation strategy. While these direct effects are undeniably relevant, an important implication of our model is that indirect effects of compensation also have strategic implications. Indeed, employee compensation is not an isolated firm policy. It broadly impacts the other choices and activities of the firm, and must be complementary with them in order to support the firm’s strategic position (Porter, 1996). Also, social comparison theory suggests that compensation for one employee can spill over and affect decisions made by other employees within a firm. Social comparison costs can dramatically impact the overall strategy of the firm by limiting the firm’s ability to apply high-powered incentives or a wide variance in compensation levels across employees. Williamson (1985) explained how this can affect a firm’s corporate strategy in limiting gains from mergers and acquisitions in his discussion of Tenneco’s acquisition of Houston Oil and Minerals Corporation. Agency theory would predict that premerger firms having considerably different pay structures would have little impact on the postmerger firm. Yet Tenneco was forced to standardize pay across employees to avoid social comparison costs, an adjustment that cost USAir 143 million USD the year following its acquisition of Piedmont Aviation (Kole and Lehn, 2000). This reflects how firm boundaries can change reference groups among employees and force firms to elevate the wages of the lowest Strategic Compensation 26 peer group to improve perceptions of pay equity among new coworkers (Kwon and MeyerssonMilgrom, 2009). Similarly, Dushnitsky and Shapira (2010) suggest that a firm’s strategic decision to implement a corporate venture capital program may create problems of social comparison, since the efficacy of high-powered incentives in such programs necessitates pay-for-performance. Since the considerable upside of such compensation contracts can generate huge pay inequalities within the firm, such programs may generate conflict across personnel. Similar problems have limited the ability to implement individual pay-for-performance for internal pension fund managers in firms and state governments (Young, 2010; Wee, 2010). In enterprise software, aggressive pay-for-performance in sales – a single job function – has been shown to be correlated with high turnover and low employee satisfaction in other job functions such as marketing and product development (Larkin, 2008). Overconfidence can also impact the strategic implications of compensation policy. Investment banks frequently take highly-leveraged positions in the marketplace, creating tremendous profit potential but also greater risk. The high-powered performance-based incentives of investment banking attract many high-ability individuals, but these compensation schemes also attract some of the most overconfident workers in the world (Gladwell, 2009). While this overconfidence may yield some benefits in bluffing and credible commitment, it also produced considerable problems at firms like Bear Stearns, which collapsed early in the recent banking crisis. First, persistent overconfidence led the bank toward aggressive, highly-leveraged derivatives that ultimately yielded liquidity problems. 
Second, envy and comparison of bonus pay led to increasingly aggressive behavior in investment banks.Strategic Compensation 27 Furthermore, recent work suggests that overconfident CEO’s are more likely to pursue innovation, particularly in highly-competitive industries (Galasso and Simcoe, forthcoming). While the focus of our paper is non-executive pay, the same rule may apply at lower levels in the firm, whether in research and development, product development, operations, or finance. Experimental evidence suggests that overconfident technical managers are much more likely to pursue aggressive R&D strategy (Englmaier, 2010). Under individual pay-for-performance, which is inherently highly-competitive, non-executive employees may also pursue extensive innovation for financial or career gains. The decision to grant such employees wide discretion in applying innovation and change within the firm may require flatter compensation structures to reduce the risk of attracting overconfident workers and incentivizing them toward excessive risk. Similarly, many firms position their products in ways that require personal and customized sales channels. Because effort is difficult to monitor among these salespeople, firms typically employ pay-for-performance commission schemes, which motivates effort, but can provide few sorting benefits. One leading management consulting company used extensive surveys to find that enterprise software salespeople’s expected commissions averaged $800,000 per year. Yet these expectations were nearly eight times the actual median compensation, suggesting high overconfidence about their own sales abilities. Larkin (2007) notes that the annual attrition rate of similar software salespeople was nearly 30 percent, and average tenure level was only two years, suggesting that salesperson failure to meet excessive expectations motivated attrition. Given that industry sales cycles are a year or more and customer relationships are critical, high salesperson attrition is extremely costly to software vendors (Sink, 2006). Empirical Implications and Directions for Future ResearchStrategic Compensation 28 The agency theory approach to strategic compensation has proved very robust: it makes simple, testable predictions, many of which have held up to considerable empirical testing. The three major predictions with strong empirical support are that 1) employees increase effort in response to incentives; 2) employees put effort into “gaming” incentive systems which can negatively affect performance; and 3) incentives can lead employees to sort by skill level. Our integrated framework suggests a number of new predictions regarding the role of psychological costs in the study of strategic compensation. We identified two sets of psychological costs: social comparison costs and overconfidence costs. A first set of predictions focuses on social comparison costs. Our theory predicts that social comparison costs reduce the efficacy of individual performance-based pay as a compensation strategy. Consequently, firms will take one of two actions when social comparisons are prevalent among employees: dampen the use of performance-based incentives, or attempt to keep wages secret. Although many firms have strict wage secrecy policies, these are frequently ineffective due to workers’ overestimation of peer wages (Lawler, 1965; Milkovich and Anderson, 1972) or are explicitly illegal (Card et al. 2010). 
The difficulty of imposing and maintaining wage secrecy makes flattening wages through scale- or team-based pay a frequently necessary solution. One approach to testing these propositions is to collect data from surveys or industry reviews to examine how and when the prevalence and costs of social comparisons vary across industry and company environments. Instruments developed in the psychology literature provide guidance on how to measure social comparison processes using survey items or field interventions in organizations. Such an analysis would be inherently cross-sectional, however, and would merely establish correlations between social comparison and compensation practices. Strategic Compensation 29 One fruitful avenue for empirical testing may be publicly-funded organizations such as universities and hospitals. In many jurisdictions, salary-disclosure laws have produced natural experiments that allow for the study of behavioral responses to newly observed peer compensation, and the organizational responses to them. Recent work by Card et al. (2010), which exploits the public disclosure of California employee salaries, is an example of the potential of this approach. Similarly, acquiring data on firms that change compensation structure or acquire another firm with different wage levels can allow for examining how increased variance in pay may reduce worker productivity. Such findings would be particularly striking if productivity decreased despite absolute pay increases. Exploiting variation in worker assignment (Chan et al. 2011; Mas and Moretti, 2009) or exogenous organizational change for workers (Dahl, 2011) could allow for estimating the effect of relative pay on performance while controlling for absolute pay. Similar changes between team- and individual- based compensation systems could potentially identify how individuals react to social comparison in different incentive structures, and how that influences performance. Where data on such changes are not available, field experiments that change compensation systems for a random set of employees and study resulting behavior and performance may prove useful (for a recent example, see Hossain and List, 2009). A second set of new predictions resulting from our theoretical framework centers around overconfidence. If overconfidence plays a negative role in the job function, we predict that firms will either dampen incentive intensity, or set up a compensation scheme which sorts against overconfidence. As noted, overconfidence can exacerbate the perceived inequity of pay-forperformance schemes in settings where social comparisons matter. We would therefore expect that industries and job settings marked by strong social comparison effects will strategically use Strategic Compensation 30 compensation to screen against overconfidence. Furthermore, theoretical work suggests it can considerably reduce sorting benefits from individual pay-for-performance and even generate an escalating attraction and promotion of overconfident employees (Van den Steen, 2005; Goel and Thakor, 2008). However, we still have limited empirical evidence on how compensation sorts by confidence, so future research needs to focus on this question first. In job functions where confidence is important for success, such as in the sales setting, we predict that firms will strategically use compensation to sort by confidence. 
Data on sales commission structure by industry are available (e.g., Dartnell, 2009); a researcher could test whether industries with lower “lead-to-sales” ratios, and/or industries with longer sales cycles, have commission schedules which appear to cater to overconfident employees. For example, in enterprise software, an industry with low “lead-to-sales” ratios and an 18-24 month sales cycle, salespeople are paid by convex commission schedules that can differ by a factor of 20 times or more depending on the salesperson’s other sales in the quarter (Larkin, 2007). Our theoretical framework predicts a relationship between convex compensation (or other schemes that would sort by confidence) and the industry sales cycle and/or lead-to-sales ratio. However, we still need a better understanding of the role confidence plays in job functions outside sales. There is considerable research yet to be done on psychological factors causing employees to sort into different job functions. Future research might also benefit from extending our theoretical framework to include new factors influencing strategic compensation such as employee attitudes towards risk and uncertainty, or to relax some of the assumptions made in our model, for example around the fixed nature of production and technology. These extensions are likely to provide opportunities for future research on the boundary conditions of influences identified in our model.Strategic Compensation 31 Managerial implications We believe our work has a number of immediate implications for managers in both the private and public sector. The first, and most obvious implication, is that the efficacy of individual pay-for-performance is powerfully influenced by psychological factors which if not considered a priori could have considerable unintended consequences for the firm. In choosing whether to implement such a pay system, managers must not only consider easily quantifiable economic costs related to the observability of worker pay and productivity, but also psychological costs due to social comparisons and overconfidence. Under increasing global pressure for worker performance in private sectors, managers are reevaluating traditional scale-based and other flat compensation systems and experimenting with high-powered incentive systems. Similarly, in the public sector, managers facing tightened budgets and public perceptions of ineffectiveness are implementing pay-for-performance schemes to improve effort in settings where these schemes have rarely been used before, such as education (e.g. Lavy, 2009) and aviation regulation (Barr, 2004). While in many cases these increased incentives may prove effective, our work suggests that there may be a sound basis for many of the existing flat compensation systems. Focusing exclusively on increasing effort through high-powered incentives may ignore many of the benefits social and psychological benefits that existing compensation systems provide. In addition, social networking and related phenomena have made information about peer effort, performance and compensation more readily available. We would argue that the costs of performance-based systems are heightened as employees share information across social networks, similar to the impact of online salary information for public employees observed in Card et al (2010). 
With pay secrecy increasingly difficult to enforce, and the private lives of Strategic Compensation 32 coworkers increasingly observable, social comparison costs seem even more likely to play an important role in compensation in the future. Limitations Our theoretical framework needs to be qualified in light of various limitations. One limitation of is our focus on financial incentives as the major driver of effort and job choice. Research in psychology and organizational behavior has proposed that individuals are intrinsically motivated by jobs or tasks (Deci and Ryan, 1985, Deci 1971). While many scholars agree that money is a strong motivator (Jurgensen, 1978; Rynes, Gerhart, and Minette, 2004), powerful pecuniary incentives may be detrimental by reducing an individual’s intrinsic motivation and interest in the task or job. As Deci and Ryan (1985) argue, this reduction occurs because when effort is exerted in exchange for pay, compensation becomes an aspect controlled by others that threatens the individual’s need for self-determination. In the majority of cases, the effects of extrinsic or pay-based motivators on intrinsic motivation are negative (Deci, Koestner, and Ryan, 1999; Gerhart and Rynes, 2003). This stream of research highlights the importance of distinguishing between extrinsic and intrinsic motivation, distinctions which are increasingly being incorporated into the personnel economics literature (Hamilton, Nickerson, and Owan, 2003; Bandiera et al., 2005; Mas and Moretti, 2009). An additional limitation of this work is that we ignore other psychological factors that can impact the role of employee compensation in firm strategy. Loss aversion, for example, could greatly impact the efficacy of individual pay-for-performance. Considerable work in psychology and behavioral decision research has shown that many individuals are asymmetrically loss-averse, where losses are of greater impact than same-sized gains (Kahneman and Tversky, 1979; Tversky and Kahneman, 1991, 1992). These models present individuals as Strategic Compensation 33 having psychologically-important reference points, target income levels based in previous earnings, social expectations, cash-flow requirements, or arbitrary numbers. Workers below the target suffer tremendous losses from this sub-reference income, and will respond with increased effort (Camerer et al., 1997; Fehr and Goette, 2007), misrepresentation of performance or gaming (Schweitzer, Ordonez, and Doumo, 2004), and increased risk-taking. This loss-averse behavior could particularly hurt the firm when the income of the pay-for-performance worker depends on economic returns to the firm. Since such workers typically earn more when returns are high, the direct implication is that workers will put forth less effort when it is most beneficial to the firm and more effort when least beneficial (Koszegi and Rabin, 2009). Conclusion Compensation is inherently strategic. Organizations use different compensation strategies and have discriminatory power in choosing their reward and pay policies (Gerhart and Milkovich, 1990). As the human resource and personnel economics literatures explain, these policies directly affect employee performance, but they are also highly complementary with the other activities of the firm. Compensation is not an isolated choice for the firm. It is inextricably linked to the technology, marketing, operations, and financial decisions of the firm. 
Furthermore, in a world with imperfect information, differing risk attitudes and behavioral biases, achieving an efficient, “first best” compensation scheme is impossible, thereby creating the opportunity for firms to gain strategic advantage through compensation strategies complementary to their market position. Given the important effects of compensation for both firm performance and employee behavior, it is important to understand what factors managers should consider when designing their firms’ compensation systems and what elements should be in place for compensation systems to produce desirable worker behavior. Strategic Compensation 34 This paper proposed an integrated framework of strategic compensation drawing from both the economics and psychology literatures. The dominant theoretical perspective for the majority of studies of compensation has been the economics theory of agency (e.g., Jensen and Meckling, 1976; Holmstrom, 1979). Agency theory, with the later extensions of personnel economics, provides powerful insight into the strategic role of compensation by clearly defining the mechanisms that affect employee and firm performance, namely effort provision and sorting. In economic theory, the three observability problems of effort, skill, and output are key to the efficacy of compensation systems in incentivizing effort and sorting workers. We argued that, while providing useful insights on how to design compensation systems, the economic perspective on strategic compensation captures only some of the factors that can affect compensation policy performance. We described an integrated theoretical framework that relies on the effort provision and sorting mechanisms of agency theory, but that introduces psychological factors largely neglected in economics. We focused on the psychology of information, specifically incorporating social comparison costs and overconfidence costs, and their effects on the performance and likely frequency of specific compensation strategies. We demonstrated that firms that account for these psychological costs will likely enact flatter compensation policies or else suffer costs of lower effort, lower ability, and sabotage in their workers. We believe our theoretical framework offers guidance on the main factors managers should consider when determining compensation strategy. At the same time, it offers guidance to researchers interested in advancing and deepening our understanding of the economic and psychological foundations of strategic compensation. Strategic Compensation 35 Acknowledgments We thank Editor Will Mitchell, Todd Zenger, and three anonymous reviewers for insightful feedback on earlier versions of this paper. References Adams JS. 1965. Inequity in social exchange. In Advances in Experimental Social Psychology, Berkowitz L (ed). Academic Press: New York; 2, 267–299. Agell J, Lundborg P. 2003. Survey evidence on wage rigidity and unemployment: Sweden in the 1990s. Scandinavian Journal of Economics 105(1): 15-29. Akerlof GA, Yellen JL. 1990. The fair-wage effort hypothesis and unemployment. Quarterly Journal of Economics 105: 255-283. Baker GP. 1992. Incentive Contracts and Performance Measurement. Journal of Political Economy 100 (3): 598-614. Baker GP, Jensen MC, Murphy KJ. 1988. Compensation and incentives: theory and practice. The Journal of Finance 43 (3): 593-616. Balkin D, Gomez-Mejia L. 1990. Matching compensation and organizational strategies. Strategic Management Journal 11: 153-169. Bandiera O, Barankay I, Rasul I. 2005. 
Social preferences and the response to incentives: evidence from personnel data. Quarterly Journal of Economics 120: 917-962. Barber BM, Odean T. 2001. Boys will be boys: gender, overconfidence, and common stock investment. The Quarterly Journal of Economics 116 (1): 261-292. Barr, S. (2004). At FAA, some lingering discontent over pay system. The Washington Post, November 30, 2004. Metro; B02. Bartling B, von Siemens FA. 2010. The intensity of incentives in firms and markets: Moral hazard with envious agents. Labour Economics 17 (3): 598-607. Bertrand M, Mullainathan S. 2004. Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. American Economic Review 94 (4): 991-1013. Blinder AS, Choi DH. 1990. A shred of evidence on theories of wage stickiness. Quarterly Journal of Economics 105: 1003-1015. Bloom M. 1999. The performance effects of pay dispersion on individuals and organizations. Academy of Management Journal 42: 25-40. Bonin H, Dohmen T, Falk A, Huffman D, Sunde U. 2007. Cross-sectional earnings risk and occupational sorting: the role of risk attitudes. Labour Economics 14: 926-937. Buehler R, Griffin D, Ross M. 1994. Exploring the “planning fallacy”: why people underestimate their task completion times. Journal of Personality and Social Psychology 67: 366-381. Bureau of Labor Statistics. 2009. National Compensation Survey. http://www.bls.gov/eci/ Last accessed May 1, 2011. Cable DM, Judge TA. 1994. Pay preferences and job search decisions: a person-organization fit perspective. Personnel Psychology 47: 317–348. Camerer C. 2003. Strategizing in the brain. Science 300: 1673-1675.Strategic Compensation 36 Camerer C, Babcock L, Loewenstein G, Thaler R. 1997. Labor supply of New York City cabdrivers: one day at a time. Quarterly Journal of Economics 112 (2):407-441. Camerer C, Lovallo D. 1999. Overconfidence and excess entry: an experimental approach. American Economic Review 89 (1): 306-318. Camerer C, Loewenstein G, Rabin M. 2004. Advances in Behavioral Economics,. Princeton University Press: Princeton, NJ. Campbell C, Kamlani K. 1990. The reasons for wage rigidity: Evidence from a survey of firms. Quarterly Journal of Economics 112: 759-789. Card D, Mas A, Moretti E, Saez E. 2010. Inequality at work: The effect of peer salaries on job satisfaction. NBER Working Paper No. 16396. Chan TY, Li J, Pierce L. 2011. Compensation and peer effects in competing sales teams. Unpublished Working Paper. Christensen-Szalanski JJ, Bushyhead JB. 1981. Physician’s use of probabilistic information in a real clinical setting. Journal of Experimental Psychology: Human Perception and Performance 7: 928-935. Cohn A, Fehr E, Herrmann B, Schneider F. 2011. Social comparison in the workplace: Evidence from a field experiment. IZA Discussion Paper No. 5550. Cropanzano R, Rupp DE, Byrne ZS. 2003. The relationship of emotional exhaustion to work attitudes, job performance, and organizational citizenship behaviors. Journal of Applied Psychology 88(1): 160-169. Dahl M. 2011. Organizational change and employee stress. Management Science 57 (2): 240- 256. Dalton DR, Hitt MA, Certo ST, Dalton C. 2007. The fundamental egency problem and its mitigation: Independence, equity, and the market for corporate control. Academy of Management Annals 1: 1-65. Dartnell Corp. 2009. Dartnell’s 30 th Sales Force Compensation Survey. The Dartnell Corporation: Chicago. Deci E. 1971. Effects of externally mediated rewards on intrinsic motivation. 
Journal of Personality and Social Psychology 18: 105-115. Deci EL, Ryan RM. 1985. Intrinsic Motivation and Self-Determination in Human Behavior. Plenum: New York. Deci EL, Koestner R, Ryan RM. 1999. A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation. Psychological Bulletin 125: 627-668. DiPrete TA, Eirich GM, Pittinsky. M. 2008. Compensation benchmarking, leapfrogs, and the surge in executive pay. American Journal of Sociology 115 (6): 1671-1712. Dunning D, Griffin DW, Milojkovic J D, Ross L. 1990. The overconfidence effect in social prediction. Journal of Personality and Social Psychology 58: 568-581. Dushnitsky G, Shapira ZB. 2010. Entrepreneurial finance meets corporate reality: comparing investment practices and performing of corporate and independent venture capitalists. Strategic Management Journal 31(9): 990-1017. Edelman B and Larkin I. 2009. Envy and deception in academia: evidence from self-inflation of SSRN download counts. Working paper, Harvard University, Cambridge, MA. Englmaier F. 2010. Managerial optimism and investment choice. Managerial and Decision Economics 31 (4): 303 – 310. Fang H, Moscarini G. 2005. Morale hazard. Journal of Monetary Economics 52 (4): 749-777.Strategic Compensation 37 Faulkender MW, Yang J. 2007. Inside the black box: the role and composition of compensation peer groups. Unpublished manuscript. Fehr E, Schmidt K. 1999. A theory of fairness, competition, and cooperation. Quarterly Journal of Economics 114: 817-868. Fehr E, Gächter S. 2000. Cooperation and punishment in public goods experiments. American Economic Review 90: 980-994. Fehr E, Goette L. 2007. Do workers work more if wages are high? Evidence from a randomized field experiment. American Economic Review 97 (1): 298-317. Festinger L. 1954. A theory of social comparison processes. Human Relations 7 (2): 117-140. Galasso A, Simcoe TS. forthcoming. CEO overconfidence and innovation. Management Science. Gächter S, Nosenzo D, Sefton M. 2011. The impact of social comparisons of reciprocity. Forthcoming in Scandinavian Journal of Economics. Gerhart BA, Milkovich GT. 1990. Organizational differences in managerial compensation and financial performance. Academy of Management Journal 33 (4): 663-692. Gerhart B, Rynes S. 2003. Compensation: Theory, Evidence, and Strategic Implications. Sage Publications. Gerhart B, Rynes S, Smithey Fulmer I. 2009. Pay and performance: Individuals, groups, and executives. The Academy of Management Annals 3: 251-315. Gino F, Pierce L. 2009. Dishonesty in the name of equity. Psychological Science 20 (9): 1153- 1160. Gino F, Pierce L. 2010. Robin Hood under the hood: wealth-based discrimination in illicit customer help. Organization Science 21 (6): 1176-1194. Gladwell M. 2009. Cocksure: banks, battles, and the psychology of overconfidence. The New Yorker 27 July: 24. Goel AM, Thakor AV. 2008. Overconfidence, CEO selection, and corporate governance. Journal of Finance 63: 2737-2784, Gomez-Mejia LR. 1992. Structure and process of diversification, compensation strategy, and firm performance. Strategic Management Journal 13 (5): 381-397. Gomez-Mejia LR, Balkin DB. 1992. Compensation, organizational strategy and firm performance. Cincinnati: Southwestern. Greenberg J. 1988. Equity and workplace status: a field experiment. Journal of Applied Psychology 73: 606-613. Hamilton BH, Nickerson JA, Owan H. 2003. Team incentives and worker heterogeneity: an empirical analysis of the impact of teams on productivity and participation. 
Journal of Political Economy 111 (3): 465-497. Holmstrom B. 1979. Moral hazard and observability. Bell Journal of Economics 10: 74-91. Hossain T, List J. 2009. The behavioralist visits the factory: increasing productivity using simple framing manipulations. Working Paper No. 15623, National Bureau of Economic Research. Jensen MC, Meckling W. 1976. Theory of the firm: managerial behavior, agency costs, and ownership structure. Journal of Financial Economics 11 (4): 5-50. Jurgensen CE. 1978. Job preferences (what makes a job good or bad?). Journal of Applied Psychology 63: 267-76. Kahneman D, Tversky A. 1979. Prospect theory: an analysis of decision under risk. Econometrica XLVII: 263-291.Strategic Compensation 38 Kelly Services. 2010. Performance pay and profit sharing entice high-performance workers. Kelly Global Workforce Index. Kelly Services: Troy, MI. Kerr S. 1975. On the folly of rewarding A, while hoping for B. Academy of Management Journal 18(4): 769-783. Kim TY, Weber TJ, Leung K, Muramoto Y. 2009. Perceived fairness of pay: The importance of task versus maintenance inputs in Japay, South Korea, and Hong Kong. Management and Organization Review 6 (1): 31-54. Kole SR, Lehn K. 2000. Workforce integration and the dissipation of value in mergers – The case of USAir’s acquisition of Piedmont Aviation. In Kaplan S (ed.), Mergers and Productivity, University of Chicago Press: 239-279. Koszegi B, Rabin M. 2009. Reference-dependent consumption plans. American Economic Review 99 (3): 909-936. Kwon I, Meyersson-Milgrom E. 2009. Status, relative pay, and wage growth: Evidence from M&A. Unpublished Working Paper. Stanford University. Larkin I. 2007. The cost of high-powered incentives: employee gaming in enterprise software sales. Working paper, Harvard University, Cambridge, MA. Larkin I. 2008. Bargains-then-ripoffs: Innovation, pricing and lock-in in enterprise software. Working paper, Harvard University, Cambridge, MA. Larkin I, Leider S. 2011. Incentive Schemes, Sorting and Behavioral Biases of Employees: Experimental Evidence. Forthcoming, American Economic Journal: Applied Microeconomics. Lavy, V. 2009. Performance pay and teachers’ effort, productivity, and grading ethics. American Economic Review 99 (5): 1979-2011. Lawler EE. 2003. Pay practices in Fortune 1000 corporations. WorldatWork Journal 12(4): 45- 54. Lazear EP. 1986. Salaries and piece rates. Journal of Business 59 (3): 405-431. Lazear EP, Oyer P. 2011. Personnel economics: Hiring and incentives. in Ashenfelter O, Card D, editors: Handbook of Labor Economics, Vol 4b, Great Britain, North Holland, 2011, pp. 1769-1823. Malmendier U, Tate G. 2005. CEO overconfidence and corporate investment. Journal of Finance 60: 2660-2700. Mas A. 2008. Labor unrest and the quality of production: Evidence from the construction equipment resale market. Review of Economic Studies 75: 229-258. Mas A, Moretti E. 2009. Peers at work. Forthcoming in American Economic Review. Milkovich GT, Anderson PH. Management compensation and secrecy policies. Personnel Pscyhology 25(2): 293-302. Moore, DA. 2007. Not so above average after all: When people believe they are worse than average and its implications for theories of bias in social comparison. Organizational Behavior and Human Decision Processes 102(1): 42-58. Moore DA, Healy PJ. 2008. The trouble with overconfidence. Psychological Review 115 (2): 502-517. Moore DA, Kim TG. 2003. Myopic social prediction and the solo comparison effect. Journal of Personality and Social Psychology 85(6): 1121-1135. 
Nelson RR. 1991. Why do firms differ, and how does it matter? Strategic Management Journal 12 (S2): 61-74.Strategic Compensation 39 Nickerson JA, Zenger, TR. 2008. Envy, comparison costs, and the economic theory of the firm. Strategic Management Journal 29(13): 1429-1449. Nosenzo D. 2010. The impact of pay comparisons on effort behavior. CeDEx Discussion Paper n.2010-03, Centre for Decision Research and Experimental Economics at the University of Nottingham, Nottingham, U.K. Oliver RL, Swan JE. 1989. Consumer perceptions of interpersonal equity in transactions: A field survey approach. The Journal of Marketing 53 (2): 21-35. Oyer P. 1998. Fiscal year ends and non-linear incentive contracts: the effect on business seasonality. Quarterly Journal of Economics 113: 149-185. Paarsch H, Shearer B. 2000. Piece rates, fixed wages, and incentive effects: statistical evidence from payroll records. International Economic Review 41: 59-92. Porter M. 1996. What is strategy? Harvard Business Review 74 (6): 61-78. Prendergast C. 1999. The provision of incentives in firms. Journal of Economic Literature 37 (1): 7-63. Pruitt DG, Kimmel M J. 1977. Twenty years of experimental gaming: critique, synthesis, and suggestions for the future. Annual Review of Psychology 28: 363-392. Rabin M. 1996. In American Economists of the Late Twentieth Century, Kahneman D, Tversky A. Edward Elgar Publishing Ltd.: Cheltehem, UK; 111-137. Rumelt RP, Schendel DE, Teece DJ. 1994. Fundamental Issues in Strategy: A Research Agenda. Harvard Business School Press: Boston. Russo J E, Schoemaker PJH. 1991. Decision Traps. Simon & Schuster: New York. Rynes SL, Gerhart B, Minete A. 2004. The importance of pay in employee motivation: What people say and what they do. Human Resource Management 43 (4): 381-394. Schwarzwald J, Koslowsky M., Shalit B. 1992. A field study of employees' attitudes and behaviors after promotion decisions. Journal of Applied Psychology 77: 511-514. Schweitzer ME, Ordóñez L, Douma B. 2004. Goal setting as a motivator of unethical behavior. Academy of Management Journal 47 (3): 422-432. Siegel P, Schraeder M, Morrison R. 2008. A taxonomy of equity factors. Journal of Applied Social Psychology 38 (1): 61-75. Sink E. 2006. Eric Sink on the Business of Software. Apress: New York. Solow, RM. 1979. Another possible source of wage stickiness. Journal of Macroeconomics 1(1): 79-82. Svenson O. 1981. Are we all less risky and more skillful than our fellow drivers? Acta Psychologica 47: 143-48. Taylor SE, Brown JD. 1988. Illusion and well-being: a social psychological perspective on mental health. Psychological Bulletin 103: 193-210. Tversky A, Kahneman D. 1991. Loss aversion in riskless choice: a reference dependent model. Quarterly Journal of Economics 106: 1039-1061. Tversky A, Kahneman D. 1992. Advances in prospect theory: cumulative representation of uncertainty. Journal of Risk and Uncertainty 5(4): 297-323. Vallone RP, Griffin DW, Lin S, Ross L. 1990. Overconfident prediction of future actions and outcomes by self and others. Journal of Personality and Social Psychology 58: 568-581. Van den Steen E. 2005. Organizational beliefs and managerial vision. Journal of Law, Economics, and Organization 21 (1): 256-282, Vandermay A. 2009. MBA pay: riches for some, not all. Bloomberg Business Week, Sept. 28.Strategic Compensation 40 Wee G. 2010. Harvard endowment chief Mendillo paid almost $1 million in 2008. Business Week, May 18. Weinstein ND. 1980. Unrealistic optimism about future life events. 
Journal of Personality and Social Psychology 39: 806-820. Whittlesey F. 2006. The great overpaid CEO debate. CNET News, June 1. Williamson OE. 1985. The Economic Institutions of Capitalism. Free Press: New York. Wowak AJ, Hambrick DC. 2010. A model of person-pay interaction: How executives vary in their responses to compensation arrangements. Strategic Management Journal 31: 803-821. Young V. 2010. Missouri pension system to stop giving bonuses. St. Louis Post Dispatch, Jan. 22. Zenger TR. 1992. Why do employers only reward extreme performance? Examining the relationship among performance, pay, and turnover. Administrative Science Quarterly 37: 198-219.

Figures

Figure 1: Agency Theory Framework
Figure 2: Compensation Predictions from Agency Theory (With No Task Coordination Benefits)
Figure 3: Compensation Predictions from Agency Theory (With Task Coordination Benefits and Imperfect Observability of Individual Output)
Figure 4: Insights from Psychology and Decision Research on the Agency Theory Framework
Figure 5: Compensation Implications of Social Comparison
Figure 6: Compensation Implications of Overconfidence
To Groupon or Not to Groupon: The Profitability of Deep Discounts
Benjamin Edelman, Sonia Jaffe, Scott Duke Kominers
Harvard Business School Working Paper 11-063
June 16, 2011

Abstract

We examine the profitability and implications of online discount vouchers, a new marketing tool that offers consumers large discounts when they prepay for participating merchants' goods and services. Within a model of repeat experience good purchase, we examine two mechanisms by which a discount voucher service can benefit affiliated merchants: price discrimination and advertising. For vouchers to provide successful price discrimination, the valuations of consumers who have access to vouchers must systematically differ from, and typically be lower than, those of consumers who do not have access to vouchers. Offering vouchers is more profitable for merchants which are patient or relatively unknown, and for merchants with low marginal costs. Extensions to our model accommodate the possibilities of multiple voucher purchases and merchant price re-optimization.

Keywords: voucher discounts, Groupon, experience goods, repeat purchase.

The authors appreciate the helpful comments and suggestions of Peter Coles, Clayton Featherstone, Alvin Roth, and participants in the Harvard Workshop on Research in Behavior in Games and Markets. Kominers gratefully acknowledges the support of a National Science Foundation Graduate Research Fellowship, a Yahoo! Key Scientific Challenges Program Fellowship, and a Terence M. Considine Fellowship in Law and Economics funded by the John M. Olin Center. Author affiliations: Benjamin Edelman, Harvard Business School (bedelman@hbs.edu); Sonia Jaffe, Department of Economics, Harvard University (sjaffe@fas.harvard.edu); Scott Duke Kominers, Department of Economics, Harvard University, and Harvard Business School (skominers@hbs.edu).

1 Introduction

A variety of web sites now sell discount vouchers for services as diverse as restaurants, skydiving, and museum visits. To consumers, discount vouchers promise substantial savings, often 50% or more. To merchants, discount vouchers offer opportunities for price discrimination as well as exposure to new customers and online "buzz." Best known among voucher vendors is Chicago-based Groupon, a two-year-old startup that purportedly rejected a $6 billion acquisition offer from Google (Surowiecki (2010)) in favor of an IPO at yet-higher valuation. Meanwhile, hundreds of websites offer discount schemes similar to that of Groupon.[1]

The rise of discount vouchers presents many intriguing questions: Who is liable if a merchant goes bankrupt after issuing vouchers but before performing its service? What happens if a merchant simply refuses to provide the promised service? Since vouchers entail prepayment of funds by consumers, do buyers enjoy the consumer protections many states provide for gift certificates (such as delayed expiration and the right to a cash refund when value is substantially used)? Must consumers using vouchers remit tax on merchants' ordinary menu prices, or is tax due only on the voucher-adjusted prices consumers actually pay? What prevents consumers from printing multiple copies of a discount voucher and redeeming those copies repeatedly?

To merchants considering whether to offer discount vouchers, the most important question is the basic economics of the offer: Can providing large voucher discounts actually be profitable?
Voucher discounts are worthwhile if they predominantly attract new customers who regularly return, paying full price on future visits. But if vouchers prompt many long-time customers to use discounts, offering vouchers could reduce profits. For most merchants, the effects of offering vouchers lie between these extremes: vouchers bring in some new customers, but also provide discounts to some regular customers. In this paper, we offer a model to explore how consumer demographics and offer details interact to shape the profitability of voucher discounts.

We illustrate two mechanisms by which a discount voucher service can benefit affiliated merchants. First, discount vouchers can facilitate price discrimination, allowing merchants to offer distinct prices to different consumer populations. In order for voucher offers to yield profitable price discrimination, the consumers who are offered the voucher discounts must be more price-sensitive (with regard to participating merchants' goods or services) than the population as a whole. Second, discount vouchers can benefit merchants through advertising, by informing consumers of a merchant's existence. For these advertising effects to be important, a merchant must begin with sufficiently low recognition among prospective consumers.

The remainder of this paper is organized as follows. We review the related literature in Section 2. We present our model of voucher discounts in Section 3, exploring price discrimination and advertising effects. In Section 4, we extend our model to consider the possibility of consumers purchasing multiple vouchers and of merchants adjusting prices in anticipation of voucher usage. Finally, in Section 5, we discuss implications of our results for merchants and voucher services.

[1] Seeing these many sites, several companies now offer voucher aggregation. Yipit, one such company, tracked over 400 different discount voucher services as of June 2011.

2 Related Literature

The recent proliferation of voucher discount services has garnered substantial press: a multitude of newspaper articles and blog posts, and even a short feature in The New Yorker (Surowiecki (2010)). However, voucher discounts have received little attention in the academic literature.

The limited academic work on online voucher discounts is predominantly empirical. Dholakia (2011) surveys businesses that offered Groupon discounts.[2] Echoing sentiments expressed in the popular press,[3] Dholakia (2011) finds mixed empirical results: some business owners speak glowingly of Groupon, while others regret their voucher promotions. Byers et al. (2011) develop a data set of Groupon deal purchases, and use this data to estimate Groupon's deal-provision strategy. To the best of our knowledge, the only other theoretical work on voucher discounting is that of Arabshahi (2011), which considers vouchers from the perspective of the voucher service, whereas we operate from the perspective of participating merchants.

Unlike the other academic work on voucher discounting, we (1) seek to understand voucher discount economics on a theoretical level, and (2) focus on the decision problem of the merchant, rather than that of the voucher service provider. Our results indicate that voucher discounts are naturally good fits for certain types of merchants, and poor fits for others; these theoretical observations can help us interpret the range of reactions to Groupon and similar services.
Although there is little academic work on voucher discounts, a well-established literature explores the advertising and pricing of experience goods, i.e., goods for which some characteristics cannot be observed prior to consumption (Nelson 1970, 1974). The parsimonious framework of Bils (1989), upon which we base our model, studies how prices of experience goods respond to shifts in demand. Bils (1989) assumes that consumers know their conditional valuations for a firm's goods, but do not know whether that firm's goods "fit" until they have tried them.[4] Analyzing overlapping consumer generations, Bils (1989) measures the tradeoff between attracting more first-time consumers and extracting surplus from returning consumers.

Meanwhile, much of the work on experience goods concerns issues of information asymmetry: if a merchant's quality is unknown to consumers but known to the merchant, then advertising (Nelson 1974; Milgrom and Roberts 1986), introductory offers (Shapiro 1983; Milgrom and Roberts 1986; Bagwell 1990), or high initial pricing (Bagwell and Riordan 1991; Judd and Riordan 1994) can provide signals of quality. Of this literature, the closest to our subject is the work on introductory offers. Voucher discounts, a form of discounted initial pricing, may encourage consumers to try experience goods they otherwise would have ignored. However, we identify this effect in a setting without asymmetric information regarding merchant quality; consumer heterogeneity, not information asymmetries, drives our main results.[5] Additionally, our work differs from the classical literature on the advertisement of experience goods, as advertising in our setting serves the purpose of awareness, rather than signaling.[6]

A substantial literature has observed that selective discounting provides opportunities for price discrimination. In the settings of Varian (1980), Jeuland and Narasimhan (1985), and Narasimhan (1988), for example, merchants engage in promotional pricing in order to attract larger market segments.[7] Similar work illustrates how promotions may draw new customers (Blattberg and Neslin 1990; Lewis 2006), and lead those customers to become relational customers (Dholakia 2006). These results have been found to motivate the use of coupons (Neslin 1990), especially cents-off coupons (Cremer 1984; Narasimhan 1984). We harness the insights of the literature on sale-driven price discrimination to analyze voucher discounting, a new "sale" technology. Like the price-theoretic literature which precedes our work, we find that price discrimination depends crucially upon the presence of significant consumer heterogeneity.

Our work also importantly differs from antecedents in that the prior literature, including the articles discussed above, has considered only marginal pricing decisions. In particular, the previous work on experience goods and price discrimination does not consider deep discounts of the magnitudes now offered by voucher services.

[2] In a related case study, Dholakia and Tsabar (2011) track a startup's Groupon experience in detail.
[3] For example, Overly (2010) reports on Washington merchants' mixed reactions to the LivingSocial voucher service.
[4] Firms know the distribution of consumer valuations and the (common) probability of fit.
[5] Of course, our treatment of advertising includes a very coarse informational asymmetry: some consumers are simply not aware of the merchant's existence. However, conditional upon learning of the merchant, consumers in our model receive more information than the merchant does about their valuations. This is in sharp contrast to much of the previous work on experience goods, in which merchants can in principle exploit private quality information in order to lead consumers to purchase undesirable (or undesirably costly) products (e.g., Shapiro 1983; Bagwell 1987).
[6] In the classical theory of experience goods, advertising serves a "burning money" role. Merchants with high-quality products can afford to advertise more than those with low-quality products can, as consumers recognize this fact in equilibrium and flock to merchants who advertise heavily (Nelson 1974; Milgrom and Roberts 1986). In our model, advertising instead serves to inform consumers of a merchant's existence; these announcements are a central feature of the service voucher vendors promise.
[7] In other models, heterogeneity in consumer search costs (e.g., Salop and Stiglitz 1977) or reservation values (e.g., Sobel 1984) motivate sales. Bergemann and Valimaki (2006) study the pricing paths of "mass-market" and "niche" experience goods, finding that initial sales are essential in niche markets to guarantee traffic from new buyers.

3 Model

Offering a voucher through Groupon has two potential advantages: price discrimination and advertising. We present a simple model in which a continuum of consumers have the opportunity to buy products from a single firm.
The consumers are drawn from two populations, one of which can be targeted by voucher discount offers. First, in Section 3.1, we consider the case in which all consumers are aware of the firm and vouchers serve only to facilitate price discrimination. Then, in Section 3.2, we introduce advertising effects. We present comparative statics in Section 3.3.

Our model has two periods, and the firm ex ante commits to a price p for both periods. The firm and consumers share a common discount factor $\delta$. Following the setup of Bils (1989), consumers share a common probability r that the firm's product is a "fit." Conditional on fit, the valuation of a consumer i for the firm's offering is $v_i$. A consumer i purchases in the first period if either the single-period value, $rv_i - p$, or the expected discounted future value, $rv_i - p + \delta r (v_i - p)$, is positive, i.e., if

$$\max\{\, rv_i - p,\; rv_i - p + \delta r (v_i - p) \,\} \geq 0.$$

For $\delta > 0$, there is an informational value to visiting in the first period: if a consumer learns that the firm's product is a fit, then the consumer knows to return. As a result (rearranging $rv_i - p + \delta r (v_i - p) \geq 0$), all consumers with values at least

$$\bar{v}(p) \equiv \frac{1 + \delta r}{r + \delta r}\, p$$

purchase in the first period.

To consider the effects of offering discounts to a subset of consumers, we assume there are two distinct consumer populations. Proportion $\lambda$ of consumers have valuations drawn from a distribution with cumulative distribution function G, while proportion $1 - \lambda$ have valuations drawn from a distribution with cumulative distribution function F. We denote by $V \equiv \mathrm{supp}(F) \cup \mathrm{supp}(G)$ the set of possible consumer valuations. We assume that $G(v) \geq F(v)$ for all $v \in V$, i.e., that the valuations of consumers in the G population are systematically lower than those of consumers in the F population.

The firm faces demand $\lambda\,(1 - G(\bar{v}(p))) + (1 - \lambda)(1 - F(\bar{v}(p)))$ in the first period, and fraction r of those consumers return in the second period. The firm maximizes profits given by

$$\pi(p) \equiv (1 + \delta r)\bigl[\lambda\,(1 - G(\bar{v}(p))) + (1 - \lambda)(1 - F(\bar{v}(p)))\bigr](p - c),$$

where c is the firm's marginal cost. The first-order condition of the firm's optimization problem is

$$\lambda\,(1 - G(\bar{v}^*)) + (1 - \lambda)(1 - F(\bar{v}^*)) - \frac{1 + \delta r}{r + \delta r}\,(p^* - c)\bigl[\lambda\, g(\bar{v}^*) + (1 - \lambda) f(\bar{v}^*)\bigr] = 0 \qquad (1)$$

where $p^*$ is the optimal price and $\bar{v}^* \equiv \bar{v}(p^*)$.
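To make the pre-voucher pricing problem concrete, the following is a minimal numerical sketch, not part of the paper, that maximizes the profit function above for example inputs. The uniform valuation distributions, the parameter values, and the helper names `v_bar` and `profit` are illustrative assumptions rather than quantities taken from the model.

```python
import numpy as np

# Illustrative (assumed) primitives: uniform valuation CDFs for the two
# populations, F on [0, 1] (higher valuations) and G on [0, 0.6] (lower
# valuations), with population share lam drawn from G.
lam, r, delta, c = 0.3, 0.5, 0.9, 0.1
F = lambda v: np.clip(v / 1.0, 0.0, 1.0)
G = lambda v: np.clip(v / 0.6, 0.0, 1.0)

def v_bar(p):
    """First-period purchase threshold: (1 + delta*r) / (r + delta*r) * p."""
    return (1.0 + delta * r) / (r + delta * r) * p

def profit(p):
    """pi(p) = (1 + delta*r) * [lam(1-G(v_bar)) + (1-lam)(1-F(v_bar))] * (p - c)."""
    demand = lam * (1.0 - G(v_bar(p))) + (1.0 - lam) * (1.0 - F(v_bar(p)))
    return (1.0 + delta * r) * demand * (p - c)

# Grid search for the profit-maximizing pre-voucher price p*; with
# single-peaked profits (as the paper assumes) the maximizer is unique.
prices = np.linspace(c, 1.0, 10_001)
profits = profit(prices)
p_star = prices[np.argmax(profits)]
print(f"p* ~= {p_star:.3f}, pi(p*) ~= {profits.max():.3f}, v_bar(p*) ~= {v_bar(p_star):.3f}")
```

Under these example inputs the search returns an interior optimum; any other valuation distributions satisfying $G(v) \geq F(v)$ could be substituted in the same way.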
We assume that the distribution of consumers is such that profits are single-peaked, so that $p^*$ is uniquely defined.

3.1 Discount Vouchers

After setting its optimal price $p^*$, the firm is given the opportunity to offer a discount voucher.[8] Only a fraction of consumers in the G population have access to the discount

[8] For now, we assume the firm did not consider the possibility of a voucher when setting its price. In Section 4.2, we consider the possibility of re-optimization.
James E. Austin and M. May Seitanidi

Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.

Social Enterprise Series No. 32: Value Creation in Business – Nonprofit Collaborations
James E. Austin
M. May Seitanidi
Working Paper 12-019
September 26, 2011

VALUE CREATION IN BUSINESS – NONPROFIT COLLABORATIONS

James E. Austin, Eliot I. Snider and Family Professor of Business Administration, Emeritus, Harvard Business School
M. May Seitanidi, Senior Lecturer in CSR, Director of the Centre for Organisational Ethics, Hull University Business School, University of Hull-UK

PURPOSE & CONTENT

This focused review of theoretical and empirical research findings in the corporate social responsibility (CSR) and business-nonprofit collaboration literature aims to develop an analytical framework for and a deeper understanding of the interactions between nonprofit organizations and businesses that contribute to the co-creation of value. Our research question is: How can collaboration between businesses and NPOs most effectively co-create significant economic and social value, including environmental value, for society, organizations, and individuals? More specifically, we will:
- elaborate a Collaborative Value Creation (CVC) framework for analyzing social partnerships between businesses and nonprofits;
- review how the evolving CSR literature has dealt with value creation and collaboration;
- analyze how collaborative value creation occurs across different stages and types of collaborative relationships: philanthropic, transactional, integrative, transformational;
- examine the nature of value creation processes in collaboration formation and implementation and the resultant outcomes for the societal [macro], organizational [meso], and individual [micro] levels;
- identify knowledge gaps and research needs.

IMPORTANCE OF THE COLLABORATION PHENOMENON

The growing magnitude and complexity of socioeconomic problems facing societies throughout the world transcend the capacities of individual organizations and sectors to deal with them. As Visser (2011, p. 5) stated, "Being responsible also does not mean doing it all ourselves. Responsibility is a form of sharing, a way of recognizing that we're all in this together. 'Sole responsibility' is an oxymoron." Cross-sector partnering, and in particular collaboration between businesses and NPOs, has increased significantly and is viewed by academics and by business and nonprofit practitioners as an inescapable and powerful vehicle for implementing CSR and for achieving social and economic missions. Our starting premise is that creating value for collaborators and society is the central justification for such cross-sector partnering, and closer scrutiny and greater knowledge of the processes for and extent of value creation in general and co-creation more specifically are required for needed theoretical advancement and practitioner guidance.

ANALYTICAL FRAMEWORK: COLLABORATIVE VALUE CREATION

The CVC Framework is a conceptual and analytical vehicle for viewing more clearly and understanding more systematically the phenomenon of value creation through collaboration (Austin, 2010).
We define collaborative value as the transitory and enduring benefits relative to the costs that are generated due to the interaction of the collaborators and that accrue to the organizations, individuals, and society. Thus, the focus is on the value creating processes of and results from partnering, in this case, between businesses and nonprofits. There are two main types of value, economic and social (including environmental), but to examine more thoroughly value creation within the collaboration context the Framework elaborates further dimensions. The four components of the Framework are: the Value Creation Spectrum, Collaboration Stages, Partnering Processes, and Collaboration Outcomes. Each component provides a different window through which to examine the co-creation process. We will elaborate the Value Creation Spectrum as it is a new conceptualization and is a reference point for the other three components that have received attention in the literature and will only be briefly described here and expanded on in their subsequent respective sections. CVC Component I: Value Creation Spectrum Within the construct of collaboration, value can be created by the independent actions of one of the partners, which we label as “sole creation,” or it can be created by the conjoined actions of the partners, which we label as “co-creation”. While there is always some level of interaction within a collaborative arrangement, the degree and form can vary greatly and this carries significant implications for value creation. To provide a richer understanding of the multiple dimensions of social and economic value, the Framework posits four potential sources of value and identifies four types of collaboration value that reflect different ways in which benefits arise. Our overall hypothesis is that greater value is created at the meso, micro, and macro levels as collaboration moves across the Value Creation Spectrum from sole creation toward co-creation. The four sources of value are: Resource Complementarity – The Resource Dependency literature stresses that a fundamental basis for collaboration is obtaining access to needed resources that are different than those it possesses. However, the realization of the potential value of resource complementarity is dependent on achieving organizational fit. The multitude of sectoral differences between businesses and nonprofits are simultaneously impediments to collaboration and sources of value creation. Organizational fit helps overcome barriers and enable collaboration. We hypothesize that greater the resource complementarity and the closer the organizational fit between the partners, the greater the potential for co-creation of value. Resource Type – The partners can contribute to the collaboration either generic assets, i.e., those that any company has, e.g., money, or any nonprofit, e.g., a positive reputation; or, they can mobilize and leverage more valuable organization-specific assets, such as, knowledge, capabilities, infrastructure, and relationships, i.e., those assets key to the organization’s success. We hypothesize that the more an organization mobilizes for the collaboration its distinctive competencies, the greater the potential for value creation.3 Resource Directionality and Use – Beyond the type of the resources brought to the partnership is the issue of how they are used. 
The resource flow can be largely unilateral, coming primarily from one of the partners, or it could be a bilateral and reciprocal exchange between the partners, or it could be a conjoined intermingling of their resources. Parallel but separate inputs or exchanges can each create value, but combining complementary and distinctive resources to produce a new service or activity that neither organization could have created alone or in parallel co-creates new value. The most leveraged form of these resource combinations produces economic and social innovations. We hypothesize that the more the partners integrate their key resources into distinctive combinations, the greater the potential for value creation. Linked Interests – Although collaboration motivations are often a mixture of altruism and utilitarianism, self-interest – organizational or individual – is a powerful shaper of behaviour. Unlike single sector partnerships, collaborators in cross-sector alliances may have distinct objective functions; there is often no common currency or price with which to assess value. The value is dependent on its particular utility to the recipient. Therefore, it is essential to understand clearly how partners view value – both benefits and costs- and to reconcile any divergent value creation frames. The collaborators must perceive that the value exchange -their respective shares of the co-created value- is fair, otherwise, the motivation for continuing the collaboration erodes. We hypothesize that the more collaborators perceive their self- interests as linked to the value they create for each other and for the larger social good and the greater the perceived fairness in the sharing of that value, the greater the potential for co-creating synergistic economic and social value. The combinations of the above value sources produce the following four different types of value in varying degrees: “Associational Value” is a derived benefit accruing to another partner simply from having a collaborative relationship with the other organization. For example, one global survey of public attitudes revealed that over 2/3 of the respondents agreed with the statement “My respect for a company would go up if it partnered with an NGO to help solve social problems.”(GlobeScan, 2003) “Transferred Resource Value” is the benefit derived by a partner from the receipt of an asset from the other partner. The significance of the value will depend on the nature of the assets transferred and how they are used. Some assets are depreciable, for example, a cash or product donation gets used up, and other assets are durable, for example, a new skill learned from a partner becomes an on-going improvement in capability. In either case, once the asset is transferred, to remain an attractive on-going value proposition the partnership needs to repeat the transfer of more or different assets that are perceived as valuable by the receiving partner. In effect, value renewal is essential to longevity. “Interaction Value” is the benefits that derive from the processes of interacting with one’s partner. It is the actual working together that produces benefits in the form of intangibles. Co-creating value both requires and produces intangibles. In effect, these special assets are both enablers of and benefits from the collaborative value creation process. 
Intangibles are a form of economic and social value and include, e.g., reputation, trust, relational capital, learning, knowledge, joint problem-solving, communication, coordination, transparency, accountability, and conflict resolution. “Synergistic Value” arises from the underlying premise of all collaborations that combining partners’ resources enables them to accomplish more together than they could have separately. Our more4 specific focus is the recognition that the collaborative creation of social value can generate economic value and vice versa, either sequentially or simultaneously. Innovation, as an outcome of the synergistic value creation is one of perhaps the highest forms of value creation because it produces a completely new form of change due to the combination of the collaborators’ distinctive assets, thereby holding the potential for significant organizational and systemic advancement at the micro, meso, and macro levels. There is a virtuous value circle. Kanter (1983, p. 20) states that all innovations require change, associated with the disruption of pre-existing routines and defining innovation as “the generation, acceptance, and implementation of new ideas, processes, products, or services”. CVC Component II: Relationship Stages Value creation is a dynamic process that changes as the relationship between partners evolves. To describe the changing nature of the collaborative relationship across the spectrum we draw on Austin’s Collaboration Continuum with its three relationship categories of philanthropic, transactional, and integrative (Austin, 2000a; 2000b), and we add a fourth stage – transformational. Within each stage there can exist different types of collaboration with varying value creation processes. We hypothesize that as the relationship moves toward the integrative and transformational stages, the greater the potential for co-creation of value, particularly societal value. CVC Component III: Partnering Processes The realization of the potential collaborative value depends on the partnering processes that occur during the formation, selection, and implementation phases. It is these processes that tap the four sources of value and produce the four forms of value. The dynamic nature of social problems (McCann, 1983) on one hand and the complexities of partnership implementation on the other can result in a multitude of problems including early termination and hence inability to materialize their potential by providing solutions to social problems. Understanding the formation and implementation process in partnerships is important in order to overcome value creation difficulties during the implementation stage (Seitanidi & Crane, 2009) but also to unpack the process of co-creation of synergistic value. CVC Component IV: Partnering Outcomes The focus in this element of the framework is on who benefits from the collaboration. Collaborations generate value at multiple levels –meso, micro, and macro-often simultaneously. For our purpose of examining value, we distinguish two loci: within the collaboration and external to it. Internally, we examine value accruing at the meso and micro levels for the partnering organizations and the individuals within those organizations. Externally, we focus on the macro or societal level where social welfare is improved by the collaboration in the form of benefits at the micro (to individual recipients), meso (other organizations), and macro (systemic changes) levels. 
The benefits accruing to the partnering organizations and their individuals internal to the collaboration are ultimately largely due to the value created external to the social alliance. Ironically, while societal betterment is the fundamental5 justification for cross-sector collaborative value creation, this is the value dimension that is least thoroughly dealt with in the literature and in practice. CSR & VALUE CREATION As a precursor to our examination of collaborative value creation, it is relevant to examine how the evolving CSR literature has positioned value creation and collaboration with nonprofits. Corporate Social Responsibility can be defined as discretionary business actions aimed at increasing social welfare, but CSR has been in a state of conceptual evolution for decades (Bowen, 1953; Carroll, 2006). This is reflected in the variety of additional labels that have emerged, such as Corporate Social Performance, Corporate Citizenship, Triple Bottom Line, and Sustainability that incorporated environmental concerns (Elkington, 1997; 2004). The bibliometric analysis of three decades of CSR research by de Bakker, Groenewegen and den Hond (2005), which builds on earlier reviews of the literature (Rowley & Berman, 2000; Carroll, 1999; Gerde & Wokutch, 1998; Griffen & Mahon, 1997), provides a comprehensive view of the evolving theoretical, prescriptive, and descriptive work in this field. Garriga and Melé (2004) categorize CSR theories and approaches into four categories: instrumental, political, integrative, and ethical. These and other more recent CSR reviewers (Lockett, Moon, & Visser, 2006; Googins, Mirvis & Rochlin, 2007; Egri & Ralston, 2008) conclude that CSR is deeply established as a field of study and practice but still lacks definitional and theoretical consensus. The field continues to evolve conceptually and in implementation. Our purpose is not to add yet another general review of the CSR literature but rather to focus on the following five central themes that emerged from the literature review on CSR and how it has dealt with collaborative value creation: Primacy of Business Value vs. Stakeholder Approach, Empirical Emphasis, Evolving Practice and Motivations, Integration of Economic and Social Value, and CSR Stages. Primacy of Business Value vs. Stakeholder Approach The most referenced anchor argument against CSR is that set forth by Friedman that pitted social actions and their moral justifications by managers as contrary to the primary function of generating profits and returns to shareholders. His stated position is: “there is one and only one social responsibility of business - to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud.” (Friedman, 1962; 1970) The intellectual current flowing against this argument of incompatibility of social and business value came from the broadening conceptualization of relevant stakeholders beyond investors to include consumers (Green & Peloza, 2011), employees, communities, governments, the environment, among others (Freeman, 1984; Neville & Menguc, 2006). This approach also opened the relational door for nonprofits as a type of stakeholder from communities or civil society. 
While for some academics this theory placed stakeholders as alternative claimants on company value (wealth redistribution), embedded in this approach was the argument that attending to stakeholders other than just investors was not incompatible with profitability but rather contributed to it through a variety of ways. Various researchers stressed the instrumental value of stakeholder engagement (Donaldson & Preston, 1995;6 Jones & Wicks, 1999; Freeman, 1999). In effect, creating social value – benefits to other stakeholders - produced business value, such as, better risk management; enhanced reputation, legitimacy and license to operate; improved employee recruitment, motivation, retention, skill development, and productivity; consumer preference and loyalty; product innovation and market development; preferential regulatory treatment (Makower,1994; Burke & Logsdon, 1996; Googins, Mirvis & Rochlin 2007). This is what we have labelled in the CVC Framework “Synergistic Value Creation.” Jensen (2002), a pioneering thinker on agency theory, recognized that “we cannot maximize the long- term market value of an organization if we ignore or mistreat any important constituency,” but he also specified that under “enlightened value maximization” “managers can choose among competing stakeholder demands by”…spending “an additional dollar on any constituency to the extent that the long-term value added to the firm from such expenditure is a dollar or more.” Jensen adds, “enlightened stakeholder theorists can see that although stockholders are not some special constituency that ranks above all others, long-term stock value is an important determinant…of total long-term firm value. They would see that value creation gives management a way to assess the tradeoffs that must be made among competing constituencies, and that it allows for principled decision making independent of the personal preferences of managers and directors.” Recognizing the complexity of value measurement, Jensen notes that “none of the above arguments depend on value being easily observable. Nor do they depend on perfect knowledge of the effects on value of decisions regarding any of a firm's constituencies” (Jensen, 2002). This approach to value creation and assessment through CSR and stakeholder interaction is primarily instrumental (Jones, 1995; Hill & Jones, 1985). Even though there has been this broadening view of the business benefits derived from benefitting other stakeholders, Halal (2001, p. 28) asserts that “corporations still favour financial interests rather than the balanced treatment of current stakeholder theory". Margolis and Walsh (2003, p. 282) express the concern that “if corporate responses to social misery are evaluated only in terms of their instrumental benefits for the firm and its shareholders, we never learn about their impact on society, most notably on the intended beneficiaries of these initiatives.” Empirical Emphasis: Corporate Social Performance & Corporate Financial Performance The emergence of the asserted “Business Case” for CSR (Makower, 1994) led to a stream of research aimed at empirically testing whether in the aggregate Corporate Social Performance (CSP) contributed positively or negatively to Corporate Financial Performance (CFP), i.e., the link between social value and economic value (Margolis & Walsh, 2003). 
While this literature over the decades yielded ambiguous and conflicting conclusions, the most recent and comprehensive meta-analysis of 52 studies with a sample size of 33,878 observations by Orlitzky, Schmidt and Rynes (2003) found a positive association. Barnett (2007) asserts that assessing the business case for CSR must recognize that financial results are dependent on the specific historical relationship pathways between companies and their stakeholders, and thus will vary across firms and time. The special capabilities of a firm “to identify, act on, and profit from opportunities to improve stakeholder relationships through CSR” (Barnett, 2007, p. 803) and the perceptions and responses of stakeholders, including consumers (Schuler & Cording, 2006), to new CSR7 actions produce unique value outcomes. Looking at the macro level of value creation, Barnett (2007, p. 805) also adds: “ ‘Does CSR improve social welfare?’ Oddly enough, this question is seldom asked or answered.’ ” This consolidated view of CSR does not disaggregate the value contributed from collaborative activities in particular, but it is important in moving the debate from the “should we” to the “how” and “so what” perspectives, which is where collaborations enter the socio-economic value equation. As Margolis and Walsh (2003, p. 238) put it: “the work leaves unexplored questions about what it is firms are actually doing in response to social misery and what effects corporate actions have, not only on the bottom line but also on society.” However, they also state that examples of partnering with nonprofits abound and are increasing and “may be the option of choice when the firm has something to give and gain from others when it makes its social investments” (p. 289). Andrioff and Waddock (2002, p. 42) stress the mutual dependency in their definition: “Stakeholder engagements and partnerships are defined as trust- based collaboration between individuals and/or social institutions with different objectives that can only be achieved together.” Finn (1996) emphasizes how stakeholder strategies can create collaborative advantage. Evolving Practice & Multiple Motivations Even in advance of the researchers’ empirical validation, practitioners perceived value in CSR and broadly and increasingly have been taking actions to implement it, although the degree and form vary across firms and over time. Recent surveys of more than a thousand executives by Boston College’s Center for Corporate Community Relations revealed that over 60% saw “as very important that their company treat workers fairly and well, protect consumers and the environment, improve conditions in communities, and, in larger companies, attend to ethical operation of their supply chain” (Googins, Mirvis & Rochlin, 2007, p. 22). Research exploring the motivations behind this increased practice suggests that it is not entirely instrumental, but rather is a varying mix of altruism (“doing the right thing”) and utilitarianism (Galaskiewicz, 1997; Donnelly, 2001; Austin, Reficco, Berger, Fischer, Gutierrez, Koljatic, Lozano, Ogliastri & SEKN team, 2004; Goodpaster & Matthews, 1982). 
Aguilera, Rupp, Williams and Ganapathi (2007) present an integrative theoretical model that contends that “organizations are pressured to engage in CSR by many different actors, each driven by instrumental, relational, and moral motives.” Among these actors are nonprofit organizations acting as societal watchdogs to counter adverse business practices and agitate for positive corporate social actions, which we elaborate on in a subsequent section. Marquis, Glynn and Davis (2007) point to institutional pressures at the community level as key shapers of the nature and level of corporations’ social actions. Campbell (2007) also stresses contextual factors but emphasizes economic and competitive conditions as the determiners of CSR, but with the effects being mediated by actions of stakeholders. Some have asserted that societies’ growing expectations (GlobeScan, 2005) that business should assume a more significant responsibility for solving social problems have created a “new standard of corporate performance-one that encompasses both moral and financial dimensions” (Paine, 2003). The argument is that values – personal and corporate – have intrinsic and social worth but are also a source of economic value for the company. Martin (2002) asserts that the potential for value creation is greater when the motivation is intrinsic rather than instrumental.8 Integrating Economic and Social Value This movement toward a merged value construct has most recently been extended into a repositioning of the very purpose of corporations and capitalism. Porter and Kramer (2011), while putting forth the same premise of producing economic and social value previously discussed extensively in the literature and referred to in our CVC Framework as “Synergistic Value Creation”, give emphasis to making this central to corporate purpose, strategy, and operations. It is asserted that such an approach will stimulate and expand business and social innovation and value as well as restore credibility in business, in effect, reversing the Friedman position of Thou shalt not! to Thou must! Walsh, Weber and Margolis (2003) also signalled the growing importance of double value: “Attending to social welfare may soon match economic performance as a condition for securing resources and legitimacy.” Growing investor interest in social along with economic returns has been manifested by the emergence of several social rating indicators such as Dow Jones Sustainability Indexes, FTSE4Good Indexes, Calvert Social Index, Social Investment Index. This dual value perspective is found in companies around the world, such as the Mexican-headquartered multinational FEMSA: “our commitment to social responsibility is an integral part of our corporate culture. We recognize the importance of operating our businesses to create economic and social value for our employees and the communities where we operate, and to preserve the planet for future generations” (www.femsa.com/es/social). In the 2009 ‘Report to Society’ (De Beers, 2009, p. 2) the Chairman of the De Beers Group highlights their search for the “new normal” that will stem from exploiting the synergies that exist between “running a sustainable and responsible business, and a profitable one” that in some cases, he admits, will represent a departure from their past practices. 
Such an open plea for change is not an isolated nor a surprising statement as gradually companies realize that the ability to anticipate, manage, and mitigate long-term risks, address difficult situations at exceptionally challenging and turbulent times (Selsky & Parker, 2011), and develop new capabilities will be achieved through deepening their collaboration with stakeholders including employees, customers, governments, local communities and developing inter-organizational capabilities (Porter & Kramer, 2011; Austin, 2000a). Central to the development of the ‘new normal’ of intense interactions is the call for business to demonstrate strong intent in playing a substantial role not only in social issues management but co-creating solutions with wide and deep impacts. NPOs are key-actors with deep levels of expertise in fields such as health, education, biodiversity, poverty, and social inclusion. In addition, their expertise is embedded across local communities (Kolk, Van Tulder & Westdijk, 2006) and global networks on social issues (Crane & Matten, 2007; Pearce & Doh, 2005; Heath, 1997; Salamon & Anheier, 1997). Hence, NPOs represent substantial opportunities for corporations intentionally to co- create local and potentially global value by providing solutions to social problems (Van Tulder & Kolk, 2007) or by designing social innovations that will deliver social betterment (Austin & Reavis, 2002). Porter and Kramer (2011) see this happening by (1) developing new and profitable products, services, and markets that meet in superior ways societal needs; (2) improving processes related to, for example, worker welfare, environment, resource use in the value chain that simultaneously enhance productivity and social well-being; and (3) strengthening the surrounding community’s physical and service infrastructure that is essential for cluster and company competitiveness. They, along with several business leaders, have also emphasized the need for business to escape the narrow sightedness caused by fixation on short-term financial results and shift to a longer-term orientation within which to build9 mutually reinforcing social and economic value (Barton, 2011). Porter and Kramer’s conception contends that “Not all profit is equal. Profits involving social purpose represent a higher form of capitalism, one that creates a positive cycle of company and community prosperity” (p. 15). To achieve this they emphasize as a critical element the “ability to collaborate across profit/nonprofit boundaries” (p. 4). Unilever’s CEO Roger Polman (2010) has called for a shift to “collaborative capitalism.” Halal (2001) earlier had urged “viewing stakeholders as partners who create economic and social value through collaborative problem-solving.” Zadek (2001) similarly called for collaboration with the increasingly important nonprofit sector as the way to move beyond traditional corporate philanthropy. Ryuzaburo Kaku, the former chairman of Cannon, stated that the way for companies to reconcile economic and social obligations is kyosei, “ ‘spirit of cooperation,’ in which individuals and organizations live and work together for the common good” (1997, p. 55) . 
This approach of integrating social and economic value generation into the business strategy and operations is also the central premise of the “Base of the Pyramid” movement that has emerged over the last decade aimed at incorporating into the value chain the low income sector as consumers, suppliers, producers, distributors, and entrepreneurs (Prahalad, 2005; Prahalad & Hammond, 2002; Prahalad & Hart, 2002; Rangan, Quelch, Herrero & Barton, 2007; Hammond, Kramer, Katz, Tran & Walker, 2007). The fundamental socioeconomic value being sought is poverty alleviation through market-based initiatives. Recent research has shifted the focus from “finding a fortune” in the business opportunities of the mass low income markets to “creating a fortune” with the low income actors (London & Hart, 2011). Recent research has also highlighted the critical roles that not for profit organizations frequently play in building these ventures and co-creating value (Márquez, Reficco & Berger, 2010). Portocarrero and Delgado (2010), based on 33 case studies throughout Latin America and Spain, provide further elaboration of the concept of social value produced by socially inclusive , market-based initiatives involving the low income sector, starting from the Social Enterprise Knowledge Network’s earlier definition of social value (Social Enterprise Knowledge Network, 2006): “the pursuit of societal betterment through the removal of barriers that hinder social inclusion, the assistance to those temporarily weakened or lacking a voice, and the mitigation of undesirable side effects of economic activity.” They posit four categories of social value: (1) increasing income and expanding life options resulting from inclusion as productive agents into market value chains; (2) expanding access to goods and services that improve living conditions; (3) building political, economic, and environmental citizenship through restoring rights and duties; and (4) developing social capital through constructing networks and alliances. CSR Stages The foregoing movement toward integration is part of the evolution of theory and practice. Various scholars have attempted to categorize into stages the wide and evolving range of corporate approaches to CSR. These stage conceptualizations are relevant to our co-creation model because where a corporation has been and is heading is a precursor conditioning factor shaping the potential for and nature of collaborative value creation. 10 Zadek (2004) conceptualized corporations’ learning about CSR as passing through five stages: (1) Defensive (Deny practices, outcomes, or responsibilities), (2) Compliance (Adopt a policy-based compliance approach as a cost of doing business), (3) Managerial (Embed the societal issue in their core management processes), (4) Strategic (Integrate the societal issue into their core business strategies), (5) Civil (Promote broad industry participation in corporate responsibility). Googins, Mirvis and Rochlin (2007) – based on examination of company practices - have created a more elaborated 5 stage model, with each stage having a distinct “Strategic Intent,” which expresses the value being sought at each stage: (1) Elementary (Legal Compliance)->(2) Engaged (License to Operate)->(3) Innovative (Business Case)->(4) Integrated (Value Proposition)->(5) Transforming (Market Creation or Social Change). Across these 5 stages, stakeholder relationships also evolve: Unilateral->Interactive->Mutual Influence- >Partnership/Alliances->Multi-Organization. 
The authors assert that for the emerging generation of partnerships between businesses and nonprofits “the next big challenge is to co-create value for business and society” (p. 8). In effect, at higher levels of CSR, collaboration becomes more important in the value creation process. As creating synergistic value becomes integrated and institutionalized into a company’s mission, values, strategy, and operations, engaging in the co-creation of value with nonprofits and other stakeholders becomes an imperative. Hence, co-creation of value indicates a higher degree of CSR institutionalization. NONPROFITS’ MIGRATION TOWARD ENGAGEMENT WITH BUSINESS Just as businesses have increasingly turned to nonprofits as collaborators to implement their CSR and to produce social value, several factors have also been moving nonprofits toward a greater engagement with companies. Parallel to the increasing integration of social value into business strategy there has emerged a growing emphasis in nonprofits on incorporating economic value into their organizational equation. The field of social enterprise and social entrepreneurship emerged as an organizational concept, with some conceptualizations referring to the application of business expertise and market- based skills to the social sector, such as when nonprofit organizations operate revenue-generating enterprises. (Reis, 1999; Thompson, 2008; Boschee & McClurg, 2003). Broader conceptualizations of social entrepreneurship refer to innovative activity with a social purpose in either the business or nonprofit sectors or as hybrid structural forms which mix for-profit and nonprofit activities. (Dees, 1998a; 1998b; Austin, Stevenson & Wei-Skillern, 2006; Bromberger, 2011). Social entrepreneurship has also been applied to corporations and can include cross-sector collaborations (Austin, Leonard, Reficco & Wei-Skillern, 2006). Emerson (2003) has emphasized generation of “blended” social and economic value. The field of social marketing emerged as the application of marketing concepts and techniques to change behaviour to achieve social betterment (Kotler & Zaltman, 1971). It is a set of tools that can be used independently by either businesses or nonprofits as part of their strategies. However, Kotler and Lee (2009) have recently highlighted the importance of cross-sector collaboration in its application. 11 Many academics and practitioners have commented on the “blurring of boundaries” between the sectors (Dees & Anderson, 2003; Glasbergen, Biermann & Mol, 2007; Crane, 2010), and some researchers have empirically documented this “convergence” (Social Enterprise Knowledge Network, 2006; Austin, Gutiérrez, Ogliastri & Reficco, 2007). While this overlap of purposes reflects an increasingly common appreciation and pursuit of social and economic value creation and fosters collaboration across the sector, this is not a comfortable move for all nonprofits. Many advocacy nonprofits, in fact, view themselves as in opposition to corporations and fight against practices that they deem as detrimental to society (Grolin, 1998; Waygood & Wehrmeyer, 2003; Rehbein, Waddock & Graves, 2004; Hendry, 2006). While this can serve as a healthy social mechanism of checks and balances, it is interesting to note that many nonprofits that have traditionally been antagonists of corporations have increasingly discovered common ground and joint benefits through alliances with companies (Yaziji & Doh, 2009; Ählström & Sjöström, 2005; Stafford, Polonsky & Hartman, 2000). 
Heugens (2003) found that even from adversarial relationships with NGOs a company could develop “integrative and communication skills.” Similarly, many business leaders have shifted their conflictive posture with activist nonprofits and viewed them as important stakeholders with whom constructive interaction is possible and desirable (Argenti, 2004). John Mackey, founder and CEO of Whole Foods Market, stated, “I perceived them as our enemies. Now the best way to argue with your opponents is to completely understand their point of view,” adding, “To extend our love and care beyond our narrow self-interest is antithetical to neither our human nature nor our financial success. Rather, it leads to the further fulfilment of both” (Koehn & Miller, 2007). Porter and Kramer (2006) contend “Leaders in both business and civil society have focused too much on the friction between them and not enough on the points of intersection. The mutual dependence of corporations and society implies that both business decisions and social policies must follow the principle of shared value. That is, choices must benefit both sides. If either a business or a society pursues policies that benefit its interests at the expense of the other, it will find itself on a dangerous path. A temporary gain to one will undermine the long-term prosperity of both.” A recent illustration of this interface is when Greenpeace attacked the outdoor apparel maker Timberland with the accusation that leather for its boots came from Brazilian cattle growers who were deforesting the Amazon. CEO Jeff Swartz, who received 65,000 emails from Greenpeace supporters, engaged with the nonprofit and ensured with its suppliers that none of its leather would be sourced from the Amazon area. Nike made a similar agreement. Reflecting on the experience with the activist NGO, Swartz observed, “You may not agree with their tactics, but they may be asking legitimate questions you should have been asking yourself. And if you can find at least one common goal-in this case, a solution to deforestation- you’ve also found at least one reason for working with each other, not against” (Swartz, 2010,p. 43). Eccles, Newquist and Schatz’s (2007) advice on managing reputational risk echoed Swartz’s perspective: “Many executives are skeptical about whether such organizations are genuinely interested in working collaboratively with companies to achieve change for the public good. But NGOs are a fact of life and must be engaged. Interviews with them can also be a good way of identifying issues that may not yet have appeared on the company’s radar screen” (p. 113). In a similar vein, Yaziji (2004) documents the valuable types of resources that nonprofits can bring: legitimacy, awareness of social forces, distinct networks, and specialized technical expertise that can head off12 trouble for the business, accelerate innovation, spot future shifts in demand, shape legislation, and set industry standards. One of the bridging areas between nonprofit advocacy and collaboration with businesses has been corporate codes of conduct. Arya and Salk (2006) point out how nonprofits have compelled the adoption of such codes but also help corporations by providing knowledge that enables compliance. 
Conroy (2007) has labeled this phenomenon as the “Certification Revolution” wherein nonprofits and companies have established standards and external verification systems across a wide array of socially desirable business practices and sectors, e.g., forestry, fishing, mining, textiles, and apparel. The resultant Fair Trade movement has experienced rapid and significant growth, resulting in improved economic and social benefits to producers and workers while also giving companies a vehicle for differentiating and enriching their brands due to the social value they are co-creating. Providing consumers with more information on a company’s social practices, such as labor conditions for apparel products, can positively affect “willingness-to-pay” (Hustvedt & Bernard, 2010 ). Various more general standards and social reporting systems have emerged, such as AA 1000 on Stakeholder Management (www.accountability21.net), SA 8000 on Labor Issues (www.sa-intl.org), ISO 14000 Series of Standards on Environmental Management and ISO 26000 on Corporate Social Responsibility (www.ISO.org); Global Reporting Initiative (GRI) on economic, environmental, and social performance (www.globalreporting.org). NPO - BUSINESS COLLABORATION AND VALUE CREATION Businesses and nonprofit organizations can and do create economic and social value on their own. However, as is clear from the stakeholder literature discussed and from resource dependency theory (Pfeffer & Salancik, 1978; Wood & Gray, 1991) and from various major articles and books with ample examples of practice, cross-sector collaboration is the organizational vehicle of choice for both businesses and nonprofits to create more value together than they could have done separately (Kanter, 1999; Austin, 2000a,b; Sagawa & Segal, 2000; Googins & Rochlin, 2000; Jackson & Nelson, 2004; Selsky & Parker, 2005; Galaskiewicz & Sinclair Colman, 2006; Googins, Mirvis & Rochlin, 2007; Seitanidi, 2010; Austin, 2010). For companies, as the foregoing sections have revealed, collaborating with NPOs is a primary means of implementing their CSR. For nonprofits, alliances with businesses increase their ability to pursue more effectively their missions. The calls for heightened social legitimacy for corporations (Porter & Kramer, 2011; Manusco Brehm, 2001; Wood, 1991), corporate accountability (Newell, 2002; Bendell, 2004; Bendell, 2000b) and increased accountability for nonprofit organizations (Meadowcroft, 2007; Ebrahim, 2003; Najam, 1996) signalled the equal importance of process and outcomes (Seitanidi & Ryan, 2007) while paying attention to the role of multiple stakeholders, such as employees and beneficiaries (Le Ber & Branzei, 2010a; Seitanidi & Crane, 2009). Interestingly, Mitchell, Agle and Wood, (1997: 862) remarked in their chronology and stakeholder identification rationales that there was no stakeholder definition “emphasising mutual power”, a balance required for the process of co-creation. The previous role of NPOs as influence seekers (Oliver, 1990) has moved beyond the need to demonstrate power, legitimacy, and urgency to business managers (Mitchell, Agle & Wood, 1997) as their new found salience stems from their ability to be value producers (Austin, 2010; Le Ber & Branzei, 2010b) and from the extreme urgency of social problems (Porter & Kramer, 2011). 
The involvement of nonprofit organizations as a source of value creation ranges from their potential to co-produce intangible resources such as new capabilities through employee volunteering programmes (Muthuri, Matten & Moon, 2009), and new13 production methods as a result of the adoption of advanced technology held by nonprofit organizations (Stafford & Hartman, 2001). Salamon (2007) stresses the role of the nonprofit sector as a “massive economic force, making far more significant contributions to the solution of public problems than existing official statistics suggest” based on mobilizing millions of volunteers, engaging grass-roots energies, building cross-sector partnerships, and reinvigorating democratic governance and practice. All the above constitute the un-tapped potential of the nonprofit sector. Evidence is provided by Salamon (2007) in a country scale suggesting that the nonprofit sector “exceeds the overall growth of the economy in many countries. Thus, between 2000 and 2003, the sector's average annual rate of growth in Belgium outdistanced that of the overall economy by a factor of 2:1 (6.7 versus 3.2 per cent). In the United States, between 1996 and 2004, the non-profit sector grew at a rate that was 20 per cent faster than the overall GDP.” The above demonstrate the value potential but also the difficulties in understanding and unpacking the value creation that stems from the nonprofit sector during the partnership implementation. The role of the partners is to act as facilitators and enablers of the value creation process, understand how to add value to their partner (Andreasen, 1996), and design appropriate mechanisms to enhance the co-creation processes. The fundamental reason for the proliferation of nonprofit-business partnerships is the recognition that how businesses interact with nonprofits can have a direct effect in their success due to the connection of social and financial value (Austin, 2003). Equally, nonprofits are required to work with other organizations to achieve and defend their missions against financial cuts, a shrinking pool of donors, fierce competition by demonstrating efficiency and effectiveness in delivering value for money. Coupled with the realization that nonprofits are of significant value to business is the acceptance that nonprofits can also achieve mutual benefit through the collaboration with companies (Austin, 2003). The Corporate-NGO Partnership Barometer Summary Report (C&E, 2010) confirms the above, indicating that 87% of NGOs consider partnerships important, particularly for the generation of resources; similarly, 96% of businesses consider partnerships with NGOs important in order to meet their CSR agendas (ibid, p. 4-5). Interestingly, 59% of the respondents confirmed that they are engaged in approximately 11-50 or more partnerships (C&E, 2010, p. 7), indicating the necessity for partnership portfolio management in order to achieve portfolio balance (Austin, 2003). The most frequently identified (52%) challenge for business in a partnership is “the lack of clear processes for reviewing and measuring performance” (C&E, 2010: 13). Only 21% of nonprofit organizations consider the above as a key-challenge, as their most pressing challenge remains (52%) “lack of resources on our part” (ibid). 
There is a significant literature on economic value creation and capture for businesses dealing with other businesses or even co-creating value with their consumers (Brouthers, Brouthers & Wilkerson, 1995; Bowman & Ambrosini, 2000; Foresstrom, 2005; O’Cass & Ngo, 2010; Lepak, Smith & Taylor, 2007), and similarly nonprofits collaborating with other nonprofits (Cairns, Harris & Hutchinson, 2010; McLaughlin, 1998). Additionally, there is much written about cross-sector collaborations by business and/or nonprofits with government (Bryson, Crosby & Middleton Stone, 2006; Cooper, Bryer & Meek, 2006). While there are commonalities and differences in value creation processes across all types of intra and inter-sector collaborations that are worthy of analysis (Selsky & Parker, 2005; Milne, Iyer & Gooding-Williams, 1996), the scope of our inquiry is limited to business-nonprofit dyads. We will now examine collaborative value creation from three dimensions: collaboration relationship stages, partnering processes, and collaboration outcomes. 14 Relationship Stages and Value Creation The collaborative relationships between NPOs and businesses take distinct forms and can evolve over time through different stages. Our focus is on understanding how the value creation process can vary across these stages. To facilitate this analysis, we will use Austin’s (2000a; 2000b) conceptualization of a Collaboration Continuum, given that this work seems to be amply referenced by various cross-sector scholars in significant reviews and publications (e.g., Selsky & Parker 2010; 2005; LeBer & Branzei 2010b; 2010c; Seitanidi & Lindgreen, 2010; Bowen, Newenham-Kahindi & Herremans, 2010; Seitanidi, 2010; Kourula & Laasonen, 2010; Jamali & Keshishian, 2009; Setanidi & Crane, 2009; Glasbergen, Biermann & Mol, 2007; Brickson, 2007; Googins, Mirvis & Rochlin, 2007; Seitanidi & Ryan, 2007; Galaskiewicz & Sinclair Colman, 2006; Berger, Cunningham & Drumwright, 2004; Rondinelli & London, 2003; Wymer & Samu, 2003; Margolis & Walsh, 2003). Seitanidi (2010, p.13) explained that Austin (2000) “positioned previous forms of associational activity between the profit and the non-profit sectors in a continuum...This was an important conceptual contribution, as it allowed for a systematic and cohesive examination of previously disparate associational forms. The ‘Collaboration Continuum’ is a dynamic conceptual framework that contains two parameters of the associational activity: the degree, referring to the intensity of the relationship, and the form of interaction, referring to the structural arrangement between nonprofits and corporations (ibid, p. 21), which he based on the recognition that cross-sector relationships come in many forms and evolve over time. In fact, he termed the three stages that a relationship between the sectors may pass through as: philanthropic, transactional and integrative.” We will present this conceptualization, relate it to other scholars’ takes on relationship stages and typologies, and then examine the nature of value creation in each stage. The Collaboration Continuum (CC) has three relationship stages: Philanthropic (charitable corporate donor and NPO recipient, largely a unilateral transfer of resources), Transactional (the partners exchange more valuable resources through specific activities, sponsorships, cause-related marketing, personnel engagements), and Integrative (where missions, strategies, values, personnel, and activities experience organizational integration and co-creation of value). 
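As a purely illustrative sketch, not part of the paper, the continuum's stages and the relationship descriptors shown in Figure 1 below can be written down as a small data structure; the identifier names and the idea of scoring each descriptor on a 0-1 scale are assumptions made only for this example.

```python
from enum import Enum

class Stage(Enum):
    """Stages of the Collaboration Continuum, plus the added transformational stage."""
    PHILANTHROPIC = 1
    TRANSACTIONAL = 2
    INTEGRATIVE = 3
    TRANSFORMATIONAL = 4

# Relationship descriptors and the two ends of each continuum (see Figure 1 below).
DESCRIPTORS = {
    "level_of_engagement":    ("low", "high"),
    "importance_to_mission":  ("peripheral", "central"),
    "magnitude_of_resources": ("small", "big"),
    "type_of_resources":      ("money", "core competencies"),
    "scope_of_activities":    ("narrow", "broad"),
    "interaction_level":      ("infrequent", "intensive"),
    "trust":                  ("modest", "deep"),
    "managerial_complexity":  ("simple", "complex"),
    "strategic_value":        ("minor", "major"),
    "co_creation_of_value":   ("sole", "conjoined"),
}

def describe(profile):
    """Label each descriptor by the nearer end of its continuum.

    `profile` maps descriptor names to positions in [0, 1]; a relationship need
    not sit at the same point on every descriptor, which is the reason for using
    a continuum rather than discrete categories.
    """
    return {name: ends[0] if profile.get(name, 0.0) < 0.5 else ends[1]
            for name, ends in DESCRIPTORS.items()}

# Example: a largely transactional relationship that has already built deep trust.
print(describe({"level_of_engagement": 0.5, "trust": 0.9, "co_creation_of_value": 0.3}))
```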
Figure 1 suggests how the nature of the relationship changes across those stages in terms of the following descriptors: level of engagement, importance to mission, magnitude of resources, scope of activities, interaction level, managerial complexity, strategic value, and co-creation of value.

Figure 1. The Collaboration Continuum
Nature of relationship: Stage I Philanthropic → Stage II Transactional → Stage III Integrative
• Level of engagement: Low → High
• Importance to mission: Peripheral → Central
• Magnitude of resources: Small → Big
• Type of resources: Money → Core competencies
• Scope of activities: Narrow → Broad
• Interaction level: Infrequent → Intensive
• Trust: Modest → Deep
• Managerial complexity: Simple → Complex
• Strategic value: Minor → Major
• Co-creation of value: Sole → Conjoined
Source: Derived from James E. Austin, The Collaboration Challenge (San Francisco: Jossey-Bass, 2000).

The use of a continuum is important analytically because it recognizes that the stages are not discrete points; conceptually and in practice a collaborative relationship is multifaceted, and some characteristics may be closer to one reference stage while other traits are closer to another. Nor does a relationship automatically pass from one stage to another; movement, in either direction, is a function of decisions and actions by the collaborators. Furthermore, one need not pass through each stage, but rather could begin at a different stage, e.g., creating a transactional relationship without having had a prior philanthropic relationship. A continuum captures more usefully the dynamic nature and heterogeneity of evolving relationships and the corresponding value creation process.

Several researchers have also found the concept of a continuum useful, although they have depicted its content somewhat differently than in the CC. Bryson, Crosby and Middleton Stone (2006) use a collaboration continuum construct, with one end for organizations that only barely relate to each other regarding a social problem (as in the CC's Philanthropic stage), and the other end for "organizations that have merged into a new entity to handle problems through merged authority and capabilities" (p. 44), as in the CC's Integrative stage. Rondinelli and London (2003) similarly use a continuum of the relationship's "intensity," moving from low-intensity "arm's-length" relationships (similar to the CC's Philanthropic stage), to moderate-intensity "interactive collaborations" (similar to the Transactional stage), to high-intensity "management alliances" (similar to the Integrative stage). Bowen, Newenham-Kahindi and Herremans' (2010) review of 200 academic and practitioner sources on cross-sector collaboration uses a "continuum of community engagement" concept and offers a typology of three engagement strategies: transactional, transitional, and transformational. Their descriptions of the three strategies differ from the definitions used in the CC. Their "transactional" strategy of "giving back" is close to the definition of the Philanthropic stage in the CC.
Their "transitional" strategy points to increasing collaborative behaviour but lacks definitional power, as it is seen as a phase of moving from philanthropic activities toward a "transformational" phase, which has some of the characteristics of the CC's Integrative stage: joint problem-solving, decision-making, management, learning, and creating conjoined benefits. They point to difficulties in "distinguishing between 'collaboration and partnership' and truly transformational engagement" (p. 307). Googins, Mirvis and Rochlin (2007) characterize company relationships with stakeholders as moving from unilateral, which corresponds to Austin's Philanthropic stage, to mutual influence, which is close to the Transactional stage, to partnerships and alliances, which have integrative characteristics, and then to multi-organization, which is "transforming" and seems to depict a more aspirational stage that achieves significant social change. The identification by these researchers of a transformational stage offers an opportunity to enrich the CC, so we will make that elaboration below.

Galaskiewicz and Sinclair Colman's (2006) major review of business-NPO collaboration does not explicitly use a continuum, but the underlying differentiator in its typology is the motivation for and destination of the benefits generated. Their collaboration types can be connected to the CC framework. The review's primary focus and exhaustive treatment are on the philanthropic relationship. This first stage in the CC is the most common collaborative relationship and is characterized as predominantly motivated by altruism, although some indirect benefits for the company are hoped for. Additionally, but with much less elaboration, they point to "strategic collaborations" involving event sponsorships and in-kind donations aimed at generating direct benefits for the company and the NPO. Similarly, they point to "commercial collaborations" involving cause-related marketing, licensing, and scientific cooperation, also aimed at producing direct benefits. The asserted distinctions between these two categories are that in the latter the benefits are easier to measure and the "activity is unrelated to the social mission." It is unclear why the former would be "strategic" but not the latter, as both could be part of an explicit strategy. Some researchers have even labelled philanthropy as "strategic" based on how it is focused (Porter & Kramer, 2002). In relation to the CC, the strategic and the commercial categories correspond to the Transactional stage. Galaskiewicz and Sinclair Colman also refer to "political collaboration" that aims at influencing other entities, social or governmental; depending on the precise nature of the relationship in carrying out this purpose, this type could be placed in any of the three stages of the CC, but would seem closest to a transactional relationship.

We will now examine value creation in each of the three stages of the CC (Philanthropic, Transactional, Integrative) and also add a fourth stage, Transformational.

Philanthropic Collaborations

As Lim (2010) points out in introducing his helpful review on assessing the value of corporate philanthropy, "How to measure the value and results of corporate philanthropy remains one of corporate giving professionals' greatest challenges. Social and business benefits are often long-term or intangible, which make systematic measurement complex.
And yet: Corporate philanthropy faces increasing pressures to show it is as strategic, cost-effective, and value-enhancing as possible." In philanthropic collaborations, the directionality of the resource flow is primarily unilateral, flowing from the company to the nonprofit. In the USA, corporations donated $14.1 billion in cash and goods in 2009, up 5.9% from 2008 in inflation-adjusted dollars (Giving USA Foundation, 2010). About 31% of these donations come via company foundations, of which there were an estimated 2,745 in 2009 (Lawrence & Mukai, 2010). This "transferred resource value" accrues to the nonprofit. It is an economic value that enables the nonprofit to pursue its mission, the completion of which creates social value. Margolis and Walsh (2003, p. 289) depict these donations as the "buy" option for implementing CSR. The nonprofit has the organizational capabilities, lacking in the company, to address a particular social need, and the company has the funds that the nonprofit lacks. This is basic resource complementarity, but the resource type is generic – cash. It enables the nonprofit to do more of what it already does, but it does not add any more value than what would come from any other cash donor. Legally, corporate donations made via company foundations cannot directly benefit the corporation, although Levy (1999) revealed many ways to capture synergies between the company and its foundation. Still, it has been asserted that beyond the tax deduction, donations are largely altruistic; the benefit flows in one direction to the nonprofit and the hoped-for generation of social value.

However, a variety of benefits can, in fact, accrue to the business. There is the potential for associational value, whereby the company's reputation and goodwill with various stakeholders, including communities and regulators affecting its "License to Operate," is enhanced due to its philanthropic association with the nonprofit and its social mission. This is in part due to the generally higher levels of trust associated with nonprofits and the value created for the business when that asset is transferred through association (Seitanidi, 2010). One survey (Deloitte, 2004) indicated that 92% of Americans think it is important for companies to make charitable contributions or donate products and/or services to nonprofit organizations in the community. It has been calculated that 14% of a U.S. company's reputation is attributable to citizenship efforts (Reputation Institute, 2011). Similarly, the nonprofit can gain credibility and enhance its reputation by having been vetted and selected as a donation recipient by an important company (Galaskiewicz & Wasserman, 1989). Managing reputational risk is an important task for companies and nonprofits. Several researchers have documented that companies' philanthropic activities provide an "insurance policy" that helps mitigate the repercussions of negative events (Godfrey, Merrill & Hansen, 2009). Both partners run the risk of being tainted by their partner's negative actions and the corresponding bad publicity (Galaskiewicz & Sinclair Colman, 2006). When the donation is a company's product, it is more distinctive than a cash contribution; product donations are sometimes preferred as a way of moving inventories or promoting product usage and brand recognition.
There is evidence that a company that is perceived as collaborating with nonprofits and contributing to the resolution of social problems will garner greater respect and preference from consumers (GlobeScan, 2003). However, consumers' pathway from purchase intention to actual purchase is circuitous and requires other explicit companion actions (Bhattacharya & Sen, 2004) that are more likely to occur in the more structured collaborations found in the Transactional stage, such as Cause-Related Marketing, which we will discuss in the next section. Another stakeholder group of particular relevance in philanthropic collaborations is employees, with the perceived benefits of attracting, retaining, and motivating them (Boston College Center for Corporate Citizenship & Points of Light Foundation, 2005). Survey and experimental work has revealed that almost three-quarters of those surveyed would choose to work for a company with a good philanthropic record, all other things being equal (Deloitte, 2004; Greening & Turban, 2000). CEOs have also pointed to attracting talent as a significant motivation for their corporate philanthropy (Bishop & Green, 2008; Bhattacharya, Sen & Korschun, 2008).

If the company moves beyond cash donations, including matching employee grants, and engages in employee volunteerism through outreach programs with nonprofit groups, then additional benefits can be expected. The Deloitte (2004) survey revealed that:
• 87% of Americans believe it is important for companies to offer volunteer opportunities to their employees;
• 73% say that workplace volunteer opportunities help companies contribute to the well-being of communities;
• 61% think that they help to communicate a company's values;
• 58% believe that workplace volunteer opportunities improve morale.
A survey of 131 major U.S. corporations revealed that 92% had formal employee volunteer programs (Lim, 2010). Research has identified benefits in terms of increased employee identification with the company and enhanced job performance (Bartel, 2001; Jones, 2007). Volunteering and interacting with the nonprofits can also foster new skill development (Peterson, 2004; Sagawa & Segal, 2000). Corporate volunteering can be relatively informal but sometimes develops into highly structured collaborative projects with the nonprofit, with specific objectives, time frames, and expected exchanges of assets. For example, Timberland has a highly developed community service program with City Year and other nonprofits, including giving employees 40 hours of paid release time to work with nonprofits (Austin, 2000a; Austin, Leonard & Quinn, 2004; Austin & Elias, 2001). Many corporations encourage their management employees to volunteer as board members of nonprofits, and some have supported formal governance training and placement (Epstein & McFarlan, 2011; Korngold, 2005; Austin, 1998). In these more elaborated forms, the collaboration migrates from the philanthropic stage towards the transactional stage. This reveals that as the partners broaden the resource type from just cash to also include their employees, they can create new opportunities for value creation. The benefits accrue at the meso level for both partnering organizations and at the micro level for the employees. However, a critical determinant of how much value is created is the type of skills the employee volunteers bring to the collaboration.
If they bring specialized skills rather than just their time and manual labor, then the potential value added is greater (Kanter, 1999; Vian, Feeley, Macleod, Richards & McCoy, 2007). To conclude this subsection, we note that traditional philanthropic collaboration largely involves sole creation rather than co-creation of value. Each partner provides inputs – the corporation gives funds and the nonprofit delivers a social service. The degree of interaction is generally quite limited and the functions rather independent. There is synergistic value in that complementary resources come together that enable the nonprofit to produce social value, which in turn gives rise indirectly to economic value for the company. There are benefits at the meso, micro, and macro levels, but they are relatively less robust than at the subsequent stages in the CC.

The search for greater value gave rise to a move toward "strategic philanthropy" as part of the CSR evolution. While it has taken many different forms, one of the most noted was that put forth by Porter and Kramer (2002), which was an intellectual precursor to their 2006 analysis of the links between CSR and competitive advantage (Porter & Kramer, 2006) and their 2011 conceptualization of shared value, discussed above in the CSR Evolution section. They emphasize the importance of having corporate philanthropy be "context focused," aimed at strengthening the social, economic, and political operating environments that greatly determine a company's ability to compete. In effect, they are seeking what our CVC Framework labels linked interests between companies and communities. This is tied to the creation of synergistic value, as they contend that "social and economic goals are not inherently conflicting but integrally connected." Two further value elements in their concept concern the type of resources deployed and how they are used. They stress the importance of giving not only money but also leveraging organizations' special capabilities to strengthen each other and their joint efforts, asserting, "Philanthropy can often be the most cost-effective way to improve its competitive context, enabling companies to leverage the efforts and infrastructure of nonprofits and other institutions" (Porter & Kramer, 2002, p. 61). These shifts move collaborations further along the value creation spectrum and toward higher stages of engagement on the Collaboration Continuum.

Transactional Collaborations

Transactional relationships differ from philanthropic ones along several dimensions, as elaborated previously, but we focus here on the value aspects. Salient among these is that the directionality of the resource flow shifts from unilateral to bilateral. There is an explicit exchange of resources and reciprocal value creation (Googins & Rochlin, 2000). There is higher resource complementarity, and the transferred resources the partners are deploying are often more specialized assets with greater value-generating potential (Waddell, 2000). The partners have linked interests in that creating value for oneself is dependent on creating it for the other. Associational value is more salient and organizational fit is more essential to value creation. The value creation tends to be more quantifiable and the benefits to the organizations more direct; however, there is less certainty regarding the realization of improved societal welfare.
The types of collaborations that characterize the Transactional stage include Cause-Related Marketing (CRM), event and other sponsorships, name and logo licensing agreements, and other specific projects with clear objectives, assigned responsibilities, programmed activities, and predetermined timetables. The various certification arrangements between businesses and nonprofits would also be encompassed within the transactional collaboration category. Selsky and Parker (2010) consider these transactional collaborations as arising from a "Resource Dependency Platform," where the partners' motivation is primarily self-interest and secondarily the social issue. Varadarajan and Menon's (1988) early article on CRM indicated many benefits, but pointed to revenue enhancement as the "main objective." IEG, the leading advisory agency on event sponsorships, estimated that sponsorships in 2010 were $17.2 billion in North America and $46.3 billion globally, with Europe and Asia Pacific being the other primary areas. While sponsorship of sporting events is the largest category, social cause sponsorships grew the fastest at 6.7%, and arts at 2.7% (IEG, 2011).

Cone's (2004) longitudinal consumer survey revealed that 91% of respondents indicated they would have a more positive attitude toward a product or a company when it supports a social cause, up from 83% in 1993, because it wins their trust. Furthermore, 84%, compared to 66% in 1993, indicated that they would be likely to switch between brands of similar quality and price if one product was associated with a social cause. These respondents also stated that a company's commitment to a social issue was relevant to their decisions regarding which companies to work for, have in their communities, recommend to others, and invest in. Hoeffler and Keller (2002) assert that these campaigns can increase brand awareness, image, credibility, feelings, community, and engagement. Heal's (2008) as well as Marin, Ruiz and Rubio's (2009) research revealed that identification with the company, emotional connection, and buyer brand loyalty increased when the company was associated with a social cause. Associational value is the central benefit accruing to the company, and the various forms of CRM, sponsorships, and certifications aim to make that association more salient, with the hope that sales will be enhanced. However, intermediating variables can affect the realization of the potential associational value, such as product type, the perceived motivation of the campaign and the company's CSR record, and the size of the contribution (Smith & Langford, 2009; Bhattacharya & Sen, 2004; Strahilevitz & Myers, 1998; Strahilevitz, 1999; Strahilevitz, 2003). Although buyer intentions are often not realized, some survey evidence revealed that UK consumers actually did switch brands, try a new product, or increase purchases of a product due to its association with a charity's cause (Farquason, 2000). Hiscox and Smyth (2008) researched the following question: "A majority of surveyed consumers say they would be willing to pay extra for products made under good working conditions rather than in sweatshops, but would they really do so?" The results from experiments that they conducted in a major retail store in New York City showed that "Sales rose by 12-26% for items labelled as being made under good labor standards. Moreover, demand for the labelled products actually rose when prices were increased.
Raising prices of labelled goods by 10% actually increased their sales by an additional 21-31%." Castaldo, Perrini, Misani and Tencati (2009) confirmed the importance of trust to consumers' decision-making in the purchase of Fair Trade labelled products. Certified products can even elicit a willingness to pay a premium price from environmentally conscious consumers (Thompson, Anderson, Hansen & Kahle, 2010). Collaboration with certifying organizations is one mechanism for gaining consumer trust, but the company's CSR reputation also proved to be a key source of trust. The strength of that reputation also provides some "insurance" in the form of resistance by consumers to negative information about its CSR activities (Eisingerich, Rubera, Seifert & Bhardwaj, 2011). The effectiveness of a CRM campaign can be enhanced or decreased depending on the specific methods used to implement it, e.g., the frequency of repetition of the CRM claims as a means of overcoming consumer skepticism (Singh, Kristensen & Villaseñor, 2009).

The primary benefit being sought by the nonprofits is the revenue from the company, often a percentage of sales if a product is being promoted around the cause in a special campaign, or a prearranged fee. American Express's affinity marketing campaign, which donated a percentage of sales or a fee for new card applications, resulted in a 28% increase in card usage in the first month and a 45% rise in applications, producing $1.7 million for the restoration of the Statue of Liberty. Coca-Cola's six-week promotion to support Mothers Against Drunk Driving boosted sales 490% and provided the nonprofit with 15 cents for each case sold (Gray & Hall, 1998). The associated publicity for the cause and the collaborating nonprofit can also be valuable to the nonprofit and generate some social value in the form of greater public awareness of the need. Because the associational relationship is closer and more visible in these transactional relationships, the risks to the partners' respective brands, i.e., the creation of negative value, are also greater (Wymer & Samu, 2003; Andreasen, 1996; Haddad & Nanda, 2001). Basil and Herr (2003) point to the risk of negative attitudes toward the nonprofit arising from inappropriate organizational fit between the partners. Berger, Cunningham and Drumwright (2004) also stress the importance of alignment of missions, resources, management, work force, target market, product/cause, culture, business cycle, and evaluation if the partners are to realize the full benefits of their social alliance. Gourville and Rangan (2004) present a model and examples that show how appropriate fit allows the partners to generate value beyond the "first order" direct benefits of enhanced revenues for the company and fees for the nonprofit, to produce "second order" benefits. For the firm these could include strengthening relationships with employees, investors, and the larger community; for the nonprofit they could include greater name recognition and a widening of its donor base. Good fit enables the generation of synergistic value, and the better the fit, the greater the value creation. Beyond these benefits accruing at the meso level to the partnering organizations, there remains the issue of the extent to which these transactional collaborations generate societal benefits. Some have asserted that these are largely commercial undertakings rather than social purpose alliances (Galaskiewicz & Sinclair Colman, 2006; Porter & Kramer, 2006).
Many CRM undertakings are funded from corporate marketing budgets rather than philanthropic funds, and their effects on consumer intentions and actions are measured. This is evidence that companies recognize the business case for supporting nonprofits in this manner, and it also creates access for nonprofits to a much larger pool of corporate resources for social causes than just the philanthropy budget. However, there is little parallel effort documented in the literature to measure the presumed resultant societal benefit, although environmental collaborations seem to assess impact outcomes more often. As in the Philanthropic stage, there exists the assumption that by channelling resources to the nonprofit, social value creation will be enabled. To the extent that more resources are generated for the nonprofit via the transactional arrangements than would have occurred from a traditional donation, the potential for greater value exists.

In assessing social value generation, it is important to differentiate among types of transactional collaborations. Seitanidi and Ryan (2007), for example, distinguish between "commercial sponsorship" and "socio-sponsorship" based on predominant purpose, with the former aimed primarily at generating revenues for the partners and the latter at meeting social needs, although benefits also accrue to the partnering organizations. At the macro level, the heightened publicity for the cause may create larger awareness of a problem and steps for remediation. For example, Avon's social cause partnerships with breast cancer organizations in over 50 countries have resulted in $700 million being donated since 1992 to these nonprofits and over 100,000 women being educated about breast cancer early detection, diagnosis, and treatment (Avon Foundation for Women, 2011). Gourville and Rangan (2004) provide a useful methodology aimed at assessing the first- and second-order benefits of CRM to both business and nonprofit partners, which facilitates more constructive discussions in the value capture negotiations; however, they do not provide guidance for assessing the societal value generated. Lim's useful review (2010) also provides very helpful methodologies for assessing the corporate value of transactional and other CSR efforts, but the focus is primarily on the business benefits, direct and derived. Nonetheless, he also describes a variety of approaches and methodologies for measuring social impact, including some references with examples applied to collaborations in different social sectors, to which we will return in our subsequent outcomes section.

Integrative Collaborations

A collaboration that evolves into the integrative stage changes the relationship in many fundamental ways, including the value creation process. Organizational fit becomes more synchronous: partners' missions, values, and strategies find much greater congruency as a result of working together successfully and developing deeper relationships and greater trust. The discovery of linked interests and synergistic value creation provides an incentive for collaborating ever more closely to co-create even more value. The strategic importance of the collaboration becomes significant and is seen as integral to the success of each organization, but beyond this, greater priority is placed on producing societal betterment. Good collaboration produces better collaboration, creating a virtuous cycle.
But arriving at this state requires much effort and careful relational processes on many fronts, including reconciling the partners' different value creation logics (Le Ber & Branzei, 2010a). Achieving this value frame fit can occur progressively as a relationship evolves through the stages or over time within the integrative stage of the Collaboration Continuum. The value creation equation changes in the integrative relationship compared to the more common transactional relationships, particularly in terms of the type of resources and how they are used. The partners increasingly use more of their key assets and core competencies, but rather than just using them in an isolated fashion to perform an activity that produces value for the collaboration (as often occurs in transactional collaborations), they combine these key resources. The directionality of the resource flow is conjoined. Jeff Swartz, CEO of Timberland and formerly Chair of the Board of its NPO partner City Year, described their integrative relationship: "Our organization and their organization, while not completely commingled, are much more linked.... While we remain separate organizations, when we come together to do things we become one organization" (Austin, 2000a, p. 27). The importance of this intermingling is that it creates an entirely new constellation of productive resources, which in turn holds potential for co-creating greater value for the partners and for society through synergistic innovative solutions.

Kanter (1999) cited examples of each partner combining their complementary competencies to create innovative solutions, e.g., in welfare-to-work programs: "while Marriott provides uniforms, lunches, training sites, program management, on-the-job training, and mentoring, its partners help locate and screen candidates and assist them with housing, child care, and transportation" (p. 129). In IBM's Reinventing Education collaboration with schools, the company's staff had their offices in the schools and interacted constantly with the teachers in a continuous co-creation process of feedback and development. Whereas transactional collaborations tend to be clearly defined and for a specified time period, in the integrative stage innovative co-creation has a different dynamic, as Kanter noted: "Like any R&D project, new-paradigm partnerships require sustained commitment. The inherent uncertainty of innovation - trying something that has never been done before in that particular setting - means that initial project plans are best guesses, not firm forecasts" (p. 130). Rondinelli and London (2003) provide several examples of "highly intensive" collaborations between environmental NPOs and companies in which the partners integrated their respective expertise to co-create innovative solutions aimed at environmentally improving company products and processes. The Alliance for Environmental Innovation worked in integrated, cross-functional teams with UPS and its suppliers, combining their respective technical expertise on material usage lifecycles in a collective discovery process that "created new designs and technologies, resulting in an almost 50 percent reduction in air pollution, a 15 percent decline in wastewater discharge, and 12% less in energy usage" (p. 72). These outcomes are societal benefits that simultaneously generate economic benefits for the company. The Alliance's aspiration is to create best practices that will be emulated throughout a sector, thereby multiplying the social value creation.
There were clearly linked interests giving rise to synergistic value. Holmes and Moir (2007) suggest that when the collaboration has a narrow scope, the innovation is likely to be incremental, whereas a more open-ended search would potentially produce more radical and even unexpected results. In the integrative stage, while benefits to the partners remain a priority, generating societal value takes on greater importance. This emerges from the company's values when generating social value has become an integral part of its core strategy. A company cannot undertake an integrative collaboration until its CSR has reached an integrative state. For example, as Googins, Mirvis and Rochlin (2007) report, one of IBM's values is "innovation that matters for the world," with its corollary "collaboration that matters." The company holds that in its "socio-commercial efforts, the community comes first. Only when the company proves its efforts in society…does it…leverage marketing or build commercial extensions." IBM's CEO Sam Palmisano explained, "It's who we are; it's how we do business; it's part of our values; it's in the DNA of our culture" (p. 123). The more CSR is institutionalized, the more co-creation becomes part of the value creation process, i.e., it moves from sole creation to co-creation.

It is in the integrative stage that interaction value emerges as a more significant benefit, derived from the closer and richer interrelations between partners. Bowen, Newenham-Kahindi and Herremans (2010) assert that "value is more likely to be created through engagement which is relational rather than transactional" (p. 311). The intangible assets that are produced – e.g., trust, learning, knowledge, communication, transparency, conflict management, social capital, social issues sensitivity – have intrinsic value to partnering organizations, individuals, and the larger society, but in addition are enablers of integrative collaboration. While these intangibles and processes will be further discussed in the subsequent section on collaboration implementation, it is worth noting that various researchers have pointed to these elements as essential to co-creation of value (Austin, 2000a, 2000b; Berger, Cunningham & Drumwright, 2004; Bowen, Newenham-Kahindi & Herremans, 2010; Bryson, Crosby & Middleton Stone, 2006; Googins, Mirvis & Rochlin, 2007; Googins & Rochlin, 2000; Le Ber & Branzei, 2010b, 2011; Selsky & Parker, 2005, 2010; Rondinelli & London, 2003; Sagawa & Segal, 2000; Seitanidi, 2010; Seitanidi & Ryan, 2007). Integrative collaborations are much more complex and organic than transactional arrangements. They require deployment of more valuable resources and demand more managerial and leadership effort, and therefore entail a much deeper commitment. The compensation for these greater investments in co-creation is greater value for the partners and society. The substantiating evidence from the literature comes primarily via case studies, an especially appropriate methodology for describing, analyzing, and understanding the partnering processes. However, the specific pathways for the co-creation of value have not received the thoroughness of scrutiny that their importance merits, particularly, as we elaborate subsequently, the outcomes for societal welfare at the macro, meso, and micro levels.

Transformational Collaborations

We now briefly offer a possible extension of Austin's Collaboration Continuum with the addition of a fourth stage: Transformational Collaborations.
This is a theoretical rather than an empirically based conceptualization. It would build on but move beyond the integrative stage and emerge as a yet higher level of convergence. The primary focus in this stage is to co-create transformative change at the societal level. There is shared learning about social needs and partners' roles in meeting those needs, which Selsky and Parker (2010) refer to as a "Social Issues Platform" for the collaboration. Partners not only agree on the social issue they want to address because it affects them both (Waddock, 1989), but they also agree that their intention is to transform their own processes or to deliver transformation through a social innovation that will change for the better the lives of those affected by the social problem. The end beneficiaries take a more active role in the transformation process (Le Ber & Branzei, 2010b). The aim is to create "disruptive social innovations" (Christensen, Baumann, Ruggles & Sadtler, 2006). This stage represents collaborative social entrepreneurship, which "aims for value in the form of large-scale, transformational benefit that accrues either to a significant segment of society or to society at large" (Martin & Osberg, 2007; Nelson & Jenkins, 2006). Interdependence and collective action constitute the operational modality. One form might be the joint creation of an entirely new hybrid organization. For example, Pfizer and the Edna McConnell Clark Foundation joined together to create the International Trachoma Initiative as a way to most effectively achieve their goal of eliminating trachoma (Barrett, Austin & McCarthy, 2000). As the social problems being addressed become more urgent or complex, the need to involve other organizations in the solution also increases, giving rise to multi-party, multi-sector collaborations. The transformative effects would not only be in social, economic, or political systems, but would also be transformational for the partnering organizations. The collaboration would change each organization and its people in profound, structural, and irreversible ways.

We will now examine the third component of the CVC Framework, partnership processes, where the potential for and the creation of value will be discussed.

Partnership Processes

This section of the paper reviews the literature on nonprofit-business partnership processes that contribute importantly to the co-creation of value in the partnership formation and implementation phases. Understanding the partnership formation phase is important, as it provides indications of the potential for co-creation of value that is likely to take place during the subsequent partnership implementation phase, in which partners' resources are deployed and the key interactions for the co-creation of value occur. We first discuss the key processes that indicate the potential for the co-creation of value in partnership formation. Next we examine partner selection as the connecting process between partnership formation and implementation. Finally, we discuss the micro-processes and dynamics that contribute to the co-creation of value in the implementation phase, where value is created by the partners.
Partnership Formation: Potential for Co-creation of Value

Partnership formation (Selsky & Parker, 2005) is usually expressed in the literature as initial conditions (Bryson, Crosby & Middleton Stone, 2006), problem-setting processes (McCann, 1983; Gray, 1989), coalition building (Waddock, 1989), and preconditions for partnerships (Waddell & Brown, 1997). Some scholars present formation as part of the partnership selection process (McCann, 1983; Gray, 1989; Waddock, 1989), hence the processes of formation and implementation appear to "overlap and interact" (McCann, 1983, p. 178), while others suggest that partnership formation consists of a distinct phase or a set of preconditions (Waddell & Brown, 1997; Seitanidi, Koufopoulos & Palmer, 2010). We propose that the selection stage is positioned in a grey area, functioning as a bridge between partnership formation and implementation. Conceptually and analytically, we follow Seitanidi, Koufopoulos and Palmer (2010) and Seitanidi and Crane (2009) in separating the two in order to discuss the co-creation of value. McCann (1983, p. 178), however, suggests that "processes greatly overlap and interact," which is observed in the extension of processes across formation, selection and implementation. For example, the pre-selection of partners and due diligence are often neither easy nor clear, nor are they positioned within a discrete stage. As Vurro, Dacin and Perrini (2010) remark, the time dimension in the analysis of cross-sector social partnerships (Selsky & Parker, 2005) is represented by studies that examine the static characteristics of partnerships (Bryson, Crosby & Middleton Stone, 2006) and by process-based views (Seitanidi & Crane, 2009) that "extend the debate to the variety of managerial challenges and conditions affecting collaborations as they progress through stages" (Vurro, Dacin & Perrini, 2010, p. 41).

Partnership formation is a process originating either prior to or during previous interactions (Bryson, Crosby & Middleton Stone, 2006) with the same or other partners, in either philanthropic or transactional relationships (Austin, 2000b). Hence, formation can be seen as an early, informal assessment mechanism that evaluates the suitability of a collaboration to evolve into an integrative or transformational relationship, where the long-term value creation potential of the partnership for the partners and society is higher (Austin, 2000a). Underestimating the costs and negative effects of poor organizational pairing can be the result of insufficient experience in co-creation of value, planning and preparation (Berger, Cunningham & Drumwright, 2004; Jamali & Keshishian, 2009). Often managers "think about it" but do not usually invest "a huge amount of time in that process" (Austin, 2000a, p. 50). Such neglect carries consequences, as due diligence and relationship building are key process variables that can determine the fit between the partners. This process will increase managers' ability to anticipate and capture the full potential of the partnership for both the business and the nonprofit partner. More importantly, the steps that we discuss below will provide early indications of the benefits that are likely to be produced by both organizations collectively (i.e., at the partnership level) (Gourville & Rangan, 2004; Clarke & Fuller, 2010), indicating the co-creation of value and the potential to externalize that value to society.
However, deciding which partner holds the highest potential for the production of synergistic value is time-consuming and challenging. The difficulties in undertaking cross-sectoral partnering, and particularly in developing integrative and transformational collaborations, are extensively documented in the literature (Kolk, Van Tulder & Kostwinder, 2008; Bryson, Crosby & Middleton Stone, 2006; Teegen, Doh & Vachani, 2004; Austin, 2000a; Crane, 2000, 1998), as are the misunderstandings and power imbalances involved (Berger, Cunningham & Drumwright, 2004; Seitanidi & Ryan, 2007). Achieving congruence in mission, strategy and values during the partnership relationship has been deemed particularly significant (Austin, 2000a); however, sectoral differences between profit and nonprofit organizations create barriers. Differences in goals and characteristics (McFarlan, 1999), values, motives and types of constituents (Di Maggio & Anheier, 1990; Crane, 1998; Milne, Iyer & Gooding-Williams, 1996; Alsop, 2004), objectives (Heap, 1998; Stafford & Hartman, 2001), missions (Shaffer & Hillman, 2000; Westley & Vredenburg, 1997), and organizational characteristics and structures (Berger, Cunningham & Drumwright, 2004) require early measures of fit that can provide indications of the potential for co-creation of value. The partners' differences constitute at the same time "both obstacles and advantages to collaboration" (Austin, 2010, p. 13) and can be the source of potential complementary value creation (Yaziji & Doh, 2009). Bryson, Crosby and Middleton Stone (2006, p. 46) suggest: "As a society, we rely on the differential strengths of the for-profit, public and non-profit sectors to overcome the weaknesses or failures of the other sectors and to contribute to the creation of public value."

Berger, Cunningham and Drumwright (2004) suggest that many of the partnership problems, but not all, can be predicted and dealt with. Such problems include misunderstandings, misallocation of costs and benefits, mismatches of power, lack of complementarity in skills, resources and decision-making styles, mismatching of time scales, and mistrust. They propose a useful set of nine measures of fit and compatibility that can assist the partners in assessing the existing and potential degree of fit, including mission, resources, management, work force, target market, product/cause, cultural, cycle and evaluation fit (ibid., pp. 69-76). However, they assert that the measures of fit most crucial for the initial stages are mission fit, resource fit, management fit and evaluation fit. In the case of a new partnership it would be rather difficult to examine the management fit at the formation phase; hence we discuss this issue in partnership implementation. We extend this fit framework by adding further measures of fit that contribute to the anticipation of problems while focusing on maximizing the potential for the co-creation of value at the partnership formation stage.

Partnership Fit Potential

Partnership fit refers to the degree to which organizations can achieve congruence in their perceptions, interests, and strategic direction. As pointed out by Weiser, Kahane, Rochlin and Landis (2006, p. 6), "the correct partnership is everything"; hence, when organizations are in the process of either deepening an existing collaboration (previously philanthropic or transactional) or experimenting with a new one, they should seek early indications of partnership fit.
An important mechanism (Bryson, Crosby & Middleton Stone, 2006) that offers an indication of value co-creation potential is the initial articulation of the social problem that affects both partners (Gray, 1989; Waddock, 1986). Examining partners' social problem frames reveals commonalities or differences in how they perceive the dimensions of a social problem (McCann, 1983). The process of articulation can identify incompatibilities, signalling the need for either frame realignment or the abandonment of the collaborative effort. Provided there is sufficient common ground, the partners will next identify whether their individual interests are sufficiently linked (Logsdon, 1991). This process will assist partners in understanding how they view value (both benefits and costs) and, if required, in reconciling any divergent value creation frames. Part of this process is developing an early understanding of how the social problem might be addressed through the partners' capabilities, and developing an insight into how the benefits of the partnership will escalate from the meso to the macro level, i.e., how society is going to be better off due to the partnering efforts of the business and nonprofit organizations (Austin, 2000b). This moves the concerns "beyond how the benefit pie is divided among the collaborators … to the potential of cross sector partnerships to be a significant transformative force in society" (Austin, 2010, p. 13). Importantly, moving beyond the social problem focus to the societal level encourages the partners to look at the partnership's "broader political implications" (Crane, 2010, p. 17), elevating social partnerships to global governance mechanisms (Crane, 2010). In effect, if the partners are able to link their interests, and also draw links with broader societal betterment, it would provide an early indication of high potential for co-creation of value for the social good, i.e., synergistic value capture at the societal level. The more the social problem is linked to the interests of the organizations, the higher the potential to institutionalize the co-creation process within the organizations, which will lead to better value capture by the partners and by intended or unintended beneficiaries (Le Ber & Branzei, 2010a).

Resource fit is a further step, referring to resource complementarity, a precondition for collaboration. The compatibilities and differences across the partners allow for diverse combinations of tangible and intangible resources into unique resource amalgamations that can not only benefit the partners in new ways but, more importantly, externalize to society the socio-economic innovation value produced. In order to assess the complementarity of the resources, it is important to recognize the resource types that each partner has the potential to contribute, including tangible resources (money, land, facilities, machinery, supplies, structures, natural resources) and intangible resources (knowledge, capabilities, management practices and skills). As early as 1987, intangibles were considered the most valuable assets of a company (Itami & Roehl, 1987), together with core competencies (Prahalad & Hamel, 1990), which have a high potential to increase the value of the company (Sanchez, Chaminade & Olea, 2000) or of the nonprofit organization. Galbreath (2002) suggests that the change in what constitutes value, and in what the rules of value creation are, is one of the most far-reaching changes of the twenty-first century.
Moving from the tradition of tangible assets to intangible and relationship assets constitutes a change in perceiving where the value of organizations is positioned today: "what becomes easily apparent is that the firm's success is ultimately derived from relationships, both internal and external" (Galbreath, 2002, p. 118). An issue interlinked with resource fit is the resource flow across the partners, i.e., the extent to which the exchange of resources is unilateral or bilateral and reciprocal. During the co-creation of value, the exchange of resources is required to be reciprocal and multi-directional, involving both tangible and intangible resources. Familiarizing oneself with the partner organizations and their resource availability is a requirement for assessing the type and complementarity of resources. The directionality of resources will not be easily assessed at the formation phase unless the partners have had previous interactions (Goffman, 1983) or information is available from their previous interactions with other partners.

Differences across the partners include misunderstandings of each other's motivations due to unfamiliarity (Long & Arnold, 1995; Kolk, Van Tulder & Westdijk, 2006; Huxham & Vangen, 2000), often leading to distrust that can undermine the formation and implementation processes (Rondinelli & London, 2003). Examining the partners' motivations can provide an early indication of their intentions and expected benefits (Seitanidi, 2010), offering some evidence of the transformative intention of the partnership (Seitanidi, Koufopoulos & Palmer, 2010). Due to the time horizon required for such integrative and transformational relationships (Austin, 2000a; Rondinelli & London, 2003), it is important to include in the formation analysis instances of previous value creation through the production of "first order" (direct transfer of monetary funds) and "second order" benefits (e.g., improved employee morale, increased productivity, a better motivated sales force) (Gourville & Rangan, 2004). This process will safeguard a more appropriate fit between the organizations and will enable the generation of synergistic value, which is likely to lead to greater value creation.

Linked to the motives is the mission of each partner organization. A particularly important measure for assessing whether the organizations are compatible is mission fit. When the mission of each organization is strongly aligned with the partnership (Berger, Cunningham & Drumwright, 2004; Gourville & Rangan, 2004), the relationship has more potential to be important to both organizations. In the case of co-creation of value, organizations might even use the partnership as a way to redefine their mission (Berger, Cunningham & Drumwright, 2004), which will develop a stronger connection with the partnership and with each other. Hence the first step in assessing organizational fit is to examine the mission fit across the partner organizations. The previous experience of the partners (Hardy, Lawrence & Phillips, 2006), including their unique organizational histories (Barnett, 2007) in developing value relations, is an important determinant of the potential partnership fit, indicating the ability of the partners to uncover novel capabilities and improve their prospects for social value creation (Brickson, 2007; Plowman, Baker, Kulkarni, Solansky & Travis, 2007). This will indicate the degree of "structural embeddedness" (Bryson, Crosby & Middleton Stone, 2006, p. 46), i.e.,
how positively the partners have interacted in the past (Jones, Hesterly & Borgatti, 1997; Ring & Van de Ven, 1994) in producing value. Therefore, in order for the partners not to rely "on the shadow of the future" (Rondinelli & London, 2003, p. 71), the history of interactions between the two organizations, or with previous partners, will provide an indication of the partners' relevant value creation experience for integrative or transformative relations (Seitanidi, Koufopoulos & Palmer, 2010). Because organizations exist in turbulent environments, their history is dynamic and reassessment becomes a continual exercise (Selsky, Goes & Babüroglu, 2007).

One of the central motives for the formation of partnerships for both partners is to gain visibility (Gourville & Rangan, 2004), which can be expressed as reputation (Tully, 2004), public image (Heap, 1998; Rondinelli & London, 2003; Alsop, 2004), and a desire to improve public relations (Milne, Iyer & Gooding-Williams, 1996). Visibility contributes to the social license to operate, access to local communities (Heap, 1998; Greenall & Rovere, 1999) for high-risk industries, credibility (Gourville & Rangan, 2004), and increased potential for funding from the profit sector (Heap, 1998; Seitanidi, 2010). In effect, positive visibility is a highly desired outcome for the partners. Although positive reputation is an intangible resource, we consider visibility a fit measure that is assessed either explicitly or implicitly during the formation phase. Organizations consider the degree of their partners' visibility, and the extent to which it is positive or negative, at a very early stage. In some cases a corporation may consider appropriate a partner with medium or low visibility in order to avoid attracting unnecessary publicity to its early attempts at setting up a partnership, as was the case with the Rio Tinto-Earthwatch partnership (Seitanidi, 2010). On the other hand, negative visibility might create a unique opportunity for the co-creation of value for the partners and for society, as it holds the potential for social innovation and change (Le Ber & Branzei, 2010a; Seitanidi, 2010). It is essential that both partners are comfortable with the potential benefits and costs of their partner's visibility, which will contribute to the organizational fit and the potential for co-creation of value. Finally, Rondinelli and London (2003) refer to the importance of identifying pre-partnership champions, particularly senior executives with a long-term commitment who will play a key role in developing cross-functional teams within and across the partnership. The compatibility of the partnership champions in both organizations is a key determinant of the potential partnership fit, which will extend to the people they each select as members of their organization's partnership team. Below we summarise the measures of fit discussed above.
Figure 2. Partnership formation: partnership fit potential
• Initial articulation of the social problem
• Identify linked interests and resources across partners and for social betterment
• Identify partners' motives and missions
• Identify stakeholders affected by each of the partners
• Identify the history of interactions and visibility fit
• Identify pre-partnership champions

Partnership Implementation: Selection, Design, and Institutionalization for Synergistic Value Partnerships

In order to examine the value creation processes in the implementation phase, we employ the micro-stage model of Seitanidi and Crane (2009), which responded to previous calls (Godfrey & Hatch, 2007; Clarke, 2007a, 2007b; Waddock, 1989) for more studies on the processes of interaction required in order to deepen our understanding. The model moves beyond the chronological progression models that define broad stages (Bryson, Crosby & Middleton Stone, 2006; Berger, Cunningham & Drumwright, 2004; Googins & Rochlin, 2000; Wilson & Charlton, 1997; Westley & Vredenburg, 1997; McCann, 1983), providing a process-based, dynamic view (Vurro, Dacin & Perrini, 2010) by introducing micro-processes as a way of overcoming implementation difficulties (Pressman & Wildavsky, 1973), demonstrating the quality of partnering, and allowing for a deeper understanding of partnership implementation (McCann, 1983). As Godfrey and Hatch (2007, p. 87) remark: "in a world that is increasingly global and pluralistic, progress in our understanding of CSR must include theorizing around the micro-level processes practicing managers engage in when allocating resources toward social initiatives". Following the selection-design-institutionalization stages, the model focuses only on the implementation of partnerships rather than incorporating outcomes as part of the examination of partnership processes (Clarke & Fuller, 2010; Hood, Logsdon & Thompson, 1993; Dalal-Clayton & Bass, 2002). We extend the model of Seitanidi and Crane (2009) by discussing processes that relate to the co-creation of synergistic value. More specifically, we focus on the opportunities for the co-creation of socio-economic value during the implementation phase of partnerships, and we discuss how the dynamics between the partners can facilitate these processes. We further indicate the two levels of implementation, organizational and collaborative, responding to the call of Clarke and Fuller (2010) for such a separation.

Partner Selection

Organizations often collect information or engage in preliminary discussions during the formation stage with several potential partners (Seitanidi, 2010). Only in the selection stage do they decide to proceed with more in-depth collection of information about the organization they wish to partner with. Despite being a common reason for partnership failure, poor partner selection (Holmberg & Cummings, 2009) has received relatively limited attention even in the more advanced strategic alliances literature (Geringer, 1991). Selecting the most appropriate partner is a decision that to a large extent determines the success of the partnership. Having identified during the formation stage the key social issue of interest (Waddock, 1989; Selsky & Parker, 2005), the organizations theoretically make a decision whether to embark on an integrative or transformational collaboration or to evolve their philanthropic or transactional relationship into these more intense strategic alliances.
In the case of a transformational collaboration, each organization needs to affirm the intent of its potential partner to co-create change that will transform their own processes and deliver transformation externally through social innovation that will change for the better the lives of those affected by the social problem. In this case additional criteria need to be met by both organizations, which we discuss below. Simonin (1997) refers to "collaborative know-how", encompassing "knowledge, skills and competences" (Draulans, de Man & Volberda, 2003), a distinctive set of skills that are important for the selection of partners. This "alliance process knowledge" requires skills in searching, negotiating, and terminating early on relations that do not hold the potential for the co-creation of value (Kumar & Nti, 1998). Partner selection might consist of a long process that can take years or a brief process that lasts a few months (Seitanidi, 2010; London & Rondinelli, 2003). Depending on the existence of previous interactions, familiarity, and trust between the partners (Selsky & Parker, 2005; 2010; Austin, 2000a), the selection can be either emergent or planned (Seitanidi & Crane, 2009). Inadequate attention to the selection of partners due to a lack of detailed analysis is associated with organizational inexperience (Harbison & Pekar, 1998), which can result in short-lived collaborations. The highest potential for capturing partnership benefits is associated with long-term collaborations, which balance the initial costs and time required during the partner selection process (Pangarkar, 2003). Developing partnership-specific criteria facilitates the process of assessing potential partners; selection criteria may include: industry of interest, scope of operations, cost effectiveness (investment required vs. generation of potential value), time-scales of operation, personal affiliations, and availability and type of resources (Holmberg & Cummings, 2009; Seitanidi & Crane, 2009; Seitanidi, 2010). The development of selection criteria will make the complementarity potential visible and point towards a strategic approach (Holmberg & Cummings, 2009) to the creation of value. When the aim is to co-create synergistic value, the more compatible the criteria identified by both partners, the higher the potential for operational complementarity. A transformational collaboration would require additional criteria, such as identifying the operational area for process changes and identifying the domain for innovation. Despite partnerships being presented as mechanisms for the mitigation of risk (Selsky & Parker, 2005; Tully, 2004; Warner & Sullivan, 2004; Wymer & Samu, 2003; Bendell, 2000b; Heap, 2000; Andrioff & Waddock, 2002; Heap, 1998), and despite the important role of risk, coupled with social value creation, in enabling the momentum for partnership success (Le Ber & Branzei, 2010b), models of partnership implementation do not usually incorporate risk assessment (for exceptions see Seitanidi, 2010; Seitanidi & Crane, 2009; Le Ber & Branzei, 2010b; Andrioff, 2000). The risk assessment would be a necessary micro-process, particularly in the case of high negative visibility of one of the partners, in order to assess the potential value loss either due to exposure to public criticism or due to early termination of the partnership as a result of failure to adjust the partners' value creation frames (Le Ber & Branzei, 2010c).
Although it is the nonprofit organization's credibility that may be more at stake in forging a partnership with a business, both are exposed to negative affiliation value (Utting, 2005). We propose a formal and an informal risk assessment process for both partners, each elaborated through internal and external processes. The formal internal risk assessment process aims to collect interaction intelligence across the potential partner organizations by requesting material such as internal reports, both process and output reports, also referred to as process-centric and plan-centric (Clarke & Fuller, 2010), press releases, and external assessments of previous collaborative projects (Utting, 2005). The formal external process aims to collect intelligence from previous partners in order to develop an awareness of any formal incidents that took place or any serious formal concerns voiced by previous partner organizations. Moving to the informal risk assessment process, we follow the suggestions of Seitanidi and Crane (2009), which include an internal process consisting of open dialogue among the constituents of each partner organization (in the case of the nonprofit organization: employees, trustees, members of the board, beneficiaries) and informal meetings between the partners, particularly the potential members of the partnership teams. The informal external process consists of open dialogue between each partner and its peer organizations within its own sector and across other sectors in order to collect intelligence such as positive or negative 'word of mouth' and anecdotal evidence related to the potential partner. The above processes allow for accountable decision-making mechanisms through the voicing of internal and external concerns (Hamman & Acutt, 2003), identifying sources of potential value loss, and developing an appreciation of the types of resources available from partners and the outcomes that were previously achieved; hence each partner would be in a much better position to develop a strategy for managing potential problems during the value creation processes (London & Rondinelli, 2003), both informally and formally (Seitanidi & Crane, 2009). Figure 3 offers an overview of the process of partnership selection. We incorporate feedback loops (Clarke & Fuller, 2010) to demonstrate the role of the risk assessment in informing the final options of potential partners. The partnership selection consists predominantly of micro-processes that take place at the organizational level of each partner. Furthermore, interactions across multiple stakeholder groups are encouraged during partnership selection as a way of managing power distribution, thereby asserting that collaboration can be a different model of political behaviour rather than being devoid of political dynamics (Gray, 1989). It is only in the next stage (partnership design) that we identify two levels of analysis: the organizational level and the 'coalition framing' level, as referred to by Croteau and Hicks (2003), also referred to as the 'inter-organizational collective' (Astley, 1984) or collaborative level (Huxham, 1993; Clarke & Fuller, 2010).
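To make the above micro-processes more concrete, the sketch below shows, in purely schematic terms, how an organization might record partnership-specific selection criteria and the findings of the formal and informal risk assessment for each candidate partner. It is a minimal illustration only: the criteria names, weights, rating scale, and acceptance threshold are our hypothetical assumptions and are not taken from the frameworks cited above (e.g., Holmberg & Cummings, 2009; Seitanidi & Crane, 2009).

# Hypothetical sketch: scoring candidate partners against self-defined
# selection criteria and recording risk-assessment flags.
# Criteria, weights, scale (0-5), and the threshold are illustrative only.

from dataclasses import dataclass, field


@dataclass
class CandidatePartner:
    name: str
    # Ratings (0-5) against partnership-specific criteria, e.g. linked
    # interests, resource complementarity, visibility fit, time-scales.
    criteria_scores: dict
    # Concerns raised by the formal/informal, internal/external processes.
    risk_flags: list = field(default_factory=list)


def weighted_fit(candidate: CandidatePartner, weights: dict) -> float:
    """Return a weighted average fit score in the 0-5 range."""
    total_weight = sum(weights.values())
    return sum(
        candidate.criteria_scores.get(criterion, 0) * weight
        for criterion, weight in weights.items()
    ) / total_weight


def shortlist(candidates, weights, threshold=3.5):
    """Keep candidates above the fit threshold with no unresolved risk flags."""
    return [
        c for c in candidates
        if weighted_fit(c, weights) >= threshold and not c.risk_flags
    ]


if __name__ == "__main__":
    weights = {"linked interests": 3, "resource complementarity": 2, "visibility fit": 1}
    candidates = [
        CandidatePartner("NPO A", {"linked interests": 4, "resource complementarity": 5, "visibility fit": 3}),
        CandidatePartner("NPO B", {"linked interests": 2, "resource complementarity": 3, "visibility fit": 4},
                         risk_flags=["negative word of mouth from a previous partner"]),
    ]
    for c in shortlist(candidates, weights):
        print(c.name, round(weighted_fit(c, weights), 2))

The point of the sketch is simply that feedback from the risk assessment (the flags) feeds back into the shortlist of potential partners, mirroring the feedback loops depicted in Figure 3.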
INSERT FIGURE 3 HERE
Figure 3: Partnership selection for co-creation of value (adapted from Seitanidi & Crane, 2009; elements of the figure: developing partnership criteria; assessing the different NPO or BUS options; assessing co-creation potential and transformational intent; assessing operational complementarity; formal risk assessment process, with an internal process of collecting interaction intelligence across partners and an external process of collecting intelligence from previous partners; informal risk assessment process, with an internal process of open dialogue among employees and informal meetings between NPO and BUS employees and an external process of open dialogue among similar organizations within the sector and collecting intelligence from organizations outside the sector; assessing potential sources of value loss; deciding the associational form: integrative/transformational partnership).

Partnership Design & Operations

Partnership design and operations encompass formal processes that influence the partnership implementation and are considered necessary to ensure desirable behavior (Geringer & Hebert, 1989) and to arrive at the anticipated outcomes. The literature has pointed to several design parameters and operating actions that contribute to partnering effectiveness. In social partnerships, Austin, Leonard, Reficco and Wei-Skillern (2006) suggested that social value is created by missions and design. The partnership design includes experimentation with the procedural and substantive partnership issues (Gray, 1989) by setting objectives and structural specifications (Glasbergen, 2007; Arya & Salk, 2006; Bryson, Crosby & Middleton Stone, 2006; Andreasen, 1996; Halal, 2001; Austin, 2000b; Googins & Rochlin, 2000), including rules and regulations (Das & Teng, 1998; Gray, 1989); deciding upon the commitment of resources (Bryson, Crosby & Middleton Stone, 2006; Berger, Cunningham & Drumwright, 2004; Austin, 2000a; Googins & Rochlin, 2000; Waddock, 1988); establishing leadership positions (Austin, 2000a; Waddock, 1986); deciding upon the organizational structures of the partnership (Berger, Cunningham & Drumwright, 2004; McCann, 1983), including decisions regarding the teams of each partner; drafting a Memorandum of Understanding (MoU); and agreeing on the partnership management (Seitanidi & Crane, 2009; Austin & Reavis, 2002). The above processes add structural and purpose congruency (Andreasen, 1996) to the partnership and take place at both the organizational and the collective level. Each organization internally debates its own priorities and interests and considers its own structures that will generate value at the organizational level. However, partners also discuss, debate, and negotiate processes and structures at the collective level (Clarke & Fuller, 2010; Bowen, Newenham-Kahindi, & Herremans, 2010; Bryson, Crosby & Middleton Stone, 2006) and co-design mechanisms (Seitanidi, 2008) that will collectively add value to the partnership. This is the first instance in which they embark on the collective implementation process, which requires coordination mechanisms (Bryson, Crosby & Middleton Stone, 2006; Selsky & Parker, 2005; Brinkerhoff, 2002; Milne, Iyer & Gooding-Williams, 1996).
The decisions gradually reach operationalization and structures take shape, passing through several adaptations due to internal or external factors (Austin, 2000a; Gray, 1989) that lead to the stabilization of partnership content, processes, and structures (Seitanidi & Crane, 2009) until the next cycle of iteration. The time required for the operationalization of processes and structures will depend in part on the resource complementarity between the partners; in the case of previous interactions between the partners, experimentation and adaptation might be incorporated into one step (Seitanidi & Crane, 2009; Seitanidi, 2010). Recently, the literature on social partnerships has presented factors that determine the social change potential within the partnership relationship. Seitanidi (2008) suggested that in order for a partnership to increase its social change potential, the partners are required to embrace their adaptive responsibilities, allowing them to move away from their limiting pre-defined roles and transcend a single dimension of responsibility in order to offer solutions to problems that require fundamental change. The above confirms our assertion that the company's CSR and perception of its responsibilities need to have evolved in order for it to be in a position to co-produce synergistic value; similarly, Le Ber and Branzei (2010b) proposed that deliberate role recalibration can tighten the coupling between social value creation and risk. As such, the above research stresses the need for change within the relationship in order for the organizations to contribute to the potential for change outside the relationship. The above processes constitute forms of formal control mechanisms in collaboration (Das & Teng, 1998). Informal measures of control, such as trust-based governance, may play a more important role in nonprofit-business partnerships (Rivera-Santos & Rufin, 2010), including managing alliance culture, which requires blending and harmonizing two different organizational cultures (Wilkof, Brown & Selsky, 1995). Other key processes include: charismatic leadership that can inspire employees to participate in the partnership (Bhattacharya, Sen & Korschun, 2008; Berger, Cunningham & Drumwright, 2004; Andreasen, 1996) and facilitate an emotional connection with the social cause (Austin, 2000a); forms of communication that enable the formation of trust (Austin, 2000a; Googins & Rochlin, 2000), mutual respect, openness, and constructive criticism towards both external and internal audiences (Austin, 2000a); continual learning (Bowen, Newenham-Kahindi, & Herremans, 2010; Senge, Dow & Neath, 2006; London & Rondinelli, 2003; Austin, 2000a); managing conflict (Seitanidi, 2010; Covey & Brown, 2001; Gray, 1989); and encouraging open dialogue (Elkington & Fennell, 1998). The above informal processes determine the alliance viability (Arya & Salk, 2006) and contribute to the co-creation of value. Although the formal measures are likely to be introduced at an early stage and play an important role in developing familiarity across the organizations, the informal measures are more likely to be effective in addressing tensions around indeterminacy and vagueness, balancing the interpretations between the partners (Ben, 2007; Orlitzky, Schmidt & Rynes, 2003), and managing uncertainty in the process of partnerships (Waddock, 1991) by exerting symbolic power that can influence individual organizations and industry macroculture (Harris & Crane, 2002).
The above informal measures are enablers of value, contributing to the creation and capture of value as it emerges and hence playing a role in preventing value erosion; they also align value more closely with intangible resources, e.g., reputation, trust, relational capital, learning, knowledge, joint problem-solving, communication, coordination, transparency, accountability, and conflict resolution, contributing to the co-creation of value. As such, the above constitute processes that produce benefits for both partners and society and generate interaction value. In addition, the nonprofit sector has multiple bottom lines and accountabilities towards its own stakeholders (Anheier & Hawkes, 2008; Mowjee, 2001; Commins, 1997; Edwards & Hulme, 1995) that the profit sector is required to respect during the process of engagement. Both partners are required to move their sense of responsibility from reactive and pro-active to adaptive in order to facilitate transformational interactions (Seitanidi, 2008). Such process adaptations take place at the organizational level of each partner, during the interaction of the partners, and at the collaborative level (Clarke & Fuller, 2010). Figure 4 below summarizes the partnership design and operations stage, which sets up the formal and informal structures and processes that will generate value and identifies and mobilizes the resources across the partners in order to recognize the resource complementarities that will determine the co-creation of value. The partners experiment with the design both individually within each organization and collectively. This is the first instance in which partners identify the value distance between their resources, goals, perceptions, and capabilities. In the next step the partners will embark on value frame fusion in order to reconcile iteratively their divergent value creation frames (Le Ber & Branzei, 2010c) and co-create synergistic value. The partnership design may be the end for some partnerships if the partners realize that their value distance is too great. The double arrows in Figure 4 demonstrate feedback loops across processes that lead to redesign and adaptations.

INSERT FIGURE 4 HERE
Figure 4: Partnership design and operations (elements of the figure: experimentation, setting up structures and processes for the co-creation of value, both organizational and collective; adaptations, iterations of processes and structures, both organizational and collective; operationalization, gradual stabilisation of processes and structures; exit strategy).

Partnership Institutionalization

A partnership has reached institutionalization when its structures, processes, and programmes are accepted by the partner organizations (Seitanidi & Crane, 2009) and their constituents and are embedded within the existing strategy, values, structures, and administrative systems of the profit and nonprofit organizations. Following the gradual stabilization of structures and processes (partnership operationalization), organizational and personal familiarization leads to the gradual institutionalization of the partnership relationship within both organizations. The level of institutionalization can be tested in two ways: (1) the extent to which the partnership remains intact regardless of crisis situations it may face, and (2) the extent to which the relationship sustains changes of key people in the partnership (e.g., the departure of the partnership manager) (Seitanidi & Crane, 2009).
Nonprofit-business partnerships represent contradictory value frames (Le Ber & Branzei, 2010b; Yaziji & Doh, 2009; Bryson, Crosby & Middleton Stone, 2006; Selsky & Parker, 2005; Teegen, Doh & Vachani, 2004; Austin, 2000; Gray, 1989; Waddock, 1988) due to the different sectors represented and their associated beliefs, motives, and logics. If the partners are to co-create socio-economic value, they are required to adjust their value frames to reach frame convergence (Noy, 2009) or frame fusion (Le Ber & Branzei, 2010b). Frame fusion is defined as "the construction of a new prognostic frame that motivates and disciplines partners' cross sector interactions while preserving their distinct contribution to value creation", preserving the identity and differences of each partner (Le Ber & Branzei, 2010b, p. 164). (Diagnostic frames are encoders of individuals' experiences that assist in the assessment of a problem, while prognostic frames are the use of those experiences in order to assess a possible solution (Le Ber & Branzei, 2010c; Kaplan, 2008).) Achieving value frame fusion (Le Ber & Branzei, 2010b) not only assists in overcoming the partners' differences but also allows for transformation of the "current means into co-created goals with others who commit to building a possible future" (Dew, Read, Sarasvathy, & Wiltbank, 2008, p. 983). Anticipating each partner's frame and intentionally adjusting one's own (Le Ber & Branzei, 2010c) consists of iterative processes, taking place in and as a result of interactions (Kaplan, 2008), that gradually allow for micro-adjustments leading to an alignment that increases the potential for identifying complementarities. The above process takes place by each partner perceiving the strategic direction of the partner's decisions (Kaplan, 2008), observing organizational change processes (Balogun & Johnson, 2004), participating in multiplayer interaction (Croteau & Hicks, 2003; Kaplan & Murray, 2008), and monitoring and interpreting each other's frames (Le Ber & Branzei, 2010c). Partners' conceptions of the environment and perceptions of their own role in the partnership can lead to variations in commitment (Crane, 1998). Hence, value frame fusion plays an important role in the alignment of perceptions and the creation of a mutual language by developing a vocabulary of meaning (Crane, 1998). We position the co-creation of synergistic value within the partnership institutionalization stage, as value frame fusion is likely to take place within an advanced relationship stage. Stafford, Polonsky and Hartman (2000, p. 122) provide evidence on how the partners align their socio-economic value frames in order to co-create "entrepreneurial innovations that address environmental problems and result in operational efficiencies, new technologies and marketable 'green' products". They demonstrate that in some cases partners may consciously decide to embark on a transformational collaboration (Stafford & Hartman, 2001); however, we assume that in most cases the social change or social innovation potential emerges within the process (London & Rondinelli, 2003; Austin, 2000a). If frame fusion is not successful, then it is likely that frame divergence will shape the degree to which the organization will pursue its strategy, if at all, and to what degree change will be created (Kaplan, 2008). In fact, "it is the interactions of individuals in the form of framing contests" that shape the outcomes (Kaplan, 2008, p. 744).
The plurality of frames and the existence of conflict (Glynn, 2000; Gray, 1989) within a partnership allow for divergent frames that can constitute opportunities for co-creation. Particularly novel tasks (Seitanidi, 2010; Le Ber & Branzei, 2010c; Heap, 2000) allow for balancing the potential bias associated with power dynamics (Utting, 2005; Tully, 2004; Millar, Choi & Chen, 2004; Hamman & Acutt, 2003; Crane, 2000; Bendell & Lake, 2000). Adaptations are essential for survival (Kaplan, 2008) and present opportunities at the individual, organizational, and sectoral levels (Seitanidi & Lindgreen, 2010) to unlearn and (re)learn how to frame and act collectively in order to develop a synergistic framework, essential for providing solutions to social problems. The value capture will depend on the interlinked interests of the partners, which will influence the level of institutionalization of the co-creation of value (Le Ber & Branzei, 2010a). After the frame fusion and co-creation of value, the institutionalization process enters a point of emerged collective meaning between the partner organizations, which requires a re-institutionalization of partnership processes, structures, and programs after each cycle of co-creation of value. When the partners have captured some value, either unilaterally or jointly (Le Ber & Branzei, 2010a; Makadok, 2001), a necessary prerequisite for the continuous co-creation of value, they are ready for the next iteration of co-creation of value. Innovation value is what reinvigorates and sustains the institutionalization of a partnership. Despite improvements in procedural aspects of partnerships, including independent monitoring of partnership initiatives (Utting, 2005) and the development of informal risk assessment processes (Seitanidi, 2010), partnerships still face concerns. Reed and Reed (2009) refer to: the accountability of partnerships, particularly to the beneficiaries; the appropriateness of the standards developed and the effectiveness and enforceability of the mechanisms they establish; and their role as mechanisms for greenwashing and for legitimizing self-regulation in order to keep state regulation at bay. Furthermore, the power asymmetries associated with NPO and BUS partners (Seitanidi & Ryan, 2007) and the exercise of control by corporate partners (Reed & Reed, 2009; Le Ber & Branzei, 2010a; Utting, 2005) in the process of interaction have fuelled concern from NPOs regarding the loss of control in decision making (Brown, 1991). Hence, shared (Austin, 2000a; Ashman, 2000) and consensus-based (Elbers, 2004) decision making and co-regulation (Utting, 2005) have been suggested in order to balance the power dynamics across the partners. Decentralizing control of the partnership implementation, by allowing multiple stakeholders to voice concerns within the implementation process and incorporating feedback loops (Clarke & Fuller, 2010), can address the previous criticisms. As such, decentralized social accountability checkpoints would need to be incorporated in the implementation of partnerships in order to increase societal determination by inviting suggestions from the ground and facilitating answerability, enforceability, and universality (Newell, 2002; Utting, 2005).
In effect, the co-creation of socio-economic value would be the result of a highly engaged and decentralized community of voices and would also allow for the diffusion of outcomes, pointing towards a participative, network perspective (Collier & Esteban, 1999; Heuer, 2011), including engagement with fringe stakeholders as a means to achieve creative destruction and innovation for the partners and society (Gray, 1989; Murphy & Arenas, 2010). The above expands the prioritization of a few stakeholders to the engagement of many stakeholders associated directly or indirectly with the partners, pointing towards what Gray (1989) termed "global interdependence". Hence, while in the previous philanthropic, transactional, and less so integrative stages partnerships concentrated on the nonprofit-business dyad, the more we move towards the transformational stage, the more the partnership requires the consideration, involvement, and prioritization of a plurality of stakeholders, suggesting a network perspective of stakeholders (Collier & Esteban, 1999; Rowley, 1997; Donaldson & Preston, 1995; Nohria, 1992; Granovetter, 1985). The President and CEO of Starbucks testifies to the efforts of business to broaden the engagement with stakeholders (in Austin, Gutiérrez, Ogliastri, & Reficco, 2007, p. 28): "[Our stakeholders] include our partners (employees), customers, coffee growers, and the larger community". As Austin, Gutiérrez, Ogliastri, and Reficco (2007, p. 28) remark, other companies include in their broadening definition of stakeholders "representatives of nonprofits, workers, and grassroots associations in their governance bodies, or create ad hoc bodies for them, such as advisory boards or social councils". The more inclusive the engagement, the higher the potential for co-creation of value on multiple levels, achieving plurality of frames and decreasing the accountability deficit of partnerships. As social betterment becomes more central in the integrative and transformational stages of collaboration, the role of engagement with multiple stakeholders becomes a key component in the co-creation process and in re-shaping the dialogue (Cornelius & Wallace, 2010; Fiol, Pratt & O'Connor, 2009; Barrett, Austin & McCarthy, 2002; Israel, Schulz, Parker, & Becker, 1998) by contributing diverse voices to the value frame fusion during the implementation process (Le Ber & Branzei, 2010c). Multi-stakeholder engagement during the partnership is the intentional maximization of interaction with diverse stakeholder groups, including latent and fringe groups (Le Ber & Branzei, 2010a; Murphy & Arenas, 2010; Mitchell, Agle & Wood, 1997), during the partnership implementation in order to increase the potential for value creation and allow for value capture on multiple levels. The co-creation process that aims to deliver social betterment (more in the transformational than the integrative stage) will assume a much larger and more diverse constituency. Embedding the partnership institutionalization across interested communities introduces a new layer of partnership institutionalization outside the dyad of the profit and nonprofit organizations. Figure 5 below presents the partnership institutionalization process based on the above discussion of the literature. The institutionalization process commences by embedding the partnership relationship within each organization.
After they reach value frame fusion, a re-institutionalization of partnership processes, structures, and programmes between the partners is required, based on the newly emerged shared perceptions. The inner circle of process change demonstrates the iterative processes of internal value creation that lead to the development of new capabilities and skills, passing through the frame fusion, the identification of complementarities, and the value perceptions of each partner. The external circle demonstrates the institutionalization of stakeholder and beneficiary voice in the partnership process, appearing as co-creation value cycle 1. Partnerships have the potential to deliver several cycles of value creation depending on the quality of the processes, the evolution of the partners' interests and capabilities, and changes in the environment. Value renewal is a prerequisite for the co-creation and capture of value. Partnerships may end unexpectedly, before the value capture by the partners or beneficiaries or after one value creation cycle, due to their dynamic character or due to external changes. The above testifies that the relationship process is the source of value for both partners and society.

INSERT FIGURE 5 HERE
Figure 5: Partnership institutionalization (elements of the figure: personal familiarization, developing personal relations; relationship mastering, managing crises and accepting differences as a source of value; identifying complementarities, use of generic and distinctive competences, bilateral and reciprocal exchange of resources, linking interests and aligning value perceptions (benefits and costs); partner frames A and B and partner value perceptions A and B passing through frame fusion, i.e., frame convergence while preserving differences; organizational and collective adaptations; co-creation of synergistic socio-economic value, with social innovation as an outcome; partner value capture A and B; process change; exit strategy; co-creation value cycle 1, with stakeholder groups 1, 2, and 3 as value beneficiaries).

London and Rondinelli (2003) employ the HBS case study by Austin and Reavis (2002) of the partnership between Starbucks and Conservation International (CI) in order to describe the partnership phases: the formation and the first meeting of the partners; the negotiation period, which lasted four months; and the partnership design, i.e., setting up core partnership operations, including training provided to local growers in organic farming methods by CI, the provision of organic seeds and fertilizers to farmers at nominal prices, giving them access to high-quality resources made possible by the funding provided by Starbucks, and setting up quality control mechanisms to sustain the coffee quality required by Starbucks. The outcomes of the partnership were: a 40% average increase in the farmers' earnings, 100% growth in the cooperatives' international coffee sales, and the provision of $200,000 to farmers in the form of loans through the local cooperatives. We used their description to unpack and describe below the co-creation process in partnerships that aim to deliver synergistic value.
During the formation, selection, and early design of the partnership, the partners initially have only information about each other, i.e., who Starbucks and CI are, their industry and product/service propositions, and their interest in developing a collaboration with an organization from a different economic sector. The basic information about the key product/service proposition gradually increases, first among the members of the partnership team, and later diffuses to other departments of the organization. Due to the intensification of the interactions, the information is gradually transformed into knowledge, i.e., the meetings and intensified interactions facilitate the transformation of information into knowledge (e.g., why Starbucks is interested in CI, how they are planning to work with a partner, under what conditions, what is unique about the partner's product/service proposition, and what the constituent elements of the partner's identity/product/service are). The explicit knowledge about each other gradually increases and is combined with the increased familiarity, due to the interactions, that incorporates tacit knowledge about each other (e.g., how the organization works, the mechanisms and processes it has in place, and the culture of the organization). When tacit knowledge meets positive informal conditions that lock in the emotional involvement of the partners within the interactions, a higher level of knowledge is exchanged, with enthusiasm and pride and with the explicit aim of sharing the unique resources of the organization. As the partnership progresses, the knowledge about the partner organization, its resources, and its use of resources becomes deeper, and for the members of the partnership teams the knowledge about their partner turns into a capability, i.e., at this stage each partner is able to apply the knowledge in the context of its own organization. Having arrived at a deep mutual knowledge of each other's organizations and the development of new capabilities, the partners are able to speak the "same language" and embark on the co-creation process, which may involve the creation of new products and services and the co-creation of new skills that they will be able to apply in the domain of common interest where the collaborative strategy takes place, resulting in change or social innovation. Figure 6 below demonstrates the process we describe above: how the sector/organization-based information turns into concrete knowledge and then into a capability that can be applied in the context of the partner organization; due to the multiple uses of such new capabilities, partners are able to develop new products/services that constitute social innovation or change, as they contribute positively to society or minimize previous harm.

INSERT FIGURE 6 HERE
Figure 6: From information to knowledge, to capability, to change and innovation (elements of the figure: NPO and BUS information; NPO and BUS knowledge; NPO and BUS capability; new capability; social innovation; change; note: the change cloud is connected with the nonprofit capability by a standard shape connector that does not denote any particular meaning).

The partnership implementation is the value creation engine of cross-sector interactions, where internal and external change and innovation can be either planned or emergent.
The co-creation process requires the partners' interests not only to be linked but also to be embedded in the local communities of beneficiaries and stakeholders, in order to incorporate perceptions of value beyond the partnership dyad and hence facilitate value capture and diffusion on different levels. In the next section we discuss the evaluation of the partnership implementation before proceeding to the partnership outcomes section.

Evaluation of Partnership Implementation

Process outcomes, in contrast to the programmatic outcomes we discuss in the next section, concentrate on how to improve the efficiency and effectiveness of the partnership implementation process (Brinkerhoff, 2002). Continuous assessment during the implementation is an important part of the partnership process as it can improve service delivery, enhance efficiency (Brinkerhoff, 2002), assist in making tactical decisions (Schonberger, 1996), propose adjustments in the process, and, importantly, "explain what happened and why" (Sullivan & Skelcher, 2003). It can also encourage the involvement of beneficiaries and stakeholder groups in order to include their voices in the process (Sullivan & Skelcher, 2003). Furthermore, the process assessment can provide indications of how to strengthen long-term partnership value creation (Kaplan & Norton, 1992) and in effect avoid delays in achieving impact (Weiss, Miller Anderson & Lasker, 2002). Difficulties associated with setting, monitoring, and assessing process outcomes include measurement (e.g., articulating the level of familiarization between members of the partnership, monitoring the evolution of relations, and assessing the level of partnership institutionalization) (Shah & Singh, 2001) and attribution, i.e., "how can we know that this particular process or institutional arrangement causes this particular outcome" (Brinkerhoff, 2002, p. 216). Hence, evaluation frameworks for the implementation of partnerships are relatively scarce (El Ansari & Weiss, 2005; Dowling, Powell & Glendinning, 2004; El Ansari, Phillips & Hammick, 2001). Frameworks exist for the evaluation of performance in partnerships in general (Huxham & Vangen, 2000; Audit Commission, 1998; Cropper, 1996), for the assessment of public sector networks (Provan & Milward, 2001), for urban regeneration (Rendon, Gans & Calleroz, 1998), and more frequently in the health field (Markwell, Watson, Speller, Platt & Younger, 2003; Hardy, Hudson & Waddington, 2000; Watson, Speller, Markwell & Platt, 2000); no framework, to our knowledge, concentrates on the nonprofit-business dyad. Brinkerhoff (2002, p. 216) suggested that "we need to examine partnerships both as means and as end in itself". Provan and Milward (2001) proposed a framework for the evaluation of public sector networks at the level of (1) the community, (2) the network (e.g., number of partners, number of connections between organizations, range of services provided), and (3) the organization/participant. Brinkerhoff (2002) criticized the above framework, suggesting that it neither examines the quality of the relationship among the partners nor offers suggestions that can improve the outcomes.
Criteria for relationship evaluation in the health field include: "willingness to share ideas and resolve conflict, improve access to resources, shared responsibility for decisions and implementation, achievement of mutual and individual goals, shared accountability of outcomes, satisfaction with relationships between organizations, and cost effectiveness" (Leonard, 1998, p. 5). Interestingly, the Ford Foundation Urban Partnership Program, in the education field, provided an example of partnership relationship assessment (Rendon, Gans & Calleroz, 1998) which included the partner stakeholders agreeing on their own indicators. Brinkerhoff's (2002) assessment approach addresses three aims: "1/ improve the partnership practice in the context of programme implementation; 2/ refine and test hypothesis regarding the contribution of the partnership in the partnership performance and outcomes and 3/ suggest lessons for future partnership work in order to maximise its potential to enhance outcomes" (Brinkerhoff, 2002, p. 216). Her framework, incorporating qualitative and quantitative indicators, emphasizes relationship outcomes and addresses the evaluation challenges of integrating both process and institutional arrangements in performance measurement, allowing for continuous assessment and encouraging dialogue and a shared understanding. With regard to the synergistic results of partnerships, which are usually not well articulated and measured (Brinkerhoff, 2002; Dobbs, 1999), an interesting quantitative study on health partnerships (Weiss, Miller Anderson & Lasker, 2002) suggested that assessing the level of synergy in partnerships provides a useful way to determine the degree to which the implementation process is effective prior to measuring the impacts of partnerships. They conceptualized synergy at the partnership level as "combining the perspectives, knowledge and skills of diverse partners in a way that enables a partnership to (1) think in new and better ways about how it can achieve its goals; (2) plan more comprehensive, integrated programs; and (3) strengthen its relationship to the broader community" (Weiss, Miller Anderson & Lasker, 2002, p. 684). The study examined the following dimensions of partnership functioning that they hypothesized to be related to partnership synergy: leadership, administration and management, efficiency, nonfinancial resources, partner involvement challenges, and community-related challenges. The findings demonstrate that partnership synergy is closely associated with effective leadership and partnership efficiency. Regarding leadership, high levels of synergy were associated with "facilitating productive interactions among the partners by bridging diverse cultures, sharing power, facilitating open dialogue, and revealing and challenging assumptions that limit thinking and action" (Weiss, Miller Anderson & Lasker, 2002, p. 693). These findings are in agreement with previous research suggesting that leaders who are able to understand the differences across sectors and perspectives, empower partners, and act as boundary spanners are important for partnerships (Alter & Hage, 1993; Wolff, 2001; Weiner & Alexander, 1998). Furthermore, partnership efficiency, i.e., the degree to which the partnership makes optimal use of the partners' time, financial, and in-kind resources, also had a significant effect on synergy. The above are some of the factors that influence the implementation and can potentially be set up by design (Austin & Reavis, 2002).
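As a purely illustrative sketch of the kind of process assessment discussed above, the snippet below averages ratings on the six dimensions of partnership functioning examined by Weiss, Miller Anderson and Lasker (2002) into a single score. The 1-5 rating scale, the simple mean, and the reporting format are our own assumptions for illustration; they do not reproduce the published instrument or its validated measures.

# Hypothetical sketch: aggregating ratings on dimensions of partnership
# functioning into a single process-assessment score. The 1-5 scale and the
# simple mean are illustrative assumptions, not the published instrument.

DIMENSIONS = [
    "leadership",
    "administration and management",
    "efficiency",
    "nonfinancial resources",
    "partner involvement challenges",
    "community-related challenges",
]


def synergy_score(ratings: dict) -> float:
    """Mean rating (1-5) across the dimensions listed above."""
    missing = [d for d in DIMENSIONS if d not in ratings]
    if missing:
        raise ValueError(f"Missing ratings for: {missing}")
    return sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)


if __name__ == "__main__":
    # Example ratings collected from partnership team members (illustrative).
    example = {
        "leadership": 4,
        "administration and management": 3,
        "efficiency": 4,
        "nonfinancial resources": 3,
        "partner involvement challenges": 2,
        "community-related challenges": 3,
    }
    print(f"Process-assessment score: {synergy_score(example):.2f} / 5")

Such a summary score could, at most, flag partnerships whose implementation processes warrant closer qualitative examination; it is not a substitute for the relationship-level assessment the cited frameworks call for.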
One of the most detailed assessment tools, by Markwell, Watson, Speller, Platt and Younger (2003), looks at six major areas of implementation: leadership, organization, strategy, learning, resources, and programs; each element is divided into several sections, providing a well-elaborated tool. It assesses issues such as: the level of representation of each partner within the partnership relationship, the extent to which the partnership builds on each partner's individual way of working, whether the partnership has a community involvement strategy in place, whether multidisciplinary training in partnership skills is considered, and whether partners have been able to manage conflict, among other issues. All of the above questions aim at addressing the process of co-creation of value and allow for re-designing the partnership operations in a more efficient and effective way. In the above section we looked at partnership processes, as constructive exchanges (King, 2007), that have the potential to be critically important in providing solutions to social problems. Partnerships are laboratories of social change as they have the ability to internalize externalities and transform them into solutions, innovation, and thereby social change. The basic assumption of most research in social partnerships is that organizations interact for private gain and to improve their own welfare (King, 2007). We contend that corporations and nonprofit organizations interact for private and societal gain, and this interaction improves the welfare of both parties and society. In the next section we move to the discussion of partnership outcomes, the different levels of value capture, and methods for the evaluation of outcomes.

SYNERGISTIC VALUE OUTCOMES: LOCI, EXCELLENTIA AND DISTINCTUS OF VALUE

We are experiencing an unprecedented proliferation of "accelerated interdependence" (Austin, 2000b, p. 69) across the public, profit, and nonprofit sectors due to the double devolution in functions (from central governments to local authorities) and in sectors (from the public to the private and nonprofit) (Austin, 2000b). The increasing fiscal needs of the public and nonprofit sectors contribute to the diffusion of responsibilities, promoting cross-sector collaboration as an effective and efficient approach to manage assets and provide solutions to social problems (Austin, 2000b). However, the intense needs for resources can capture the critical role of the state and in some cases of the nonprofit sector (Seitanidi, 2010; Bendell, 2000a,b; Raftopoulos, 2000; Mitchell, 1998; Ndegwa, 1996). Hence, criticism of partnerships (Reed & Reed, 2009; Biermann, Chan, Mert & Pattberg, 2007; Hartwich, Gonzalez & Vieira, 2005) and of the outcomes achieved (Austin, 2010; Seitanidi, 2010; Brinkerhoff, 2007) is not a surprise, but rather a call for a paradigm change. The examination of nonprofit-business partnership outcomes (Selsky & Parker, 2005) is an evolving area in practice and research, particularly when the focus is not only on the benefits for the partners but also for society (Austin, 2010; Seitanidi & Lindgreen, 2010; Margolis & Walsh, 2003; Austin, 2000). Although what makes collaboration possible is "the need and the potential" for benefit (Wood & Gray, 1991, p. 161), given that social partnerships aim to address social issues (Waddock, 1988), the definition of what constitutes positive partnership outcomes "should encompass the social value generated by the collaboration" (Austin, 2000b, p. 77) on different levels.
The shift in the literature from social partnerships (Waddock, 1988) to strategic partnerships (Warner & Sullivan, 2004; Birch, 2003; Elkington & Fennell, 2000; Andrioff, 2000) seems to be turning full circle as newfound significance is assigned to collective impact (Kania & Kramer, 2010), social value measurement (Mulgan, 2010), and the very recent creation of a new class of assets, named 'impact investments' by JP Morgan and the Rockefeller Foundation, that aim to "create positive impact beyond the financial return" (O'Donohoe, Leijonhufvud, Saltuk, Bugg-Levine, & Brandeburg, 2010, p. 5): "… investors rejecting the notion that they face a binary choice between investing for maximum risk-adjusted returns or donating for social purpose, the impact investment market is now at a significant turning point as it enters the mainstream. … Impact investments are investments intended to create positive impact beyond financial return. As such, they require the management of social and environmental performance (for which early industry standards are gaining traction among pioneering impact investors) in addition to financial risk and return. We distinguish impact investments from the more mature field of socially responsible investments ("SRI"), which generally seek to minimize negative impact rather than proactively create positive social or environmental benefit." The significance of impact investments, supported by two global institutions, a traditionally financial JP Morgan and an integrally social Rockefeller Foundation, lies in the institutionalization of the paradigm shift and in the change in the signification of what constitutes value. Reconfiguring the meaning of financial value by incorporating social value as a pre-condition for the inclusion of business in these assets is of critical importance. In the report by O'Donohoe, Leijonhufvud, Saltuk, Bugg-Levine, & Brandeburg (2010, p. 7) the pre-condition reads: "The business (fund manager or company) into which the investment is made should be designed with intent to make a positive impact. This differentiates impact investments from investments that have unintentional positive social or environmental consequences". Socio-economic value creation enters the mainstream not only as a suggestion from philanthropy and the social sector, but also as a condition from the markets signalling what constitutes a priori an acceptable outcome. The re-constitution of value creates a unique opportunity for intentional social change mechanisms to provide opportunities for social impact as forms of superior value creation for economic and social returns, not only for the few but for the many. In order to assess whether nonprofit-business partnerships constitute such intentional mechanisms for social change and innovation, we need to locate where value is created (the loci of value creation), how the value is assessed (the excellentia of value creation), and whether the value created can make a difference to society (the distinctus of value creation), which we discuss in the following sections. (In Latin, locus refers to place, location, situation, or spot, and loci is the plural, i.e., where we position the value creation; excellentia refers to excellence, merit, or worth, i.e., the worth of the value creation; distinctus refers to difference, i.e., the difference made by the value creation.)

Where Value Is Created: Loci of Value Creation

An important constituent of our framework is establishing the loci of value creation while incorporating multi-level value assessment by introducing three levels of analysis: organizational, individual, and societal. The focus in this element of the framework is on who benefits from the collaboration. Collaborations generate value, often simultaneously, at multiple levels: meso, micro, and macro.
For our purpose of examining value, we distinguish two loci: within the collaboration and external to it. Internally, we examine value accruing at the meso and micro levels for the partnering organizations and the individuals within those organizations. Externally, we focus on the macro or societal level, where social welfare is improved by the collaboration in the form of benefits at the micro (to individual recipients), meso (other organizations), and macro (systemic changes) levels.

Internal Value Creation

Meso level - The most common focus in the literature and in practice is on the value accruing to the partners, that is, the organizational benefits that enhance the performance of the company or the nonprofit. Below we discuss in turn the benefits for the business and for nonprofits. For companies, the cited business benefits of collaboration summarized here include enhancement of: company and brand reputation and image (Yaziji & Doh, 2009; Greenall & Rovere, 1999; Heap, 1998); legitimacy (Yaziji & Doh, 2009); corporate values (Austin, 2000b; Crane, 1997); community and government relations (Seitanidi, 2010; Pearce & Doh, 2005; Austin, 2000a); employee morale, recruitment, motivation, skills, productivity, and retention (Bishop & Green, 2008; Googins & Rochlin, 2000; Pearce & Doh, 2005; Turban & Greening, 1997); consumer preference (Heal, 1998; Brown & Dacin, 1997); market intelligence and development (Milne, Iyer & Gooding-Williams, 1996); market, product, and process innovation and learning (Austin, 2000b; Googins & Rochlin, 2000; Kanter, 1999); stakeholder communication and accountability (Bowen, Newenham-Kahindi & Herremans, 2010; Pearce & Doh, 2005; Andreasen, 1996); external risk management (Selsky & Parker, 2005; Tully, 2004; Wymer & Samu, 2003; Bendell, 2000a; Das & Teng, 1998); competitiveness (Porter & Kramer, 2002); innovation (Yaziji & Doh, 2009; Stafford, Polonsky, & Hartman, 2000; Austin, 2000a); and adaptation of new management practices due to the interaction with nonprofit organizations (Drucker, 1989). As a result, the financial performance and corporate sustainability can be strengthened. In the above cases the value of the partnership is located within the partner organizations. On the other hand, business can incur costs, including an increased need for resource allocation and skills; increased risk of losing exclusivity in social innovation (Yaziji & Doh, 2009); internal and external scepticism and scrutiny (Yaziji & Doh, 2009); potential for reduced competitiveness due to open access innovation (Stafford, Polonsky, & Hartman, 2000); and increased credibility costs in case of unforeseen partnership exit or reputational damage due to the missed opportunity of making a difference (Steckel, Simons, Simons & Tanen, 1999).
For nonprofits, the summarized cited benefits of collaboration include: financial support received from the business (Yaziji & Doh, 2009; Brown & Kalegaonkar, 2002; Googins & Rochlin, 2000; Galaskiewicz, 1985); increased visibility (Seitanidi, 2010; Gourville & Rangan, 2004; Austin, 2000); credibility and opportunities for learning (Yaziji & Doh, 2009; Austin, 2000b; Googins & Rochlin, 2000; Huxham, 1996); development of unique capabilities and knowledge creation (Porter & Kramer, 2011; Yaziji & Doh, 2009; Hardy, Phillips & Lawrence, 2003; Googins & Rochlin, 2000; Gray, 1989; Huxham, 1996); increased public awareness of the social issue (Gourville & Rangan, 2004; Waddock & Post, 1995); increase in support for the organizational mission (Pearce & Doh, 2005); access to networks (Millar, Choi & Chen, 2004; Yaziji & Doh, 2009; Heap, 1998); technical expertise (Vock, van Dolen & Kolk, 2011; Seitanidi, 2010; Austin, 2000a); increased ability to change behaviour (Gourville & Rangan, 2004; Waddock & Post, 1995); opportunities for innovation (Holmes & Moir, 2007; Stafford, Polonsky, & Hartman, 2000); opportunities for process-based improvements (Seitanidi, 2010); increased long-term value potential (Le Ber & Branzei, 2010a, b; Austin, 2000a, b); increase in volunteer capital (Vock, van Dolen & Kolk, 2011; Googins & Rochlin, 2000); positive organizational change (Seitanidi, 2010; Glasbergen, 2007; Waddock & Post, 2004; Murphy & Bendell, 1999); and sharing leadership (Bryson & Crosby, 1992). As a result, the attainment of the nonprofit's social mission can be strengthened. Costs for the nonprofit organizations are often reported to be greater than the costs for business (Seitanidi, 2010; Yaziji & Doh, 2009; Ashman, 2001) and may include a decrease in potential donations due to the high visibility of a wealthy partner (Gourville & Rangan, 2004); an increased need for resource allocation and skills (Seitanidi, 2010); internal and external scepticism, ranging from a decrease in volunteer and trustee support to reputational costs (Yaziji & Doh, 2009; Millar, Choi & Chen, 2004; Rundall, 2000); a decrease in employee productivity; increased costs due to a partner's unforeseen exit from the partnership; concerns over the effectiveness and enforceability of the developed mechanisms; and serving as a legitimizing mechanism of "greenwashing" (Utting, 2005).

Micro level - Collaborations can produce benefits for individuals within the partnering organizations. This value can be twofold: instrumental and psychological. On the practical side, working in cross-sector collaboration can, for example, provide new or strengthened managerial skills, leadership opportunities, technical and sector knowledge, and broadened perspectives. On the emotional side, the individual can gain psychic satisfaction from contributing to social betterment and developing new friendships with colleagues from the partnering organization. The micro level benefits are largely under-explored in the literature despite the broad acceptance that implementing CSR programmes should benefit a wide range of stakeholders beyond the partner organizations (Green & Peloza, 2011; Vock, van Dolen & Kolk, 2011; Bhattacharya & Sen, 2004), including employees and consumers. In a recent study, Vock, van Dolen and Kolk (2011) argue that the participation of employees in partnerships can affect consumers either favorably or unfavorably.
The effect on consumers will depend on how they perceive the employees' involvement with the cause, i.e., whether they perceive that during work hours the cause distracts employees from serving customer needs well. Bhattacharya, Sen and Korschun (2008) reported that a company's involvement in CSR programs can satisfy several psychological needs, including personal growth, the employees' own sense of responsibility for the community, and a reduction in levels of stress. A precondition of the above is that employees should get involved in the relevant programs. More instrumental benefits comprise the development of new skills; building a connection between the company and the employee, particularly when there are feelings of isolation due to physical distance between the employee and the central office; potential career advancement (Burchell & Cook, 2011); and using the resultant positive reputation as a "shield" for the employee when local populations are negative towards the company (Bhattacharya, Sen, & Korschun, 2008). Similar psychological mechanisms associated with the enthusiasm of employees have the potential to cause spillover effects, triggering favourable customer reactions (Kolk, Van Dolen & Vock, 2010). Employee volunteering, an important component of partnerships (Austin, 2000a), may improve work motivation and job performance (Bartel, 2001; Jones, 2007), customer orientation, and productivity, and in effect benefit consumers (Vock, van Dolen & Kolk, 2011). The partnership literature makes extensive reference to partnership outcomes, concentrating more on the benefits than the costs that contribute to value creation internally, either for the profit or the nonprofit partners, as demonstrated above. However, there is a notable lack of systematic in-depth analysis of outcomes beyond the descriptive level; in effect, the full appreciation of the benefits and costs remains unexplored. The majority of the literature discusses outcomes as part of a partnership conceptual framework or by reporting outcomes as one of the partnership findings. A limited number of studies address outcomes as a focal issue and offer an outcomes-centred conceptualization (Hardy, Phillips & Lawrence, 2003; Austin & Reavis, 2002). The above is surprising, as partnerships are related to improved outcomes; furthermore, as an interdisciplinary setting, partnerships have been associated with the potential to link different levels of analysis (Seitanidi & Lindgreen, 2010) and practices across sectors (Waddock, 1988), and to address how society is better off as a result of cross-sector interactions (Austin, 2000a). A precondition to addressing the above is to study the links across levels and loci of benefits. As Bhattacharya, Korschun and Sen (2009) remark, in order to understand the full impact of CSR initiatives we first need to understand how CSR can benefit individual stakeholders. Similarly, Waddock (2011) refers to the individual level of analysis as the "difference makers", comprising the fundamental element for the development of institutional pressures. Hence, both the effects of initiatives on individuals and the role of individuals in affecting value creation require further analysis at the micro level. Table 1 below presents the categorization of benefits on different levels of analysis and according to the loci of value. Understanding the links across the different levels of value creation and value capture is challenging.
Interestingly, the most recent research on the micro level of analysis leads the way in capturing the interaction level across the internal/external dimension of benefits (employees/customers) (Vock, Van Dolen & Kolk, 2011; Kolk, Van Dolen & Vock, 2010). The conceptualization of the links between employees and customers heralds a new research domain that captures the missing links of cause and effect in partnerships, either directly or indirectly, and focuses on interaction as a level of analysis. In Table 1, value creation is also divided according to the production of 'first order' (direct transfer of monetary funds) and 'second order' benefits and costs (e.g., improved employee morale, increased productivity, a better motivated sales force) (Gourville & Rangan, 2004), providing a time and value dimension in the categorization.
INSERT TABLE 1 HERE
External Value Creation
Macro level - Beyond the partnering organizations and their individuals, collaborations aim to generate social and economic value for the broader external community or society. While actions that alleviate problems afflicting others can take countless forms, we define collaborative value creation at the macro level as societal betterment that benefits others beyond the collaborating organizations and that is due to their joint actions. External to the partnering organizations, the collaboration can create social value for individuals: targeted beneficiaries whose needs are attended to by the collaborative action. It can also strengthen other social, economic, or political organizations that are producers of social value, and hence increase society's capacity to create social well-being. At a broader societal level the collaboration may also contribute to welfare-enhancing systemic change in institutional arrangements, sectoral relationships, societal values and priorities, and social service and product innovations. The benefits accruing to the partnering organizations and their individuals internal to the collaboration are ultimately due to the value created external to the social alliance. Ironically, while societal betterment is the fundamental purpose of cross-sector collaborative value creation, this is the value dimension that is least thoroughly dealt with in the literature and in practice. We provide examples of value creation external to the partnership in Table 1. On the macro level the benefits for individuals or beneficiaries include the creation of value for customers, as we have seen above: an indirect benefit (Vock, Van Dolen & Kolk, 2011; Kolk, Van Dolen & Vock, 2010) mediated by the direct benefit that accrues to the employees as a result of partnerships. Creating direct value for customers is an important distinction between philanthropic and integrative/transformational interactions for socio-economic benefit (Reficco & Marquez, 2009). Rufin and Rivera-Santos (2008) pointed to the linearity that characterizes business value chains, i.e., "a sequential process in which different actors members contribute to value creation in a chronological sequence, with each member receiving a product and enhancing it through the addition of value before handing to the next" (Reficco & Marquez, 2009, p. 6). However, in nonprofit-business partnerships the duality of the nature of benefits (economic and social) exhibits non-linearity (Reficco & Marquez, 2009) in the process of value creation. Hence, the isolation and attribution of socio-economic benefit is rather complex.
An example of a socio-economic customer benefit derives from the collaboration between HP and mPedigree, an African social enterprise. The cloud and mobile technology solution they developed allows customers in Africa to check the genuineness of drugs and avoid taking counterfeits, which in effect saves lives (Bockstette & Stamp, 2011). Individuals that may benefit from partnerships include the beneficiaries of the partnership programs, such as dairy farmers receiving support in rural areas, women in rural India for whom jobs were created (Bockstette & Stamp, 2011), or coffee farmers in Mexico whose incomes increased by 40% while the quality of the coffee produced for Starbucks's customers also improved (Austin & Reavis, 2002). Costs might include accountability and credibility issues and possible problems with administering the solution. The benefits for other organizations result from the complexity that surrounds social problems and the interdependence across organizations and sectors. Addressing poverty requires tackling issues in education and health; hence, administering a solution crosses other organizational domains that interface with the central issue of the partnership. For example, the partnership between Starbucks and Conservation International, which aimed at improving the quality of coffee for the company's customers and increasing the income of Mexican farmers, also produced 100% growth in the local cooperatives' coffee sales and, in addition, resulted in the development of another partnership between the company and Oxfam (Austin & Reavis, 2003). Potential costs include expenses for the development of new markets and the appropriateness of the standards developed. The overall benefits, for example, of reduced pollution and deaths, increased recycling, and improved environmental standards result in value to society at large, benefiting many people and organizations either directly or indirectly. For example, by reducing drug abuse, society benefits through lower work-time loss, fewer health problems, and lower crime rates related to drugs (Waddock & Post, 1995). Moving to systemic benefits, these can include other organizations' adoption of technological advances through available open innovation/intellectual property and changes in the processes of "doing business" that may result in industry-wide changes. Examples include developing environmentally friendly technology between a firm and an environmental organization in order to decrease environmental degradation, in effect creating new industry standards (Stafford, Polonsky, & Hartman, 2000); changing a bank's lending policies in order to facilitate job creation for socially disadvantaged young people, leading to change in banking industry policies (Seitanidi, 2008); contributing to the development of community infrastructure; increasing the paid-time allocation for employee community service; and developing a foundation that supports community initiatives (Austin, 2000a). In all the above examples the value is located outside the partner organizations. In cases where partners raise claims that cannot be substantiated, possible costs can include a decrease in the credibility of the institution of partnerships to deliver societal benefits, an increase in cynicism, and a potential decrease in institutional trust in business and nonprofit organizations. Waddock and Post (1995) suggested that catalytic alliances focus their efforts for a brief period of time on generating public awareness, through the media, of complex and worsening social problems.
Some of the characteristics of catalytic alliances are quite different from those of nonprofit-business partnerships (temporary nature vs. long term; direct vs. indirect long-term shifts in public attitude). However, they have some unique characteristics that can potentially be beneficial for partnerships: they are driven by a core central vision rather than the instrumentality that predominantly characterizes cross-sector partnerships (Selsky & Parker, 2005). Hence, catalytic alliances successfully link the work of previously fragmented agencies that used to work on related issues (e.g., hunger and homelessness) (Waddock & Post, 1995, p. 959). Equally, they allow for an expectation gap to emerge "between the current state of action on an issue and the public's awareness of the issue. The 'expectations gap' actually induces other organizations and institutions to take action on the issue. … the money paled by comparison to the organizational process stimulated" (Waddock & Post, 1995, p. 959). Social partnerships develop socio-economic value for a broad constituency. Hence, they address the societal level and function increasingly as governance mechanisms (Crane, 2010) while providing diverse and multiple benefits. In effect, they will be required to move from an instrumental to an encompassing normative approach focusing on a central vision, which can assist in engaging internal and external stakeholders early on and produce a "catalytic-or ripple-effect" (Waddock & Post, 1995) that will be beneficial on all levels of analysis, directly or through the virtuous circle of value creation.
How Value is assessed: Excellentia (worth) of Value Creation
"The perceived worth of an alliance is the ultimate determinant of, first whether it will be created and second whether it will be sustained" (Austin, 2000b, p. 87). A necessary prerequisite for the continuous co-creation of value is the ability of each partner to capture some of the value, either unilaterally or jointly, during value cycles (Le Ber & Branzei, 2010a; Makadok, 2001), not always in proportion to each partner's value generation, as value capture is not dependent on value generation (Lepak, Smith & Taylor, 2007). The co-creation of economic (EV) and social (SV) value in partnerships should be more than, or different from, the value originally created by each organization separately, as this remains a strong motivation for the partners to engage in long-term interactions. In order to assess the socio-economic value of the partnership outcomes created, the partners are required to define economic and social value. For both businesses and nonprofits, EV is "defined as financial sustainability; i.e., an organization's capacity to operate indefinitely" (Márquez, Reficco & Berger, 2010, p. 6). On the other hand, SV has been associated in the context of partnerships with "meet(ing) society's broader challenges" (Porter & Kramer, 2011, p. 4); similarly, with "meeting social needs in ways that improve the quality of life and increase human development over time" (Hitt, Ireland, Sirmon & Trahms, 2011, p. 68), including attempts "that enrich the natural environment and/or are designed to overcome or limit others' negative influences on the physical environment" (ibid). Although previously doing well and doing good were separate functions associated with different sectors, today they are seen as "manifestations of the blended value proposition" (Emerson, 2003, p. 35) or of the more recent "shared value" (Porter & Kramer, 2011).
The value capture on the different levels of analysis is dependent on the source that initiates the creation of value (Lepak & Smith, 2007), as internal and external stakeholders of the partnership may hold different perceptions as to what is valuable due to different "knowledge, goals, context conditions that affect how the novelty and appropriateness of the new value will be evaluated" (ibid, p. 191). Outcome assessment in partnerships is likely to increase as cross-sector collaborations proliferate (Sullivan & Skelcher, 2003) and there will be more pressure to understand the consequences of partnerships (Biermann, Mol, & Glasbergen, 2007). Different forms of collaboration will have varying degrees of evaluation difficulty, associated with the availability and quality of data and with the experience of organizations in employing both qualitative and quantitative measures of assessment. Some of the difficulties in assessing the socio-economic value in partnerships are: 1/ the subjectivity associated with valuing the outcomes, i.e., what is considered acceptable, appropriate and of value, and for whom (Mulgan, 2010; Lepak & Smith, 2007; Austin, 2003; Amabile, 1996); 2/ the variation in stakeholders' valuations of a company's CSR implementation programs by country and culture (Endacott, 2003); 3/ the attribution to a particular program, particularly for companies that have a sophisticated CSR portfolio of activities (Peloza & Shang, 2010; Peloza, 2009) or a portfolio of partnerships (Austin, 2003; Hoffman, 2005); 4/ the lack of consistency in employing CSR metrics (Peloza & Shang, 2010; Peloza, 2009); 5/ the fact that many companies lack an explicit mission statement for their social performance activities against which they would have to perform (Austin, Gutiérrez, Ogliastri & Reficco, 2007); 6/ the attribution of a particular outcome to a specific partnership program (Brinkerhoff, 2002); 7/ combining all the elements of a partnership relationship; and 8/ methodological challenges in measurement due to the intangible character of many outcomes associated with partnerships and the requirement to distinguish documented, likely, and perceived effects of partnerships (Jorgensen, 2006; Sullivan & Skelcher, 2003). Austin, Stevenson and Wei-Skillern (2006) summarize the difficulties concisely: "The challenge of measuring social change is great due to nonquantifiability, multicausality, temporal dimensions, and perspective differences of the social impact created". Despite the above difficulties, Peloza (2009) suggests three key reasons why businesses should aim to strengthen their financial metrics regarding CSP: 1/ as a method to facilitate cost-effective decision making; 2/ as a measure to avoid interference in the allocation of resources due to the lack of hard data; and 3/ as an instrument to enable the inclusion of CSP budgets in the mainstream budgeting of companies. Similarly, demands for metrics that measure the social value created by nonprofit organizations have emerged due to: 1/ the need to demonstrate the effectiveness of programs to foundations; 2/ the need to justify the continuation of funding from public authorities; 3/ the need to provide investors with hard data similar to those used to measure profit; and 4/ the need to demonstrate impact to all stakeholders (funders, beneficiaries, partners). Mulgan (2010) provides a synopsis of ten of the hundreds of methods that exist for calculating social value, methods which are often competing.
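To make concrete how such ratio-style metrics work, the following is a minimal illustrative sketch (in Python, with entirely invented figures and an assumed discount rate; it is not a prescribed method from the literature) of a social return on investment (SROI) type calculation of the kind surveyed by Mulgan and discussed below: monetized social benefits are discounted and compared with the investment made. The subjectivity highlighted among the difficulties above enters through the monetization and discounting assumptions rather than through the arithmetic itself.

    # Illustrative sketch only: a simple SROI-style ratio with hypothetical figures.
    # The benefit values, discount rate, and investment below are all invented.

    def present_value(yearly_values, discount_rate):
        """Discount a list of yearly monetized benefits back to today."""
        return sum(v / (1 + discount_rate) ** t for t, v in enumerate(yearly_values, start=1))

    monetized_benefits = [40_000, 55_000, 60_000]  # assumed social benefits for years 1-3
    investment = 100_000                           # assumed upfront partnership investment
    discount_rate = 0.05                           # assumed social discount rate

    sroi_ratio = present_value(monetized_benefits, discount_rate) / investment
    print(f"Illustrative SROI ratio: {sroi_ratio:.2f}")  # roughly 1.40 with these assumptions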
Mulgan notes that despite the enthusiasm that surrounds such methods, in reality few use them as guidance in decision making; furthermore, the fragmentation that exists in the use of different metrics by each group (group 1: NGOs and foundations; group 2: governments; group 3: academics) provides an explanation of why metrics are not used in practice. He further remarks that, although value is subjective in nature, the tools used for assessment do not reflect this fact and in effect are misaligned with the strategic and operational priorities of an organization. Mulgan (2010) points out that in business different tools are used for "accounting to external stakeholders, managing internal operations, and assessing societal impact" (p. 40), whereas when social value is measured in nonprofit organizations these measurements are conflated into one, comprising another reason for the failure of metrics to influence decisions. A further difficulty is estimating the benefit that will be produced in the future due to a recent action, relative to the cost, by using social return on investment (SROI). His advice for constructing value, writing as the director of the Young Foundation, is that metrics should be used for the three roles they can perform: "external accountability, internal decision making, and assessment of broader social impact" (Mulgan, 2010, p. 42). He suggests that funders must adapt their frameworks to the particular organization they are interested in assessing and, more importantly, that the metrics must disclose their inherent subjectivity and be employed in a proportionate way depending on the size of the nonprofit organization. Table 2 provides an overview of the methods to measure social value, providing a brief description, an example, and the problems usually associated with each method.
INSERT TABLE 2 HERE
Table 2: 10 Ways to Measure Social Value, adapted from Mulgan 2010, p. 41.
Implicit in the above list, but explicitly evident in the extant literature on social partnership and nonprofit/development, is the interchangeable use of the terms outcomes and impact (Jorgensen, 2006; Sullivan & Skelcher, 2003; Vedung, 1997), which results in difficulties in the categorization, comparison, and discussion of the issues around assessment and evaluation. Most of the available literature discusses evaluation parameters and provides frameworks for evaluation that are usually associated with the performance of the partnership. As Preskill and Jones (2009, p. 3) suggest: "evaluation is about asking and answering questions that matter-about programs, processes, products, policies and initiatives. When evaluation works well, it provides information to a wide range of audiences that can be used to make better decisions, develop greater appreciation and understanding, and gain insights for action". We discussed such issues in the section on the evaluation of partnership implementation. In this section we are concerned with the assessment of outcomes. Measurement is not yet a frequent topic, as the discussion has only recently turned to the outcomes of partnerships. This appears also to be the case in the nonprofit-government and government-business (PPPs) "arenas of partnerships" (Selsky & Parker, 2005). Andrews and Entwistle (2010, p.
680) suggest that in the context of public sector partnerships "very few studies, however, have examined whether the benefits assumed by sectoral rationales for partnership are actually realized (for partial exceptions, see Provan and Milward 1995; Leach, Pelkey, and Sabatier 2002; Arya and Lin 2007)". On the other hand, assessing value in philanthropic and transactional approaches (sponsorship and cause-related marketing) is a well-established practice that involves sophisticated metrics (Bennett, 1999; Irwin & Asimakopoulos, 1992; Meenaghan, 1991; Wright, 1988; Burke Marketing Research, 1980). This is due to the following reasons: historical data are available that assist in developing objective standards; the evolution of metrics has taken place over time; and the assessment involves less complicated metrics, as the activities assessed can be attributed to the philanthropic or transactional interaction in more direct ways. The agency that has developed sophisticated metrics for transactional forms of interaction is IEG (2011), the leading provider of valuation and measurement research in the global sponsorship industry. Based on 25 years of experience, IEG has developed a methodology that captures the value of sponsorship and cause-related marketing by incorporating the assessment of tangible and intangible benefits (examples of the criteria used include: impressions in measured and non-measured media, program book advertising, televised signage, tickets, level of audience loyalty, degree of category exclusivity, level of awareness of logos), the geographic research/impact (estimation of the size and value of the market where the sponsor will promote its sponsored activity), the cost/benefit ratio (assessment of the costs and benefits, recognizing the risks and rewards associated with sponsorship), and price adjusters/market factors (allowing for the incorporation of factors that are unique to each sponsor, the length of the sponsor's commitment, and the fees for the sponsorship). As becomes evident from the above, the assessment uses a 'value-for-money' analysis which tends to employ a single criterion, usually quantitative, allowing comparison across data (Sullivan & Skelcher, 2003), but leaves societal outcomes unaddressed. Indicators for synergistic outcomes may include: "aspects of program performance that relate to advantages beyond what the actors could have independently produced" (Brinkerhoff, 2002, p. 225-226); developing links with other programs and actors; enhanced capacity of the individuals involved in the partnership and influence of individual partners; and multiplier effects (extension or development of new programs) (Brinkerhoff, 2002). Examples of multiplier effects could be building a degree of goodwill towards the business partner among quite important players in the environmental sector, hence creating a buffer zone between the business and the nonprofit sector where previously relationships were antagonistic (Seitanidi, 2010). Attribution, however, remains problematic for the CVC, as it is difficult to provide evidence for the value added that derives from the partnership. Brinkerhoff (2002) offers that such assessment is usually perception- and consensus-based and subjective, and hence relates to each partner's level of satisfaction with the relationship, which will also provide an indication of the relationship's sustainability.
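Returning to the single-criterion 'value-for-money' logic described above for transactional forms such as sponsorship, the following is a purely illustrative sketch (in Python, with invented figures; it does not reproduce IEG's actual methodology) of how monetized tangible and intangible benefit estimates and a market adjustment factor might be rolled up into a cost/benefit ratio. As noted, such a single quantitative criterion leaves societal outcomes unaddressed.

    # Purely illustrative: a single-criterion 'value-for-money' check for a
    # hypothetical sponsorship. All benefit estimates and the fee are invented.

    estimated_benefits = {
        "measured_media_impressions": 120_000,  # monetized value of media exposure
        "program_book_advertising": 15_000,
        "televised_signage": 60_000,
        "category_exclusivity": 40_000,         # premium for excluding competitors
    }
    market_adjustment = 0.9    # assumed factor for a smaller-than-average market
    sponsorship_fee = 200_000  # the cost side of the ratio

    adjusted_benefit = sum(estimated_benefits.values()) * market_adjustment
    benefit_cost_ratio = adjusted_benefit / sponsorship_fee
    print(f"Benefit/cost ratio: {benefit_cost_ratio:.2f}")  # above 1.0 suggests value for money on this criterion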
Although reference is often made to the synergistic results of partnerships, they are rarely fully articulated and measured (Dobbs, 1999; Brinkerhoff, 2002). Looking only at outcomes and ignoring the process dimension is usually linked with sacrificing long-term value creation for the benefit of short-term performance (Kaplan & Norton, 2001). Hence, by establishing single or pluralistic criteria (i.e., stakeholder-based evaluations) according to the interests involved in the partnership (Sullivan & Skelcher, 2003), deciding the standards of performance against each criterion, and measuring the performance of the collaboration, one constructs the logic of evaluation (Fournier, 1995). The achievement of process outcomes is linked with the program outcomes, which are concerned with the total benefits of the partnership minus the costs. Some methods predominantly prioritize cost/benefit analysis, often at the expense of "questions of quality and appropriateness" (Sullivan & Skelcher, 2003, p. 189). Alternative methods of bottom-up evaluation include stakeholder-based approaches such as a 'theory of change' approach, defined as "a systematic and cumulative study of the links between the activities, outcomes and contexts of the initiative" (Connell & Kubisch, 1998, p. 16). This approach aims to encourage stakeholders to participate in the evaluation, which assists in connecting the social problem with the context and specifying the strategies that can lead to long-term outcomes. An alternative approach is "interactive evaluation" (Owen & Rogers, 1999), where the views of the stakeholders represent the "local experience and expertise in delivering change and improvement, seeing evaluation as an empowering activity for those involved in the programme and attempting to address problems not addressed before or employing approaches that are new to the organisation. The value of interactive evaluation lies in the fact that it aims to encourage a learning culture and is appropriate for use in innovative programmes" (Sullivan & Skelcher, 2003, p. 197). The more nonprofit-business partnerships embrace their role as global governance mechanisms (Crane, 2010), the more they will be required to align their evaluation methods with those of public policy partnerships. We consider that partnerships require a three-point evaluation, as in social development and change (Oakley, Pratt & Clayton, 1998; Blankenberg, 1995): process outcomes, program outcomes, and impact. Process outcomes (also expressed as outputs in several frameworks) are associated with the effort in partnerships; we suggested the evaluation of partnership implementation as the point of assessment, which was discussed at the end of the implementation phase. When the point of measurement is effectiveness, we suggested above assessing the effectiveness of the partnership through its program outcomes. In the next section we examine the impact of partnerships and we associate the point of measurement with change, i.e., the difference from the original social problem that the partnership addressed.
What is the impact: Distinctus (difference) of Value Creation
Impact refers to the "long term and sustainable changes introduced by a given intervention in the lives of the beneficiaries" (Oakley, Pratt & Clayton, 1998, p. 36). Impacts can relate to anticipated or unanticipated changes caused by the partnership to the beneficiaries or others, and can range from positive to negative (Oakley, Pratt & Clayton, 1998; Blankenberg, 1995).
We adapt the definition of impact assessment for development interventions by Oakley, Pratt & Clayton (1998, p. 36) to the partnership context: Partnership impact assessment refers to the evaluation of how and to what extent partnership interventions cause sustainable changes in the living conditions and behaviour of beneficiaries, and the effects of these changes on others and on the socio-economic and political situations in society. Impacts can emerge, for example, from hiring practices, emissions, and production, and can respectively provide outcomes such as increased diversity in the workplace, reduction of emissions, and increased safety conditions in production. Capturing the impacts, however, often requires intelligence that exceeds the abilities of single organizations and mimics the data-gathering processes of government for measuring large-scale phenomena such as poverty, health pandemics, and so forth. The impacts of the above examples for a company would respectively refer to improved employment rates in the workforce across the world, improved air quality/biodiversity, and reduced accident rates. In the case of a nonprofit-business partnership, capturing the impact of the co-creation process would require distinguishing the partnership effects from other related efforts of the business and the nonprofit organization. In addition, it would require developing an understanding of the expectations of the stakeholders within their context (social, political, economic and environmental) (Oakley, Pratt & Clayton, 1998). Hence, it is not a surprise that even in development interventions very few evaluations move beyond outcomes to impacts (cited in Oakley, Pratt & Clayton, 1998, p. 37). Scholars in CSR have called for research to provide evidence not only for the existence of positive social change but also for how change is being achieved (Aguilera, Rupp, Williams, & Ganapathi, 2007; McWilliams & Wright, 2006). Companies initially responded by including in their CSR reports lists of their social programs and initiatives, demonstrating their actions on social issues. However, CSR reports offered neither a coherent nor a strategic framework; "instead they aggregate anecdotes about uncoordinated initiatives to demonstrate a company's social sensitivity" (Porter & Kramer, 2006, p. 3). In the majority of companies' reports, "reductions in pollution, waste, carbon emissions, or energy use, for example, may be documented for specific divisions or regions but not for the company as a whole. Philanthropic initiatives are typically described in terms of dollars or volunteer hours spent but almost never in terms of impact" (ibid). Hence, it appears that companies have at best been reporting outcomes of social and environmental initiatives, and at times suggesting that these represented impacts. This is more evident in high-profile rankings such as the FTSE4GOOD and the Dow Jones Sustainability Indexes. Although they intend to present rigorous impact indicators comprising social and environmental effects, in fact the lack of consistency, the variable weighting of criteria, the lack of external verification, the statistical insignificance of the answers provided in the surveys, and the often inadequate proxies employed reveal the difficulties associated with reporting impacts systematically and consistently, even for professional and large organizations (Porter & Kramer, 2006; Chatterji & Levine, 2006).
Hence, the challenge is not only finding new ways to co-create socio-economic value through partnerships in order to achieve simultaneously a positive impact for both business and society (Kolk, 2004), directly or indirectly (Maas & Liket, 2010), but also developing indicators and measuring that impact. Despite numerous papers that refer to impact (Atkinson, 2005), either by mentioning the call for business to report on the "numerous and complex social impacts of their operations" (Rondinelli & London, 2003, p. 62) or by suggesting key principles for successful collaborative social initiatives that can contribute to impact (Pearce II & Doh, 2005), the fact remains that there is a lack of studies that focus on the impact assessment of nonprofit-business partnerships. A recent study developed by the UK's biggest organization for the promotion of CSR, Business in the Community, and Cranfield University's Doughty Centre for Corporate Responsibility (BITC & Doughty Report, 2011) identified 60 benefits for business, clustered in seven areas, one of which is the 'direct financial impact' of CSR activities. One of the report's increasingly important future trends is 'macro-level sustainable development', defined as: "the somewhat undefined benefits from contributing to sustainable development. This relates to the impact and responsibilities an organisation has in relation to a geographically wide level of economic, social and environmental issues – at a 'macro level'. Here, 'macro level' means society and nature as a whole, encompassing not just an organisation and its immediate interactions, but sustainable development in its industry, country, region and indeed planet." Although the report does not separate the outcome from the impact level in its presentation of benefits, it provides examples of macro-level issues such as "health inequalities or access to healthcare; poor education; ageing populations; lack of investment in sciences or arts and innovation generation; the rights of workers, children and sex/race equality; and environmental issues such as climate change, deforestation, pollution, ocean health, extinction of species, and urbanisation" (ibid, p. 17). The report further suggests that 'macro-level sustainable development' is a recent (2008/09) addition to the reported business benefits. Studies that refer to the evaluation of collaborative networks suggest that, due to the multidimensionality of the programs involved, there is a need to combine randomized controlled trials with more flexible forms of evaluation that involve researchers and practitioners pooling their knowledge during workshops in order to establish links between actions and outcomes, while using multiple criteria for measuring success based on local knowledge (Head, 2008; Schorr, 1988). In particular, Head (2008) cautions against premature judgements that may be drawn by funders who do not realize that initiatives may take 4-6 years before the implementation phase begins. In collaborative environmental management, Koontz and Thomas (2006) suggest that the question "To what extent does collaboration lead to improved environmental outcomes?" (p. 111) remains unanswered. They offer suggestions for measuring environmental impact through outcomes such as perceptions of changes in environmental quality, land cover, biological diversity, and parameters appropriate to a specific resource (e.g., water biochemical oxygen demand, ambient pollution levels) (ibid, p. 115).
They also recommend that academics should not attempt to pursue large impact questions alone but rather collaborate with practitioners on the design, monitoring of outputs, and funding of the required longitudinal and cross-sectional studies (ibid, p. 117). In the context of partnerships for development, Kolk, Van Tulder & Kostwinder (2008, p. 271) group the changes, benefits, and results of partnerships for the wider society as "the final and ultimate outcomes". Although the word outcome is used, impact is assumed, as they suggest that the best way to assess the outcomes is by their "direct and indirect impact on the Millennium Development Goals." In fact, the Business in Development program of the Dutch national committee for international cooperation and sustainable development (NCDO) developed a methodology for measuring the contribution of the private sector to the Millennium Development Goals (MDGs) (NCDO, 2006). The methodology of the report highlights the measurement of impact and the indirect contributions of MNCs, and stresses that the lack of availability of information was a significant factor in the scoring developed. Furthermore, it remarked that, due to the differences in the nature of the participating businesses that tested the measurement framework, it would be impossible to compare their performance. Conclusions from the report included: that the contribution and attention given by companies to the MDGs can be measured; that it is clearer where, how and why companies contribute to the MDGs; and that understanding the focus of a company's MDG efforts can help it choose which NGOs to partner with to achieve even better MDG impact (NCDO, 2006: 7-12). The largest research and knowledge development initiative of the European Commission currently under way that aims to measure impact is the "Impact Measurement and Performance Analysis of CSR" (IMPACT) project, which commenced in March 2010 and will conclude in March 2013. The project hopes to break new ground in addressing questions across multiple levels and dimensions, combining four empirical methods: econometric analysis, in-depth case studies, network analysis, and Delphi analysis. The research will address how CSR impacts sustainability and competitiveness across the 27 countries of the EU (Impact, 2010). Although impact is frequently used to denote effectiveness, outcomes, or performance, it is often the case that its contextual and temporal meaning is understood in an evolutionary and interactive way. Demonstrating the importance of impact, and its own initial response to it, Nike stated in its 2005 CSR report: "A critical task in these last two years was to focus on impact and develop a systematic approach to measure it. We're still working hard at this. How do we know if a worker's experience on the contract factory floor has improved, or if our community investments helped improve a young person's life? We're not sure anyone has cornered the market in assessing real, qualitative social impact. We are grappling with those challenges now. In FY07-08, we will continue working with key stakeholders to determine the best measures. We aim to have a simple set of agreed upon indicators that form a baseline and then to measure in sample areas around the world" (Nike, 2005, p. 11). In its 2009 CSR report Nike acknowledged that solutions require industry-level and systemic change that will have to pass through 'new approaches to innovation and collaboration'. Interestingly, the report states: "Our aim is to measure our performance and report accurate data.
At times, that means systems and methodology for gathering information need to change even as we collect data, as we learn more about whether we are asking the right questions and whether we are getting the information that will help us to answer them rather than just information" (p. 18). The company also reported that it aimed to develop targets and metrics around programs for excluded youth around the world, which demonstrates the policy-type thinking required for developing impact indicators and processes to monitor, report, and advocate: competencies usually associated with nonprofit organizations that businesses and their partners now need to develop. Figure 7 below demonstrates the evolution of understanding in the process of monitoring and collecting data that contribute to the understanding and reporting of impacts.
INSERT FIGURE 7 HERE
Figure 7: Workplace impact in factories, adapted from the Nike CSR Report (Nike, 2009, p. 37).
Unilever's recent 'Sustainable Living Plan', launched in 2010, aims to capture holistically the company's social, environmental and economic impacts around the world. The focus on multidimensional effects demonstrates that some companies are moving forward, for the moment, with aspirational impact targets that, if achieved, will represent a significant step forward in delivering socio-economic and environmental progress around the world. Unilever has developed 50 impact targets grouped under the following priorities of the plan, to be achieved by 2020: (1) to help more than one billion people take action to improve their health and well-being; (2) to halve the environmental footprint of the making and use of its products; (3) to source 100% of its agricultural raw materials sustainably. The targets are associated with both increasing positive and reducing negative impacts. Working in partnership is central, as NGOs provide the local connection that facilitates the implementation of the programs (Unilever, 2010). Also, impacts are captured at the local level; hence the role of local partners and government/local authorities in capturing, measuring and reporting impact is profound. However, the compilation of impact reports will be the responsibility of MNCs. Hence they will need to demonstrate transparency, accountability, and critical reflection if they wish the reports to play an important substantive role rather than just being cosmetic. Demonstrating, for example, missed impact targets and capturing the reasons behind the misses will be important not only in raising awareness of the difficulties associated with impact measurement and reporting but also in calling for the assistance of other actors in pursuing impact targets more effectively in the next value capture circle. The nonprofit sector is similarly under pressure to demonstrate "its own effectiveness as well as that of their partners. They need to be able to identify the difference their efforts (and funds) have made to the poorest and most vulnerable communities; as well as to demonstrate that these efforts are effective in bringing about change" (O'Flynn, 2010, p. 1). The UK Charity Commission, for example, requires NGOs to report against their core strategic objectives. These requirements are in addition to the moral obligation of nonprofits to demonstrate accountability and appreciation of their impacts (ibid).
An interesting insight in O'Flynn's (2010) paper from the nonprofit sector is that, due to the distance between an organization's initial intervention and the systems it aims to affect, it is very difficult to claim with confidence a direct, attributable impact. Changes in complex systems are likely to be influenced by a range of factors and hence it is impossible for a nonprofit to claim attribution (ibid). Moving away from this complexity, development organizations have started working on and documenting their contribution to change instead of their attribution (ibid). Based on the views of partnership practitioners who participated in a study by the 'Partnering Initiative', one of the priorities for the future regarding evaluation will be the need to develop tools for measuring the impact on the beneficiaries, the impact on partners, and the unexpected outcomes (Serafin, Stibbe, Bustamante, & Schramme, 2008). Moving to transactional forms of interaction as a proxy for examining partnership impact, Maas and Liket (2010), in a recent empirical study, examined the extent to which the corporate philanthropy of 500 firms listed in the Dow Jones Sustainability Index (DJSI) is strategic, as indicated by the measurement of their philanthropic activities' impact along three dimensions: society, business, and reputation/stakeholder satisfaction. The authors suggested that, despite the lack of common practice in how impact is measured, 76% of the DJSI firms appeared to measure some sort of impact of their philanthropic activities, predominantly impact on society and on reputation and stakeholder satisfaction. More likely to measure impact are larger firms with substantial philanthropic budgets, from Europe and North America, and from the financial sector. Following long-standing pressures for strategic corporate philanthropy to demonstrate value, Lim (2010) has produced a report that aims to offer guidance on measuring the value of corporate philanthropy. Transactional and integrative approaches to cross-sector interaction share similar challenges in addressing questions about impact, including the long-term nature of outcomes and impact, the complexity of measuring results, the aspiration of both to effect social change (a lengthy process), and the context-specific character of the interventions (Lim, 2010). We provide below (Table 3) a brief overview of the measures that can be employed for the impact assessment of corporate philanthropy and partnerships. Despite the lack of agreement on definitions of what constitutes social value and on ways to measure it, Lim (2010) suggests that the attempt to measure it is beneficial in itself, as it encourages rigour in the process, improvement, and making the partners' assumptions explicit. Articulating impact requires developing a 'baseline' as a starting point; developing indicators is associated with only some of the methods; and most of the methods involve a degree of monitoring and a final report on the impacts. The illustrative methods provide a softer approach that is not necessarily less effective in identifying problems in delivering and increasing the impact of interventions. They are appropriate when it is impossible to develop indicators or experimental procedures. Experimental methods are conducted in order to establish causation. Lim (2010, p.
9) suggests that experimental or formal methods should be employed for (1) "reasonably mature programs that represent an innovative solution and wherein the funder and/or grantee seeks to prove to other funders or NGOs that it should be scaled-up" and (2) "programs wherein the cost of risk of failure is high (e.g., those with highly vulnerable beneficiaries)". These are the only methods that can prove definite causation and attribution. Alternatives to experimental methods are practical methods of measuring intermediate outcomes, which allow for identifying improvement opportunities. The two practical methods that we list in Table 3 are presented by Lim (2010), each associated with different applications. Outcomes measurement suits: (1) "programs wherein the funder is involved in the program's design and management and shares responsibility for its success. (2) Programs wherein funders and grantees desire frequent and early indicators in order to make real-time adjustments to interventions and strategy" (ibid). Regarding impact achievement potential, Lim (2010) states that it is more appropriate for start-up programs in their early stages and for interventions in whose management the funder is not involved.
INSERT TABLE 3 HERE
Due to the multidimensionality of nonprofit-business partnerships, operating on multiple levels and producing a wide range of effects, it is difficult in most cases to set up experiments in order to establish causality, because partnerships operate within dynamic adaptive systems of multiple interactions. We borrow the term 'panarchy' from Gunderson and Holling (2001) to refer to the evolving nature of complex adaptive systems as a set of adaptive cycles. Applied to partnerships, panarchy theory would suggest that the interlinked and never-ending cycles of value creation at each level, and the links between them, represent a nested set of adaptive cycles taking place on spatial and temporal scales. In order to increase the effectiveness of monitoring in dynamic environments, as Gunderson and Holling (2001) advocate, it might be possible to identify the points at which the system can accept positive change. In this way the partners will acknowledge the interactive and non-linear effects of the dynamics of the different levels of change. Managers can gain a more in-depth understanding of the role their actions play in influencing socio-economic and environmental systems and instil in those systems positive input to further encourage positive social change. Theories of social change might also prove useful, as they examine the connection between the micro and macro levels (Hernes, 1976). In order for a partnership to address its impacts, it needs to identify the social issue that the collaboration will address and to articulate the effects of impacts on different targets. For example, following the categorization of our outcomes model (internal/external to the partnership and on the macro, meso and micro levels) can provide a systematic organization of the impacts. The extent to which a partnership delivers synergistic impacts is the critical test of the collaboration. The partners need to ask: did our collaboration make a difference, to whom, and how? Following from our study, the next section provides brief conclusions and suggestions for future exploration.
Table 3: Impact assessment methodologies, adapted and compiled from a wide range of sources, including: Cooperrider, Sorensen, Yaeger, & Whitney, 2001; O'Flynn, 2010; Lim, 2010; Jorgensen, 2006.
ILLUSTRATIVE METHODS
Stories of Change (The Most Significant Change, MSC). Description: Stories of change are used to illustrate change rather than to measure it. Usage: The method is employed to provide insights into the perceptions and expectations of the stakeholders who participate in the process of evaluation. Selecting stories through the process allows expert panels to identify change/impact stories. MSC does not make use of pre-defined indicators, especially ones that have to be counted and measured. The technique is applicable in many different sectors, including agriculture, education and health, and especially in development programs. It is also applicable to many different cultural contexts, and MSC has been used in a wide variety of countries by a range of organizations.
Appreciative Inquiry. Description: Developing community maps, visualisation, and the recording of changes in the lives of stakeholders. Usage: Used in place of the traditional problem-solving approach (finding what is wrong and forging solutions to fix the problems), Appreciative Inquiry seeks what is "right" in an organization. It is a habit of mind, heart, and imagination that searches for the success, the life-giving force, the incidence of joy. It moves toward what the organization is doing right and provides a frame for creating an imagined future that builds on and expands the joyful and life-giving realities as the metaphor and organizing principle of the organization.
Future methods: Delphi survey technique. Description: Prediction based on experts. Usage: A panel of experts judges the timing, probability, importance and implications of factors, trends, and events regarding the problem in question by creating a list of statements/questions and applying ratings; a first draft report is then developed, allowing for revisions based on feedback, which is incorporated in the final report.
EXPERIMENTAL METHODS
Experiments with randomized or matched controls. Description: Comparison between the control and experimental groups. Usage: A form of scientific experiment usually employed for testing the safety (adverse drug reactions) and effectiveness of healthcare services and health technologies. Before the intervention to be studied, subjects are randomly allocated to receive one or other of the alternative treatments under study. After randomization, the two (or more) groups of subjects are followed up in exactly the same way, and the only difference in the treatment they receive is, for example, the policy implementation of a partnership program. The method is also used in psychology and education. Matched-subject design uses separate experimental groups for each particular treatment, but relies upon matching every subject in one group with an equivalent in another; the idea behind this is that it reduces the chance of an influential variable skewing the results by negating it.
Shadow controls. Description: Expert judgement. Usage: The judgement of an expert is employed to assess the success of a programme. Such a design is useful when there is limited scope for a control group. The predictions (shadow controls) are followed by comparisons with the outcome data at the end of the programme. Important feedback about the programme's effectiveness is provided. The method is used in healthcare.
PRACTICAL METHODS
Outcome measurement. Description: Data collected from national databases in combination with mutually agreed assumptions between the partners. Usage: Funder and grantee co-design the program and the measurement process. Experts may be consulted for advice; data are collected in house by the nonprofit organization with the assistance of the funder (technological or management). Instead of control groups, national databases may be used for comparison purposes. Most organizations appear to use this method.
Impact achievement potential. Description: Reliance on the grantee's (the nonprofit organization's) measurement standards. Usage: The funder accepts the self-reporting claims as reliable, particularly where the nonprofit organization might have available measures and demographics.
FILLING THE GAPS & PUSHING THE FRONTIERS
We end by providing a few concluding observations and suggesting some avenues of further exploration to advance our collective knowledge. The Collaborative Value Creation Framework provided an analytical vehicle for reviewing the CSR and cross-sector collaboration literature relevant to the research question: How can collaboration between businesses and NPOs most effectively co-create significant economic and social value for society, organizations, and individuals? The analytical framework for Collaborative Value Creation allows for a deeper understanding of the interactions that contribute to value creation. Building on earlier research, the purpose of the framework is twofold: first, it seeks to provide guidance to researchers and practitioners who would like to assess the success of their cross-sector interactions in producing value. Second, it aims to promote consistency and maximize comparability between processes and outcomes of collaboration. The Collaborative Value Creation framework is a conceptual and analytical vehicle for the examination of partnerships as multi-dimensional and multi-level value creation vehicles and aims to assist researchers and practitioners in positioning and assessing collaborative interactions. The intention of the framework is not to prescribe a fixed approach to value creation but to provide a frame for those seeking to maximize value creation across all levels of social reality. Practitioners should feel at liberty to adapt the framework to meet their particular requirements. Researchers should employ either the entire CVC framework or elements of it in order to examine the value creation spectrum, the relationship stages, the partnering processes, and the outcomes. The first CVC component aims to examine what sources of value are employed by the partners, how they are used, and to what effect (types of value produced); the second component aims to position partners' cross-sector interactions within the collaboration continuum's stages and to examine the nature of the relationship according to the value descriptors (see Figure 1); the third component answers the question of how the partnership processes contribute to the value co-creation of the partners on the macro, meso and micro levels. As such, it identifies who is involved in the partnership and how, and aims to maximize the interactive co-creation of value through processes. The final component, partnership outcomes, positions the value of each partner per level of analysis to facilitate the assessment of benefits and costs. It concludes with the examination of the outcomes and impact of partnerships in order to develop comparable mechanisms of value assessment, both qualitative and quantitative.
Figure 8 presents a summary view of the Framework's Value Creation Spectrum's key variables (collaboration stages, value sources, value types) and how they change as partnerships evolve from sole-creation to co-creation. The underlying general hypothesis is that greater value is produced the more one moves toward co-creation.
INSERT FIGURE 8 HERE
Figure 8: COLLABORATIVE VALUE CREATION SPECTRUM
Form: Sole-Creation → Co-Creation
Stages: Philanthropic → Integrative/Transformational
Resource Complementarity: Low → High
Resource Type: Generic → Distinctive Competency
Resource Directionality: Unilateral → Conjoined
Linked Interests: Weak/Narrow → Strong/Broad
Associational Value: Modest → High
Transferred Resource Value: Depreciable → Renewable
Interaction Value: Minimal → Maximal
Synergistic Value: Least → Most
Innovation Value: Seldom → Frequent
It is clear from the literature review that value creation through collaboration is recognized as a central goal, but it is equally clear that it has not been analyzed by researchers and practitioners to the extent or with the systematic rigor that its importance merits. While many of the asserted benefits (and costs) of collaboration rest on strong hypotheses, there is a need for additional empirical research (quantitative and qualitative, case study and survey) to produce greater corroborating evidence. There has been in recent years an encouraging uptake in research in this direction, as well as growing attention by practitioners. The CVC Framework's Value Creation Spectrum offers a set of variables and hypotheses in terms of sources and types of value that may help focus such research. Similarly, the models of the Partnering Processes identify multiple value determinants that merit additional study. There is a need for field-based research that documents specific value creation pathways. In all of this the focus is on the factors enhancing co-creation, particularly Synergistic Value. There is a need to demonstrate how and to what extent economic value creates social value and vice versa, whether simultaneously or sequentially. Understanding more deeply this virtuous value circle is at the heart of the paradigm change. It is hoped that such additional research will lead to further elaboration, revision, and refinement of the Framework's theoretical construct. In terms of the Collaboration Continuum there is a need to deepen our understanding of the enabling factors that permit collaborative relationships to enter into the Integrative and Transformational stages. Within these higher level collaborations, one needs to document how the co-creation process operates, renews, and grows.
Given that these partnering forms are less common and more complex than earlier stages such as the philanthropic and transactional ones, in-depth case studies are called for, with longitudinal or retrospective analyses required to capture the evolutionary dynamics (Koza & Lewin, 2000). Of particular interest are the processes producing Innovation Value as a higher form of synergistic co-creation. In the Outcomes area it was evident from the literature review that impact at the societal level is relatively neglected in terms of documentation. Perhaps because of measurement complexity and costs, there is a tendency to assume societal betterment rather than to assess it specifically. Consequently, the core question of How is society better off due to the collaboration? remains underdocumented. Collaborations do not always produce value, as sometimes partners reach bad solutions, create new problems, and may not solve the problems they originally aimed at addressing (Bryson et al., 2006; Austin, 2000a). The partnership literature is in the early stages of addressing issues of mapping the value creation road on different levels of analysis. Capturing the macro level benefits and costs would require longitudinal studies by groups of researchers collaborating across interrelated fields and multiple organizations in order to capture how a direct social benefit has long-term economic effects across organizations. Such research teams have not yet emerged, as policy makers have also only recently demonstrated an interest in capturing impacts (ESRC, 2011). Furthermore, multi-level value assessment, i.e., introducing all three levels of analysis (organizational, individual and social), is a recent focus in the literature (Seitanidi & Lindgreen, 2010). Examples include the study of the impact of social regeneration through partnership in disadvantaged communities (Cornelius & Wallace, 2010); studying the orchestration of multilevel coordination that shapes relational processes of frame fusion in the process of value creation (Le Ber & Branzei, 2010c); and addressing reciprocal multi-level change through the interplay between organizational, individual and social levels of reality in the stage of partnership formation (Seitanidi, Koufopoulos & Palmer, 2010). The empirical studies that aim to capture social, societal or systemic benefits (Seitanidi, 2010) employ the perceptions of organizational actors in the focal organizations without involving beneficiary voices, or, if they make reference to the beneficiaries, they employ a theoretical perspective (Le Ber & Branzei, 2010a). Overcoming the existing limitations of research that focuses on single organizations requires a shift in focus, means, and methods. Such changes will allow us to capture the interconnections of cross-sector social interactions on multiple levels and possibly unlock the secrets of our societies' ability to achieve positive social change intentionally in a short period of time. Lastly, for CSR scholars there is the symmetry hypothesis that corporations must have advanced to the higher levels of CSR in order to engage effectively in the higher levels of collaborative value co-creation, with the latter being evidence of the former. Table 4 below offers new avenues for research within each CVC component and contributes possible research questions that cut across the different components of our value creation framework.
INSERT TABLE 4 HERE

This literature review and conceptual paper are intended to help partnership professionals think systematically about their partnerships as internal and external value creation mechanisms. What partners do and how they implement partnerships will have an impact on the micro, meso, and macro levels, whether partners consider the co-creation of value explicitly or only implicitly during the partnership processes. Similarly, value creation will have an effect on the partners and on society. The CVC framework we propose can improve the understanding of value creation processes in partnerships and help anticipate the outcomes of partnerships at different levels of analysis. Given that our starting premise for this article was that value creation is the fundamental justification for cross-sector collaboration, our ending aspiration is that embedded in the minds of every collaboration scholar and practitioner be the following mandatory question: How will my research or my action contribute to the co-creation of value?

Table 4: RESEARCH AVENUES BY CVC COMPONENT

Component I: Value Creation Spectrum
Is resource complementarity dependent on organizational fit? And what are the factors that affect resource complementarity for maximizing the co-creation of value?
How do generic and organization-specific assets/competencies contribute to the co-creation of synergistic value?
Which distinctive competencies of the organization contribute most to the co-creation of value? And how?
How do different combinations of resource types across the partners produce economic and social value?
What are the evolving patterns of value creation per resource type and resource directionality?
How can partners link their interests with the social good? Does co-creation of synergistic economic and social value depend on the degree to which the interests of the partners are linked with each other and with the social good?
Are associational, transferred, interaction, and synergistic value produced in different degrees across the collaboration continuum?
What is the relationship between the different types of value produced?
What is the role of tangible and intangible resources in co-creating social value?
How can partners achieve value renewal?

Component II: Relationship Stages
How do the value descriptors associated with the nature of the relationship in the Collaboration Continuum relate to each stage of the continuum in different fields of partnerships?
How can the Collaboration Continuum be associated with the evolution of appreciation of social responsibilities in organizations?
What forms of cross-sector social interactions can be grouped under the transformation stage of the Collaboration Continuum?
What sources and types of value are associated with each stage of the Collaboration Continuum (Philanthropic, Transactional, Integrative, and Transformational)?
What are the key enablers of moving to each higher level of collaboration in the Continuum?

Component III: Partnering Processes
How can partners maximize their partnership fit potential?
How do partners articulate social problems, and how do they develop frames that connect them with their interests and the social good?
Do partners' motives link with their partnership strategies?
How can we examine systematically the history of the partners' interactions over time?
What is the role of partnership champions before and during the partnership?
Should partners reconcile their value frames, to what extent, and how?
How can partnership processes increase the potential for co-creation of synergistic value?
How can partnerships strengthen their accountability through their process mechanisms?
How can partnership processes enhance societal outcomes?
How can the processes in partnerships facilitate the development of new capabilities and skills?
How can processes in partnerships facilitate value renewal?
How can evaluation of the partnership implementation strengthen the value creation process?
How can the evaluation of partnership implementation improve the benefits for both partners and also for society?

Component IV: Partnering Outcomes
How do partners view their own and each other's benefits and costs from the collaboration?
How is social value generated as a result of the partnership outcomes?
Do partnerships constitute intentional social change mechanisms? And how?
How do the loci of value creation in partnerships interact?
Are the multiple levels of value creation interdependent, and what are the links between the micro, meso, and macro levels?
What is the relation between benefits and costs in partnerships?
What are the links between social and economic value creation and the different types of benefits and costs in partnerships?
What are the partnership benefits and costs for the stakeholders? And for the beneficiaries of partnerships?
How can we conceptualize the links between the benefits and costs in cross-sector social partnerships?
How does external value created in partnerships contribute to socio-economic value creation for the partners?
How do partnerships' direct and indirect benefits link to the different levels of value creation (macro, meso, micro)?
What is the role of vision in producing socio-economic value in partnerships?
How can the different types of value be assessed in partnerships?
How can we develop a systematic and transparent value assessment in partnerships?
How can assessment in partnerships strengthen decision making?
How can indicators of value assessment in partnerships account for the different levels of value creation?
How can we connect the different points of evaluation in partnerships (process outcomes, program outcomes, and impact) to strengthen value creation on different levels?
How can we assess the long-term impact of partnerships? Which are the most appropriate methods to assess impact?
To what extent do partnerships deliver synergistic impacts? For whom? And how?

Overarching themes across components
How and to what extent does economic value create social value, and vice versa? Is social and economic value being created simultaneously or sequentially?
Can we invent a new measure that assesses multidimensional (economic-social-environmental) and multilevel (macro-meso-micro) value?
How do partnerships re-constitute value?
How can partnerships function as global mechanisms of societal governance?

REFERENCES

Aguilera, R. V., Rupp, D. E., Williams, C. A., & Ganapathi, J. (2007). Putting the S back in corporate social responsibility: A multilevel theory of social change in organizations. Academy of Management Review, 32(3), 836-863.
Ählström, J., & Sjöström, E. (2005). CSOs and business partnerships: Strategies for interaction. Business Strategy and the Environment, 14(4), 230-240.
Alsop, R. J. (2004). The 18 immutable laws of corporate reputation. New York: Free Press.
Alter, C., & Hage, J. (1993). Organizations working together. Newbury Park, CA: Sage.
Amabile, T. M. (1996). Creativity in context. (Update to The social psychology of creativity.)
Boulder, CO: Westview Press. Andreasen, A. R. (1996). Profits for nonprofits: Find a corporate partner. Harvard Business Review, 74(6), 47-50, 55-59. Andrews, R., & Entwistle, T. (2010). Does cross-sectoral partnership deliver? An empirical exploration of public service effectiveness, efficiency, and equity. Journal of Public Administration Research and Theory, 20(3), 679–701. Andrioff, J. (2000). Managing social risk through stakeholder partnership building: Empirical descriptive process analysis of stakeholder partnerships from British Petroleum in Colombia and Hoechst in Germany for the management of social risk. PhD thesis, Warwick University. Andrioff, J., & Waddock, S. (2002). Unfolding stakeholder management. In J. Andriof & S. Waddock (Eds.), Unfolding Stakeholder Thinking (pp. 19-42). Sheffield: Greenleaf Publishing. Anheier, H. K., & Hawkes, A. (2008). Accountability in a globalised world. In F. Holland (Eds.), Global Civil Society 2007/08: Communicative power and democracy. Beverly Hills: Sage. Argenti, P. A. (2004). Collaborating with activists: how Starbucks works with NGOs. California Management Review, 47(1), 91-116. Arya, B., & Salk, J. E. (2006). Cross-sector alliance learning and effectiveness of voluntary codes of corporate social responsibility. Business Ethics Quarterly, 16(2), 211-234. Ashman, D. (2000). Promoting corporate citizenship in the global south: Towards a model of empowered civil society collaboration with business. IDR Reports, 16(3), 1-24. Ashman, D. (2001). Civil society collaboration with business: Bringing empowerment back in. World Development, 29(7), 1097-1113.69 Astley, W. G. (1984). Toward an appreciation of collective strategy. Academy of Management Review, 9, 526–535. Audit Commission (1998). A Fruitful Partnership. London: Audit Commission. Austin, J. E. (1998). Business leaders and nonprofits. Nonprofit Management and Leadership, 9(1), 39-51. Austin, J. E. (2000a). The collaboration challenge: How nonprofits and businesses succeed through strategic alliances. San Francisco: Jossey-Bass Publishers. Austin, J. E. (2000b). Strategic collaboration between nonprofits and businesses. Nonprofit and Voluntary Sector Quarterly, 29 (Supplement 1), 69-97. Austin, J. E. (2003). Strategic alliances: Managing the collaboration portfolio. Stanford Social Innovation Review, 1(2), 49-55. Austin, J. E. (2010). From organization to organization: On creating value. Journal of Business Ethics, 94 (Supplement 1), 13-15. Austin, J. E., & Elias, J. (2001). Timberland and community involvement. Harvard Business School Case Study. Austin, J. E., Gutiérrez, R., Ogliastri, E., & Reficco, E. (2007). Capitalizing on convergence. Stanford Social Innovation Review, Winter, 24-31. Austin, J. E., Leonard, H. B., & Quinn, J. W. (2004). Timberland: Commerce and justice. Boston: Harvard Business School Publishing. Austin, J. E., Leonard, H. B., Reficco, E., & Wei-Skillern, J. (2006). Social entrepreneurship: It’s for corporations, too. In A. Nicholls (Eds.), Social entrepreneurship: New models of sustainable social change (pp. 169-180). Oxford: Oxford University Press. Austin, J., & Reavis, C. (2002). Starbucks and conservation international. Cambridge, MA: Harvard Business School Case Services. Austin, J. E., Reficco, E., Berger, G., Fischer, R. M., Gutierrez, R., Koljatic, M., Lozano, G., Ogliastri, E., & SEKN team (2004). Social partnering in Latin America: Lessons drawn from collaborations of business and civil society organizations. Cambridge, MA: Harvard University Press. 
Austin, J. E., Stevenson, H., & Wei-Skillern, J. (2006). Social and commercial entrepreneurship: The same, different, or both? Entrepreneurship Theory and Practice, 30(1), 1-22. Avon Foundation for Women (2011). The avon breast cancer crusade. Retrieved from www.avonfoundation.org/breast-cancer-crusade. Balogun, J., & Johnson, G. (2004). Organizational restructuring and middle manager sensemaking. Academy of Management Journal, 47, 523–549.70 Barnett, M. L. (2007). Stakeholder influence capacity and the variability of financial returns to corporate social responsibility. The Academy of Management Review, 32(3), 794-816. Barrett, D., Austin, J. E., & McCarthy, S. (2000). Cross sector collaboration: Lessons from the international Trachoma Initiative. In M. R. Reich (Eds.), Public-private partnerships for public health. Cambridge, MA: Harvard University Press. Bartel, C. A. (2001). Social comparisons in boundary-spanning work: Effects of community outreach on members’ organizational identity and identification. Administrative Science Quarterly, 46, 379-413. Barton, D. (2011). Capitalism for the long term. Harvard Business Review, March. Basil, D. Z., & Herr, P. M. (2003). Dangerous donations? The effects of cause-related marketing on charity attitude. Journal of Nonprofit & Public Sector Marketing, 11(1), 59-76. Ben, S. (2007). New processes of governance: Cases for deliberative decision-making. Managerial Law, 49(5/6), 196-205. Bendell, J. (2000b). A no win-win situation? GMOs, NGOs and sustainable development. In J. Bendell (Eds.), Terms for endearment: Business, NGOs and sustainable development (pp. 96-110). Sheffield: Greenleaf Publishing. Bendell, J. (2000a). Working with stakeholder pressure for sustainable development. In J. Bendell (Eds.), Terms for endearment: Business, NGOs and sustainable development (pp. 15-110). Sheffield: Greenleaf Publishing. Bendell, J. (2004). Flags of convenience? The global compact and the future of the United Nations. ICCSR Research Paper Series, 22. Bendell, J., & Lake, R. (2000). New Frontiers: Emerging NGO activities and accountability in business. In J. Bendell (Eds.), Terms for endearment: Business, NGOs and sustainable development (pp. 226-238). Sheffield: Greenleaf Publishing. Bennett, R. (1999). Sports sponsorship, spectator recall and false consensus. European Journal of Marketing, 33(3/4), 291-313. Berger, I. E., Cunningham, P. H., & Drumwright, M. E. (2004). Social alliances: Company/nonprofit collaboration. California Management Review, 47(1), 58-90. Bhattacharya, C. B., Korschun, D., & Sen, S. (2009). Strengthening stakeholder-company relationships through mutually beneficial corporate social responsibility initiatives. Journal of Business Ethics, 85 (Supplement 2), 257–272. Bhattacharya, C. B. & Sen, S. (2004). Doing better at doing good: When, why and how consumers respond to social initiatives. California Management Review, 47(1), 9-24. 71 Bhattacharya, C. B., Sen, S., & Korschun, D. (2008). Using corporate social responsibility to win the war for talent. MIT Sloan Management Review, 49(2), 37-44. Biermann, F., Chan, M., Mert, A., & Pattberg, P. (2007). Multi-stakeholder partnerships for sustainable development: Does the promise hold? In P. Glasbergen, F. Biermann & A. P. J. Mol (Eds.), Partnerships, governance and sustainable development: Reflections on theory and practice (239-260). Cheltenham: Edward Elgar. Biermann, F., Mol, A. P. J., & Glasbergen, P. (2007). 
Conclusion: Partnerships for sustainability – reflections on a future research agenda. In P. Glasbergen, F. Biermann & A. P. J. Mol (Eds.), Partnerships, governance and sustainable development: Reflections on theory and practice (288-300). Cheltenham: Edward Elgar. Birch, D. (2003). Doing Business in New Ways. The Theory and Practice of Strategic Corporate Citizenship with Specific Reference to Rio Tinto’s Community Partnerships. A Monograph. Corporate Citizenship Unit, Deakin University, Melbourne Bishop, M., & Green, M. (2008). Philanthrocapitalism: How giving can save the world. New York: Bloomsbury Press. BITC & Doughty Report (2011). The Business Case of CSR for being a responsible business. Business in the Community and the Doughty Centre for Corporate Responsibility. Available from: www.bitc.org.uk/research Accessed June 2011. Blankenberg, F. (1995). Methods of impact assessment research programme, resource pack and discussion. The Hague: Oxfam UK/I and Novib. Bockstette, V., & Stamp, M. (2011). Creating shared value: A how-to guide for the new corporate (r)evolution. Retrieved from http://www.fsg.org/Portals/0/Uploads/Documents/PDF/Shared_Value_Guide.pdf?cpgn=WP%20DL%20- %20HP%20Shared%20Value%20Guide [Accessed May 5, 2011]. Boschee, J., & McClurg, J. (2003). Toward a better understanding of social entrepreneurship: some important distinctions. Retrieved from http://www.se-alliance.org/. Boston College Center for Corporate Citizenship & Points of Light Foundation (2005). Measuring employee volunteer programs: The human resources model. Retrieved from http://www.bcccc.net. Bowen, H. R. (1953). Social responsibilities of the businessman. New York: Harper & Row. Bowen, F., Newenham-Kahindi, A., & Herremans, I. (2010). When suits meets roots: The antecedents and consequences of community engagement strategy. Journal of Business Ethics, 95(2), 297-318.72 Bowman, C. & Ambrosini, V. (2000). Value creation versus value capture: Towards a coherent definition of value in strategy. British Journal of Management, 11, 1-15. Brammer, S. J. & Pavelin, S. (2006). Corporate Reputation and Social Performance: The Importance of Fit. Journal of Management Studies 43:3 May, pp. 435-454. Brickson, S. L. (2007). Organizational identity orientation: The genesis of the role of the firm and distinct forms of social value. Academy of Management Review, 32, 864-888. Brinkerhoff, J. M. (2002). Assessing and improving partnership relationships and outcomes: A proposed framework. Evaluation and Program Planning, 25 (3), 215-231. Brinkerhoff, J. M. (2007). Partnerships as a means to good governance: Towards an evaluation framework. In P. Glasbergen, F. Biermann & A. P. J. Mol (Eds.), Partnerships, governance and sustainable development: Reflections on theory and practice (68-92). Cheltenham: Edward Elgar. Bromberger, A. R. (2011). A new type of hybrid. Stanford Social Innovation Review, Spring, 48-53. Brown, L. D. (1991). Bridging organizations and sustainable development. Human Relations, 44(8), 807- 831. Brown, T. J., & Dacin, P. A., (1997). The Company and the Product: Corporate Associations and Consumer Product Responses. The Journal of Marketing Vol. 61, No. 1 (Jan., 1997), pp. 68-84. Brown, L. D., & Kalegaonkar, A. (2002). Support organizations and the evolution of the NGO Sector. Nonprofit and Voluntary Sector Quarterly, 31(2), 231-258. Bryson, J., & Crosby, B. (1992). Leadership for the common good: Tackling public problems in a shared power world. San Francisco: Jossey Bass. Bryson, J. M., Crosby, B. 
C., & Middleton Stone, M. (2006). The design and implementation of cross- sector collaborations: Propositions from the literature. Public Administration Review, 66, 44-55. Burchell, J, & Cook, J. (2011, July 6-9). Deconstructing the myths of employer sponsored volunteering schemes. Paper presented at the 27th EGOS Colloquium in Gothenburg, Sweden (Theme 16). Burke, L., & Logsdon, J. M. (1996). How corporate social responsibility pays off. Long RangePlanning, 29 (4), 495-502. Burke Marketing Research (1980). Day-after Recall TV Commercial Testing. Columbus: Burke Inc. C&E (2010). Corporate-NGO Partnership Barometer Summary Report. Retrieved from http://www.candeadvisory.com/sites/default/files/report_abridged.pdf [Accessed January, 2011]. Cairns, B., Harris, M., & Hutchison, R. (2010, June 29). Collaboration in the voluntary sector: A meta- analysis. IVAR Anniversary Event. 73 Campbell, J. L. (2007). Why would corporations behave in socially responsible ways? An institutional theory of corporate social responsibility. The Academy of Management Review, 32(3), 946-967. Carroll, A. B. (1999). Corporate social responsibility: Evolution of a definitional construct. Business & Society, 38(3), 268-295. Carroll, A. B. (2006). Corporate social responsibility: A historical perspective. In M. J. Epstein & K. O. Hanson (Eds.), The accountable corporation: Corporate social responsibility (pp. 3-30). Westport, CT: Praeger Publishers. Carrigan, M. (1997). The great corporate giveaway - can marketing do good for the do-gooders? European Business Journal, 9(4), pp. 40–46. Castaldo, S., Perrini, F., Misani, N., & Tencati, A. (2009). The missing link between corporate social responsibility and consumer trust: The case of fair trade products. Journal of Business Ethnics, 84(1), 1- 15. Croteau, D. & Hicks, L. (2003). Coalition Framing and the challenge of a consonant frame pyramid: The case of collaborative response to homelessness. Social Problems, 50(2), 251-272. Christensen, C. M., Baumann, H., Ruggles, R., & Sadtler, T. M. (2006). Disruptive innovation for social change. Harvard Business Review, 84(12), 96-101. Clarke, A. (2007a, May 24). Cross sector collaborative strategic management: Regional sustainable development strategies. Presentation at the Scoping Symposium: The future challenges of cross sector interactions, London, England. Clarke, A. (2007b, April 19-20). Furthering collaborative strategic management theory: Process model and factors per phase. Presented at the Sprott Doctoral Symposium, Ottawa, Canada. Clarke, A., & Fuller, M. (2010). Collaborative strategic management: Strategy formulation and implementation by multi-organizational cross-sector social partnerships. Journal of Business Ethics, 94 (Supplement 1), 85-101. Collier, J., & Esteban, R. (1999). Governance in the participative organization: Freedom, creativity and ethics. Journal of Business Ethics, 21, 173-188. Commins , S . (1997). World vision international and donors: Too close for comfort. In M. Edwards & D. Hulme (Eds.), NGOs, states and donors: Too close for domfort? (pp. 140-155). Basingstoke/London: The Save the Children Fund. Cone (2004). Corporate citizenship study: Building brand trust. Retrieved from: http://www.coneinc.com/content10862004. Connell, J. P. & Kubisch, A. C. (1998). Applying a theory of change approach to the evaluation of comprehensive community initiatives: Progress, prospects and problems, In: Fulbright-Anderson, K.,74 Kubisch, A.C. and Connell, J. P. 
(eds) (1998), New Approaches to Evaluating Community initiatives, vol. 2: Theory, Measurements and Analysis (Washington, Dc: Aspen Institute). Cook, J. & Burchell, J. (2011, July 6-9). Deconstructing the myths of employer sponsored volunteering schemes. Paper presented at the 27th EGOS Colloquium in Gothenburg, Sweden (Theme 16). Cooperrider, D. Sorensen, P.F. Yaeger, T.F & Whitnet, D. (2001) Appreciative Inquiry. An emerging direction for organization development. Stipes. Cooper, T. L., Bryer, T. A., & Meek, J. C. (2006). Citizen-centered collaborative public management. Public Administration Review, 66, 76-88. Cornelious, N., & Wallace, J. (2010). Cross-sector partnerships: City regeneration and social justice. Journal of Business Ethics, 94 (Supplement 1), 71-84. Covey, J., & Brown, L. D. (2001). Critical co-operation: An alternative form of civil society- business engagement. IDR Reports, 17(1), 1-18. Crane, A. (1997). Rhetoric and reality in the greening of organizational culture. In G. Ledgerwood (Eds.), Greening the boardroom: Corporate environmental governance and business sustainability (pp.130-144). Sheffield: Greenleaf Publishing. Crane, A. (1998). Exploring green alliances. Journal of Marketing Management, 14(6), 559-579. Crane, A. (2000). Culture clash and mediation: Exploring the culture dynamics of business-NGO collaboration. In J. Bendell (Eds), Terms for endearment: Business, NGOs and sustainable development (pp. 163-177). Sheffield: Greenleaf Publishing. Crane, A. (2010). From governance to governance: On blurring boundaries. Journal of Business Ethics, 94 (Supplement 1), 17-19. Crane, A., & Matten, D. (2007). Business ethics: Managing corporate citizenship and sustainability in the age of globalization. Oxford: Oxford University Press. Cropper, S. (1996). Collaborative working and the issue of sustainability. In C. Huxham (Eds.), Creating Collaborative Advantage (pp. 80-100). London: Sage. Croteau, D., & Hick, L. (2003). Coalition framing and the challenge of a consonant frame pyramid: The case of a collaborative response to homelessness. Social Problems, 50, 251–272. Dalal-Clayton, B., & Bass, S. (2002). Sustainable development strategies: A resource handbook. London, The International Institute for Environment and Development: Earthscan Publications Ltd. Das, T. K., & Teng, B. S. (1998). Between trust and control: Developing confidence in partner cooperation in alliances. The Academy of Management Review, 23(3), 491-512. 75 Davies, R. & Dart, J. (2005). The ‘Most Significant Change’ (MSC) Technique. A Guide to Its Use. Version 1.00 – April 2005. Available from: http://www.mande.co.uk/docs/MSCGuide.pdf De Bakker, F. G. A., Groenewegen, P., & Den Hond, F. (2005). A bibliometric analysis of 30 years of research and theory on corporate social responsibility and corporate social performance. Business & Society, 44(3), 283-317. De Beers Group (2009). Report to Society 2009. Living up to diamonds. Retrieved from http://www.debeersgroup.com/ImageVault/Images/id_2110/scope_0/ImageVaultHandler.aspx Dees, J. G. (1998a). Enterprising nonprofits. Harvard Business Review, January-February, 55-67. Dees, J. G. (1998b). The meaning of ‘social entrepreneurship. Comments and suggestions contributed from the Social Entrepreneurship Funders Working Group, Center for the Advancement of Social Entrepreneurship. Fuqua School of Business: Duke University. Dees, J. G., & Anderson, B. B. (2003). Sector-bending: Blurring lines between nonprofit and for-profit. Society, 40(4), 16-27. 
Deloitte (2004). Deloitte volunteer IMPACT survey. Retrieved from: http://www.deloitte.com/view/en_US/us/Services/additional-services/chinese-services- group/039d899a961fb110VgnVCM100000ba42f00aRCRD.htm. Dew, N., Read, S., Sarasvathy, S. D., & Wiltbank, R. (2008). Effectual versus predictive logics in entrepreneurial decision-making: Differences between experts and novices. Journal of Business Venturing, 24, 287–309. Di Maggio, P., & Anheier, H. (1990). The sociology of the non-profit sector. Annual Review of Sociology, 16, 137-159. Dobbs, J. H. (1999). Competition’s new battleground: The integrated value chain. Cambridge, MA: Cambridge Technology Partners. Donaldson, T., & Preston, L. E. (1995). The stakeholder theory of the corporation: Concepts, evidence, and implications. Academy of Management Review, 20(1), 65-91. Dowling, B., Powell, M., & Glendinning, C. (2004) Conceptualising successful partnerships. Health and Social Care in the Community, 12(4), 309-317 Draulans, J., deMan, A. P., & Volberda, H. W. (2003). Building alliance capability: Managing techniques for superior performance. Long Range Planning, 36(2), 151-166. Drucker, P., E. (1989). What Business can Learn from Nonprofits. Harvard Business Review, July-August: 88-93.76 Ebrahim, A . (2003). Making sense of accountability: Conceptual perspectives for northern and southern nonprofits. Nonprofit Management and Leadership, 14(2), 191-212. Eccles, R. G., Newquist, S. C., & Schatz, R. (2007). Reputation and its risks. Harvard Business Review, 85(2), 104-114, 156. Edwards, M., & Hulme, D. (1995). Performance and accountability: Introduction and overview. In M. Edwards & D. Hulme (Eds.), Beyond the magic bullet: Non-governmental organizations-performance and cccountability (pp. 3-16). London: Earthscan Publications. Egri, C. P., & Ralston, D. A. (2008). Corporate responsibility: A review of international management research from 1998 to 2007. Journal of International Management, 14, 319–339. Eisingerich, A. B., Rubera, G., Seifert, M., & Bhardwaj, G. (2011). Doing good and doing better despite negative information? The role of corporate social responsibility in consumer resistance to negative information. Journal of Service Research, 14(1), 60-75. El Ansari, W., Phillips, C., & Hammick, M. (2001). Collaboration and partnership: Developing the evidence base. Health and Social Care in the Community, 9, 215–227. El Ansari, W., & Weiss, E. S. (2005). Quality of research on community partnerships: Developing the evidence base. Health Education Research, 21(2), 175-180. Elbers, W. (2004). Doing business with business: Development NGOs interacting with the corporate sector. Retrieved from http://www.evertvrmeer.nl/download.do/id/100105391/cd/true/ Elkington, J. (1997). Cannibals with forks: The triple bottom line of 21st century business. Oxford: Capstone Publishing. Elkington, J. (2004). The triple bottom line: Sustainability’s accountants. In M. J. Epstein & K. O. Hanson (Eds.), The accountable corporation: Corporate social responsibility (pp. 97-109). Westport, CT: Praeger Publishers. Elkington, J., & Fennell, S. (2000). Partners for sustainability. In J. Bendell (Eds.), Terms for endearment: Business, NGOs and sustainable development (pp. 150-162). Sheffield: Greenleaf Publishing. Emerson, J. (2003). The blended value proposition: Integrating social and financial returns. California Management Review, 45(4), 35-51. Endacott, R. W. J. (2003). Consumers and CSRM: A national and global perspective. 
Journal of Consumer Marketing, 21(3), 183-189. Epstein, M. J., & McFarlan, F. W. (2011). Joining a nonprofit board: What you need to know. San Francisco: Jossey-Bass. Farquason, A. (2000, November 11). Cause and effect. The Guardian.77 Finn, C. B. (1996). Utilizing stakeholder strategies for positive collaborative outcomes. In C. Huxham (Eds.), Creating Collaborative Advantage (pp. 152-164). London: Sage. Fiol, C. M., Pratt, M. G., & O’Connor, E. J. (2009). Managing intractable identity conflicts. Academy of Management Review, 34, 32–55. Forsstrom, B. (2005). Value Co-Creation in Industrial Buyer-Seller Partnerships – Creating and Exploiting Interdependencies An Empirical Case Study. ABO AKADEMIS FORLAG – ABO AKADIMI UNIVERSITY PRESS Fournier, D. (1995). Establishing evaluative conclusions: A distinction between general and working logic. New Directions for Evaluation, 68, 15-32. Freeman, R. E. (1984). Strategic management: A stakeholder approach. Boston: Pitman Publishing. Freeman, R. E. (1999). Divergent stakeholder theory. Academy of Management Review, 24, 233-236. Friedman, M. (1962). Capitalism and Freedom. Chicago: University of Chicago Press. Friedman, M. (1970, September 13). The social responsibility of business is to increase its profits. New York Times Magazine, 122-126. Galaskiewicz, J. (1985). Interorganizational relations. Annual Review of Sociology, 11, 281-304. Galaskiewicz, J. (1997). An urban grants economy revisited: Corporate charitable contributions in the Twin Cities, 1979-81, 1987-89. Administrative Science Quarterly, 42, 445-471. Galaskiewicz, J., & Sinclair Colman, M. (2006). Collaboration between corporations and nonprofit organizations. In R. Steinberg & W. W. Powel (Eds.), The non-profit sector: A research handbook (pp. 180- 206). New Haven, CT: Yale University Press. Galaskiewicz, J., & Wasserman, S. (1989). Mimetic processes within an interorganizational field: An empirical test. Administrative Science Quarterly, 34, 454-479. Galbreath, J. R. (2002). Twenty first century management rules: The management of relationships as intangible assets. Management Decision, 40(2), 116-126. Garriga, E., & Melé, D. (2004). Corporate social responsibility theories: Mapping the territory. Journal of Business Ethics, 53, 51-71. Gerde, V. W., & Wokutch, R. E. (1998). 25 years and going strong: A content analysis of the first 25 years of the social issues in management division proceedings. Business & Society, 37(4), 414-446. Geringer, J. M. (1991). Strategic determinants of partner selection criteria in international joint ventures. Journal of International Business Studies, 22, 41-62.78 Geringer, J.M., & Herbert, L. (1989). Measuring performance of international joint ventures. Journal of International Business Studies, 22, 249-263. Giving USA Foundation, 2010, Giving USA 2010: The Annual report of Philanthropy for the Year 2009, Indianapolis, Indiana: The Center on Philanthropy at Indiana University Glasbergen, P. (2007). Setting the scene: The partnership paradigm in the making. In P. Glasbergen, F. Biermann & A. P. J. Mol (Eds.), Partnerships, governance and sustainable development: Reflections on theory and practice (pp. 1-28). Cheltenham: Edward Elgar. Glasbergen, P., Biermann, F., & Mol, A. P. J. (2007). Partnerships, governance and sustainable development: Reflections on theory and practice. Cheltenham: Edward Elgar Publishing Limited. GlobeScan (2003). Corporate Social Responsibility Monitor. 
Retrieved from http://www.deres.org.uy/home/descargas/guias/GlobalScan_Monitor_2003.pdf GlobeScan (2005). Corporate Social Responsibility Monitor. Retrieved from http://www.deres.org.uy/home/descargas/guias/GlobalScan_Monitor_2005.pdf Glynn, M. A. (2000). When cymbals become symbols: Conflict over organizational identity within a symphony orchestra. Organization Science, 11, 285–298. Godfray, P. C., & Hatch, N. W. (2007). Researching corporate social responsibility: An agenda for the 21st century. Journal of Business Ethics, 70, 87-98. Godfrey, P. C., Merrill, C. B., & Hansen, J. M. (2009). The relationship between corporate social responsibility and shareholder value: An empirical test of the risk management hypothesis. Strategic Management Journal, 30(4), 425-445. Goodpaster, K. E., & Matthews, J. B. (1982). Can a corporation have a conscience? Harvard Business Review, January-February, 132-141. Goffman, E. (1983). The interaction order. American Sociological Review, 48(1), 1–17. Googins, B. K., Mirvis, P. H., & Rochlin, S. A. (2007). Beyond good company: Next generation corporate citizenship. New York: Palgrave MacMillan. Googins, B. K., & Rochlin, S. A. (2000). Creating the partnership society: Understanding the rhetoric and reality of cross-sectoral partnerships. Business and Society Review, 105(1), 127-144. Gourville, J. T., & Rangan, V. K. (2004). Valuing the cause marketing relationship. California Management Review, 47(1), 38-57. Granovetter, M. (1985). Economic action and social structure: The problem of embeddedness. American Journal of Sociology, 91, 481-510.79 Gray, B. (1989). Collaborating. San Francisco: Jossey-Bass. Gray, S., & Hall, H. (1998). Cashing in on charity’s good name. The Chronicle of Philanthropy, 25, 27-29. Green, T., & Peloza, J. (2011). How does corporate social responsibility create value for consumers? Journal of consumer marketing, 28(1), 48-56. Greenall, D., & Rovere, D. (1999). Engaging stakeholders and business-NGO partnerships in developing countries. Ontario: Centre for innovation in Corporate Social Responsibility. Greening, D. W., & Turban, D. B. (2000). Corporate social performance as a competitive advantage in attracting a quality workforce. Business & Society, 39(3), 254-280. Griffin, J. J., & Mahon, J. F. (1997). The corporate social performance and corporate financial performance debate: Twenty-five years of incomparable research. Business & Society, 36(1), 5-31. Grolin, J. (1998). Corporate legitimacy in risk society: The case of Brent Spar. Business Strategy and the Environment, 7(4), 213-222. Gunderson, L.H. and Holling, C. S. (2001). Panarchy: understanding transformations in humans and natural systems. Washington DC: Island Press. Haddad, K. A., & Nanda, A. (2001). The American Medical Association-Sunbeam deal (A - D). Harvard Business School Case Study. Halal, W. E. (2001). The collaborative enterprise: A stakeholder model uniting probability and responsibility. Journal of Corporate Citizenship, 1(2), 27-42. Hamman, R., & Acutt, N. (2003). How should civil society (and the government) respond to ‘corporate social responsibility’? A critique of business motivations and the potential for partnerships. Development Southern Africa, 20(2), 255-270. Hammond, A. L., Kramer, W. J., Katz, R. S., Tran, J. T., & Walker, C. (2007). The next four billion: Market size and business strategy at the base of the pyramid. Washington DC: International Finance Corporation/ World Resources Institute. Harbison, J. R., & Pekar, P. (1998). 
Smart alliances: A practical guide to repeatable success. San Francisco: Jossey-Bass. Hardy, B., Hudson, B., & Waddington, E. (2000). What makes a good partnership? Leeds: Nuffield Institute. Hardy, C., Lawrence, T. B., & Phillips, N. (2006). Swimming with sharks: Creating strategic change through multi-sector collaboration. International Journal of Strategic Change Management, 1, 96-112. Hardy, C., Phillips, N., & Lawrence, T. B. (2003). Resources, knowledge and influence: The organizational effects of interorganizational collaboration. Journal of Management Studies, 40, 321–47.80 Harris, L. C., & Crane, A. (2002). The greening of organizational culture: Managers’ views on the depth, degree and diffusion of change. Journal of Organizational Change Management, 15(3), 214-234. Hartwich, F., Gonzalez, C., & Vieira, L. F. (2005). Public-private partnerships for innovation-led growth in agrichains: A useful tool for development in Latin America? ISNAR Discussion Paper, 1. Washington, DC: International Food Policy Research Institute. Head, B. W. (2008). Assessing Network-based collaborations. Effectiveness for whom? Public Management Review, 10(6), pp. 733-749. Heal, G. (2008). When principles pay: Corporate social responsibility and the bottom line. New York: Columbia University Press. Heap, S. (1998). NGOs and the private sector: Potential for partnerships? INTRAC Occasional Papers Series, 27. Hernes, G. (1976). Structural Change in social processes. The American journal of Sociology, 82(3),pp. 513-547. Heap, S. (2000). NGOs engaging with business: A world of difference and a difference to the world. Oxford: Intrac Publications. Heath, R. L. (1997). Strategic issues management: Organizations and public policy challenge. Thousand Oaks, CA: Sage. Hendry, J. R. (2006). Taking aim at business: What factors lead environmental non-governmental organizations to target particular firms? Business & Society, 45(1), 47-86. Heugens, P. P. M. A. R. (2003). Capability building through adversarial relationships: A replication and extension of Clarke and Roome (1999). Business Strategy and the Environment, 12, 300-312. Heuer, M. (2011). Ecosystem cross-sector collaboration: Conceptualizing an adaptive approach to sustainable governance. Business Strategy and the Environment, 20, 211-221. Hill, C. W. L., & Jones, T. M. (1985). Stakeholder agency theory. Journal of Management Studies, 29(2), 131-154. Hiscox, M. & Smyth, N. (2008). Is there Consumer Demand for Improved Labor Standards? Evidence from Field Experiments in Social Product Labeling Version. Harvard University Research Paper, 3/21/08. Hitt, M. A., Ireland, R. D., Sirmon, D. G., & Trahms, C. (2011). Strategic entrepreneurship: Creating value for individuals, organizations, and society. Academy of Management Perspectives, 25(2), 57-75.81 Hoeffler, S., & Keller, K. L. (2002). Building brand equity through corporate societal marketing. Journal of Public Policy & Marketing, 21(1), 78-89. Hoffman, W. H. (2005). How to manage a portfolio of alliances. Long Range Planning, 38(2), 121-143. Holmberg , S. R., & Cummings, J. L. (2009). Building successful strategic alliances: Strategic process and analytical tool for selecting partner industries and firms. Long Range Planning, 42(2), 164-193. Holmes, S., & Moir, L. (2007). Developing a conceptual framework to identify corporate innovations through engagement with non-profit stakeholders. Corporate Governance, 7(4), 414-422. Hood, J. N., Logsdon J. M., & Thompson J. K. (1993). 
Collaboration for social problem-solving: A process model. Business and Society, 32(1), 1–17. Hustvedt, G., & Bernard, J. C. (2010). Effects of social responsibility labelling and brand on willingness to pay for apparel. International Journal of Consumer Studies, 34(6), 619-626. Huxham, C. (1993). Pursuing collaborative advantage. The Journal of Operational Research Society, 44(6), 599–611. Huxham, C. (1996). Collaboration and collaborative advantage. In C. Huxham (Eds.), Creating Collaborative Advantage (pp. 1-18). London: Sage. Huxham, C., & Vangen, S. (2000). Leadership in the shaping the implementation of collaborative agendas: How things happen in a (not quite) joined up world. Academy of Management Journal, 43(6) 1159-1175. IEG (2011). Sponsorship spending: 2010 proves better than expected; Bigger gains set for 2011. IEG Insights. Retrieved from www.sponsorship.com/ieg-insights/sponsorship-spending. Impact (2010). Impact Research project: Impact Measurement and Performance Analysis of CSR. Available from: http://www.eabis.org/projects/project-detail-view.html?uid=18 Accessed: January 2011. Irwin, R. L., & Asimakopoulos, M. K. (1992). An approach to the evaluation and selection of sport sponsorship proposals. Sport Marketing Quarterly, 1(2), 43-51. Israel, B., Schulz, A. J., Parker, E. A., & Becker, A. B. (1998). Review of community-based research: Assessing partnership approaches to improve public health. Annual Review of Public Health, 19, 173- 202. Itami, H., & Roehl, T. (1987). Mobilizing invisible assets. Cambridge, MA: Harvard University Press. Jackson, I., & Nelson, J. (2004). Profits with principles: Seven strategies for creating value with values. Currency/Doubleday. 82 Jamali, D., & Keshishian T. (2009). Uneasy alliances: Lessons learned from partnerships between businesses and NGOs in the context of CSR. Journal of Business Ethics, 84(2), 277–295. Jensen, M. C. (2002). Value maximization, stakeholder theory, and the corporate objective function. Business Ethics Quarterly, 12(2), 235-256. Jones, T. M. (1995). Instrumental stakeholder theory: A synthesis of ethics and economics. Academy of Management Review, 20(2), 404-437. Jones, D. A. (2007). Corporate volunteer programs and employee responses: How serving the community also serves the company. Socially Responsible Values on Organizational Behaviour Interactive Paper Session at the 67th annual meeting of the Academy of Management. Philadelphia, United States, 6-7 August 2006. Jones, C., Hesterly, W., & Borgatti, S. ( 1997). A general theory of network governance: Exchange conditions and social mechanisms. Academy of Management Review, 22(4), 911-945. Jones, T. M., & Wicks, A. C. (1999). Convergent stakeholder theory. Academy of Management Review, 24 (2), 206-221. Jorgensen, M. (2006, August 14). Evaluating cross-sector partnerships. Working paper presented at the conference: Public-private partnerships in the post WWSD context, Copenhagen Business School. Kaku, R. (1997). The path of Kyosei. Harvard Business Review, 75(4), 55-63. Kania, J. & Krammer, M (2010). Collective impact. Stanford Social Innovation Review. Winter. Kanter, R, M. (1983). The Change Masters: Innovation for productivity in the American corporation. New York: Simon and Schuster. Kanter, R. M. (1994). Collaborative advantage: Successful partnerships manage the relationship, not just the deal. Harvard Business Review, July-August, 96-108. Kanter, R. M. (1999). From spare change to real change: The social sector as beta site for business innovation. 
Harvard Business Review, May-June, 122-132. Kaplan, S. (2008). Framing contests: Strategy making under uncertainty. Organization Science, 19, 729– 752. Kaplan, R. S., & Murray, F. ( 2008). Entrepreneurship and the construction of value in biotechnology. In N. Philips, D. Griffiths & G. Sewell (Eds.), Technology and organization: essays in honour of Joan Woodward (Research in the sociology of organizationorganizations). Bingley: Emerald Group. Kaplan, R.S. & Norton, D.P. (1992). The balanced scorecard—Measures that drive performance. Harvard Business Review, January–February, 71–79. Kaplan, R. S., & Norton, D. P. (2001). Transforming the balanced scorecard from performance measurement to strategic management: Part I. Accounting Horizons, 15(1), 87–104.83 King, A. (2007). Cooperation between corporations and environmental groups: A transaction cost perspective. Academy of Management Review, 32, 889-900. Koehn, N. F., & Miller, K. (2007). John Mackey and whole foods market. Harvard Business School Case Study (9-807-111). Kolk, A. (2004). MVO vanuit bedrijfskundig en beleidsmatig perspectief, het belang van duurzaam management. Management en Organisatie, 4(5), pp.112-126. Kolk, A., Van Dolen, W., & Vock, M. (2010). Trickle effects of cross-sector social partnerships. Journal of Business Ethics, 94 (Supplement 1), 123-137. Kolk, A., Van Tulder, R., & Kostwinder, E. (2008). Partnerships for development. European Management Journal, 26(4), 262-273. Kolk, A., Van Tulder, R., & Westdijk, B. (2006). Poverty allevation as business strategy? Evaluating commitments of frontrunner multinational corporations. World Development, 34(5), 789–801. Koontz, T. M. and Thomas, C. W. (2006) What do we know and need to know about the environmental outcomes f collaborative management? Public Administration Review, December, 111-121. Korngold, A. (2005). Leveraging goodwill: Strengthening nonprofits by engaging businesses. San Francisco: Jossey-Bass. Kotler, P., & Lee, N. R. (2009). Up and out of poverty: The social marketing solution. Uppersaddle River, New Jersey: Pearson Education Publishing. Kotler, P., & Zaltman, G. (1971). Social marketing: An approach to planned social change. The Journal of Marketing, 35(3), 3-12. Kourula, A. & Laasonen, S. (2010). Nongovernmental Organizations in Business and Society, Management, and International Business – Review and Implications 1998-2007. Business and Society, 49 (1) 3-5 Koza, M. P., & Lewin, A. Y. (2000). The co-evolution of strategic alliances. Organization Science, 9(3), 255-264. Kumar, R., & Nti, K. O. (1998). Differential learning and interaction in alliance dynamics: A process and outcome discrepancy model. Organization Science, 9, 356-367. Lawrence, S., & Mukai, R. (2010). Foundation growth and giving estimates: Current outlook. Foundations Today Series, Foundation Center. Le Ber, M. J., & Branzei, O. (2010a). Towards a critical theory value creation in cross-sector partnerships. Organization, 17(5), 599-629. Le Ber, M. J., & Branzei, O. (2010b). (Re)forming strategic cross-sector partnerships: Relational processes of social innovation. Business & Society, 49(1): 140-172.84 Le Ber, M. J., & Branzei, O. (2010c). Value frame fusion in cross sector interactions. Journal of Business Ethics, 94 (Supplement 1), 163-195. Leonard, L. G. (1998). Primary health care and partnerships: Collaboration of a community agency, health department, and university nursing program. Journal of Nursing Education, 37(3), 144–151. Lepak, D. P., Smith, K. G., & Taylor, M. S. (2007). 
Value creation and value capture: A multilevel perspective. Academy of Management Review, 32(1), 180-194. Levy, R. (1999). Give and take: A candid account of corporate philanthropy. Boston: Harvard Business School Press. Lim, T. (2010). Measuring the value of corporate philanthropy: Social impact, business benefits, and investor returns. New York: Committee Encouraging Corporate Philanthropy. Lockett, A., Moon, J., & Visser, W. (2006). Corporate social responsibility in management research: Focus, nature, salience and sources of influence. Journal of Management Studies, 43(1), 115-136. Long, F. J., & Arnold, M. B. (1995). The power of environmental partnerships. Fort Worth: Dryden. Logsdon, J. M. (1991). Interests and interdependence in the formation of social problem-solving collaborations. Journal of Applied Behavioral Science, 27(1), 23-37. London, T., & Rondinelli, D. A. (2003). Partnerships for learning: Managing tensions in nonprofit organizations’ alliances with corporations. Stanford Social Innovation Review, 1(3), 28-35. London, T., & Hart, S. L. (2011). Next generation business strategies for the base of the pyramid: New approaches for building mutual value. Upper Saddle River, New Jersey: Pearson Education. Makadok, R. (2001). Towards a synthesis of a resource-based view and dynamic capability views of rent creation. Strategic Management Journal, 22(5), 387-401. Makower, J. (1994). Beyond the bottom line: Putting social responsibility to work for your business and the World. New York: Simon & Schuster. Mancuso Brehm, V. (2001). Promoting effective north-south NGO partnerships: A comparative study of 10 European NGOs. INTRAC Occasional Papers, 35, 1-75. Margolis, J. D., & Walsh, J. P. (2003). Misery loves companies: Rethinking social initiatives by business. Administrative Science Quarterly, 48(2), 268-305. Marin, L., Ruiz, S., & Rubio, A. (2009). The Role of identity salience in the effects of corporate social responsibility on consumer behavior. Journal of Business Ethics, 84, 65-78. Markwell, S., Watson, J., Speller, V., Platt, S., & Younger, T. (2003). The working partnership. Book 3: In- depth assessment. Health Development Agency, NHS. Retrieved from http://www.nice.org.uk/niceMedia/documents/working_partnership_3.pdf85 Márquez, P., Reficco, E., & Berger, G. (2010). Socially inclusive business: Engaging the poor through market initiatives in Iberoamerica. Cambridge: Harvard University Press. Marquis, C., Glynn, M. A., & Davis, G. F. (2007). Community isomorphism and corporate social action. The Academy of Management Review, 32(3), 925-945. Martin, R. L. (2002). The virtue matrix: Calculating the return on corporate social responsibility. Harvard Business Review, 80(3), 68-75. Martin, R. L., & Osberg, S. (2007). Social entrepreneurship: The case for definition. Stanford Social Innovation Review, Spring, 29-39. McCann, J. E. (1983). Design guidelines for social problem-solving interventions. The Journal of Applied behavioural Science, 19(2), 177-189. McFarlan, F. W. (1999). Working on nonprofit boards: Don't assume the shoe fits. Harvard Business Review, November-December, 65-80. McLaughlin, T. A. (1998). Nonprofit mergers & alliances: A strategic planning guide. New York: John Wiley & Sons Meadowcroft, J. (2007). Democracy and accountability: The challenge for cross-sectoral partnerships. In P. Glasbergen, F. Biermann & A. P. J. Mol (Eds.), Partnerships, governance and sustainable development (pp. 194-213). Cheltenham: Edward Elgar. Meenaghan, T. (1991). 
The role of sponsorship in the marketing communications mix. International Journal of Advertising, 10, 35-47. Millar, C., Choi, J. C., & Chen, S. (2004). Global strategic partnerships between MNEs and NGOs: Drivers of change and ethical issues. Business and Society Review, 109(4), 395-414. Milne, G. R., Iyer, E.., & Gooding-Williams, S. (1996). Environmental organization alliance relationships within and across nonprofit, business, and government sectors. Journal of Public Policy & Marketing, 15 (2), 203-215. Mitchell, J. (1998). Companies in a world of conflict: NGOs, sanctions and corporate responsibility. London: Royal Institute of International Affairs–Earthscan. Mitchell, R. K., Agle, B. R., & Wood, D. J. (1997). Towards a theory of stakeholder identification and salience: Defining the principle of who and what really counts. Academy of Management Review, 22(4), 853-886.86 Montgomery, D. B., & Ramus, C. A. (2007). Including corporate social responsibility, environmental sustainability, and ethics in calibrating Mba job preferences. Stanford Graduate School of Business. Research Collection Lee Kong Chian School of Business, paper 939. Mowjee, T. (2001). NGO – donor funding relationships: UK government and European community finding for the humanitarian aid activities of UK NGOs from 1990 – 1997. Unpublished PhD Thesis, London School of Economics, Centre for Civil Society. Mulgan, G. (2010). Measuring social value. Stanford Social Innovation Review (Summer). Retrieved from http://www.ssireview.org/articles/entry/measuring_social_value/. Murphy, M., & Arenas, D. (2010). Through indigenous lenses: Cross-sector collaborations with fringe stakeholders. Journal of Business Ethics, 94 (Supplement 1), 103-121. Muthuri, J. N., Matten, D., & Moon, J. (2009). Employee volunteering and social capital: Contributions to corporate social Responsibility. British Journal of Management, 20, 75–89. Najam, A. (1996). NGO accountability: A conceptual framework. Development Policy Review, 14 (December), 339-353. NCDO. 2006. Measuring the contribution of the private sector to the achieving the Millennium Development Goals. Version II. Amsterdam: National Committee for International Cooperation and Sustainable Development. Ndegwa, S. (1996). The two faces of civil society: NGOs and politics in Africa. West Hartford, CT: Kumarian. Nelson, J., & Jenkins, B. (2006). Investing in social innovation: Harnessing the potential for partnership between corporations and social entrepreneurs. In F. Perrini (Eds.), The new social entrepreneurship: What awaits social entrepreneurial ventures? (pp. 272-280). Cheltenham: Edgar Elgard Publishing. Newell, P. (2002). From responsibility to citizenship: Corporate accountability for development. IDS Bulletin, 33(2), 91–100. Nohria, N. (1992). Is a network perspective a useful way of studying organizations? In N. Nohria & R. G. Eccles (Eds.), Networks and organizations: Structure, form, and action (pp. 1-22). Boston, MA: Harvard Business School Press. Noy, D. (2009). When framing fails: Ideas, influence, and resources in San Francisco’s homeless policy field. Social Problems, 56, 223–242. Oakley, P., Pratt, B., & Clayton, A. (1998). Outcomes and impact: Evaluating change in the social development. INTRAC NGO Management & Policy Series, 6. 87 O’Cass, A. & Ngo, L. V. (2010) Examining the firm's value creation process: A managerial perspective of the firm's value offering strategy and performance. 
British Journal of Management Early View (Online Version of Record published 11 May 2010 before inclusion in an issue) O’Donohoe, N. Leijonhufvud, C., Saltuk, Y., Bugg-Levine, A. & Brandeburg, M. (2010). Impact Investments. An emerging Asset class. J.P. Morgan & Rockefeller Foundation, November 2010. O’Flynn, M. (2010). Impact Assessment: Understanding and assessing our contributions to change. Intrac-Inte4rnational NGO Training and Research Centre, M&E Paper 7. Oliver, C. (1990). Determinants of interorganizational relationships: Integration and future directions. Academy of Management Review, 15(2), 241-265. Orlitzky, M., Schmidt, F. L., & Rynes, S. L. (2003). Corporate social and financial performance: A meta- analysis. Organization Studies, 24(3), 403-441. Owen, J. M. & Rogers, P. J. (1999). Program Evaluation: forms and Approaches. Sage. Paine, L. S. (2003). Value shift: Why companies must merge social and financial imperatives to achieve superior performance. New York: McGraw-Hill. Pangarkar, N. (2003). Determinants of alliance duration in uncertain environments: The case of the biotechnology sector. Long Range Planning, 36(3), 269-284. Pearce, J. A., & Doh, J. P. (2005). The high impact of collaborative social initiatives. Sloan Management Review, 46(3), 30-38. Peloza, J. (2009). The challenge of measuring financial impacts from investments in corporate social performance. Journal of Management, 25(6), 1518-1541. Peloza, J., & Shang, J. (2010). How can corporate social responsibility activities create value for stakeholders? A systematic review. Journal of the Academy of Marketing Science, 39, 117-135. Peterson, D. K. (2004). Benefits of participation in corporate volunteer programs: Employees’ perceptions. Personnel Review, 33(6), 615-627. Pfeffer, J., & Salancik, G. (1978). The external control of organizations: A resource dependence perspective. New York: Harper & Row. Plowman, D. A., Baker, L. T., Kulkarni, M., Solansky, S. T., & Travis, D. V. (2007). Radical Change accidentally: The emergence and amplification of small change. Academy of Management Journal, 50, 512-543.88 Polman, R. (2010). The remedies for capitalism. Retrieved from http://www.mckinseyquarterly.com/spContent/2011_04_05a.htm. Porter, M. E. (2010, June 2-3). Creating shared value: The role of corporation in creating economic and social development. Speech at the CECP Corporate Philanthropy Summit, New York. Retrieved from http://www.youtube.com/watch?v=z2oS3zk8VA4. Porter, M. E., & Kramer, M. R. (2002). The competitive advantage of corporate philanthropy. Harvard Business Review, December, 5-16. Porter, M. E., & Kramer, M. R. (2006). Strategy & society: The link between competitive advantage and corporate social responsibility. Harvard Business Review, December, 78-92. Porter, M. E., & Kramer, M. R. (2011). Shared value: How to reinvent capitalism – and unleash a wave of innovation and growth. Harvard Business Review, January-February, 62-77. Portocarrero, F., & Delgado, Á. J. (2010). Inclusive business and social value creation. In P. Márquez, E. Reficco & G. Berger (Eds.), Socially inclusive business: Engaging the poor through market initiatives in Iberoamerica (pp. 261-293). Cambridge: Harvard University Press. Prahalad, C. K. (2005). The fortune at the bottom of the pyramid: Eradicating poverty through profits. Upper Saddle River, New Jersey: Wharton School Publishing. Prahalad, C. K., & Hamel, G. (1990). The core competence of the corporation. Harvard Business Review, May-June, 71-91. Prahalad, C. 
CLUSTERS OF ENTREPRENEURSHIP
NBER WORKING PAPER SERIES

CLUSTERS OF ENTREPRENEURSHIP

Edward L. Glaeser, William R. Kerr, Giacomo A.M. Ponzetto

Working Paper 15377
http://www.nber.org/papers/w15377

NATIONAL BUREAU OF ECONOMIC RESEARCH
1050 Massachusetts Avenue
Cambridge, MA 02138
September 2009

Comments are appreciated and can be sent to eglaeser@harvard.edu, wkerr@hbs.edu, and gponzetto@crei.cat. Kristina Tobio provided excellent research assistance. We thank Zoltan J. Acs, Jim Davis, Mercedes Delgado, Stuart Rosenthal, Will Strange, and participants of the Cities and Entrepreneurship conference for advice on this paper. This research is supported by Harvard Business School, the Kauffman Foundation, the National Science Foundation, and the Innovation Policy and the Economy Group. The research in this paper was conducted while the authors were Special Sworn Status researchers of the US Census Bureau at the Boston Census Research Data Center (BRDC). Support for this research from NSF grant (ITR-0427889) is gratefully acknowledged. Research results and conclusions expressed are our own and do not necessarily reflect the views of the Census Bureau or NSF. This paper has been screened to insure that no confidential data are revealed. Corresponding author: Rock Center 212, Harvard Business School, Boston, MA 02163; 617-496-7021; wkerr@hbs.edu. The views expressed herein are those of the author(s) and do not necessarily reflect the views of the National Bureau of Economic Research.

© 2009 by Edward L. Glaeser, William R. Kerr, and Giacomo A.M. Ponzetto. All rights reserved. Short sections of text, not to exceed two paragraphs, may be quoted without explicit permission provided that full credit, including © notice, is given to the source.

Clusters of Entrepreneurship
Edward L. Glaeser, William R. Kerr, and Giacomo A.M. Ponzetto
NBER Working Paper No. 15377
September 2009
JEL No. J00, J2, L0, L1, L2, L6, O3, R2

ABSTRACT

Employment growth is strongly predicted by smaller average establishment size, both across cities and across industries within cities, but there is little consensus on why this relationship exists. Traditional economic explanations emphasize factors that reduce entry costs or raise entrepreneurial returns, thereby increasing net returns and attracting entrepreneurs. A second class of theories hypothesizes that some places are endowed with a greater supply of entrepreneurship. Evidence on sales per worker does not support the higher returns for entrepreneurship rationale. Our evidence suggests that entrepreneurship is higher when fixed costs are lower and when there are more entrepreneurial people.

Edward L. Glaeser, Department of Economics, 315A Littauer Center, Harvard University, Cambridge, MA 02138 and NBER, eglaeser@harvard.edu
William R. Kerr, Rock Center 212, Harvard Business School, Boston, MA 02163, wkerr@hbs.edu
Giacomo A.M. Ponzetto, CREI - Universitat Pompeu Fabra, C/ Ramon Trias Fargas, 25-27, 08005 Barcelona, Spain, gponzetto@crei.cat

1 Introduction

Economic growth is highly correlated with an abundance of small, entrepreneurial firms. Figure 1 shows that a 10% increase in the number of firms per worker in 1977 at the city level correlates with a 9% increase in employment growth between 1977 and 2000. This relationship is even stronger looking across industries within cities.
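One bookkeeping point helps connect Figure 1 to the establishment-size results that follow: firms per worker is the reciprocal of average establishment size, so, assuming the Figure 1 relationship is fit in logs, its elasticity carries over to establishment size with only a sign change:

\[
\ln\!\left(\tfrac{\text{firms}}{\text{workers}}\right) = -\,\ln\!\left(\tfrac{\text{workers}}{\text{firm}}\right)
\quad\Longrightarrow\quad
\frac{\partial \ln(\text{growth})}{\partial \ln(\text{average establishment size})} \approx -0.9
\;\text{ when }\;
\frac{\partial \ln(\text{growth})}{\partial \ln(\text{firms per worker})} \approx 0.9 .
\]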
This relationship has been taken as evidence for competition spurring technological progress (Glaeser et al., 1992), product cycles where growth is faster at earlier stages (Miracky, 1993), and the importance of entrepreneurship for area success (Acs and Armington, 2006; Glaeser, 2007). Any of these interpretations is compatible with Figure 1's correlation, however, and the only thing that we can be sure of is that entrepreneurial clusters exist in some areas but not in others.

We begin by documenting systematically some basic facts about average establishment size and new employment growth through entrepreneurship. We analyze entry and industrial structures at both the region and city levels using the Longitudinal Business Database. Section 2 confirms that the strong correlation in Figure 1 holds true under stricter frameworks and when using simple spatial instruments for industrial structures. A 10% increase in average establishment size in 1992 associates with a 7% decline in subsequent employment growth due to new startups. Employment growth due to facility expansions also falls by almost 5%. We further document that these reductions come primarily through weaker employment growth in small entrants.

What can explain these spatial differences? We first note that the connection between average establishment size and subsequent entrepreneurship is empirically stronger at the city-industry level than on either dimension individually. This suggests that simple theories emphasizing just industry-wide or city-wide forces are insufficient. Theories must instead build upon particular city-industry traits or on endogenous spatial sorting and organizational forms due to interactions of city traits with industry traits.

We consider three broad rationales. The first two theories emphasize spatial differences in net returns to entrepreneurship, while the last theory emphasizes spatial differences in the supply of entrepreneurs. The former theories are more common among economists. They assume that entrepreneurs choose locations and compete within a national market, so that the supply of entrepreneurship is constant over space. This frictionless setting would not hold for concrete manufacturing, of course, but would be a good starting point for many industries. Entrepreneurship is then evident where firm profits are higher or where fixed costs are lower, either of which increases the net returns to opening a new business. These spatial differences could be due to either exogenous or endogenous forces.

To take Silicon Valley as an example, one story would suggest that Silicon Valley's high rate of entrepreneurship over the past 30 years was due to abnormal returns in California's computer sector as the industry took off. These returns would need to have been greater than California's and the computer industry's returns generally, perhaps descending from a technological breakthrough outside of the existing core for the industry (e.g., Duranton, 2007; Kerr, this issue). On the other hand, Saxenian's (1994) classic analysis of Silicon Valley noted its abundance of smaller, independent firms relative to Boston's Route 128 corridor. Following Chinitz (1961) and Jacobs (1970), Saxenian argued that these abundant small firms themselves caused further entrepreneurship by lowering the effective cost of entry through the development of independent suppliers, venture capitalists, entrepreneurial culture, and so on.
While distinct, both of these perspectives argue that spatial differences in net returns to entrepreneurship are responsible for the differences in entrepreneurship rates that we see empirically. An alternative class of theories, which Chinitz also highlighted, is that the supply of entrepreneurship differs across space. Heterogeneity in supply may reflect historical accident or relatively exogenous variables. William Shockley's presence in Silicon Valley was partly due to historical accident (Shockley's mother), and entrepreneurs can be attracted to California's sunshine and proximity to Stanford independent of differences in net returns. Several empirical studies find entrepreneurs are more likely to be from their region of birth than wage workers, and that local entrepreneurs operate stronger businesses (e.g., Figueiredo et al., 2002; Michelacci and Silva, 2007). Immobile workers may possess traits that lend them to entrepreneurship (e.g., high human capital). Although quite different internally, these theories broadly suggest that semi-permanent differences in entrepreneurial supply exist spatially.1

1 These explanations are not mutually exclusive, especially in a dynamic setting. Areas that develop entrepreneurial clusters due to net returns may acquire attributes that promote a future supply of entrepreneurs independent of the factors.

While theories of the last kind are deserving of examination, they do not fit easily into basic economic models that include both firm formation and location choice. Section 3 presents just such a model that draws on Dixit and Stiglitz (1977). The baseline model illustrates the first class of theories that focus on the returns to entrepreneurship, as well as the difficulties of reconciling heterogeneity in entrepreneurial supply with the canonical framework of spatial economics. Two basic, intuitive results are that there will be more startups and smaller firms in sectors or areas where the fixed costs of production are lower or where the returns to entrepreneurship are higher. In the model, higher returns are due to more inelastic demand. A third result formalizes Chinitz's logic that entrepreneurship will be higher in places that have exogenously come to have more independent suppliers. Multiple equilibria are possible where some cities end up with a smaller number of vertically integrated firms, like Pittsburgh, and others end up with a larger number of independent firms.

But, our model breaks with Chinitz by assuming a constant supply of entrepreneurs across space. While we assume that skilled workers play a disproportionately large role in entrepreneurship, we also require a spatial equilibrium that essentially eliminates heterogeneity in entrepreneurship supply. In a sense, the model and our subsequent empirical work show how far one can get without assuming that the supply of entrepreneurship differs across space (due to one or more of the potential theories). We operationalize this test by trying to explain away the average establishment size effect.

Section 4 presents evidence on these hypotheses. Our first tests look at sales per worker among small firms as a proxy for the returns to entrepreneurship. The strong relationship between initial industry structure and subsequent entry does not extend to entrepreneurial returns.
While some entrepreneurial clusters are likely to be demand driven, the broader patterns suggest that higher gross returns do not account for the observed link between lower initial establishment size and subsequent entry prevalent in all sectors. We likewise confirm that differences in product cycles or region-industry age do not account for the patterns. These results are more compatible with views emphasizing lower fixed costs or a greater supply of entrepreneurs.

Our next two tests show that costs for entrepreneurs matter. Holding city-industry establishment size constant, subsequent employment growth is further aided by small establishments in other industries within the city. This result supports the view that having small independent suppliers and customers is beneficial for entrepreneurship (e.g., Glaeser and Kerr, 2009). We find a substantially weaker correlation between city-level establishment size and the facility growth of existing firms, which further supports this interpretation. We also use labor intensity at the region-industry level to proxy for fixed costs. We find a strong positive correlation between labor intensity and subsequent startup growth, which again supports the view that fixed costs are important. However, while individually powerful, neither of these tests explains away much of the basic establishment size effect.

We finally test sorting hypotheses. The linkage between employment growth and small establishment size is deeper than simple industry-wide or city-wide forces like entrepreneurs generally being attracted to urban areas with lots of amenities. Instead, as our model suggests, we look at interactions between city-level characteristics and industry-level characteristics. For example, the model suggests that entrepreneurship will be higher and establishment size lower in high amenity places among industries with lower fixed costs. The evidence supports several hypotheses suggested by the model, but controlling for different forces again does little to explain away the small establishment size effect. Neither human capital characteristics of the area nor amenities can account for much of the observed effect.

In summary, our results document the remarkable correlation between average initial establishment size and subsequent employment growth due to startups. The evidence does not support the view that this correlation descends from regional differences in demand for entrepreneurship. The data are more compatible with differences in entrepreneurship being due to cost factors, but our cost proxies still do not explain much of the establishment size effect. Our results are also compatible with the Chinitz view that some places just have a greater supply of entrepreneurs, although this supply must be something quite different from the overall level of human capital. We hope that future work will focus on whether the small establishment size effect reflects entrepreneurship supply or heterogeneity in fixed costs that we have been unable to capture empirically.2

2 Clusters of Competition and Entrepreneurship

We begin with a description of the Longitudinal Business Database (LBD). We then document a set of stylized facts about employment growth due to entrepreneurship. These descriptive pieces particularly focus on industry structure and labor intensity to guide and motivate the development of our model in Section 3.

2.1 LBD and US Entry Patterns

The LBD provides annual observations for every private-sector establishment with payroll from 1976 onward.
The Census Bureau data are an unparalleled laboratory for studying entrepreneurship rates and the life cycles of US firms. Sourced from US tax records and Census Bureau surveys, the micro-records document the universe of establishments and firms rather than a stratified random sample or published aggregate tabulations. In addition, the LBD lists physical locations of establishments rather than locations of incorporation, circumventing issues related to higher legal incorporations in states like Delaware. Jarmin and Miranda (2002) describe the construction of the LBD.

The comprehensive nature of the LBD facilitates complete characterizations of entrepreneurial activity by cities and industries, types of firms, and establishment entry sizes. Each establishment is given a unique, time-invariant identifier that can be longitudinally tracked. This allows us to identify the year of entry for new startups or the opening of new plants by existing firms. We define entry as the first year in which an establishment has positive employment. We only consider the first entry for cases in which an establishment temporarily ceases operations (e.g., seasonal firms, major plant retoolings) and later re-enters the LBD. Second, the LBD assigns a firm identifier to each establishment that facilitates a linkage to other establishments in the LBD. This firm hierarchy allows us to separate new startups from facility expansions by existing multi-unit firms.

Table 1 characterizes entry patterns from 1992 to 1999. The first column refers to all new establishment formations. The second column looks only at those establishments that are not part of an existing firm in the database, which we define as entrepreneurship. The final column looks at new establishments that are part of an existing firm, which we frequently refer to as facility expansions.

2 In a study of entrepreneurship in the manufacturing sector, Glaeser and Kerr (2009) found that the Chinitz effect was a very strong predictor of new firm entry. The effect dominated other agglomeration interactions among firms or local area traits. This paper seeks to measure this effect for other sectors and assess potential forces underlying the relationship. As such, this paper is also closely related and complementary to the work of Rosenthal and Strange (2009) using Dun and Bradstreet data. Beyond entrepreneurship, Drucker and Feser (2007) consider the productivity consequences of the Chinitz effect in the manufacturing sector, and Li and Yu (2009) provide evidence from China. Prior work on entry patterns using the Census Bureau data include Davis et al. (1996), Delgado et al. (2008, 2009), Dunne et al. (1989a, 1989b), Haltiwanger et al. (this issue), and Kerr and Nanda (2009a, 2009b).

Over the sample period, there were on average over 700,000 new establishments per annum, with 7.3 million employees. Single-unit startups account for 80% of new establishments but only 53% of new employment. Facility expansions are, on average, about 3.6 times larger than new startups. Table 1 documents the distribution of establishment entry sizes for these two types. Over 75% of new startups begin with five or fewer employees, versus fewer than half of entrants for expansion establishments of existing firms. About 0.5% of independent startups begin with more than 100 workers, compared to 4% of expansion establishments. Across industries, startups are concentrated in services (39%), retail trade (23%), and construction (13%).
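The entry definition and the startup-versus-expansion split described above map naturally onto an establishment-year panel. The sketch below is illustrative only: the column names (estab_id, firm_id, year, employment) are hypothetical rather than the LBD's actual schema, and the startup rule shown (the entering establishment's firm has no earlier presence in the data) is one simple approximation of "not part of an existing firm."

import pandas as pd

def classify_entrants(panel: pd.DataFrame) -> pd.DataFrame:
    # Illustrative only: column names are hypothetical, not the LBD's schema.
    # Keep establishment-years with positive employment.
    active = panel[panel["employment"] > 0]

    # Entry year: first year an establishment records positive employment.
    # Taking the minimum ignores later re-entries after temporary shutdowns.
    entry = (active.groupby("estab_id")["year"].min()
                   .rename("entry_year").reset_index())

    # First year any establishment of the same firm appears.
    firm_start = (active.groupby("firm_id")["year"].min()
                        .rename("firm_first_year").reset_index())

    entrants = (active.merge(entry, on="estab_id")
                      .query("year == entry_year")
                      .merge(firm_start, on="firm_id"))

    # Startup: the firm has no presence earlier than this establishment's entry;
    # otherwise the entrant is a facility expansion of an existing firm.
    entrants["entrant_type"] = entrants["entry_year"].eq(
        entrants["firm_first_year"]).map(
        {True: "startup", False: "facility_expansion"})

    return entrants[["estab_id", "firm_id", "entry_year",
                     "employment", "entrant_type"]]

Aggregating entrants' first-year employment by region-industry and entrant type would then yield the kind of startup and facility-expansion growth measures used as dependent variables below.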
Facility expansions are concentrated in retail trade (32%), services (30%), and finance, insurance, and real estate (18%). The growing region of the South has the most new establishment formations, and regional patterns across the two classes of new establishments are quite similar. This uniformity, however, masks the agglomeration that frequently exists at the industry level. Well-known examples include the concentration of the automotive industry in Detroit, tobacco in Virginia and North Carolina, and high-tech entrepreneurship within regions like Silicon Valley and Boston's Route 128.

2.2 Industry Structure and Entrepreneurship

Table 2 shows the basic fact that motivates this paper: the correlation between average establishment size and employment growth. We use both regions and metropolitan areas for spatial variation in this paper. While we prefer to analyze metropolitan areas, the city-level data become too thin for some of our variables when we use detailed industries. The dependent variable in the first three columns is the log employment growth in the region-industry due to new startups. The dependent variable for the second set of three columns is the log employment growth in the region-industry due to new facility expansions that are part of existing firms. Panel A uses the log of average establishment size in the region-industry as the key independent variable. Panel B uses the Herfindahl-Hirschman Index (HHI) in the region-industry as our measure of industrial concentration. Regressions include the initial period's employment in the region as a control variable. For each industry, we exclude the region with the lowest level of initial employment. This excluded region-industry is employed in the instrumental variable specifications. Crossing eight regions and 349 SIC3 industries yields 2,712 observations as not every region includes all industries. Estimations are unweighted and cluster standard errors by industry.

The first regression, in the upper left hand corner of the table, shows that the elasticity of employment growth in startups to initial employments is 0.97. This suggests that, holding mean establishment size constant, the number of startups scales almost one-for-one with existing employment. The elasticity of birth employment with respect to average establishment size in the region-industry is -0.67. This relationship is both large and precisely estimated. It suggests that, holding initial employments constant, a 10% increase in average establishment size is associated with a 7% decline in the employment growth in new startups.

These initial estimates control for region fixed effects (FEs) but not for industry FEs. Column 2 includes industry FEs so that all of the variation is coming from regional differences within an industry. The coefficient on average establishment size of -0.64 is remarkably close to that estimated in Column 1. In the third regression, we instrument for observed average establishment size using the mean establishment size in the excluded region by industry. This instrument strategy only exploits industry-level variation, so we cannot include industry FEs. The estimated elasticities are again quite similar. These instrumental specifications suggest that the central relationship is not purely due to local feedback effects, where a high rate of growth in one particular region leads to an abundance of small firms in that place. Likewise, the relationship is not due to measuring existing employment and average establishment size from the same data.
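Written out, the Panel A specification appears to be a log-log regression of roughly the following form; this is a schematic rendering of the description above, not the paper's exact equation, with \(\mu_r\) and \(\mu_i\) denoting region and industry fixed effects (the latter only in Column 2):

\[
\ln E^{\text{startup}}_{r,i} = \beta \,\ln \overline{\text{size}}^{\,1992}_{r,i} + \gamma \,\ln E^{\,1992}_{r,i} + \mu_r + \mu_i + \varepsilon_{r,i},
\qquad \hat{\beta} \approx -0.67 .
\]

With this elasticity, a 10% increase in average establishment size implies a change of about \(1.10^{-0.67} - 1 \approx -6.2\%\) in startup employment growth, matching the roughly 7% decline quoted in the text. Panel B replaces the size term with the log of a Herfindahl-Hirschman Index, presumably the standard employment-share version computed within each region-industry cell (the employment-share basis is an assumption here):

\[
\mathrm{HHI}_{r,i} = \sum_{j \in (r,i)} s_j^{2},
\qquad
s_j = \frac{\mathrm{emp}_j}{\sum_{k \in (r,i)} \mathrm{emp}_k},
\]

which runs from near 0 (many small establishments) to 1 (a single establishment accounting for all region-industry employment).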
Panel B of Table 2 considers the log HHI index of concentration within each region-industry. While the model in the next section suggests using average establishment size to model industrial structure, there is also a long tradition of empirically modeling industrial structure through HHI metrics.3 The results using this technique are quite similar to Panel A. A 10% increase in region-industry concentration in 1992 is associated with a 4% decline in employment due to new startups over 1992-1999. The coefficient on initial region-industry employment, however, is lower in this case. When not controlling for initial establishment size, there is a less than one-for-one relationship between initial employment and later growth through startups.

Column 2 of Panel B again models industry FEs. The coefficients are less stable than in the upper panel. The elasticity of startup employment to the HHI index continues to be negative and extremely significant, but it loses over 50% of its economic magnitude compared to the first column. Column 3 instruments using the concentration level in the omitted region. The results here are quite similar to those in the first column.

Columns 4 to 6 of Table 2 consider employment growth from new facility expansions by multi-unit firms instead of new startups. These new establishments are not new entrepreneurship per se, but instead represent existing firms opening new production facilities, sales offices, and similar operations. Nevertheless, formations of new establishments represent more discontinuous events than simple employment growth at existing plants. Again, there is a strong negative effect of mean establishment size in the region-industry and subsequent employment growth due to facility expansions. The effect, however, is weaker than in the startup regressions. The results are basically unchanged when we include industry FEs or in the instrumental variables regression. These conclusions are also mirrored in Panel B's estimations using HHI concentration measures.

3 The appendix also reports estimations using the share of employees in a region-industry working in establishments with 20 employees or fewer. This modelling strategy delivers similar results to mean establishment size or HHI concentration.

2.3 Variations by Sector

Figures 2a and 2b document estimations of the relationship between establishment entry rates and initial region-industry structure by sector. The underlying regressions, which are reported in the appendix, include region and industry FEs and control for log initial employment in region-industry. The squares document the point estimates, and the lines provide confidence bands of two standard errors. Negative coefficients again associate greater entry over 1992-1999 with smaller average establishment size by region-industry in 1992.

Figure 2a shows that the average establishment size effect is present for startups in all sectors to at least a 10% confidence level. The elasticity is largest and most precisely estimated for manufacturing at greater than -0.8; the elasticity estimate for finance, insurance, and real estate is the weakest but still has a point estimate of -0.2. On the other hand, Figure 2b shows the average establishment effect is only present for facility expansions in manufacturing, mining, and construction. This relative concentration in manufacturing is striking, as this sector was the subject of the original Chinitz study and much of the subsequent research.
The difference in levels between Figures 2a and 2b also speaks to concentration among startups: in every sector, the average establishment size effect is largest for new entrepreneurs.4

2.4 Entry Size Distribution

Table 3 quantifies how these effects differ across establishment entry sizes. Table 1 shows that most new establishments are quite small, while others have more than 100 workers. We separate out the employment growth due to new startups into groupings with 1-5, 6-20, 21-100, and 101+ workers in their first year of observation. Panel A again considers average firm size, while Panel B uses the HHI concentration measure. These estimations only include region FEs, and the appendix reports similar patterns when industry FEs are also modelled.

A clear pattern exists across the entry size distribution. Larger average establishment size and greater industrial concentration retard entrepreneurship the most among the smallest firms. For example, a 10% increase in mean establishment size is associated with a 12% reduction in new employment growth due to startups with five workers or fewer. The same increase in average establishment size is associated, however, with a 1% reduction in new employment growth due to entering firms with more than 100 employees. The patterns across the columns show steady declines in elasticities as the size of new establishments increases. The impact for new firms with 6-20 workers is only slightly smaller than the impact for the smallest firms, while the elasticity for entrants with 21-100 employees is 50% smaller. Larger establishments and greater concentration are associated with a decrease in the number of smaller startups, but not a decrease in the number of larger startups.

4 We have separately confirmed that none of the results for new startups reported in this paper depend upon the construction sector, where startups are over-represented in Table 1.

3 Theoretical Model

This section presents a formal treatment of entrepreneurship and industrial concentration. We explore a range of different explanations for the empirical observation that startup activity has a strong negative correlation with the size of existing firms. Our goal is to produce additional testable implications of these explanations.

We develop a simple model based on monopolistic competition following the classic approach of Dixit and Stiglitz (1977). Entrepreneurs create firms that earn profits by selling imperfectly substitutable goods that are produced with increasing returns to scale. The startup costs of entrepreneurship are financed through perfectly competitive capital markets, and no contractual frictions prevent firms from pledging their future profits to financiers. Each company operates over an infinite horizon and faces a constant risk of being driven out of business by an exogenous shock, such as obsolescence of its product or the death of an entrepreneur whose individual skills are indispensable for the operation of the firm. These simple dynamics generate a stationary equilibrium, so that we can focus on the number and size of firms and on the level of entrepreneurial activity in the steady state.

The baseline model enables us to look at the role of amenities, fixed costs, and profitability in explaining firm creation. Several of its empirical predictions are very general: for instance, essentially any model would predict that an exogenous increase in profitability should result in an endogenous increase in activity.
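The stationary equilibrium invoked above has a simple accounting behind it. As an illustration only, assuming (as described) that firms exit at a constant exogenous hazard \(\delta\) and that a constant flow \(E\) of new firms is created each period, the steady-state number of firms follows from equating entry and exit:

\[
E = \delta N \quad\Longrightarrow\quad N = \frac{E}{\delta},
\]

so a lower exit hazard or a higher entry flow supports a larger steady-state mass of firms; the paper's own notation may differ.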
An advantage of our approach is that different elements can easily be considered within a single standard framework. We also extend the model to address multiple human capital levels and to allow for vertical integration.

3.1 Baseline Model

Consider a closed economy with a perfectly inelastic factor supply. There are I cities characterized by their exogenous endowments of real estate Ki and by their amenity levels ai such that ai > ai+1 for all i. There is a continuum of industries g ∈ [0, G], each of which produces a continuum of differentiated varieties. Consumers have identical homothetic preferences defined over the amenities a of their city of residence, the amount of real estate K that they consume for housing, and their consumption qg(·) of each variety in each industry. Specifically, we assume constant elasticity of substitution σ(g) > 1 across varieties in each sector and an overall Cobb-Douglas utility function U = log a +

Competitiveness_Index_2007
Environmental Federalism in the European Union and the United States
David Vogel, Michael Toffel, Diahanna Post, and Nazli Z. Uludere Aragon

Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.

Environmental Federalism in the European Union and the United States
David Vogel, Michael Toffel, Diahanna Post, and Nazli Z. Uludere Aragon
Working Paper 10-085
February 21, 2010

SUMMARY

The United States (US) and the European Union (EU) are federal systems in which the responsibility for environmental policy-making is divided or shared between the central government and the (member) states. The attribution of decision-making power has important policy implications. This chapter compares the role of central and local authorities in the US and the EU in formulating environmental regulations in three areas: automotive emissions for health related (criteria) pollutants, packaging waste, and global climate change. Automotive emissions are relatively centralised in both political systems. In the cases of packaging waste and global climate change, regulatory policy-making is shared in the EU, but is primarily the responsibility of local governments in the US. Thus, in some important areas, regulatory policy-making is more centralised in the EU. The most important role local governments play in the regulatory process is to help diffuse stringent local standards through more centralised regulations, a dynamic which has recently become more important in the EU than in the US.

INTRODUCTION

In the EU and the US, responsibility for the making of environmental policy is divided between EU and federal institutions, on the one hand, and local institutions, on the other. The former comprise the EU and the US federal government, while the latter consist of state and local governments in the US, and member states and subnational authorities in the EU.1 Historically, environmental rules and regulations were primarily made at the state or local level on both sides of the Atlantic. However, the emergence of the contemporary environmental movement during the late 1960s and early 1970s led to greater centralisation of environmental policy-making in both the US and Europe. In the US, this change occurred relatively rapidly. By the mid 1970s, federal standards had been established for virtually all forms of air and water pollution. By the end of the decade, federal regulations governed the protection of endangered species, drinking water quality, pesticide approval, the disposal of hazardous wastes, surface mining, and forest management, among other policy areas.

1 For ease of presentation, we refer at times to both of the former as central authorities and both of the latter as states.

The federalisation of US environmental policy was strongly supported by pressure from environmental activists, who believed that federal regulation was more likely to be effective than regulation at the state level. In Europe, this change occurred more gradually, largely because the Treaty of Rome contained no provision providing for environmental regulation by the European Community (EC). Nonetheless, more than 70 environmental directives were adopted between 1973 and 1983.
Following the enactment of the Single European Act in 1987, which provided a clear legal basis for EC environmental policy and eased the procedures for the approval of Community environmental directives, EC environmental policy-making accelerated. Originally primarily motivated by the need to prevent divergent national standards from undermining the single market, it became an increasingly important focus of EC/EU policy in its own right. Each successive treaty has strengthened the EU’s commitment to and responsibility for improving environmental quality and promoting sustainable development throughout Europe. Thus, notwithstanding their different constitutional systems, in both the EU and the US, the locus of environmental policy-making has become increasingly centralised over the last three decades. Nevertheless, state governments continue to play a critical role in environmental regulation on both sides of the Atlantic. Most importantly, states remain an important locus of policy innovation and agenda setting. In many cases, new areas of environmental policy are first addressed at the state level and subsequently adopted by the central authority. Many state regulations remain more stringent or comprehensive than those of the central authority; in some policy areas, states retain primary responsibility. In other cases, responsibility for environmental policy-making is shared by both levels of government. Not surprisingly, in both federal systems, there are ongoing disputes about the relative competence of central and state authorities to regulate various dimensions of environmental policy. We explore the dynamics of federal environmental policy-making in both the US and the EU. At what level of government are new standards initiated? Under what circumstances are state regulations diffused to other states and/or adopted by the central authority? Under what circumstances can or do 3 states maintain regulations that are more stringent than those of other states? We focus on the development of US and EU regulatory policies in three areas: automobile emissions for criteria pollutants, packaging waste, and global climate change. Each policy area reflects a different stage in the evolution of environmental policy. These cases also demonstrate the differences and the similarities in the patterns of environmental policy-making in the US and the EU. Automobile emissions typify the first generation of environmental regulation. A major source of air pollution, particularly in urban areas, automobiles were among the first targets of environmental regulation during the 1960s and 1970s and they remain an important component of environmental policy in every industrialized country. Packaging typifies the next generation of environmental regulation. Its emergence on the policy agenda during the 1980s reflected the increased public concern about the scarcity of landfills and the need to conserve natural resources. Unlike automobile regulation, which primarily affects only two industries, albeit critical ones (automotive manufacturers and the refiners of gasoline), packaging waste regulations affect virtually all manufacturers of consumer goods. The increased priority of reducing packaging waste and promoting re-use and recycling symbolises a shift in the focus of environmental regulation from reducing pollution to promoting eco-efficiency. Global climate change represents a more recent dimension of environmental policy. 
It first surfaced during the mid-1980s, but it has become much more salient over the last decade. This policy area exemplifies the increasingly important international dimension of environmental regulation: global climate change both affects and is affected by the regulatory policies of virtually all countries. It also illustrates the growing economic scope of environmental regulation: few economic activities are likely to be unaffected by policies aimed at reducing the emissions of carbon dioxide and other greenhouse gases. These three policy areas provide a useful window on the changing dynamics of the relationship between state and central regulation in the US and the EU. Since the mid-1980s, automobile emissions standards have been more centralised in the EU than in the US. The US permits states to 4 adopt more stringent standards, while the EU does not. However, both the EU and the US have progressively strengthened their regulations governing automotive emissions and fuel composition, though US federal emission standards remain more stringent than EU ones, with the exception of lead in gasoline (petrol) which has now been phased out on both sides of the Atlantic. For its part, California, which is permitted its own emissions standards, has become a world leader in the effort to encourage the development and marketing of low- and zero-emission vehicles. The dynamics of the regulation of packaging waste differs considerably. In the US, the federal government plays little or no role in setting standards for packaging waste: packaging, recycling, and waste disposal are primarily the responsibility of state or local governments. However, the lack of federal standards has neither prevented nor discouraged many state governments from adopting their own regulations. There has been considerable innovation at the state level: a number of local governments have developed ambitious programmes to reduce packaging waste and promote recycling. There has been little pressure for federal standards and the federal government has not attempted to limit state regulations with one important exception: federal courts have repeatedly found state restrictions on ‘imports’ of garbage to violate the interstate commerce clause of the US constitution. 2 In the EU, the situation is more complex. Member states began to regulate packaging waste during the 1980s, while the EU became formally involved in this policy area in 1994. However, in contrast to automotive emissions, the responsibility for packaging regulation remains shared between central and state authorities. There is considerable diversity among state regulations, and member states continue to play an important role in policy innovation, often adopting regulations that are more stringent than those of the EU. State packaging waste regulations have been an ongoing source of conflict between central and local authorities, with the European Commission periodically challenging particular state regulations on the grounds of their incompatibility with the single market. In addition, the EU has imposed maximum as well as minimum standards for waste recovery, though this is likely to change 2 Berland, 1992. 5 soon. On balance, EU packaging standards are more stringent and comprehensive than those in the US. Europe’s ‘greener’ member states have made more ambitious efforts to reduce packaging waste than have their American state counterparts, while the EU’s Packaging Waste Directive provides a centralised floor on state standards which does not exist in the US. 
Nevertheless, there have been a number of important US state standards. In the case of climate policy, important initiatives and commitments to reduce emissions of greenhouse gases have been undertaken in the EU at both the central and state levels with one often complementing and reinforcing the other. In the US, by contrast, federal regulations restricting greenhouse gases had yet to be implemented as of early 2010. As in the case of packaging waste policies, there have been a number of state initiatives. But in contrast to the regulation of packaging waste, the lack of central regulation of climate policy has become politically salient, even causing conflict over the legal authority of states to establish policies in this area. The gap between US and EU regulatory policies regarding climate change is more substantial than the gaps in the other two policy areas. The EU and each member state have formally ratified the Kyoto Protocol, while the US has not. Since American states cannot enter into international environmental agreements, this means that no US regulatory authority is under any international obligation to regulate carbon dioxide emissions. While all EU member states have adopted climate change policies, many states in the US have not. Moreover, most US state regulations tend to be weaker than those adopted or being adopted by the EU. The EU has established a regulatory regime based on emissions trading and shared targets to facilitate member states’ carbon dioxide reduction programmes, while in the critical area of vehicle emissions, the US central government was, until recently, an obstacle to more stringent state regulations. AUTOMOBILE EMISSIONS United States 6 The six common air pollutants are particulate matter, ground-level ozone, 3 carbon monoxide, oxides of sulphur (mainly sulphur dioxide), oxides of nitrogen (mainly nitrogen dioxide), and lead. 4 In US EPA parlance, these are also known as “criteria pollutants,” since their permissible levels are established using two sets of criteria, developed according to scientific guidelines. 5 Mobile sources, which include automobiles, are significant contributors to ground-level ozone and fine particulate matter pollution in many US cities, and also cause carbon monoxide and nitrogen dioxide emissions. Historically, motor vehicles were also the largest source of airborne lead emissions, but the removal lead from gasoline has dramatically reduced lead emissions from transport. Of the six criteria pollutants, only sulphur dioxide emissions, which are largely the result of fossil fuel combustion by power plants, are not substantially attributable to motor vehicles. 6 The regulation of air pollutants (emissions) from automobiles in the US began in 1960 when the state of California enacted the Motor Vehicle Pollution Control Act. This statute established a state board to develop criteria to approve, test, and certify emission control devices. 7 Within two years, the board had certified seven devices that were bolt-on pollution controls, such as air pumps that improve combustion efficiency 8 and required their installation by 1965. 9 After opposing emissions standards in the mid-1960s, ‘the automobile industry began to advocate federal emissions standards for automobiles [after] California had adopted state standards, and a number of other states were considering similar legislation.’ 10 In 1965, Congress enacted the federal Motor Vehicle Air Pollution Control Act, which authorised the establishment of auto emissions standards. 
The first federal standards were imposed for 1968 model year vehicles for carbon monoxide and hydrocarbons. 11 Two years later, in 1967, Congress responded to the automobile industry’s concerns about the difficulty of complying with different state standards by declaring that federal emission controls 3 Ground-level ozone is different from the beneficial ozone that forms a natural layer in the earth’s stratosphere, shielding us from excessive solar radiation. 4 United States Environmental Protection Agency (from here onwards, US EPA or EPA), 2006. 5 Primary standards are based on human health criteria, and secondary standards on environmental criteria. 6 In countries where the use of low-sulphur diesel fuels have not become widespread, yet diesel vehicle use is common, motor vehicles could be a source of sulphur-dioxide emissions. Some fuels used in marine or rail transport also contain high amounts of sulphur. 7 Percival et al., 1992. 8 California EPA, 2001. 9 Percival et al., 1992. 10 Revesz, 2001: 573. 11 Hydrocarbons are emissions resulting from the incomplete combustion of fuels and a precursor to ground-level ozone pollution. 7 would preempt all state emission regulations. However, an exception was made for California, provided that the state afforded adequate lead time to permit development of the necessary technology, given the cost of compliance within that time. 12 The exemption was granted ‘in recognition of the acute automobile pollution problems in California and the political power of the California delegation in the House of Representatives’. 13 One legal scholar noted, ‘The legislative history of the 1967 waiver provision suggests two distinct rationales for its enactment: (1) providing California with the authority to address the pressing problem of smog within the state; and (2) the broader intention of enabling California to use its developing expertise in vehicle pollution to develop innovative regulatory programs.’ 14 In 1970, President Nixon asked Congress to pass more stringent standards based on the lowest pollution levels attainable using developing technology. 15 Congress responded by enacting the technology-forcing Clean Air Act Amendments of 1970, which required automakers to reduce their emissions of carbon monoxide and hydrocarbons by 90 per cent within five years and their emissions of nitrogen oxides by 90 per cent within six years. 16 These drastic reductions were intended to close the large gap between ambient urban air pollution concentrations and the federal health-based Nationally Uniform Ambient Air Quality Standards (NAAQS) established pursuant to the US Clean Air Act. 17 Once again, California was permitted to retain and/or enact more stringent standards, though these were specified in federal law. 18 The 1977 amendments to the Clean Air Act established more stringent emissions standards for both automobiles and trucks and once again permitted California to adopt more stringent standards. In 1990, the Clean Air Act was again amended: ‘the California Air Resources Board old tailpipe emissions standards for new cars and light duty trucks sold in that state were adopted by Congress . . . 12 US EPA, 1999. 13 Rehbinder and Stewart, 1985: 114. 14 Chanin, 2003: 699. 15 Percival et al., 1992. 16 Rehbinder and Stewart, 1985. 
17 Congress based its 90 per cent reduction on ‘the simple notion that since air pollution levels in major cities were approximately five times the expected levels of the NAAQSs, emissions would need to be reduced by at least 80 per cent, with an additional 10 per cent necessary to provide for growing vehicle use’ (Percival et al., 1992: 834). 18 California EPA, 2001. as the standard to be met by all new vehicles.’ 19 In addition to again waiving federal preemption for California, the 1990 legislation for the first time authorised any state that was not meeting NAAQS for automotive pollutants to adopt California’s standards. 20 As a result, two regimes for automotive emission regulation emerged: one based on federal standards and the other on California’s. This regulatory policy reflected ‘a compromise between two interests: the desire to protect the economies of scale in automobile production and the desire to accelerate the process for attainment of the NAAQS’. 21 Thus, while automotive emission standards were primarily shaped by federal legislation, the federal government provided states with the opportunity to choose between two sets of standards. While allowing states to opt for a stricter emissions regime, the Clean Air Act Amendments of 1990 also called for a gradual strengthening of federal automobile emissions standards, to be promulgated by the US EPA. The so-called ‘Tier I’ standards were implemented between 1994 and 1997. The more stringent ‘Tier II’ standards were issued by the EPA in February 2000, and phased in between 2004 and 2009. There were two important components of the Tier II standards. The first was a dramatic reduction in sulphur amounts in gasoline (by 90 per cent), achieved by the strong advocacy of a coalition of environmental and public health organisations, and state and local environmental agencies. 22 The second was a requirement for all light trucks, passenger cars, medium-duty sport utility vehicles and passenger vans to be subject to the same emissions standards by model year 2009. 23 California has continued to play a pioneering role in shaping automotive emissions policy. In 1990, the state adopted a programme to encourage Low-Emission Vehicles (LEV). This included a Zero-Emission Vehicle (ZEV) programme meant to jump-start the market for these vehicles. The ZEV programme required that such vehicles comprise at least 2 per cent of new car sales by 1998, 5 per cent by 2001, and 10 per cent by 2003. 19 Bryner, 1993: 150. 20 Chanin, 2003; Revesz, 2001. 21 Revesz, 2001: 586. 22 This group included the Clean Air Trust and the Environmental Defense Fund, the STAPPA/ALAPCO (State and Territorial Air Pollution Program Administrators / Association of Local Air Pollution Control Officials), a nationwide organisation of state and local pollution control officials, and the American Lung Association. In fact, the automakers were also in favour of the proposal to reduce sulphur content of gasoline, without which it would have been difficult to deliver the companion Tier 2 emission reductions. 23 All vehicles up to 8,500 pounds GVWR (gross vehicle weight rating) are subject to Tier 2 standards. Also, these standards are the same whether a vehicle uses gasoline, diesel or any other fuel; in other words, they are “fuel neutral.” (US EPA, 2000)
When this requirement was approved, the only feasible technology that met ZEV standards was electric vehicles, whose emissions were over 90 per cent lower than those of the cleanest gasoline vehicles, even when including the emissions from the power plants generating the electricity required to recharge them. 24 Massachusetts and New York subsequently adopted the California LEV plan. However, in 1992, New York’s decision was challenged in the courts by the automobile manufacturers on the grounds that it was sufficiently different from California’s to constitute a third automotive emission requirement, which the Clean Air Act explicitly prohibits. Shortly afterwards, the manufacturers filed another suit against both states arguing that, since their standards were not identical with those of California, they were preempted by the Clean Air Act. As a result, both states were forced to modify their standards. 25 In 1998, California’s Air Resources Board (California ARB) identified diesel particulate matter as a toxic air contaminant. 26 The state subsequently launched a Diesel Risk Reduction Plan in 2000 to reduce diesel particulate emissions by 75 per cent within ten years. The plan established requirements for low-sulphur diesel fuel and particulate standards for new diesel engines and vehicles, and new filters for existing engines. 27 In this case, federal and California initiatives moved in tandem. Shortly after California acted, the EPA also announced more stringent standards for new diesel engines and fuels in order to make heavy-duty trucks and buses run cleaner. The EPA adopted a new rule in January 2001 that required a more than thirtyfold reduction in the sulphur content of diesel fuels (from 500 parts per million to 15 parts per million), which matched the California standard. 28 The resulting fuel, called ultra-low sulphur diesel, has been available across the country since October 2006. By the end of 2010, all highway diesel fuel sold within the US will be ultra-low sulphur diesel. 29 24 California Air Resources Board, 2001. 25 In December 1997, the EPA issued regulations for the ‘National Low Emission Vehicle’ (NLEV) program. This voluntary program was the result of an agreement between nine Northeastern states and the auto manufacturers. It allowed vehicles with more stringent emission standards to be introduced in states that opted for the NLEV program before the Tier 2 regulations came into effect. Vehicles complying with NLEV were made available in the participating states for model year 1999 and nationwide for model year 2001. The standards under the NLEV program were equivalent to the California Low Emission Vehicle program, essentially harmonising the federal and California motor vehicle standards (US EPA, 1998). 26 California EPA, 2001. 27 California Air Resources Board, 2001. 28 The Highway Diesel Rule (US EPA, 2001). 29 The EPA rule requires that by December 1, 2014, all non-road, locomotive and marine diesel fuel sold in the US be ultra-low sulphur diesel as well. California’s rule accelerates this by three to five years. More recently, California’s automotive emissions standards have become a source of conflict with the federal government.
Two novel California regulations, which the state claims are designed to reduce automobile emissions, have been challenged by both the automotive industry and the federal government on the grounds that they indirectly regulate fuel efficiency, an area of regulation which Congress has assigned exclusively to the Federal government. 30 The first case involves a modification California made to its ZEV programme in 2001 that allowed automakers to earn ZEV credits for manufacturing compressed natural gas, gasoline-electric hybrid, and methanol fuel cell vehicles. 31,32 General Motors and DaimlerChrysler sued California’s ARB over a provision that allowed manufacturers to earn ZEV credits by using technology such as that included in gasoline-electric hybrid vehicles, which were already being sold by their rivals Honda and Toyota. Because hybrids still use gasoline, General Motors and DaimlerChrysler argued that California’s efforts were effectively regulating fuel economy. 33 The US Justice Department supported the auto manufacturers’ claim on the grounds that the Energy Policy and Conservation Act provides that when a federal fuel-economy standard is in effect, a state or a political subdivision of a state may not adopt or enforce a regulation related to fuel-economy standards. 34 California responded by claiming that it was acting pursuant to its exemption under the US Clean Air Act to regulate auto emissions. In June 2002, a Federal District Court issued a preliminary injunction prohibiting the Air Resources Board from enforcing its regulation. 35 In response, the ARB modified the ZEV programme to provide two alternative routes for automakers to meet ZEV targets. 36 At the same time, California imposed new regulations which required that the auto industry sell increasing numbers of fuel-cell vehicles in the 30 In the Energy Policy and Conservation Act of 1975, Congress established exclusive Federal authority to regulate automotive fuel economy, through the Corporate Average Fuel Economy (CAFE) standards. 31 At the same time, California extended ZEV market share requirements to range from 10 per cent in 2003 up to 16 per cent in 2018 (California Air Resources Board, 2001). 32 The second dispute concerns climate change and is discussed below. 33 Parker, 2003. 34 Yost, 2002. 35 California Air Resources Board, 2003. 36 According to the California Air Resources Board (2003), ‘Auto manufacturers can meet their ZEV obligations by meeting standards that are similar to the ZEV rule as it existed in 2001. This means using a formula allowing a vehicle mix of 2 per cent pure ZEVs, 2 per cent AT-PZEVs (vehicles earning advanced technology partial ZEV credits) and 6 per cent PZEVs (extremely clean conventional vehicles). Or manufacturers may choose a new alternative ZEV compliance strategy, meeting part of their ZEV requirement by producing their sales-weighted market share of approximately 250 fuel cell vehicles by 2008. The remainder of their ZEV requirements could be achieved by producing 4 per cent AT-PZEVs and 6 per cent PZEVs. The required number of fuel cell vehicles will increase to 2,500 from 2009-11, 25,000 from 2012-14 and 50,000 from 2015 through 2017. Automakers can substitute battery electric vehicles for up to 50 per cent of their fuel cell vehicle requirements’. 11 state over the next decade. 
37 However, in the summer of 2003, both automobile firms dropped their suits against California after its regulatory authorities agreed to expand their credit system for hybrids to encompass a broader range of vehicles. 38 European Union As in the US, in Europe, the regulations of state governments have been an important driver for centralised automotive emissions standards, with Germany typically playing the role in Europe that California has played in the US. The EU has progressively strengthened its automotive emissions standards, both to improve environmental quality and to maintain a single market for vehicles. However, European standards were strengthened at a much slower rate than were those in the US, and they were harmonised much later. Thus, in 1989, the EU imposed standards to be implemented in 1992 that were based on US standards implementing legislation enacted in 1970 and 1977, while the EU did not establish uniform automotive emissions requirements until 1987, although some fuel content standards were harmonised earlier. However, unlike in the US, which has continued to maintain a two-tiered system – and indeed extended it in 1977 by giving states the option of adopting either federal or California standards, in Europe, centralised standards for automobile emissions have existed since 1987. During the 1970s and 1980s, there was considerably more tension between central and state regulations in the EU than in the US. Recently, the opposite has been the case. During the 1960s, France and Germany imposed limits on emissions of carbon monoxide and hydrocarbons for a wide range of vehicles, thus forcing the EC to issue its first automotive emissions standards in 1970 in order to prevent these limits from serving as obstacles to internal trade. Shortly afterwards, there was substantial public pressure to reduce levels of airborne lead, a significant portion of which came from motor vehicles. The first restrictions were imposed by Germany, which in 1972 announced a two-stage reduction: the maximum lead content in gasoline was initially capped at 0.4 grams per litre in 1972, to be further reduced to 0.15 grams per litre in 1976. The United Kingdom 37 Hakim, 2003a. 38 Hakim, 2003b. 12 (UK) also enacted restrictions on lead in gasoline in 1978, though less severe than Germany (0.45 grams per litre). With no restrictions imposed by any other member state, the resulting disparity in national rules and regulations represented an obstacle to the free movement of both fuel and motor vehicles within the EC. For not only did these divergent national product regulations limit intra-EC trade in gasoline, but since different car engines were designed to run on fuels containing different amounts of lead, they created a barrier to intra-Community trade in motor vehicles themselves. Accordingly, the EC introduced a directive in 1978 that imposed a minimum and a maximum limit for lead content in gasoline (0.15 and 0.40 grams per litre, respectively), with both standards to go into effect in 1981. While the minimum requirement effectively allowed member states like Germany to establish the strict national limit they sought, it also prevented any member state from requiring lead-free gasoline and potentially disrupting the single market. In 1985, as a result of continued pressure from both Germany and Britain, the European Council required unleaded gasoline to be available in all member states by October 1989. 
The maximum lead content in gasoline was also further reduced to 0.15 grams per litre, and member states were encouraged to comply as quickly as possible. Two years later, member states were allowed to ban leaded gasoline, should they choose to. In 1998, all Western European and several central European countries agreed to end the sale of leaded gasoline by 2005. Unlike the lead standard, in the establishment of which the German regulations played an important role, the EC’s standards for sulphur in fuel did not reflect the policy preferences of any member state. The sulphur standard adopted in 1975 required all countries, including France, Germany, and the UK, to reduce their sulphur emissions. 39 France, for instance, had already adopted standards for sulphur in diesel fuel in 1966, but the more stringent levels in the European-wide standard forced the French standards lower as well. Germany’s standard was adopted at the same time and was similar to that of the EC. The auto emissions standards adopted in the EC during the 1970s were not mandatory. In fact, until 1987, member states were permitted to have standards less stringent than the European-wide standards, although they could not refuse to register or sell a vehicle on their territory if it met EC maximum standards. 39 Bennett, 1991. In effect, these early standards were maximum or ceiling requirements that were developed not by the EC but were instead based heavily on the emissions standards of the United Nations Economic Commission for Europe. In 1985, the German minister responsible for environmental affairs announced, on his own initiative, that as of 1989 all cars marketed in Germany would be required to meet US automotive emissions standards, commonly referred to as ‘US ’83’. The adoption of these standards required the installation of catalytic converters, which could only use unleaded gasoline. This created two problems within Europe. Most importantly, it meant that automobiles produced in France and Italy, whose producers lacked the technology to incorporate the converters into their smaller vehicles, would be denied access to the German market. In addition, it meant that German tourists who drove their cars to southern Europe would be stranded, owing to the unavailability of unleaded gasoline in Greece and Italy. Germany’s insistence on requiring stringent standards for vehicles registered in its country forced the EU to adopt uniform automobile emissions standards. This in turn led to a bitter debate over the content of these standards, pitting the EU’s greener member states (Germany, Denmark, and the Netherlands) against the EU’s (other) major automobile producers (the UK, France, and Italy), who favoured more flexible standards. The resulting Luxembourg Compromise of 1987 established different emissions standards for different sizes of vehicles with different timetables for compliance. It thus represented the first uniform set of automotive emissions standards within Europe. These standards have been subsequently strengthened several times, though on balance they remain less stringent than those of the United States, most notably for diesel emissions, which are regulated less stringently in the EU than in the US. During the 1990s, the politics of automobile emissions standards became much less affected by member state differences or tensions between central and state standards.
The most important initiative of this period, the Auto-Oil Programme, first adopted in 1996, was aimed at bringing together the Commission and the auto and oil industries to work on comprehensive ways to reduce pollution. After a series of negotiations, the programme ultimately tightened vehicle emission limits and fuel quality standards for sulphur and diesel, and introduced a complete phase-out of leaded gasoline. 40 In 2003, the EU approved a Directive requiring that all road vehicle fuels be sulphur-free by 2009. With the finalisation of Auto-Oil I and II, as the programmes are known, the shift from state to centralised automotive emission requirements appears to be complete. The debates and negotiations over proposals to regulate pollution from vehicles now take place between the automakers and oil producers on the one hand, and the Commission, the Council, and European Parliament (EP) on the other hand. PACKAGING WASTE United States The regulation of packaging wastes is highly decentralised in the US. The role of the federal government remains modest and virtually all policy initiatives have taken place at the local level. While the 1976 Resource Conservation and Recovery Act (RCRA) established stringent requirements for the management of hazardous wastes, the RCRA also declared that the regulation of landfills accepting municipal solid waste (MSW) was to remain primarily the domain of state and local governments. 41 As a result, there is considerable disparity in the handling of packaging wastes throughout the US. On balance, US standards tend to be considerably laxer than those in the EU. While many state legislatures have established recycling goals, few have prescribed mandatory targets. 42 The US generates more MSW per capita than any other industrialised country, and 50 per cent more than most European countries. 43 From 1995 to 1998, the percentage of the MSW generated that was recovered for recycling remained steady at 44 per cent in the US, while it rose from 55 to 69 per cent in Germany, owing in part to Germany’s Packaging Ordinance. 44 40 McCormick, 2001. 41 US EPA, 2003a, 2003b, 2003c. 42 American Forest & Paper Association, 2003. 43 The latest OECD figures report that Americans generate 760 kg per capita, the French 510, the British 560, and Germans 540 (OECD, 2004). State and local governments have implemented several policy mechanisms to reduce MSW, including packaging waste. Deposit-refund schemes, minimum recycled-content requirements, community recycling programmes, and disposal bans are among the most common policy mechanisms designed to divert materials to recycling from waste streams destined for landfills or incinerators. Eleven states have developed deposit-refund schemes to encourage the recycling of beverage containers. 45 When Oregon passed the first bottle bill requiring refundable deposits on all beer and soft-drink containers in 1971, its objective was to control litter rather than to spur recycling. When the city of Columbia, Missouri, passed a bottle bill in 1977, it became the first local container-deposit ordinance in the US and remained the only local initiative until it was repealed in 2002. 46 In general, deposit-refund laws require consumers of soft drinks and beer packaged in glass, metal, and plastic containers to pay a deposit that is refundable when the container is returned. 47 These schemes typically do not require, however, that these containers be recycled or reused.
48 California recently expanded its programme to include non-carbonated beverages, which added roughly 2 billion containers, nearly 40 per cent of which are plastic. 49 To reduce the burden on landfills and incinerators, whose construction and expansion are increasingly politically infeasible owing to community objections, many states and local governments have developed recycling programmes that enable or require the recycling of various materials. Such programmes remain exclusively the purview of state and local government because national laws do not allow EPA to establish federal regulations on recycling. 50 Virtually all New Yorkers, 80 per cent of the Massachusetts population, and 70 per cent of Californians have access to curbside recycling. 51 Recycling programmes typically include paper as well as metal and glass containers, while some 44 OECD, 2002. 45 The eleven states with deposit-refund schemes on soft-drink containers are California, Connecticut, Delaware, Hawaii, Iowa, Maine, Massachusetts, Michigan, New York, Oregon, and Vermont. Hawaii’s law takes effect in 2005 (Container Recycling Institute, 2003). 46 Container Recycling Institute, 2003. 47 Some deposit refunds are being expanded to include office products, while Maine and Rhode Island have created deposit-refund schemes for lead-acid/automobile batteries (US EPA, 1999). 48 McCarthy, 1993. 49 US EPA, 2003a, 2003b, 2003c. 50 Cotsworth, 2002. 51 Dietly, 2001. 16 programmes also include containers of particular plastic resins. In Oregon, container glass comprises nearly 4 per cent of that state’s total solid waste stream, and its deposit-refund and collection schemes resulted in 55 per cent of this glass being collected and recycled. 52 Sixty per cent of Oregon’s recycled container glass comes from its deposit-refund scheme, 25 per cent is collected from residential curbside programmes, and the remainder comes from commercial solid-waste hauler programmes, disposal sites, and other private recycling activities. A few states have sought to facilitate recycling by banning packaging that is particularly difficult to recycle, such as aseptic drink boxes, which are made of paper, foil, and plastic layers that are difficult to separate. Connecticut banned plastic cans in anticipation of obstacles this product would pose to materials recovery. In 1989, Maine banned aseptic drink boxes because of a concern about their ability to be recycled, though this restriction was subsequently repealed. The Wisconsin Legislature considered imposing a ban on the sale of aseptic drink boxes and bimetal cans (drink cans with aluminium sides and bottom and a steel top). Instead, the state enacted an advisory process permitting it to review a new packaging design if the packaging proved difficult to recycle. In addition, a few states, including Wisconsin and South Dakota, have banned the disposal of some recyclable materials to bolster their recycling rates. 53 Some states require certain types of packaging to contain some minimum amount of recycled material. Oregon’s 1991 Recycling Act required that by 1995, 25 per cent of the rigid plastic packaging containers (containing eight ounces to five gallons) sold in that state must contain at least 25 per cent recycled content, be made of a plastic material that is recycled in Oregon at a rate of at least 25 per cent, or be a reusable container made to be reused at least five times. 54 This law also requires glass containers to contain 35 per cent recycled content by 1995 and 50 per cent by 2000. 
55 California requires manufacturers of newsprint, plastic bags, and rigid plastic containers to include minimum levels of recycled content in their products or to achieve minimum recycling rates. 52 Oregon Department of Environmental Quality, 2003. 53 Thorman et al., 1996. 54 All rigid plastic container manufacturers have been in compliance with the law since it entered into force a decade ago, because the aggregate recycling rate for rigid plastic containers has remained between 27-30 per cent since the law took effect (Oregon Department of Environmental Quality, 2003). 55 Thorman et al., 1996. Manufacturers of plastic trash bags are required to include minimum percentages of recycled plastic post-consumer material in trash bags they sell in California. California’s 1991 Rigid Plastic Packaging Container (RPPC) Act sought to reduce the amount of plastic being landfilled by requiring that containers offered for sale in the state meet criteria akin to those laid down in the Oregon law. These criteria ‘were designed to encourage reuse and recycling of RPPCs, the use of more post-consumer resin in RPPCs and a reduction in the amount of virgin resin employed in RPPCs’. 56 Wisconsin’s Act on Recycling & Management of Solid Waste requires that products sold in the state use a package made from at least 10 per cent recycled or remanufactured material by weight. 57 Industrial scrap, as well as pre- and post-consumer materials, counts towards the 10 per cent requirement. Exemptions are provided for packaging for food, beverages, drugs, cosmetics, and medical devices that lack FDA approval. However, according to the president of Environmental Packaging International, Wisconsin has done little enforcement of its 10 per cent recycled content law. 58 Governments at the federal, state, county, and local levels have also promulgated policies prescribing government procurement of environmentally preferable products. 59 In 1976, Congress included in RCRA requirements that federal agencies, as well as state and local agencies that use appropriated federal funds, purchase products with recycled content when they spend over a threshold amount on particular items and the cost, availability, and quality of those products are comparable to those of virgin products, though the RCRA does not authorise any federal agency to enforce this provision. 60 States requiring government agencies to purchase environmentally preferable products include California, Georgia, Oregon, and Texas. California’s State Assistance for Recycling Markets Act of 1989 and Assembly Bill 11 of 1993 required government agencies to give purchasing preference to recycled products and mandated that increasing proportions of procurement budgets be spent on products with minimum levels of recycled content. Accordingly, the California Integrated Waste Management Board (CIWMB) developed the State Agency Buy Recycled Campaign, requiring that every State department, board, commission, office, agency-level office, and cabinet-level office purchase products that contain recycled materials whenever they are otherwise similar to virgin products. 56 California Integrated Waste Management Board, 2003. 57 Plastic Shipping Container Institute, 2003. 58 Bell, 1998. 59 California Integrated Waste Management Board, 2003; Center for Responsive Law, 2003. 60 US EPA, 2003a, 2003b, 2003c. Procurement represents one of the few areas in which there have been federal initiatives.
A series of Presidential Executive Orders issued throughout the 1990s sought to stimulate markets for environmentally preferable products and to reduce the burden on landfills. 61 In 1991, President George Bush issued an Executive Order to increase the level of recycling and procurement of recycled-content products. In 1993, President Bill Clinton issued an Executive Order that required federal agencies to purchase paper products with at least 20 per cent post-consumer fibre and directed the US EPA to list environmentally preferable products, such as those with less cumbersome packaging. Clinton raised this recycled-content threshold to 30 per cent in a subsequent Executive Order in 1998. 62 At the national level, several Congressional attempts to pass a National Bottle Bill between 1989 and 2007 were defeated. Most recently, a bill was introduced in 2009 as the “Bottle Recycling Climate Protection Act of 2009” (H.R. 2046), but it has yet to be adopted. According to the non-profit Container Recycling Institute, a key reason why bottle bills have not spread to more states or become national law is ‘the tremendous influence the well-funded, politically powerful beverage industry lobby wields’. 63 Thus, packaging waste policies remain primarily the responsibility of state and local governments. European Union The EU’s efforts to control packaging waste contrast sharply with those of the US in two ways. First, with the enactment of the 1994 EU Directive on Packaging and Packaging Waste, central authorities have come to play a critical role in shaping politics to reduce packaging waste within Europe. Thus, in 61 Lee, 1993. 62 Barr, 1998. 63 Container Recycling Institute, 2003. 19 Europe, in marked contrast to the US, this area of environmental policy is shared between central and state governments. Second, unlike in the US, where federal authorities have generally been indifferent to state policies to promote the reduction of packaging waste, in Europe, such policies have frequently been challenged by Brussels (the Commission) on the grounds that they interfere with the single market. In addition, the EU’s 1994 Packaging Directive established maximum as well as minimum recycling targets, while maximums have never existed in the US. As a result, some member states have been forced by Brussels to limit the scope and severity of their regulations. Historically, recycling policies were made exclusively by the member states. In 1981, Denmark enacted legislation requiring that manufacturers market all beer and soft drinks in reusable containers. Furthermore, all beverage retailers were required to take back all containers, regardless of where they had been purchased. To facilitate this recycling programme, only goods in containers that were approved in advance by the Danish environmental protection agency could be sold. Thus, a number of beverage containers produced in other member states could not be sold in Denmark. Foreign beverage producers complained to the European Commission that the Danish requirement constituted a ‘qualitative restriction on trade’, prohibited by the Treaty of Rome. The Commission agreed. When Denmark’s modified regulation in 1984 failed to satisfy the Commission, the EC brought a complaint against Denmark to the European Court of Justice (ECJ). In its decision, the ECJ upheld most of the provisions of the Danish statute, noting that the Commission itself had no recycling programme. 
The Court held that since protecting the environment was ‘one of the Community’s central objectives’, environmental protection constituted ‘a mandatory requirement capable of limiting the application of Article 30 of the Treaty of Rome’. 64 This was the first time the Court had sanctioned an environmental regulation that clearly restricted trade. The result of the ECJ’s ruling was to give a green light to other national recycling initiatives. Irish authorities proceeded with a ban on non-refillable containers for beer and soft drinks, while a number of Southern member states promptly restricted the sale of beverages in plastic bottles in order to protect the environment, and, not coincidentally, domestic glass producers. 64 Vogel, 1995: 87. The Netherlands, Denmark, France, and Italy promptly introduced their own comprehensive recycling plans. The most far-reaching initiative to reduce packaging waste, however, was undertaken by Germany. The 1991 German packaging law was a bold move towards a ‘closed loop’ economy in which products are reused instead of thrown away. It established very high mandatory targets, requiring that 90 per cent of all glass and metals, as well as 80 per cent of paper, board, and plastics be recycled. In addition, only 28 per cent of beer and soft drinks could be sold in disposable containers. The law also established ‘take-back’ requirements on manufacturers, making them responsible for the ultimate disposal of the packaging in which their products were sold and shipped. A quasi-public system was established to collect and recycle packaging, with the costs shared by participating firms. In addition to making it more difficult for foreign producers to sell their products in Germany, the so-called Töpfer Law distorted the single market in another way. The plan’s unexpected success in collecting packaging material strained the capacity of Germany’s recycling system, thus forcing Germany to ‘dump’ its excess recycled materials throughout the rest of Europe. This had the effect of driving down prices for recycled materials in Europe, and led to the improper disposal of waste in landfills in other countries. 65 Yet the ECJ’s decision in the Danish Bottle Case, combined with the Commission’s fear of being labelled ‘anti-green’, made it difficult for the Commission to challenge the German regulation in court. Accordingly, the promulgation of waste management policy now moved to the EU level. In 1994, following nearly three years of intense negotiations, a Directive on Packaging Waste was adopted by a qualified majority of member states with opposition from Germany, the Netherlands, Denmark, and Belgium. It required member states to recover at least half of their packaging waste and recycle at least one-quarter of it, within five years. Ireland, Greece, and Portugal were given slightly lower targets. More controversially, the Directive also established maximum standards: nations wishing to recycle more than 65 per cent of their packaging waste could do so, but only if they had the facilities to use their recycled products (the interaction of these minimum and maximum targets is sketched schematically below). 65 Comer, 1995. It was this provision which provoked opposition. The Packaging Waste Directive has played a critical role in strengthening packaging waste regulations and programmes throughout much of Europe, particularly in Great Britain and the South of Europe. As in the case of automobile emissions standards, it illustrates the role of the EU in diffusing the relatively stringent standards of some member states throughout Europe.
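The combination of minimum and maximum targets in the 1994 Directive can be read as a compliance band. The following minimal sketch in Python is purely illustrative: the recovery and recycling floors and the 65 per cent recycling ceiling come from the text above, while the `has_facilities` flag is a simplifying assumption standing in for the Directive's actual, more detailed derogation for states able to use their recycled output.

```python
# Schematic reading of the 1994 Packaging Waste Directive's targets as described
# in the text: recover at least half of packaging waste, recycle at least one
# quarter of it, and do not recycle more than 65% unless the recycled material
# can actually be used. The boolean `has_facilities` flag is a simplification
# for illustration; it is not the Directive's actual legal test.

def packaging_targets_ok(recovery_rate, recycling_rate, has_facilities=False):
    if recovery_rate < 0.50:            # minimum recovery target
        return False
    if recycling_rate < 0.25:           # minimum recycling target
        return False
    if recycling_rate > 0.65 and not has_facilities:
        return False                    # ceiling applies without end-use capacity
    return True

print(packaging_targets_ok(0.55, 0.30))                       # within the band -> True
print(packaging_targets_ok(0.80, 0.72))                       # above the ceiling -> False
print(packaging_targets_ok(0.80, 0.72, has_facilities=True))  # derogation -> True
```

On this schematic reading, a member state recycling above 65 per cent without adequate end-use capacity falls outside the band, which is precisely the constraint that Germany, the Netherlands, Denmark, and Belgium opposed.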
Moreover, the decrease in some state standards as a result of the 1994 Directive was modest. 66 Member states continue to innovate in this policy area and these innovations have on occasion sparked controversy within the EU. For example, in 1994, the European Commission began legal proceedings against Germany, claiming that a German requirement that 72 per cent of drink containers be refillable was interfering with efforts to integrate the internal market. Germany has proposed to do away with the requirement owing to pressure from the Commission, but it remains a pending legal issue. This packaging waste dispute tops the list of key single market disputes identified by the Commission in 2003, and the outcomes of numerous other cases hinge on its resolution. 67 In 2001, Germany adopted a policy requiring deposits on non-refillable (one-way) glass and plastic bottles and metal cans in order to encourage the use of refillable containers. This law, which went into effect in 2003, aroused considerable opposition from the German drinks industry, which held it responsible for a dramatic decline in sales of beer and soft drinks and the loss of thousands of jobs. In addition, the European Commission, acting in response to complaints from non-German beverage producers, questioned the legality of the German scheme. The Commission agreed that the refusal of major German retailers to sell one-way drink containers had disproportionately affected bottlers of imported drinks, a position which was also voiced by France, Italy, and Austria. However, after the German government promised to revise its plan in order to make it compliant with EU law, the Commission decided not to take legal action. As occurred during the previous decade, the extent to which new packaging waste initiatives by member states threaten or are perceived to threaten the single market has put pressure on the EU to 66 Haverland, 1999. 67 Environment Daily, 2001a, 2003d. 22 adopt harmonised standards. As the European Environmental Bureau noted in response to the Commission’s decision to sue Germany over national rules protecting the market share of refillable drinks containers, ‘national reuse systems will come under pressure if the Commission continues to legally attack them at the same time it fails to act at the European level’. 68 In 2004, the Commission and the EP revised the 1994 Packaging Waste Directive by not only establishing stricter recycling targets, but also differentiating these targets by materials contained in packaging waste (such as glass, metal, plastic and wood). 69 The majority of member states were allowed until the end of 2008 to comply. 70 The Directive asks the Commission to review progress and, if necessary, recommend new recycling targets every five years. In 2006, the Commission recommended that the targets specified in the 2004 amendment should remain in effect for the time being, while new members catch up with these standards. 71 CLIMATE CHANGE United States In the US, greenhouse gas emissions remain largely unregulated by the federal government. In the 1990s, the Clinton Administration participated in the United Nations’ effort to establish a treaty governing greenhouse gas emissions. While the US signed the Kyoto Protocol, no US President has submitted it to the Senate for ratification. Soon after taking office, the Bush Administration declared it would not support the Kyoto Protocol. 
It also declined to propose any regulations for carbon dioxide emissions, choosing instead to encourage industry to adopt voluntary targets through its Global Climate Change Initiative. Congress has likewise not adopted any legislation establishing mandatory reductions in greenhouse gas emissions, though in 2007 it did enact legislation strengthening vehicle fuel economy standards for the first time in more than two decades. 68 Environment Daily, 2001b. 69 European Parliament and Council, 2004. 70 With the exception of Greece, Ireland and Portugal, which were allowed until the end of 2011, due to some geographical peculiarities of these countries (presence of numerous islands within their borders and difficult terrain) and low levels of existing use of packaging materials. A subsequent amendment in 2005 allowed new member states additional time for implementation; as late as 2015 in the case of Latvia (European Parliament and Council, 2005). 71 European Commission, 2006a. In 2009, a climate change bill establishing a cap-and-trade scheme to reduce greenhouse gas emissions passed the US House of Representatives, 72 and the US EPA has acknowledged it could regulate greenhouse gas emissions under the federal Clean Air Act. 72 The American Clean Energy and Security Act of 2009 (ACES) in the 111th US Congress (H.R.2454), also known as the Waxman-Markey Bill after its authors Representatives Henry A. Waxman (Democrat, California) and Edward J. Markey (Democrat, Massachusetts). The bill proposes a national cap-and-trade program for greenhouse gases to tackle climate change. It was approved by the House of Representatives on June 26, 2009, and has been placed on the Senate calendar. Meanwhile, the lack of federal regulation has created a policy vacuum that a number of states have filled. While ‘some significant legislation to reduce greenhouse gases was enacted during the late 1990s, such as Oregon’s pioneering 1997 law that established carbon dioxide standards for new electrical power plants . . . [state] efforts to contain involvement on climate change have been supplanted in more recent years with an unprecedented period of activity and innovation’. 73 By 2003, the US EPA had catalogued over 700 state policies to reduce greenhouse gas emissions. 74 A 2002 report found that ‘new legislation and executive orders expressly intended to reduce greenhouse gases have been approved in approximately one-third of the states since January 2000, and many new legislative proposals are moving ahead in a large number of states’. 75 New Jersey and California were the first states to introduce initiatives that directly target climate change. In 1998, the Commissioner of New Jersey’s Department of Environmental Protection (DEP) issued an Administrative Order that established a goal for the state to reduce greenhouse gas emissions to 3.5 per cent below the 1990 level by 2005, making New Jersey the first state to establish a greenhouse gas reduction target. 76 The DEP has received signed covenants from corporations, universities, and government agencies across the state pledging to reduce their greenhouse gas emissions, though nearly all are unenforceable. In an unusual move, the state’s largest utility signed a covenant that includes a commitment to monetary penalties if it fails to attain its pledged reductions. Other states have employed air pollution control regulation and legislation to cap carbon dioxide emissions from large source emitters such as power plants. 73 Rabe, 2002: 7. 74 US EPA, 2003c. 75 Rabe, 2002: 7. 76 New Jersey Department of Environmental Protection, 1999. Massachusetts became the first state to impose a carbon dioxide emission cap on power plants when Governor Jane Swift established a multi-pollutant cap for six major facilities in 2001 that requires ‘each plant to achieve specified reduction levels for each of the pollutants, including a ten per cent reduction from 1997-1999 carbon dioxide levels by the middle-to-latter stages of the current decade’. 77 The New Hampshire Clean Power Act of 2002 required the state’s three fossil-fuel power plants to reduce their carbon dioxide emissions to 1990 levels by the end of 2006. 78 Oregon created the first formal standard in the US for carbon dioxide releases from new electricity generating facilities by requiring new or expanded power plants to emit no more than 0.675 pounds of carbon dioxide per kilowatt-hour, a rate that was 17 per cent below that of the most efficient natural-gas-fired plant operating in the US at the time. 79 In 2001, all six New England states pledged to reduce their carbon dioxide emissions to 10 per cent below 1990 levels by 2020. 80 By 2007, this joint commitment evolved into a ten-state, mandatory cap-and-trade program called the Regional Greenhouse Gas Initiative (RGGI). 81 As of early 2010, the initiative only encompassed fossil-fuel-fired electric power plants operating in these states with capacity greater than 25 megawatts. 82 During the first two compliance periods (running from 2009 through 2014), the goal of RGGI is to stabilize carbon dioxide emission levels. After that, the emissions cap will be reduced by an additional 2.5 per cent each year through 2018. As a result, the emissions budget in 2018 will be 10 per cent below the starting budget in 2009 (see the brief calculation sketched below). 83 Under the program, participating states conduct quarterly auctions to distribute allowances, which can then be traded in a secondary market. Recent auction clearing prices have generally remained under four dollars per (short) ton. 84 The prices of allowances exchanged in the secondary market were even lower. 85 Another regional market-based program, called the Western Climate Initiative (WCI), is under development. 77 Rabe, 2002: 16. 78 New Hampshire Department of Environmental Services, 2002. 79 Rabe, 2002. 80 New England Governors/Eastern Canadian Premiers, 2001. 81 The member states of RGGI are Connecticut, Delaware, Maine, Maryland, Massachusetts, New Hampshire, New Jersey, New York, Rhode Island, and Vermont. Pennsylvania is an observer. 82 RGGI, 2009a. 83 The initial regional emissions cap is set at 188 million short tons of carbon dioxide per year. This amount is about 4 per cent above annual average regional emissions measured during 2000-2004 (RGGI, 2007). 84 RGGI, 2009b. 85 RGGI, 2009c. This program targets the western states and provinces of the US and Canada. 86 The goal of WCI is a 15 per cent reduction in greenhouse gas emissions from 2005 levels by 2020. Similar to the RGGI, the WCI will be a cap-and-trade program and have three-year compliance periods. But unlike the RGGI, it will not be limited to carbon dioxide emissions or solely target the electric power sector. When fully implemented in 2015, the WCI is expected to cover nearly 90 per cent of greenhouse gas emissions in participating jurisdictions.
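The RGGI cap trajectory described above is simple enough to verify directly. The sketch below is an illustrative back-of-the-envelope reconstruction, not an official RGGI calculation; it uses the 188 million short ton starting budget cited in note 83 and assumes, consistently with the 10 per cent figure in the text, that each annual 2.5 per cent cut is taken against the 2009 starting budget rather than compounded.

```python
# Illustrative sketch of the RGGI cap schedule as described in the text.
# Assumption: cuts of 2.5% per year from 2015 to 2018 are percentage points of
# the 2009 starting budget (a linear, not compounded, reduction).

START_BUDGET = 188.0   # million short tons CO2 per year (note 83)

def rggi_cap(year):
    """Approximate regional cap for a given year under the schedule in the text."""
    if 2009 <= year <= 2014:           # first two compliance periods: stabilise emissions
        return START_BUDGET
    if 2015 <= year <= 2018:           # 2.5% of the starting budget cut each year
        years_of_cuts = year - 2014
        return START_BUDGET * (1 - 0.025 * years_of_cuts)
    raise ValueError("schedule described in the text covers 2009-2018 only")

for yr in (2009, 2014, 2015, 2018):
    print(yr, round(rggi_cap(yr), 1))
# The 2018 value, 188 * 0.90 = 169.2 million short tons, is 10% below the 2009
# starting budget, matching the figure quoted in the text.
```

Running the sketch gives a 2018 budget of roughly 169 million short tons, which is the 10 per cent reduction the text cites.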
Also, WCI members are required to auction off only a portion of total allowances (10 per cent at the outset, increasing to at least 25 per cent by 2020) and may choose to allocate the remainder to participating installations free of charge. 87 A third regional program is under development, based on the Midwestern Greenhouse Gas Reduction Accord (Accord) 88 signed in November 2007 by the governors of several US Midwestern states 89 and the Canadian province of Manitoba. The Accord also calls for the creation of a cap-andtrade program similar to those of RGGI and the WCI, to be operational by 2012. Proposed design features mostly resemble the WCI (for instance, allocating allowances through a combination of auctions as well as free distribution, the inclusion of all greenhouse gases, and coverage of multiple industries). On the other hand, it has some specific features for the protection of industrial interests of the region, such as the exclusion of carbon dioxide emissions from burning of biofuels (like ethanol and biodiesel) from the program. If implemented, contingent on the possible development of a federal cap-and-trade program, the goal of the Accord is to achieve a 20 per cent reduction in greenhouse gas emissions from 2005 levels by 2020. 90 In addition to these three multi-state initiatives, several states have been pursuing indirect means to reduce greenhouse gas emissions. 91 For example, more than half the US states have enacted legislation that requires utilities to provide a certain percentage of electricity generated from 86 As of January 2010, members of WCI are the US states of Arizona, California, Montana, New Mexico, Oregon, Utah and Washington, and the Canadian provinces of British Columbia, Manitoba, Ontario, and Quebec. Several other Western states and the province of Alberta are observers. 87 WCI, 2009. 88 Midwestern Greenhouse Gas Reduction Accord, 2007. 89 These are Illinois, Iowa, Kansas, Michigan, Minnesota and Wisconsin. The observing states are Indiana, Ohio and South Dakota. 90 Midwestern Greenhouse Gas Reduction Accord, 2009. 91 Rabe, 2002. 26 renewable energy sources. 92 By early 2010, nearly 20 states had already implemented, or were currently implementing, mandatory greenhouse gas emissions reporting rules. 93 Such programs attempt to mimic the US EPA Toxic Release Inventory Program’s success in spurring voluntary emission reductions by requiring public reporting of toxic releases by power plants. In 2002, 11 state Attorneys General wrote an open letter to President George W. Bush calling for expanded national efforts to reduce greenhouse gas emissions 94 and indicated their commitment to intensify state efforts if the federal government failed to act. In 2002, California passed legislation requiring its California Air Resources Board to develop and adopt greenhouse gas emission-reduction regulations by 2005 for passenger vehicles and light duty trucks, starting with vehicles manufactured in the 2009 model year. This made California the first legislative body in the US to enact legislation aimed at curbing global warming emissions from vehicles. As The New York Times pointed out, ‘Though the law applies only to cars sold in California, it will force the manufacturers to develop fuel-efficient technologies that all cars can use. 
This ripple effect will be even greater if other states follow California’s lead, as the Clean Air Act allows them to do.’ 95 Indeed, bills have been introduced in almost twenty other state assemblies since then, calling for the adoption of California’s automotive greenhouse gas standard. A diverse group of states (14 in total that include Arizona, Oregon, New Mexico, New York, Pennsylvania, Massachusetts, Virginia and Florida) ultimately passed legislation adopting the California standard. 96 During the Bush Administration, the marked divergence between state and federal policies in this area led to a flurry of lawsuits. Two of these are worth noting. The first was brought by automotive manufacturers against the state of California. Stating its intention to challenge California’s GHG standard in federal court, the president of the Alliance of Automobile Manufacturers argued that 92 As of January 2010, 29 states and the District of Columbia have enacted laws imposing these “renewable portfolio standards” (Database of State Incentives for Renewables and Efficiency, 2010). 93 As of September 2009, the following states had already developed, or were in the process of developing, mandatory greenhouse gas reporting rules: California, Colorado, Connecticut, Delaware, Hawaii, Iowa, Maine, Maryland, Massachusetts, New Jersey, New Mexico, North Carolina, Oregon, Virginia, Washington, West Virginia, and Wisconsin (US EPA, 2009a). 94 The states are Alaska, New Jersey, New York, California, Maryland, and all six New England states (Sterngold, 2002). 95 The New York Times, 2002. 96 The complete list is as follows: Washington, Oregon, Arizona, New Mexico, Florida, Virginia, Maryland, Pennsylvania, New Jersey, New York, Connecticut, Rhode Island, Massachusetts, New Hampshire and Maine. In addition, as of January 2010, three other states have proposals to adopt the California standard: Montana, Utah and Colorado (Pew Center on Global Climate Change, 2010). 27 ‘[F]ederal law and common sense prohibit each state from developing its own fuel-economy standards’. 97 The suit, filed by auto manufacturers against California Air Resource Board in 2004, was dismissed in 2007. 98 The second suit was brought against the federal government by several states, mainly as a challenge to the EPA’s position that it lacked the authority to regulate carbon dioxide emissions under the Clean Air Act. In 2003, upon the EPA’s denial of a petition to regulate tailpipe emissions of greenhouse gases, several states filed a lawsuit against the federal government claiming that the EPA is required by the Clean Air Act to regulate carbon dioxide emissions as an air pollutant because these emissions contribute to global warming. 99 Initially the case was dismissed, but the petitioners, which included 12 states, several cities and US territories as well as environmental groups, asked for a Supreme Court review. The resulting landmark case Massachusetts v. EPA was decided in favour of the petitioners in 2007. 100 In its decision, the Supreme Court found that “[b]ecause greenhouse gases fit well within the [Clean Air] Act’s capacious definition of ‘air pollutant,’ EPA has statutory authority to regulate emission of such gases from new motor vehicles.” 101 Two years later, the EPA officially acknowledged that it had both legal and scientific grounds to regulate greenhouse gas emissions. 102 On a parallel tack, California had requested a so-called ‘Clean Air Act waiver’ from the EPA in order to implement its 2002 statute. 
103 After waiting for several years for a response from the EPA, California sued to compel the agency to make a decision. The EPA denied California’s waiver request in December 2007. However, the waiver denial elicited a second lawsuit by California in 2008, which was later joined by fifteen other states and five environmental organizations. Ultimately, the Obama Administration asked the EPA to review its decision, after which California was granted the waiver in June 2009. 104 97 Keating, 2002. 98 Pew Center on Global Climate Change, 2008. 99 Johnson, 2003. 100 Meltz, 2007. 101 Massachusetts v. E.P.A., 127 S.Ct. 1438 (2007), p. 4. 102 US EPA, 2009b. 103 According to the Clean Air Act, states have the right to implement stricter standards on air pollutants, but the EPA must grant them a waiver to do so. 104 US EPA, 2009c. The waiver decision has signalled a warming of relations between states and the federal government on the issue of climate change. In return for granting the waiver, the federal government secured the commitment of California, 105 along with that of a broad set of stakeholders including auto manufacturers, to adopt uniform federal vehicle fuel economy standards (known as CAFE standards, short for Corporate Average Fuel Economy) and to regulate greenhouse gas emissions from transport, whose implementation the Obama Administration accelerated by executive order. An update to the CAFE standards (the first such proposal in several decades) was passed as part of the Energy Independence and Security Act of 2007, during the Bush Administration. However, implementation of the Act’s CAFE provision required a subsequent rulemaking by the US Department of Transportation (US DOT), which was never issued. In January 2009, the US DOT announced that it would defer any rulemaking on the new CAFE standards to the incoming administration. 106 That rulemaking was promptly issued in March 2009, though only for the model year 2011, since the Obama Administration ordered the US DOT to study the feasibility of even more stringent standards for later years. (Even the standards for model year 2011 are approximately one mile per gallon stricter than the recommendation of the previous administration.) 107 In September 2009, the US EPA and US DOT issued a draft joint rulemaking that proposed national standards to regulate vehicle fuel economy, and, for the first time in US history, greenhouse gas emissions from transport (National Program). 108 Under the original proposals of the Energy Independence and Security Act, the average nationwide fuel economy would have reached 35 miles per gallon by 2020, compared to about 25 miles per gallon in 2009. The National Program mandates a nationwide average of 35.5 miles per gallon by 2016, and once finalized, it would bring the rest of the country up to California’s current standards. Another draft rulemaking by the EPA, also issued in September 2009, would require large stationary emitters of greenhouse gases, such as power plants and industrial facilities, whether new or undergoing modifications, to obtain operating permits from the agency. 105 US EPA, 2009d. 106 US DOT, 2009a. 107 US DOT, 2009b. 108 US EPA, 2009e. The rule would cover facilities with more than 25,000 tons of greenhouse gas emissions per year and the permits would be issued based on a facility’s ability to utilize best practices to control such emissions.
109 This proposal has so far been interpreted as a strategic move by the Obama Administration to compel the Congress to pass more comprehensive legislation dealing with climate change. As of early 2010, the draft National Program rulemaking was in the process of becoming finalized. But it remained unclear whether the EPA would pursue the draft rulemaking on the permitting of large emitters, or defer to the Congress. Thus, in contrast to developments in the area of packaging waste, the lack of federal regulations for greenhouse gas emissions has become a political issue in the US. Clearly, the issue of climate change is much more politically salient in the US than is the issue of packaging waste. Thus, proposals to address the former but not the latter frequently come before Congress. Finally, while packaging waste can be seen as a problem which can be effectively addressed at the local or state level, global climate change clearly cannot. Even the regulatory efforts of the most ambitious states will have little impact on global climate change in the absence of federal regulations that impose limits on carbon dioxide emissions throughout the US. European Union By contrast, both the EU and individual EU member states have been active in developing policies to mitigate climate change. In the early 1990s, several countries (including Finland, the Netherlands, Sweden, Denmark, and Germany) had adopted or were about to adopt taxes on either carbon dioxide specifically or energy more generally. Concerned that such taxes would undermine the single market, the EU attempted to establish a European energy tax. 110 The EU’s 1992 proposal was for a combined tax on both carbon dioxide emissions and energy, with the goal of reducing overall EU emissions by the year 2000 to their 1990 levels. However, this proposal was vehemently opposed by the UK, which 109 US EPA, 2009f. 110 Zito, 2000. 30 was against European-wide tax policies, and to a lesser extent by France, which wanted a tax on carbon dioxide only rather than the combined tax. By the end of 1994, the European Council abandoned its efforts and agreed to establish voluntary guidelines for countries that were interested in energy taxes. 111 In 1997, the Commission again proposed a directive to harmonise and, over time, increase taxes on energy within the EU; that proposal was finally approved in March 2003. It contained numerous loopholes for energy-intensive industry and transition periods for particular countries and economic sectors. 112 Thus, while the EU has had to retreat from its efforts to impose a carbon/energy tax, it has succeeded in establishing the political and legal basis to harmonise such taxes throughout the EU. In March 2002, the Council of Ministers unanimously adopted a legal instrument obliging each state to ratify the Kyoto Protocol, which they have subsequently done. Under the terms of this treaty, overall EU emissions must be reduced by at least 8 per cent of their 1990 levels by 2008-2012. The so-called ‘EU bubble’ in Article 4 of the Kyoto Protocol allows countries to band together in voluntary associations to have their emissions considered collectively. However, even before Kyoto was formally ratified, the EU had begun efforts to implement its provisions. In June 1998, a Burden Sharing Agreement gave each member state an emissions target which collectively was intended to reach the 8 per cent reduction target. 
In the spring of 2000, the EU officially launched the European Climate Change Program, which identified more than 40 emission-reduction measures. One of the fundamental emission reduction measures put forth by the EU has been emissions trading. In 2001, the EU proposed a Directive establishing a system of emissions trading and harmonising domestic arrangements within the Community. 113 The Directive entered into force on October 25, 2003, creating the first international emissions trading system in the world, the EU Emissions Trading System (ETS). Under the Directive, governments are given the freedom to allocate permits as they see fit; the European Commission will not place limits on allowances, although the member states are asked to keep the number of allowances low and in line with their Kyoto commitment. 114 111 Collier, 1996. 112 Environment Daily, 1997, 2003b. 113 Smith and Chaumeil, 2002. The first trading (or compliance) period was 2005 through 2007. During the second compliance period, which runs from 2008 through 2012, the EU ETS will encompass as many as 10,000 industrial and energy installations, which are estimated to emit nearly half of Europe's carbon dioxide emissions. 115 In 2007, the EU officially committed to reduce the Community's aggregate greenhouse gas emissions by at least 20 per cent below 1990 levels by the year 2020. Consistent with this commitment and in anticipation of a new international accord to succeed the Kyoto Protocol, the European Parliament amended the EU ETS directive in 2009. 116 This amendment introduces several important changes to take effect in the third compliance period of the EU ETS, starting in 2013. First, the majority of the emission allowances, which have so far been allocated by the member governments free of charge, would instead be sold via auction. Moreover, measures governing the EU ETS, including the determination of total allowances, the auction process, the use of credits, and the monitoring, reporting and verification of emissions, would be centralised under the Commission's authority. The EU ETS is gradually being extended to include additional economic sectors. For example, emissions from international aviation will be subject to the EU ETS starting January 1, 2012. 117 As of early 2010, it was anticipated that international maritime emissions would be included next. 118 The efforts at the European level have been paralleled by a number of member-state policy initiatives. Among the earliest efforts was an initiative by Germany in which a government commission established the goal of reducing carbon dioxide emissions by 25 per cent by 2005 and 80 per cent by 2050, though these targets were subsequently relaxed owing to concerns about costs. Germany subsequently enacted taxes on energy and electricity, along with building and emissions standards. The German federal government has negotiated voluntary agreements to reduce carbon dioxide emissions with virtually every industrial sector. 114 Environment Daily, 2003c, 2003e. 115 European Commission, 2006b. 116 European Parliament and Council, 2009a. 117 Kanter, 2008. 118 Reuters, 2007 and UN Conference on Trade and Development, 2009. From 2002 to 2006, the UK operated a voluntary greenhouse gas emissions trading scheme, involving nearly fifty industrial sectors, which served as a pilot for the current EU ETS. The British government simultaneously levied a tax on energy use (the so-called climate change levy), with reduced rates for firms and sectors that met their emission-reduction targets.
Like its German counterpart, the British government has officially endorsed very ambitious targets for the reduction of carbon dioxide emissions. This requires, among other policy changes, that a growing share of electricity be produced using renewable sources. While both Germany and the UK have reduced carbon dioxide emissions in the short run, their ability to meet the Kyoto targets to which they are now legally committed remains problematic. Other countries, such as France, Belgium, and the Netherlands, have established a complex range of policies, including financial incentives to purchase more fuel-efficient vehicles, investments in alternative energy, changes in transportation policies, voluntary agreements with industry, and the limited use of energy taxes. In 2002, Denmark approved legislation phasing out three industrial greenhouse gases controlled by Kyoto. In order to utilize demand-side management and energy efficiency measures for environmental protection, including the reduction of greenhouse gas emissions, the EU also issued a directive specifically addressing energy efficiency in 2006. 119 This directive calls for five-year action plans to be developed by the Commission towards achieving the EU's goal of a 20 per cent reduction in primary energy consumption by 2020, 120 and has established an indicative energy savings target of 9 per cent to be reached within nine years (i.e., 1 per cent annually), starting in 2008. The directive allows each member state to develop its own national action plan to achieve this target (or better). However, as this directive is not legally binding, participation and adherence by member states remain uneven. One of the novel energy savings mechanisms supported by the directive involves the use of tradable white certificates. This is a market-based mechanism whereby energy savings are certified and transformed into so-called tradable white certificates, which can then be traded in a secondary market, similar to allowances in an emissions trading system. 119 European Parliament and Council, 2006. 120 Europa: Summaries of EU legislation, 2008, 2009. A few EU member states (such as France, Italy and the UK) have experimented with white certificate markets, but the voluntary nature of energy efficiency targets across the EU, the fragmented action plans of member states towards achieving energy savings, and challenges involving the market interactions between tradable white certificates, green certificates (or renewable energy certificates 121 ) and greenhouse gas allowances have so far limited market development. 122 Another example of centralised EU regulation on climate change involves carbon dioxide emissions from passenger vehicles. Since 1999, the EU has required all new cars sold within the EU to display labels indicating their fuel efficiency and carbon dioxide emissions. Most recently, a regulation enacted in 2009 requires auto manufacturers to limit their fleet-wide average carbon dioxide emissions or pay an 'emissions premium' (penalty). 123 The emission limits and penalties will gradually be strengthened during the adjustment period of 2012 through 2018. In 2012, only 65 per cent of each manufacturer's passenger car fleet will be required to meet the baseline of 130 grams of carbon dioxide per kilometre. By 2020, a manufacturer's entire fleet must have average carbon dioxide emissions of 95 grams per kilometre or less.
The penalty will be incremental during the adjustment period, starting from €5 for the first gram per kilometre of emissions over the limit and rising to €95 for additional grams per kilometre. By 2019, it will be fixed at €95 for each gram per kilometre.
ANALYSIS
The dynamics of the relationship between central and state authorities vary considerably across these six case studies. In three cases (automobile emissions in the EU and the US, and packaging waste policies in the EU), state governments have been an important source of policy innovation and diffusion. In these cases, state authorities were the first to regulate, and their regulations resulted in the adoption of more stringent regulatory standards by the central government. 121 Renewable energy certificates represent a similar concept to tradable white certificates and emissions allowances. In the case of renewable energy certificates, energy generated from approved renewable energy resources is certified and traded in a secondary market, and can be applied as offsets towards reducing the greenhouse gas emission burden of an installation. 122 Mundaca and Neij, 2007 and Labanca and Perrels, 2008. 123 European Parliament and Council, 2009b. In the case of climate change policies, both EU and member state regulations have proceeded in tandem, with one reinforcing the other. In the two remaining cases (packaging waste and climate change in the US), American states have been a source of policy innovation, but not of significant policy diffusion. To date, state initiatives in these policy areas have not prompted an expansion of federal regulation, though some state regulations have diffused to other states. The earlier US pattern of automotive emissions standards, in which California and other states helped ratchet up federal standards, has so far not applied to either of these policy areas. However, over the years, the issue of climate change has become more politically significant than packaging waste, and sustained pressure from the states may generate some form of federal action on climate under the Obama Administration. Moreover, as climate change gains prominence as the broader environmental threat, automotive emissions are increasingly evaluated in the same context. As a result, this potential federal action on climate change may be two-pronged. As of early 2010, even stricter automobile fuel economy and emissions standards—proposed to be on par with those of California—were already on the drawing board. In fact, the associated draft rulemaking, which sets national standards for vehicle greenhouse gas emissions for the first time, was the result of an agreement between the federal government and California. This action on motor vehicle greenhouse gas emissions may then be followed by legislative or regulatory action directed at other sources of greenhouse gas emissions. 124 On the other hand, in Europe, relatively stringent state environmental standards continue to drive, or at least closely parallel, the adoption of more stringent central standards. Thus, in the EU, the dynamics of the interaction between state and central authorities have become much more significant than in the US. Why has this occurred? Three factors are critical: two are structural and one is political.
124 Legislative action could consist of Congress passing a climate change bill that might call for a nationwide cap-and-trade scheme in greenhouse gases. Regulatory action could involve the US EPA issuing a rulemaking to establish carbon dioxide regulation, as mentioned earlier. The agency could perhaps even establish a cap-and-trade market similar to the existing markets for nitrogen oxides and sulphur dioxide. The regulatory path has the potential to be more contentious than the legislative path. First, in the EU, states play a direct role in the policy-making process through their representation in the Council of Ministers, the EU's primary legislative body. This provides state governments with an important vehicle to shape EU policies. In fact, many European environmental standards originate at the national level; they reflect the successful effort of a member state to convert its national standards into European ones. In the US, by contrast, state governments are not formally represented in the federal government. While representatives and senators may reflect the policy preferences of the states from which they are elected, the states themselves enjoy no formal representation, unlike in the EU where they are represented on the Council of Ministers. Consequently, for example, the senators and representatives from California enjoy less influence over US national environmental legislation than does Germany's representative in the Council of Ministers. Second, the single market is more recent and more politically fragile in the EU than in the US. The federal government's legal supremacy over interstate commerce dates from the adoption of the US constitution, while the EU's constitutional authority and political commitment to create and maintain a single market are less than two decades old. Accordingly, the European central government appears more sensitive to the impact of divergent standards on its internal market than does the US central government. For example, the US federal government explicitly permits two different standards for automotive emissions, while the EU insists on a uniform one. Likewise, the US federal government appears relatively indifferent to the wide divergence in state packaging waste regulations; only state regulations restricting imports of hazardous wastes and garbage have been challenged by federal authorities. 125 By contrast, distinctive state packaging waste standards have been an important source of legal and political tension within the EU, prompting efforts to harmonise standards at the European level, as well as legal challenges to various state regulations by the Commission. There are numerous state standards for packaging waste in the US that would probably prompt a legal challenge by the Commission were they adopted by an EU member state. Significantly, the EU has established maximum state recovery and recycling goals, while the US central government has not. 125 Stone, 1990. This means that when faced with divergent state standards, particularly with respect to products, the EU is likely to find itself under more pressure than the US central government to prevent them from interfering with the single market. Accordingly, such standards must be either challenged or harmonised. In principle, harmonisation need not result in more stringent standards. In fact, the EU's Packaging Directive imposes both a ceiling and a floor.
But for the most part, coalitions of the EU's greener member states have been successful in pressuring the EU to adopt directives that generally strengthen European environmental standards. The political influence of these states has been further strengthened by the role of the European Commission, which has made an institutional and political commitment to improving European environmental quality; consequently, the Commission typically prefers to use its authority to force states to raise their standards rather than lower them. In addition, the increasingly influential role of the European Parliament, in which green constituencies have been relatively strongly represented, has also contributed to strengthening EU environmental standards. The third factor is a political one. During the 1960s and 1970s, there was a strong political push in the US for federal environmental standards. According to environmentalists and their supporters, federal regulation was essential if the US was to make effective progress in improving environmental quality. And environmentalists were influential enough to secure the enactment of numerous federal standards, which were generally more stringent than those at the state level. Thus, the centre of gravity of US environmental regulation shifted to Washington. After the Republican Party's capture of both chambers of Congress in 1994, followed by the two-term Republican presidency starting in 2000, relatively few more-stringent environmental standards were adopted. During this period, the national political strength of environmentalists and their supporters diminished. Nevertheless, environmentalists and their supporters continued to be relatively influential in a number of American states. In part, this upsurge of state activity has been a response to their declining influence in Washington. By 2008, a major discontinuity had emerged between the environmental policies of many US states and those of the federal government. This has meant that, unlike in the 1960s and 1970s, more stringent state standards have had much less impact on the strengthening of federal standards. Indeed, in marked contrast to two decades ago, when the automobile emissions standards of California and other states led to the progressive strengthening of federal standards in this critical area of environmental policy, California's recent policy efforts to regulate automobiles as part of a broader effort to reduce greenhouse gas emissions were initially challenged by the federal government on the grounds that they violated federal fuel-economy standards, an area of regulatory policy in which the federal government has exclusive authority but which it did not strengthen for more than two decades. The Obama Administration has sought to reinvigorate the federal government's environmental policy role, most notably in the critical area of global climate change. It has also reduced some of the friction between states and the federal government in the area of greenhouse gas emissions from motor vehicles. In the EU, the political dynamics of environmental regulation differ markedly. The 1990s witnessed both the increased political influence of pro-environmental constituencies within the EU – by the end of that decade, green parties had entered the governments of five Western European nations – and a decline in the influence of green pressure groups in the US federal government. During this period, a number of EU environmental policies became more centralised and stringent than those of the US.
126 Paradoxically, while the US federal government exercises far more extensive authority than the EU, in each of the three cases we examined, EU environmental policy is now more centralised than in the US.
CONCLUSION
The focal cases are summarised in Table 9.1. We conclude with general observations about the dynamics of environmental policy in the federal systems of the US and the EU. On one hand, the continued efforts of states in the US and member states of the EU to strengthen a broad range of environmental regulations suggest that fears of a regulatory race to the bottom may be misplaced. 126 Vogel, 2003. Clearly, concerns that strong regulations will make domestic producers vulnerable to competition from producers in political jurisdictions with less stringent standards have not prevented many states on both sides of the Atlantic from enacting relatively stringent and ambitious environmental standards. On the other hand, the impact of such state policies remains limited, in part because not all states choose to adopt or vigorously enforce relatively stringent standards. Thus, in the long run, there is no substitute for centralised standards; they represent the most important mechanism of policy diffusion.
Table 9.1 Comparison of environmental regulations
____________________________________________________________________
Policy area        EU chronology       EU status      US chronology       US status
____________________________________________________________________
Auto emissions     State to central    Centralised    State to central    Shared
Packaging waste    State to shared     Contested      State               Uncontested
Climate change     Shared              Uncontested    State               Contested
____________________________________________________________________
Accordingly, the most important role played by state standards is to prompt more stringent central ones. But unless this dynamic comes into play, the effectiveness of state environmental regulations will remain limited. In the areas of both global climate change and packaging waste, virtually all US state regulations are less stringent than those of the EU. It is not coincidental that the case we examined in which EU and US standards are the most comparable – and relatively stringent – is automobile emissions, in which the US central government plays a critical role. By contrast, the lack of central regulations in the US for both packaging waste and climate change clearly reflects and reinforces the relative laxity of US regulations in these policy areas. The EU's more centralised policies in both areas reflect the greater vigour of its recent environmental efforts.
REFERENCES
American Forest & Paper Association (2003), 'State recycling goals and mandates', http://www.afandpa.org/content/navigationmenu/environment_and_recycling/recycling/state_recycling_goals/state_recycling_goals.htm. Barr, S. (1998), 'Clinton orders more recycling; Government agencies face tougher requirements on paper', The Washington Post, September 16, A14. Bell, V. (1998), 'President, Environmental Packaging International, environmental packaging compliance tips', http://www.enviro-pac.com/pr02.htm, August. Bennett, G. (ed.) (1991), Air Pollution Control in the European Community: Implementation of the EC Directives in the Twelve Member States, London: Graham and Trotman. Berland, R. (1992), 'State and local attempts to restrict the importation of solid and hazardous waste: Overcoming the dormant commerce clause', University of Kansas Law Review, 40(2), 465-497. Bramley, M.
(2002), ‘A comparison of current government action on climate change in the U.S. and Canada’, Pembina Institute for Appropriate Development, http://www.pembina.org/publications_item.asp?id=129. Bryner, G. (1993), ‘Blue skies, green politics’, Washington, DC: Congressional Quarterly Press. California Air Resources Board (2000), ‘California’s diesel risk reduction program: Frequently asked questions (FAQ)’, http://www.arb.ca.gov/diesel/faq.htm. California Air Resources Board (2001), ‘Fact sheet: California’s Zero Emission Vehicle Program’, http://www.arb.ca.gov/msprog/zevprog/factsheets/evfacts.pdf. California Air Resources Board (2003), ‘Staff report: Initial statement of reasons, 2003; Proposed amendments to the California Zero Emission Vehicle Program regulations’, http://www.arb.ca.gov/regact/zev2003/isor.pdf. California Environmental Protection Agency (EPA) (2001), ‘History of the California Environmental Protection Agency’, http://www.calepa.ca.gov/about/history01/ arb.htm. California Integrated Waste Management Board (2003), ‘Buy recycled: Web resources’, http://www.ciwmb.ca.gov/buyrecycled/links.htm. Center for Responsive Law (2003), ‘Government purchasing project “State government environmentally preferable purchasing policies”’, http://www.gpp.org /epp_states.html. Chanin, R. (2003), ‘California’s authority to regulate mobile source greenhouse gas emissions’, New York University Annual Survey of American Law, 58, 699-754. Collier, U. (1996), ‘The European Union’s climate change policy: Limiting emissions or limiting powers?’, Journal of European Public Policy, 3(March), 122-138. Comer, C. (1995), ‘Federalism and environmental quality: A case study of packaging waste rules in the European Union’, Fordham Environmental Law Journal, 7, 163-211. Container Recycling Institute (2003), ‘The Bottle Bill Resource Guide’, http://www.bottlebill.org. Cotsworth, E. (2002), ‘Letter to Anna K. Maddela’, yosemite.epa.gov/osw /rcra.nsf/ea6e50dc6214725285256bf00063269d/290692727b7ebefb85256c6700700d50?opendocument. Dietly, K. (2001), Research on Container Deposits and Competing Recycling Programs, presentation to the Columbia, Missouri Beverage Container Deposit Ordinance Law Study Committee Meeting, 1 November. Database of State Incentives for Renewables and Efficiency (2010), ‘Summary Maps: Renewable Portfolio Standards’, http://www.dsireusa.org/documents/ summarymaps/RPS_map.ppt, accessed January 16, 2010. Environment Daily (1997), March 13. Environment Daily (2001a), March 29. Environment Daily (2001b), October 4. Environment Daily (2003a), February 28. Environment Daily (2003b), March 21. Environment Daily (2003c), April 2. Environment Daily (2003d), May 5. Environment Daily (2003e), July 2. Europa: Summaries of EU legislation (2008), ‘Action Plan for Energy Efficiency (2007-12)’, http://europa.eu/ legislation_summaries/energy/energy_efficiency/l27064_en.htm, accessed January 24, 2010. Europa: Summaries of EU legislation (2009), ‘Energy efficiency for the 2020’, http://europa.eu/legislation _summaries/energy/energy_efficiency/en0002_en.htm, accessed January 24, 2010. European Commission (2006a), ‘Report from the Commission to the Council and the European Parliament on the implementation of directive 94/62/ec on packaging and packaging waste and its impact on the environment, as well as on the functioning of the internal market’, http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri= COM:2006:0767:FIN:EN:HTML, accessed January 29, 2010. 
European Commission (2006b), ‘MEMO/06/452: Questions and Answers on Emissions Trading and National Allocation Plans for 2008 to 2012’, http://ec.europa.eu/environment/climat/pdf/m06_452_en.pdf, accessed January 25, 2010. European Commission (2009), ‘Reducing CO2 emissions from light-duty vehicles’, http://ec.europa.eu/environment/air/transport/co2/co2_home.htm, accessed January 18, 2010. European Parliament and Council (2004), ‘Directive 2004/12/EC of the European Parliament and of the Council of 11 February 2004 amending Directive 94/62/EC on packaging and packaging waste’, Official Journal L047, February 18, 2004, p. 0026-0032. European Parliament and Council (2005), ‘Directive 2005/20/EC of the European Parliament and of the Council of 9 March 2005 amending Directive 94/62/EC on packaging and packaging waste’, Official Journal L070, March 16, 2005, p. 0017- 0018. European Parliament and Council (2006), ‘Directive 2006/32/EC of the European Parliament and of the Council of 5 April 2006 on energy end-use efficiency and energy services and repealing Council Directive 93/76/EEC’, Official Journal L114, April 27, 2006, p. 0064-0084. European Parliament and Council (2009a), ‘Directive 2009/29/EC of 23 April 2009 amending Directive 2003/87/EC so as to improve and extend the greenhouse gas emission allowance trading scheme of the Community’, Official Journal L140, June 6, 2009, p. 0063-0087. 40 European Parliament and Council (2009b), ‘Regulation EC443/2009 of 23 April 2009, setting emission performance standards for new passenger cars as part of the Community’s integrated approach to reduce CO2 emissions from light-duty vehicles,’ Official Journal L140, June 6, 2009, p. 001-0015. Hakim, D. (2003a), ‘California regulators modify Auto Emissions Mandate’, The New York Times, April 25, A24. Hakim, D. (2003b), ‘Automakers drop suits on air rules’, The New York Times, August 12, A1, C3. Haverland, M. (1999), Regulation and Markets: National Autonomy, European Integration and the Politics of Packaging Waste, Amsterdam: Thela Thesis. Johnson, K. (2003), ‘3 States sue E.P.A. to regulate emissions of carbon dioxide’, The New York Times, June 5. Kanter, J. (2008), ‘Europe Forcing Airlines to Buy Emissions Permits’, The New York Times, October 24. Keating, G. (2002), ‘Californian governor signs landmark Auto Emissions Law’, Reuters, July 23, http://www.enn.com/news/wire-stories/2002/07/07232002/ s_47915. asp. Nicola Labanca, N. and Perrels, A. (2008), ‘Tradable White Certificates--a promising but tricky policy instrument (Editorial)’, Energy Efficiency, 1(November), p. 233-236. Lee, G. (1993), ‘Government purchasers told to seek recycled products; Clinton Executive Order revises standards for paper’, The Washington Post, October 21, A29. Massachusetts v. E.P.A., 127 S.Ct. 1438 (2007). The full text of the decision is available at http://www.supremecourtus.gov/opinions/06pdf/05-1120.pdf, accessed January 18, 2010. McCarthy, J. (1993), ‘Bottle Bills and curbside recycling: Are they compatible?’, Congressional Research Service (Report 93-114 ENR), http://www.ncseonline org/ nle/crsreports/pollution. McCormick, J. (2001), Environmental Policy in the European Union, New York: Palgrave. Meltz, R. (2007), ‘The Supreme Court’s Climate Change Decision: Massachusetts v. EPA’, May 18, Congressional Research Service Report RS22665, http://assets.opencrs.com/rpts/RS22665_20070518.pdf, accessed January 18, 2010. Midwestern Greenhouse Gas Reduction Accord (2007). 
The full text of the 2007 Accord is available at http://www.midwesternaccord.org/midwesterngreenhousegas reductionaccord.pdf, accessed January 25, 2010. Midwestern Greenhouse Gas Reduction Accord (2009), ‘Draft Final Recommendations of the Advisory Group’, http://www.midwesternaccord.org /GHG%20Draft%20Advisory%20Group%20Recommendations.pdf, accessed January 25, 2010. Mundaca, L. and Neij, L. (2007), ‘Package of policy recommendations for the assessment, implementation and operation of TWC schemes’, Euro White Cert Project Work Package 5, http://www.ewc.polimi.it/documents/Pack_Policy_ Recommendations.pdf, accessed January 18, 2010. New England Governors/Eastern Canadian Premiers (2001), ‘Climate Change Action Plan 2001’, http://www.massclimateaction.org/pdf/necanadaclimateplan.pdf. New Hampshire Department of Environmental Services (2002), ‘Overview of HB 284: The New Hampshire Clean Power Act, ground-breaking legislation to reduce multiple harmful pollutant from New Hampshire’s electric power plants’, http://www.des.state.nh.us/ard/cleanpoweract.htm. New Jersey Department of Environmental Protection (1999), ‘Sustainability Greenhouse Action Plan’, http://www.state.nj.us/dep/dsr/gcc/gcc.htm. New York Times, The (2002), ‘California’s message to George Pataki (Editorial)’, July 24, A18. OECD (2002), Sustainable Development: Indicators to Measure Decoupling of Environmental Pressure from Economic Growth, SG/SD(2002)1/FINAL, May, Paris: OECD. OECD (2004), Environmental Data Compendium: Selected Environmental Data, OECD EPR/Second Cycle, February 9, Paris: OECD. Oregon Department of Environmental Quality (2003), ‘Oregon Container Glass Recycling Profile’, http://www.deq.state.or.us/wmc/solwaste/glass.html. Parker, J. (2003), ‘California board’s boundaries debated: Automakers say it oversees emissions, not fuel economy’, Detroit Free Press, May 7. Percival, R., A. Miller, C, Schroeder, and J. Leape (1992), Environmental Regulation: Law, Science and Policy. Boston: Little, Brown & Co. Pew Center on Global Climate Change (2008), ‘Central Valley Chrysler-Jeep Inc. v. Goldstone’, http://www.pewclimate.org/judicial-analysis/CentralValleyChrysler Jeep-v-Goldstone, accessed January 28, 2010. Pew Center on Global Climate Change (2010), ‘Vehicle Greenhouse Gas Emissions Standards’, http://www.pewclimate.org/sites/default/modules/usmap/pdf.php? file=5905, accessed January 16, 2010. Plastic Shipping Container Institute (2003), ‘Wisconsin solid waste legislative update’, http://www.pscionline.org. Rabe, B. (2002), ‘Greenhouse & statehouse: The evolving state government role in climate change’, Pew Center on Global Climate Change, http://www.pewclimate.org/global-warming-in-depth/all_reports/greenhouse_and_statehouse_/. Rehbinder, E. and R. Stewart (1985), Integration Through Law: Europe and American Federal Experience, vol. 2: Environmental Protection Policy, New York: Walter de Gruyter. Reuters (2007), ‘EU confirms to propose ships join emissions trade’, April 16. Revesz, R. (2001), ‘Federalism and environmental regulation: A public choice analysis’, Harvard Law Review, 115, 553- 641. RGGI (2007), ‘Overview of RGGI CO2 Budget Trading Program’, http://rggi.org/docs/program_summary_10_07.pdf, accessed January 24, 2010. RGGI (2009a), ‘RGGI Fact Sheet’, http://www.rggi.org/docs/RGGI_Executive%20 Summary_4.22.09.pdf, accessed January 25, 2010. 41 RGGI (2009b), Auction Results, http://www.rggi.org/co2-auctions/results, accessed January 25, 2010. 
RGGI (2009c), RGGI CO2 Allowance Tracking System (COATS): Public Reports: Transaction price reports for January 1, 2009 through December 31, 2009, https://rggi-coats.org/eats/rggi/index.cfm?fuseaction=reportsv2.price_rpt& clearfuseattribs=true, accessed January 25, 2010. Smith, M. and T. Chaumeil (2002), ‘Greenhouse gas emissions trading within the European Union: An overview of the proposed European Directive’, Fordham Environmental Law Journal, 13(Spring), 207-225. Sterngold, J. (2002), ‘State officials ask Bush to act on global warming’, The New York Times, July 17, A12. Stone, J. (1990), ‘Supremacy and commerce clause: Issues regarding state hazardous waste import bans’, Columbia Journal of Environmental Law, 15(1), 1-30. Thorman, J., L. Nelson, D. Starkey, and D. Lovell (1996), ‘Packaging and waste management; National Conference of state legislators’, http://www.ncsl.org/ programs/esnr/rp-pack.htm. UN Conference on Trade and Development (2009), ‘Maritime Transport and the Climate Change Challenge: Summary of Proceedings’, Multi-Year Expert Meeting on Transport and Trade Facilitation, February 16-18, Geneva. US Department of Transportation (2009a), ‘Statement from the Department of Transportation’, January 7, 2009, http://www.dot.gov/affairs/dot0109.htm. US Department of Transportation (2009b), ‘Average Fuel Economy Standards, Passenger Cars and Light Trucks, Model Year 2011’, March 9, 2009. US EPA (1998), ‘Control of Air Pollution from New Motor Vehicles and New Motor Vehicle Engines: Finding of National Low Emission Vehicle Program in Effect’, March 2, 63 Federal Register 926. US EPA (1999), ‘California State Motor Vehicle Pollution Control Standards; Waiver of federal preemption’, http://www.epa.gov/otaq/regs/ld-hwy/evap/waivevap.pdf. US EPA (2000), ‘Control of Air Pollution from New Motor Vehicles: Tier 2 Motor Vehicle Emission Standards and Gasoline Sulfur Control Requirements; Final Rule’, February 10, 65 Federal Register 6697. US EPA (2001), ‘Control of Air Pollution from New Motor Vehicles: Heavy-Duty Engine and Vehicle Standards and Highway Diesel Fuel Sulfur Control Requirements’, January 18, 66 Federal Register 5001. US EPA (2003a), ‘Federal and California Exhaust and Evaporative Emission Standards for Light-Duty Vehicles and LightDuty Trucks’, Report EPA420-B-00-001, http://www.epa.gov/otaq/stds-ld.htm. US EPA (2003b), ‘Municipal Solid Waste (MSW): Basic facts’, http://www.epa.gov/apeoswer/non-hw/muncpl/facts.htm. US EPA (2003c), ‘Global warming: State actions list’, yosemite.epa. gov/oar/globalwarming.nsf/content/actionsstate.html. US EPA (2006), ‘What Are the Six Common Air Pollutants?’, http://www.epa.gov/air/urbanair/, accessed February 5, 2010. US EPA (2009a), ‘Regulatory Impact Analysis for the Mandatory Reporting of Greenhouse Gas Emissions Final Rule (GHG Reporting): Final Report’, September 2009, http://www.epa.gov/climatechange/emissions/downloads09/GHG_RIA.pdf, accessed January 28, 2010. US EPA (2009b), ‘Endangerment and Cause or Contribute Findings for Greenhouse Gases under the Clean Air Act’, http://www.epa.gov/climatechange/ endangerment.html, accessed January 18, 2010. US EPA (2009c), ‘California Greenhouse Gas Waiver Request’, http://www.epa.gov/oms/climate/ca-waiver.htm, accessed January 18, 2010. US EPA (2009d), ‘Commitment Letters: California Governor Schwarzenegger’, http://www.epa.gov/otaq/climate/regulations/calif-gov.pdf, accessed January 18, 2010. 
US EPA (2009e), 'EPA and NHTSA Propose Historic National Program to Reduce Greenhouse Gases and Improve Fuel Economy for Cars and Trucks', http://epa.gov/otaq/climate/regulations/420f09047a.htm, accessed January 18, 2010. US EPA (2009f), 'Prevention of Significant Deterioration and Title V Greenhouse Gas Tailoring Rule', http://www.epa.gov/NSR/fs20090930action.html, accessed February 5, 2010. Vogel, D. (1995), Trading Up: Consumer and Environmental Regulation in a Global Economy, Cambridge: Harvard University Press. Vogel, D. (2003), 'The hare and the tortoise revisited: The new politics of consumer and environmental regulation in Europe', British Journal of Political Science, 33(4), 557-580. WCI (2009), 'The WCI Cap & Trade Program', http://www.westernclimateinitiative.org/the-wci-cap-and-trade-program, and 'The WCI Cap & Trade Program: Frequently Asked Questions', http://www.westernclimateinitiative.org/the-wci-cap-and-trade-program/faq, both last accessed January 25, 2010. Yost, P. (2002), 'Bush administration is against California's Zero Emissions Requirement for Cars', Environmental News Network, http://www.enn.com/news/wire-stories/2002/10/10102002/ap_48664.asp. Zito, A. (2000), Creating Environmental Policy in the European Union, New York: St. Martin's Press.
HARVARD AND CHINA: A RESEARCH SYMPOSIUM
MARCH 2010
EXECUTIVE SUMMARIES OF SELECTED SESSIONS
Copyright © 2010 President & Fellows of Harvard College
SESSIONS
WELCOME AND OPENING PLENARY
THE CHINESE CENTURY?
CHINA—DYNAMIC, IMPORTANT AND DIFFERENT
THE MORAL LIMITS OF MARKETS
WHO CARES ABOUT CHINESE CULTURE?
MANAGING CRISES IN CHINA
CHINA'S NEWEST REVOLUTION: HEALTH FOR ALL?
INNOVATIONS CHANGING THE WORLD: NEW TECHNOLOGIES, HARVARD, AND CHINA
CLOSING REMARKS (F. WARREN MCFARLAN)
CLOSING REMARKS (DREW GILPIN FAUST)
OVERVIEW
The world is going through the second great wave of globalization. Globalization isn't just economic; education is also globalizing. Amid this globalization wave, the engagement of China and America is critical as the economies of these two countries will shape the world economy. It is important for Harvard University and Harvard Business School to be part of the engagement between China and America. The creation of the Harvard Center Shanghai represents a next stage of Harvard's engagement in Asia. It is another step in the journey of becoming a truly global university.
CONTEXT
Dean Light and Professor Palepu reflected on the role that globalization plays in education, the journey to create the Harvard Center Shanghai, and the mutual benefits of deepened engagement in China.
SPEAKERS
Jay O. Light, George F. Baker Professor of Administration and Dean, Harvard Business School
Krishna G. Palepu, Ross Graham Walker Professor of Business Administration and Senior Associate Dean for International Development, Harvard Business School
WELCOME AND OPENING PLENARY
Harvard Business School's process of globalizing has many important elements. These elements include having a global:
• Student body. Twenty years ago, Harvard Business School had a relatively small number of international students and few Chinese students. Today, HBS has quite a few Chinese students and the student body is highly international.
• Faculty. Today HBS's faculty comes from across the world, including a half dozen faculty who understand Mandarin, several of whom also can teach in Mandarin. The faculty also includes Bill Kirby, one of the West's foremost China historians, who splits his time between HBS and Harvard College.
• Curriculum. HBS's curriculum and cases have become global relatively quickly. There are now courses on doing business in China, immersion programs—including programs in China and elsewhere in Asia—and many other global components in the curriculum.
• Alumni group. As HBS students are increasingly international, so too are the school's alumni. In Shanghai, there is an increasingly active alumni organization.
In addition to HBS's global focus, Harvard University also has adopted a more global perspective. The university is seeking to leverage the work and interest of the entire Harvard community in the global arena. For example, Shanghai is a hotbed for undergraduate internships.
One seemingly simple change that will allow students from across Harvard to engage in international opportunities is Harvard's decision to move to an integrated school-wide calendar. This common calendar will allow coordination of programs across different schools and will make it easier for students to engage in these coordinated global programs. While these elements are important for Harvard to become a truly international university, it also became apparent that being part of the engagement between China and America required that Harvard have a greater presence in China. So, in the last two years, the decision was made to pursue a footprint in China, specifically in Shanghai. Shanghai is the right city and this footprint is in the right place—a central location in Shanghai, on top of two key subway lines.
It is important for Harvard to be part of the globalization of the economy and education. Harvard and China have a long-shared history. During the first great wave of globalization around 100 years ago, education also was being globalized. There were students at Harvard College from Shanghai as well as other locations in China. The first classes at Harvard Business School included students from Shanghai. Also, Harvard Medical School was active in Shanghai. Then the world changed. Following World War I, the Great Depression, and World War II, the previous wave of globalization gave way to very local political and economic attitudes. Economically and educationally, China and America were not linked. Now, we find ourselves in the second great wave of globalization, which has been building over the past two decades. Today, the world economy and education are being globalized in unprecedented ways. China is now the world's second-largest economy; the future of the global economy depends on the ability of China and America to engage with each other in a constructive, integrative way. In the long term, engagement between China and America is critical. Recognizing this, it became clear that Harvard should be part of this engagement.
"I believe the Harvard Business School and Harvard University must be part of that engagement and must be an important part of understanding how the world economy, the Chinese economy, and the American economy are evolving, and how we can engage with each other." — Jay O. Light
"We could enable [Chinese CEOs in an executive program] to experience Harvard Business School without having to go to Boston, and that's a real landmark." — Krishna G. Palepu
Historically, great universities have been located in countries with great economies. The stellar universities of Britain, Germany, and America all rose as their societies rose. By taking a global perspective and by opening research and education centers around the globe, particularly in Shanghai, Harvard Business School and Harvard University are seeking to become the first school to maintain its prominent stature as economic forces shift around the globe.
The opening of the Harvard Center Shanghai demonstrates the continuing commitment to becoming a truly global university. As a scholar who studies multinationals in emerging markets, Professor Palepu knows how hard it is for organizations to make the commitments that are necessary to transform themselves into global enterprises. The opening of the Harvard Center Shanghai demonstrates such a commitment by Harvard. It marks a continuing evolution in HBS's global journey.
Around 1995, HBS began opening global research centers around the world. The first of these research centers opened in Hong Kong, and the school now has six centers, which have contributed significantly to the school's curriculum. About five years ago, a faculty committee chaired by Professor Palepu recommended expanding and converting these research centers into research and education centers. The rationale was that, in HBS's view, there isn't a distinction between research and education, and the uniqueness of HBS is that synergy between research and education. But part of this evolution requires physical infrastructure where classes can be taught. The infrastructure in Shanghai is the type of educational infrastructure that is needed.
OVERVIEW
Many people say the 21st century will be the "Chinese Century." However, similar statements made a century ago didn't come to fruition. Yet for those who have spent time in the country, it is hard to doubt that China will play a critical world role in the next 100 years. China is rapidly moving forward in pursuing unfulfilled dreams in areas of infrastructure, entrepreneurship, and education. Still, as central a role as China will play, this century won't belong exclusively to China. This will be a century for all in the world who share common aspirations and who work and learn together to solve common problems.
CONTEXT
Professor Kirby shared his thoughts on whether this will be China's century. He looked back at the past century and examined the key factors propelling China forward.
SPEAKER
William C. Kirby, Spangler Family Professor of Business Administration, Harvard Business School; T.M. Chang Professor of China Studies, Faculty of Arts and Sciences; Chairman, Harvard China Fund; Director, Fairbank Center for Chinese Studies
THE CHINESE CENTURY?
China's rise in the 21st century is based on its recovery in the 20th century. Some people claim the 21st century will be the "Chinese Century," which is hard to question. But viewing this as China's century does not mean excluding other countries; it comes as part of a global community. In large measure, China's success in the 21st century is based on its recovery in the 20th century and its pursuit of longstanding, unfulfilled dreams.
"If China is in some measure to define the 21st century, it is because of its recovery and rise in the 20th." — William C. Kirby
The longstanding dreams China is working to fulfill are:
• An infrastructure dream. China is built on a long tradition of infrastructure. In his book The International Development of China, published in 1922, Sun Yat-sen envisioned a modern China with 100,000 miles of highway and a gorgeous dam. He foresaw a "technocracy," which has been translated in Chinese as "the dictatorship of the engineers" (an apt definition of China's government today). This infrastructure dream is becoming a reality as China builds highways, airports, telecommunication systems, and a dam that couldn't be built anywhere else except in China.
• A private enterprise dream. While the government is building the infrastructure, the private sector is building a rapidly growing middle class and a consumer economy. This economy includes proliferating retail stores and new Chinese brands (many of which are targeted to "Mr. and Mrs. China").
No one knows how large the middle class is, with the best guess in the 200–250 million range. There is a group in China termed the "urban middle class." These individuals are 20–50 years old; 80% own their own home, and most don't have a mortgage; 23% have more than one property. About one-third have a car; they love to travel; and they are beginning to buy stocks. (However, the gap between this new urban middle class and the rural population—a gap that has always existed—is growing fast.)
One hundred years ago, China seemed on the verge of the "Chinese Century," but it didn't come to fruition. In the early 1900s, many experts thought China was on the verge of the "Chinese Century." A host of books proclaimed China's awakening. This view was based on:
• A revolution in business. China was experiencing its first golden age of capitalism. China had a sizeable middle class, and the glamorous city of Shanghai—not Tokyo or Hong Kong—was the international center of East Asian commerce. It was also a golden age for entrepreneurship.
• The formation of Asia's first republic. About 100 years ago, under Sun Yat-sen, China engaged in a grand historical experiment in forming Asia's first republic.
• A revolution in education. In the first half of the 20th century, China developed one of the strongest higher education systems in the world.
Based on the political climate, the business environment, and the educational system, it was an optimistic time in China. But China's politics took a decidedly military turn with a series of leaders cut from the same cloth—Yuan Shikai, Chiang Kai-shek, Zhu De, and Mao Zedong. This military turn set China back, but it also provided the foundation for China's global strength; China could not be defeated by Japan in World War II and could not be intimidated by the Soviet Union. Ultimately, China's first golden age was undone by the Japanese invasion, the Communist rebellion, and above all, the ruinous policies of the first 30 years of the People's Republic. China's entrepreneurs were forced underground and overseas, and China's progressive universities were swept away.
"At a time when the rest of East Asia prospered, China went backward." — William C. Kirby
Other Important Points
• Three Shanghais. Within the borders of Shanghai are three different Shanghais: 1) the old walled city of Shanghai, which housed some 400,000 Chinese when Westerners first settled in Shanghai in the 1840s; 2) the Bund, which became a major financial hub; and 3) the new Shanghai, which is the Shanghai of the future.
• China's constitution. In the 1910s, Chinese President Yuan Shikai asked Harvard's President Eliot to recommend an advisor to help draft a new constitution for China. Eliot recommended Frank Goodnow, the leading political scientist of the day. Goodnow drafted two constitutions: the first made Shikai president for life and the second would have made him emperor, had he not died first.
• 180 degrees. About 100 years ago, America sold textiles and clothes to China and Americans bought Chinese railway bonds, which were viewed as good investments but turned out to be worthless. Today, Americans buy their textiles and clothes from China and China buys American treasury bonds, which hopefully fare better than the Chinese railway bonds.
The changes in consumption in the huge new middle class are changing entire industries, such as agriculture. There are major changes in how food is grown, distributed, and sold—without using more land. This includes the dairy industry and the growing Chinese wine industry.
• An education dream. No story is more central to China's future than education. (Chinese families will delay any purchase in order to fund education.) China is rapidly building massive, modern university campuses, such as Chongqing University. These universities will be a welcome challenge to American universities and other leading global schools.
"It is this area [education] that I think will clearly determine whether or not this will be China's century." — William C. Kirby
Harvard shares China's dream of training and educating future global leaders. This is seen through the fact that each of Harvard's schools has important relationships in China. Harvard and China share common educational challenges. Among them are to:
– Not simply train, but educate the whole person.
– Educate a person not simply as a citizen of a country, but as a citizen of the world.
– Measure and value not only research, but teaching and inspiration.
– Extend the promise of higher education beyond the upper and middle classes.
– Determine the proper level of governance and autonomy so universities can serve a broad public purpose.
OVERVIEW
The Harvard Center Shanghai's state-of-the-art bilingual facility expands access to the HBS experience for non-English-speaking executives in China. Featuring high-tech equipment and world-class interpreters, the facility breaks down language barriers for an uncompromised HBS classroom experience. The case method, the fast-paced exchanges, and the cold calls are all there.
CONTEXT
Professor McFarlan shared his experiences spearheading HBS's executive education ventures in China and described the state-of-the-art, HBS-style, bilingual presentation space at the new Harvard Center Shanghai facility. Participants then experienced the facility for themselves as Professor Li Jin led discussion of an actual HBS case.
SPEAKERS
F. Warren McFarlan, Albert H. Gordon Professor of Business Administration, Emeritus, Harvard Business School
Li Jin, Associate Professor of Business Administration, Harvard Business School
CHINA—DYNAMIC, IMPORTANT AND DIFFERENT
• An hour's class requires a team of three translators, each working 20-minute stints.
• The room has double-sized blackboards: half for Chinese, half for English.
"By the time you're 15 minutes into it, you literally forget that you're not in an English-speaking classroom." — F. Warren McFarlan
Despite all the high-tech equipment, people are the critical link. While high-tech equipment makes the facility translation-capable, it is the people—faculty and translators—who are most critical to delivering an uncompromised HBS educational experience. A bilingual presentation is quite labor-intensive behind the scenes:
• Two professors are necessary for blackboard notes in both languages; they need to confer in advance to coordinate plans.
• Slides must be translated in advance. Getting translations done in time requires coordination.
• During class, professors must become skilled at realizing who is speaking by the red lights since there is no voice change for translated material.
Complicating this a bit is a 15-second lag time before the translation arrives.
• Nothing less than expert translation skills will do.
"The critical link lies behind the glass walls; you must have world-class interpretation simultaneously." — F. Warren McFarlan
The bilingual facility dramatically expands access to the HBS educational experience. Chinese executives who would not have been able to experience HBS are now able to do so, thanks to the presentation space at the Harvard Center Shanghai. Its capabilities were demonstrated by a recent program at the Center. It consisted of 66 CEOs, 65 of whom didn't speak English. Without this facility, these individuals would not have been able to participate in this HBS program.
Since 2001, HBS and its Chinese business school partners have provided bilingual executive education in China. Harvard Business School has offered executive education programs in China in partnership with leading Chinese business schools since 2001. Professor McFarlan spearheaded the first co-branded program with Tsinghua University (at the request of HBS graduate and former U.S. Treasury Secretary Henry Paulson when he was CEO of Goldman Sachs). Two-thirds of the instructors in this seminal program were HBS faculty; one-third were Tsinghua professors trained in HBS methods. The program was bilingual from day one, with classes conducted in both Chinese and English (real-time translation of classroom exchanges was transmitted by earphones) and with HBS case studies that focused on Chinese companies and were available in both languages.
Harvard's bilingual classroom breaks down language barriers to deliver an uncompromised classroom experience. Creators of the HBS/Tsinghua program knew that only real-time translation would allow the fast-paced, interactive experience of an HBS classroom to be replicated in a bilingual setting. "Sequential translation wouldn't work," said Professor McFarlan. "The pace of the class would slip; you'd lose 50%." In the Harvard Center Shanghai's state-of-the-art bilingual facility, content lost in translation is no greater than 5%–10%.
The presentation space looks much like its classroom counterpart in Boston, with some critical differences:
• At each seat are headphones with settings for English and Chinese. Professors who aren't bilingual wear earphones as well.
• Students desiring (or called upon) to speak push a button, which flashes a red light, telling translators at the back of the room whom to tune into.
• Teams of expert linguists deliver immediate translations of the exchanges to listeners' earphones.
With the need to bridge language barriers in education and business only rising in our complex, globalized world, facilities with built-in translation capability are the wave of the future. Despite their high price tag ($3 million), many more are bound to be built.
"There isn't another classroom in China that is like this." — F. Warren McFarlan
Case Discussion
Professor Li Jin's class discussion featured a 2007 HBS case that was previously used in the course Doing Business in China and is now taught to all first-year HBS students in the required Finance course. The case is about three competitors in China's new media advertising market. It focuses on the decisions that altered their market positioning and led to their ultimate consolidation.
The case described unpredictable actions and unforeseeable events that highlighted the different ways that CEOs in China might think about their companies (e.g., as legacies to be built and nurtured, or as pigs to be fattened and sold). A CEO's mindset might be based on whether the CEO was an entrepreneur/founder or a professional manager brought in to run a company. The case also demonstrated how unpredictable events in the quickly evolving Chinese market can open windows of opportunity that are soon slammed permanently shut. Those who act quickly, anticipate the future moves of others, and view situations in nontraditional ways can be rewarded, while those who sit tight will lose ground.

OVERVIEW
Without realizing it, societies around the world have drifted from market economies into market societies. Market-based thinking has permeated all aspects of society, affecting societal norms in areas of life not traditionally influenced by markets. The problem: when a society decides that a market is acceptable in a particular area—that certain goods/services may be bought and sold—it is deciding that the goods/services can be valued as commodities. But some aspects of life are damaged, degraded, or corrupted if they are commoditized. Missing in today's market societies is attention to the moral limits of markets. Societies need to decide which social norms are worth preserving and should not be part of a market.

CONTEXT
Professor Sandel described the growing role that markets play and asserted that markets need to have moral limits.

SPEAKER
Michael Sandel, Anne T. and Robert M. Bass Professor of Government, Faculty of Arts and Sciences

THE MORAL LIMITS OF MARKETS
• Social services: For-profit schools, hospitals, and prisons are proliferating as market-based approaches come to these areas. A trend in education is paying children to read. Concierge medical services in the United States and scalping of doctor appointments in China create markets for access to medical services. In 2000, India legalized commercial surrogacy, and a market for low-cost, outsourced providers is developing.
• The environment: The idea of tradable pollution permits and carbon offsets creates markets for polluting.
• Immigration: Proposals have been made to make a market for immigration by selling the right to immigrate to America for perhaps $50,000 or $100,000. Another idea is a market for refugees. Countries would each have a quota, which they could sell or trade.
Markets such as these will inevitably affect social norms, often in unexpected ways. For example, if children are paid to read, will they become conditioned to read only when paid, rather than for the intrinsic value of reading? Or, if polluters can simply trade pollution permits, does that make pollution acceptable and fail to motivate behavior change?
"Pure free-market economists assume that markets do not taint the goods they regulate. This is untrue. Markets leave their mark on social norms." — Michael Sandel
Society must ask, "What should be the moral limits of markets?" The examples of market-based approaches are unsettling. Even if the parties involved in a market-based transaction consent (which is not always the case; in some instances they are coerced), these market-based ideas are distasteful.
Most people find the idea of a refugee market distasteful, even if it helps refugees. A market for refugees changes a society's view of who the displaced are and how they should be treated. It encourages market participants to think of refugees as a product, a commodity.
The role of markets has grown in our lives. The world has become infatuated with markets. In recent decades, societies around the world have embraced market thinking, market institutions, market reasoning, and market values. The focus on markets is based on the abundance created by markets. Markets are powerful mechanisms for organizing productive activity, and they create efficiency. Often overlooked is the fact that markets can affect society's norms.
The application of market thinking to non-market areas of life assumes that markets are merely mechanisms, innocent instruments. This is untrue. Markets touch—and can sometimes taint—the goods and social practices they govern. An example comes from a study of childcare centers. To solve the problem of parents coming late to pick up their children, centers imposed a fine for late pickups. The social norm had been that late parents felt shame for inconveniencing the teachers who had to stay late. When this norm was replaced with a monetary penalty, a market for late pickups was created—and late pickups increased. Parents now considered a late pickup an acceptable service for which they could simply choose to pay. The presence of the market changed the societal norm.
"The market is an instrument, but it is not an innocent one. What begins as a market mechanism can become a market norm." — Michael Sandel
Market-based thinking and approaches have the potential to affect social norms in many areas of life that were traditionally non-market domains. These include:
• The human body: Black markets exist for organ sales. Some marketers are now paying individuals for tattooing the company's logo on their bodies. Infertile American couples are outsourcing pregnancy to low-priced surrogates in India.
Other Important Points
• Collaborative education. At www.justiceharvard.org, anyone can attend Professor Sandel's popular Justice class. This virtual classroom features videos of lectures including student exchanges, the reading list, discussion guides, and a discussion blog. The site had more than one million viewers in its first few months. Translated versions appear on Chinese websites (which is fine with Professor Sandel if they are accurate). Experiments in virtual classrooms offer opportunities for collaboration between Harvard and university partners in China. Live video-linked classrooms would create a "global public square" permitting discussions in real time. Such discussions would illuminate East/West similarities and differences, leading to more nuanced understanding of both cultures. It is often assumed that the two cultures' conceptions of justice, liberty, and rights are fixed, but the reality is more complex. Rich historical traditions contain multiple voices and contrary viewpoints within them. A virtual classroom enabling interaction between students in China and America would enable fascinating comparisons of ethical and philosophical thinking within cultures as well as between them.
• Learning and teaching.
China has long been a "learning civilization"—evolving through engaging with other civilizations and cultures—while America has been a "teaching" (code for "preaching") civilization. America could benefit from incorporating China's learning mindset.
When a society embraces a market approach and decides that certain goods may be bought and sold, it is deciding that those goods can be valued as commodities.
"Some of the good things in life are damaged or degraded or corrupted if they are turned into commodities." — Michael Sandel
Thus, deciding to create a market and to value a good—whether that is health, education, immigration, or the environment—is not merely an economic question. It is also a political and a moral question. Societies must confront markets' moral limits. Societies, however, often fail to grapple with such moral questions. This causes market economies to drift imperceptibly into market societies, without any deliberate decision that they do so.
"Because we haven't had that debate about the moral limits of markets, we have drifted from having a market economy to being a market society." — Michael Sandel
The world's market societies need to recognize the moral limits of markets and to define societal norms worth preserving. Case by case, the moral meaning of goods and services must be figured out and the proper way of valuing them decided. Ideally, this should happen collectively, via public debate. Much thought needs to go into how to keep markets in their proper place.
"Only if we recognize the moral limits of markets and figure out how to keep markets in their place can we hope to preserve the moral and civic goods that markets do not honor." — Michael Sandel

OVERVIEW
Currently, there is tremendous interest in China and Chinese culture. As China grows in wealth and influence, those who do business and study in China want to learn and understand the culture. The reality, however, is that China does not have a singular culture that can easily be understood. China doesn't have "a culture"; it has "culture." Elements of China's culture include its history, poetry, literature, art, food, and contemporary culture, including movies, television, fashion, and books. It also has cultured people who are educated and worldly. Those who believe the various aspects of China's culture are all based on history are misinformed. All aspects of China's culture and its societal practices (including business practices) are incredibly dynamic and constantly changing.

CONTEXT
The speakers discussed why it is so difficult to try to define Chinese culture and offered perspectives on China's cultural history and modern cultural practices.

SPEAKERS
Peter K. Bol, Charles H. Carswell Professor of East Asian Languages and Civilizations, Faculty of Arts and Sciences; Director of the Center for Geographic Analysis, Institute for Quantitative Social Science
Xiaofei Tian, Professor of Chinese Literature, Faculty of Arts and Sciences

WHO CARES ABOUT CHINESE CULTURE?
In the sixteenth century, sea transportation brought with it the opportunity for the exchange of ideas between Europeans and Chinese, creating links between the East and West that continue today. The Chinese civil service exam, for example, became the basis for the British civil service exam, eventually serving as a model throughout the Western world.
Since the late nineteenth century, China has been actively absorbing Western influences. It is worth noting that a value not considered native Chinese at the time it is introduced may eventually become part of the criteria used to describe what is Chinese today.
Globalization creates the need to maintain a sense of native identity. As the forces of globalization grow, there is a strong impulse in China to maintain a sense of local and native identity. Chinese citizens are brought together by a real sense of belief that they share a common identity and culture. But there is some danger in this way of thinking. By basing this sense of national identity on perceptions about the country's cultural past, the Chinese are relinquishing their claim to the present and the future. If all modern culture is bound to what is considered foreign and everything native belongs to the ancient past, Chinese cultural tradition loses the very elements that make it dynamic. China can no longer afford to be self-absorbed and must allow the knowledge of world cultures to become part of Chinese culture.
"A point of danger is that this way of thinking leads to the ossification of the cultural past, so the vibrant, dynamic, complex cultural tradition of China is reduced to a one-dimensional monolithic entity." — Xiaofei Tian
Chinese culture is not easily defined. Wide diversity and the lack of a central, contemporary Chinese "culturescape" make defining Chinese culture in a singular way difficult. Chinese culture is a mixture of many elements, both native and foreign, that are constantly evolving. From an ideological standpoint, Confucianism is considered by many as the core of Chinese culture, yet this is a flawed premise. Although Confucianism is definitely a part of China, it is only one part of a much larger picture. It also could be argued that Chinese culture is embodied by its traditional poetry and the aesthetic experience it elicits. Yet this notion of Chinese culture is incompatible with the dogma that exists in the Confucian Classics. Aspects of culture in China can be found by studying China's history, literature, religion, food, and popular culture, including movies, television, books, and fashion. But as the diversity of each of these areas demonstrates, there is huge variety, constant change, and no singular definition of culture in China.
"There is no China culture; there's culture in China." — Peter Bol (corroborated by Xiaofei Tian)
There is a difference between culture and a cultured person, whether Chinese or American. The values that a society's culture promotes do not necessarily reflect the values that a cultured person holds, such as being educated and worldly. For a cultured person, culture matters, and debates over the hopes and best ideas for society are linked to actual practices and how people live.
Chinese culture is a dynamic, continually evolving tradition. The Chinese cultural tradition is vibrant, dynamic, complex, and ever changing. In the fourth, fifth, and sixth centuries, the translation of Buddhist texts from Sanskrit into Chinese led to an incredible cultural transformation in China.
Other Important Points
• University offerings. Elite Chinese universities have begun offering liberal arts education programs, allowing students to take courses across departments.
• American managers.
Few American managers speak Chinese, and most are ignorant about Chinese history and practices.
• A negotiating culture. China has much more of a negotiation culture than the United States, where people are more accepting of rules and authority.
Schools of higher education must educate students about the history and tradition of different cultures. Many of today's college students are products of diverse transnational backgrounds; they are multilingual and have a global perspective. In addition, the new professional managerial class conducts business on a global basis.
"This new global elite needs a new form of linguistic and symbolic capital that is transnational, so world languages, world literatures, and world cultures must be offered at higher education institutions." — Xiaofei Tian
To fit with this reality, schools of higher education must offer courses that teach the comparative history and tradition of different cultures, giving students the opportunity to study, examine, and interpret different cultures in the new global context.
"The challenge as China grows in wealth and power is to make the next generation of cultured students aware that China's cultural heritage is part of humanity's cultural heritage." — Peter Bol
Attendees commented that they understand the difficulty of defining "the culture of China." However, as individuals and companies doing business in China, they still expressed a desire to better understand China. The speakers distinguished between "common practices" and a deep societal culture. With effort, it is possible to gain some degree of understanding about common practices. However, as with culture, practices are constantly changing. Learning about the country and its practices can be facilitated by learning the language, learning about the country's history, and reading the country's literature.

OVERVIEW
Crises often highlight shortcomings in governments' ability to safeguard people from harm and to contain fallout from unforeseen scenarios. Retrospective analysis provides rich learning opportunities for addressing shortcomings and preventing or mitigating similar damage in the future. Governments have as much to learn from other nations' crisis experiences as they do from their own.

CONTEXT
Professor Ferrell discussed implications of the financial crisis for regulatory policy decisions facing governments. Professor Howitt discussed lessons in crisis management from recent disasters in the United States and China.

SPEAKERS
Allen Ferrell, Harvey Greenfield Professor of Securities Law, Harvard Law School
Arnold Howitt, Adjunct Lecturer in Public Policy and Executive Director of the Roy and Lila Ash Institute for Democratic Governance and Innovation, Harvard Kennedy School
Michael B. McElroy, Gilbert Butler Professor of Environmental Studies, Harvard School of Engineering and Applied Sciences

MANAGING CRISES IN CHINA
• Having the right capital requirements. Reforms in capital requirements might include mechanisms allowing institutions to draw down capital during a crisis versus having to raise it mid-crisis.
• Having resolution mechanisms that address moral hazards. Needed are mechanisms to wind down financially insolvent institutions that ensure creditors experience losses—so there is an incentive to avoid undue risk in the future.
• Having regulators trained in both economics and law. The SEC has expertise in law but lacks expertise in economics; the Federal Reserve is strong in economics but lacks deep expertise in regulation. Both are needed.
• Minimizing the role of credit rating agencies in bringing complex products to market. U.S. securities law enshrined the positions of the incumbent ratings agencies, forcing investment banks to use the agencies to rate complex structured products that the agencies lacked the expertise to understand. These regulations should be repealed.
"I would highly encourage China and other countries to avoid the U.S. regulatory treatment of credit rating agencies." — Allen Ferrell
• The systemic significance of non-deposit-taking institutions, such as Fannie Mae and Freddie Mac, Bear Stearns, and Lehman.
• The instability of the repo market as a financing source. The crisis has taught much about how reliance on the repo market (i.e., overnight lending) affects leverage in the system—both the degree of leverage and how it interacts with capital.
Key Takeaways (Disasters)
Recent disasters in the United States and China highlight both nations' shortcomings in crisis management. This century, both the United States and China have been affected by traumatic events. The United States lived through the 9/11 terrorist attacks and anthrax scares as well as Hurricane Katrina; China had the SARS epidemic, the Wenchuan earthquake, and the blizzards of 2008.
Key Takeaways (Financial Crisis)
U.S. regulators' focus is misplaced: the financial crisis was about standard banking activities, not proprietary trading. Looking at the composition of the U.S. banking sector's losses and write-downs stemming from the financial crisis is instructive, holding lessons for regulatory policy. The breakdown:
• More than half (55%) of losses came from traditional lending activities: 34% from direct real estate lending and 20% or so from other kinds of direct lending.
• About 31% of losses resulted from banks' exposures to securitized products (not from securitization processes per se). From a regulatory standpoint, a bank's exposure to its own products is a good thing, giving it "skin in the game."
• Losses from proprietary trading were relatively trivial at only 2%.
• A similarly small portion of crisis-related losses came from banks' private equity activities (about 1%).
Despite the focus in the United States on proprietary trading as an area in need of reform (e.g., the Volcker proposal), the financial crisis had little to do with proprietary trading. The vast majority of banking losses (85%) reflected positions that soured for various reasons in the standard bank activities of lending and securitization.
"The moral of this story is that the losses were driven by the traditional activities of the banks . . . which is potentially relevant to thinking about Asian regulation." — Allen Ferrell
With Asian banks heavily involved in traditional banking, the crisis holds regulatory lessons relevant for them. The Asian financial sector is heavily involved in direct lending and, at this time, less so in securitization. (Hopefully, given the importance of securitization for funding, that will change.) Given this business mix, the U.S. financial crisis holds relevance for Asian financial sector regulation going forward.
Some lessons include the importance of:
• Awareness of the critical interdependency between local and national capacities. More than 90% of people rescued from the Wenchuan earthquake rubble were saved by family or friends, not by the central government's late-arriving responders. Neither national nor local governments can manage crises on their own. Needed are management systems capable of rapid but decentralized support and connections between national and local capacity.
• Stronger local capacities. Localities need to improve their capability to handle as much of a disaster's effects as possible, since outside aid is often slow to arrive. Once it does, local and national responders need to work closely together.
• Faster national capacities. Central governments should focus on accelerating their responses and improving their ability to operate in a decentralized fashion.
"[We need to] think about the roles of local government and remote aid to prepare management systems capable of a rapid but decentralized surge of support." — Arnold Howitt
Other Important Points
• Shadowy bailout motivation. Transparent counterparty data is lacking to assess the systemic risk that would have resulted had AIG failed. Goldman Sachs says it didn't have significant counterparty exposure, having hedged its AIG positions; whether that was the case for other counterparties is unclear. The Inspector General's bailout report suggests a rationale was protecting AIG shareholders—a less appropriate motivation than mitigating systemic risks.
• Short shrift for recovery preparation. There are three kinds of disaster preparation: 1) prevention/mitigation (e.g., building codes); 2) emergency response; and 3) recovery. Preparing for recovery is often overlooked. As a result, money is thrown at recovery immediately after an event, and often wasted at great social cost.
The two governments' responses highlight shortcomings in crisis management, including the ability to prepare for emergencies, manage events during crises, and recover from them. China and the United States have structural similarities that make their problems of disaster management similar, including: 1) large and diverse land areas; 2) multilayered governments; and 3) high regional variation in emergency response capabilities. These factors contribute to the chaos in disaster situations. Local resources are often overwhelmed. The arrival of national resources on the scene is delayed by travel time; once there, outside personnel lack local awareness, slowing rescue efforts. Agencies not accustomed to interacting don't know how to collaborate and cooperate. Lack of coordination causes inefficiencies; confusion reigns; the delays carry a social cost.
Crisis management systems should reflect local/national interdependencies and be capable of rapid, decentralized support. Governments face diverse crisis threats: natural disasters, infrastructure or technology system failures, infectious diseases, and purposeful harm. Preparing for emergency response is difficult for governments; crisis management is unlike governments' typical activities. The work is crucial, involving urgent responses to high-stakes situations that come without warning in unknown locations. Quick and effective action is needed; responders can't afford the time to learn as they go along. Emergency preparation requires tough tradeoffs between financial cost and resource effectiveness.
Capacity must be kept in reserve so it can be utilized effectively with little notice; yet governments don't want to spend heavily on expensive resources to prepare for contingencies that might not occur. The ability to get resources to distant disasters as quickly as needed might be sacrificed for reasons of cost. Effective emergency preparedness requires:
• Crisis management systems that facilitate collaboration. Organizational and communication systems should be in place before a disaster strikes, should facilitate collaboration and cooperation among agencies, and should have flexible processes to allow for improvisation.

OVERVIEW
Corresponding with China's economic growth has been an amazing increase in life expectancy and a significant improvement in the public health care system, with childhood vaccinations providing just one important example. However, China still faces enormous health care–related challenges. There are huge disparities in access to care and the quality of care received; the current payment system is largely out-of-pocket and many people can't afford care; and chronic diseases and mental health issues are on the rise. The Chinese government, well aware of the situation and issues, is undertaking the largest, most ambitious health care reform program in the world. The goals of this program include providing basic health insurance coverage for at least 90% of the population by 2011 and establishing universal access to health care by 2020. Through both long-term research projects and numerous collaborative programs, Harvard has played and continues to play an important role in helping to shape China's health care policies and practices.

CONTEXT
The panelists reviewed linkages between Harvard and China's health care sector and discussed the monumental transformation taking place in China both in health care and in society.

SPEAKERS
Barry R. Bloom, Harvard University Distinguished Service Professor and Joan L. and Julius H. Jacobson Professor of Public Health, Harvard School of Public Health
Arthur M. Kleinman, Esther and Sidney Rabb Professor of Anthropology, Faculty of Arts and Sciences; Professor of Medical Anthropology and Professor of Psychiatry, Harvard Medical School; Victor and William Fung Director, Harvard University Asia Center
Yuanli Liu, Senior Lecturer on International Health, Harvard School of Public Health

CHINA'S NEWEST REVOLUTION: HEALTH FOR ALL?
Since 1949, China has made tremendous progress in improving the health of its citizens, but huge challenges remain. Prior to 1949 there was essentially no functioning health care system in China. There were widespread famine, epidemic disease, infanticide, and other catastrophic tragedies. Approximately 20 million Chinese were killed in the war with Japan between 1937 and 1945, and 200 million Chinese were displaced due to World War II and the country's civil war. While the first part of the 20th century saw dramatic improvement in life expectancy in much of the world, in China it went from 25 years in 1900 to just 28 years in 1949. During this time, there also were enormous disparities between rich and poor, and between urban and rural. (While disparities exist today, they pale in comparison to the disparities prior to 1949.) But beginning with China's liberation in 1949, health became a national priority.
Dr. Bloom recounted a conversation with Dr. Ma, a Western physician who played a huge role in organizing public health in China. When asked how such a poor country could make health such a priority, Dr. Ma said, "I thought we fought the Revolution to serve the people." In public health terms, serving the people means: 1) keeping people healthy and preventing disease, for example through clean water and vaccinations; 2) providing access to affordable, high-quality health care; and 3) providing health security and equitable distribution of health services.
Between 1949 and 2007, life expectancy increased from 28 years to almost 73 years. This reflects an increased standard of living, increased urbanization, and the development of a public health system focused on key basics such as childhood immunizations. China immunized hundreds of millions of children, preventing many deaths before the age of five.
Harvard and China have a long, rich history of working together in the health care arena. In the aftermath of SARS, which wasn't handled well by China, researchers at the Harvard School of Public Health did epidemiologic modeling that showed how to stop the epidemic. After presenting the findings to top people in the Ministry of Health, including China's Minister of Health, Harvard was asked to help develop a program to avoid the outbreak of a catastrophic infectious disease. This program has involved providing high-level executive training to more than 300 leaders in China's central and 31 provincial Ministries of Health. The program recently has been reoriented, with significant input from many in the Harvard medical community, to provide training on managing hospitals.
This post-SARS program built on significant existing linkages between Harvard and China. A 30-year research study of the respiratory function of Chinese workers in textile mills and a 20-year study on how to provide health insurance for people in rural China have had a huge influence on policy. The School of Public Health has intensive programs in which students examine some aspect of the medical system and write papers about their observations, which have drawn much interest from China's Ministry of Health. In addition, Harvard has held two forums involving multiple Harvard faculty members on subjects of interest to Chinese leaders, such as poverty alleviation. Dr. Kleinman, who heads Harvard's Asia Center, said that across Harvard there are more than 50 faculty members who work principally on China, and the projects involving China at Harvard Medical School and other areas throughout Harvard are too numerous to count.
"The engagement with China across our university is profound and incredibly broad." — Arthur M. Kleinman
China is embarking on the most ambitious health care reform in the world. The Chinese government is aware of the health care challenges the country faces and has undertaken a remarkable health care reform process. This began in 2005 with the passage of a rural health improvement plan, which concentrated the country's focus on improving the health care system and the health of the people in China.
"This is the first time since the founding of the People's Republic that China has begun developing a long-term strategic plan for its health sector." — Yuanli Liu
The Harmonious Society Program followed. This program enlisted 14 ministries and a slew of think tanks to make recommendations on health care reform. In an extraordinary act for China, a draft of the reform plan was posted on the Internet for one month, and there were more than 30,000 responses. The government listened and responded by making 190 changes. The result is a serious action plan and a significant investment to address some of China's long-term health care challenges.
"The most radical, extensive, far-reaching plan for health reform of any country in the world has been committed to by the government of China . . . it is, I think, the most exciting development in health reform anywhere in the world." — Barry R. Bloom
This plan, which was announced in April 2009, has a goal of providing basic health insurance for at least 90% of the population by 2011 and establishing universal access to health care by 2020.
The focus on health in China is part of the reassessment of culture, values, and norms taking place in the country. In the era of Maoism, when China's public health system was being built, the state regarded the individual as owing his or her life to the state and the party. In the current period of China's economic reforms, there has been a shift. Now the view is that the state owes the individual a good life, or at least a chance at a good life.
While tremendous progress has been made, significant challenges still remain. These include:
• The system of paying for care and the cost of care. Currently, 60% of health care in China is paid for by individuals on an out-of-pocket basis. This is the least efficient, most expensive way to pay for care, and for many people it makes health care unattainable. The largest complaint of the Chinese population is that they cannot afford health care, and many people forego being admitted to the hospital because they are unable to pay. The cost of health care is also the cause of about 15% of all bankruptcies in China.
• Incentives. The current payment system involves government price setting for many services, such as hospitalization fees. The result is that health care providers overuse and overcharge in other areas, like drugs and tests. Drugs represent 45% of health care spending in China, compared to about 10% in the United States. (These drugs, which are often of questionable quality, are in many instances sold by doctors, for whom they represent a significant source of revenue—and a major conflict of interest. A prime example is saline injections, which many patients now expect and demand, even though they have no medicinal value.)
• Disparities. There remain significant disparities in the access to and quality of care between rich and poor, and urban and rural. The gaps are large and are increasing.
• Infectious diseases. About half of all Hepatitis B cases are found in China, as are about one-third of TB cases. The mobility of the population makes it easier than ever to spread diseases, as seen through the HIV-AIDS epidemic and the spread of H1N1.
• Chronic diseases. As China's economy has developed, a consequence has been increased rates of chronic diseases, which are responsible for more than 80% of all deaths. The increase in chronic disease—including diabetes and cardiovascular diseases—is related to people living longer, high pollution, and behaviors such as smoking.
• Mental health issues. As China has become more prosperous, there have been increases in all categories of mental disorders, anxiety disorders, depression, suicide, substance abuse, and STD rates.
Along with this shift in the roles of the state and the individual, individual attitudes, behaviors, and morals have changed. There is a rise in materialism and cynicism, and a breakdown in Confucian values. There is a rise in nationalism, deepening corruption, an almost caste-like distinction between rural and urban, a distrust of physicians, institutions, and agencies, and a concern with public ethics. There also is a high divorce rate, a high suicide rate, and a sexual revolution underway. A boom in self-help books and in psychotherapy also is taking place. It is in this environment that health care reform is happening. The process of reforming health care is about more than just health care; it is part of a society undergoing transformation. People are thinking of themselves and their lives differently and have different expectations of the government.
Other Important Points
• One child. The changes going on in China include a reassessment of the country's one-child strategy.
• Health data. In previous years, the quality of health data in China was questionable, but new data systems have been put in place and have significantly improved the data being collected.
• Qualified health minister. China's current health minister is an internationally regarded physician who doesn't seem to be very political. This reflects a trend of filling key positions with technically competent people.

OVERVIEW
Technological innovations are changing and will continue to change every aspect of how we live, work, and learn. They are changing how people communicate and how we spend our time. Among the most exciting innovations are those in mobility, cloud computing, social networking, platforms, location-based services, and visual search. Increasingly, China is playing a key role in today's technology innovations.

CONTEXT
Professor Yoffie discussed the innovations in technology that are having a huge impact on how we do things, including in education.

SPEAKER
David B. Yoffie, Max and Doris Starr Professor of International Business Administration; Senior Associate Dean and Chair of Executive Education, Harvard Business School

INNOVATIONS CHANGING THE WORLD: NEW TECHNOLOGIES, HARVARD, AND CHINA
The iPad will ultimately be a highly disruptive device with the potential to change how media are disseminated and consumed; this includes potentially changing how textbooks are delivered. These and other emerging technologies will affect how students study and how professors do research. The traditional ways of disseminating knowledge through books and articles will need to evolve.
Cloud computing is changing how information and applications are stored and delivered. Through the remote delivery of computing power, storage, and applications, cloud computing is quickly changing how information is delivered. From a corporate standpoint, the economics of cloud computing are remarkable.
Information delivered through huge data centers built by companies such as Amazon and Google cuts costs by a factor of seven. This fundamentally alters the IT cost equation for all companies, regardless of size. Applications that have historically been hosted on in-house servers—from customer relationship management (CRM) to enterprise resource planning (ERP)—are now moving to outsourced cloud-hosted servers and data centers. A leading example is Salesforce.com.
"The economics of cloud computing are extraordinarily compelling . . . no matter what size company you are, can you imagine the possibility of cutting your [IT] cost by a factor of seven?" — David B. Yoffie
On the consumer side, cloud computing is and will be everywhere: in music, video, applications, and photos. It is likely that within 18 months, instead of our personal computers storing our music, our libraries will have moved to the cloud. User concerns about security are the largest drawback to cloud computing. This is a critical issue that needs to be addressed on an ongoing basis.
Innovations occur when platforms are developed on which applications reside. In addition to changing how data is delivered, cloud computing also is becoming a "platform." This means it is the basis for providing a set of applications that deliver ongoing value.
HBS is creating the future by leveraging the Harvard Center Shanghai facility and emerging technologies. Harvard University and Harvard Business School have an explicit strategy of becoming truly global institutions. Establishing the Harvard Center Shanghai facility builds on Harvard's long-standing involvement in Asia. It creates an opportunity for deeper engagement and collaboration with the country that is the fastest-growing producer of technology in the world. HBS views this as an opportunity to accelerate innovation in management and technology, and to collaborate on the technological shifts that are changing the way we work, study, and socially interact.
Powerful mobile computing is changing how people use technology. The massive shift of Internet use to handheld devices is fundamentally changing technology and the way it is used. The shift away from PC-centric computing to handheld computing is made possible by Moore's law, which holds that chip processing power will double roughly every 18 to 24 months while costs are halved. (The law has held since Gordon Moore formulated it in 1965. Today an Intel chip the size of a fingernail has 2.9 billion transistors and does a teraflop of processing per second.) A short sketch at the end of this section illustrates how quickly that doubling compounds.
"This creates the opportunity to put a supercomputer into your hand." — David B. Yoffie
This geometric increase in processing power has led to the development of powerful handheld devices. For example, the 2009 iPhone has technical specifications identical to those of the iMac, the most powerful desktop computer in 2001. Today, handheld devices allow us to do things on a mobile basis that we previously couldn't do.
Beyond phones are other types of mobile devices. eReader devices such as Amazon's Kindle and Apple's iPad are creating a rapidly growing eBook market. Now available in a hundred countries, eBooks grew 100% in 2009 alone. At Amazon, for books available in electronic form, 50% of sales are in eBook form. This past Christmas, the company sold more eBooks than hard copy books.
• Location-based services.
These services, such as Yelp and Urbanspoon, identify your location and offer information about local restaurants, hotels, and other services. An application called Foursquare allows a person to see where his or her friends are. Location-based services also can provide navigation and will ultimately deliver advertising on a location basis.
• Visual search. An example of visual search is a new phone-based application offered by Google called Google Goggles. It uses pictures to search the web and immediately provide information. For example, if you take a picture of a restaurant, it will give you reviews of the restaurant before you walk in. Visual search has the potential to significantly affect how students learn and interact with their professors, challenging traditional methods of engagement.
Other Important Points
• Predicting the future. It is impossible to predict the future. Experts in 1960 offered numerous predictions about life in 2000 that failed to come to fruition. One prediction some experts got right was the linkage of computers (essentially the Internet). One prediction that fell far short was of 200,000 computers in the United States; the actual number is around 300 million.
• Internet traffic. Cisco projects that Internet traffic will grow 66 times by 2013, a compounded annual growth rate of 130%.
• Generational Internet use. In the United States, the portion of senior citizens who use email (91%) is comparable to the baby boomers (90%), though 70% of boomers shop online versus just 56% of seniors.
• Texting volume. The average U.S. teenager sends almost 2,300 text messages per month. In China, during the week of the Chinese New Year, 13 billion texts were sent.
• People will pay. Some people have the perception that everything on the Internet is free, but that is not the case. The success of iTunes, where songs are sold for $0.99, shows that people will pay when something is priced correctly.
The iPhone is a platform. There are now 140,000 applications for the iPhone, which have been downloaded more than 3 billion times; 1 billion downloads were made just in the fourth quarter of 2009. Facebook is a platform for which 350,000 applications have been written and downloaded half a billion times. In addition, people are looking at the following as potential platforms:
• Cars. Ford plans to incorporate iPhone applications in its next generation of vehicles.
• Television. TV will be a huge platform of the future, serving as a basis for social media, social interaction, and social networks.
• Cities. New York City has decided to become a platform. The city held a competition, inviting the public to develop applications using raw municipal data. One of the winners created an application that allows you to hold up your phone; it automatically figures out where you are and gives you directions to the next subway stop.
"Learning how to play with all these platforms may be absolutely critical to the long-run success of any company, because these platforms are becoming ubiquitous. It's a new way of thinking about the interaction between a supplier and a customer." — David B. Yoffie
Social networks are altering social patterns and how people spend their time. Social networks have global reach, with more than 830 million users. Facebook (the dominant player outside of China) and YouTube have replaced old Internet companies such as Yahoo and Microsoft. Facebook users spend 90 billion minutes per month on the site. In China, Tencent has been a successful social networking company.
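To make the compounding behind the Moore's law figure cited above concrete, here is a minimal Python sketch. It is illustrative only: the 18- and 24-month doubling intervals come from the text, but the eight-year comparison window (the 2001 iMac versus the 2009 iPhone) and the function names are assumptions made for this example.

# Minimal sketch: how an 18-24 month doubling cadence compounds over time.
# The doubling intervals come from the Moore's law figure cited above; the
# eight-year window (2001 iMac vs. 2009 iPhone) is an illustrative assumption.

def doublings(years: float, months_per_doubling: float) -> float:
    """Number of doublings that fit into `years`."""
    return years * 12.0 / months_per_doubling

def growth_factor(years: float, months_per_doubling: float) -> float:
    """Multiplicative increase in processing power over `years`."""
    return 2.0 ** doublings(years, months_per_doubling)

if __name__ == "__main__":
    for months in (18, 24):
        factor = growth_factor(8, months)
        print(f"Doubling every {months} months for 8 years: about {factor:.0f}x")
    # Prints roughly 40x (18-month cadence) and 16x (24-month cadence),
    # which is why a 2009 handheld can match a 2001 desktop.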
Future innovations are being shaped by the integration of mobility, social networking, and cloud computing. Among the many future innovations that are coming, two types stand out: location-based services and visual search (described above).

OVERVIEW
The links between Harvard and Shanghai go way back and have dramatically accelerated in the past decade—even in just the past four years—through a series of programs conducted in partnership with Chinese universities. HBS has written dozens of new cases, hired new Mandarin-speaking faculty, and added new courses, all to address the tremendous interest in China. HBS's focus on China is not just because of China's huge population, but because of the enormous opportunities in the country in industries such as software development. China is no longer just a manufacturing center. It has a highly literate, educated workforce and is in the process of climbing the IT value chain. While little known in the West, China is giving birth to a new generation of formidable technology companies.

CONTEXT
Professor McFarlan discussed what HBS is doing in China and reflected on why it is so important for HBS to have a significant presence in the country.

SPEAKER
F. Warren McFarlan, Albert H. Gordon Professor of Business Administration, Emeritus, Harvard Business School

CLOSING REMARKS
• Tsinghua. HBS has a six-week program with Tsinghua University and China Europe International Business School (CEIBS). This program consists of two weeks at Tsinghua, two weeks at CEIBS, and two weeks in Boston. Another program between HBS and Tsinghua, focused on private equity and venture capital, is about to be launched.
• CEIBS. In addition to the program with Tsinghua, CEIBS and HBS have a program for CEOs of companies ranging from $500 million to a few billion dollars. Almost none of these CEOs speak English, yet they are being exposed to HBS cases.
• Beijing University. HBS is partnering with Beijing University on two programs: Driving Corporate Performance and Designing and Executing Strategy.
• Fudan University. HBS has partnered with Fudan University on three programs: growing professional service firms, creating value through service excellence, and strategy and profitable growth.
"None of this existed 10 years ago and almost none of it existed four years ago." — F. Warren McFarlan
Continuing China's economic growth requires moving up the IT value chain. China's economic growth over the past 30 years will be extremely difficult to replicate. Increasing per-capita GNP requires different strategies. In particular, it requires increasing productivity by leveraging IT. But leveraging IT—by climbing the IT value chain—doesn't mean just purchasing hardware and software. Leveraging IT to increase productivity is about services, operating differently, and engaging in change management. China is where the United States was 30 years ago and doesn't yet realize how difficult it is to climb the IT value chain. Yet this is where the key to continued economic growth resides.
Harvard Business School's efforts to re-engage in China began in earnest in the late 1990s. The history of Harvard Business School in Shanghai reaches back to HBS's second MBA class, which had two individuals from Shanghai. By the mid-1920s, the first Harvard Business School Club of Shanghai had been formed; it lasted until 1944.
Following a 30-year disruption due to political factors, conversations about re-engaging with China began again in 1978, when four HBS faculty members, including Professor McFarlan, traveled to China. While interest in China was high, no specific plans emerged. Then, in 1997, recognition that HBS was underinvested in Asia led to the decision to establish a research center in Hong Kong. This Center has produced cases and done extensive research. At about the same time that HBS decided to establish a presence in Asia, the school was approached about teaching Tsinghua University how to conduct executive education. This eventually led to a one-week, dual-language program, co-taught by the two schools, called Managing in the Age of Internet. This initial partnership led to the development of the more expansive program that exists today.
HBS programs in China have grown rapidly in recent years, several built on alliances with Chinese universities. Interest at HBS regarding China is incredibly high. There is now a second-year course called Doing Business in China. There are dozens of cases about China, 11 technical notes, and multiple books. HBS has five faculty members who are fluent in Mandarin, and 30 HBS faculty members will work, visit, teach, and do research in China this year. Sixty Harvard MBA students have PRC passports.
The Harvard Center Shanghai makes new types of programs possible. In 2010, the Center will host 15 weeks of programs, none of which existed four years ago. HBS's programs in China are largely based on partnerships with the leading universities in the country. These include the Tsinghua, CEIBS, Beijing University, and Fudan University programs described above.
To the surprise of many, China is an emerging IT superpower. The conventional wisdom is that China is a center of low-cost manufacturing and India is the center of IT globalization. Certainly, India has been where the action is, but a new story is emerging. As China consciously seeks to move up the IT value chain, it is rapidly becoming a formidable player in the world of IT. China's population is literate and educated. (Literacy rates are 93–95%, far higher than India's.) China's telecommunications infrastructure and bandwidth are massive and growing; there are almost 800 million cell phones in the country. Already, leading technology companies like IBM, Microsoft, and Hewlett-Packard have established strong presences in the country.
"It is an information-enabled society with massive investments in [technological] infrastructure." — F. Warren McFarlan

OVERVIEW
Harvard and China have a long, rich history of partnership and collaboration. Today, collaborative-learning programs exist in each of Harvard's schools and departments. As the world of higher education becomes increasingly global, the level of collaboration between Harvard and China will only deepen. The Harvard Center Shanghai represents another important step in this collaboration, providing unparalleled opportunities.

CONTEXT
President Faust talked about the relationship between Harvard and China in the context of the global expansion of higher education.
SPEAKER
Drew Gilpin Faust, Lincoln Professor of History, Faculty of Arts and Sciences, and President of Harvard University

CLOSING REMARKS
At Harvard, East Asian studies has become a hallmark of the university. The Harvard-Yenching Library has more than one million volumes, making it the largest university East Asian collection outside of Asia. Today, more than 370 courses are offered in East Asian studies in a wide range of subjects, such as history and literature; courses are taught in seven Asian languages, with more than 600 students enrolled.
Opening the Harvard Center Shanghai provides an opportunity for Harvard to reaffirm and enhance its commitment to China. The privilege of universities is to take the long view, as the Harvard Center Shanghai does, and to invest in projects that draw on relationships and knowledge to seize a better future. Harvard's wide array of projects and partners in China and across Asia is a testament to this long view and to planting seeds for the future. Examples include:
• Harvard Business School has published more than 300 cases, articles, and books on China. HBS also is coordinating student immersion experiences in China.
• At Harvard's Fairbank Center, faculty are working with two Chinese university partners to create a free, online biographical database for China. Collaboration over nearly a decade has created a geographic database of anything that can be mapped covering 17 centuries of Chinese history.
• Harvard Medical School has partnerships in China for clinical education and research.
• Harvard's Graduate School of Design has programs and exchanges with China.
• Harvard Law School maintains a broad range of involvement with Chinese legal development on everything from trade to intellectual property to legal education.
The collaboration that has produced the Harvard Center Shanghai creates unparalleled opportunities. The Harvard Center Shanghai is a space that was designed for academic collaboration. It will be a hub for learning, seminars, executive training, and collaborative programs between Harvard faculty and Chinese universities, organizations, and government. The facility will provide new opportunities for Harvard alumni and for current students who participate in internship programs. This facility results from a tremendous amount of collaboration: between Harvard and multiple alumni; between Harvard and Chinese government officials; and among multiple areas within Harvard (Harvard Business School, the Faculty of Arts and Sciences, the Harvard China Fund, the Office of the Provost, and the Vice Provost for International Affairs). These efforts are consistent with President Faust's vision of "one university."
There is a long history of collaboration between Harvard and China. Harvard's first instructor in Chinese arrived in Cambridge (after a journey of nine months) and began teaching Chinese to undergraduates in 1879. Shortly after that, Chinese students began arriving at Harvard and were soon studying in every department and school. By 1908, they had formed a Chinese club. Between 1909 and 1929, about 250 Chinese students graduated from Harvard. These individuals made remarkable contributions in China, with almost half of them becoming professors and more than a dozen becoming university presidents. During this time, a graduate of Harvard Law School helped establish China's first modern law school, ushering in a century of collaboration between Harvard and China's legal system.
In 1911, graduates of Harvard Medical School created the first Western medical school in China. This was the first of many connections in public health and medicine between Harvard and China.
Contrary to predictions of protectionism among nations or schools, the stakes and the players are not national; they are global. As the new Harvard Center Shanghai demonstrates, we are increasingly in a world of universities without borders. Universities exchange faculty and students as never before, and engage in international collaboration and problem solving. Higher education is developing a global meritocracy: underway are a great brain race and a global exchange of ideas. The expanding quality and quantity of universities in Asia and elsewhere open unimagined new possibilities for understanding and discovery. This is a race where everyone wins.
"Increasingly we are in a world of universities without borders." — Drew Gilpin Faust
By teaching creative and critical thinking, universities prepare students for an uncertain world. We live in uncertain times. We can prepare, but we can't predict. In such an environment, students need to learn to think creatively and critically; to improvise; to manage amid uncertainty. The intense, interactive case study method used at Harvard Business School and Harvard Law School has never been more important. Through this method, education unfolds from vivid debate. Teaching the case method in China is just one more way in which Harvard and China are collaborating. For the past five years, at the request of the Chinese Ministry of Education, HBS faculty have worked with more than 200 top Chinese faculty and deans in case method and participant-centered learning programs.
• The Harvard School of Public Health worked with the Chinese government over the past four years on an analysis and plan to provide health insurance to 90% of the Chinese population.
• The Harvard China Project, based at Harvard's School of Engineering and Applied Sciences, is studying air pollution and greenhouse gases. This project draws on faculty from several Harvard departments and Chinese universities.
• Harvard's Kennedy School is involved in multiple collaborations with Chinese partners on clean energy and advanced training programs in policy and crisis management.
• The Harvard China Fund, a university-wide academic venture fund, has made dozens of faculty grants for research partnerships and has placed more than 100 undergraduates in summer internships in China.
These endeavors are a sampling of the collaborative tradition between Harvard and partners in China. These partnerships will share ideas and generate new ones.
Higher education is increasingly global, which benefits all participants. We live in a moment of furious transformation, particularly in higher education. Nowhere is that transformation happening faster than in Asia. In China, the transformation is analogous to the "big bang."
"In a single decade, along with the world's fastest-growing economy, China has created the most rapid expansion of higher education in human history." — Drew Gilpin Faust
This is a moment of tremendous opportunity. It is no coincidence that the second major expansion of Asian studies occurred at Harvard in the 20 years after World War II, when the number of undergraduates in American colleges increased by 500% and the number of graduate students rose almost 900%. China now faces similar opportunities.
THE QUARTERLY JOURNAL OF ECONOMICS
Vol. CXIX, February 2004, Issue 1

THE MODERN HISTORY OF EXCHANGE RATE ARRANGEMENTS: A REINTERPRETATION*

CARMEN M. REINHART AND KENNETH S. ROGOFF

We develop a novel system of reclassifying historical exchange rate regimes. One key difference between our study and previous classifications is that we employ monthly data on market-determined parallel exchange rates going back to 1946 for 153 countries. Our approach differs from the IMF official classification (which we show to be only a little better than random); it also differs radically from all previous attempts at historical reclassification. Our classification points to a rethinking of economic performance under alternative exchange rate regimes. Indeed, the breakup of Bretton Woods had less impact on exchange rate regimes than is popularly believed.

I. INTRODUCTION

This paper rewrites the history of post-World War II exchange rate arrangements, based on an extensive new monthly data set spanning 153 countries for 1946–2001. Our approach differs not only from countries' officially declared classifications (which we show to be only a little better than random); it also differs radically from the small number of previous attempts at historical reclassification. 1

* The authors wish to thank Alberto Alesina, Arminio Fraga, Amartya Lahiri, Vincent Reinhart, Andrew Rose, Miguel Savastano, participants at Harvard University's Canada-US Economic and Monetary Integration Conference, International Monetary Fund-World Bank Joint Seminar, National Bureau of Economic Research Summer Institute, New York University, Princeton University, and three anonymous referees for useful comments and suggestions, and Kenichiro Kashiwase, Daouda Sembene, and Ioannis Tokatlidis for excellent research assistance. Data and background material to this paper are available at http://www.puaf.umd.edu/faculty/papers/reinhart/reinhart.htm.
1. The official classification is given in the IMF's Annual Report on Exchange Rate Arrangements and Exchange Restrictions, which, until recently, asked member states to self-declare their arrangement as belonging to one of four categories.
© 2004 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology. The Quarterly Journal of Economics, February 2004.

As a first innovation, we incorporate data on parallel and dual exchange rate markets, which have been enormously important not only in developing countries but in virtually all the European countries up until the late 1950s, and sometimes well beyond. We argue that any classification algorithm that fails to distinguish between unified rate systems (with one official exchange rate and no significant "black" or parallel market) and all others is fundamentally flawed. Indeed, in the vast majority of multiple exchange rate or dual systems, the floating dual or parallel rate is not only a far better barometer of monetary policy than is the official exchange rate, it is often the most economically meaningful rate. 2 Very frequently (roughly half the time for official pegs) we find that dual/parallel rates have been used as a form of "back door" floating, albeit one usually accompanied by exchange controls. The second novelty in our approach is that we develop extensive chronologies of the history of exchange arrangements and related factors, such as exchange controls and currency reforms.
Together with a battery of descriptive statistics, this allows us to draw a nuanced distinction between what countries declare as their of?cial de jure regime, and their actual de facto exchange rate practices. To capture the wide range of arrangements, our approach allows for fourteen categories of exchange rate regimes, ranging from no separate legal tender or a strict peg to a dysfunctional “freely falling” or “hyperoat.” Some highlights from our reclassi?cation of exchange rate arrangements are as follows. First, dual, or multiple rates, and parallel markets have prevailed far more frequently than is commonly acknowledged. In 1950, 45 percent of the countries in our sample had dual or multiple rates; many more had thriving parallel markets. Among the industrialized economies, dual or multiple rates were the Previous studies have either extended the four-way of?cial classi?cation into a more informative taxonomy (see Ghosh et al. [1997]), or relied largely on statistical methods to regroup country practices (see Levy-Yeyati and Sturzenegger [2002]). The Fund, recognizing the limitations of its former strategy, revised and upgraded the of?cial approach toward classifying exchange rate arrangements in 1997 and again in 1999. Notably, all these prior approaches to exchange rate regime classi?cation, whether or not they accept the country’s declared regime, have been based solely on of?cial exchange rates. 2. When we refer to multiple exchange rates in this context, we are focusing on the cases where one or more of the rates is market-determined. This is very different from the cases where the multiple of?cial rates are all ?xed and simply act as a differential tax on a variety of transactions. Dual markets are typically legal, whereas parallel markets may or may not be legal. 2 QUARTERLY JOURNAL OF ECONOMICSnorm in the 1940s and the 1950s, and in some cases, these lasted until much later. Our data lend strong support to the view stressed by Bordo [1993] that Bretton Woods encompassed two very different kinds of exchange rate arrangements in the preand postconvertibility periods and that the period of meaningful exchange rate stability was quite short-lived. In the developing world, such practices remained commonplace through the 1980s and 1990s and into the present. We show that market-determined dual/parallel markets are important barometers of underlying monetary policy. This may be obvious in cases such as modern-day Myanmar where the parallel market premium at the beginning of 2003 exceeded 700 percent. As we show, however, the phenomenon is much more general, with the parallel market premium often serving as a reliable guide to the direction of future of?cial exchange rate changes. Whereas dual/parallel markets have been marginal over some episodes, they have been economically important in others, and there are many instances where only a few transactions take place at the of?cial rate. To assess the importance of secondary (legal or illegal) parallel markets, we collected data that allow us to estimate export misinvoicing practices, in many cases going back to 1948. These estimates show that leakages from the of?cial market were signi?cant in many of the episodes when there were dual or parallel markets. Second, when one uses market-determined rates in place of of?cial rates, the history of exchange rate policy begins to look very different. For example, it becomes obvious that de facto oating was common during the early years of the Bretton Woods era of “pegged” exchange rates. 
Conversely, many “oats” of the post-1980s turn out to be (de facto) pegs, crawling pegs, or very narrow bands. Of countries listed in the of?cial IMF classi?cation as managed oating, 53 percent turned out to have de facto pegs, crawls, or narrow bands to some anchor currency. Third, next to pegs (which account for 33 percent of the observations during 1970 –2001 (according to our new “Natural” classi?cation), the most popular exchange rate regime over modern history has been the crawling peg, which accounted for over 26 percent of the observations. During 1990 to 2001 this was the most common type of arrangement in emerging Asia and Western Hemisphere (excluding Canada and the United States), making up for about 36 and 42 percent of the observations, respectively. Fourth, our taxonomy introduces a new category: freely fallEXCHANGE RATE ARRANGEMENTS 3ing, or the cases where the twelve-month ination rate is equal to or exceeds 40 percent per annum. 3 It turns out to be a crowded category indeed, with about 12 1 2 percent of the observations in our sample occurring in the freely falling category. As a result, “freely falling” is about three times as common as “freely oating,” which accounts for only 4 1 2 percent of the total observations. (In the of?cial classi?cation, freely oating accounts for over 30 percent of observations over the past decade.) Our new freely falling classi?cation makes up 22 and 37 percent of the observations, respectively, in Africa and Western Hemisphere (excluding Canada and the United States) during 1970 –2001. In the 1990s freely falling accounted for 41 percent of the observations for the transition economies. Given the distortions associated with very high ination, any ?xed versus exible exchange rate regime comparisons that do not break out the freely falling episodes are meaningless, as we shall con?rm. There are many important reasons to seek a better approach to classifying exchange rate regimes. Certainly, one is the recognition that contemporary thinking on the costs and bene?ts of alternative exchange rate arrangements has been profoundly in- uenced by the large number of studies on the empirical differences in growth, trade, ination, business cycles, and commodity price behavior. Most have been based on the of?cial classi?cations and all on of?cial exchange rates. In light of the new evidence we collect, we conjecture that the inuential results in Baxter and Stockman [1989]—that there are no signi?cant differences in business cycles across exchange arrangements—may be due to the fact that the of?cial historical groupings of exchange rate arrangements are misleading. The paper proceeds as follows. In the next section we present evidence to establish the incidence and importance of dual or multiple exchange rate practices. In Section III we sketch our methodology for reclassifying exchange rate arrangements. Section IV addresses some of the possible critiques to our approach, compares our results with the “of?cial history,” and provides examples of how our reclassi?cation may reshape evidence on the links between exchange rate arrangements and various facets of economic activity. The ?nal section reiterates some of the main 3. We also include in the freely falling category the ?rst six months following an exchange rate crisis (see the Appendix for details), but only for those cases where the crisis marked a transition from a peg or quasi-peg to a managed or independent oat. 
4 QUARTERLY JOURNAL OF ECONOMICS?ndings, while background material to this paper provides the detailed country chronologies that underpin our analysis. II. THE INCIDENCE AND IMPORTANCE OF DUAL AND MULTIPLE EXCHANGE RATE ARRANGEMENTS In this section we document the incidence of dual or parallel markets (legal or otherwise) and multiple exchange rate practices during post-World War II. We then present evidence that the market-determined exchange rate is a better indicator of the underlying monetary policy than the of?cial exchange rate. Finally, to provide a sense of the quantitative importance for economic activity of the dual or parallel market, we present estimates of “leakages” from the of?cial market. Speci?cally, we provide quantitative measures of export misinvoicing practices. We primarily use monthly data on of?cial and market-determined exchange rates for the period 1946 –2001. In some instances, the data for the market-determined rate is only available for a shorter period and the background material provides the particulars on a country-by-country basis. The pre-1999 marketdetermined exchange rate data come from various issues of Pick’s Currency Yearbook, Pick’s Black Market Yearbooks, and World Currency Reports, and the of?cial rate comes from the same sources and as well as the IMF. The quotes are end-of-month exchange rates and are not subject to revisions. For the recent period (1999 –2001) the monthly data on market-determined exchange rates come from the original country sources (i.e., the central banks), for those countries where there are active parallel markets for which data are available. 4 Since our coverage spans more than 50 years, it encompasses numerous cases of monetary reforms involving changes in the units of account, so the data were spliced accordingly to ensure continuity. II.A. On the Popularity of Dual and Multiple Exchange Rate Practices Figure I illustrates de facto and de jure nonuni?ed exchange rate regimes. The ?gure shows the incidence of exchange rate arrangements over 1950 –2001, with and without stripping out 4. These countries include Afghanistan, Angola, Argentina, Belarus, Belize, Bolivia, Burundi, Congo (DCR), Dominican Republic, Egypt, Ghana, Iran, Libya, Macedonia, Mauritania, Myanmar, Nigeria, Pakistan, Rwanda, Tajikistan, Turkmenistan, Ukraine, Uzbekistan, Yemen, Yugoslavia, and Zimbabwe. EXCHANGE RATE ARRANGEMENTS 5cases of dual markets or multiple exchange rates. The IMF classi?cation has been simpli?ed into what it was back in the days of Bretton Woods—namely, Pegs and Other. 5 The dark portions of the bars represent cases with uni?ed exchange rates, and the lightly shaded portion of each bar separates out the dual, multiple, or parallel cases. In 1950 more than half (53 percent) of all arrangements involved two or more exchange rates. Indeed, the heyday of multiple exchange rate practices and active parallel markets was 1946 –1958, before the restoration of convertibility in Europe. Note also, that according to the of?cial IMF classi?- cation, pegs reigned supreme in the early 1970s, accounting for over 90 percent of all exchange rate arrangements. In fact, over half of these “pegs” masked parallel markets that, as we shall show, often exhibited quite different behavior. 5. For a history of the evolution of the IMF’s classi?cation strategy, see the working paper version of this paper, Reinhart and Rogoff [2002]. 
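The splicing step mentioned above (carrying a monthly exchange rate series through changes in the unit of account so it remains continuous) can be illustrated with a short sketch. This is not the authors' code: the series, the reform date, and the conversion factor below are hypothetical placeholders, and the sketch assumes the redenomination exchanges a fixed number of old units for one new unit.

```python
import pandas as pd

def splice_redenomination(rate: pd.Series,
                          reform_date: str,
                          old_units_per_new: float) -> pd.Series:
    """Re-express pre-reform observations in the new unit of account so the
    monthly series (local currency per US dollar) is continuous across a
    currency redenomination."""
    spliced = rate.copy()
    pre_reform = spliced.index < pd.Timestamp(reform_date)
    spliced[pre_reform] = spliced[pre_reform] / old_units_per_new
    return spliced

# Hypothetical example: 1,000 old units were exchanged for 1 new unit in January 1975.
idx = pd.date_range("1974-01-01", periods=36, freq="MS")
old_units_series = pd.Series(range(1, 37), index=idx, dtype=float) * 1000.0
continuous = splice_redenomination(old_units_series, "1975-01-01", 1000.0)
```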
FIGURE I The Incidence of Dual or Multiple Exchange Rate Arrangements, 1950–2001: Simpli?ed IMF Classi?cation Sources: International Monetary Fund, Annual Report on Exchange Arrangements and Exchange Restrictions and International Financial Statistics; Pick and Se´dillot [1971]; International Currency Analysis, World Currency Yearbook, various issues. Exchange rate arrangements classi?ed as “Other” include the IMF’s categories of limited exibility, managed oating, and independently oating. 6 QUARTERLY JOURNAL OF ECONOMICSII.B. The Market-Determined Exchange Rate as an Indicator of Monetary Policy While the quality of data on market-determined rates is likely to vary across countries and time, we nevertheless believe these data to be generally far better barometers of the underlying monetary policy than are of?cial exchange rates. For instance, if the laxity in monetary policy is not consistent with maintaining a ?xed of?cial exchange rate, one would expect that the marketdetermined rate starts depreciating ahead of the inevitable devaluation of the of?cial rate. When the of?cial realignment occurs—it is simply a validation of what had previously transpired in the free market. Indeed, this is the pattern shown in the three panels of Figure II for the cases of Bolivia, Indonesia, and Iran— many more such cases are displayed in the ?gures that accompany the 153 country chronologies. 6 This pattern also emerges often in the developed European economies and Japan in the years following World War II. To illustrate more rigorously that the market-based exchange rate is a better indicator of the monetary policy stance than the of?cial rate, we performed two exercises for each country. First, we examined whether the market-determined exchange rate systematically predicts realignments in the of?cial rate, as suggested in Figure II. To do so, we regressed a currency crash dummy on the parallel market premium lagged one to six months, for each of the developing countries in our sample. 7 If the market exchange rate consistently anticipates devaluations of the of?cial rate, its coef?cient should be positive and statistically signi?cant. If, in turn, the of?cial exchange rate does not validate the market rate, then the coef?cient on the lagged market exchange rate will be negative or simply not signi?cant. Table I summarizes the results of the country-by-country time series probit regressions. In the overwhelming number of cases (97 percent), the coef?cient on the market-determined exchange rate is positive. In about 81 percent of the cases, the sign on the coef?cient was positive and statistically signi?cant. Indeed, for 6. See “Part I. The Country Chronologies and Chartbook, Background Material to A Modern History of Exchange Rate Arrangements: A Reinterpretation” at http://www.puaf.umd.edu/faculty/papers/reinhart/reinhart.htm. 7. Two de?nitions of currency crashes are used. A severe currency crash refers to a 25 percent or higher monthly depreciation which is at least 10 percent higher than the previous month’s depreciation. The “milder” version represents a 12.5 percent monthly depreciation which is at least 10 percent above the preceding month’s depreciation; see details in the Appendix. EXCHANGE RATE ARRANGEMENTS 7FIGURE II Of?cial Exchange Rates Typically Validate the Changes in the Market Rates Sources: Pick and Se´dillot [1971]; International Currency Analysis, World Currency Yearbook, various issues. 
Western Hemisphere as a region, the coefficient on the parallel premium was significant for all the countries in our sample. These findings are in line with those of Bahmani-Oskooee, Miteza, and Nasir [2002], who use panel annual data for 1973–1990 for 49 countries and employ a completely different approach. Their panel cointegration tests indicate that the official rate will systematically adjust to the market rate in the long run.

TABLE I
IS THE PARALLEL MARKET RATE A GOOD PREDICTOR OF CRASHES IN THE OFFICIAL EXCHANGE RATE? SUMMARY OF THE PROBIT COUNTRY-BY-COUNTRY ESTIMATION
Regression: ΔO_t = α + β ΔP_{t−i} + u_t
Percent of countries for which ("mild" crash definition):
β > 0: 97.1
β > 0 and significant (a): 81.4
β < 0: 2.9
β < 0 and significant (a): 1.4
Sources: Pick's Currency Yearbook, World Currency Report, Pick's Black Market Yearbook, and the authors' calculations. ΔO_t is a dummy variable that takes on the value of 1 when there is a realignment in the official exchange rate along the lines described below and 0 otherwise; α and β are the intercept and slope coefficients, respectively (our null hypothesis is β > 0); ΔP_{t−i} is the twelve-month change in the parallel exchange rate, lagged one to six months (the lags were allowed to vary country by country, as there was no prior reason to restrict dynamics to be the same for all countries); and u_t is a random disturbance. Two definitions of currency crashes are used in the spirit of Frankel and Rose [1996]. A "severe" currency crash refers to a 25 percent or higher monthly depreciation, which is at least 10 percent higher than the previous month's depreciation. The "mild" version represents a 12.5 percent monthly depreciation, which is at least 10 percent above the preceding month's depreciation. Since both definitions of crash yield similar results, we report here only those for the more inclusive definition. The regression sample varies by country and is determined by data availability.
a. At the 10 percent confidence level or higher.

Second, we calculated pairwise correlations between inflation (measured as the twelve-month change in the consumer price index) and the twelve-month percent change in the official and market exchange rates, six months earlier. If the market rate is a better pulse of monetary policy, it should be (a priori) more closely correlated with inflation. As shown in Table II, we find that for the majority of cases (about three-quarters of the countries) the changes in market-determined exchange rates have higher correlations with inflation than do changes in the official rate. 8 An interesting exception to this pattern of higher correlations between the market-determined exchange rate changes and inflation is for the industrial countries in the "Convertible Bretton Woods" period (1959–1973), an issue that merits further study.

8. Note that, due to data limitations, we use official prices rather than black market or "street" prices to measure inflation here. Otherwise, the dominance of the market-determined rates in this exercise would presumably be even more pronounced.

II.C. How Important Are Parallel Markets?

There are cases where the parallel (or secondary) exchange rate applies only to a few limited transactions. An example is the "switch pound" in the United Kingdom during September 1950 through April 1967. 9 However, it is not unusual for dual or parallel markets (legal or otherwise) to account for the lion's share of transactions, with the official rate being little more than symbolic.
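The two country-by-country exercises described in Section II.B above lend themselves to a compact sketch. The code below is an illustration rather than a reconstruction of the authors' programs: the input series (official rate, parallel rate, and CPI for a single country) are hypothetical, only the "mild" crash definition and a single premium lag are implemented, and "at least 10 percent above the preceding month's depreciation" is read here as 10 percentage points.

```python
import pandas as pd
import statsmodels.api as sm

def mild_crash_dummy(official: pd.Series) -> pd.Series:
    """1 if the monthly depreciation is >= 12.5% and at least 10 points above
    the previous month's depreciation (the 'mild' definition in Table I).
    `official` is the official rate in local currency per US dollar."""
    dep = official.pct_change() * 100.0
    return ((dep >= 12.5) & (dep - dep.shift(1) >= 10.0)).astype(int)

def crash_probit(official: pd.Series, parallel: pd.Series, lag: int = 1):
    """Probit of the crash dummy on the lagged twelve-month change in the parallel rate."""
    y = mild_crash_dummy(official)
    x = parallel.pct_change(12).shift(lag) * 100.0
    data = pd.concat({"crash": y, "dparallel": x}, axis=1).dropna()
    return sm.Probit(data["crash"], sm.add_constant(data["dparallel"])).fit(disp=False)

def inflation_correlations(cpi: pd.Series, official: pd.Series, parallel: pd.Series) -> dict:
    """Correlation of twelve-month inflation with the twelve-month change in each
    exchange rate six months earlier (cf. Table II)."""
    infl = cpi.pct_change(12)
    return {
        "official": infl.corr(official.pct_change(12).shift(6)),
        "market": infl.corr(parallel.pct_change(12).shift(6)),
    }
```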
As Kiguel, Lizondo, and O’Connell [1997] note, the of?cial rate typically diminishes in importance when the gap between the of?cial and market-determined rate widens. To provide a sense of the comparative relevance of the dual or parallel market, we proceed along two complementary dimensions. First, we include a qualitative description in the countryspeci?c chronologies (see background material) of what transactions take place in the of?cial market versus the secondary market. Second, we develop a quantitative measure of the potential size of the leakages into dual or parallel exchange markets. 10 9. For example, while the United Kingdom of?cially had dual rates through April 1967, the secondary rate was so trivial (both in terms of the premium and the volume of transactions it applied to) that it is classi?ed as a peg in our classi?cation scheme (see background material). In the next section we describe how our classi?cation algorithm deals with these cases. 10. For instance, according to Claessens [1997], export underinvoicing hit a historic high in Mexico during 1982—the crisis year in which the dual market was TABLE II INFLATION, OFFICIAL AND MARKET-DETERMINED EXCHANGE RATES: COUNTRY-BY-COUNTRY PAIRWISE CORRELATIONS Percent of countries for which the correlations of: The market-determined exchange rate and ination are higher than the correlations of the of?cial rate and ination 73.7 The market-determined exchange rate and ination are lower than the correlations of the of?cial rate and ination 26.3 Sources: International Monetary Fund, International Financial Statistics, Pick’s Currency Yearbook, World Currency Report, Pick’s Black Market Yearbook, and the authors’ calculations. The correlations reported are those of the twelve-month percent change in the consumer price index with the twelve-month percent change in the relevant bilateral exchange rate lagged six months. 10 QUARTERLY JOURNAL OF ECONOMICSFollowing Ghei, Kiguel, and O’Connell [1997], we classify episodes where there are dual/parallel markets into three tiers according to the level (in percent) of the parallel market premium: low (below 10 percent), moderate (10 percent or above but below 50), and high (50 percent and above). For the episodes of dual/ parallel markets, we provide information about which category each episode falls in (by calculating the average premium for the duration of the episode). In addition to the information contained in the premium, we constructed an extensive database on export misinvoicing, or the difference between what a country reports as its exports and what other countries report as imports from that country, adjusted for shipping costs. Historically, there are tight links between capital ight, export underinvoicing, and the parallel market premium. 11 As with the parallel market premium, we divide the export misinvoicing estimates into three categories (as a percent of the value of total exports): low (less than 10 percent of exports), moderate (10 to 15 percent of exports), and high (above 15 percent). For Europe, Japan, and the United States, misinvoicing calculations start in 1948, while for the remaining countries these start in 1970. In the extensive background material to this paper, we show, for each episode, which of the three categories is applicable. Finally, we construct a score (1 for Low, 2 for Moderate, and 3 for High) for both of these proxies for leakages. The combined score on the estimated size of the leakages (these range from 2 to 6) is also reported. 
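The leakage proxies just described, the parallel market premium and export misinvoicing, each bucketed into Low/Moderate/High and summed into a 2–6 score, map directly into a short sketch. The thresholds are the ones stated above; the function names and scalar inputs are illustrative, and the misinvoicing measure follows the formula given in the notes to Table III (imports from country i as reported by the rest of the world, deflated by a CIF/FOB adjustment factor, minus country i's reported exports).

```python
def parallel_premium(parallel_rate: float, official_rate: float) -> float:
    """Parallel market premium in percent: 100 * (P - O) / O."""
    return 100.0 * (parallel_rate - official_rate) / official_rate

def premium_tier(avg_premium: float) -> int:
    """1 = Low (< 10%), 2 = Moderate (10% to < 50%), 3 = High (>= 50%)."""
    if avg_premium < 10.0:
        return 1
    return 2 if avg_premium < 50.0 else 3

def misinvoicing_share(world_imports_from_i: float,
                       exports_reported_by_i: float,
                       cif_fob_factor: float) -> float:
    """|Export misinvoicing| as a percent of exports, following the Table III notes:
    misinvoicing = (XW_i / Z) - X_i, where Z is the CIF/FOB adjustment."""
    misinvoicing = world_imports_from_i / cif_fob_factor - exports_reported_by_i
    return 100.0 * abs(misinvoicing) / exports_reported_by_i

def misinvoicing_tier(share_of_exports: float) -> int:
    """1 = Low (< 10% of exports), 2 = Moderate (10-15%), 3 = High (> 15%)."""
    if share_of_exports < 10.0:
        return 1
    return 2 if share_of_exports <= 15.0 else 3

def leakage_score(avg_premium: float, misinv_share: float) -> int:
    """Combined 2-6 leakage score: premium tier plus misinvoicing tier."""
    return premium_tier(avg_premium) + misinvoicing_tier(misinv_share)
```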
12 Table III, which shows the evolution of export misinvoicing (as a percent of the value of total exports) and the parallel market premium (in percent) across regions and through time, provides a general avor of the size of potential leakages from the of?cial market. According to our estimates of misinvoicing (top panel), the regional patterns show the largest leakages for the Caribbean and non-CFA Sub-Saharan Africa 1970 –2001, with averages in the 30 to 50 percent range. The lowest estimates of misinvoicing (8 to 11 percent) are for Western Europe, North America, and the introduced. Similar statements can be made about other crisis episodes that involved the introduction of exchange controls and the segmentation of markets. 11. See Kiguel, Lizondo, and O’Connell [1997] and the references contained therein. 12. See “Part II. Parallel Markets and Dual and Multiple Exchange Rate Practices: Background Material to A Modern History of Exchange Rate Arrangements: A Reinterpretation” at http://www.puaf.umd.edu/faculty/papers/reinhart/reinhart.htm. EXCHANGE RATE ARRANGEMENTS 11TABLE III LEAKAGES: EXPORT MISINVOICING AND THE PARALLEL MARKET PREMIUM ABSOLUTE VALUE OF EXPORT MISINVOICING (AS A PERCENT OF THE VALUE OF EXPORTS) Descriptive statistics Mean absolute value (by decade) Min. Max. St. dev 48–49 50–59 60–69 70–79 80–89 90–01 70–01 World 7.0 39.8 8.4 12.8 10.9 9.9 24.7 22.1 26.0 24.4 North Africa 2.5 59.9 10.3 ... ... ... 7.2 8.3 16.1 10.9 CFA 12.6 48.3 8.4 ... ... ... 28.5 21.7 21.5 23.8 Rest of Africa 16.3 201.9 33.5 ... ... ... 23.4 23.4 53.6 34.1 Middle East and Turkey 9.1 45.4 9.6 ... ... ... 30.7 16.7 17.4 21.5 Developing Asia and Paci?c 9.5 79.1 16.9 ... ... ... 31.4 14.9 24.1 23.5 Industrialized Asia 3.7 18.2 3.3 11.2 14.2 13.9 14.6 12.0 10.3 12.2 Caribbean 9.7 136.0 33.2 ... ... ... 30.8 48.9 60.0 47.0 Central and South America 12.0 49.6 8.2 ... ... ... 26.1 36.0 30.4 30.8 Central and Eastern Europe 2.5 50.0 18.3 ... ... ... 46.6 15.4 7.4 22.1 Western Europe 2.4 16.9 3.0 14.1 10.4 10.0 11.6 7.6 7.7 8.9 North America 0.6 22.6 5.9 4.6 9.4 3.8 16.0 11.4 4.8 10.4 Monthly average parallel market premium (excluding freely falling episodes, in percent) Descriptive statistics Average (by decade) Min. Max. St. dev 46–49 50–59 60–69 70–79 80–89 90–98 46–98 World 11.6 205.9 35.4 137.8 56.7 38.1 31.3 57.8 52.6 54.1 North Africa 21.2 164.8 41.4 ... 9.9 35.7 30.7 108.6 62.0 53.6 CFA 26.4 12.7 2.7 ... ... ... 0.0 1.2 1.8 0.9 Rest of Africa 1.7 322.5 73.9 31.9 6.9 33.7 113.7 112.7 107.7 71.0 Middle East and Turkey 5.1 493.1 99.6 54.6 81.0 26.0 21.4 146.5 193.2 88.6 Developing Asia and Paci?c 23.7 660.1 95.0 143.5 60.9 168.9 44.7 43.1 12.1 72.9 Industrialized Asia 26.9 815.9 107.6 324.4 43.0 12.0 3.6 1.3 1.5 36.1 Caribbean 223.8 300.0 42.8 ... ... 29.6 30.2 56.8 53.6 42.3 Central and South America 3.0 716.1 78.5 49.1 133.0 16.4 18.6 74.8 8.4 51.0 Western Europe 25.6 347.5 48.6 165.5 17.0 1.2 2.0 1.7 1.2 16.9 North America 24.3 49.7 3.3 7.2 0.5 0.0 1.1 1.4 1.6 1.3 Sources: International Monetary Fund, Direction of Trade Statistics, International Financial Statistics, Pick’s Currency Yearbook, World Currency Report, Pick’s Black Market Yearbook, and authors’ calculations. To calculate export misinvoicing, let XWi 5 imports from country i, as reported by the rest of the world (CIF basis), Xi 5 exports to the world as reported by country i, Z 5 imports CIF basis/imports COB basis, then export misinvoicing 5 (XWi /Z) 2 Xi . 
The averages reported are absolute values as a percent of the value of total exports. The parallel premium is de?ned as 100 3 [(P 2 O)/O)], where P and O are the parallel and of?cial rates, respectively. The averages for the parallel premium are calculated for all the countries in our sample in each region, as such, it includes countries where rates are uni?ed and the premium is zero or nil. 12 QUARTERLY JOURNAL OF ECONOMICSCFA Franc Zone. It is also noteworthy that, although low by the standards of other regions, the export misinvoicing average in 1970 –2001 for Western Europe is half of what it was in 1948 – 1949. Yet these regional averages may understate the importance of misinvoicing in some countries. For example, the maximum value for 1948 –2001 for Western Europe (16.9 percent) does not reect the fact that for Spain misinvoicing as a percent of the value of exports amounted to 36 percent in 1950, a comparable value to what we see in some of the developing regions. As to the regional average parallel market premium shown in the bottom panel of Table III, all regions fall squarely in the Moderate-to-High range (with the exception of North America, Western Europe, and CFA Africa). In the case of developing Asia, the averages are signi?cantly raised by Myanmar and Laos. It is worth noting the averages for Europe and industrialized Asia in the 1940s are comparable and even higher than those recorded for many developing countries, highlighting the importance of acknowledging and accounting for dual markets during this period. To sum, in this section we have presented evidence that leads us to conclude that parallel markets were both important as indicators of monetary policy and as representative of the prices underlying an important share of economic transactions. It is therefore quite reasonable to draw heavily on the dual or parallel market data in classifying exchange rate regimes, the task to which we now turn. III. THE “NATURAL” CLASSIFICATION CODE: A GUIDE We would describe our classi?cation scheme as a “Natural” system that relies on a broad variety of descriptive statistics and chronologies to group episodes into a much ?ner grid of regimes, rather than the three or four buckets of other recent classi?cation strategies. 13 The two most important new pieces of information we bring to bear are our extensive data on market-determined dual or parallel exchange rates and detailed country chronologies. The data, its sources, and country coverage are described along with the chronologies that map the history of exchange rate arrangements for each country in the detailed background mate- 13. In biology, a natural taxonomic scheme relies on the characteristics of a species to group them. EXCHANGE RATE ARRANGEMENTS 13rial to this paper. To verify and classify regimes, we also rely on a variety of descriptive statistics based on exchange rate and ination data from 1946 onwards; the Appendix describes these. III.A. The Algorithm Figure III is a schematic summarizing our Natural Classi?- cation algorithm. First, we use the chronologies to sort out for separate treatment countries with either of?cial dual or multiple rates or active parallel (black) markets. 14 Second, if there is no dual or parallel market, we check to see if there is an of?cial preannounced arrangement, such as a peg or band. If there is, we examine summary statistics to verify the announced regime, going forward from the date of the announcement. 
If the regime is veri?ed (i.e., exchange rate behavior accords with the preannounced policy), it is then classi?ed accordingly as a peg, crawling peg, etc. If the announcement fails veri?cation (by far the most common outcome), we then seek a de facto statistical classi?cation using the algorithm described below, and discussed in greater detail in the Appendix. Third, if there is no preannounced path for the exchange rate, or if the announced regime cannot be veri?ed by the data and the twelve-month rate of ination is below 40 percent, we classify the regime by evaluating exchange rate behavior. As regards which exchange rate is used, we consider a variety of potential anchor currencies including the US dollar, deutsche mark, euro, French franc, UK pound, yen, Australian dollar, Italian lira, SDR, South African rand, and the Indian rupee. A reading of the country chronologies makes plain that the relevant anchor currency varies not only across countries but sometimes within a country over time. (For example, many former British colonies switched from pegging to the UK pound to pegging to the US dollar.) Our volatility measure is based on a ?ve-year moving window (see the Appendix for details), so that the monthly exchange rate behavior may be viewed as part of a larger, continuous, regime. 15 14. See background material posted at http://www.puaf.umd.edu/faculty/ papers/reinhart/reinhart.htm. 15. If the classi?cation is based on exchange rate behavior in a particular year, it is more likely that one-time events (such as a one-time devaluation and repeg) or an economic or political shock leads to labeling the year as a change in regime, when in effect there is no change. For example, Levy-Yeyati and Sturzenegger [2002], who classify regimes one year at a time (with no memory), classi?ed all CFA zone countries as having an intermediate regime in 1994, when 14 QUARTERLY JOURNAL OF ECONOMICSthese countries had a one-time devaluation in January of that year. Our algorithm classi?es them as having pegs throughout. The ?ve-year window also makes it less likely that we classify as a peg an exchange rate that did not move simply because it was a tranquil year with no economic or political shocks. It is far less probable that there are no shocks over a ?ve-year span. FIGURE III A Natural Exchange Rate Classi?cation Algorithm EXCHANGE RATE ARRANGEMENTS 15We also examined the graphical evidence as a check on the classi?cation. In practice, the main reason for doing so is to separate pegs from crawling pegs or bands and to sort the latter into crawling and noncrawling bands. Fourth, as we have already stressed, a straightforward but fundamental departure from all previous classi?cation schemes is that we create a new separate category for countries whose twelve-month rate of ination is above 40 percent. These cases are labeled “freely falling.” 16 If the country is in a hyperination (according to the classic Cagan [1956] de?nition of 50 percent or more monthly ination), we categorize the exchange rate regime as a “hyperoat,” a subspecies of freely falling. In Figure IV, bilateral exchange rates versus the US dollar are plotted for two countries that have been classi?ed by the IMF (and all previous classi?cation efforts) as oating over much of the postwar period—Canada and Argentina. 17 To us, lumping the Canadian oat with that of Argentina during its hyperination seems, at a minimum, misleading. 
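The freely falling screen in step 4 reduces to a simple function of a country's CPI series: a month is freely falling when twelve-month inflation is 40 percent or more, and a hyperfloat when monthly inflation reaches the classic Cagan threshold of 50 percent. The sketch below is only a schematic of that rule; it ignores the refinements noted in the text and footnotes (the six-month post-crisis window and the exception for a verified preannounced crawl or band), and the input series is a hypothetical placeholder.

```python
import pandas as pd

def freely_falling_flags(cpi: pd.Series) -> pd.DataFrame:
    """Flag freely falling (twelve-month inflation >= 40%) and hyperfloat
    (monthly inflation >= 50%, Cagan's definition) month by month."""
    annual_inflation = cpi.pct_change(12) * 100.0
    monthly_inflation = cpi.pct_change() * 100.0
    return pd.DataFrame({
        "freely_falling": annual_inflation >= 40.0,
        "hyperfloat": monthly_inflation >= 50.0,
    })
```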
As Figure IV illustrates, oating regimes look rather different from freely falling regimes—witness the orders of magnitude difference in the scales between Canada (top of page) and Argentina (bottom). This difference is highlighted in the middle panel, which plots the Canadian dollar-US dollar exchange rate against Argentina’s scale; from this perspective, it looks like a ?xed rate! The exchange rate histories of other countries that experienced chronic high ination bouts—even if these did not reach the hyperination stage—look more similar to Argentina in Figure IV than to Canada. 18 In our view, regimes associated with an utter lack of monetary control and the attendant very high ination should not be automatically lumped under the same exchange rate arrangement as low ination oating regimes. On these grounds, freely falling needs to be treated as a separate category, much in the same way that Highly Indebted Poorest Countries (HIPC) are treated as a separate “type” of debtor. 16. In the exceptional cases (usually the beginning of an ination stabilization plan) where, despite ination over 40 percent, the market rate nevertheless follows a con?rmed, preannounced band or crawl, the preannounced regime takes precedence. 17. For Argentina, this of course refers to the period before the Convertibility Plan is introduced in April 1991 and for Canada the post-1962 period. 18. Two-panel ?gures, such as that shown for Chile (Figure V), for each country in the sample are found in the background material alongside the country-speci?c chronologies. 16 QUARTERLY JOURNAL OF ECONOMICSFIGURE IV The Essential Distinction between Freely Floating and Falling Sources: Pick and Se´dillot [1971]; International Currency Analysis, World Currency Yearbook, various issues. EXCHANGE RATE ARRANGEMENTS 17In step 5 we take up those residual regimes that were not classi?ed in steps 1 through 4. These regimes become candidates for “managed” or “freely” oating. 19 To distinguish between the two, we perform some simple tests (see the Appendix) that look at the likelihood the exchange rate will move within a narrow range, as well as the mean absolute value of exchange rate changes. When there are dual or parallel markets and the parallel market premium is consistently 10 percent or higher, we apply steps 1 through 5 to our data on parallel exchange rates and reclassify accordingly, though in our ?ner grid. 20 III.B. Using the Chronologies The 153 individual country chronologies are also a central point of departure from all previous efforts to classify regimes. In the ?rst instance the data are constructed by culling information from annual issues of various secondary sources, including Pick’s Currency Yearbook, World Currency Yearbook, Pick’s Black Market Yearbook, International Financial Statistics, the IMF’s Annual Report on Exchange Rate Arrangements and Exchange Restrictions, and the United Nations Yearbook. Constructing our data set required us to sort and interpret information for every year from every publication above. Importantly, we draw on national sources to investigate apparent data errors or inconsistencies. More generally, we rely on the broader economics literature to include pertinent information, such as the distribution of transactions among of?cial and parallel markets. 21 The chronologies allow us to date dual or multiple exchange rate episodes, as well as to differentiate between preannounced pegs, crawling pegs, and bands from their de facto counterparts. 
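Read as pseudocode, steps 1 through 5 of Section III.A amount to a decision tree. The sketch below is a schematic of that logic, not the authors' algorithm: the verification statistics, the five-year-window band tests, and the managed-versus-freely-floating tests live behind placeholder callables (`announced_regime_verified`, `fits_de_facto_peg_or_band`, `looks_freely_floating`) whose details are in the paper's Appendix and are not reproduced here.

```python
def classify_regime(country_month,
                    has_dual_or_parallel_market: bool,
                    premium_is_10pct_or_more: bool,
                    annual_inflation: float,
                    announced_regime_verified,
                    fits_de_facto_peg_or_band,
                    looks_freely_floating):
    """Schematic of the Natural classification decision tree (Section III.A)."""
    # Step 1: with a dual/parallel market and a premium consistently at or above
    # 10 percent, the caller re-runs the classification on the parallel-rate data.
    use_parallel_rate = has_dual_or_parallel_market and premium_is_10pct_or_more

    # Step 2: a preannounced peg, crawl, or band that is verified by the data
    # keeps its announced label (this also takes precedence over freely falling).
    announced = announced_regime_verified(country_month)
    if announced is not None:
        return announced, use_parallel_rate

    # Step 4 (checked before the statistical sort): freely falling whenever
    # twelve-month inflation is 40 percent or more.
    if annual_inflation >= 40.0:
        return "freely falling", use_parallel_rate

    # Step 3: otherwise classify from exchange rate behavior against the relevant
    # anchor currency over a five-year window (peg, crawling peg, band, ...).
    de_facto = fits_de_facto_peg_or_band(country_month)
    if de_facto is not None:
        return de_facto, use_parallel_rate

    # Step 5: residual regimes are candidates for managed or freely floating.
    if looks_freely_floating(country_month):
        return "freely floating", use_parallel_rate
    return "managed floating", use_parallel_rate
```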
We think it is important to distinguish between, say, de facto pegs or bands and announced pegs or bands, because their properties are potentially different. 22 At the very least, we want to provide future researchers with the data needed to ask a variety of questions about the role of exchange rate arrangements. The chronologies also flag the dates for important turning points, such as when the exchange rate first floated, or when the anchor currency was changed.

19. Our classification of "freely floating" is the analogue of "independently floating" in the official classification.
20. When the parallel market premium is consistently (i.e., all observations within the five-year window) in single digits, we find that in nearly all these cases the official and parallel rates yield the same classification.
21. See Marion [1994], for instance.
22. Policy-makers may not be indifferent between the two. In theory, at least, announcements of pegs, bands, and so on can act as a coordinating device which, by virtue of being more transparent, could invite speculative attacks.

Table IV gives an example of one of our 153 chronologies (see background material) for the case of Chile. The first column gives critical dates. Note that we extend our chronologies as far back as possible (even though we can only classify from 1946 onwards); in the case of Chile we go back to 1932. The second column lists how the arrangement is classified. Primary classification refers to the classification according to our Natural algorithm, which may or may not correspond to the official IMF classification (shown in parentheses in the second column of Table IV). Secondary and tertiary classifications are meant only to provide supplemental information, as appropriate. So, for example, from November 1952 until April 1956, Chile's inflation was above 40 percent, and hence its primary classification is freely falling; that is the only classification that matters for the purposes of the Natural algorithm. For those interested in additional detail, however, we also note in that column that the market-determined exchange rate was a managed float along the lines described in detail in the Appendix (secondary) and that, furthermore, Chile had multiple exchange rates (tertiary). This additional information may be useful, for example, for researchers who are not interested in treating the high inflation cases separately (as we have done here). In this case, they would have sufficient information to place Chile in the 1952–1956 period in the managed float category. Alternatively, for those researchers who wish to treat dual or multiple exchange rate practices as a separate category altogether (say, because these arrangements usually involve capital controls), the second column (under secondary or tertiary classification) provides the relevant information to do that sorting accordingly. As one can see, although Chile unified rates in September 1999, it previously had some form of dual or multiple rates throughout most of its history. In these circumstances, we reiterate that our classification algorithm relies on the market-determined, rather than the official, exchange rate. 23

23. The other chronologies do not contain this information, but the annual official IMF classification for the countries in the sample is posted at http://www.puaf.umd.edu/faculty/papers/reinhart/reinhart.htm.
TABLE IV
A SAMPLE CHRONOLOGY IN THE NATURAL CLASSIFICATION SCHEME: CHILE, 1932–2001
Columns: Date | Classification, primary/secondary/tertiary (official IMF classification in parentheses) | Comments

September 16, 1925–April 20, 1932 | Peg | Gold standard. Foreign exchange controls are introduced on July 30, 1931.
April 20, 1932–1937 | Dual market | Pound sterling is reference currency. Suspension of gold standard.
1937–February 1946 | Managed floating/Multiple rates | US dollar becomes the reference currency.
March 1946–May 1947 | Freely falling/Managed floating/Multiple rates |
June 1947–October 1952 | Managed floating/Multiple rates |
November 1952–April 16, 1956 | Freely falling/Managed floating/Multiple rates |
April 16, 1956–August 1957 | Freely falling/Managed floating/Dual market | Rate structure is simplified, and a dual market is created.
September 1957–June 1958 | Managed floating/Dual market |
July 1958–January 1, 1960 | Freely falling/Managed floating/Dual market |
January 1, 1960–January 15, 1962 | Peg to US dollar | The escudo replaces the peso.
January 15, 1962–November 1964 | Freely falling/Managed floating/Multiple rates | Freely falling since April 1962.
December 1964–June 1971 | Managed floating/Multiple rates (Peg) |
July 1971–June 29, 1976 | Freely falling/Multiple exchange rates (Peg through 1973, managed floating afterwards) | On September 29, 1975, the peso replaced the escudo. October 1973 classifies as a hyperfloat.
June 29, 1976–January 1978 | Freely falling/Crawling peg to US dollar (Managed floating) |
February 1978–June 1978 | Preannounced crawling peg to US dollar/Freely falling (Managed floating) | The Tablita Plan.
July 1978–June 30, 1979 | Preannounced crawling peg to US dollar (Peg) | The Tablita Plan.
June 30, 1979–June 15, 1982 | Peg to US dollar (Peg) | The second phase of the Tablita Plan.
June 15, 1982–December 1982 | Freely falling/Managed floating/Dual market |
January 1983–December 8, 1984 | Managed floating/Dual market (Managed floating) | Parallel market premium reaches 102 percent in early 1983. In March 1983 the intention to follow a PPP rule was announced.
December 8, 1984–January 1988 | Managed floating/Dual market (Managed floating) | PPP rule. The official rate is kept within a ±2% crawling band to US dollar.
February 1988–January 1, 1989 | De facto crawling band around US dollar/Dual market (Managed floating) | PPP rule. ±5% band. Official preannounced ±3% crawling band to US dollar. While the official rate remains within the preannounced band, the parallel market premium remains in double digits.
January 1, 1989–January 22, 1992 | Preannounced crawling band around US dollar/Dual market (Managed floating) | PPP rule. Band width is ±5%.
January 22, 1992–January 20, 1997 | De facto crawling band around US dollar/Dual market (Managed floating) | PPP rule. Band is ±5%. There is an official preannounced ±10% crawling band to US dollar. Parallel premium falls below 15 percent and into single digits.
January 20, 1997–June 25, 1998 | De facto crawling band to US dollar/Dual market (Managed floating) | Official preannounced crawling ±12.5% band to US dollar; de facto band is ±5%.
June 25, 1998–September 16, 1998 | Preannounced crawling band to US dollar/Dual market (Managed floating) | ±2.75% band.
September 16, 1998–December 22, 1998 | Preannounced crawling band to US dollar/Dual market (Managed floating) | ±3.5% band.
December 22, 1998–September 2, 1999 | Preannounced crawling band to US dollar/Dual market (Managed floating) | ±8% band.
September 2, 1999–December 2001 | Managed floating (Independently floating) | Rates are unified. Reference currency is the US dollar.
Data availability: Official rate, 1900:1–2001:12. Parallel rate, 1946:1–1998:12.

Over some periods the discrepancy between the official and parallel rate, however, proved to be small. For example, from January 1992 onwards the parallel market premium remained in single digits, and our algorithm shows that it makes little difference whether the official or the parallel rate is used. In these instances, we leave the notation in the second column that there are dual rates (for information purposes), but also note in the third column that the premium is in single digits. As noted, Chile has also experienced several periods where twelve-month inflation exceeded 40 percent. Our algorithm automatically categorizes these as freely falling exchange rate regimes, unless there is a preannounced peg, crawling peg, or narrow band that is verified, as was the case when the Tablita program was introduced in February 1978. The third column in our chronology gives further sundry information on the regime, e.g., the width of the announced and de facto bands. For Chile, which followed a crawling band policy over many subperiods, it is particularly interesting to note the changes over time in the width of the bands. The third column also includes information about developments in the parallel market premium and currency reform. As an example of the former, we note that since 1992 the parallel premium slipped into single digits; an example of the latter is given for Chile when the peso replaced the escudo in 1975.

The top panel of Figure V plots the path of the official and market-determined exchange rate for Chile from 1946. It is evident that through much of the period shown the arrangement was one of a crawling peg or a crawling band, with the rate of crawl varying through time and notably slowing as inflation began to stabilize following the Tablita plan of the early 1980s. The bottom panel plots the parallel market premium (in percent). This pattern is representative of many other countries in our sample; the premium skyrockets in periods of economic and political instability and declines into single digits as credible policies are put in place and capital controls are eased. As we will discuss in the next section, the Chilean case is also illustrative in that crawling pegs or bands are quite common. Figure VI, which shows the path of the exchange rate for the Philippines, India, and Greece, provides other examples of the plethora of crawling pegs and bands in our sample.

FIGURE V
Chile: Official and Market-Determined Exchange Rates and the Parallel Market Premium, January 1946–December 1998
Sources: International Monetary Fund, Annual Report on Exchange Arrangements and Exchange Restrictions and International Financial Statistics; Pick and Sédillot [1971]; International Currency Analysis, World Currency Yearbook, various issues.
FIGURE VI
The Prevalence of Crawling Pegs and Bands
Sources: Pick and Sédillot [1971]; International Currency Analysis, World Currency Yearbook, various issues.

III.C. Alternative Taxonomies: Comparing the Basic Categories

Altogether, our taxonomy of exchange rate arrangements includes the fourteen classifications sketched in Table V (or fifteen if hyperfloats are treated as a separate category). Of course, fourteen (or fifteen) buckets are not exhaustive, for example, if one wishes to distinguish between forward- and backward-looking crawls or bands, along the lines of Cottarelli and Giannini [1998]. Given that we are covering the entire post-World War II period, we did not have enough information to make that kind of finer distinction. Conversely, because we sometimes want to compare our classification regime with the coarser official one, we also show how to collapse our fourteen types of arrangements into five broader categories; see Table V, where the least flexible arrangements are assigned the lowest values in our scale.

TABLE V
THE FINE AND COARSE GRIDS OF THE NATURAL CLASSIFICATION SCHEME
Columns: Natural classification bucket | Fine grid | Coarse grid
No separate legal tender | 1 | 1
Preannounced peg or currency board arrangement | 2 | 1
Preannounced horizontal band that is narrower than or equal to ±2% | 3 | 1
De facto peg | 4 | 1
Preannounced crawling peg | 5 | 2
Preannounced crawling band that is narrower than or equal to ±2% | 6 | 2
De facto crawling peg | 7 | 2
De facto crawling band that is narrower than or equal to ±2% | 8 | 2
Preannounced crawling band that is wider than ±2% | 9 | 2
De facto crawling band that is narrower than or equal to ±5% | 10 | 3
Noncrawling band that is narrower than or equal to ±2% (a) | 11 | 3
Managed floating | 12 | 3
Freely floating | 13 | 4
Freely falling (includes hyperfloat) | 14 | 5
Source: The authors.
a. By contrast to the common crawling bands, a noncrawling band refers to the relatively few cases that allow for both a sustained appreciation and depreciation of the exchange rate over time. While the degree of exchange rate variability in these cases is modest at higher frequencies (i.e., monthly), lower frequency symmetric adjustment is allowed for. The Appendix provides a detailed discussion of our classification algorithm.

In the finer grid, we distinguish between preannounced policies and the less transparent de facto regimes. Since the former involve an explicit announcement while the latter leave it to financial market analysts to determine the implicit exchange rate policy, in the finer classification we treat preannouncement as less flexible than de facto. We accordingly assign it a lower number in our scale. Those not interested in testing whether announcements serve as a coordinating device (say, to make a speculative attack more likely) and only interested in sorting out the degree of observed exchange rate flexibility will prefer the coarser grid. However, even in the coarse grid, it is imperative to treat freely falling as a separate category.

IV. THE "NATURAL" TAXONOMY: CRITIQUES AND COMPARISONS

As the previous section described, our classification strategy relies importantly on the observed behavior of the market-determined exchange rate.
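The mapping from Table V's fine fourteen-bucket grid to the coarse five-bucket grid is mechanical, so a lookup table is enough. The sketch below simply transcribes the table; only the function name is new.

```python
# Fine-grid code -> coarse-grid code, transcribed from Table V.
FINE_TO_COARSE = {
    1: 1,   # No separate legal tender
    2: 1,   # Preannounced peg or currency board arrangement
    3: 1,   # Preannounced horizontal band narrower than or equal to +/-2%
    4: 1,   # De facto peg
    5: 2,   # Preannounced crawling peg
    6: 2,   # Preannounced crawling band narrower than or equal to +/-2%
    7: 2,   # De facto crawling peg
    8: 2,   # De facto crawling band narrower than or equal to +/-2%
    9: 2,   # Preannounced crawling band wider than +/-2%
    10: 3,  # De facto crawling band narrower than or equal to +/-5%
    11: 3,  # Noncrawling band narrower than or equal to +/-2%
    12: 3,  # Managed floating
    13: 4,  # Freely floating
    14: 5,  # Freely falling (includes hyperfloat)
}

def coarse_category(fine_code: int) -> int:
    """Collapse a fine Natural-classification code (1-14) into the coarse 1-5 grid."""
    return FINE_TO_COARSE[fine_code]
```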
In this section we ?rst address some potential critiques of our approach, including whether a country’s international reserve behavior should affect its classi?cation, and whether we may be mislabeling some regimes as pegs or crawls simply due to the absence of shocks. We then proceed to compare our results with the “of?cial history,” and provide examples of how our reclassi?cation may reshape some of the existing evidence on the links between exchange rate arrangements and various facets of economic activity. IV.A. The Trilogy: Exchange Rates, Monetary Policy, and Capital Controls To capture the nuances of any exchange rate arrangement, one might also want information on the presence and effectiveness of capital controls, the modalities of (sterilized or unsterilized) foreign exchange intervention, and the extent to which interest rates (or other less conventional types of intervention) are used as a means to stabilize the exchange rate. Since, for the purposes of universality, our classi?cation rests squarely on the univariate time series behavior of the nominal exchange rates (combined with historical chronologies), in this subsection we address some of these limitations to our approach. Some studies have reclassi?ed exchange rate arrangements by also factoring in the behavior of foreign exchange reserves as 26 QUARTERLY JOURNAL OF ECONOMICSreported by the IMF’s International Financial Statistics. 24 However, as Calvo and Reinhart [2002] note, using reserves has serious limitations. In Brazil and in over two dozen other countries, foreign exchange market intervention is frequently done through purchases and sales of domestic dollar-linked debt. 25 This debt is not reected in the widely used IFS reserve data, neither were the massive interventions of the Thai authorities in the forward market during 1997 and in South Africa thereafter. Furthermore, as ?nancial liberalization has spread throughout the globe, there has been a widespread switch from direct intervention in the foreign exchange market to the use of interest rate policy in the 1990s as a means to stabilize the exchange rate. 26 Picking up on this kind of policy intervention requires having the policy interest rate—the equivalent of the federal funds rate for the United States—for each country. Such data are very dif?cult to come by, and none of the other efforts at reclassi?cation have dealt with issue. Other issues arise in the context of the links between monetary, capital controls, and exchange rate policy. In particular, while ?xing the exchange rate (or having narrow bands, or crawling pegs, or bands) largely de?nes monetary policy, our two most exible arrangement categories (managed or freely oating) do not. Floating could be consistent with monetary targets, interest rate targets, or ination targeting, the latter being a relatively recent phenomenon. 27 Since our study dates back to 1946, it spans a sea change in capital controls and monetary policy regimes, and it is beyond the scope of this paper to subdivide the monetary policy framework for the most exible arrangements in 24. For instance, the algorithm used by Levy-Yeyati and Sturzenegger [2002] also uses (besides the exchange rate) reserves and base money. 
This gives rise to many cases of what they refer to as “one classi?cation variable not available.” This means that their algorithm cannot provide a classi?cation for the United Kingdom (where it is hard to imagine such data problems) until 1987 and—in the most extreme of cases—some developing countries cannot be classi?ed for any year over their 1974–2000 sample. 25. See Reinhart, Rogoff, and Savastano [2003] for a recent compilation of data on domestic dollar-linked debt. 26. There are plenty of recent examples where interest rates were jacked up aggressively to fend off a sharp depreciation in the currency. Perhaps one of the more obvious examples is in the wake of the Russian default in August 1998, when many emerging market currencies came under pressure and countries like Mexico responded by doubling interest rates (raising them to 40 percent) within a span of a couple of weeks. 27. Indeed, several of the ination targeters in our sample (United Kingdom, Canada, Sweden, etc.) are classi?ed as managed oaters. (However, it must also be acknowledged that there are many different variants of ination targeting, especially in emerging markets.) EXCHANGE RATE ARRANGEMENTS 27our grid. Apart from exchange rate policy, however, our study sheds considerable light on the third leg of the trinity—capital controls. While measuring capital mobility has not been the goal of this paper, our data consistently show that the parallel market premium dwindles into insigni?cance with capital market integration, providing a promising continuous measure of capital mobility. IV.B. Exchange Rates and Real Shocks Ideally, one would like to distinguish between exchange rate stability arising from deliberate policy actions (whether its direct foreign exchange market intervention or interest rate policy, as discussed) and stability owing to the absence of economic or political shocks. In this subsection we provide evidence that, if the exchange rate is stable and it is accordingly treated in our de jure approach to classi?cation, it is typically not due to an absence of shocks. Terms of trade shocks are a natural source of potential shocks, particularly for many developing countries. Similarly, the presence (or absence) of shocks is likely to be reected in the volatility of real GDP. To investigate the incidence and size of terms of trade shocks, we constructed monthly terms of trade series for 172 countries over the period 1960 –2001. 28 The terms of trade series is a geometric weighted average of commodity prices (?xed weights based on the exports of 52 commodities). Table VI presents a summary by region of the individual country ?ndings. The ?rst column shows the share of commodities in total exports, while s Dtot denotes the variance of the monthly change in the terms of trade of the particular region relative to Australia. Australia is our benchmark, as it is both a country that is a primary commodity exporter and has a oating exchange rate that, by some estimates, approximates an optimal response to terms of trade shocks (see Chen and Rogoff [2003]). The next three columns show the variance of the monthly change in the terms of trade of the region relative to Australia (s Dtot), exchange rate of the individual region relative to Australia (s De) and the variance of the annual change in real GDP of the region relative to Australia (s Dy). The last two columns show the 28. Table VI is based on the more extensive results in Reinhart, Rogoff, and Spilimbergo [2003]. 
28 QUARTERLY JOURNAL OF ECONOMICSvariance of the exchange rate relative to the variance of the terms of trade (s De)/(s Dtot) and output (s De)/(s Dy), respectively. A priori, adverse terms of trade shocks should be associated with depreciations and the converse for positive terms of trade shocks; greater volatility in the terms of trade should go handin-hand with greater volatility in the exchange rate. (In Chen and Rogoff [2003] there is greater volatility even under optimal policy.) Table VI reveals several empirical regularities: (a) most countries (regions) have more variable terms of trade than Australia—in some cases, such as the Middle East and the Caribbean, as much as three or four times as variable; (b) real GDP is also commonly far more volatile than in Australia; (c) most countries’ exchange rates appear to be far more stable than Australia’s, as evidenced by relatively lower variances for most of the groups; (d) following from the previous observations, the last two columns show that for most of the country groupings that the variance of exchange rate changes is lower than that of changes in the terms of trade or real GDP. Taken together, the implication of these ?ndings is that if the exchange rate is not moving, it is TABLE VI TERMS OF TRADE, OUTPUT, AND EXCHANGE RATE VARIABILITY VARIANCE RATIOS (NORMALIZED TO AUSTRALIA AND EXCLUDES FREELY FALLING EPISODES) Region Share s Dtot s De s Dy s De s Dtot s De s Dy North Africa 0.51 3.29 0.93 2.54 0.64 0.23 Rest of Africa (excluding CFA) 0.56 2.92 2.87 2.50 1.29 1.38 Middle East 0.60 4.15 0.95 3.48 0.33 0.50 Development Asia/Paci?c 0.34 2.02 0.85 2.40 0.54 0.44 Industrialized Asia 0.18 0.82 0.97 1.15 1.23 0.86 Caribbean 0.50 4.15 0.67 2.40 0.20 0.35 Central America 0.62 3.02 0.49 2.11 0.21 0.28 South America 0.63 2.03 1.08 2.15 0.66 0.52 Central East Europe 0.24 0.60 1.03 1.51 1.66 0.78 Western Europe 0.18 1.75 0.84 1.25 0.76 0.56 North America 0.33 1.64 0.60 1.12 0.47 0.54 Source: Reinhart, Rogoff, and Spilimbergo [2003] and sources cited therein. The variable de?nitions are as follows: Share 5 share of primary commodities to total exports; the next three columns show the variance of the monthly change in the terms of trade of the region relative to Australia (s Dtot), the variance of the monthly change in the exchange rate of the individual region relative to Australia (s De), and the variance of the annual change in real GDP of the region relative to Australia (s Dy); the last two columns show the variance of the exchange rate relative to the variance of the terms of trade (s De)/(s Dtot) and output (s De)/(s Dy), respectively. EXCHANGE RATE ARRANGEMENTS 29not for lack of shocks. Of course, terms of trade are only one class of shocks that can cause movement in the exchange rate. Thus, considering other kinds of shocks—political and economic, domestic, and international—would only reinforce the results presented here. IV.C. Fact and Fiction: Natural and Arti?cial? We are now prepared to contrast the of?cial view of the history of exchange rate regimes with the view that emerges from employing our alternative methodology. To facilitate comparisons, we will focus mainly on the coarse grid version of the Natural system. Figure VII highlights some of the key differences between the Natural and IMF classi?cations. The dark portions of the bars denote the cases where there is overlap between the IMF and the Natural classi?cation. 
29 The white bar shows the cases where the IMF labels the regime in one way (say, a peg in 1970 –1973) and the Natural labels it differently. Finally, the striped portions of the bars indicate the cases where the Natural classi?cation labels the regime in one way (say, freely falling, 1991–2001) and the IMF labels differently (say, freely oating). As shown in Figure VII, according to our Natural classi?cation system, about 40 percent of all regimes in 1950 were pegs (since many countries had dual/parallel rates that did not qualify as pegs). Figure VII also makes plain that some of the “pegs” in our classi?cation were not considered pegs under the of?cial classi?cation; in turn, our algorithm rejects almost half of the of?cial pegs as true pegs. Our reclassi?cation of the early postwar years impacts not only on developing countries, but on industrialized countries as well; nearly all the European countries had active parallel markets after World War II. A second reason why our scheme shows fewer pegs is that the IMF’s pre-1997 scheme allowed countries to declare their regimes as “pegged to an undisclosed basket of currencies.” This notably nontransparent practice was especially popular during the 1980s, and it was also under this that a great deal of managed oating, freely oating, and freely falling actually took place. For the period 1974 –1990 the of?cial classi?cation has roughly 60 percent of all regimes as pegs; our classi?cation has only half as many. Again, as we see in Figure VII, this comparison 29. Speci?cally, both classi?cations assigned the regime for a particular country in a given particular year to the same category. 30 QUARTERLY JOURNAL OF ECONOMICSunderstates the differences since some of our pegs are not of?cial pegs and vice versa. For the years 1974 –1990, and 1991–2001, one can see two major trends. First, “freely falling” continues to be a signi?cant category, accounting for 12 percent of all regimes from 1974 –1990, and 13 percent of all regimes from 1991–2001. For the transition economies in the 1990s, over 40 percent of the observations are in the freely falling category. Of course, what we are reporting in Figure VII is the incidence of each regime. Clearly, future research could use GDP weights and—given that FIGURE VII Comparison of Exchange Rate Arrangements According to the IMF Of?cial and Natural Classi?cations, 1950–2001 Sources: International Monetary Fund, Annual Report on Exchange Arrangements and Exchange Restrictions and International Financial Statistics; Pick and Se´dillot [1971]; International Currency Analysis, World Currency Yearbook, various issues. The dark bars show the overlap between the IMF and Natural classi?cation (i.e., for that particular year the IMF and Natural classi?cations coincide); the white bars show the cases where the IMF classi?cation labeled the regime in one way (say, a peg in 1974–1990) and the Natural classi?cation labeled it differently; the striped bars indicate the cases where the Natural classi?cation labeled the regime in one way (say, freely falling) and the IMF labeled it differently, (say, freely oating). EXCHANGE RATE ARRANGEMENTS 31low-income countries are disproportionately represented in the freely falling category—this would reveal a lower importance to this category. 30 Second, the Natural classi?cation scheme reveals a bunching to the middle in terms of exchange rate exibility, when compared with the of?cial monetary history of the world. 
Limited exibility—which under the Natural classi?cation is dominated by de facto crawling pegs—becomes notably more important. From being a very small class under the of?cial scheme, the Natural classi?cation algorithm elevates limited exibility to the second most important grouping over the past decade, just behind pegs. Another startling difference is the reduced importance of freely oating. According to the of?cial classi?cation, more than 30 percent of countries were independently oating during 1991– 2001. According to the Natural classi?cation, less than 10 percent were freely oating. This is partly a manifestation of what Calvo and Reinhart [2002] term “fear of oating,” but equally because we assign high ination oats (including ones that are of?cially “pegs”) to our new freely falling category. Indeed, more countries had freely falling exchange rates than had freely oating exchange rates! The contrast between the IMF and Natural classi?cation systems becomes even more striking when one sees just how small the overlap is between the two classi?cations country by country and year by year. As shown in Table VII, if the IMF designation of the regime is a peg (1970 –2001), there is a 44 percent probability that our algorithm will place it into a more exible arrangement. If the of?cial regime is a oat, there is a 31 percent chance we will categorize it as a peg or limited exibility. If the of?cial regime is a managed oat, there is a 53 percent chance our algorithm will categorize it as a peg or limited exibility. Whether the of?cial regime is a oat or peg, it is virtually a coin toss whether the Natural algorithm will yield the same result. The bottom of the table gives the pairwise correlation between the two classi?cations, with the of?cial classi?cation running from 1 (peg) to 4 (independently oating), and the Natural classi?cation running from 1 (peg) to 5 (freely falling). The simple correlation coef?cient is only 0.42. As one can con?rm from 30. GDP weights and population weights would, of course, present very different pictures. For example, the United States and Japan alone would increase the world’s share of oaters if it were GDP weights, while weight by population would increase the weight of ?xers by China alone. 32 QUARTERLY JOURNAL OF ECONOMICSthe chronologies, the greatest overlap occurs in the classi?cation of the G3 currencies and of the limited exibility European arrangements. Elsewhere, and especially in developing countries, the two classi?cations differ signi?cantly, as we shall see. IV.D. The Pegs That Float Figure VIII plots the parallel market premium since January 1946, in percent, for Africa, Asia, Europe, and Western Hemisphere. As is evident from the Figure VIII, for all the regions except Europe, it would be dif?cult to make the case that the breakdown of Bretton Woods was a singular event, let alone a sea change. 31 For the developing world, the levels of pre- and post-1973 volatilities in the market-determined exchange rate, as revealed by the parallel market premium, are remarkably similar. Note that for all regions, we exclude the freely falling episodes that would signi?cantly increase the volatility but also distort the scale. To give a avor of the cross-country variation within region and across time, the dashed line plots the regional average plus one standard deviation (calculated across countries and shown as a ?ve-year moving average). 
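As a rough illustration of how the series in Figure VIII can be assembled, the following Python sketch computes the parallel market premium, drops freely falling episodes, and forms the regional average together with a five-year moving average of the mean-plus-one-standard-deviation line. The DataFrame layout, the column names, and the 60-month window with its minimum-observation cutoff are illustrative assumptions, not the authors' code.

import pandas as pd

def parallel_premium_summary(df, window=60):
    """df: monthly panel with columns ['country', 'region', 'date',
    'official_rate', 'parallel_rate', 'freely_falling'], where
    'freely_falling' is a boolean flag taken from the chronologies.
    Returns, per region and month, the average parallel market premium
    and a five-year moving average of (mean + one cross-country
    standard deviation), both in percent."""
    df = df.copy()
    # Premium in percent: distance of the market-determined rate from the official rate.
    df['premium'] = 100.0 * (df['parallel_rate'] / df['official_rate'] - 1.0)
    # Freely falling episodes are excluded so they do not distort the scale.
    df = df[~df['freely_falling']]
    grouped = df.groupby(['region', 'date'])['premium']
    out = grouped.mean().rename('mean_premium').to_frame()
    out['mean_plus_std'] = grouped.mean() + grouped.std()
    # Five-year (60-month) moving average of the mean-plus-one-std line.
    out['mean_plus_std_ma'] = (out.groupby(level='region')['mean_plus_std']
                                  .transform(lambda s: s.rolling(window, min_periods=12).mean()))
    return out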
As regards Europe, the story told by Figure VIII is consistent with the characterization of the Bretton Woods system as a period of when true exchange rate stability was remarkably short-lived. From 1946 until the arrival of the late 1950s, while Europe was not oating in the modern sense—as most currencies were not 31. We plot the premium rather than the market-determined rate, as it allows us to aggregate across countries in comparable units (percent). TABLE VII FLOATING PEGS AND PEGGED FLOATS: REVISITING THE PAST, 1970–2001 Conditional probability that the regime is: In percent “Other” according to NC a conditional on being classi?ed “Peg” by IMF 44.5 “Peg” or “Limited Flexibility” according to NC conditional on being classi?ed “Managed Floating” by IMF 53.2 “Peg” or “Limited Flexibility” according to NC conditional on being classi?ed “Independently Floating” by IMF 31.5 Pairwise correlation between IMF and NC classi?cations 42.0 Sources: The authors’ calculations. a. NC refers to the Natural Classi?cation; “Other” according to NC includes limited exibility, managed oating, freely oating, and freely falling. EXCHANGE RATE ARRANGEMENTS 33FIGURE VIII Average Monthly Parallel Market Premium: 1946–1998 Sources: International Monetary Fund, Annual Report on Exchange Arrangements and Exchange Restrictions and International Financial Statistics; Pick and Se´dillot [1971]; International Currency Analysis, World Currency Yearbook, various issues. The solid line represents the average monthly parallel market premium while the dashed line shows the ?ve-year moving average of plus one standard deviation. The regional averages are calculated excluding the freely falling episodes. 34 QUARTERLY JOURNAL OF ECONOMICSconvertible—it had some variant of de facto oating under the guise of pegged of?cial exchange rates. Each time of?cial rates are realigned, the story had already unfolded in the parallel market (as shown earlier in Figure II). While the volatility of the gap between the of?cial rate and the market exchange rate is not quite in the order of magnitude observed in the developing world, the volatility of the parallel rate is quite similar to the volatility of today’s managed or freely oating exchange rates. 32 There are many cases that illustrate clearly that little changed before and after the breakup of Bretton Woods. 33 Clearly, more careful statistical testing is required to make categorical statements about when a structural break took place; but it is obvious from the ?gures that whatever break might have taken place hardly lives up to the usual image of the move from ?xed to exible rates. IV.E. The Floats That Peg Figure IX provides a general avor of how exchange rate exibility has evolved over time and across regions. The ?gure plots ?ve-year moving averages of the probability that the monthly percent change in the exchange rate remains within a 2 percent band for Africa, Asia, Europe, and Western Hemisphere (excluding only the United States). Hence, under a pegged arrangement, assuming no adjustments to the parity, these probabilities should equal 100 percent. As before, we exclude the freely falling episodes. For comparison purposes, the ?gures plot the unweighted regional averages against the unweighted averages for the “committed oaters.” (The committed oaters include the following exchange rates against the dollar: Yen, DM (euro), Australian dollar, and the UK pound.) 
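A comparable sketch for the statistic plotted in Figure IX: the rolling five-year share of months in which the absolute percent change in the dollar exchange rate stays within a plus/minus 2 percent band, averaged without weights by region and for the committed floaters. The column names, the currency codes standing in for the floaters, and the minimum-observation cutoffs are again assumptions made only for illustration.

import pandas as pd

def within_band_share(pct_change, band=2.0, window=60):
    """Rolling five-year (60-month) share of months in which the absolute
    monthly percent change stays within a +/- band percent band."""
    inside = (pct_change.abs() <= band).astype(float)
    return inside.rolling(window, min_periods=12).mean() * 100.0

def regional_vs_floaters(df, floaters=('JPY', 'DEM', 'AUD', 'GBP')):
    """df: monthly panel with columns ['country', 'region', 'date',
    'pct_change', 'freely_falling'], where 'pct_change' is the monthly
    percent change of the currency against the US dollar. The floater
    codes stand in for the committed floaters (yen, DM/euro, Australian
    dollar, UK pound). Returns unweighted regional averages of the
    within-band share and the corresponding average for the floaters."""
    df = df[~df['freely_falling']].sort_values(['country', 'date']).copy()
    df['share'] = df.groupby('country')['pct_change'].transform(within_band_share)
    regional = df.groupby(['region', 'date'])['share'].mean().unstack('region')
    floater_avg = df[df['country'].isin(floaters)].groupby('date')['share'].mean()
    return regional, floater_avg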
The dashed lines, which show plus/minus one standard deviation around the regional averages, highlight the differences between the group of oaters and the regional averages. It is evident for all regions (this applies the least to Africa) that the monthly percent variation in the exchange rate has 32. See Bordo [1993] on Bretton Woods and Bordo [2003] on a historical perspective on the evolution of exchange rate arrangements. 33. The country-by-country ?gures in “The Country Chronologies and Chartbook, Background Material to A Modern History of Exchange Rate Arrangements: A Reinterpretation” at http://www.puaf.umd.edu/faculty/papers/reinhart/ reinhart.htm are particularly revealing in this regard. EXCHANGE RATE ARRANGEMENTS 35FIGURE IX Absolute Monthly Percent Change in the Exchange Rate: Percent of Observations within a 62 Percent Band (?ve-year moving average) Sources: International Monetary Fund, Annual Report on Exchange Arrangements and Exchange Restrictions and International Financial Statistics; Pick and Se´dillot [1971]; International Currency Analysis, World Currency Yearbook, various issues. The solid line represents the average for the group while the dashed lines show plus/minus one standard deviation. The regional averages are calculated excluding the freely falling episodes. 36 QUARTERLY JOURNAL OF ECONOMICStypically been kept to a minimum—there is a great deal of smoothing of exchange rate uctuations in all regions when compared with the usual monthly variations of the committed oaters. The smoothing is most evident in Asia where the index hovers around 90 percent for most of the period, versus 60 –70 percent for the oaters. Hence, over time, the nature of the classi?cation problem has evolved from labeling something as a peg when it is not, to labeling something as oating when the degree of exchange rate exibility has in fact been very limited. IV.F. Does the Exchange Rate Regime Matter? The question of whether the exchange rate arrangement matters for various facets of economic activity has, indeed, been a far-reaching issue over the years in the literature on international trade and ?nance, and is beyond the scope of this paper. In this subsection we present a few simple exercises that do not speak to possible causal patterns between exchange rate regimes and economic performance, but are meant as illustrative of the potential usefulness of our classi?cation. First, consider Table VIII, which separates dual/parallel markets from all the other regimes where the “exchange rate is unitary,” to employ the language of the IMF. The top row shows average ination rates and real per capita GDP growth for the period 1970 –2001 for dual arrangements separately from all other regimes. This two-way split drastically alters the picture presented by the IMF’s classi- ?cation in the top and fourth rows of Table IX, which does not TABLE VIII INFLATION AND PER CAPITA REAL GDP GROWTH: A COMPARISON OF DUAL (OR MULTIPLE) AND UNIFIED EXCHANGE RATE SYSTEMS, 1970–2001 Regime Average annual ination rate Average per capita real GDP growth Uni?ed exchange rate 19.8 1.8 Dual (or multiple) exchange rates 162.5 0.8 Sources: International Monetary Fund, Annual Report on Exchange Arrangements and Exchange Restrictions and International Financial Statistics, Pick and Se´dillot [1971], International Currency Analysis, World Currency Yearbook, various issues. The averages for the two regime types (uni?ed and dual) are calculated on a country-by-country and year-by-year basis. 
Thus, if a country has a uni?ed exchange rate for most of the year, the observation for that year is included in the averages for uni?ed rates; if in the following year that same country introduces a dual market (or multiple rate) for most of the year, the observation for that year is included in the average for dual rates. This treatment allows us to deal with transitions across regime types over time. EXCHANGE RATE ARRANGEMENTS 37treat dual markets as a separate category. Dual (or multiple) exchange rate episodes are associated with an average ination rate of 163 percent versus 20 percent for uni?ed exchange markets—growth is one percentage point lower for dual arrangements. The explanation for this gap between the outcomes shown in Table VIII and the IMF’s in Table IX is twofold. First, 62 percent of the freely falling cases during 1970 –2001 were associated with parallel markets or dual or multiple exchange rates. Second, the high ination cases classi?ed by the IMF as freely oating were moved to the freely falling category Natural classi- ?cation. Again, we caution against overinterpreting the results in Table VIII as evidence of causality, as exchange controls and dual markets are often introduced amid political and economic crises—as the recent controls in Argentina (2001) and Venezuela (2003) attest. As Table IX highlights, according to the IMF, only limited exibility cases record moderate ination. On the other hand, freely oating cases record the best ination performance (9 percent) in the Natural classi?cation. Freely falling regimes exhibit an average annual ination rate 443 percent versus an ination average in the 9 to 17 percent range for the other categories (Table IX). TABLE IX DO CLASSIFICATIONS MATTER? GROWTH, INFLATION, AND TRADE ACROSS REGIMES: 1970–2001 Classi?cation scheme Peg Limited exibility Managed oating Freely oating Freely falling Average annual ination rate IMF Of?cial 38.8 5.3 74.8 173.9 n.a. Natural 15.9 10.1 16.5 9.4 443.3 Average annual per capita real GDP growth IMF Of?cial 1.4 2.2 1.9 0.5 n.a. Natural 1.9 2.4 1.6 2.3 22.5 Exports plus imports as a percent of GDP IMF Of?cial 69.9 81.0 65.8 60.6 n.a. Natural 78.7 80.3 61.2 44.9 57.1 Source: International Monetary Fund, World Economic Outlook. An n.a. denotes not available. The averages for each regime type (peg, limited exibility, etc.) are calculated on a country-by-country and year-by-year basis. Thus, if a country has a pegged exchange rate for most of the year, the observation for that year is included in the averages for pegs; if in the following year that same country has a managed oat for most of the year, the observation for that year is included in the average for managed oats. This treatment allows us to deal with transitions across regime types over time. 38 QUARTERLY JOURNAL OF ECONOMICSThe contrast is also signi?cant both in terms of the level of per capita GDP (Figure X) and per capita growth (Figure XI and Table IX). Freely falling has the lowest per capita income (US $3,476) of any category—highlighting that the earlier parallel to the HIPC debtor is an apt one—while freely oating has the highest (US $13,602). In the of?cial IMF classi?cation, limited exibility, which was almost entirely comprised of European countries, shows the largest per capita income. Growth is negative for the freely falling cases (22.5 percent) versus growth rates in the 1.6 –2.4 percent range for the other categories. 
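The regime-conditional averages in Tables VIII and IX follow directly from this country-by-country, year-by-year assignment. A minimal sketch, assuming a yearly panel in which each observation already carries the regime that prevailed for most of that year (the column names are hypothetical), is:

import pandas as pd

def averages_by_regime(panel, regime_col='natural_regime'):
    """panel: one row per country-year with columns ['country', 'year',
    regime_col, 'inflation', 'gdp_pc_growth']. Each year is assigned to
    the regime that prevailed for most of that year, which is how
    transitions across regimes are handled. Returns average inflation
    and per capita growth by regime."""
    return (panel.groupby(regime_col)[['inflation', 'gdp_pc_growth']]
                 .mean()
                 .rename(columns={'inflation': 'avg_annual_inflation',
                                  'gdp_pc_growth': 'avg_pc_real_growth'}))

# Calling the same function with regime_col='imf_regime', or with a
# unified-versus-dual indicator, gives the corresponding comparison rows.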
Once freely falling is a separate category, the differences between our other classi?cations pale relative to the differences between freely falling and all others (Table VIII). In the of?cial IMF classi?cation, freely oating shows a meager average growth rate of 0.5 percent for the independently oating cases. For the Natural classi?cation, the average growth rate quadruples for the oaters to 2.3 percent. Clearly, this exercise highlights the importance of treating the freely falling episodes separately. FIGURE X PPP Adjusted GDP per Capita across Regime Types: 1970–2001 (averaging over all regions) EXCHANGE RATE ARRANGEMENTS 39V. CONCLUDING REMARKS According to our Natural classi?cation, across all countries for 1970 –2001, 45 percent of the observations of?cially labeled as a “peg” should, in fact, have been classi?ed as limited exibility, managed or freely oating— or worse, “freely falling.” PostBretton Woods, a new type of misclassi?cation problem emerged, and the odds of being of?cially labeled a “managed oat” when there was a de facto peg or crawling peg were about 53 percent. We thus ?nd that the of?cial and other histories of exchange rate arrangements can be profoundly misleading, as a striking number of pegs are much better described as oats, and vice versa. These misclassi?cation problems may cloud our view of history along some basic dimensions. Using the IMF’s classi?cation FIGURE XI Real per Capita GDP Growth across Regime Types: 1970–2001 (averaging over all regions) Sources: International Monetary Fund, Annual Report on Exchange Arrangements and Exchange Restrictions and International Financial Statistics; Pick and Se´dillot [1971]; International Currency Analysis, World Currency Yearbook, various issues. The averages for each regime type (peg, limited exibility, etc.) are calculated on a country-by-country and year-by-year basis. Thus, if a country has a pegged exchange rate for most of the year, the observation for that year is included in the averages for pegs; if in the following year that same country has a managed oat for most of the year, the observation for that year is included in the average for managed oats. This treatment allows us to deal with transitions across regime types over time. 40 QUARTERLY JOURNAL OF ECONOMICSfor the period 1970 to 2001, for instance, one would conclude that a freely oating exchange rate is not a very attractive option—it produces an average annual ination rate of 174 percent and a paltry average per capita growth rate of 0.5 percent. This is the worst performance of any arrangement. Our classi?cation presents a very different picture: free oats deliver an average in- ation that is less than 10 percent (the lowest of any exchange rate arrangement), and an average per capita growth rate of 2.3 percent. Equally importantly, we ?nd that uni?ed exchange rate regimes vastly outperform dual or multiple exchange rate arrangements, although one cannot necessarily interpret these differences as causal. While we have focused in this paper on the exchange rate arrangement classi?cation issue, the country histories and data provided in this paper may well have consequences for theory and empirics going forward, especially the issue of accounting for dual an parallel markets. 
In her classic history of the IMF de Vries [1969] looked back at the early years of the Bretton Woods regime and noted: Multiple exchange rates were one of the ?rst problems that faced the Fund in 1946, and have probably been its most common problem in the ?eld of exchange rates. An impressive number and diversity of countries in the last twenty years have experimented with one form or another of what the Fund has called multiple currency practices, at least for a few if not most of their transactions . . . The problem of multiple rates, then, never seems entirely at an end. Thirty-four years have passed since this history was written, and multiple exchange rate practices are showing no signs of becoming passe´ . On December 2001 Argentina suspended convertibility and, in so doing, segmented the market for foreign exchange, while on February 7, 2003, Venezuela introduced strict new exchange controls—de facto creating a multiple exchange rate system. Some things never change. APPENDIX: THE DETAILS OF THE “NATURAL” CLASSIFICATION This appendix describes the details of our classi?cation algorithm, which is outlined in Section III of the paper. We concentrate on the description of the ?ne grid as shown in Table V. A. Exchange Rate Flexibility Indices and Probability Analysis Our judgment about the appropriate exchange rate classi?- cation is shaped importantly by the time-series of several meaEXCHANGE RATE ARRANGEMENTS 41sures of exchange rate variability, based on monthly observations and averaged over two-year and ?ve-year rolling windows. The ?rst of these measures is the absolute percent change in the monthly nominal exchange rate. We prefer the mean absolute change to the variance to minimize the impact of outliers. These outliers arise when, for example, there are long periods in which the exchange rate is ?xed but, nonetheless, subject to rare but large devaluations. To assess whether exchange rate changes are kept within a band, we calculate the probabilities that the exchange rate remains within a plus/minus 1, 2, and 5 percent-wide band over any given period. Two percent seems a reasonable cutoff to distinguish between the limited exibility cases and more exible arrangements, as even in the Exchange Rate Mechanism arrangement in Europe 62 1 4 bands were allowed. As with the mean absolute deviation, these probabilities are calculated over twoyear and ?ve-year rolling windows. Unless otherwise noted in the chronologies, we use the ?ve-year rolling windows as our primary measure for the reasons discussed in Section III of the paper. These rolling probabilities are especially useful to detect implicit unannounced pegs and bands. B. De Jure and de Facto Pegs and Bands Where the chronologies show the authorities explicitly announcing a peg, we shortcut the de facto dating scheme described below and zero in on the date announced as the start of the peg. We then con?rm (or not) the peg by examining the mean absolute monthly change over the period following the announcement. The chronologies we develop, which give the day, month, and year when a peg becomes operative, are essential to our algorithm. There are two circumstances where we need to go beyond simply verifying the announced peg. The ?rst case is where our chronologies indicate that the peg applies only to an of?cial rate and that there is an active parallel (of?cial or illegal) market. 
As shown in Figure III, in these cases we apply the same battery of tests to the parallel market exchange rate as we do to the of?cial rate in a uni?ed market. Second, there are the cases where the of?cial policy is a peg to an undisclosed basket of currencies. In these cases, we verify if the “basket” peg is really a de facto peg to a single dominant currency (or to the SDR). If no dominant currency can be identi?ed, we do not label the episode as a peg. Potentially, of course, 42 QUARTERLY JOURNAL OF ECONOMICSwe may be missing some de facto basket pegs, though in practice, this is almost certainly not a major issue. We now describe our approach toward detecting de facto pegs. If there is no of?cially announced peg, we test for a “de facto” peg in two ways. First, we examine the monthly absolute percent changes. If the absolute monthly percent change in the exchange rate is equal to zero for four consecutive months or more, that episode is classi?ed (for however long its lasts) as a de facto peg if there are no dual or multiple exchange rates. This allows us to identify short-lived de facto pegs as well as those with a longer duration. For instance, this ?lter allowed us to identify the Philippines’ de facto peg to the US dollar during 1995–1997 in the run-up to the Asian crisis as well as the numerous European de facto pegs to the DM well ahead of the introduction of the euro. Second, we compute the probability that the monthly exchange rate change remains within a 1 percent band over a rolling ?ve-year period: 34 P~e , 1%!, where e is the monthly absolute percentage change in the exchange rate. If this probability is 80 percent or higher, then the regime is classi?ed as a de facto peg or crawling peg over the entire ?ve-year period. If the exchange rate has no drift, it is classi?ed as a ?xed parity; if a positive drift is present, it is labeled a crawling peg; and, if the exchange rate also goes through periods of both appreciation and depreciation, it is dubbed a “noncrawling” peg. Our choice of an 80 percent threshold is not accidental, but rather we chose this value because it appears to do a very good job at detecting regimes one would want to label as pegs, without drawing in a signi?cant number of “false positives.” Our approach regarding preannounced and de facto bands follows exactly the same process as that of detecting preannounced and de facto pegs, we simply replace the 61% band with a 62% band in the algorithm. If a band is announced and the chronologies show a uni?ed exchange market, we label the episode as a band unless it had already been identi?ed as a de facto peg by the criteria described earlier. But, importantly, we also verify whether the announced and de facto bands coincide, espe- 34. There are a handful of cases where a two-year window is used. In such instances, it is noted in the chronologies. EXCHANGE RATE ARRANGEMENTS 43cially as there are numerous cases where the announced (de jure) band is much wider than the de facto band. 35 To detect such cases, we calculate the probability that the monthly exchange rate change remains within a 62% band over a rolling ?ve-year period: P~e , 2%!. If this probability is 80 percent or higher, then the regime is classi?ed as a de facto narrow horizontal, crawling, or noncrawling band (which allows for both a sustained appreciation and depreciation) over the period through which it remains continuously above the 80 percent threshold. 
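A compact Python sketch of the two de facto peg filters just described, and of the plus/minus 2 percent band analog, might look as follows; the exchange rate series is assumed to be the market-determined rate wherever dual or parallel markets are active, and the 60-month window and minimum-observation settings are illustrative choices rather than parameters taken from the paper.

import pandas as pd

def de_facto_peg_flag(e, window=60):
    """e: monthly exchange rate level. Flags months belonging to a de facto
    peg or crawling peg under the two filters described in the text."""
    pct = e.pct_change().abs() * 100.0          # absolute monthly percent change
    # Filter 1: no change at all for four or more consecutive months.
    zero_run = (pct == 0.0).astype(float).rolling(4).sum() == 4
    # Filter 2: the monthly change stays below 1 percent at least 80 percent
    # of the time over a rolling five-year window.
    p_small = (pct < 1.0).astype(float).rolling(window, min_periods=24).mean()
    return zero_run | (p_small >= 0.80)

def de_facto_band_flag(e, band=2.0, window=60):
    """Same construction with a +/-2 percent band; an 80 percent rolling
    share flags a de facto narrow (horizontal, crawling, or noncrawling) band."""
    pct = e.pct_change().abs() * 100.0
    inside = (pct < band).astype(float).rolling(window, min_periods=24).mean()
    return inside >= 0.80

Whether a flagged episode is then labeled a fixed parity, a crawling peg, or a noncrawling peg depends on the drift of the rate over the window, as described above.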
In the case where the preannounced bands are wide (meaning equal to or greater than 65%), we also verify 65% bands. The speci?cs for each case are discussed in the country chronologies. For instance, as shown earlier in Table IV, in the case of Chile we found that the de facto band during 1992–1998 was narrower (65%) than that which was announced at the time (610% and 612.5%). In the case of Libya, which had an announced 77 percent wide band along a ?xed central parity pegged to the SDR over the March 1986 –December 2001, we detected a 65% crawling band to the US dollar. C. Freely Falling As we emphasize in the text, there are situations, almost invariably due to high ination or hyperination, in which there are mega-depreciations in the exchange rate on a routine and sustained basis. We have argued that it is inappropriate and misleading to lump these cases—which is what all previous classi?cations (IMF or otherwise) do—with oating rate regimes. We label episodes freely falling on the basis of two criteria. First, periods where the twelve-month rate of ination equals or exceeds 40 percent are classi?ed as freely falling unless they have been identi?ed as some form of preannounced peg or preannounced narrow band by the above criteria. 36 The 40 percent 35. Mexico’s exchange rate policy prior to the December 1994 crisis is one of numerous examples of this pattern. Despite the fact that the band was widening over time, as the oor of the band was ?xed and the ceiling was crawling, the peso remained virtually pegged to the US dollar for extended periods of time. 36. It is critical that the peg criteria supersede the high ination criteria in the classi?cation strategy, since historically a majority of ination stabilization efforts have used the exchange rate as the nominal anchor and in many of these episodes ination rates at the outset of the peg were well above our 40 percent threshold. 44 QUARTERLY JOURNAL OF ECONOMICSination threshold is not entirely arbitrary, as it has been identi?ed as an important benchmark in the literature on the determinants of growth (see Easterly [2001]). As a special subcategory of freely falling, we dub as hyperoats those episodes that meet Cagan’s [1956] classic de?nition of hyperination (50 percent or more ination per month). A second situation where we classify an exchange rate regime as freely falling are the six months immediately following a currency crisis—but only for those cases where the crisis marks a transition from a ?xed or quasi-?xed regime to a managed or independently oating regime. 37 Such episodes are typically characterized by exchange rate overshooting. This is another situation where a large change in the exchange rate does not owe to a deliberate policy; it is the reection of a loss of credibility and recurring speculative attacks. To date these crisis episodes, we follow a variant of the approach suggested by Frankel and Rose [1996]. Namely, any month where the depreciation exceeds or equals 12 1 2 percent and also exceeds the preceding month’s depreciation by at least 10 percent is identi?ed as a crisis. 38 To make sure that this approach yields plausible crisis dates, we supplement the analysis with our extensive country chronologies, which also shed light on balance of payments dif?culties. 
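The freely falling rules can be summarized in code along the following lines; the input series and the way the chronology information (preannounced pegs, regime transitions) is encoded are assumptions made for the purpose of illustration.

import pandas as pd

def freely_falling_flag(inflation_12m, depreciation, preannounced_peg_or_band,
                        regime_transition):
    """Monthly inputs (all pandas Series on the same index):
    inflation_12m            - twelve-month inflation rate, in percent
    depreciation             - monthly percent depreciation of the currency
    preannounced_peg_or_band - True where the chronologies record an announced
                               peg or narrow band (which supersedes the
                               inflation criterion)
    regime_transition        - True in months where a crisis marks a move from
                               a fixed or quasi-fixed regime to a floating one
    Returns a boolean Series of freely falling months."""
    # Criterion 1: twelve-month inflation at or above 40 percent.
    high_inflation = (inflation_12m >= 40.0) & (~preannounced_peg_or_band)
    # Crisis months, following the variant of Frankel and Rose [1996] used in
    # the text: depreciation of at least 12.5 percent that also exceeds the
    # previous month's depreciation by at least 10 percentage points.
    crisis = (depreciation >= 12.5) & (depreciation - depreciation.shift(1) >= 10.0)
    # Criterion 2: the six months immediately following such a crisis, but only
    # when it coincides with a transition out of a fixed or quasi-fixed regime.
    events = (crisis & regime_transition).astype(float)
    post_crisis = events.shift(1).rolling(6, min_periods=1).max().fillna(0.0).astype(bool)
    return high_inflation | post_crisis

def hyperfloat_flag(monthly_inflation):
    """Cagan's [1956] threshold: 50 percent or more inflation per month."""
    return monthly_inflation >= 50.0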
39 Since, as a rule, freely falling is not typically an explicit arrangement of choice, our chronologies also provide for all the freely falling cases, the underlying de jure or de facto arrangement (for example, dual markets, independently oating, etc.). D. Managed and Freely Floating Our approach toward identifying managed and freely oating episodes is basically to create these classes out of the residual pool of episodes that, after comprehensive application of our algorithm, have not been identi?ed as an explicit or implicit peg or some form of band, and that are not included in the freely 37. This rules out cases where there was a devaluation and a repeg and cases where the large exchange rate swing occurred in the context of an already oating rate. 38. Frankel and Rose [1996] do not date the speci?c month of the crisis but the year; their criteria call for a 25 percent (or higher) depreciation over the year. 39. For instance, the Thai crisis of July 1997 does not meet the modi?ed Frankel-Rose criteria. While the depreciation in July exceeded that of the preceding month by more than 10 percent, the depreciation of the Thai Baht in that month did not exceed 25 percent. For these cases, we rely on the chronologies of events. EXCHANGE RATE ARRANGEMENTS 45falling category. To proxy the degree of exchange rate exibility under freely oating and managed oats, we construct a composite statistic, e/P~e , 1%!, where the numerator is the mean absolute monthly percent change in the exchange rate over a rolling ?ve-year period, while the denominator ags the likelihood of small changes. For de jure or de facto pegs, this index will be very low (close to or equal to zero), while for the freely falling cases it will be very large. As noted, we only focus on this index for those countries and periods which are candidates for freely or managed oating. We tabulate the frequency distribution of our index for the currencies that are most transparently oating, these include US dollar/DM-euro, US dollar/yen, US dollar/UK pound, US dollar/Australian dollar, and US dollar/New Zealand dollar beginning on the date in which the oat was announced. We pool the observations (the ratio for rolling ?ve-year averages) for all the oaters. So, for example, since Brazil oated the real in January 1999, we would calculate the ratio only from that date forward. If Brazil’s ratio falls inside the 99 percent con?dence interval (the null hypothesis is freely oating and hence the rejection region is located at the lower tail of the distribution of the oater’s group), the episode is characterized as freely oating. If that ratio falls in the lower 1 percent tail, the null hypothesis of freely oating is rejected in favor of the alternative hypothesis of managed oat. It is important to note that managed by this de?nition does not necessarily imply active or frequent foreign exchange market intervention—it refers to the fact that for whatever reason our composite exchange rate variability index, e/P(e , 1%), does not behave like the indices for the freely oaters. E. Dual or Multiple Exchange Rate Regimes and Parallel Markets Dual rates are essentially a hybrid arrangement. There are cases or periods in which the premium is nil and stable so that the of?cial rate is representative of the underlying monetary policy. The of?cial exchange rate could be pegged, crawling, or maintained within some bands, or in a few cases allowed to oat. 
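The composite flexibility statistic and the comparison against the pooled distribution of the committed floaters, described in subsection D above, can be sketched as follows; the 60-month window, the minimum-observation cutoff, and the use of a simple empirical 1 percent quantile in place of a formal confidence interval are simplifying assumptions.

import numpy as np
import pandas as pd

def flexibility_index(e, window=60):
    """Composite statistic: mean absolute monthly percent change divided by
    the probability that the change stays below 1 percent, both over a
    rolling five-year window. Low values point to pegs or tightly managed
    rates; very high values to freely falling episodes."""
    eps = e.pct_change().abs() * 100.0
    mean_abs = eps.rolling(window, min_periods=24).mean()
    p_small = (eps < 1.0).astype(float).rolling(window, min_periods=24).mean()
    return mean_abs / p_small.replace(0.0, np.nan)

def classify_float(candidate_index, pooled_floater_index, alpha=0.01):
    """candidate_index: rolling index values for the episode under review,
    computed only from the date the float was announced (e.g., Brazil from
    January 1999 onward). pooled_floater_index: pooled rolling index values
    for the committed floaters (dollar rates of the DM/euro, yen, pound,
    Australian and New Zealand dollars). Values in the lower 1 percent tail
    of the pooled distribution are labeled managed floating; otherwise the
    null of freely floating is not rejected."""
    cutoff = np.nanquantile(pooled_floater_index, alpha)
    return np.where(candidate_index < cutoff, 'managed floating', 'freely floating')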
But there are countless episodes where the divergence between the of?cial and parallel rate is so large that the picture is incomplete without knowledge of what the parallel market rate is doing. The 46 QUARTERLY JOURNAL OF ECONOMICScountry chronologies are critical in identifying these episodes. In the cases where dual or multiple rates are present or parallel markets are active, we focus on the market-determined rates instead of the of?cial exchange rates. As shown in Figure III, we subject the market-determined exchange rate (dual, multiple, or parallel) to the battery of tests described above. 40 This particular category will especially reshape how we view the 1940s through the 1960s, where about half the cases in the sample involved dual markets. UNIVERSITY OF MARYLAND, COLLEGE PARK HARVARD UNIVERSITY REFERENCES Bahmani-Oskooee, Mohsen, Ilir Miteza, and A. B. M. Nasir, “The Long-Run Relationship between Black Market and Of?cial Exchange Rates: Evidence from Panel Cointegration,” Economics Letters, LXXVI (2002), 397–404. Baxter, Marianne, and Alan Stockman, “Business Cycle and Exchange Rate Regime: Some International Evidence,” Journal of Monetary Economics, XXIII (1989), 377– 400. Bordo, Michael, “The Bretton Woods International Monetary System: A Historical Overview,” in A Retrospective on the Bretton Woods System, Michael Bordo and Barry Eichengreen, eds. (Chicago, IL: University of Chicago Press, 1993), pp. 3–98. ——, “Exchange Rate Regimes in Historical Perspective,” National Bureau of Economic Research Working Paper No. 9654, 2003. Cagan, Philip, “The Monetary Dynamics of Hyperination,” in Studies in the Quantity Theory of Money, Milton Friedman, ed. (Chicago, IL: University of Chicago Press, 1956), pp. 25–117. Calvo, Guillermo A., and Carmen M. Reinhart, “Fear of Floating,” Quarterly Journal of Economics, CXVII (2002), 379– 408. Chen, Yu-chin, and Kenneth S. Rogoff, “Commodity Currencies,” Journal of International Economics, VX (2003), 133–160. Claessens, Stijn, “Estimates of Capital Flight and Its Behavior,” Revista de Ana´ lisis Econo´mico, XII (1997), 3–34. Cotarelli, C., and C. Giannini. “Credibility Without Rules? Monetary Frameworks in the Post Bretton-Woods Era,” IMF Occasional Paper No. 154 (Washington, DC: International Monetary Fund, 1998). de Vries, Margaret G., “Multiple Exchange Rates,” in The International Monetary Fund 1945–1965, Margaret de Vries and J. Keith Horse?eld, eds. (Washington, DC: International Monetary Fund, 1969), pp. 122–151. Easterly, William, The Elusive Quest for Growth (Cambridge, MA: MIT Press, 2001). Frankel, Jeffrey A., and Andrew K. Rose, “Currency Crashes in Emerging Markets: An Empirical Treatment,” Journal of International Economics, XXXXI (1996), 351–368. Ghei, Nita, Miguel A. Kiguel, and Stephen A. O’Connell, “Parallel Exchange Rates in Developing Countries: Lessons from Eight Case Studies,” in Parallel Exchange Rates in Developing Countries, Miguel Kiguel, J. Saul Lizondo, and Stephen O’Connell, eds. (New York, NY: Saint Martin’s Press, 1997), pp. 17–76. 40. There are a few such cases in the sample, where only government transactions take place at the of?cial rate. EXCHANGE RATE ARRANGEMENTS 47Ghosh, Atish, Anne-Marie Gulde, Jonathan Ostry, and Holger Wolfe, “Does the Nominal Exchange Rate Regime Matter?” National Bureau of Economic Research Working Paper No. 5874, 1997. International Currency Analysis, World Currency Yearbook (New York, NY: International Currency Analysis, 1983–1998), various issues. 
International Monetary Fund, Annual Report on Exchange Restrictions (Washington, DC: International Monetary Fund, 1949–1978), various issues. International Monetary Fund, Annual Report on Exchange Arrangements and Exchange Restriction (Washington, DC: International Monetary Fund, 1979– 2001), various issues. Kiguel, Miguel, J. Saul Lizondo, and Stephen A. O’Connell, eds., Parallel Exchange Rates in Developing Countries (New York, NY: Saint Martin’s Press, 1997). Levy-Yeyati, Eduardo, and Federico Sturzenegger, “Classifying Exchange Rate Regimes: Deeds versus Words,” mimeo, Universidad Torcuato Di Tella, 2002. Marion, Nancy P., “Dual Exchange Rates in Europe and Latin America,” World Bank Economic Review, VIII (1994), 213–245. Pick, Franz, World Currency Reports (New York, NY: Pick Publishing Corporation, 1945–1955), various issues. ——, Black Market Yearbook (New York, NY: Pick Publishing Corporation, 1951– 1955), various issues. ——, Pick’s Currency Yearbook (New York, NY: Pick Publishing Corporation, 1955–1982), various issues. ——, World Currency Reports (New York, NY: International Currency Analysis Inc., 1983–1998), various issues. Pick, Franz, and Rene´ Se´dillot, All the Monies of the World: A Chronicle of Currency Values (New York, NY: Pick Publishing Corporation, 1971). Reinhart, Carmen M., and Kenneth S. Rogoff, “A Modern History of Exchange Rate Arrangements: A Reinterpretation,” National Bureau of Economic Research Working Paper No. 8963, 2001. Reinhart, Carmen M., and Kenneth S. Rogoff, “Parts I and II. Background Material to a Modern History of Exchange Rate Arrangements: A Reinterpretation,” mimeo, International Monetary Fund, Washington, DC, 2003 at http:// www.puaf.umd.edu/faculty/papers/reinhart/reinhart.htm. Reinhart, Carmen M., Kenneth S. Rogoff, and Miguel A. Savastano, “Addicted to Dollars,” National Bureau of Economic Research Working Paper No. 10015. 2003. Reinhart, Carmen M., Kenneth S. Rogoff, and Antonio Spilimbergo, “When Hard Shocks Hit Soft Pegs,” mimeo, International Monetary Fund, Washington, DC, 2003. United Nations, United Nations Yearbook (New York: United Nations, 1946– 1960), various issues. 48 QUARTERLY JOURNAL OF ECONOMICSInflation Bets or Deflation Hedges
Inflation Bets or Deflation Hedges? The Changing Risks of Nominal Bonds
John Y. Campbell, Adi Sunderam, and Luis M. Viceira 1
First draft: June 2007. This version: March 21, 2011
1 Campbell: Department of Economics, Littauer Center, Harvard University, Cambridge MA 02138, USA, and NBER. Email john_campbell@harvard.edu. Sunderam: Harvard Business School, Boston MA 02163. Email asunderam@hbs.edu. Viceira: Harvard Business School, Boston MA 02163 and NBER. Email lviceira@hbs.edu. We acknowledge the extraordinarily able research assistance of Johnny Kang. We are grateful to Geert Bekaert, Andrea Buraschi, Jesus Fernandez-Villaverde, Wayne Ferson, Javier Gil-Bazo, Pablo Guerron, John Heaton, Ravi Jagannathan, Jon Lewellen, Monika Piazzesi, Pedro Santa-Clara, George Tauchen, and seminar participants at the 2009 Annual Meeting of the American Finance Association, Bank of England, European Group of Risk and Insurance Economists 2008 Meeting, Sixth Annual Empirical Asset Pricing Retreat at the University of Amsterdam Business School, Harvard Business School Finance Unit Research Retreat, Imperial College, Marshall School of Business, NBER Fall 2008 Asset Pricing Meeting, Norges Bank, Society for Economic Dynamics 2008 Meeting, Stockholm School of Economics, Tilburg University, Tuck Business School, and Universidad Carlos III in Madrid for helpful comments and suggestions. This material is based upon work supported by the National Science Foundation under Grant No. 0214061 to Campbell, and by Harvard Business School Research Funding.
Abstract
The covariance between US Treasury bond returns and stock returns has moved considerably over time. While it was slightly positive on average in the period 1953–2009, it was unusually high in the early 1980s and negative in the 2000s, particularly in the downturns of 2001–2 and 2008–9. This paper specifies and estimates a model in which the nominal term structure of interest rates is driven by four state variables: the real interest rate, temporary and permanent components of expected inflation, and the "nominal-real covariance" of inflation and the real interest rate with the real economy. The last of these state variables enables the model to fit the changing covariance of bond and stock returns. Log bond yields and term premia are quadratic in these state variables, with term premia determined by the nominal-real covariance. The concavity of the yield curve – the level of intermediate-term bond yields relative to the average of short- and long-term bond yields – is a good proxy for the level of term premia. The nominal-real covariance has declined since the early 1980s, driving down term premia.
1 Introduction
Are nominal government bonds risky investments, which investors must be rewarded to hold? Or are they safe investments, whose price movements are either inconsequential or even beneficial to investors as hedges against other risks? US Treasury bonds performed well as hedges during the financial crisis of 2008–9, but the opposite was true in the early 1980s. The purpose of this paper is to explore such changes over time in the risks of nominal government bonds. To understand the phenomenon of interest, consider Figure 1, an update of a similar figure in Viceira (2010).
The Ögure shows the history of the realized beta (regression coe¢ cient) of 10-year nominal zero-coupon Treasury bonds on an aggregate stock index, calculated using a rolling three-month window of daily data. This beta can also be called the ìrealized CAPM betaî, as its forecast value would be used to calculate the risk premium on Treasury bonds in the Capital Asset Pricing Model (CAPM) that is often used to price individual stocks. Figure 1 displays considerable high-frequency variation, much of which is attributable to noise in the realized beta. But it also shows interesting low-frequency movements, with values close to zero in the mid-1960ís and mid-1970ís, much higher values averaging around 0.4 in the 1980ís, a spike in the mid-1990ís, and negative average values in the 2000ís. During the two downturns of 2001ñ3 and 2008ñ9, the average realized beta of Treasury bonds was about -0.2. These movements are large enough to cause substantial changes in the Treasury bond risk premium implied by the CAPM. Nominal bond returns respond both to expected ináation and to real interest rates. A natural question is whether the pattern shown in Figure 1 is due to the changing beta of ináation with the stock market, or of real interest rates with the stock market. Figure 2 summarizes the comovement of ináation shocks with stock returns, using a rolling three-year window of quarterly data and a Örst-order quarterly vector autoregression for ináation, stock returns, and the three-month Treasury bill yield to calculate ináation shocks. Because high ináation is associated with high bond yields and low bond returns, the Ögure shows the beta of realized deáation shocks (the negative of ináation shocks) which should move in the same manner as the bond return beta reported in Figure 1. Indeed, Figure 2 shows a similar history for the deáation beta as for the nominal bond beta. Real interest rates also play a role in changing nominal bond risks. In the period 1since 1997, when long-term Treasury ináation-protected securities (TIPS) were Örst issued, Campbell, Shiller, and Viceira (2009) report that TIPS have had a predominantly negative beta with stocks. Like the nominal bond beta, the TIPS beta was particularly negative in the downturns of 2001ñ3 and 2008ñ9. Thus not only the stock-market covariances of nominal bond returns, but also the covariances of two proximate drivers of those returns, ináation and real interest rates, change over time. In the CAPM, assetsí risk premia are fully explained by their covariances with the aggregate stock market. Other modern asset pricing models allow for other ináuences on risk premia, but still generally imply that stock-market covariances have considerable explanatory power for risk premia. Time-variation in the stock-market covariances of bonds should then be associated with variation in bond risk premia, and therefore in the typical shape of the Treasury yield curve. Yet the enormous literature on Treasury bond prices has paid little attention to this phenomenon. This paper begins to Öll this gap in the literature. We make three contributions. First, we write down a simple term structure model that captures time-variation in the covariances of ináation and real interest rates, and therefore of nominal bond returns, with the real economy and the stock market. Importantly, the model allows these covariances, and the associated risk premia, to change sign. 
It also incorporates more traditional ináuences on nominal bond prices, speciÖcally, real interest rates and both transitory and temporary components of expected ináation. Second, we estimate the parameters of the model using postwar quarterly US time series for nominal and ináation-indexed bond yields, stock returns, realized and forecast ináation, and realized second moments of bond and stock returns calculated from daily data within each quarter. The use of realized second moments, unusual in the term structure literature, forces our model to Öt the phenomenon of interest. Third, we use the estimated model to describe how the changing stock-market covariance of bonds should have altered bond risk premia and the shape of the Treasury yield curve. The organization of the paper is as follows. Section 2 reviews the related literature. Section 3 presents our model of the real and nominal term structures of interest rates. Section 4 describes our estimation method and presents parameter estimates and historical Ötted values for the unobservable state variables of the model. Section 5 discusses the implications of the model for the shape of the yield curve and the movements of risk premia on nominal bonds. Section 6 concludes. An Appendix to this paper available online (Campbell, Sunderam, and Viceira 2010) presents details of the model solution and additional empirical results. 22 Literature Review Nominal bond risks can be measured in a number of ways. A straightforward approach is to measure the covariance of nominal bond returns with some measure of the marginal utility of investors. According to the Capital Asset Pricing Model (CAPM), for example, marginal utility can be summarized by the level of aggregate wealth. It follows that the risk of bonds can be measured by the covariance of bond returns with returns on the market portfolio, often proxied by a broad stock index. Alternatively, one can measure the risk premium on nominal bonds, either from average realized excess bond returns or from variables that predict excess bond returns such as the yield spread (Shiller, Campbell, and Schoenholtz 1983, Fama and Bliss 1987, Campbell and Shiller 1991) or a more general linear combination of forward rates (Stambaugh 1988, Cochrane and Piazzesi 2005). If the risk premium is large, then presumably investors regard bonds as risky. This approach can be combined with the Örst one by estimating an empirical multifactor model that describes the cross-section of both stock and bond returns (Fama and French 1993). These approaches are appealingly direct. However, the answers they give depend sensitively on the sample period that is used. The covariance of nominal bond returns with stock returns, in particular, is extremely unstable over time and even switches sign (Li 2002, Guidolin and Timmermann 2006, Christiansen and Ranaldo 2007, David and Veronesi 2009, Baele, Bekaert, and Inghelbrecht 2010, Viceira 2010). The average level of the nominal yield spread is also unstable over time as pointed out by Fama (2006) among others. An intriguing fact is that the movements in the average yield spread seem to line up to some degree with the movements in the CAPM beta of bonds. The average yield spread, like the CAPM beta of bonds, was lower in the 1960ís and 1970ís than in the 1980ís and 1990ís. Viceira (2010) shows that both the short-term nominal interest rate and the yield spread forecast the CAPM beta of bonds over the period 1962ñ2007. 
On the other hand, during the 2000ís the CAPM beta of bonds was unusually low while the yield spread was fairly high on average. Another way to measure the risks of nominal bonds is to decompose their returns into several components arising from di§erent underlying shocks. Nominal bond returns are driven by movements in real interest rates, ináation expectations, and the risk premium on nominal bonds over short-term bills. Several papers, including Barsky (1989), Shiller and Beltratti (1992), and Campbell and Ammer (1993) have estimated the covariances of these components with stock returns, assuming the 3covariances to be constant over time. The literature on a¢ ne term structure models also proceeds by modelling state variables that drive interest rates and estimating prices of risk for each one. Many papers in this literature allow the volatilities and risk prices of the state variables to change over time, and some allow risk prices and hence risk premia to change sign. 2 Several recent a¢ ne term structure models, including Dai and Singleton (2002) and Sangvinatsos and Wachter (2005), are highly successful at Ötting the moments of nominal bond yields and returns. Some papers have also modelled stock and bond prices jointly, but no existing models allow bond-stock covariances to change sign. 3 The contributions of our paper are Örst, to write down a simple term structure model that allows for bond-stock covariances that can move over time and change sign, and second, to confront this model with historical US data. The purpose of the model is to Öt new facts about bond returns in relation to the stock market, not to improve on the ability of a¢ ne term structure models to Öt bond market data considered in isolation. Our introduction of a time-varying covariance between state variables and the stochastic discount factor, which can switch sign, means that we cannot write log bond yields as a¢ ne functions of macroeconomic state variables; our model, like those of Beaglehole and Tenney (1991), Constantinides (1992), Ahn, Dittmar and Gallant (2002), and Realdon (2006), is linear-quadratic. 4 To solve our model, we use a general result on the expected value of the exponential of a non-central chi-squared 2Dai and Singleton (2002), Bekaert, Engstrom, and Grenadier (2005), Sangvinatsos and Wachter (2005), Wachter (2006), Buraschi and Jiltsov (2007), and Bekaert, Engstrom, and Xing (2009) specify term structure models in which risk aversion varies over time, ináuencing the shape of the yield curve. These papers take care to remain in the essentially a¢ ne class described by Du§ee (2002). 3 Bekaert et al. (2005) and other recent authors including Mamaysky (2002) and díAddona and Kind (2006) extend a¢ ne term structure models to price stocks as well as bonds. Bansal and Shaliastovich (2010), Eraker (2008), and Hasseltoft (2008) price both stocks and bonds in the longrun risks framework of Bansal and Yaron (2004). Piazzesi and Schneider (2006) and Rudebusch and Wu (2007) build a¢ ne models of the nominal term structure in which a reduction of ináation uncertainty drives down the risk premia on nominal bonds towards the lower risk premia on ináationindexed bonds. Similarly, Backus and Wright (2007) argue that declining uncertainty about ináation explains the low yields on nominal Treasury bonds in the mid-2000ís. 
4Du¢ e and Kan (1996) point out that linear-quadratic models can often be rewritten as a¢ ne models if we allow the state variables to be bond yields rather than macroeconomic fundamentals. Buraschi, Cieslak, and Trojani (2008) also expand the state space to obtain an a¢ ne model in which correlations can switch sign. 4distribution which we take from the Appendix to Campbell, Chan, and Viceira (2003). To estimate the model, we use a nonlinear Öltering technique, the unscented Kalman Ölter, proposed by Julier and Uhlmann (1997), reviewed by Wan and van der Merwe (2001), and recently applied in Önance by Binsbergen and Koijen (2008). 3 A Quadratic Bond Pricing Model We now present a term structure model that allows for time variation in the covariances between real interest rates, ináation, and the real economy. In the model, both real and nominal bond yields are linear-quadratic functions of the vector of state variables and, consistent with the empirical evidence, the conditional volatilities and covariances of excess returns on real and nominal assets are time varying. 3.1 The SDF and the real term structure We start by assuming that the log of the real stochastic discount factor (SDF), mt+1 = log (Mt+1), follows the process: mt+1 = xt + 2 m 2 + "m;t+1; (1) whose drift xt follows an AR(1) process subject to a heteroskedastic shock t "x;t+1 and a homoskedastic shock "X;t+1: xt+1 = x (1 x ) + xxt + t "x;t+1 + "X;t+1: (2) The innovations "m;t+1, "x;t+1, and "X;t+1 are normally distributed, with zero means and constant variance-covariance matrix. We allow these shocks to be cross-correlated and adopt the notation 2 i to describe the variance of shock "i , and ij to describe the covariance between shock "i and shock "j . To reduce the complexity of the equations that follow, we assume that the shocks to xt are orthogonal to each other; that is, xX = 0. The state variable xt is the short-term log real interest rate. The price of a single-period zero-coupon real bond satisÖes P1;t = Et [exp fmt+1g] ;so that its yield 5Capitalizing On Innovation: The Case of Japan
|
CD ROM Annuaire d'Entreprises France prospect (avec ou sans emails) : REMISE DE 10 % Avec le code réduction AUDEN872
10% de réduction sur vos envois d'emailing --> CLIQUEZ ICI Retour à l'accueil, cliquez ici Robert Dujarric and Andrei Hagiu Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author. Capitalizing On Innovation: The Case of Japan Robert Dujarric Andrei Hagiu Working Paper 09-114Capitalizing On Innovation: The Case of Japan1 By Robert Dujarric2 and Andrei Hagiu3 Abstract Japan’s industrial landscape is characterized by hierarchical forms of industry organization, which are increasingly inadequate in modern sectors, where innovation relies on platforms and horizontal ecosystems of firms producing complementary products. Using three case studies - software, animation and mobile telephony -, we illustrate two key sources of inefficiencies that this mismatch can create, all the while recognizing that hierarchical ecosystems have played a major role in Japan’s success in manufacturing-driven industries (e.g. Toyota in automobiles and Nintendo with videogames). First, hierarchical industry organizations can “lock out” certain types of innovation indefinitely by perpetuating established business practices. For example, the strong hardware and manufacturing bias and hierarchical structures of Japan’s computer and electronics firms is largely responsible for the virtual non-existence of a standalone software sector. Second, even when the vertical hierarchies produce highly innovative sectors in the domestic market, the exclusively domestic orientation of the “hierarchical industry leaders” can entail large missed opportunities for other members of the ecosystem, who are unable to fully exploit their potential in global markets. For example, Japan’s advanced mobile telecommunications systems (services as well as handsets) suffer from a “Galapagos effect”: like the unique fauna of these remote islands they are only found in the Japanese archipelago. Similarly, while Japanese anime is renowned worldwide for its creativity, there is no global Japanese anime content producer comparable to Disney or Pixar. Instead, anime producers are locked into a highly fragmented domestic market, dominated by content distributors (TV stations and DVD companies) and advertising agencies. We argue that Japan has to adopt legislation in several areas in order to address these inefficiencies and capitalize on its innovation: strengthening antitrust and intellectual property rights enforcement; improving the legal infrastructure (e.g. producing more corporate lawyers); lowering barriers to entry for foreign investment and facilitating the development of the venture capital sector. 1 The authors would like to thank Mayuka Yamazaki from the Harvard Business School Japan Research Center for her assistance throughout the project; Curtis Milhaupt (discussant) and participants at the Columbia Law School conference on Business Law and Innovation for very helpful comments on the first version of this paper. They are also grateful to the Research Institute for Economy Trade and Industry (RIETI) where they were visiting fellows, and (for Robert Dujarric) Temple University, Japan Campus and the Council on Foreign Relations/Hitachi Fellowship in Japan. 2 Temple University, Japan Campus. robertdujarric@gmail.com 3 Harvard Business School. ahagiu@hbs.edu1. Introduction Japan faces two interconnected challenges. 
The first one is common to all advanced economies: the rising competition from lower-cost countries with the capacity to manufacture mid-range and in some cases advanced industrial products. For Japan this includes not only China but also South Korea. Though South Korea is by no means a low-wage nation, the combination of lower costs (not only labor but also land and a lower cost of living) than Japan with a very advanced industrial base makes it a formidable competitor in some sectors. Unlike – or to a significantly greater extent than – other advanced economies e.g. the United States, Japan also confronts a challenge posed by the global changes in the relative weights of manufacturing and services, including soft goods, which go against the country’s longstanding comparative advantage and emphasis on manufacturing. A growing share of global value chains is now captured by services and soft goods, such as software, while the percentage which accrues to manufacturing is declining. Many of the new industries that have been created or grown rapidly in the past twenty years have software and information platforms at their core: PCs (operating systems such as Windows); the Internet (web browser such as Firefox, Internet Explorer, Safari); online search, information and e-commerce (Amazon, Bloomberg, eBay, Facebook); digital media (Apple’s iPod and iTunes combination); etc. In this context, it is striking that, as Japan has become more economically advanced, its strengths have continued to be in manufacturing. . When it comes to services and soft goods (software, content), it has either failed to produce competitive companies, or, when it has, these companies have failed to establish themselves in foreign markets. There are, for example, no truly global Japanese hotel chains, nor do any Japanese corporations compete internationally with DHL, FedEx and UPS; there are no Japanese global information services companies comparable to Bloomberg, Google and Thomson Reuters, nor is there any international Japanese consulting or accounting firm. Even more strikingly, Japanese companies are also absent from international markets in sectors which are very strong at home, such as mobile telecommunications and anime production.The principal thesis we lay out in the current paper is that these weaknesses can be attributed to Japan’s hierarchical, vertically integrated and manufacturing-driven forms of industry organization, which are increasingly inadequate in modern sectors, where innovation relies on platforms and horizontal ecosystems of firms producing complementary products. Using three case studies - software, animation and mobile telephony - we illustrate two key sources of inefficiencies that this mismatch can create, all the while recognizing that hierarchical ecosystems have played a major part in Japan’s success in manufacturing-driven industries (e.g. Toyota in automobiles, Nintendo and Sony in videogames). First, hierarchical industry organizations can “lock out” certain types of innovation indefinitely by perpetuating established business practices. For example, the strong hardware and manufacturing bias of Japan’s computer and electronics firms is largely responsible for the virtual non-existence of a standalone software sector. 
Second, even when the vertical hierarchies produce highly innovative sectors in the domestic market, the exclusively domestic orientation of the “hierarchical industry leaders” can entail large missed opportunities for other members of the ecosystem, who are unable to fully exploit their potential in global markets. For example, Japan’s advanced mobile telecommunications systems (services as well as handsets) suffer from a “Galapagos effect”: like the unique fauna of these remote islands they are only found in the Japanese archipelago. Similarly, while Japanese anime is renowned worldwide for its creativity, there is no global Japanese anime content producer comparable to Disney or Pixar. Instead, anime producers are locked into a highly fragmented domestic market, dominated by content distributors (TV stations and DVD companies) and advertising agencies. Consequently, Japan is facing the challenge of creating a post-industrial exporting base. This in turns requires an environment conducive to innovation. Japanese policymakers are aware of the issue. Many have called for efforts to replicate Silicon Valley, while others hope that the next Microsoft will be Japanese. These ideas, as interesting as they are, can only come to fruition decades from now. Silicon Valley is the product of over half a century of development. Its foundations include massive levels of highskilled immigration, well-funded, cosmopolitan, dynamic and competitive private and public universities, a very liquid labor market, a vibrant venture capital industry, an enormous Pentagon R&D budget, and the common law. Japan’s chances of duplicating another Silicon Valley are therefore rather low. There are however soft good and service industries in which Japan is already very strong, such as mobile telephony and anime. These are “low hanging fruits,” which offer far better prospects for Japanese industry internationally than competing with Silicon Valley. We argue that Japan has to adopt legislation in several areas in order to address the inefficiencies described above and capitalize on its innovation capabilities in these sectors: strengthening antitrust and intellectual property rights enforcement; improving the legal infrastructure (e.g. producing more business law attorneys); lowering barriers to entry for foreign investment and facilitating the development of the venture capital sector. The rest of the paper is organized as follows. In the next section we provide a brief overview and background on the fundamental shift spearheaded by computer-based industries from vertically integrated to horizontal, platform-driven industrial structures. Section 3 describes the historical characteristics of Japanese innovative capabilities. In section 4 we use three industry case studies (software, animation and mobile telecommunications) to illustrate how Japan’s manufacturing-inspired modes of industrial organization are preventing the country from taking advantage of its innovative power. Finally, in section 5 we lay out some possible solutions and we conclude in section 6. 2. The new order of industrial innovation: ecosystems and platf orms The rapid development of computer-based industries since the second half of the twentieth century has spearheaded and accelerated the shift from vertically integrated, hierarchical industry structures (e.g. mainframes) to horizontal structures, composed of platform-centered ecosystems (e.g. PCs). 
While this change has been pervasive throughout most sectors of the economy, it has been most salient in technology industries with short product life-cycles. As a result, the nature of competition and competitive advantage has shifted away from pursuing quality through tightly integrated vertical “stacks” of components and towards building scalable “multi-sided platforms” (cf. Evans Hagiu and Schmalensee (2006)), connecting various types of interdependent complementors and end-users (e.g. videogame consoles - game developers; Windows - software application developers and hardware manufacturers). Personal Computers (PCs): the quintessential ecosystem Ecosystems are most simply defined as constellations of firms producing complementary products or essential components of the same system. Today’s PC industry is the archetype of modern ecosystems. There are two critical components, the operating system and the microprocessor, which are controlled by two companies – Microsoft and Intel. The other ecosystem participants “gravitate” around the two “ecosystem leaders” (cf. Gawer and Cusumano 2002): hardware manufacturers (OEMs) like Dell, HP, Toshiba and Sony, independent software developers such as Intuit and Adobe Systems, third party suppliers of hardware accessories and, last but not least, end users. Ecosystem leadership is defined by three elements: i) control of the key standards and interfaces which allow the components supplied by various ecosystem participants to work with each other (e.g. the application programming interfaces - APIs - controlled by Windows); ii) control of the nature and timing (pace) of innovation throughout the industry (e.g. Intel’s successive generations of microprocessors and Microsoft’s successive versions of Windows) and iii) ability to appropriate a large share of the value created by the entire ecosystem. Microsoft in particular has positioned Windows as the multi-sided platform at the center of the PC ecosystem. Its power comes from generating network effects through the interdependence between the participations of the other ecosystem members: the value to users increases with the number and quality of independent application developers which support Windows and vice versa, third-party software vendors are drawn to Windows in proportion to the latter’s installed base of users. One source of restraint (today more so than in the 1990s) on Microsoft and Intel abusing their eco-system leadership is the existence of second-tier players in their respective markets, who could provide alternatives. Thus Linux, Google’s office suite, AMD, and Apple act as brakes on the possible misuse of ecosystem leadership on the part of the Microsoft and Intel. The fear of anti-trust action further restrains Microsoft and Intel from aggressive behavior against the other members of the ecosystem. These factors (competition and anti-trust regulations) are essential. Without them the ecosystem might degenerate into a slow moving institution, more preoccupied with extracting economic rent from consumers than with innovation and price competition. It is important to emphasize that the horizontal PC ecosystem that we know today has little to do with the structure of the PC industry at its beginning in the early 1980s. And even less to do with the structure of the computer industry in the early 1950s. At that time, each computer was on its own island. 
Only large corporations, government agencies, and universities bought mainframe computers, and they did so from a few large companies like Burroughs, UNIVAC, NCR, Control Data Corporation, Honeywell and IBM. Customers were buying vertically integrated hardware-software systems. IBM emerged as the clear leader from this pack by being first to adopt a modular and ecosystem-based approach with its System 360: it adopted standardized interfaces and allowed outside companies to supply select parts of the computer system (e.g. external hard drives). Nevertheless, this remained largely a vertically integrated approach as the main components – hardware, processor and operating system - were done in house. The radical change occurred in 1980, when IBM decided that the only way to get ahead of its competitors in the PC business (Apple, Commodore and Tandy) was to outsource the operating system and the microprocessor to Microsoft and Intel in order to speed up the innovation cycle. The strategy worked in that the IBM PC became the dominant personal computer. It backfired when Microsoft and Intel took control of the PC ecosystem and licensed their platforms to other OEMs such as Compaq, HP and Dell, which eventually relegated IBM to “one of the crowd”. IBM’s original PC business, ThinkPad, is now a subsidiary of the Chinese computer manufacturer Lenovo. Economic drivers of vertical disintegration and ecosystem structures While at first glance it may seem that every step of vertical disintegration in the computer industry was a strategic decision involving real tradeoffs (e.g. giving up some control vs. accelerating investment throughout the ecosystem) that could have gone either way, there is a clear sense in which the process of vertical disintegration was inevitable due to technological and economic factors beyond the control of any single actor. And this process has occurred (or is occurring) in many other technology industries: videogames, smart mobile phones, wireless mobile services, home entertainment devices, etc. There are three fundamental forces driving vertical disintegration. First, rapid technological progress leads to economies of specialization. Except in the very early stages of an industry, vertically integrated firms cannot move the innovation frontier in all segments of the value chain. As industries grow, there is scope for specializing in some layers (a key strategic decision then becomes which layers to keep in-house and which to open to third parties) and bringing other firms on board in order to develop the others. The second important factor in the evolution of technology-based industries is modularity and the emergence of standards (cf. Baldwin and Clark 1999). Increasing productivity throughout the value chain naturally drive firms to design their products and services in a modular fashion, with well-specified interfaces, which can be used by different production units within the same company or by third-party suppliers if applicable (this is related to the first factor mentioned above). The third and final driver of vertical disintegration is increasing consumer demand for product variety. The vertically integrated model works well for one-size-fitsall solutions. As soon as customers demand horizontally differentiated products, it becomes hard for one integrated firm to satisfy the entire spectrum of customer demands. 
This tension was famously described by Henry Ford: “We are happy to supply any car color as long as it is black.” Therefore, vertical disintegration is more likely to occur in industries with a large number of consumers with diverse needs than in markets with a small number of clients with similar needs. Thus, ecosystems are the natural consequence of vertical disintegration. They have become the most efficient market-based solution to the problem of producing complex systems in a large variety of technology-intensive industries, satisfying a large variety of end user demands and maintaining a sufficiently high rate of innovation throughout the system. It is important to emphasize however that not every industry will move towards horizontal, platform-centered ecosystems. For example, Airbus and Boeing, the two biggest players in the commercial airliner business, have increasingly relied on outsourcing and risk-sharing partners. Boeing’s latest jetliner, the 787, relies on risk-sharing partners involved in key R&D decisions, and much of the plane is actually not made but Boeing itself. Still, neither Airbus nor Boeing have created an ecosystem similar to the PC industry. Both companies sit at the apex of the industrial pyramid, make the key decisions, and sell the product directly to the customer (as opposed to Microsoft and Intel, where PCs are actually sold by the manufacturers such as Lenovo or Dell, which assemble the computers). This can be explained, among other factors, by the small number of customers (airlines and governments) for products with extremely high unit costs; the need to maintain extremely demanding and well-documented safety standards; and the direct involvement of governments in a sector with close links to national defense. 4 In light of our argument in this paper it may seem perhaps surprising that the best description of the necessity of relying on ecosystems that we have encountered comes from a senior executive at a Japanese high technology firm – NTT DoCoMo, Japan’s leading mobile operator. In discussing the reasons behind the success of NTT DoCoMo’s i-mode mobile Internet service, he explained: “In today’s IT industries, no major service can be successfully created by a single company.” In the three case studies below, we will see that, despite the success of a few remarkable ecosystem leaders in a few sectors (Nintendo, NTT DoCoMo, Sony and 4 It should also be noted that some of the outsourcing by Airbus and Boeing is motivated by the need to find foreign industrial partners in order to increase the likelihood of sales to the airlines of those countries. Toyota come to mind), these were exceptions in Japan’s broader industrial landscape. Most of Japan’s ecosystems remain strikingly similar to vertical hierarchies and the ecosystem leaders (i.e. the companies at the top of these hierarchies) are predominantly domestically focused, which makes it hard for everyone in the subordinate layers to compete globally. These eco-systems recreate, to some extent, a corporate hierarchy. It is not rare for the eco-system leader (say Toyota) to have equity stakes in some of the subordinate members. In the case of Toyota however, this hierarchical system has produced a highly-competitive international business. This is mainly because value in Toyota’s sector (automobiles) still comes largely from manufacturing rather than from services and soft goods. 3. 
Historical background on Japan’s innovativeness In order to achieve a better understanding of Japan’s innovation ways, it is helpful to provide a short historical perspective on their evolution. Opening to foreign trade Britain, as the leader of the Industrial Revolution, entered the industrial age on its own terms. Japan had a radically different experience. To preserve their hegemony over the country, the House of Tokugawa, which established the Edo shogunate (1600-1868), banned almost all foreign trade after the 1630s. Despite its isolation 5 , the country was not backward. It possessed a well-functioning bureaucracy and a good transportation network; there was no banditry, and literacy was high by the standards of the age. Commercial activity was modern for the era. Japanese merchants devised some of the world’s first futures trading instruments for Osaka’s commodities exchanges. But isolation froze Japanese technology at a 17 th century level. There were improvements here and there during the two centuries of shogunal power, but nothing on 5 Japan did have some overseas trade through the Ryukyus (Okinawa) and Chinese and Dutch merchants in Japan but foreign commerce was miniscule compared to island nations of similar size such as Britain. the scale of what occurred in Europe. Whereas Europe embraced innovation, the shogunate was fundamentally committed to a static posture, at least compared to European societies. Therefore, when western gunboats breached Japan’s seclusion in the 1850s, the country did not have a single railroad track, whereas Britain, smaller than Japan, already had 10,000 kilometers of railways in 1851. 6 Nor did Japan have any modern industrial base comparable to the ones being developed in Europe and North America. Japan lacked not only hardware, but also the “software” necessary to succeed during the Industrial Revolution. There was no effective civil law system. “Law” meant government edicts; there was no formal concept of civil arbitration with the state acting as a referee by providing both courts and enforcement mechanisms. 7 In fact, Japan did not have a bar with lawyers until the late 19 th century. 8 As long as Japan was cut off from other countries, it could live in peace with its 17 th century palanquins in a 19 th century world of steam engines. Unfortunately for Japan’s shoguns, once the Europeans, Russians, and Americans approached the country’s shore, its industrial immaturity put the very existence of the nation in jeopardy, as the westerners enforced trade agreements on Japan which gave themselves unilateral advantages in commerce and investment (what are known as the “unequal treaties”). Modernization during Meiji era and intellectual heritage Japan succeeded in escaping the stagnation of the Edo Era through a program of rapid modernization that transformed the country into an industrialized society (though it remained much less industrialized, especially in heavy industry, than the West until the 1930s). 
Still, as noted by Katz (1998), although Meiji Japan welcomed the intellectual contributions of free traders as well as protectionists, the Japanese economy developed along lines that were more restrictive of free trade than Britain and more tolerant of oligopolies and monopolies than the United States (after the adoption of US antitrust 6 Encyclopedia Britannica Online, “History > Great Britain, 1815–1914 > Social cleavage and social control in the early Victorian years > The pace of economic change”, http://www.britannica.com/eb/article- 44926/United-Kingdom 6 November 2006 7 See John Owen Haley, Authority without Power: Law and the Japanese Paradox. New York: Oxford University Press, 1991 (1995 Oxford UP paperback). 8 See Mayumi Itoh, The Hatoyama Dynasty. (New York: Palgrave MacMillan, 2003), p. 21ff. legislation). By the 1930s, due to the deterioration of the international climate and the beginning of the war in Asia (1931 in Manchuria), Japan moved towards more government involvement in the economy. The post-war economic system did retain important aspects of the semi-controlled economy, especially in the the 1940s and 1950s when the government controlled access to foreign exchange. In later years, many of these controls were removed, but the ruling Liberal Democratic Party, in order to ensure social-stability and its own political survival, followed economic policies that often favored oligopolies, protectionism, and hindered foreign investment. Moreover, the combination of the influence of Marxian thought (at least until the 1970s) and anti-liberal conservatism meant that economic liberalism has been on the defensive since 1945. Thus Japanese economic DNA is far less liberal than America’s. The consequences of this intellectual heritage for innovation are threefold. First, it has fostered a strong manufacturing bias, based on the idea that a nation without production facilities is a weak country. Unfortunately for Japan, many of the recent (last 20 years) innovations which have increased productivity and made possible the development of new industries are unrelated to manufacturing. New ways of dealing with new eco-systems, platform-based industries, legal developments in intellectual property (IPR), new financial instruments (admittedly a field currently enjoying a rather negative reputation) are fundamentally tied to service and soft goods sectors. Japan has been ill-equipped to deal with them. Second, besides a continued focus on industry, some form of hostility towards outsiders survives. When a foreign takeover beckons, Japanese corporate leaders’ first reflex is often, though not always, to band together against the alien, rather than seek a way to profit from the new investor. The merger of Nissin and Myojo, both leaders in instant noodles, orchestrated to prevent Steel Partners of the US from acquiring Myojo, is an illustrative example. It kept the foreigners at bay but deprived Myojo’s shareholders of the higher price offered by the Americans. There are, of course, cases of successful foreign investment into Japan (e.g. Renault’s acquisition of a controlling stake in Nissan) but overall, among the major developed economies, Japan is the least hospitable to foreign capital, with foreign direct investment (FDI) stock estimated at 4.1% of gross domestic product (GDP) vs. an average for developed countries of 24.7%. 9 This form of “business xenophobia” has slowed down innovation by preventing foreign ideas and managers from playing a bigger role in the Japanese economy. 
Third, Japan, like some continental European states from which its economic ideology is derived, has historically been far more tolerant of monopolies and oligopolies. Though anti-trust enforcement has gained somewhat it recent years, it remains deficient by Anglo-American standards. This can have a particularly nefarious impact on innovation. Companies that are already actively involved in international markets will continue to innovate, even if they enjoy monopolistic (or oligopolistic) advantages in their home market, in order to remain competitive abroad. But businesses which are not international and benefit from economic rents derived from monopolistic or oligopolistic arrangements domestically will have fewer innovation incentives. Industrial structures The US Occupation authorities dismantled the zaibatsu (?? - “financial cliques” – same ideographs as the word “chaebol,” used to denote Korea’s family-controlled conglomerates). These were large financial-industrial family conglomerates that controlled Japanese industry and finance. But in the decades following the war, partly as a way to prevent foreign takeovers, Japan developed a complex form of crossshareholdings known as “keiretsu,” (??) or “affiliated companies” by opposition to the family-owned zaibatsus. In some cases these keiretsus were vertical, with one large corporation at the top and affiliates in a subordinate position. In other cases, there was no real center, with several corporations linked by cross-shareholdings and informally coordinated by their top managers . 10 9 16.0% for the US, but as a larger economy, the US should, ceteris parabus, have a lower percentage of FDI stock than Japan, which is three times smaller. Source: UNCTAD, http://www.unctad.org/sections/dite_dir/docs/wir09_fs_jp_en.pdf (accessed 29 September 2009). 10 On corporate governance, see Gilson, Ronald and Curtis J. Milhaupt. “Choice as Regulatory Reform: The Case of Japanese Corporate Governance.” Columbia University Law School Center for Law and Economic Studies Working Paper No. 251 and Stanford Law School John M. Olin Program in Law and Economics Working Paper No. 282, 2004; Hoshi, Takeo and Anil K. Kashyap. Corporate Financing and Governance in Japan: The Road to the Future. Cambridge MA: The MIT Press, 2001; Jackson, Gregory. In the decades which followed the Showa War (1931-45 11 ), Japanese industry showed a great capacity to innovate, both in the area of manufacturing processes and also with the development of new products. Moreover, by breaking the stranglehold of trading companies (sogo shosha ????) Japanese businesses such as Toyota, Sony, and Nintendo were able to conquer international markets. In particular Toyota displayed some of the key strengths of Japanese industry. Its constant focus on product improvement and quality control gave it the credibility to win foreign market share and make its brand, unknown overseas until the 1970s, synonymous with quality. Moreover, Toyota was able to export its industrial ecosystem. As it built factories overseas, many of its Japanese suppliers followed suit, establishing their own plants in foreign countries. In a way, Toyota functioned as a sort of trading company for its suppliers by opening the doors to foreign markets which on their own they would not have been able to access. Legal systems A second factor with a significant bearing on innovation is the legal system. 
“One of the principal advantages of common law legal systems,” wrote John Coffee of Columbia University Law School, “is their decentralized character, which encourages self-regulatory initiatives, whereas civil law systems may monopolize all law-making initiatives.” 12 This is especially true in new industries where the absence of laws governing businesses leads to officials opposing their veto to new projects on the grounds that they are not specifically authorized by existing regulations. In the United States, innovative legal developments based on the jurisprudence of courts and new types of “Toward a comparative perspective on corporate governance and labour.” Tokyo: Research Institute on the Economy Trade and Industry, 2004 (REITI Discussion Papers Series 04-E-023); Milhaupt, Curtis J. “A Lost Decade for Japanese Corporate Governance Reform?: What’s Changed, What Hasn’t, and Why.” Columbia Law School, The Center for Law and Economic Studies, Working Paper No. 234, July 2003; Miyajima, Hideaki and Fumiaki Kuroki. “Unwinding of Cross-shareholding: Causes, Effects, and Implications.” (Paper prepared for the forthcoming Masahiko Aoki, Gregory Jackson and Hideaki Miyajima, eds., Corporate Governance in Japan: Institutional Change and Organizational Diversity.) October 2004; Patrick, Hugh. “Evolving Corporate Governance in Japan.” Columbia Business School, Center on Japanese Economy and Business, Working Paper 220 (February 2004). 11 To use the term which Yomiuri Shimbun chose among several (Great East Asia War, Pacific War, etc.) to denote the decade and a half of fighting which ended with Japan’s capitulation on 15 August 1945. 12 Coffee, “Convergence and Its Critics,” 1 (abstract). contacts have facilitated the development of new industries, something that is harder in Japan and in other code law legislations. For example, some analysts have noted how U.S. law gives more leeway to create innovative contractual arrangements than German law, 13 on which most of Japan’s legal system is built. Thus entrepreneurs, and businesses in general, are more likely to face legal and regulatory hurdles in code law jurisdictions where adapting the law to new technologies, new financial instruments, and other innovations, is more cumbersome. 3. Three industry case studies The following case studies are designed to illustrate the two key types of inefficiencies which result from the mismatch between Japan’s prevailing forms of industrial structures (vertically integrated and hierarchical) and the nature of innovation in new economy industries such as software and the Internet, where building horizontal platforms and ecosystems is paramount. First, the vertical structures can stifle some forms of innovation altogether (e.g. software). Second, they can limit valuable innovations to the domestic market (e.g. anime and mobile telephony). From these case studies, we can draw some lessons on the steps which Japan could take to enhance its capabilities to harness its strong innovative capabilities. 3.1. Software Given the degree of high-technology penetration in the Japanese economy and the international competitiveness of the hardware part of its consumer electronics sector, the weakness (indeed, the non-existence) of Japan’s packaged software industry looks puzzling. 
Indeed, software production in Japan has historically suffered from chronic fragmentation among incompatible platforms provided by large systems integrators 13 Steven Casper, “The Legal Framework for Corporate Governance: The Influence of Contract Law on Company Strategies in Germany and the United States,” in Hall and Soskice, eds. Varieties Of Capitalism, 329.(Hitachi, Fujitsu, NEC) and domination by customized software. Despite efforts by the Ministry of the Economy, Trade and Industry (METI, formerly MITI), there are very few small to medium-size software companies in Japan compared to the United States or even Europe. As a result, even the domestic market is dominated by foreign software vendors such as Microsoft, Oracle, Salesforce.com and SAP. Needless to add, there are virtually no standalone software exports from Japan to speak of. There is of course the videogame exception, which we do not include in our discussion here because the videogame market has a dynamic of its own, largely independent of the evolution of the rest of the software industry. There are two root causes for this peculiar situation: a strong preference for customized computer systems by both suppliers and customers and a long-standing bias (also on both sides) in favor of hardware over software. These two factors have perpetuated a highly fragmented, vertically integrated and specialized computer industry structure, precluding the emergence of modular systems and popular software platforms (e.g. Windows). In turn, the absence of such platforms has thwarted the economies of scale needed to offer sufficient innovation incentives to independent software developers, which have played a critical role in the development of the IT industry in the United States. The prevalence of customized computer systems and its origins In the early 1960s MITI orchestrated licensing agreements that paired each major Japanese computer system developer with a U.S. counterpart. Hitachi went with RCA then IBM, NEC with Honeywell, Oki with Sperry Rand, Toshiba with GE, Mitsubishi with TRW and Fujitsu went on its own before joining IBM. The intent was to make sure Japan embarked on the computer revolution and that it competed effectively with thenalmighty IBM. Since each of Japan’s major computer system suppliers had a different U.S. partner however, each had a different antecedent for its operating system. In fact, even IBM-compatible producers only had the instruction set licensed from IBM in common; their operating systems were incompatible among themselves. Very rapidly, each of the Japanese companies found it profitable to lock-in its customers by supplying highly customized software, often free of charge, which meant that clients had only one source of upgrades, support and application development. Over time, many of the former U.S. partners were forced to exit the industry due to intense global competition from IBM. However, their Japanese licensees remained and perpetuated their incompatible systems. Next, in the United States, following a highly publicized antitrust suit, IBM was forced to unbundle its software and hardware in 1969. The IBM System/360 was the first true multi-sided platform in the computer industry, in that it was the first to support thirdparty suppliers of software applications and hardware add-ons. It marked the beginning of the vertical disintegration and modularization of the computer industry. 
Computer systems were no longer solely provided as fully vertically integrated products; instead, users could mix and match a variety of complementary hardware and software products from independent suppliers. This led to the development of an immensely successful software industry. The new industry became prominent with the workstation and PC revolutions in the early 1980s, which brought computing power into the mainstream through smaller, cheaper, microprocessor-based machines. An important consequence was the great potential created for software/hardware platforms, which a handful of companies understood and used to achieve preeminence in their respective segments: Sun Microsystems in the workstation market, Apple and Microsoft in the PC market. By contrast, in Japan there was no catalyst for such a sweeping modularization and standardization process. Despite the adoption of a US-inspired Anti-Monopoly Law in 1949, enforcement of antitrust in Japan has been weak by US and EU standards (cf. Miwa and Ramseyer (2005)) - no one required the large systems makers to unbundle software from hardware. There were also no incentives to achieve compatibility. During the last three decades, the customized software strategies became entrenched. Clients were increasingly locked into proprietary computer systems and had to set up their own software divisions to further customize these systems, thus increasing sunk costs and reducing the likelihood of switching to newer systems. This vicious cycle essentially locked out any would-be standalone software vendor in the mainframe and minicomputer markets. Japanese computer manufacturers tried to extend the same strategy to the workstation and PC market, but failed due to competitive pressure from foreign (especially American) suppliers. The best known example is NEC, which until around 1992 held a virtual monopoly on the Japanese PC market with its "PC-98." Its hardware platform architecture was closed (like Apple's) and its operating system, though based on DOS, remained incompatible with the popular MS-DOS PC operating system. In the end, however, NEC's monopoly was broken by Dell, Compaq and low-cost Taiwanese PC makers (1991-92). There also seems to have been a preference for customized computing systems and software on the demand-side of the market. In Japan, like everywhere else in the world, the first private sector users of computer systems (mainframes in the beginning) were large corporations. However Japanese corporations have traditionally been strongly committed to adhering to internal business procedures, leading to a "how can we modify the software to fit our operations?" mindset, rather than the "how can we adapt our operations in order to take advantage of this software?" reasoning that prevailed in the U.S. For this reason, Japanese companies preferred to develop long-term relationships with their hardware suppliers and to depend on those suppliers, or on vertically related 14 software developers for highly customized software solutions. As major Japanese companies have generally relied on professionals hired straight of college who stayed with the same employer for their entire professional lives, each Japanese conglomerate has developed its own corporate culture to a greater extent than in the United States where a liquid labor means there is a much greater level of cross-fertilization between firms and consequently less divergence than in Japan in their corporate culture. 
The prevalence of closed, proprietary strategies prevented the economies of scale necessary for the emergence of a successful, standalone Japanese software industry. No single computing platform became popular enough with users to provide sufficient innovation incentives for packaged application software. 15 14 That is, belonging to the same keiretsu. 15 Even at its height, the standardized NEC PC-98 platform commanded a market roughly four times smaller than its U.S. counterpart for a population half the size of the U.S. Furthermore, it was incompatible Government policies and the hardware bias The second important factor which has shaped the evolution of Japan’s software industry is the longstanding bias in favor of hardware over software. Japanese computer companies' business strategy had always involved giving away software for free along with their hardware systems as a tool to lock in customers. Ironically, this bias was probably inherited from IBM, whose success they were seeking to emulate. IBM itself remained convinced that hardware was the most valuable part of computer systems, which led to its fateful (and, with today’s benefit of hindsight, strategically misguided) 1981 decision to outsource its PC operating system to Microsoft, whose subsequent rise to power signaled the beginning of the software platform era. This development was lost on Japanese computer makers, however, for several years. And MITI, which still viewed IBM as Japan's main competitor, was at that time immersed in a highly ambitious "Fifth Generation Project," a consortium that aimed to build a new type of computer with large-scale parallel-processing capabilities, thus departing from the traditional von Neumann model. The drawback, however, was that the project focused everyone's attention on building highly specialized machines (basically mainframes), whereas the computer industry was moving towards smaller, general purpose machines, based on open and non-proprietary architectures (Unix workstations) or on proprietary but very popular operating system platforms (PCs), which greatly expanded the computer market. MITI and member companies of the FifthGeneration consortium realized only later the potential of making a common, jointlydeveloped software platform available to the general public rather than concentrating on systems designed for a handful of specialized machines. This led to MITI's next initiative, The Real-time Operating-system Nucleus (TRON). The main idea of TRON was to build a pervasive and open (i.e. non-proprietary) software/hardware platform in response to the market dominance of Intel and Microsoft. TRON was supposed to be a cross-device platform: computers and all sorts of other devices everywhere would be linked by the with the MS-DOS PC standard platform, which isolated Japanese PC software developers from the worldwide PC market. same software, thus finally providing a popular platform for Japanese software developers. Although TRON was a promising platform concept; it unfortunately received little support from the major industrial players, in particular NEC, which viewed it as a direct threat to its PC monopoly. More importantly, it could not break into the crucial education market 16 precisely because it was incompatible with both the NEC PC- 98 DOS and the IBM PC DOS standards, both of which had sizable advantages in terms of installed bases of users and applications. 
Thus, TRON was too little too late: the big winners of the PC and workstation revolutions had already been defined and none of them were Japanese computer companies. Most importantly, the intended creation of an independent Japanese software industry did not materialize. Other factors Comparative studies of the U.S. and Japanese software industries also mention several other factors that further explain the phenomenon described above. One is the relative underdevelopment of the venture capital market for technology-oriented start-up companies in Japan compared to the United States, where venture capital had widely supported the emergence of successful small and medium-size software companies. This gap, however, has been recently narrowed due to METI policies designed to improve the availability of venture capital to technology firms. Another factor is the Japanese system of “life time employment” for regular employees of large businesses, which results in low labor mobility and is quite compatible with the "closed garden" approach to technological innovation. By contrast, high labor mobility has been a crucial driving force behind the "Silicon Valley model" of technological innovation, which is based on spillovers, transfers, cumulative inventions and a high degree of modularity. The latter model seems to have been more appropriate for creating a vibrant software industry. “Life time employment” is losing ground, but the top managerial ranks of large Japanese corporations remain dominated, and often monopolized, by those who have been with the company since they joined the labor market. 16 Callon (1995) contains an informative account of the conflict between METI and the Ministry of Education regarding the adoption of TRON by public educational institutions. 3.2. Animation 17 Few Japanese industries are as specific to Japan and as creative as animation - or “anime” 18 . Japanese anime has gained global popularity: it was estimated to account for 60% of TV anime series worldwide (Egawa et al. 2006). And it has significant influence over many creators outside Japan: the setting of Terminator 2 was influenced by Akira, a classic Japanese anime series; the director of Lilo & Stitch (Disney’s 2002 animation film) admitted that it was inspired by Hayao Miyazaki’s My Neighbor Totoro; The Matrix movies owed the starting point of their story to Ghost in the Shell, a Japanese anime movie created by Production IG; Disney’s immensely popular Lion King (released in 1994) was based on Kimba the White Lion, a 1964 Japanese TV anime series. Yet despite the global influence of Japanese animation, the Japanese anime production companies have never been able to capitalize on the popularity of their creations. The industry is highly fragmented (there are about 430 animation production companies) and dominated by distributors—TV stations, movie distributors, DVD distributors and advertising agencies -, which control funding and hold most of the copyrights on content. As a result, most animation producers are small companies laboring in obscurity. No Japanese animation production company comes even close to the size of Walt Disney Co. or Pixar. In 2005 Disney had revenues of $32 billion, whereas Toei Animation, the largest animation production company in Japan, had revenue of only ¥21 billion ($175 million at the average 2005 exchange rate). 
Whereas Disney and Pixar spend in excess of ¥10 billion to produce one anime movie; Japanese anime production companies’ average budget is ¥0.2-0.3 billion (Hayao Miyazaki’s Studio Ghibli is an exception: it invests ¥1-3 billion in one production). And while Japanese animes are omnipresent in global markets, Japanese anime production companies have virtually no international business presence. Their lack of business and 17 This subsection draws heavily on Egawa et al. (2006). 18 In this case study “anime” refers to animation motion pictures, as opposed to manga cartoons. financial strength can be traced down to the inefficient mode of organization of the Japanese anime “ecosystem”. Background on Japanese anime The first animation in Japan was created in 1917 with ten minute add-ons to action films. Thereafter, short animation films were produced for educational and advertisement purposes. In early 1950s, Disney’s animation and its world of dreams became very popular in the aftermath of defeat in World War II. In 1956, Toei Doga (current Toei Animation) was established as a subsidiary of Toei, a major film distributor, with the stated objective to become “the Disney of the Orient.” Some anime industry experts trace the current plight of Japanese anime production companies back to the 1963 release of Astro Boy, the first TV anime series. Its creator and producer was Osamu Tezuka, a successful manga (comic book) writer. Being more concerned with making Astro Boy popular rather than with turning it into a financial success, Tezuka accepted the low price offered by a TV station in exchange for distributing the series. In order to keep the production cost to a minimum, he reduced the number of illustrations to a third of the Disney standard (from 24 images per second to 8 images). He felt that Disney’s stories were too simplistic and lacked depth, therefore he believed that the complexity of the Astro Boy story would compensate for the inferior animation quality. Astro Boy became the first big hit in the history of Japanese TV animation, reaching a viewership of over 40% of households. However, due to intensified competition and lack of business acumen, Tezuka’s anime production company (Mushi Production) subsequently ran into financial difficulties and in 1973 filed for bankruptcy. From the early days, the majority of anime productions had derived their content from manga. In 2005, roughly 60% of anime contents were based on manga - the rest were based on novels or original stories created by the production companies themselves. The sales of manga - comic books and magazines - in 2004 were ¥505 billion, and accounted for 22% of the published goods. This was twice as much as the anime industry revenues, which in 2005 stood at ¥234 billion in 2005. Contrary to popular perception in the West, Japanese anime extends far beyond cartoons for children: “to define anime simply as Japanese cartoons gives no sense of the depth and variety that make up the medium. Essentially, anime works include everything that Western audiences are accustomed to seeing in live-action films—romance, comedy, tragedy, adventure, even psychological probing of a kind seldom attempted in recent mass-culture Western film or television.” (Napier 2005) Production committees The structure of the anime industry has not evolved much since its beginnings. 
The approximately 430 production companies work essentially as contractors for the powerful distribution companies: TV stations, movie distributors, DVD distributors and advertising agencies. And only 30–40 of the producers have the capacity to become main contractors; the rest work as subcontractors for the main contractors. Main contractors are responsible for delivering the end products to TV stations or movie distributors, and took charge of the majority of the processes. Subcontracting companies can only handle one or two processes. It usually takes 4–5 months to produce one 30-minute TV episode. Production of anime movies is even more labor intensive and time consuming: a 60- minute anime movie usually takes over one and a half years. In both TV anime series and anime movies, the labor intensive process of drawing and coloring animations is often outsourced to Asian countries including China, Korea, Taiwan, Philippines, Thailand, Vietnam and India. Most anime projects in Japan are done by “production committees,” an institution specific to the Japanese market, which provides financing and coordinates the distribution of the resulting contents through various channels. These committees have been created in the mid-1980s in order to alleviate the scarcity of funding sources for animation. Indeed, Japanese banks had traditionally been reluctant to lend to businesses which were exclusively focused on “soft” goods (content, software, etc.), particularly when they involved a high degree of risk. 19 As a result, TV stations often had to fund the production 19 Indeed, like for most creative content businesses (movies, novels), only 10 out of every 100 animations make any profits. cost of TV anime series since production companies were small and financially weak. Similarly, movie distributors used to fund the production of anime movies. As production costs increased and new distribution channels appeared however, production committees emerged as the standard funding vehicles for both TV series and movies. At the same time, they also took control of the creative process, as well as marketing and final distribution of the final products. Several types of companies come together in a production committee: TV broadcasting stations, the powerful advertising agencies (Dentsu and Hakuhodo), sponsors (e.g. merchandising companies), movie distributors, video/DVD publishers, and the publishers of the original manga (comic book) whenever the content is based on it. The production committee funds the anime projects and shares revenues and profits from the investments. Each member of the committee makes an investment and in exchange receives: (a) a share of the copyrights (and the associated licensing revenues) linked to the anime in proportion to the initial investment; and (b) the right to distribute the resulting content through the particular member’s channel—broadcasting right for TV stations, distribution right of videos/DVDs for video/DVD publishers. All committee members contribute to some part of the value chain, but TV stations often lead the committee because television is the primary distribution channel. Production committees contract the production of anime works with anime production companies. In most cases, anime producers receive only a fixed payment (about ¥10–¥15 million), which oftentimes is barely sufficient to cover the production cost. 
Due to the lack of financial resources, production companies have to rely on production committees for funding and in exchange give up copyrights to their own work to the production committees. They are usually not a member of the production committees and as a result do not have access to licensing revenue and cannot share in the upside of successful projects. (By contrast, in the United States, Financial Interest and Syndication Rules (Fin-Syn Rules) established in 1970 by the Federal Communication Commission (FCC) state that copyrights belong to production companies. 20 ) When the anime is the original creations of anime producers, they become a member of the production committee, but typically own a very small stake. Therefore, original creations result in higher profits for anime production companies, but they are also riskier, and it is harder to persuade production committee members to undertake such projects. This system creates a vicious cycle for animation production companies, which keeps them weak and subordinate to the production committees. Most importantly, the production committee members (advertising agencies, TV stations and DVD distributors) are inherently domestic businesses, which therefore also limits the anime producers to the Japanese market, even though their productions might have global appeal. Recent developments Recently, several initiatives have emerged in order to strengthen the rights of animation production companies and to create funding alternatives for anime projects. First, the Association of Japanese Animation was established in May 2002 under the leadership of the Ministry of Economy, Trade and Industry (METI) to strengthen the position of anime producers. Second, intellectual property were made legally defensible through trust arrangements in December 2004. And Mizuho Bank (one of the Japanese megabanks) initiated the securitization of profits deriving from anime copyrights. 21 This allowed Mizuho to extend financing to anime production companies such as Production I.G, which do not have tangible assets suited for collateral. In turn, production companies can invest the proceeds in production committees. To date, Mizuho has financed over 150 anime titles in this way. Third, the funding sources for anime production companies have diversified. Mizuho raised a ¥20 billion fund to invest in new movies including anime. And GDH, a recently founded animation production company, created its own fund for retail investors to finance its new TV series. 22 20 The Ministry of Economics, Trade and Industry, Research on Strengthening Infrastructure for Contents Producer Functions: Animation Production, p. 27, http://www.meti.go.jp/policy/media_contents/. 21 “Mega Banks Expanding Intellectual Property Finance,” Nihon Keizai Shimbun, April 17, 2004. 22 “Rakuten Securities, JDC, and Others Raise Funds from Individual Investors to Produce Anime,” Nikkei Sangyo Shimbun, July 28, 2004. 3.3. Mobile telephony Like animation, mobile telephony provides another illustration of a highly innovative Japanese industry, which has not been able to export its domestic success. Unlike animation however, one needs to travel to Japan in order to observe the tremendous unexploited opportunities of Japan’s mobile phone industry. The Galapagos of mobile phones Japanese owners of cell phones have long enjoyed access to the world’s most advanced handsets and services – years ahead of users anywhere else in the world. 
Mobile email has been offered in Japan since 1999; it took off in the United States and Western Europe only around 2004–2005, with RIM’s BlackBerry devices. Sophisticated e-commerce and other non-voice services were rolled out in Japan starting with the introduction of i-mode in 1999. i-mode was the world’s first proprietary mobile Internet service and to this day remains the most successful one. Launched by NTT DoCoMo, Japan’s largest mobile operator (or carrier), it has spawned a diverse ecosystem of over 100,000 content providers, offering i-mode handset users everything from games and news to mobile banking, restaurant guides, and dating services. KDDI and Softbank, the other two major Japanese carriers, have also introduced similar services. All of them were subsequently enhanced by third-generation networks in 2001 – whereas the first functional 3G services in the rest of the world appeared only in 2004. Since 2004, again thanks to NTT DoCoMo’s leadership, Japanese mobile phone users can simply wave their handsets in front of contactless readers to pay for purchases in convenience stores, at subway turnstiles, and in many other places. These payment systems include both debit (pre-paid) and credit (post-paid) functionalities. Finally, since 2005, Japanese mobile customers also have access to digital television on their handsets. These last two services have yet to materialize in the rest of the world (with the sole exception of South Korea).

Given the Japanese telecommunications industry’s innovative prowess, one would expect to see Japanese handsets occupying leading positions in most international markets (especially in developed economies). Strikingly, not only are they far from leading, they are in fact nowhere to be found (as anyone who has tried to buy a Japanese mobile handset in the United States can attest). More precisely, in 2007, Nokia had a 38% share of worldwide cell phone shipments, followed by Samsung with 14.3% and Motorola with 14.1%. No Japanese company was in the top 5; altogether, Japanese makers accounted for a meager 5% of the global handset market 23 (Sharp, the largest one, barely made it to 1%). 24 Some observers (in Japan) have coined a term for this situation: the Galapagos syndrome. 25 Just as the Galapagos archipelago hosts animal species that exist nowhere else in the world, so does Japan host an extremely innovative mobile phone industry completely isolated from the rest of the world.

Origins of the Galapagos syndrome

What accounts for this isolation and for Japanese handset makers’ inability to build significant presences in international markets? The answer lies in a combination of self-reinforcing factors, the central one of which is a mobile phone industry structure very different from those prevailing in other major markets. Specifically, in Japan, the mobile operators (DoCoMo, KDDI, and Softbank) hold most of the power in the industry and are able to dictate specifications to the other participants – handset makers in particular. By contrast, carriers in other countries have much less leverage in their relationships with handset makers and are willing to make significant compromises in exchange for exclusive rights to highly popular handsets – e.g., Apple’s iPhone or Motorola’s Razr.

23 Economisto, 14 October 2008, “Mega competition in mobile phones,” pp. 32-35.
24 Economisto, 14 October 2008, “Mega competition in mobile phones,” p. 42.
25 Ekonomisto, February 26, 2008, “Japan’s economic system losing competitiveness due to ‘Galapagos phenomenon’.”
On the one hand, the centralized, top-down leadership of Japanese mobile carriers has been immensely successful in producing domestic innovation, as described above. It enabled the rapid roll-out and market adoption of complex technologies, such as mobile payments, which require the coordination of many actors in the ecosystem. On the other hand, however, the subservience to operators meant that everyone in the ecosystem – including handset makers – ended up focused on serving the domestic market. Indeed, mobile carriers operate in a fundamentally domestic business: telecommunication regulations around the world have always made it difficult for carriers to expand abroad. The only exceptions are Vodafone and T-Mobile, which have managed to build some meaningful presences outside their home countries – although these are few and far between, and with mixed results. Japan’s NTT DoCoMo, creator of i-mode, the world’s leading mobile Internet service, has repeatedly failed in its attempts to export the service to international markets on a significant scale. Today, there are only 6.5 million overseas users of i-mode, roughly 10% of the Japanese total, while DoCoMo’s corresponding overseas revenues in 2007 were less than 2% of total sales. Moreover, the majority of these “international” customers and sales were in fact made up of Japanese users roaming while traveling abroad. 26

26 “iMode to retry it in Europe: a simple version developed by DoCoMo,” Fuji Sankei Business, 4 December 2008.

The “home bias” of the ecosystem leaders – the mobile operators – was unfortunately transplanted to the Japanese handset manufacturers. The latter ended up focusing most of their R&D resources on integrating the numerous Japan-specific hardware features demanded by the operators (contactless mobile payment systems, two-dimensional bar-code scanners, digital TV capability, etc.) into their phones. They developed virtually no standalone market research, marketing, and sales capabilities, which are critical for competing in international markets (in Japan, those functions were performed for them by the operators). Three additional factors have exacerbated the competitive disadvantage of handset makers in overseas markets.

First, Japan’s large domestic market and the fast growth of its mobile phone sector during the late 1990s and early 2000s were a curse disguised as a blessing. During that period the handset makers perceived no serious incentives (nor any urgency) to seek expansion opportunities abroad. The contrast with South Korea is noteworthy here: the domestic Korean mobile phone industry is also largely dominated by the operators (SK Telecom in particular) and has also produced tremendous growth and very advanced services. The difference was that the Korean market was too small (less than half the size of Japan’s) for the domestic handset manufacturers to be satisfied serving it, which led Samsung, LG, and others to seek opportunities in international markets from early on – today both are among the top 5 global cell-phone makers. Second, in the late 1990s the Japanese operators chose a second-generation standard for wireless telecommunications which was subsequently rejected in the rest of the world. The early choice allowed the operators to roll out advanced services far ahead of the rest of the world, without having to worry about interoperability (given their inherent domestic focus).
For the handset makers, this choice raised further technological barriers to their international expansion, as they became dependent on a technology (through specific investments and resource allocation) which could not be leveraged abroad. Third, and perhaps most important, Japanese handset makers have had a longstanding bias in favor of hardware and “monozukuri” (manufacturing)-driven innovation over software-driven innovation – the same bias as their counterparts in the computer industry, which prevented the development of a Japanese software sector (cf. section 3.1 above). Indeed, most Japanese phones are customized for a specific carrier (DoCoMo or KDDI or Softbank) and manufactured “from scratch,” with little concern for creating standardized interfaces and software platforms, which might have enabled the makers to spread development costs across multiple phone models and create some cost advantage. Japanese handset makers have neither embraced widely used smart-phone software platforms such as Nokia’s Symbian, Microsoft’s Windows Mobile, or Google’s Android, nor created any such platforms of their own. Given that hardware design is the part of a mobile phone which varies the most across international markets (unlike the underlying software platforms, which can remain virtually unchanged), it is no wonder that Japanese cell-phone makers are poorly positioned to adapt their phones to different market needs overseas.

The monozukuri bias also explains why, despite their technical prowess, Japanese phone manufacturers have been unable to create a universally appealing device like Apple’s iPhone – which they are now desperately (and unsuccessfully) trying to emulate. In fact, this marks the third time in less than a decade that Apple or another US innovator has come up with a successful product well ahead of Japanese electronics manufacturers, even though the latter had the technological capabilities required to produce it long before Apple. The first episode was Sony’s inability to bring to market a successful digital music player (a category which everyone expected Sony to own, as a natural extension of its widely successful Walkman), largely because of an inadequate content business model. This left the gate wide open for Apple’s iPod/iTunes combination to take over the market starting in 2001. The second episode also involved Sony, this time in the market for electronic book readers. Although Sony was the first to commercialize a device based on the underlying electronic ink technology, its eBook (launched in 2005) was largely a failure due – yet again – to an inadequate content business model. Instead, it was Amazon’s Kindle – launched two years later – that has come to dominate the category.

There is a common and simple lesson here, which seems to have repeatedly eluded Japanese electronics manufacturers in general and handset makers in particular. Hardware and monozukuri have become subordinate to software when it comes to most digital devices: the latter are no longer pure products, but in large part services, in which software plays the key role. It is worth noting that more than 90% of the hardware parts in Apple’s iPods and iPhones come from Asia – the most sophisticated components from Japan. Apple’s only – but essential – innovations are in the user interface and underlying software (QuickTime and iTunes), which allow it to extract most of the value.
Although Sony and other Japanese companies clearly understand the importance of content (most visible in the recent Blu-ray vs. HD-DVD format war), they still have not matched Apple, Amazon, and others in the ability to merge service, manufacturing, and content.

It is thus an unsettling paradox (and presumably a frustrating one for the handset makers themselves) that Japanese cell phone manufacturers do so poorly in international markets, where phones are so basic compared to those in Japan. The explanation, however, is straightforward: it is not deep technical expertise that matters most; instead, the key capabilities required are brand power, the ability to adapt in order to serve local preferences (sales and marketing savvy), and cost competitiveness. Those are the attributes that have made Nokia, Samsung, and Motorola so successful in international markets – and those are the ones which Japanese manufacturers lack the most. It is more important to obtain economies of scale in standardized parts – through outsourcing and reliance on widely available software platforms – than to build ultra-sophisticated, customized phones. Some observers argue that the peculiar demands of Japanese consumers drew handset makers into making products that do not sell well in the rest of the world. In our view, this is an unacceptable excuse: Nokia, Motorola, and Samsung were all able to conquer international markets with very different demand characteristics from the ones they faced in their respective home markets. Take the Chinese market, for instance: one could argue that Japanese manufacturers should have an advantage over their Western rivals in China, given their experience with ideogram-based characters and the common cultural roots. But even there, Japanese cell-phone makers have struggled mightily. Today, the top three cell-phone makers in China are Nokia, with a 30% market share; Motorola, with 18.5%; and Samsung, with 10.8%. None of the Japanese makers has more than 1%, and they trail a number of domestic Chinese manufacturers.

Present situation

Unfortunately, it took the current economic recession, combined with the saturation of the domestic mobile user market, for Japan’s cell-phone manufacturers to realize that their competitive position is profoundly vulnerable and unsustainable. New mobile phone sales in Japan were down 20% in 2008 (compared to 2007) and are expected to decrease even further in 2009. The new government policy requiring operators to clearly distinguish the price of the handset from the price of the service plan has significantly contributed to the drop in new phone sales: realizing how expensive the handsets are, Japanese consumers have naturally reduced the frequency with which they upgrade to new phones. The Japanese mobile phone industry faces two additional challenges: the decline in the number of teenagers and young adults (down 6.6% for ages 15-24 from 2010 to 2020) due to low fertility, and the arrival of high-performance foreign products, such as the iPhone, Android-powered devices, and BlackBerries. The slowdown in domestic sales has had two effects. One is much-needed consolidation and shakeout among handset manufacturers: NEC, Hitachi, and Casio have merged their mobile phone units as of September 2009, while Sanyo and Mitsubishi are exiting the business altogether. The second is a much stronger urgency to seek opportunities abroad.
Sharp and Panasonic, the domestic market leaders, have both embarked on ambitious plans to expand their business in China, a market where Japanese handset makers have been notoriously unsuccessful (as mentioned above). These setbacks might turn out to be a welcome wake-up call for Japan’s handset makers by providing sufficient incentives (and urgency) to develop competitive advantage in serving markets other than Japan’s. That requires breaking free from the subservience to mobile operators and from a model which has worked well (too well) in Japan.

4. Discussion and policy implications

“Inefficient” and self-sustaining industry structures

As we have noted, Japanese industry is surely capable of innovation, but it operates in an environment that is not conducive to mobilizing the innovative capabilities of soft goods and service sector businesses, especially in the international arena. Fundamentally, this stems from a mismatch between the country’s vertical and hierarchical industrial organizations and the horizontal, ecosystem-based structures prevailing in “new economy” sectors. The former have proven very efficient in pursuing manufacturing perfection (“kaizen monozukuri”) – a domain in which Japan has excelled. As we have argued in section 2, however, the latter have been the far more effective form of “industry architecture” for driving innovation in most of today’s technology industries, on which services and soft goods rely.

This mismatch makes the current organization and performance of some Japanese sectors appear stuck in inefficient equilibria. Indeed, one important common denominator across the three industry case studies presented above is the prevalence of self-reinforcing mechanisms which have locked the corresponding sectors into highly path-dependent structures. The weakness (or, more precisely, virtual absence) of Japan’s software industry has been perpetuated by large computer system suppliers which have locked their customers from early on into proprietary and incompatible hardware-software systems; as a result, these customers have always found it in their best interest to deepen the customization and rely on the same suppliers for more proprietary systems. Absent any external shock (or public policy intervention), it is hard to see a market opportunity for potential Japanese software companies. In animation, production committees have established a bottleneck over the financing of animation projects, which allows them to obtain most of the copyrights, which in turn deprives anime production companies of the revenues that would enable them to invest in producing their own projects and acquire the corresponding intellectual property rights. Of course, this bottleneck has been perpetuated by the absence of alternative forms of financing: bank loans (Japanese financial institutions have had a long-standing reluctance to invest in businesses with only “soft” collateral) and venture capital (an industry which remains strikingly underdeveloped in Japan). Finally, the wireless communications sector in Japan has developed a top-down way of innovating, in which the mobile operators control end-customers and dictate terms to handset manufacturers, which in turn have never had sufficient incentives to develop their own marketing and independent R&D capabilities. 27

27 I.e., R&D at the mobile service level, as opposed to R&D that simply pushes handset technology while taking the level of innovation in services and the corresponding standards as exogenously given.

The second aspect that needs to be emphasized is that the hierarchical forms of industrial organization that prevail in some Japanese sectors are not uniformly less innovative than the more horizontal modes of organization.
By subordinating everyone to the “ecosystem leaders” (i.e., the companies at the top of the industry structure), however, hierarchical structures can create large inefficiencies by preventing companies at lower levels of the hierarchy from capitalizing on their innovations outside of the vertical structure – in particular, in global markets. Indeed, while software has clearly been the Achilles’ heel of Japan’s high-tech and service sectors, animation and mobile telephony are two industries in which Japan has innovated arguably more than any other country in the world. The problem there is that the “ecosystem leaders” – production committee members such as TV stations and, respectively, mobile operators – have Japan-centric interests (television broadcasting and mobile phone service are essentially local businesses due to regulations). This ends up restricting the other members of the ecosystems to the domestic market, when in fact their relevant markets are (or should be) global. Of course, in contexts in which the leader is a globally minded company – such as Sony or Toyota – all members of the ecosystem benefit. But those situations are the exception rather than the norm.

Policy measures to break from inefficient industry structures

Extrapolating from the three case studies above, there are several initiatives which Japanese policy-makers could take to remedy the issue of inefficient industry structures. First, despite recent improvements, Japan remains deficient in the enforcement of antitrust law. Monopolies and oligopolies are particularly damaging in industries where there is a need for constant and fast innovation. The self-reinforcing mechanisms we described earlier (augmented by the importance of established, long-term relationships in Japan) create high barriers to entry in most Japanese industries, which protect incumbents and make it harder for Japanese innovators to succeed. Related to the question of oligopolies and monopolies is the issue of ease of entry and exit. If there is one lesson from Silicon Valley which Japanese policy-makers should take to heart, it is that both the birth rate and the death rate of businesses there are extremely high – as they should be in innovative sectors. This requires not only effective bankruptcy procedures, but also financing mechanisms that accept high rates of failure, liquid employment markets (for those who lose their jobs when their employer goes out of business), and a socio-cultural environment that favors risk-taking without denigrating those who have failed – sometimes several times – in their quest for entrepreneurial success. For example, in the US, one essential catalyst of the PC era and the rise of Microsoft and other software platforms was the unbundling of IBM – the result of antitrust intervention. There was no such intervention in Japan to break the stranglehold of the large computer system manufacturers and enable the entry of smaller, innovative software companies. Similarly, as we noted earlier in this paper, antitrust has placed significant constraints on Microsoft’s ability to extend its PC OS monopoly power to the Internet and/or mobile telecommunications. The objective was to ensure that the emergence of new software ecosystems and platforms was not stifled.
As it has grown more dominant, Google must now also take into account the risk of antitrust prosecution. This forces it to tread more carefully in its dealings with partners and potential competitors in online search and advertising than it might otherwise do if the antitrust regime were weaker.

Second, the development of new industries based on ecosystems which are not defined by hierarchical relationships requires a strengthening of the legal system in fields other than antitrust. In hierarchical keiretsu systems, the controlling corporation (or corporations) sitting at the top of the pyramid performs arbitration and enforcement functions for the entire ecosystem. Since what is good for the ecosystem is usually good for them, they have a built-in incentive to make good decisions, though in some cases the interests of smaller players might be at risk. This, however, cannot be a sustainable substitute for developing a legal infrastructure which supports and encourages innovation and entrepreneurship. In the more flexible and non-hierarchical ecosystems which define many of the innovative industries we have discussed, there is a need for effective third-party enforcement. In the United States, this is performed by civil courts which can adjudicate contractual disputes, and in some cases may involve criminal law, for example in the case of antitrust violations. In Japan, these mechanisms are less well-developed. Despite changes to the regulations pertaining to the bar exam, there is still a shortage of attorneys. Moreover, the entire economy has historically been less reliant on legal remedies, leaving the legal system underdeveloped in this area. There is, both in the United States and abroad, a mistaken view that the US system breeds too many lawyers and too much litigation. While it may be true that frivolous class action lawsuits hurt the economy, it is America’s rich legal infrastructure that lubricates the wheels of its innovation industry.

Third, and also part of the legal system remedies, is the enforcement of intellectual property rights (IPRs). This is perhaps the key institutional ingredient for innovation, especially in the soft goods sector. For many businesses in these industries, IPRs are their main asset – in some cases their only one. Japan’s weak IPR regime undermines the balance sheet of innovative companies, makes it harder for them to obtain financing, and diminishes their bargaining power. Animation is a case in point: the production committees emerged to fill the institutional gap in the recognition and enforcement of copyrights – rights which, if enforceable, would enable anime production companies to finance themselves and develop their own projects.

Fourth, venture capital markets, despite some efforts, remain underdeveloped in Japan, which presents an additional hurdle for small companies trying to break away from constraining industry organizations (e.g., in animation). Unlike antitrust and IPRs, this is an area where government action alone cannot resolve the entire problem. However, the regulatory regime can be altered to make it easier for the venture capital industry to grow faster in Japan.

Finally, a necessary policy measure is to further open the country to foreign investment. The difficulties which foreign investors face in Japan deprive innovative Japanese companies of equity partners and business partners, further locking them into domestic ecosystems which may stifle their development.
It also makes it harder for Japanese companies to succeed overseas, since foreign investors could help them capture markets outside of Japan.

5. Conclusions

Japan presents a unique case of industrial structures which have produced remarkable innovations in certain sectors, but which seem increasingly inadequate to produce innovation in modern technology industries, which rely essentially on horizontal ecosystems of firms producing complementary products. As our three case studies of software, animation, and mobile telephony illustrate, there are two potential sources of inefficiency that this mismatch can create. First, the Japanese hierarchical industry organizations can simply “lock out” certain types of innovation indefinitely by perpetuating established business practices: this is the case with software, an industry from which Japan is almost entirely absent. Second, even when the vertical hierarchies produce highly innovative sectors in the domestic market – as is the case with animation and wireless mobile communications – the exclusively domestic orientation of the “hierarchical industry leaders” can entail large missed opportunities for other members of the ecosystem, who are unable to fully exploit their potential in global markets.

We have argued that improving Japan’s ability to capitalize on its innovations will require certain policy measures aimed at altering the legislation and incentives that stifle innovation: strengthening the enforcement of antitrust and intellectual property rights, strengthening the legal infrastructure (e.g., related to contractual disputes), and lowering barriers to entry for foreign investment. On the other hand, private sector initiative is also critical, which requires the development of the venture capital sector, a key and necessary ingredient for stimulating innovation in modern industries. Understanding the nature of the new innovation-producing ecosystems which have developed in industries associated with the new economy (software, the Internet, and mobile communications) will help Japanese policy-makers and managers develop better ways for Japanese business to take advantage of its existing strengths and expand innovation beyond the industrial sphere into the realm of internationally competitive service and soft goods sector enterprises.
Rogelio Oliva and Noel Watson. Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.

Managing Functional Biases in Organizational Forecasts: A Case Study of Consensus Forecasting in Supply Chain Planning

Rogelio Oliva
Mays Business School, Texas A&M University
College Station, TX 77843-4217
Ph 979-862-3744 | Fx 979-845-5653
roliva@tamu.edu

Noel Watson
Harvard Business School
Soldiers Field Rd., Boston, MA 02163
Ph 617-495-6614 | Fx 617-496-4059
nwatson@hbs.edu

Draft: December 14, 2007. Do not quote or cite without permission from the authors.

Abstract

To date, little research has been done on managing the organizational and political dimensions of generating and improving forecasts in corporate settings. We examine the implementation of a supply chain planning process at a consumer electronics company, concentrating on the forecasting approach around which the process revolves. Our analysis focuses on the forecasting process and how it mediates and accommodates the functional biases that can impair forecast accuracy. We categorize the sources of functional bias into intentional, driven by misalignment of incentives and the disposition of power within the organization, and unintentional, resulting from informational and procedural blind spots. We show that the forecasting process, together with the supporting mechanisms of information exchange and elicitation of assumptions, is capable of managing the potential political conflict and the informational and procedural shortcomings. We also show that the creation of an independent group responsible for managing the forecasting process, an approach that we distinguish from generating forecasts directly, can stabilize the political dimension sufficiently to enable process improvement to be steered. Finally, we find that while a coordination system—the relevant processes, roles and responsibilities, and structure—can be designed to address existing individual and functional biases in the organization, the new coordination system will in turn generate new individual and functional biases. The introduced framework of functional biases (whether those biases are intentional or not), the analysis of the political dimension of the forecasting process, and the idea of a coordination system are new constructs for better understanding the interface between operations management and other functions.

Keywords: forecasting, marketing/operations interface, sales and operations planning, organizational issues, case/field study.

1. Introduction

The importance of forecasting for operations management cannot be overstated.
Within the firm, forecast generation and sharing are used by managers to guide the distribution of resources (Antle and Eppen, 1985; Stein, 1997), to provide targets for organizational efforts (Hamel and Prahalad, 1989; Keating et al., 1999), and to integrate the operations management function with the marketing (Crittenden et al., 1993; Griffin and Hauser, 1992), sales (Lapide, 2005; Mentzer and Bienstock, 1998), and product development (Griffin and Hauser, 1996; Wheelwright and Clark, 1992) functions. Errors in forecasting often cross the organizational boundary and translate into misallocation of resources that can impact shareholders’ return on investment (Copeland et al., 1994) and affect customers’ perception of service quality (Oliva, 2001; Oliva and Sterman, 2001). Across the supply chain, forecast sharing is a prevalent practice for proactively aligning capacity and managing supply (Cachon and Lariviere, 2001; Terwiesch et al., 2005).

Over the past five years, demand/supply planning processes for planning horizons in the intermediate range have been receiving increasing attention, especially as the information technology originally intended to facilitate this planning has achieved limited success. Cross-functional coordination among groups such as sales, operations, and finance is needed to ensure the effectiveness of some of these planning processes and of the forecasting that supports them. Such processes have been referred to in the managerial literature as sales and operations planning (S&OP) processes (Bower, 2005; Lapide, 2005). Forecasts within this multi-functional setting that characterizes many organizations cannot be operationalized or analyzed in an organizational and political vacuum. However, to date, little research has been done on managing the organizational and political dimensions of generating and improving forecasts in corporate settings; dimensions that significantly determine the overall effectiveness of the forecasting process (Bretschneider and Gorr, 1989, p. 305).

We present a case study that illustrates the implementation of an S&OP process, concentrating in detail on the forecasting approach around which the planning process revolves. Our study describes how individuals and functional areas (whether intentionally or not) biased the organizational forecast and how the forecasting process implemented managed those biases in a supply chain setting that requires responsive planning. We define biases broadly here to include those occasioned by functional and individual incentives, and by informational or procedural shortcomings. Our analysis reveals that the forecasting process, together with the supporting mechanisms of information exchange and elicitation of assumptions, is capable of managing the political conflict and the informational and procedural shortcomings that arise from organizational differentiation. We show that the creation of an independent group responsible for managing the forecasting process can stabilize the political dimension sufficiently to enable process improvement to be steered. The deployment of a new system, however, introduces entirely new dynamics in terms of influence over forecasts and active biases. The recognition that the system both needs to account for, and is in part responsible for, partners’ biases introduces a level of design complexity not currently acknowledged in the literature or by practitioners.
The rest of this paper is structured as follows: In section 2, we review the relevant forecasting literature, motivating the need for our case study and articulating hypotheses for findings in our research setting. Our research site and methodological design are described in section 3. In section 4 we report the conditions that triggered the deployment of the forecasting process, assess its impact on the organization, and describe the process, its actors, and its dynamics in detail. Section 5 contains the core of our analysis: we analyze the organizational and process changes that were deployed, and assess how intentional and unintentional biases in the organization were managed through these mechanisms. Some of the challenges the organization faces under the new forecasting process are explored in section 6, which also provides a framework for understanding the need to continuously monitor and adapt the processes. The paper concludes with an evaluation of the implications of our findings for practitioners and researchers.

2. Research Motivation

Most organizations use forecasts as input to comprehensive planning processes—such as financial planning, budgeting, sales planning, and finished goods inventory planning—that are charged with accomplishing particular goals. This implies that the forecast needs not only to be accepted by external parties, but also to guide the efforts of the organization. Thus, an important measure of forecast effectiveness is how well forecasts support these planning needs. The fit between forecasting and planning is an under-studied relationship in the literature, but at a minimum level, the forecast process needs to match the planning process in terms of the frequency and speed with which the forecast is produced. The forecasting horizon and the accuracy of the forecast should be such that they allow the elaboration and execution of plans that take advantage of the forecast (Makridakis et al., 1998; Mentzer and Bienstock, 1998). For example, a planning approach such as Quick Response (Hammond, 1990) requires as input a sense of the uncertainty surrounding the forecasts in order to manage production. Thus, the forecasting process complementing such a planning process should have a means of providing a relative measure of uncertainty (Fisher et al., 1994; Fisher and Raman, 1996).

Nevertheless, forecasting is not an exact science. In an organizational setting, the forecasting process requires information from multiple sources (e.g., intelligence about competitors, marketing plans, channel inventory positions, etc.) and in a variety of formats, not always amenable to integration and manipulation (Armstrong, 2001b; Fildes and Hastings, 1994; Lawrence et al., 1986; Makridakis et al., 1998). Existing case studies in the electronics and financial industries (e.g., Hughes, 2001; Watson, 1996) emphasize the informational deficiency in creating organizational forecasts as a result of poor communication across functions. The multiplicity of data sources and formats creates two major challenges for a forecasting process. First, since not all information can be accurately reflected in a statistical algorithm, judgment calls are a regular part of forecasting processes (Armstrong, 2001a; Sanders and Manrodt, 1994; Sanders and Ritzman, 2001). The judgmental criteria used to make, adjust, and evaluate forecasts can result in individual and functional limitations and biases that potentially compromise the quality of the forecasts.
Second, since the vast majority of the information providers and the makers of those judgment calls are also the users of the forecast, there are strong political forces at work explicitly attempting to bias the outcome of the process. Thus the forecasting process, in addition to fitting the organization's planning requirements, needs to explicitly manage the biases (whether individual or functional) that might affect its outcome. We recognize two potential sources of bias in the organization, intentional and unintentional, that together capture the judgmental, informational, and political dynamics affecting forecasting performance. In the following subsections, we provide analytical context from the relevant literature to articulate frameworks and expectations that will help the reader assimilate the case details along these two dimensions.

2.1 Managing Biases due to Incentive Misalignment and Dispositions of Power

Intentional sources of bias (i.e., an inherent interest in, and ability to maintain, a level of misinformation in the forecasts) are created by incentive misalignment across functions coupled with a particular disposition of power within the organization. Local incentives will drive different functional groups to want to influence the forecast process in directions that might benefit their own agendas. For example, a sales department, compensated through sales commissions, might push to inflate the forecast to ensure ample product availability, while the operations group, responsible for managing suppliers, operating capacity, and inventories, might be interested in a forecast that smooths demand and eliminates costly production swings (Shapiro, 1977). Power is the ability of a functional group to influence the forecast. It is normally gained through access to a resource (e.g., a skill or information) that is scarce and valued as critical by the organization, and the ability to leverage such resources is contingent on the degree of uncertainty surrounding the organizational decision-making process (Salancik and Pfeffer, 1977). For example, the power that a sales organization could extract from intimate knowledge of customer demand diminishes as that demand becomes stable and predictable to the rest of the organization. Mahmoud et al. (1992), in discussing the gap between forecasting theory and practice, refer to the effects of disparate functional agendas and incentives as the political gap, while according to Hanke and Reitsch (1995) the most common source of bias in a forecasting context is political pressure within the company. Thus, forecasts within a multi-functional setting cannot be operationalized or analyzed in an organizational and political vacuum. As sources of incentive misalignment and contributors to the dispositions of power within the organization, disparate functional agendas and incentives, standardized organizational decision-making processes, and shared norms and values all have an impact on the forecasting process and on forecast accuracy (Bromiley, 1987). However, most of the academic literature examines only the unintentional individual and group biases that can affect forecasting, and does so ex situ (Armstrong, 2001a), with little research directed at managing the multi-objective and political dimensions of forecast generation and improvement in corporate settings (Bretschneider and Gorr, 1989; Deschamps, 2004).
Research on organizational factors and intentional sources of biases in forecasting has been done in the public sector where political agendas are explicit. This research suggests that directly confronting differences in goals and assumptions increases forecast accuracy. Bretschneider and Gorr (1987) and Bretschneider et al. (1989) found that a state’s forecast accuracy improved if forecasts were produced independently by the legislature and executive, and then combined through a formal consensus procedure that exposed political positions and forecast assumptions. Deschamps 6 (2004) found forecast accuracy to be improved by creating a neutral negotiation space and an independent political agency with dedicated forecasters to facilitate the learning of technical and consensus forecasting skills. As different organizational functions have access to diverse commodities of power (e.g., sales has a unique access to current customer demand) we recognize that each group will have unique ways to influence the outcome of the forecasting process. The process through which groups with different interests reach accommodation ultimately rests on this disposition of power and it is referred to in the political science and management literatures as a political process (Crick, 1962; Dahl, 1970; Pfeffer and Salancik, 1974; Salancik and Pfeffer, 1977). In forecasting, a desirable outcome of a well-managed political contention would be a process that enables the known positive influences on forecast accuracy while weakening the negative influences on forecast accuracy. That is, a politically savvy process should take into consideration the commodities of power owned by the different functional areas and the impact that they might have on forecast accuracy, and explicitly manage the disposition of power to minimize negative influences on forecast accuracy. 2.2 Abating Informational and Procedural Blind Spots Although functional goals and incentives can translate into intentional efforts to bias a forecast, other factors can affect forecasts in ways which managers might not be aware. Thus, we recognize unintentional, but systematic, sources of forecast error resulting from what we term blind spots, ignorance in specific areas which affect negatively an individual’s or group’s forecasts. Blind spots can be informational — related to an absence of otherwise feasibly collected information on which a forecast should be based — or procedural — related to the algorithms and tasks used to generate forecasts given the information available. This typology is an analytic one; the types are not always empirically distinct. Some informational blind spots could result from naiveté in forecasting methodology (procedural blind spot) that does not allow the forecaster to use the available 7 information. Yet, while the two types may intermingle in an empirical setting, they tend to derive from different conditions and require different countermeasures. We expect then that a forecasting process should try to manage the informational and procedural blind spots that may exist for the process. Some individual biases that have been shown to affect subjective forecasting include over-confidence, availability, anchor and adjustment, and optimism (Makridakis et al., 1998). Forecasters, even when provided with statistical forecasts as guides, have difficulty assigning less weight to their own forecasts (Lim and O'Connor, 1995). 
Cognitive information processing limitations and other biases related to the selection and use of information can also compromise the quality of plans. Gaeth and Shanteau (1984), for example, showed that irrelevant information adversely affected judgment, and Beach et al. (1986) showed that when the information provided is poor, forecasters might expend little effort to ensure that forecasts are accurate. Such individual biases can affect both the quality of the information collected and used to infer forecasts (informational blind spots) and the rules of inference themselves (procedural blind spots).

Research suggests process features and processing capabilities that might mitigate the effects of individual biases. For example, combining forecasts with other judgmental or statistical forecasts tends to improve forecast accuracy (Lawrence et al., 1986); a simple numerical illustration appears at the end of this subsection. Goodwin and Wright (1993) summarize the research and empirical evidence supporting six strategies for improving judgmental forecasts: using decomposition, improving forecasters' technical knowledge, enhancing data presentation, mathematically correcting biases, providing feedback to forecasters to facilitate learning, and combining forecasts or using groups of forecasters. Group forecasting is thought to contribute two important benefits to judgmental forecasting: (1) broad participation in the forecasting process maximizes group diversity, which reduces political bias and the tendency to cling to outmoded assumptions, assumptions that can contribute to both procedural and informational blind spots (Voorhees, 2000); and (2) the variety of people in groups enriches the contextual information available to the process, reducing informational blind spots and thereby improving the accuracy of forecasts (Edmundson et al., 1988; Sanders and Ritzman, 1992). Some researchers maintain that such variety is even useful for projecting the expected accuracy of forecasts (Gaur et al., 2007; Hammond and Raman, 1995).

Group dynamics can, however, have unwanted effects on the time needed to achieve consensus, the quality of the consensus (whether true agreement or acquiescence), and thus the quality of the forecasts. Kahn and Mentzer (1994), who found that a team approach led to greater satisfaction with the forecasting process, also reported mixed results regarding the benefits of group forecasting. Dysfunctional group dynamics reflect group characteristics such as the participants' personal dynamics, politics, information asymmetries, differing priorities, and varying information assimilation and processing capabilities. Group processes can vary in the degree of interaction afforded participants and in the structure of the rules for interaction. The most popular structured, non-interacting group forecasting approach is the Delphi method, in which participants produce successive individual forecasts and receive anonymous feedback in the form of summary statistics of the group's forecasts (Rowe and Wright, 2001). Structured interacting groups, those with rules governing interaction, have not been found to perform significantly worse than groups that use the Delphi method (Rowe and Wright, 1999). However, Ang and O'Connor (1991) found that modified consensus (in which an individual's forecast was the basis for the group's discussion) outperformed forecasts based on the group mean, consensus, and the Nominal Group Technique (Delphi with some interaction).
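As a purely numerical illustration of the combination result cited above (Lawrence et al., 1986), the sketch below uses invented judgmental and statistical forecasts and shows how a simple average of independent forecasts can achieve a lower mean absolute percentage error than either input alone. None of these numbers comes from the case; they are meant only to make the mechanism concrete.

```python
# Illustrative sketch (hypothetical numbers, not Leitax data): combining two
# independent forecasts with a simple average often reduces error because the
# inputs' errors partially cancel (cf. Lawrence et al., 1986; Armstrong, 2001b).

def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

actual      = [100, 120, 90, 110, 105]   # realized demand (hypothetical)
judgmental  = [115, 130, 95, 125, 115]   # e.g., a sales forecast biased upward
statistical = [ 95, 105, 80, 100,  95]   # e.g., a model extrapolation biased downward

combined = [(j + s) / 2 for j, s in zip(judgmental, statistical)]

print(f"judgmental MAPE : {mape(actual, judgmental):.1f}%")   # ~10.4%
print(f"statistical MAPE: {mape(actual, statistical):.1f}%")  # ~9.4%
print(f"combined MAPE   : {mape(actual, combined):.1f}%")     # ~2.4%
```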
2.3 Conclusions from Review The above review suggests that while the current academic literature recognizes the need for an understanding of the organizational and political context in which the forecasting process takes place, the literature still lacks the operational and organizational frameworks for analyzing the 9 generation of organizational forecasts. Our research aims to address this shortcoming by developing insights into managing the impact of the organizational and political dimensions of forecasting. The literature does lead us to expect a forecasting process that is attuned to the organizational and political context in which it operates, to be based on a group process, to combine information and forecasts from multiple sources, and to be deliberate about the way it allows different interests to affect forecast accuracy. We opted to explore this set of issues through a case study since the forecasting process has not been analyzed previously from this perspective, and our interest is to develop the constructs to understand its organizational and political context (Meredith, 1998). We consequently focus our analysis not on the forecast method (the specific technique used to arrive at a forecast), but on the forecasting process, that is, the way the organization has systematized information gathering, decision-making, and communication activities, and the organizational structure that supports that process. 3. Research Methodology 3.1 Case Site The case site is a northern California-headquartered consumer electronics firm called Leitax (name has been disguised) that sold its products primarily through retailers such as Best Buy and Target and operated distribution centers (DCs) in North America, Europe, and the Far East. The Leitax product portfolio consisted of seven to nine models, each with multiple SKUs that were produced by contract-manufacturers with plants in Asia and Latin America. The product life across the models, which was contracting, ranged from nine to fifteen months, with high-end, feature-packed, products tending to have the shortest product lives. The site was chosen because prior to the changes in the forecasting process, the situation was characterized by having shortcomings along the two dimensions described above. That is, the forecasting process was characterized by informational and procedural blind spots and was marred by intentional manipulation of information to advance functional agendas. The case site represents 10 an exemplar for the study of the management of these dimensions, and constitutes a unique opportunity to test the integration of the two strands of theory that make explicit predictions about unintentional and intentional biases (Yin, 1984). The forecasting approach introduced was considered at least reasonably successful by many of the organizational participants and its forecasting accuracy, and accompanying improvements of operational indicators (e.g., inventory turns, obsolescence), corroborates this assessment. The issues and dynamics addressed by the implementation of the participatory forecasting process are issues that are not unique to Leitax, but characterize a significant number of organizations. Thus, the site provides a rich setting in which to seek to understand the dynamics involved in managing an organizational forecasting process and from which we expect to provoke theory useful for academics and practitioners alike. 
Our case study provides one reference for managing these organizational forecasts within an evolving business and operations strategy. As such, it does more to suggest potential relationships, dynamics, and solutions, than to definitively define or propose them. 3.2 Research Design Insights were derived primarily from an intensive case study research (Eisenhardt, 1989; Yin, 1984) with the following protocol: the research was retrospective; the primary initiative studied, although evolving, was fully operational at the time the research was undertaken. Data were collected through 25 semi-structured, 45- to 90-minute interviews conducted with leaders, analysts, and participants from all functional areas involved in the forecasting process, as well as with heads of other divisions affected by the process. The interviews were supplemented with extensive reviews of archival data including internal and external memos and presentations, and direct observation of two planning and forecasting meetings. The intent of the interviews was to understand the interviewees’ role in the forecasting process, their perception of the process, and to explore explicitly the unintentional biases due to blind spots as well as the political agendas of the different 11 actors and functional areas. To assess the political elements of the forecasting process, we explicitly asked interviewees about their incentives and goals. We then triangulated their responses with answers from other actors and asked for explanations for observed behavior during the forecasting meetings. When appropriate, we asked interviewees about their own and other parties’ sources of power, i.e., the commodity through which they obtained the ability to influence an outcome—e.g., formal authority, access to important information, external reputation (Checkland and Scholes, 1990). Most interviews were conducted in the organization’s northern California facility, with some follow-up interviews done by telephone. Given the nature of the research, interviewees were not required to stay within the standard questions; interviewees perceived to be exploring fruitful avenues were permitted to continue in that direction. All interviews were recorded. Several participants were subsequently contacted and asked to elaborate on issues they had raised or to clarify comments. The data is summarized in the form of a detailed case study that relates the story of the initiative and current challenges (Watson and Oliva, 2005). Feedback was solicited from the participants, who were asked to review their quotations, and the case, for accuracy. The analysis of the data was driven by three explicit goals: First, to understand the chronology of the implemented changes and the motivation behind those changes (this analysis led to the realization of mistrust across functional areas and the perceived biases that hampered the process). Second, to understand and to document the implemented forecasting process, the roles that different actors took within the process, and the agreed values and norms that regulated interactions within the forecasting group; and third, to assess how different elements of the process addressed or mitigated the individual or functional biases identified. 4. Forecasting at Leitax The following description of the consensus forecasting process at Leitax was summarized from the interviews with the participants of the process. 
The description highlights the political dimension of the situation at Leitax by describing the differing priorities of the different functional groups and how the power to influence the achievement of those priorities was expressed.

4.1 Historical and Organizational Context

Prior to 2001, demand planning at Leitax was ill-defined, with multiple private forecasts the norm. For new product introductions and mid-life product replenishment, the sales directors (Leitax employed sales directors for three geographical regions: the Americas; Europe, the Middle East, and Africa; and Asia Pacific; plus separate sales directors for Latin America and Canada) made initial forecasts that were informally distributed to the operations and finance groups, sometimes via hallway discussions. These shared forecasts were intended to be used by the operations group as guides for communicating build or cancel requests to the supply chain. The finance group, in turn, would use these forecasts to guide financial planning and monitoring.

These sales forecasts, however, were often mistrusted or second-guessed when they crossed into other functional areas. For example, with inventory shortages as its primary responsibility, the operations group would frequently generate its own forecasts to minimize its perceived exposure to inventory discrepancies, and marketing would do likewise when it anticipated that promotions might result in deviations from sales forecasts. While the extent of bias in the sales forecast was never clearly determined, the mere perception that sales had an incentive to maintain high inventory positions in the channel was sufficient to compromise the credibility of its forecasts. Sales might well have intended to communicate accurate information to the other functions, but incentives to achieve higher sell-in rates tainted the objectivity of its forecasting, which occasioned the other functions' distrust and their consequent generation of independent forecasts. Interviewees, furthermore, suspected executive forecasts to be biased by goal-setting pressures, operational forecasts to be biased by inventory liability and utilization policies, and finance forecasts to be biased by market expectations and profitability thresholds. These biases stem from what are believed to be naturally occurring priorities of these functions.

Following two delayed product introductions that resulted in an inventory write-off of approximately 10% of FY01-02 revenues, major changes were introduced during the fall of 2001, including the appointment of a new CEO and five new vice presidents for product development, product management, marketing, sales, and operations. In April 2002, the newly hired director of planning and fulfillment launched a project with the goal of improving the velocity and accuracy of planning information throughout the supply chain. Organizationally, management and ownership of the forecasting process fell to the newly created Demand Management Organization (DMO), which had responsibility for managing, synthesizing, challenging, and creating demand projections to pace Leitax's operations worldwide. The three analysts who comprised the group, which reported to the director of planning and fulfillment, were responsible not only for preparing statistical forecasts but also for supporting all the information and coordination requirements of the forecasting process.
By the summer of 2003, a stable planning and coordination system was in place, and by the fall of 2003 Leitax had realized dramatic improvements in forecasting accuracy. Leitax defined forecast accuracy as one minus the ratio of the absolute deviation of sales from forecast to the forecast (FA = 1 - |sales - forecast|/forecast); for example, sales of 90 units against a forecast of 100 units yield FA = 1 - 10/100 = 90%. Three-month-ahead sell-through (sell-in) forecast accuracy improved from 58% (49%) in the summer of 2002 to 88% (84%) by fall 2003 (see Figure 1). Sell-in forecasts refer to expected sales from Leitax's DCs into its resellers, and sell-through forecasts refer to expected sales by the resellers. Forecast accuracy through '05 was sustained at an average of 85% for sell-through. Better forecasts translated into significant operational improvements: inventory turns increased to 26 in Q4 '03 from 12 the previous year, and average on-hand inventory decreased from $55M to $23M. Excess and obsolescence costs decreased from an average of $3M for fiscal years 2000-2002 to practically zero in fiscal year 2003. The different stages of the forecasting process are described in detail in the next section.

4.2 Process Description

By the fall of 2003, a group that included the sales directors and the VPs of marketing, product strategy, finance, and product management was consistently generating a monthly forecast. The process, depicted in Figure 2, begins with the creation of an information package, referred to as the business assumptions package, from which functional forecasts are created. These forecasts are combined and discussed at consensus forecasting meetings until there is agreement on a final forecast.

Business Assumptions Package

The starting point for the consensus forecasting process, the business assumptions package (BAP), contained price plans for each SKU, intelligence about market trends and competitors' products and marketing strategies, and other information of relevance to the industry. The product planning and strategy, marketing, and DMO groups contributed guided assessments of the impact of this information on future business performance, which were entered into the BAP (an Excel document with multiple tabs for different types of information and an accompanying PowerPoint presentation). These recommendations were carefully labeled as such and generally made in quite broad terms. The BAP generally reflected a one-year horizon, and it was updated monthly and discussed and agreed upon by the forecasting group. The forecasting group generally tried not to exclude information deemed relevant from the BAP, even when there were differences of opinion about the strength of that relevance. The general philosophy was that of an open exchange of any information that at least one function considered relevant.

Functional Forecasts

Once the BAP was discussed, the information in it was used by three groups (product planning and strategy, sales, and the DMO) to elaborate functional forecasts at the product-family level, leaving the breakdown of those forecasts into specific SKU demand to the sales and packing schedules. The three functional forecasts were made for sell-through sales and without any consideration of potential supply chain capacity constraints. Product planning and strategy (PPS), a three-person group that supported all aspects of the product life cycle from launch to end-of-life and assessed competitive products and the effects of price changes on demand, prepared a top-down forecast of global expected demand.
The PPS forecast reflected a worldwide estimate of product demand derived from product and region specific forecasts based on historical and current trends of market-share and the current portfolio of products being offered by Leitax and its competitors. The PPS group relied on external market research groups to spot current trends, and used appropriate history as precedent in assessing competitive situations and price effects. The sales directors utilized a bottom-up approach to generate their forecast. Sales directors from all regions aggregated their own knowledge and that of their account managers about channel holdings, current sales, and expected promotions to develop a forecast based on information about what was happening in the distribution channel. The sales directors’ bottom-up forecast was first stated as a sell-in forecast. Since incentives for the sales organization were based on commissions on sell-in, this was how account managers thought of the business. The sell-in forecast was then translated into a sell-through forecast that reflected the maximum level of channel inventory (inventory at downstream DC’s and at resellers). The sales directors’ bottom-up forecast, being based on orders and retail and distribution partner feedback, was instrumental in determining the first 13 weeks of the master production schedule. The DMO group prepared, on the basis of statistical inferences from past sales, a third forecast of sell-through by region intended primarily to provide a reference point for the other two forecasts. Significant deviations from the statistical forecast would require that the other forecasting groups investigate and justify their assumptions. 16 The three groups’ forecasts were merged into a proposed consensus forecast using a formulaic approach devised by the DMO that gave more weight to the sales directors’ forecast in the short term. Consensus Forecast Meetings The forecasting group met monthly to evaluate the three independent forecasts and the proposed consensus forecast. The intention was that all parties at the meeting would understand the assumptions that drove each forecast and agree to the consensus forecast based on their understanding of these assumptions and their implications. Discussion tended to focus on the nearest two quarters. In addition to some detail planning for new and existing products, the consensus forecast meetings were also a source of feedback on forecasting performance. In measuring performance, the DMO estimated the 13-week (the longest lead-time for a component in the supply chain) forecasting accuracy based on the formula that reflected the fractional forecast error (FA=1-|sales-forecast|/forecast). Finalizing Forecasts The agreed upon final consensus forecast (FCF) was sent to the finance department for financial roll up. Finance combined the FCF with pricing and promotion information from the BAP to establish expected sales and profitability. Forecasted revenues were compared with the company’s financial targets; if gaps were identified, an attempt was made to ensure that the sales department was not under-estimating market potential. If revisions made at this point did not result in satisfactory financial performance, the forecasting group would return to the business assumptions and, together with the marketing department, revise the pricing and promotion strategies to meet financial goals and analyst expectations. 
These gap-filling exercises, as they were called, usually occurred at the end of each quarter and could result in significant changes to forecasts. The approved FCF was released and used to generate the master production schedule. Operations validation of the FCF was ongoing. The FCF was used to generate consistent and 17 reliable production schedules for Leitax’s contract manufacturers and distributors. Suppliers responded by improving the accuracy and opportunity of information flows regarding the status of the supply chain and their commitment to produce received orders. More reliable production schedules also prepared suppliers to meet future expected demand. Capacity issues were communicated and discussed in the consensus meetings and potential deviations from forecasted sales incorporated in the BAP. 5. Analysis In this section we examine how the design elements of the implemented forecasting process addressed potential unintentional functional biases (i.e., informational and procedural blind spots), and resolved conflicts that emerge from misalignments of functional incentives. We first take a process perspective and analyze how each stage worked to minimize functional and collective blind spots. In the second subsection, we present an analysis of how the process managed the commodities of power to improve forecast accuracy. Table 1 summarizes the sources of intentional and unintentional biases addressed by each stage of the consensus forecasting process. 5.1 Process Analysis Business Assumptions Package The incorporation of diverse information sources is one of the main benefits reported for group forecasting (Edmundson et al., 1988; Sanders and Ritzman, 1992). The BAP document explicitly incorporated and assembled information in a common, sharable format that facilitated discussion by the functional groups. The sharing of information not only eliminated some inherent functional blind spots, but also provided a similar starting point for, and thereby improved the accuracy of, the individual functional forecasts (Fildes and Hastings, 1994). The guidance and recommendations provided by the functional groups’ assessments of the impact of information in the BAP on potential demand represented an additional point of convergence for assimilating diverse information. The fact that the functions making these assessments were expected to have greater 18 competencies for determining such assessments, helped to address potential procedural blind spots for the functions that used these assessments. The fact that these assessments and interpretations were explicitly labeled as such made equally explicit their potential for bias. Finally, the generation of the BAP in the monthly meetings served as a warm-up to the consensus forecasting meeting inasmuch as it required consensus about the planning assumptions. Functional Forecasts The functional forecasts that were eventually combined into the proposed consensus forecast were generated by the functional groups, each following a different methodological approach. Although the BAP was shared, each group interpreted the information it contained according to its own motivational or psychological biases. Moreover, there existed private information that had not been economical or feasible to include in, or that had been strategically withheld from, the BAP (e.g., actual customer intended orders, of which only sales was cognizant). 
The combination of the independently generated forecasts using even a simple average would yield a forecast that captured some of the unique and relevant information in, and thereby improved the accuracy of, the constituent forecasts (Lawrence et al., 1986). At Leitax, the functional forecasts were combined into the proposed consensus forecast using an algorithm more sophisticated than a simple average, based, as the literature recommends (Armstrong, 2001b), on the track record of the individual forecasts (a stylized sketch of such a weighting scheme appears below). By weighting the sales directors' forecast more heavily in the short term and the PPS forecast more heavily in the long term, the DMO recognized each function's different level of intimacy with different temporal horizons, thereby reducing the potential impact of functional blind spots. Through this weighting, the DMO also explicitly managed each group's degree of influence along the forecasting horizon, which could also have served as political appeasement.

Consensus Forecasting Meetings

The focus of the forecasting process on sell-through potentially yielded a clearer signal of market demand, as sell-in numbers tended to be a distorted signal of demand: the sales force was known to have an incentive to influence sell-in in the short term, and different retailers had time-varying appetites for product inventory. Discussion in the monthly consensus forecasting meetings revolved mainly around objections to the proposed consensus forecast. In this context, the proposed consensus forecast provided an anchoring point that was progressively adjusted to arrive at the final consensus forecast (FCF). Anchoring on the proposed consensus forecast not only reduced the cognitive effort required of the forecasting team members, but also reduced the psychological and functional biases that might still be present in the functional forecasts. There is ample evidence in the literature that an anchoring and adjustment heuristic improves the accuracy of a consensus approach to forecasting (Ang and O'Connor, 1991).

Discussion of objections to the proposed consensus forecast was intended to surface the private information, or the private interpretation of public information, that motivated the objections. These discussions also served to reveal differences in the inference rules that functions used to generate forecasts. Differences might result from information that was not revealed in the BAP, from incomplete rules of inference (i.e., rules that do not consider all information), or from faulty rules of inference (i.e., rules that exhibit inconsistencies in logic). Faulty forecast assumptions were corrected and faulty rules of inference refined over time.

The consensus meetings were also a source of feedback to the members of the forecasting group on forecasting performance. The feedback made observable not only the factors that affected the accuracy of the overall forecasting process but also, through the three independent functional forecasts, other factors such as functional or psychological biases. For example, in early 2004 the DMO presented evidence that sales' forecasts tended to overestimate near-term and underestimate long-term sales. Fed back to the functional areas, these assessments of the accuracy of their respective forecasts created awareness of potential blind spots. The functional forecasts' historical accuracy also served to guide decision-making under conditions that demanded precision, such as allocation under constrained capacity or inventory.
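Leitax's exact weighting algorithm is not documented here; as a purely hypothetical illustration of the mechanics described above, the sketch below derives each function's track record from the FA metric defined earlier and blends the three functional forecasts with horizon-dependent weights that favor the sales directors in the near term and PPS in the long term. The data, tilt factors, and labels are invented for illustration only.

```python
# Hypothetical sketch of a track-record- and horizon-weighted consensus forecast.
# All numbers, weights, and horizons are invented; this is not Leitax's actual
# algorithm, which is not specified in detail in the case.

def fa(sales, forecast):
    """Leitax-style forecast accuracy: FA = 1 - |sales - forecast| / forecast."""
    return 1 - abs(sales - forecast) / forecast

# Recent (actual sell-through, forecast) pairs per functional forecast, used to
# build each function's track record as its average FA over the last few months.
history = {
    "sales": [(100, 110), (120, 115), (90, 100)],
    "pps":   [(100, 90),  (120, 110), (90, 95)],
    "dmo":   [(100, 95),  (120, 105), (90, 80)],
}
track_record = {f: sum(fa(a, p) for a, p in pairs) / len(pairs)
                for f, pairs in history.items()}

# Horizon-dependent tilt: favor the sales directors' forecast in the near term
# and the PPS forecast in the long term, as the process description suggests.
horizon_tilt = {
    "short": {"sales": 1.5, "pps": 0.8, "dmo": 1.0},   # roughly the next two quarters
    "long":  {"sales": 0.7, "pps": 1.5, "dmo": 1.0},   # beyond two quarters
}

def proposed_consensus(functional_forecasts, horizon):
    """Blend functional forecasts using track-record and horizon-dependent weights."""
    weights = {f: track_record[f] * horizon_tilt[horizon][f] for f in functional_forecasts}
    total = sum(weights.values())
    return sum(functional_forecasts[f] * w / total for f, w in weights.items())

# Example: three sell-through forecasts (units) for one product family, short horizon.
forecasts = {"sales": 120_000, "pps": 100_000, "dmo": 105_000}
print(round(proposed_consensus(forecasts, "short")))  # pulled toward the sales number
```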
The director of planning and fulfillment’s selection of a measure of performance to guide these discussions is also worthy of note. Some considered this measure of accuracy, which compared forecasts to actual sales as if actual sales represented true demand, simplistic. Rather than a detailed, complex measure of forecast accuracy, he opted to use a metric that in its simplicity was effective only in providing a directional assessment of forecast quality (i.e., is forecast accuracy improving over time?). Tempering the pursuit of improvement of this accuracy metric, the director argued that more sophisticated metrics (e.g., considering requested backlog to estimate final demand) would be more uncertain, convey less information, and prevent garnering sufficient support to drive improvement of the forecasting process. Supporting Financial and Operational Planning Leitax’s forecasting process, having the explicit goal of supporting financial and operational planning, allowed these functions to validate the agreed upon consensus forecast by transforming it into a revenue forecast and a master production schedule. Note, however, the manner in which exceptions to the forecast were treated: if the financial forecast was deemed unsatisfactory or the production schedule not executable because of unconsidered supply chain issues, a new marketing and distribution plan was developed and incorporated in the BAP. Also, note that this approach was facilitated by the process ignoring capacity constraints in estimating demand. It was common before the implementation of the forecasting process for forecasts to be affected by perceptions of present and future supply chain capacity, which resulted in a subtle form of self-fulfilling prophecy; even if manufacturing capacity became available, deflated forecasts would have positioned lower quantities of raw materials and components in the supply chain. By reflecting financial goals and operational restrictions in the BAP and asking the forecasting group (and functional areas) to update their forecasts based on the new set of assumptions, instead of adjusting the final consensus forecast directly, Leitax embedded the forecasting process in the 21 planning process. Reviewing the new marketing and product development plans reflected in the BAP, and validating it through the lenses of different departments via the functional and consensus forecast, essentially ensured that all of the functional areas involved in the process were re-aligned with the firm’s needs and expectations. Separation of the forecasting and decision-making processes has been found to be crucial to forecast accuracy (Fildes and Hastings, 1994). We discuss the contributions of this process to cross-functional coordination and organizational alignment in a separate paper (Oliva and Watson, 2006). 5.2 Political Analysis As shown in Table 1, certain components of the forecasting process dealt directly with the biases created by incentive misalignment. However, the implementation of the forecasting process was accompanied with significant structural additions, which we examine here via a political analysis. As mentioned in the section 2, we expect the forecasting process to create a social and procedural context that enables, through the use of commodities of power, the positive influences on forecast accuracy, while weakening the influence of functional biases that might reduce the forecast accuracy. The most significant component of this context is the creation of the DMO. 
Politically, the DMO was an independent group with responsibility for managing the forecasting process. The introduction of an additional group, with its intrinsic political agenda, might increase the complexity of the forecasting process and thereby reduce its predictability or complicate its control. However, the DMO, albeit neutral, was by no means impotent. Through its mandate to manage the forecasting process and its accountability for forecast accuracy, the DMO had the ability to determine the impact of different functions on forecast accuracy and to enforce procedural changes that mediated their influence. Specifically, with respect to biases due to incentive misalignment, because the DMO managed all exchanges of information associated with the process, it determined how other functions' power and influence would be expressed in the forecasts, and it could enforce the expression of this influence in production requests and inventory allocation decisions.

The direct empowerment of the DMO group at Leitax resulted from its relationship with the planning function, which made the actual production requests and inventory allocations. The planning function, in turn, derived its power from the corporate mandate for a company turnaround. While the particular means of empowerment of the DMO group are not consequential (alternative sources of power could have been just as effective), the fact that the DMO was empowered was crucial to the creation and the success of the forecasting process.

The empowerment of the DMO may seem antithetical to a consensual approach. In theory, the presence of a neutral body has been argued to be important for managing forecasting processes vulnerable to political influence (Deschamps, 2004), as a politically neutral actor is understood to have a limited desire to exercise power and is more easily deferred to for arbitration. In practice, an empowered entity such as the DMO needs to be careful in how it uses this power so as to maintain the perception of neutrality. In particular, the perception of neutrality was reinforced by the DMO's mandate to manage the forecasting process (as opposed to the actual forecasts), by the simplicity and transparency of the information exchanges (basic Excel templates), and by the performance metrics (recall the director's argument for the simplest measure of forecast accuracy).

The forecasting process is itself an example of the empowerment of a positive influence on forecasting performance. The feasibility of the implemented forecasting process derived from the creation of the DMO and from the director's ability to assure the attendance and participation of the VPs in the consensus forecasting meetings. While the forecasting process might have been initially successful because of this convening power, the process later became self-sustaining as it achieved credibility among the participants and the users of the final consensus. At that point, the principal source of power (the ability to influence the forecast) became expertise and internal reputation, as recognized by the forecasting group based on past forecasting performance. Interestingly, this historical performance also reinforced the need for a collaborative approach to forecasting, as no function had distinguished itself as capable of managing the process single-handedly. Nevertheless, since the forecasting approach accommodated some influence by functional groups, the DMO could be criticized for not fully eliminating opportunities for incentive misalignment.
Functional groups represent stakeholders with information sets and goals relevant to the organization's viability; it is therefore important to listen to those interests. It is, however, virtually impossible to determine a priori whether the influence of any function will increase or decrease forecast accuracy. Furthermore, its own blind spots precluded the DMO from fully representing these stakeholders. Consequently, it is conceivably impossible to eliminate incentive misalignment entirely if stakeholder interests are to be represented in the process.

In summary, the DMO managed these complicating factors in its development of the forecasting process by generating the proposed consensus forecast and having groups react to, or account for, major differences with it. The process implemented by the DMO shifted the conversation from functional groups pushing their respective agendas to justifying the sources of the forecasts and explicitly recognizing areas of expertise or dominant knowledge (e.g., sales in the short term, PPS in the long term). The participatory process, and the credibility that accrued to the forecasting group as forecast accuracy improved, made the final consensus forecast more acceptable to the rest of the organization and increased its effectiveness in coordinating procurement, manufacturing, and sales (Hagdorn-van der Meijden et al., 1994).

6. Emerging Challenges

The deployment of a new system can introduce entirely new dynamics in terms of influence over forecasts and active biases. Here, we describe two missteps suffered in 2003, relate performance feedback from participants in the consensus forecasting process, and then explore the implications for the design of the process and the structure that supports it.

6.1 Product Forecasting Missteps

The first misstep occurred when product introduction and early sales were being planned for a new product broadly reviewed and praised in the press for its innovative features. Although the forecasting process succeeded in dampening to some degree the specialized press's enthusiasm, the product was nevertheless woefully over-forecasted, and the resulting excess inventory led to a write-off of more than 1% of lifetime-volume materials cost. The second misstep occurred when Leitax introduced a new product that was based on a highly successful model currently being sold to the professional market. Leitax considered the new product inferior in quality, since it was cheaper to manufacture, and targeted it at "prosumers," a marketing segment considered to be between the consumer and professional segments. Despite warnings from the DMO suggesting the possibility of cannibalization, the consensus forecast had the existing product continuing its impressive sales rate throughout the introduction of the new product. The larger-than-expected cannibalization resulted in an obsolescence write-off for the existing product of 3% of lifetime-volume materials cost.

These two missteps suggest a particular case of "groupthink" (Janis, 1972), whereby optimism, initially justified, withstands contradictory data or logic as functional (or individual) biases common to all parties tend to be reinforced. Since the forecasting process seeks agreement, when the input perspectives are similar but inaccurate, as in the missteps described above, the process can potentially reinforce the inaccurate perceptions.
In response to these missteps, the DMO group considered changing the focus of the consensus meetings from the next two quarters towards the life-cycle quantity forecasts for product families and allowing the allocation to quarters to be more historically driven. This would serve to add another set of forecasts to the process to help improve accuracy. This focus on expected sales over the life of the product would also help mediate the intentional biases driven by natural interest in 25 immediate returns that would surface when the two nearest quarters were instead the focus. The DMO group, however, had to be careful about how the changes were introduced so as to maintain its neutral stance and not create the perception of generating forecasts rather than the forecasting process. 6.2 Interview Evaluations General feedback from interviewees reported lingering issues with process compliance. For instance, more frequently than the DMO expected, the process yielded a channel inventory level greater than the desired 7 to 8 weeks. This was explained by overly optimistic forecasts from sales and sales’ over selling into the channel in response to its incentives. Some wondered about the appropriate effect of the finance group on the process. Sales, for example, complained that finance used the consensus meetings to push sales for higher revenues. Gap-filling exercises channeling feedback from finance back into the business assumptions, sometimes effected significant changes to forecasts that seemed inappropriate. The inappropriate effects of sales and finance described above can be compared with the dynamics that existed before implementation to reveal emerging challenges associated with the forecasting process. For example, under DMO’s inventory allocation policies, the only line of influence for sales is its forecasts — the process had eliminated the other sources of influence that sales had. Thus, sales would explicitly bias its forecasts in an attempt to swing regional sales in the preferred direction. For finance, the available lines of influence are the gap-filling exercises and the interaction within the consensus forecasting meetings. Given that the incentives and priorities of these functions had not changed, the use of lines of influence in this manner is not unexpected. However, it is not easy to predict exactly how these lines of influence will be used. 6.3 Implications for Coordination System Design The consensus forecasting process occasioned lines of influence on forecasts to be used in ways that were not originally intended, and did not always dampen justifiable optimism regarding product 26 performance. The latter dynamic can be characterized as a group bias whereby functional (individual) biases/beliefs common to all parties tend to be reinforced. Since the process seeks agreement, when the input perspectives are similar but inaccurate, as in the case of the missteps described above, the process can potentially reinforce the inaccurate perceptions. Both dynamics illustrate how, in response to a particular set of processes, responsibilities, and structures — what we call a coordination system (Oliva and Watson, 2004) — new behavioral dynamics outside of those intended by the process might develop, introducing weaknesses (and conceivably strengths) not previously observed in the process. In principle, a coordinating system should be designed to account and compensate for individual and functional biases of supply chain partners. 
But coordination system design choices predispose individual partners to certain problem spaces, simplifications, and heuristics. Because the design of a coordinating system determines the complexity of each partner's role, it is also, in part, responsible for the biases exhibited by the partners. In other words, changes attendant on a process put in place to counter particular biases might unintentionally engender a different set of biases. The recognition that a coordinating system both needs to account for, and is in part responsible for, partners' biases introduces a level of design complexity not currently acknowledged. Managers need to be aware of this possibility and monitor the process in order to identify unintended adjustments, recognizing that neither unintended behavioral adjustments nor their effects are easily predicted, given the many process interactions that might be involved. This dual relationship between the coordination system and the associated behavioral schema (see Figure 3), although commonly remarked upon in the organizational theory literature (e.g., Barley, 1986; Orlikowski, 1992), has not previously been examined in the forecasting or operations management literatures.

7. Conclusion

The purpose of case studies is not to argue for specific solutions, but rather to develop explanations (Yin, 1984). By categorizing potential sources of functional bias into a typology (intentional, that is, driven by incentive misalignment and dispositions of power; and unintentional, that is, related to informational and procedural blind spots), we address a range of forecasting challenges that may not show up exactly as they do at Leitax, but are similarly engendered. By completely mapping the steps of the forecasting process, its accompanying organizational structure, and its role within the planning processes of the firm, we detail the relevant elements of an empirically observed phenomenon within its context. By capturing the political motivations and exchanges and exploring how the deployed process and structure mitigated the existing biases, we assess the effectiveness of the process along a dimension that has largely been ignored by the forecasting literature. Finally, through the assessment of new sources of bias after the deployment of the coordination system, we identify the adaptive nature of the political game played by the actors. By synthesizing our observations of the relevant elements of this coordinated forecasting system, previous findings from the forecasting literature, and credible deductions linking the coordination system to the mitigation of the intentional and unintentional biases identified and to the emergence of new ones, we provide sufficient evidence for the following propositions concerning the management of organizational forecasts (Meredith, 1998):

Proposition I: Consensus forecasting, together with the supporting elements of information exchange and assumption elicitation, can prove a sufficient mechanism for constructively managing the influence of both types of bias on forecasts while being adequately responsive to the needs of a fast-paced supply chain.

Proposition II: The creation of an independent group responsible for managing the consensus forecasting process, an approach that we distinguish from generating forecasts directly, provides an effective way of managing the political conflict and the informational and procedural shortcomings occasioned by organizational differentiation.
Proposition III: While a coordination system—the relevant processes, roles and responsibilities, and structure—can be designed to address existing individual and functional biases in the organization, the new coordination system will in turn generate new individual and functional biases.

The empirical and theoretical grounding of our propositions suggests further implications for practitioners and researchers alike. The typology of functional biases as intentional and unintentional highlights managers' need to be aware that better and more integrated information may not be sufficient for a good forecast, and that attention must also be paid to designing the process so that the social and political dimensions of the organization are effectively managed. Finally, new intentional and unintentional biases can emerge directly from newly implemented processes. This places a continuous responsibility on managers to monitor implemented systems for emerging biases, to understand the principles for dealing with different types of biases, and to make changes to these systems to maintain operational and organizational gains. Generating forecasts may involve an ongoing process of iterative coordination system improvement.

For researchers in operations management and forecasting methods, the process implemented by Leitax might be seen, at a basic level, as a "how to" for implementing in the organization many of the lessons from the research in forecasting and behavioral decision-making. More important, the case illustrates the organizational and behavioral context of forecasting, a context that, to our knowledge, had not been fully addressed. Given the role of forecasting in the operations management function, and as argued in the introduction, future research is needed to continue to build frameworks for managing forecasting along the organizational and political dimensions in operational settings. Such research should be primarily empirical, including both exploratory and theory-building methodologies that can draw heavily from the current forecasting literature, which has uncovered many potential benefits for forecasting methods ex situ.

References

Ang, S., M.J. O'Connor, 1991. The effect of group-interaction processes on performance in time-series extrapolation. Int. J. Forecast. 7 (2), 141-149.
Antle, R., G.D. Eppen, 1985. Capital rationing and organizational slack in capital-budgeting. Management Sci. 31 (2), 163-174.
Armstrong, J.S. (ed.), 2001a. Principles of Forecasting. Kluwer Academic Publishers, Boston.
Armstrong, J.S., 2001b. Combining forecasts. In: J.S. Armstrong (Ed), Principles of Forecasting. Kluwer Academic Publishers, Boston, pp. 417-439.
Barley, S., 1986. Technology as an occasion for structuring: Evidence from observations of CT scanners and the social order of radiology departments. Adm. Sci. Q. 31, 78-108.
Beach, L.R., V.E. Barnes, J.J.J. Christensen-Szalanski, 1986. Beyond heuristics and biases: A contingency model of judgmental forecasting. J. Forecast. 5, 143-157.
Bower, P., 2005. 12 most common threats to sales and operations planning process. J. Bus. Forecast. 24 (3), 4-14.
Bretschneider, S.I., W.L. Gorr, 1987. State and local government revenue forecasting. In: S. Makridakis, and S.C. Wheelwright (Eds), The Handbook of Forecasting: A Manager's Guide. Wiley, New York, pp. 118-134.
Bretschneider, S.I., W.L. Gorr, 1989. Forecasting as a science. Int. J. Forecast. 5 (3), 305-306.
Bretschneider, S.I., W.L. Gorr, G. Grizzle, E. Klay, 1989. Political and organizational influences on the accuracy of forecasting state government revenues. Int. J. Forecast. 5 (3), 307-319.
Bromiley, P., 1987. Do forecasts produced by organizations reflect anchoring and adjustment. J. Forecast. 6 (3), 201-210.
Cachon, G.P., M.A. Lariviere, 2001. Contracting to assure supply: How to share demand forecasts in a supply chain. Management Sci. 47 (5), 629-646.
Checkland, P.B., J. Scholes, 1990. Soft Systems Methodology in Action. Wiley, Chichester, UK.
Copeland, T., T. Koller, J. Murrin, 1994. Valuation: Measuring and Managing the Value of Companies, 2nd ed. Wiley, New York.
Crick, B., 1962. In Defence of Politics. Weidenfeld and Nicolson, London.
Crittenden, V.L., L.R. Gardiner, A. Stam, 1993. Reducing conflict between marketing and manufacturing. Ind. Market. Manag. 22 (4), 299-309.
Dahl, R.A., 1970. Modern Political Analysis, 2nd ed. Prentice Hall, Englewood Cliffs, NJ.
Deschamps, E., 2004. The impact of institutional change on forecast accuracy: A case study of budget forecasting in Washington State. Int. J. Forecast. 20 (4), 647-657.
Edmundson, R.H., M.J. Lawrence, M.J. O'Connor, 1988. The use of non-time series information in sales forecasting: A case study. J. Forecast. 7, 201-211.
Eisenhardt, K.M., 1989. Building theories from case study research. Acad. Manage. Rev. 14 (4), 532-550.
Fildes, R., R. Hastings, 1994. The organization and improvement of market forecasting. J. Oper. Res. Soc. 45 (1), 1-16.
Fisher, M.L., A. Raman, 1996. Reducing the cost of demand uncertainty through accurate response to early sales. Oper. Res. 44 (1), 87-99.
Fisher, M.L., J.H. Hammond, W.R. Obermeyer, A. Raman, 1994. Making supply meet demand in an uncertain world. Harvard Bus. Rev. 72 (3), 83-93.
Gaeth, G.J., J. Shanteau, 1984. Reducing the influence of irrelevant information on experienced decision makers. Organ. Behav. Hum. Perf. 33, 263-282.
Gaur, V., S. Kesavan, A. Raman, M.L. Fisher, 2007. Estimating demand uncertainty using judgmental forecast. Man. Serv. Oper. Manage. 9 (4), 480-491.
Goodwin, P., G. Wright, 1993. Improving judgmental time series forecasting: A review of guidance provided by research. Int. J. Forecast. 9 (2), 147-161.
Griffin, A., J.R. Hauser, 1992. Patterns of communication among marketing, engineering and manufacturing: A comparison between two new product teams. Management Sci. 38 (3), 360-373.
Griffin, A., J.R. Hauser, 1996. Integrating R&D and Marketing: A review and analysis of the literature. J. Prod. Innovat. 13 (1), 191-215.
Hagdorn-van der Meijden, L., J.A.E.E. van Nunen, A. Ramondt, 1994. Forecasting—bridging the gap between sales and manufacturing. Int. J. Prod. Econ. 37, 101-114.
Hamel, G., C.K. Prahalad, 1989. Strategic intent. Harvard Bus. Rev. 67 (3), 63-78.
Hammond, J.H., 1990. Quick response in the apparel industry. Harvard Business School Note 690-038. Harvard Business School, Boston.
Hammond, J.H., A. Raman, 1995. Sport Obermeyer Ltd. Harvard Business School Case 695-002. Harvard Business School, Boston.
Hanke, J.E., A.G. Reitsch, 1995. Business Forecasting, 5th ed. Prentice Hall, Englewood Cliffs, NJ.
Hughes, M.S., 2001. Forecasting practice: Organizational issues. J. Oper. Res. Soc. 52 (2), 143-149.
Janis, I.L., 1972. Victims of Groupthink. Houghton Mifflin, Boston.
Kahn, K.B., J.T. Mentzer, 1994. The impact of team-based forecasting. J. Bus. Forecast. 13 (2), 18-21.
Keating, E.K., R. Oliva, N. Repenning, S.F. Rockart, J.D. Sterman, 1999. Overcoming the improvement paradox. Eur. Mgmt. J. 17 (2), 120-134.
Lapide, L., 2005. An S&OP maturity model. J. Bus. Forecast. 24 (3), 15-20.
Lawrence, M.J., R.H. Edmundson, M.J. O'Connor, 1986. The accuracy of combining judgmental and statistical forecasts. Management Sci. 32 (12), 1521-1532.
Lim, J.S., M.J. O'Connor, 1995. Judgmental adjustment of initial forecasts: Its effectiveness and biases. J. Behav. Decis. Making 8, 149-168.
Mahmoud, E., R. DeRoeck, R. Brown, G. Rice, 1992. Bridging the gap between theory and practice in forecasting. Int. J. Forecast. 8 (2), 251-267.
Makridakis, S., S.C. Wheelwright, R.J. Hyndman, 1998. Forecasting: Methods and Applications, 3rd ed. Wiley, New York.
Mentzer, J.T., C.C. Bienstock, 1998. Sales Forecasting Management. Sage, Thousand Oaks, CA.
Meredith, J., 1998. Building operations management theory through case and field research. J. Oper. Manag. 16, 441-454.
Oliva, R., 2001. Tradeoffs in responses to work pressure in the service industry. California Management Review 43 (4), 26-43.
Oliva, R., J.D. Sterman, 2001. Cutting corners and working overtime: Quality erosion in the service industry. Management Sci. 47 (7), 894-914.
Oliva, R., N. Watson, 2004. What drives supply chain behavior? Harvard Bus. Sch., June 7, 2004. Available from: http://hbswk.hbs.edu/item.jhtml?id=4170&t=bizhistory.
Oliva, R., N. Watson, 2006. Cross functional alignment in supply chain planning: A case study of sales & operations planning. Working Paper 07-001. Harvard Business School, Boston.
Orlikowski, W., 1992. The duality of technology: Rethinking the concept of technology in organizations. Organ. Sci. 3 (3), 398-427.
Pfeffer, J., G.R. Salancik, 1974. Organizational decision making as a political process: The case of a university budget. Adm. Sci. Q. 19 (2), 135-151.
Rowe, G., G. Wright, 1999. The Delphi technique as a forecasting tool: Issues and analysis. Int. J. Forecast. 12 (1), 73-92.
Rowe, G., G. Wright, 2001. Expert opinions in forecasting: The role of the Delphi technique. In: J.S. Armstrong (Ed), Principles of Forecasting. Kluwer Academic Publishers, Norwell, MA, pp. 125-144.
Salancik, G.R., J. Pfeffer, 1977. Who gets power – and how they hold on to it: A strategic-contingency model of power. Org. Dyn. 5 (3), 3-21.
Sanders, N.R., L.P. Ritzman, 1992. Accuracy of judgmental forecasts: A comparison. Omega 20, 353-364.
Sanders, N.R., K.B. Manrodt, 1994. Forecasting practices in U.S. corporations: Survey results. Interfaces 24, 91-100.
Sanders, N.R., L.P. Ritzman, 2001. Judgmental adjustment of statistical forecasts. In: J.S. Armstrong (Ed), Principles of Forecasting. Kluwer Academic Publishers, Boston, pp. 405-416.
Shapiro, B.P., 1977. Can marketing and manufacturing coexist? Harvard Bus. Rev. 55 (5), 104-114.
Stein, J.C., 1997. Internal capital markets and the competition for corporate resources. Journal of Finance 52 (1), 111-133.
Terwiesch, C., Z.J. Ren, T.H. Ho, M.A. Cohen, 2005. An empirical analysis of forecast sharing in the semiconductor equipment supply chain. Management Sci. 51 (2), 208-220.
Voorhees, W.R., 2000. The impact of political, institutional, methodological, and economic factors on forecast error. PhD dissertation, Indiana University.
Watson, M.C., 1996. Forecasting in the Scottish electronics industry. Int. J. Forecast. 12 (3), 361-371.
Watson, N., R. Oliva, 2005. Leitax (A). Harvard Business School Case 606-002. Harvard Business School, Boston.
Wheelwright, S.C., K.B. Clark, 1992. Revolutionizing Product Development. Wiley, New York.
Yin, R., 1984. Case Study Research. Sage, Beverly Hills, CA.
Figure 1. Forecast Accuracy Performance. [Chart not reproduced: sell-through and sell-in forecast accuracy (0%-100%) plotted against the accuracy goal for the quarters Dec-Feb 2002 through Sep-Nov 2003, with the project redesign and go-live dates marked.] † The dip in forecasting performance in Sep-Nov 2003 was the result of a relocation of a distribution center.

Figure 2. Consensus Forecasting Process. [Diagram not reproduced: industry, historical, and sales information feed the business assumptions package; the statistical forecast (DMO), top-down forecast (PPS), and bottom-up forecast (SD) are combined into the consensus forecast, which supports joint planning.]

Figure 3. Dual Relationship between Coordination System and Behavioral Dynamics. [Diagram not reproduced: individual or functional biases influence the design of the coordination system (processes, roles, structure, values), and the coordination system in turn creates/generates new biases.]

Table 1. Process Steps and Biases Mitigated. (In the original table, check marks indicate which of the three bias types—procedural blind spots, informational blind spots, incentive misalignment—each mechanism of the consensus forecasting process addresses.)
- Business assumptions package: multiple sources; multiple interpretations; interpretation source explicitly labeled.
- Functional forecasts: private information not in the BAP; functional interpretation of assumptions; aggregate forecasts at the family level; ignoring planning expectations and supply chain constraints.
- Proposed consensus forecast: weighted average of functional forecasts; weights based on past proven performance; initial anchoring for the consensus process.
- Final consensus meeting: resolution of diverging forecasts; uncovering private information used in functional forecasts; uncovering private interpretations of public information.
- Forecast review: financial and operational review; BAP revision.
Rogelio Oliva and Noel Watson. Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.

Managing Functional Biases in Organizational Forecasts: A Case Study of Consensus Forecasting in Supply Chain Planning

Rogelio Oliva
Mays Business School, Texas A&M University, College Station, TX 77843-4217
Ph 979-862-3744 | Fx 979-845-5653 | roliva@tamu.edu

Noel Watson
Harvard Business School, Soldiers Field Rd., Boston, MA 02163
Ph 617-495-6614 | Fx 617-496-4059 | nwatson@hbs.edu

Draft: December 14, 2007. Do not quote or cite without permission from the authors.

Abstract

To date, little research has been done on managing the organizational and political dimensions of generating and improving forecasts in corporate settings. We examine the implementation of a supply chain planning process at a consumer electronics company, concentrating on the forecasting approach around which the process revolves. Our analysis focuses on the forecasting process and how it mediates and accommodates the functional biases that can impair forecast accuracy. We categorize the sources of functional bias into intentional, driven by misalignment of incentives and the disposition of power within the organization, and unintentional, resulting from informational and procedural blind spots. We show that the forecasting process, together with the supporting mechanisms of information exchange and elicitation of assumptions, is capable of managing the potential political conflict and the informational and procedural shortcomings. We also show that the creation of an independent group responsible for managing the forecasting process, an approach that we distinguish from generating forecasts directly, can stabilize the political dimension sufficiently to enable process improvement to be steered. Finally, we find that while a coordination system—the relevant processes, roles and responsibilities, and structure—can be designed to address existing individual and functional biases in the organization, the new coordination system will in turn generate new individual and functional biases. The introduced framework of functional biases (whether those biases are intentional or not), the analysis of the political dimension of the forecasting process, and the idea of a coordination system are new constructs to better understand the interface between operations management and other functions.

Keywords: forecasting, marketing/operations interface, sales and operations planning, organizational issues, case/field study.

1. Introduction

The importance of forecasting for operations management cannot be overstated.
Within the firm, forecast generation and sharing is used by managers to guide the distribution of resources (Antle and Eppen, 1985; Stein, 1997), to provide targets for organizational efforts (Hamel and Prahalad, 1989; Keating et al., 1999), and to integrate the operations management function with the marketing (Crittenden et al., 1993; Griffin and Hauser, 1992), sales (Lapide, 2005; Mentzer and Bienstock, 1998), and product development (Griffin and Hauser, 1996; Wheelwright and Clark, 1992) functions. Errors in forecasting often cross the organizational boundary and translate into misallocation of resources that can impact shareholders’ return on investment (Copeland et al., 1994), and affect customers’ perception of service quality (Oliva, 2001; Oliva and Sterman, 2001). Across the supply chain, forecast sharing is a prevalent practice for proactively aligning capacity and managing supply (Cachon and Lariviere, 2001; Terwiesch et al., 2005). Over the past five years, demand/supply planning processes for planning horizons in the intermediate range have been receiving increasing attention, especially as the information technology originally intended to facilitate this planning has achieved limited success. Crossfunctional coordination among groups such as sales, operations, and finance is needed to ensure the effectiveness of some of these planning processes and the forecasting that supports it. Such processes have been referred to in the managerial literature as sales and operations planning (S&OP) processes (Bower, 2005; Lapide, 2005). Forecasts within this multi-functional setting that characterizes many organizations cannot be operationalized or analyzed in an organizational and political vacuum. However, to date, little research has been done on managing the organizational and political dimensions of generating and improving forecasts in corporate settings; dimensions which determine significantly the overall effectiveness of the forecasting process (Bretschneider and Gorr, 1989, p. 305). 2 We present a case study that illustrates the implementation of an S&OP process, concentrating in detail on the forecasting approach around which the planning process revolves. Our study describes how individuals and functional areas (whether intentionally or not) biased the organizational forecast and how the forecasting process implemented managed those biases in a supply chain setting that requires responsive planning. We define biases broadly here to include those occasioned by functional and individual incentives, and informational or procedural shortcomings. Our analysis reveals that the forecasting process, together with the supporting mechanisms of information exchange and elicitation of assumptions, is capable of managing the political conflict and the informational and procedural shortcomings that accrue to organizational differentiation. We show that the creation of an independent group responsible for managing the forecasting process can stabilize the political dimension sufficiently to enable process improvement to be steered. The deployment of a new system, however, introduces entirely new dynamics in terms of influence over forecasts and active biases. The recognition that the system both needs to account, and is in part responsible, for partners’ biases introduces a level of design complexity not currently acknowledged in the literature or by practitioners. 
The rest of this paper is structured as follows: In section 2, we review the relevant forecasting literature motivating the need for our case study and articulating hypotheses for findings in our research setting. Our research site and methodological design are described in section 3. In section 4 we report the conditions that triggered the deployment of the forecasting process, assess its impact in the organization, and describe the process, its actors, and dynamics in detail. Section 5 contains the core of our analysis: we analyze the organizational and process changes that were deployed, and assess how intentional and unintentional biases in the organization were managed through these mechanisms. Some of the challenges the organization faces under the new forecasting process are explored in section 6, which also provides a framework for understanding the need to continuously 3 monitor and adapt to the processes. The paper concludes with an evaluation of the implications of our findings for practitioners and researchers. 2. Research Motivation Most organizations use forecasts as input to comprehensive planning processes—such as financial planning, budgeting, sales planning, and finished goods inventory planning—that are charged with accomplishing particular goals. This implies that the forecast needs not only to be accepted by external parties, but also to guide efforts of the organization. Thus, an important measure of forecast effectiveness is how much they support these planning needs. The fit between forecasting and planning is an under-studied relationship in the literature, but at a minimum level, the forecast process needs to match the planning process in terms of the frequency and speed in which the forecast is produced. The forecasting horizon and accuracy of the forecast should be such that it allows the elaboration and execution of plans to take advantage of the forecast (Makridakis et al., 1998; Mentzer and Bienstock, 1998). For example, a planning approach such as Quick Response (Hammond, 1990) requires as input a sense of the uncertainty surrounding the forecasts in order to manage production. Thus, the forecasting process complementing such a planning process should have a means of providing a relative measure of uncertainty (Fisher et al., 1994; Fisher and Raman, 1996). Nevertheless, forecasting is not an exact science. In an organizational setting, the forecasting process requires information from multiple sources (e.g., intelligence about competitors, marketing plans, channel inventory positions, etc.) and in a variety of formats, not always amenable to integration and manipulation (Armstrong, 2001b; Fildes and Hastings, 1994; Lawrence et al., 1986; Makridakis et al., 1998). Existing case studies in the electronic and financial industries (e.g., Hughes, 2001; Watson, 1996) emphasize the informational deficiency in creating organization forecasts as a result of poor communication across functions. The multiplicity of data sources and 4 formats creates two major challenges for a forecasting process. First, since not all information can be accurately reflected in a statistical algorithm, judgment calls are a regular part of forecasting processes (Armstrong, 2001a; Sanders and Manrodt, 1994; Sanders and Ritzman, 2001). The judgmental criteria to make, adjust, and evaluate forecasts can result in individual and functional limitations and biases that potentially compromise the quality of the forecasts. 
Second, since the vast majority of the information providers and the makers of those judgment calls are also the users of the forecast, there are strong political forces at work explicitly attempting to bias the outcome of the process. Thus the forecasting process, in addition to fitting the organization's planning requirements, needs to explicitly manage the biases (whether individual or functional) that might affect the outcome of the process. We recognize two potential sources of biases in the organization — intentional and unintentional — that incorporate the judgmental, informational, and political dynamics that affect forecasting performance. In the following subsections, we provide analytical context from relevant literature to articulate frameworks and expectations that will help the reader to assimilate the case details in these two dimensions.

2.1 Managing Biases due to Incentive Misalignment and Dispositions of Power

Intentional sources of bias (i.e., an inherent interest and ability to maintain a level of misinformation in the forecasts) are created by incentive misalignment across functions coupled with a particular disposition of power within the organization. Local incentives will drive different functional groups to want to influence the forecast process in directions that might benefit their own agenda. For example, a sales department — compensated through sales commissions — might push to inflate the forecast to ensure ample product availability, while the operations group — responsible for managing suppliers, operating capacity, and inventories — might be interested in a forecast that smoothes demand and eliminates costly production swings (Shapiro, 1977). Power is the ability of the functional group to influence the forecast, and is normally gained by access to a resource (e.g., skill, information) that is scarce and valued as critical by the organization; the ability to leverage such resources is contingent on the degree of uncertainty surrounding the organizational decision-making process (Salancik and Pfeffer, 1977). For example, the power that a sales organization could extract from intimate knowledge of customer demand diminishes as that demand becomes stable and predictable to the rest of the organization. Mahmoud et al. (1992), in discussing the gap between forecasting theory and practice, refer in particular to the effects of disparate functional agendas and incentives as the political gap, while according to Hanke and Reitsch (1995) the most common source of bias in a forecasting context is political pressure within a company. Thus, forecasts within a multi-functional setting cannot be operationalized or analyzed in an organizational and political vacuum. As sources of incentive misalignment and contributors to the dispositions of power within the organization, disparate functional agendas and incentives, standardized organizational decision-making processes, and shared norms and values all have an impact on the forecasting process and forecast accuracy (Bromiley, 1987). However, most of the academic literature only examines the individual and group unintentional biases that can affect forecasting ex situ (Armstrong, 2001a), with little research directed at managing the multi-objective and political dimensions of forecast generation and improvement in corporate settings (Bretschneider and Gorr, 1989; Deschamps, 2004).
Research on organizational factors and intentional sources of biases in forecasting has been done in the public sector where political agendas are explicit. This research suggests that directly confronting differences in goals and assumptions increases forecast accuracy. Bretschneider and Gorr (1987) and Bretschneider et al. (1989) found that a state’s forecast accuracy improved if forecasts were produced independently by the legislature and executive, and then combined through a formal consensus procedure that exposed political positions and forecast assumptions. Deschamps 6 (2004) found forecast accuracy to be improved by creating a neutral negotiation space and an independent political agency with dedicated forecasters to facilitate the learning of technical and consensus forecasting skills. As different organizational functions have access to diverse commodities of power (e.g., sales has a unique access to current customer demand) we recognize that each group will have unique ways to influence the outcome of the forecasting process. The process through which groups with different interests reach accommodation ultimately rests on this disposition of power and it is referred to in the political science and management literatures as a political process (Crick, 1962; Dahl, 1970; Pfeffer and Salancik, 1974; Salancik and Pfeffer, 1977). In forecasting, a desirable outcome of a well-managed political contention would be a process that enables the known positive influences on forecast accuracy while weakening the negative influences on forecast accuracy. That is, a politically savvy process should take into consideration the commodities of power owned by the different functional areas and the impact that they might have on forecast accuracy, and explicitly manage the disposition of power to minimize negative influences on forecast accuracy. 2.2 Abating Informational and Procedural Blind Spots Although functional goals and incentives can translate into intentional efforts to bias a forecast, other factors can affect forecasts in ways which managers might not be aware. Thus, we recognize unintentional, but systematic, sources of forecast error resulting from what we term blind spots, ignorance in specific areas which affect negatively an individual’s or group’s forecasts. Blind spots can be informational — related to an absence of otherwise feasibly collected information on which a forecast should be based — or procedural — related to the algorithms and tasks used to generate forecasts given the information available. This typology is an analytic one; the types are not always empirically distinct. Some informational blind spots could result from naiveté in forecasting methodology (procedural blind spot) that does not allow the forecaster to use the available 7 information. Yet, while the two types may intermingle in an empirical setting, they tend to derive from different conditions and require different countermeasures. We expect then that a forecasting process should try to manage the informational and procedural blind spots that may exist for the process. Some individual biases that have been shown to affect subjective forecasting include over-confidence, availability, anchor and adjustment, and optimism (Makridakis et al., 1998). Forecasters, even when provided with statistical forecasts as guides, have difficulty assigning less weight to their own forecasts (Lim and O'Connor, 1995). 
Cognitive information processing limitations and other biases related to the selection and use of information can also compromise the quality of plans. Gaeth and Shanteau (1984), for example, showed that irrelevant information adversely affected judgment, and Beach et al. (1986) showed that when the information provided is poor, forecasters might expend little effort to ensure that forecasts are accurate. Such individual biases can affect both the quality of the information collected and used to infer forecasts (informational blind spots), and the rules of inference themselves (procedural blind spots). Research suggests process features and processing capabilities that might potentially mitigate the effect of individual biases. For example, combining forecasts with other judgmental or statistical forecasts tends to improve forecast accuracy (Lawrence et al., 1986). Goodwin and Wright (1993) summarize the research and empirical evidence that supports six strategies for improving judgmental forecasts: using decomposition, improving forecasters' technical knowledge, enhancing data presentation, mathematically correcting biases, providing feedback to forecasters to facilitate learning, and combining forecasts or using groups of forecasters. Group forecasting is thought to contribute two important benefits to judgmental forecasting: (1) broad participation in the forecasting process maximizes group diversity, which reduces political bias and the tendency to cling to outmoded assumptions, assumptions that can contribute to both procedural and informational blind spots (Voorhees, 2000), and (2) the variety of people in groups enriches the contextual information available to the process, reducing informational blind spots and thereby improving the accuracy of forecasts (Edmundson et al., 1988; Sanders and Ritzman, 1992). Some researchers maintain that such variety is even useful for projecting the expected accuracy of forecasts (Gaur et al., 2007; Hammond and Raman, 1995). Group dynamics can, however, have unwanted effects on the time to achieve consensus, the quality of consensus (whether true agreement or acquiescence), and thus, the quality of the forecasts. Kahn and Mentzer (1994), who found that a team approach led to greater satisfaction with the forecasting process, also reported mixed results regarding the benefits of group forecasting. Dysfunctional group dynamics reflect group characteristics such as the participants' personal dynamics, politics, information asymmetries, differing priorities, and varying information assimilation and processing capabilities. Group processes can vary in terms of the degree of interaction afforded participants and the structure of the rules for interaction. The most popular structured, non-interacting group forecasting approach is the Delphi method, wherein participants' successive individual forecasts elicit anonymous feedback in the form of summary statistics (Rowe and Wright, 2001). Structured interacting groups, those with rules governing interaction, have not been found to perform significantly worse than groups that use the Delphi method (Rowe and Wright, 1999). However, Ang and O'Connor (1991) found that modified consensus (in which an individual's forecast was the basis for the group's discussion) outperformed forecasts based on group mean, consensus, and Nominal Group Technique (Delphi with some interaction).
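To make the combination strategies reviewed above concrete, the following sketch (in Python, with purely hypothetical forecasts and weights rather than data from the case) contrasts an equally weighted combination with one weighted by each source's track record:

    from typing import Dict

    def simple_average(forecasts: Dict[str, float]) -> float:
        # Equally weighted combination of the available forecasts.
        return sum(forecasts.values()) / len(forecasts)

    def weighted_average(forecasts: Dict[str, float], weights: Dict[str, float]) -> float:
        # Combination weighted by each source's past accuracy (weights are relative,
        # so they need not sum to one).
        total = sum(weights[name] for name in forecasts)
        return sum(forecasts[name] * weights[name] for name in forecasts) / total

    # Hypothetical unit forecasts for one product family from three sources.
    forecasts = {"judgmental_sales": 120_000, "judgmental_marketing": 100_000, "statistical": 105_000}
    # Hypothetical weights, e.g., proportional to each source's historical accuracy.
    weights = {"judgmental_sales": 0.5, "judgmental_marketing": 0.3, "statistical": 0.2}

    print(simple_average(forecasts))             # approx. 108333.3
    print(weighted_average(forecasts, weights))  # 111000.0

Even the equally weighted combination draws on information from all three sources; the weighted variant simply formalizes the literature's recommendation that past accuracy determine each source's influence.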
2.3 Conclusions from Review The above review suggests that while the current academic literature recognizes the need for an understanding of the organizational and political context in which the forecasting process takes place, the literature still lacks the operational and organizational frameworks for analyzing the 9 generation of organizational forecasts. Our research aims to address this shortcoming by developing insights into managing the impact of the organizational and political dimensions of forecasting. The literature does lead us to expect a forecasting process that is attuned to the organizational and political context in which it operates, to be based on a group process, to combine information and forecasts from multiple sources, and to be deliberate about the way it allows different interests to affect forecast accuracy. We opted to explore this set of issues through a case study since the forecasting process has not been analyzed previously from this perspective, and our interest is to develop the constructs to understand its organizational and political context (Meredith, 1998). We consequently focus our analysis not on the forecast method (the specific technique used to arrive at a forecast), but on the forecasting process, that is, the way the organization has systematized information gathering, decision-making, and communication activities, and the organizational structure that supports that process. 3. Research Methodology 3.1 Case Site The case site is a northern California-headquartered consumer electronics firm called Leitax (name has been disguised) that sold its products primarily through retailers such as Best Buy and Target and operated distribution centers (DCs) in North America, Europe, and the Far East. The Leitax product portfolio consisted of seven to nine models, each with multiple SKUs that were produced by contract-manufacturers with plants in Asia and Latin America. The product life across the models, which was contracting, ranged from nine to fifteen months, with high-end, feature-packed, products tending to have the shortest product lives. The site was chosen because prior to the changes in the forecasting process, the situation was characterized by having shortcomings along the two dimensions described above. That is, the forecasting process was characterized by informational and procedural blind spots and was marred by intentional manipulation of information to advance functional agendas. The case site represents 10 an exemplar for the study of the management of these dimensions, and constitutes a unique opportunity to test the integration of the two strands of theory that make explicit predictions about unintentional and intentional biases (Yin, 1984). The forecasting approach introduced was considered at least reasonably successful by many of the organizational participants and its forecasting accuracy, and accompanying improvements of operational indicators (e.g., inventory turns, obsolescence), corroborates this assessment. The issues and dynamics addressed by the implementation of the participatory forecasting process are issues that are not unique to Leitax, but characterize a significant number of organizations. Thus, the site provides a rich setting in which to seek to understand the dynamics involved in managing an organizational forecasting process and from which we expect to provoke theory useful for academics and practitioners alike. 
Our case study provides one reference for managing these organizational forecasts within an evolving business and operations strategy. As such, it does more to suggest potential relationships, dynamics, and solutions, than to definitively define or propose them. 3.2 Research Design Insights were derived primarily from an intensive case study research (Eisenhardt, 1989; Yin, 1984) with the following protocol: the research was retrospective; the primary initiative studied, although evolving, was fully operational at the time the research was undertaken. Data were collected through 25 semi-structured, 45- to 90-minute interviews conducted with leaders, analysts, and participants from all functional areas involved in the forecasting process, as well as with heads of other divisions affected by the process. The interviews were supplemented with extensive reviews of archival data including internal and external memos and presentations, and direct observation of two planning and forecasting meetings. The intent of the interviews was to understand the interviewees’ role in the forecasting process, their perception of the process, and to explore explicitly the unintentional biases due to blind spots as well as the political agendas of the different 11 actors and functional areas. To assess the political elements of the forecasting process, we explicitly asked interviewees about their incentives and goals. We then triangulated their responses with answers from other actors and asked for explanations for observed behavior during the forecasting meetings. When appropriate, we asked interviewees about their own and other parties’ sources of power, i.e., the commodity through which they obtained the ability to influence an outcome—e.g., formal authority, access to important information, external reputation (Checkland and Scholes, 1990). Most interviews were conducted in the organization’s northern California facility, with some follow-up interviews done by telephone. Given the nature of the research, interviewees were not required to stay within the standard questions; interviewees perceived to be exploring fruitful avenues were permitted to continue in that direction. All interviews were recorded. Several participants were subsequently contacted and asked to elaborate on issues they had raised or to clarify comments. The data is summarized in the form of a detailed case study that relates the story of the initiative and current challenges (Watson and Oliva, 2005). Feedback was solicited from the participants, who were asked to review their quotations, and the case, for accuracy. The analysis of the data was driven by three explicit goals: First, to understand the chronology of the implemented changes and the motivation behind those changes (this analysis led to the realization of mistrust across functional areas and the perceived biases that hampered the process). Second, to understand and to document the implemented forecasting process, the roles that different actors took within the process, and the agreed values and norms that regulated interactions within the forecasting group; and third, to assess how different elements of the process addressed or mitigated the individual or functional biases identified. 4. Forecasting at Leitax The following description of the consensus forecasting process at Leitax was summarized from the interviews with the participants of the process. 
The description highlights the political dimension of 12 the situation at Leitax by describing the differing priorities of the different functional groups and how power to influence the achievement of those priorities was expressed. 4.1 Historical and Organizational Context Prior to 2001, demand planning at Leitax was ill-defined, with multiple private forecasts the norm. For new product introductions and mid-life product replenishment, the sales directors, (Leitax employed sales directors for three geographical regions—the Americas; Europe, the Middle East, and Africa; and Asia Pacific—and separate sales directors for Latin America and Canada) made initial forecasts that were informally distributed to the operations and finance groups, sometimes via discussions in hallways. These shared forecasts were intended to be used by the operations group as guides for communicating build or cancel requests to the supply chain. The finance group, in turn, would use these forecasts to guide financial planning and monitoring. These sales forecasts, however, were often mistrusted or second-guessed when they crossed into other functional areas. For example, with inventory shortages as its primary responsibility, the operations group would frequently generate its own forecasts to minimize the perceived exposure to inventory discrepancies, and marketing would do likewise when it anticipated that promotions might result in deviations from sales forecasts. While the extent of bias in the sales forecast was never clearly determined; the mere perception that sales had an incentive to maintain high inventory positions in the channel was sufficient to compromise the credibility of its forecasts. Sales might well have intended to communicate accurate information to the other functions, but incentives to achieve higher sell-in rates tainted the objectivity of its forecasting, which occasioned the other functions’ distrust and consequent generation of independent forecasts. Interviewees, furthermore, suspected executive forecasts to be biased by goal setting pressures, operational forecasts to be biased by inventory liability and utilization policies, and finance forecasts to be biased by market expectations and profitability 13 thresholds. These biases stem from what are believed to be naturally occurring priorities of these functions. Following two delayed product introductions that resulted in an inventory write-off of approximately 10% of FY01-02 revenues, major changes were introduced during the fall of 2001 including the appointment of a new CEO and five new vice-presidents for product development, product management, marketing, sales, and operations. In April 2002, the newly hired director of planning and fulfillment launched a project with the goal of improving the velocity and accuracy of planning information throughout the supply chain. Organizationally, management and ownership of the forecasting process fell to the newly created Demand Management Organization (DMO), which had responsibility for managing, synthesizing, challenging, and creating demand projections to pace Leitax’s operations worldwide. The three analysts who comprised the group, which reported to the director of planning and fulfillment, were responsible not only for preparing statistical forecasts but also for supporting all the information and coordination requirements of the forecasting process. 
By the summer of 2003, a stable planning and coordination system was in place and by the fall of 2003, Leitax had realized dramatic improvements in forecasting accuracy. Leitax defined forecast accuracy as one minus the ratio of the absolute deviation of sales from forecast to the forecast (FA=1-|sales-forecast|/forecast). Three-month ahead sell-through (sell-in) forecast accuracy improved from 58% (49%) in the summer of 2002 to 88% (84%) by fall 2003 (see Figure 1). Sell-in forecasts refer to expected sales from Leitax’s DCs into their resellers, and sell-through forecasts refer to expected sales from the resellers. Forecast accuracy through ’05 was sustained at an average of 85% for sell-through. Better forecasts translated into significant operational improvements: Inventory turns increased to 26 in Q4 ’03 from 12 the previous year, and average on hand inventory decreased from $55M to $23M. Excess and obsolescence costs decreased from an average of $3M 14 for fiscal years 2000-2002 to practically zero in fiscal year 2003. The different stages of the forecasting process are described in detail in the next section. 4.2 Process Description By the fall of 2003, a group that included the sales directors and VPs of marketing, product strategy, finance, and product management, were consistently generating a monthly forecast. The process, depicted in Figure 2, begins with the creation of an information package, referred to as the business assumptions package, from which functional forecasts are created. These forecasts are combined and discussed at consensus forecasting meetings until there is a final forecast upon which there is agreement. Business Assumptions Package The starting point for the consensus forecasting process, the business assumptions package (BAP), contained price plans for each SKU, intelligence about market trends and competitors’ products and marketing strategies, and other information of relevance to the industry. The product planning and strategy, marketing, and DMO groups guided assessments of the impact of the information on future business performance entered into the BAP (an Excel document with multiple tabs for different types of information and an accompanying PowerPoint presentation). These recommendations were carefully labeled as such and generally made in quite broad terms. The BAP generally reflected a one-year horizon, and was updated monthly and discussed and agreed upon by the forecasting group. The forecasting group generally tried not to exclude information deemed relevant from the BAP even when there were differences in opinion about the strength of the relevance. The general philosophy was that of an open exchange of information that at least one function considered relevant. Functional Forecasts Once the BAP was discussed, the information in it was used by three groups: product planning and strategy, sales, and the DMO, to elaborate functional forecasts at the family level, leaving the 15 breakdown of that forecast into specific SKU demand to the sales and packing schedules. The three functional forecasts were made for sell-through sales and without any consideration to potential supply chain capacity constraints. Product planning and strategy (PPS), a three-person group that supported all aspects of product life cycle from launch to end-of-life, and assessed competitive products and effects of price changes on demand, prepared a top-down forecast of global expected demand. 
The PPS forecast reflected a worldwide estimate of product demand derived from product and region specific forecasts based on historical and current trends of market-share and the current portfolio of products being offered by Leitax and its competitors. The PPS group relied on external market research groups to spot current trends, and used appropriate history as precedent in assessing competitive situations and price effects. The sales directors utilized a bottom-up approach to generate their forecast. Sales directors from all regions aggregated their own knowledge and that of their account managers about channel holdings, current sales, and expected promotions to develop a forecast based on information about what was happening in the distribution channel. The sales directors’ bottom-up forecast was first stated as a sell-in forecast. Since incentives for the sales organization were based on commissions on sell-in, this was how account managers thought of the business. The sell-in forecast was then translated into a sell-through forecast that reflected the maximum level of channel inventory (inventory at downstream DC’s and at resellers). The sales directors’ bottom-up forecast, being based on orders and retail and distribution partner feedback, was instrumental in determining the first 13 weeks of the master production schedule. The DMO group prepared, on the basis of statistical inferences from past sales, a third forecast of sell-through by region intended primarily to provide a reference point for the other two forecasts. Significant deviations from the statistical forecast would require that the other forecasting groups investigate and justify their assumptions. 16 The three groups’ forecasts were merged into a proposed consensus forecast using a formulaic approach devised by the DMO that gave more weight to the sales directors’ forecast in the short term. Consensus Forecast Meetings The forecasting group met monthly to evaluate the three independent forecasts and the proposed consensus forecast. The intention was that all parties at the meeting would understand the assumptions that drove each forecast and agree to the consensus forecast based on their understanding of these assumptions and their implications. Discussion tended to focus on the nearest two quarters. In addition to some detail planning for new and existing products, the consensus forecast meetings were also a source of feedback on forecasting performance. In measuring performance, the DMO estimated the 13-week (the longest lead-time for a component in the supply chain) forecasting accuracy based on the formula that reflected the fractional forecast error (FA=1-|sales-forecast|/forecast). Finalizing Forecasts The agreed upon final consensus forecast (FCF) was sent to the finance department for financial roll up. Finance combined the FCF with pricing and promotion information from the BAP to establish expected sales and profitability. Forecasted revenues were compared with the company’s financial targets; if gaps were identified, an attempt was made to ensure that the sales department was not under-estimating market potential. If revisions made at this point did not result in satisfactory financial performance, the forecasting group would return to the business assumptions and, together with the marketing department, revise the pricing and promotion strategies to meet financial goals and analyst expectations. 
These gap-filling exercises, as they were called, usually occurred at the end of each quarter and could result in significant changes to forecasts. The approved FCF was released and used to generate the master production schedule. Operations validation of the FCF was ongoing. The FCF was used to generate consistent and 17 reliable production schedules for Leitax’s contract manufacturers and distributors. Suppliers responded by improving the accuracy and opportunity of information flows regarding the status of the supply chain and their commitment to produce received orders. More reliable production schedules also prepared suppliers to meet future expected demand. Capacity issues were communicated and discussed in the consensus meetings and potential deviations from forecasted sales incorporated in the BAP. 5. Analysis In this section we examine how the design elements of the implemented forecasting process addressed potential unintentional functional biases (i.e., informational and procedural blind spots), and resolved conflicts that emerge from misalignments of functional incentives. We first take a process perspective and analyze how each stage worked to minimize functional and collective blind spots. In the second subsection, we present an analysis of how the process managed the commodities of power to improve forecast accuracy. Table 1 summarizes the sources of intentional and unintentional biases addressed by each stage of the consensus forecasting process. 5.1 Process Analysis Business Assumptions Package The incorporation of diverse information sources is one of the main benefits reported for group forecasting (Edmundson et al., 1988; Sanders and Ritzman, 1992). The BAP document explicitly incorporated and assembled information in a common, sharable format that facilitated discussion by the functional groups. The sharing of information not only eliminated some inherent functional blind spots, but also provided a similar starting point for, and thereby improved the accuracy of, the individual functional forecasts (Fildes and Hastings, 1994). The guidance and recommendations provided by the functional groups’ assessments of the impact of information in the BAP on potential demand represented an additional point of convergence for assimilating diverse information. The fact that the functions making these assessments were expected to have greater 18 competencies for determining such assessments, helped to address potential procedural blind spots for the functions that used these assessments. The fact that these assessments and interpretations were explicitly labeled as such made equally explicit their potential for bias. Finally, the generation of the BAP in the monthly meetings served as a warm-up to the consensus forecasting meeting inasmuch as it required consensus about the planning assumptions. Functional Forecasts The functional forecasts that were eventually combined into the proposed consensus forecast were generated by the functional groups, each following a different methodological approach. Although the BAP was shared, each group interpreted the information it contained according to its own motivational or psychological biases. Moreover, there existed private information that had not been economical or feasible to include in, or that had been strategically withheld from, the BAP (e.g., actual customer intended orders, of which only sales was cognizant). 
The combination of the independently generated forecasts using even a simple average would yield a forecast that captured some of the unique and relevant information in, and thereby improved the accuracy of, the constituent forecasts (Lawrence et al., 1986). At Leitax, the functional forecasts were combined into the proposed consensus forecast using an algorithm more sophisticated than a simple average, based, as the literature recommends (Armstrong, 2001b), on the track record of the individual forecasts. By weighting the sales directors' forecast more heavily in the short term and the PPS's forecast more heavily in the long term, the DMO recognized each function's different level of intimacy with different temporal horizons, thereby reducing the potential impact of functional blind spots. Through this weighting, the DMO also explicitly managed each group's degree of influence on the forecasting horizon, which could have served as political appeasement.

Consensus Forecasting Meetings

The focus of the forecasting process on sell-through potentially yielded a clearer signal of market demand, as sell-in numbers tended to be a distorted signal of demand; the sales force was known to have an incentive to influence sell-in in the short term, and different retailers had time-varying appetites for product inventory. Discussion in the monthly consensus forecasting meetings revolved mainly around objections to the proposed consensus forecast. In this context, the proposed consensus forecast provided an anchoring point that was progressively adjusted to arrive at the final consensus forecast (FCF). Anchoring on the proposed consensus forecast not only reduced the cognitive effort required of the forecasting team members, but also eliminated their psychological biases and reduced the functional biases that might still be present in the functional forecasts. There is ample evidence in the literature that an anchoring and adjustment heuristic improves the accuracy of a consensus approach to forecasting (Ang and O'Connor, 1991). Discussion of objections to the proposed consensus forecast was intended to surface the private information or private interpretation of public information that motivated the objections. These discussions also served to reveal differences in the inference rules that functions used to generate forecasts. Differences might result from information that was not revealed in the BAP, from incomplete rules of inference (i.e., rules that do not consider all information), or from faulty rules of inference (i.e., rules that exhibited inconsistencies in logic). Faulty forecast assumptions were corrected and faulty rules of inference refined over time. The consensus meetings were also a source of feedback to the members of the forecasting group on forecasting performance. The feedback rendered observable not only unique and relevant factors that affect the accuracy of the overall forecasting process, but, through the three independent functional forecasts, other factors such as functional or psychological biases. For example, in early 2004 the DMO presented evidence that sales' forecasts tended to over-estimate near-term and under-estimate long-term sales. Fed back to the functional areas, these assessments of the accuracy of their respective forecasts created awareness of potential blind spots. The functional forecasts' historical accuracy also served to guide decision-making under conditions that demanded precision, such as allocation under constrained capacity or inventory.
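For concreteness, the logic of the proposed consensus forecast and of Leitax's accuracy metric can be sketched as follows; the function names and horizon-dependent weights shown are illustrative only, since the case reports the weighting principle (sales weighted more heavily in the short term, PPS in the long term) but not the DMO's actual formula:

    def propose_consensus(sales_f: float, pps_f: float, dmo_f: float, horizon_months: int) -> float:
        # Blend the three functional forecasts, shifting weight from the sales
        # directors' forecast to the PPS forecast as the horizon lengthens.
        if horizon_months <= 3:
            w = {"sales": 0.6, "pps": 0.2, "dmo": 0.2}   # hypothetical near-term weights
        else:
            w = {"sales": 0.2, "pps": 0.6, "dmo": 0.2}   # hypothetical long-term weights
        return w["sales"] * sales_f + w["pps"] * pps_f + w["dmo"] * dmo_f

    def forecast_accuracy(actual_sales: float, forecast: float) -> float:
        # Leitax's metric: FA = 1 - |sales - forecast| / forecast.
        return 1.0 - abs(actual_sales - forecast) / forecast

    print(propose_consensus(120_000, 100_000, 105_000, horizon_months=3))  # 113000.0
    # A hypothetical 100,000-unit forecast against actual sales of 88,000 units gives
    # FA = 1 - 12,000/100,000 = 0.88, the sell-through accuracy level reported for fall 2003.
    print(forecast_accuracy(88_000, 100_000))  # 0.88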
The director of planning and fulfillment’s selection of a measure of performance to guide these discussions is also worthy of note. Some considered this measure of accuracy, which compared forecasts to actual sales as if actual sales represented true demand, simplistic. Rather than a detailed, complex measure of forecast accuracy, he opted to use a metric that in its simplicity was effective only in providing a directional assessment of forecast quality (i.e., is forecast accuracy improving over time?). Tempering the pursuit of improvement of this accuracy metric, the director argued that more sophisticated metrics (e.g., considering requested backlog to estimate final demand) would be more uncertain, convey less information, and prevent garnering sufficient support to drive improvement of the forecasting process. Supporting Financial and Operational Planning Leitax’s forecasting process, having the explicit goal of supporting financial and operational planning, allowed these functions to validate the agreed upon consensus forecast by transforming it into a revenue forecast and a master production schedule. Note, however, the manner in which exceptions to the forecast were treated: if the financial forecast was deemed unsatisfactory or the production schedule not executable because of unconsidered supply chain issues, a new marketing and distribution plan was developed and incorporated in the BAP. Also, note that this approach was facilitated by the process ignoring capacity constraints in estimating demand. It was common before the implementation of the forecasting process for forecasts to be affected by perceptions of present and future supply chain capacity, which resulted in a subtle form of self-fulfilling prophecy; even if manufacturing capacity became available, deflated forecasts would have positioned lower quantities of raw materials and components in the supply chain. By reflecting financial goals and operational restrictions in the BAP and asking the forecasting group (and functional areas) to update their forecasts based on the new set of assumptions, instead of adjusting the final consensus forecast directly, Leitax embedded the forecasting process in the 21 planning process. Reviewing the new marketing and product development plans reflected in the BAP, and validating it through the lenses of different departments via the functional and consensus forecast, essentially ensured that all of the functional areas involved in the process were re-aligned with the firm’s needs and expectations. Separation of the forecasting and decision-making processes has been found to be crucial to forecast accuracy (Fildes and Hastings, 1994). We discuss the contributions of this process to cross-functional coordination and organizational alignment in a separate paper (Oliva and Watson, 2006). 5.2 Political Analysis As shown in Table 1, certain components of the forecasting process dealt directly with the biases created by incentive misalignment. However, the implementation of the forecasting process was accompanied with significant structural additions, which we examine here via a political analysis. As mentioned in the section 2, we expect the forecasting process to create a social and procedural context that enables, through the use of commodities of power, the positive influences on forecast accuracy, while weakening the influence of functional biases that might reduce the forecast accuracy. The most significant component of this context is the creation of the DMO. 
Politically, the DMO was an independent group with responsibility for managing the forecasting process. The introduction of an additional group and its intrinsic political agenda might increase the complexity of the forecasting process and thereby reduce its predictability or complicate its control. However, the DMO, albeit neutral, was by no means impotent. Through the mandate to manage the forecasting process and being accountable for its accuracy, the DMO had the ability to determine the impact of different functions on forecast accuracy and to enforce procedural changes to mediate their influence. Specifically, with respect to biases due to incentive misalignment, because the DMO managed all exchanges of information associated with the process, it determined how other functions' power and influence would be expressed in the forecasts and could enforce the expression of this influence in production requests and inventory allocation decisions. The direct empowerment of the DMO group at Leitax resulted from its relationship with the planning function that made actual production requests and inventory allocations. The planning function, in turn, derived its power from the corporate mandate for a company turnaround. While the particular means of empowerment of the DMO group are not consequential — alternative sources of power could have been just as effective — the fact that the DMO was empowered was crucial for the creation and the success of the forecasting process. The empowerment of the DMO may seem antithetical to a consensual approach. In theory, the presence of a neutral body has been argued to be important for managing forecasting processes vulnerable to political influence (Deschamps, 2004), as a politically neutral actor is understood to have a limited desire to exercise power and is more easily deferred to for arbitration. In practice, an empowered entity such as the DMO needs to use this power carefully to maintain the perception of neutrality. In particular, the perception of neutrality was reinforced by the DMO's mandate to manage the forecasting process (as opposed to actual forecasts), the simplicity and transparency of the information exchanges (basic Excel templates), and performance metrics (recall the director's argument for the simplest measure of forecast accuracy). The forecasting process is itself an example of the empowerment of a positive influence on forecasting performance. The feasibility of the implemented forecasting process derived from the creation of the DMO and the director's ability to assure the attendance and participation of the VPs in the consensus forecasting meetings. While the forecasting process might have been initially successful because of this convening power, the process later became self-sustaining when it achieved credibility among the participants and the users of the final consensus. At that point in time, the principal source of power (ability to influence the forecast) became expertise and internal reputation as recognized by the forecasting group based on past forecasting performance. Interestingly, this historical performance also reinforced the need for a collaborative approach to forecasting, as no function had distinguished itself as possessing the ability to manage the process single-handedly. Nevertheless, since the forecasting approach accommodated some influence by functional groups, the DMO could be criticized for not fully eliminating opportunities for incentive misalignment.
Functional groups represent stakeholders with information sets and goals relevant to the organization’s viability; thus, it is important to listen to those interests. It is, however, virtually impossible to determine a priori whether the influence of any function will increase or decrease forecast accuracy. Furthermore, its own blind spots precluded the DMO from fully representing these stakeholders. Consequently, it is conceivably impossible to eliminate incentive misalignment entirely if stakeholder interests are to be represented in the process. Summarizing, the DMO managed the above complicating factors in its development of the forecasting process by generating the proposed consensus forecast and having groups react to, or account for, major differences with it. The process implemented by the DMO shifted the conversation from functional groups pushing for their respective agendas to justifying the sources of the forecasts and explicitly recognizing areas of expertise or dominant knowledge (e.g., sales in the short term, PPS in the long term). The participatory process and the credibility that accrued to the forecasting group consequent to improvements in forecast accuracy made the final consensus forecast more acceptable to the rest of the organization and increased its effectiveness in coordinating procurement, manufacturing, and sales (Hagdorn-van der Meijden et al., 1994).

6. Emerging Challenges

The deployment of a new system can introduce entirely new dynamics in terms of influence over forecasts and active biases. Here, we describe two missteps suffered in 2003, relate performance feedback from participants in the consensus forecasting process, and then explore the implications for the design of the process and the structure that supports it.

6.1 Product Forecasting Missteps

The first misstep occurred when product introduction and early sales were being planned for a new product broadly reviewed and praised in the press for its innovative features. Although the forecasting process succeeded in dampening to some degree the specialized press’ enthusiasm, the product was nevertheless woefully over-forecasted, and excess inventory resulted in a write-off of more than 1% of lifetime volume materials cost. The second misstep occurred when Leitax introduced a new product that was based on a highly successful model currently being sold to the professional market. Leitax considered the new product inferior in quality since it was cheaper to manufacture and targeted it at “prosumers,” a marketing segment considered to be between the consumer and professional segments. Despite warnings from the DMO suggesting the possibility of cannibalization, the consensus forecast had the existing product continuing its impressive sales rate throughout the introduction of the new product. The larger-than-expected cannibalization resulted in an obsolescence write-off for the existing product of 3% of lifetime volume materials cost. These two missteps suggest a particular case of “groupthink” (Janis, 1972), whereby optimism, initially justified, withstands contradictory data or logic as functional (or individual) biases common to all parties tend to be reinforced. Since the forecasting process seeks agreement, when the input perspectives are similar but inaccurate, as in the case of the missteps described above, the process can potentially reinforce the inaccurate perceptions.
In response to these missteps, the DMO group considered changing the focus of the consensus meetings from the next two quarters towards the life-cycle quantity forecasts for product families and allowing the allocation to quarters to be more historically driven. This would serve to add another set of forecasts to the process to help improve accuracy. This focus on expected sales over the life of the product would also help mediate the intentional biases driven by natural interest in 25 immediate returns that would surface when the two nearest quarters were instead the focus. The DMO group, however, had to be careful about how the changes were introduced so as to maintain its neutral stance and not create the perception of generating forecasts rather than the forecasting process. 6.2 Interview Evaluations General feedback from interviewees reported lingering issues with process compliance. For instance, more frequently than the DMO expected, the process yielded a channel inventory level greater than the desired 7 to 8 weeks. This was explained by overly optimistic forecasts from sales and sales’ over selling into the channel in response to its incentives. Some wondered about the appropriate effect of the finance group on the process. Sales, for example, complained that finance used the consensus meetings to push sales for higher revenues. Gap-filling exercises channeling feedback from finance back into the business assumptions, sometimes effected significant changes to forecasts that seemed inappropriate. The inappropriate effects of sales and finance described above can be compared with the dynamics that existed before implementation to reveal emerging challenges associated with the forecasting process. For example, under DMO’s inventory allocation policies, the only line of influence for sales is its forecasts — the process had eliminated the other sources of influence that sales had. Thus, sales would explicitly bias its forecasts in an attempt to swing regional sales in the preferred direction. For finance, the available lines of influence are the gap-filling exercises and the interaction within the consensus forecasting meetings. Given that the incentives and priorities of these functions had not changed, the use of lines of influence in this manner is not unexpected. However, it is not easy to predict exactly how these lines of influence will be used. 6.3 Implications for Coordination System Design The consensus forecasting process occasioned lines of influence on forecasts to be used in ways that were not originally intended, and did not always dampen justifiable optimism regarding product 26 performance. The latter dynamic can be characterized as a group bias whereby functional (individual) biases/beliefs common to all parties tend to be reinforced. Since the process seeks agreement, when the input perspectives are similar but inaccurate, as in the case of the missteps described above, the process can potentially reinforce the inaccurate perceptions. Both dynamics illustrate how, in response to a particular set of processes, responsibilities, and structures — what we call a coordination system (Oliva and Watson, 2004) — new behavioral dynamics outside of those intended by the process might develop, introducing weaknesses (and conceivably strengths) not previously observed in the process. In principle, a coordinating system should be designed to account and compensate for individual and functional biases of supply chain partners. 
But coordination system design choices predispose individual partners to certain problem space, simplifications, and heuristics. Because the design of a coordinating system determines the complexity of each partner's role, it is also, in part, responsible for the biases exhibited by the partners. In other words, changes attendant on a process put in place to counter particular biases might unintentionally engender a different set of biases. The recognition that a coordinating system both needs to account, and is in part responsible, for partners’ biases, introduces a level of design complexity not currently acknowledged. Managers need to be aware of this possibility and monitor the process in order to identify unintended adjustments, recognizing that neither unintended behavioral adjustments nor their effects are easily predicted given the many process interactions that might be involved. This dual relationship between the coordination system and associated behavioral schema (see Figure 3), although commonly remarked in the organizational theory literature (e.g., Barley, 1986; Orlikowski, 1992), has not previously been examined in the forecasting or operations management literatures. 7. Conclusion The purpose of case studies is not to argue for specific solutions, but rather to develop explanations 27 (Yin 1984). By categorizing potential sources of functional biases into a typology—intentional, that is, driven by incentive misalignment and dispositions of power, and unintentional, that is, related to informational and procedural blind spots—we address a range of forecasting challenges that may not show up as specifically as they do at Leitax, but are similarly engendered. By a complete mapping of the steps of the forecasting process, its accompanying organizational structure and its role within the planning processes of the firm, we detail the relevant elements of an empirically observed phenomenon occurring within its contexts. By capturing the political motivations and exchanges and exploring how the deployed process and structure mitigated the existing biases, we assess the effectiveness of the process in a dimension that has largely been ignored by the forecasting literature. Finally, through the assessment of new sources of biases after the deployment of the coordination system, we identify the adaptive nature of the political game played by the actors. Through the synthesis of our observations on these relevant elements of this coordinated forecasting system, previous findings from the forecasting literature, and credible deductions linking the coordination system to the mitigation of intentional and unintentional biases identified and the emergence of new ones, we provide sufficient evidence for the following propositions concerning the management of organizational forecasts (Meredith 1998): Proposition I: Consensus forecasting, together with the supporting elements of information exchange and assumption elicitation, can prove a sufficient mechanism for constructively managing the influence of both biases on forecasts while being adequately responsive to managing a fast-paced supply chain. Proposition II: The creation of an independent group responsible for managing the consensus forecasting process, an approach that we distinguish from generating forecasts directly, provides an effective way of managing the political conflict and informational and procedural shortcomings occasioned by organizational differentiation. 
Proposition III: While a coordination system—the relevant processes, roles and responsibilities, and structure—can be designed to address existing individual and functional biases in the organization, the new coordination system will in turn generate new individual and functional biases. 28 The empirical and theoretical grounding of our propositions suggest further implications for practitioners and researchers alike. The typology of functional biases into intentional and unintentional highlights managers’ need to be aware that better and more integrated information may not be sufficient for a good forecast, and that attention must be paid as well to designing the process so that the social and political dimensions of the organization are effectively managed. Finally, new intentional and unintentional biases can emerge directly from newly implemented processes. This places a continuous responsibility on managers monitoring implemented systems for emerging biases and understanding the principles for dealing with different types of biases, to make changes to these systems to maintain operational and organizational gains. Generating forecasts may involve an ongoing process of iterative coordination system improvement. For researchers in operations management and forecasting methods, the process implemented by Leitax might be seen, at a basic level, as a “how to” for implementing in the organization many of the lessons from the research in forecasting and behavioral decision-making. More important, the case illustrates the organizational and behavioral context of forecasting, a context that, to our knowledge, had not been fully addressed. Given the role of forecasting in the operations management function, and as argued in the introduction, future research is needed to continue to build frameworks for managing forecasting along the organizational and political dimensions in operational settings. Such research should be primarily empirical, including both exploratory and theory building methodology that can draw heavily from the current forecasting literature, which has uncovered many potential benefits for forecasting methods ex situ. References Ang, S., M.J. O'Connor, 1991. The effect of group-interaction processes on performance in timeseries extrapolation. Int. J. Forecast. 7 (2), 141-149. Antle, R., G.D. Eppen, 1985. Capital rationing and organizational slack in capital-budgeting. Management Sci. 31 (2), 163-174. 29 Armstrong, J.S. (ed.), 2001a. Principles of Forecasting. Kluwer Academic Publishers, Boston. Armstrong, J.S., 2001b. Combining forecasts. In: J.S. Armstrong (Ed), Principles of Forecasting. Kluwer Academic Publisher, Boston, pp. 417-439. Barley, S., 1986. Technology as an occasion for structuring: Evidence from observations of CT scanners and the social order of radiology departments. Adm. Sci. Q. 31, 78-108. Beach, L.R., V.E. Barnes, J.J.J. Christensen-Szalanski, 1986. Beyond heuristics and biases: A contingency model of judgmental forecasting. J. Forecast. 5, 143-157. Bower, P., 2005. 12 most common threats to sales and operations planning process. J. Bus. Forecast. 24 (3), 4-14. Bretschneider, S.I., W.L. Gorr, 1987. State and local government revenue forecasting. In: S. Makridakis, and S.C. Wheelwright (Eds), The Handbook of Forecasting: A Manager's Guide. Wiley, New York, pp. 118-134. Bretschneider, S.I., W.L. Gorr, 1989. Forecasting as a science. Int. J. Forecast. 5 (3), 305-306. Bretschneider, S.I., W.L. Gorr, G. Grizzle, E. Klay, 1989. 
Political and organizational influences on the accuracy of forecasting state government revenues. Int. J. Forecast. 5 (3), 307-319. Bromiley, P., 1987. Do forecasts produced by organizations reflect anchoring and adjustment. J. Forecast. 6 (3), 201-210. Cachon, G.P., M.A. Lariviere, 2001. Contracting to assure supply: How to share demand forecasts in a supply chain. Management Sci. 47 (5), 629-646. Checkland, P.B., J. Scholes, 1990. Soft Systems Methodology in Action. Wiley, Chichester, UK. Copeland, T., T. Koller, J. Murrin, 1994. Valuation: Measuring and Managing the Value of Companies, 2nd ed. Wiley, New York. Crick, B., 1962. In Defence of Politics. Weidenfeld and Nicolson, London. Crittenden, V.L., L.R. Gardiner, A. Stam, 1993. Reducing conflict between marketing and manufacturing. Ind. Market. Manag. 22 (4), 299-309. Dahl, R.A., 1970. Modern Political Analysis, 2nd ed. Prentice Hall, Englewood Cliffs, NJ. Deschamps, E., 2004. The impact of institutional change on forecast accuracy: A case study of budget forecasting in Washington State. Int. J. Forecast. 20 (4), 647-657. Edmundson, R.H., M.J. Lawrence, M.J. O'Connor, 1988. The use of non-time series information in sales forecasting: A case study. J. Forecast. 7, 201-211. Eisenhardt, K.M., 1989. Building theories from case study research. Acad. Manage. Rev. 14 (4), 532-550. 30 Fildes, R., R. Hastings, 1994. The organization and improvement of market forecasting. J. Oper. Res. Soc. 45 (1), 1-16. Fisher, M.L., A. Raman, 1996. Reducing the cost of demand uncertainty through accurate response to early sales. Oper. Res. 44 (1), 87-99. Fisher, M.L., J.H. Hammond, W.R. Obermeyer, A. Raman, 1994. Making supply meet demand in an uncertain world. Harvard Bus. Rev. 72 (3), 83-93. Gaeth, G.J., J. Shanteau, 1984. Reducing the influence of irrelevant information on experienced decision makers. Organ. Behav. Hum. Perf. 33, 263-282. Gaur, V., S. Kesavan, A. Raman, M.L. Fisher, 2007. Estimating demand uncertainty using judgmental forecast. Man. Serv. Oper. Manage. 9 (4), 480-491. Goodwin, P., G. Wright, 1993. Improving judgmental time series forecasting: A review of guidance provided by research. Int. J. Forecast. 9 (2), 147-161. Griffin, A., J.R. Hauser, 1992. Patterns of communication among marketing, engineering and manufacturing: A comparison between two new product teams. Management Sci. 38 (3), 360- 373. Griffin, A., J.R. Hauser, 1996. Integrating R&D and Marketing: A review and analysis of the literature. J. Prod. Innovat. 13 (1), 191-215. Hagdorn-van der Meijden, L., J.A.E.E. van Nunen, A. Ramondt, 1994. Forecasting—bridging the gap between sales and manufacturing. Int. J. Prod. Econ. 37, 101-114. Hamel, G., C.K. Prahalad, 1989. Strategic intent. Harvard Bus. Rev. 67 (3), 63-78. Hammond, J.H., 1990. Quick response in the apparel Industry. Harvard Business School Note 690- 038. Harvard Business School, Boston. Hammond, J.H., A. Raman, 1995. Sport Obermeyer Ltd. Harvard Business School Case 695-002. Harvard Business School, Boston. Hanke, J.E., A.G. Reitsch, 1995. Business Forecasting, 5th ed. Prentice Hall, Englewood Cliffs, NJ. Hughes, M.S., 2001. Forecasting practice: Organizational issues. J. Oper. Res. Soc. 52 (2), 143-149. Janis, I.L., 1972. Victims of Groupthink. Houghton Mifflin, Boston. Kahn, K.B., J.T. Mentzer, 1994. The impact of team-based forecasting. J. Bus. Forecast. 13 (2), 18- 21. Keating, E.K., R. Oliva, N. Repenning, S.F. Rockart, J.D. Sterman, 1999. Overcoming the improvement paradox. Eur. Mgmt. J. 17 (2), 120-134. 
Lapide, L., 2005. An S&OP maturity model. J. Bus. Forecast. 24 (3), 15-20. 31 Lawrence, M.J., R.H. Edmundson, M.J. O'Connor, 1986. The accuracy of combining judgmental and statistical forecasts. Management Sci. 32 (12), 1521-1532. Lim, J.S., M.J. O'Connor, 1995. Judgmental adjustment of initial forecasts: Its effectiveness and biases. J. Behav. Decis. Making 8, 149-168. Mahmoud, E., R. DeRoeck, R. Brown, G. Rice, 1992. Bridging the gap between theory and practice in forecasting. Int. J. Forecast. 8 (2), 251-267. Makridakis, S., S.C. Wheelwright, R.J. Hyndman, 1998. Forecasting: Methods and Applications, 3rd ed. Wiley, New York. Mentzer, J.T., C.C. Bienstock, 1998. Sales Forecasting Management. Sage, Thousand Oaks, CA. Meredith, J., 1998. Building operations management theory through case and field research. J. Oper. Manag. 16, 441-454. Oliva, R., 2001. Tradeoffs in responses to work pressure in the service industry. California Management Review 43 (4), 26-43. Oliva, R., J.D. Sterman, 2001. Cutting corners and working overtime: Quality erosion in the service industry. Management Sci. 47 (7), 894-914. Oliva, R., N. Watson. 2004. What drives supply chain behavior? Harvard Bus. Sch., June 7, 2004. Available from: http://hbswk.hbs.edu/item.jhtml?id=4170&t=bizhistory. Oliva, R., N. Watson, 2006. Cross functional alignment in supply chain planning: A case study of sales & operations planning. Working Paper 07-001. Harvard Business School, Boston. Orlikowski, W., 1992. The duality of technology: Rethinking the concept of technology in organizations. Organ. Sci. 3 (3), 398-427. Pfeffer, J., G.R. Salancik, 1974. Organizational decision making as a political process: The case of a university budget. Adm. Sci. Q. 19 (2), 135-151. Rowe, G., G. Wright, 1999. The Delphi technique as a forecasting tool: Issues and analysis. Int. J. Forecast. 12 (1), 73-92. Rowe, G., G. Wright, 2001. Expert opinions in forecasting: The role of the Delphi technique. In: J.S. Armstrong (Ed), Principles of Forecasting. Kluwer Academic Publishers, Norwell, MA, pp. 125-144. Salancik, G.R., J. Pfeffer, 1977. Who gets power – and how they hold on to it: A strategiccontingency model of power. Org. Dyn. 5 (3), 3-21. Sanders, N.R., L.P. Ritzman, 1992. Accuracy of judgmental forecasts: A comparison. Omega 20, 353-364. Sanders, N.R., K.B. Manrodt, 1994. Forecasting practices in U.S. corporations: Survey results. Interfaces 24, 91-100. 32 Sanders, N.R., L.P. Ritzman, 2001. Judgmental adjustment of statistical forecasts. In: J.S. Armstrong (Ed), Principles of Forecasting. Kluwer Academic Publishers, Boston, pp. 405-416. Shapiro, B.P., 1977. Can marketing and manufacturing coexist? Harvard Bus. Rev. 55 (5), 104-114. Stein, J.C., 1997. Internal capital markets and the competition for corporate resources. Journal of Finance 52 (1), 111-133. Terwiesch, C., Z.J. Ren, T.H. Ho, M.A. Cohen, 2005. An empirical analysis of forecast sharing in the semiconductor equipment supply chain. Management Sci. 51 (2), 208-220. Voorhees, W.R., 2000. The impact of political, institutional, methodological, and economic factors on forecast error. PhD dissertation, Indiana University. Watson, M.C., 1996. Forecasting in the Scottish electronics industry. Int. J. Forecast. 12 (3), 361- 371. Watson, N., R. Oliva, 2005. Leitax (A). Harvard Business School Case 606-002. Harvard Business School, Boston. Wheelwright, S.C., K.B. Clark, 1992. Revolutionizing Product Development. Wiley, New York. Yin, R., 1984. Case Study Research. Sage, Beverly Hills, CA. Figure 1. 
Figure 1. Forecast Accuracy Performance.† The chart tracks sell-through and sell-in forecast accuracy (0%–100%) against the accuracy goal, by quarter from Dec-Feb 2002 through Sep-Nov 2003, with the Project Redesign and Go-Live dates marked. † The dip in forecasting performance in Sep-Nov 2003 resulted from the relocation of a distribution center.

Figure 2. Consensus Forecasting Process. Industry, historical, and sales information and the business assumptions package feed three functional forecasts (a statistical forecast from the DMO, a top-down forecast from PPS, and a bottom-up forecast from SD), which are combined into the consensus forecast used in joint planning.

Figure 3. Dual Relationship between Coordination System and Behavioral Dynamics. Individual or functional biases influence the design of the coordination system (processes, roles, structure, values), while the coordination system in turn creates or generates those biases.

Table 1. Process Steps and Biases Mitigated. The table maps each element of the consensus forecasting process to the biases it mitigates (procedural blind spots, informational blind spots, incentive misalignment): the business assumptions package (multiple sources, multiple interpretations, interpretation source explicitly labeled); the functional forecasts (private information not in the BAP, functional interpretation of assumptions, aggregate forecasts at the family level, ignoring planning expectations and supply chain constraints); the proposed consensus forecast (weighted average of functional forecasts, weights based on past proven performance, initial anchor for the consensus process); the final consensus meeting (resolution of diverging forecasts, uncovering private information used in functional forecasts, uncovering private interpretations of public information); and the forecast review (financial and operational review, BAP revision).
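To make concrete how Table 1's "weighted average of functional forecasts, with weights based on past proven performance" could work, here is a minimal illustrative sketch. The function names, the three forecast sources, the numbers, and the simple accuracy-based weighting rule are our own assumptions for illustration, not Leitax's actual implementation.

def directional_accuracy(forecast, actual):
    # Crude accuracy in the spirit of the text: 1 minus the absolute
    # percentage error, floored at zero (directional, not sophisticated).
    return max(0.0, 1.0 - abs(forecast - actual) / actual)

# Last quarter's functional forecasts and realized sell-through (hypothetical numbers).
last_forecasts = {"DMO": 95_000, "PPS": 88_000, "SD": 101_000}
last_actual = 98_000
raw = {source: directional_accuracy(f, last_actual) for source, f in last_forecasts.items()}
weights = {source: w / sum(raw.values()) for source, w in raw.items()}

# This quarter's functional forecasts, blended into a proposed consensus.
forecasts = {"DMO": 118_000, "PPS": 131_000, "SD": 124_000}
consensus = sum(weights[source] * forecasts[source] for source in forecasts)
print(round(consensus))

In the process described in the paper, the weights, the accuracy metric, and the level of aggregation (product family rather than a single number) were of course richer than in this sketch.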
Building a Better America—One Wealth Quintile at a Time
Michael I. Norton 1 and Dan Ariely 2
1 Harvard Business School, Boston, MA, and 2 Department of Psychology, Duke University, Durham, NC
Perspectives on Psychological Science, 2011, 6(1), 9–12. DOI: 10.1177/1745691610393524. Online version: http://pps.sagepub.com/content/6/1/9. Published by SAGE Publications on behalf of the Association for Psychological Science.

Abstract
Disagreements about the optimal level of wealth inequality underlie policy debates ranging from taxation to welfare. We attempt to insert the desires of "regular" Americans into these debates, by asking a nationally representative online panel to estimate the current distribution of wealth in the United States and to "build a better America" by constructing distributions with their ideal level of inequality. First, respondents dramatically underestimated the current level of wealth inequality. Second, respondents constructed ideal wealth distributions that were far more equitable than even their erroneously low estimates of the actual distribution. Most important from a policy perspective, we observed a surprising level of consensus: All demographic groups—even those not usually associated with wealth redistribution such as Republicans and the wealthy—desired a more equal distribution of wealth than the status quo.

Keywords: inequality, fairness, justice, political ideology, wealth, income

Most scholars agree that wealth inequality in the United States is at historic highs, with some estimates suggesting that the top 1% of Americans hold nearly 50% of the wealth, topping even the levels seen just before the Great Depression in the 1920s (Davies, Sandstrom, Shorrocks, & Wolff, 2009; Keister, 2000; Wolff, 2002). Although it is clear that wealth inequality is high, determining the ideal distribution of wealth in a society has proven to be an intractable question, in part because differing beliefs about the ideal distribution of wealth are the source of friction between policymakers who shape that distribution: Proponents of the "estate tax," for example, argue that the wealth that parents bequeath to their children should be taxed more heavily than do those who refer to this policy as a burdensome "death tax." We took a different approach to determining the ideal level of wealth inequality: Following the philosopher John Rawls (1971), we asked Americans to construct distributions of wealth they deem just. Of course, this approach may simply add to the confusion if Americans disagree about the ideal wealth distribution in the same way that policymakers do. Thus, we had two primary goals.
First, we explored whether there is general consensus among Americans about the ideal level of wealth inequality, or whether differences—driven by factors such as political beliefs and income—outweigh any consensus (see McCarty, Poole, & Rosenthal, 2006). Second, assuming sufficient agreement, we hoped to insert the preferences of "regular Americans" regarding wealth inequality into policy debates. A nationally representative online sample of respondents (N = 5,522, 51% female, mean age = 44.1), randomly drawn from a panel of more than 1 million Americans, completed the survey in December 2005. 1 Respondents' household income (median = $45,000) was similar to that reported in the 2006 United States census (median = $48,000), and their voting pattern in the 2004 election (50.6% Bush, 46.0% Kerry) was also similar to the actual outcome (50.8% Bush, 48.3% Kerry). In addition, the sample contained respondents from 47 states. We ensured that all respondents had the same working definition of wealth by requiring them to read the following before beginning the survey: "Wealth, also known as net worth, is defined as the total value of everything someone owns minus any debt that he or she owes. A person's net worth includes his or her bank account savings plus the value of other things such as property, stocks, bonds, art, collections, etc., minus the value of things like loans and mortgages." Corresponding authors: Michael I. Norton, Harvard Business School, Soldiers Field Road, Boston, MA 02163, or Dan Ariely, Duke University, One Towerview Road, Durham, NC 27708; e-mail: mnorton@hbs.edu or dandan@duke.edu.

Americans Prefer Sweden

For the first task, we created three unlabeled pie charts of wealth distributions, one of which depicted a perfectly equal distribution of wealth. Unbeknownst to respondents, a second distribution reflected the wealth distribution in the United States; in order to create a distribution with a level of inequality that clearly fell in between these two charts, we constructed a third pie chart from the income distribution of Sweden (Fig. 1). 2 We presented respondents with the three pairwise combinations of these pie charts (in random order) and asked them to choose which nation they would rather join given a "Rawls constraint" for determining a just society (Rawls, 1971): "In considering this question, imagine that if you joined this nation, you would be randomly assigned to a place in the distribution, so you could end up anywhere in this distribution, from the very richest to the very poorest." As can be seen in Figure 1, the (unlabeled) United States distribution was far less desirable than both the (unlabeled) Sweden distribution and the equal distribution, with some 92% of Americans preferring the Sweden distribution to the United States. In addition, this overwhelming preference for the Sweden distribution over the United States distribution was robust across gender (females: 92.7%, males: 90.6%), preferred candidate in the 2004 election (Bush voters: 90.2%; Kerry voters: 93.5%), and income (less than $50,000: 92.1%; $50,001–$100,000: 91.7%; more than $100,000: 89.1%).
In addition, there was a slight preference for the distribution that resembled Sweden relative to the equal distribution, suggesting that Americans prefer some inequality to perfect equality, but not to the degree currently present in the United States.

Building a Better America

Although the choices among the three distributions shed some light into preferences for distributions of wealth in the abstract, we wanted to explore respondents' specific beliefs about their own society. In the next task, we therefore removed Rawls' "veil of ignorance" and assessed both respondents' estimates of the actual distribution of wealth and their preferences for the ideal distribution of wealth in the United States. For their estimates of the actual distribution, we asked respondents to indicate what percent of wealth they thought was owned by each of the five quintiles in the United States, in order starting with the top 20% and ending with the bottom 20%. For their ideal distributions, we asked them to indicate what percent of wealth they thought each of the quintiles ideally should hold, again starting with the top 20% and ending with the bottom 20%. To help them with this task, we provided them with the two most extreme examples, instructing them to assign 20% of the wealth to each quintile if they thought that each quintile should have the same level of wealth, or to assign 100% of the wealth to one quintile if they thought that one quintile should hold all of the wealth. Figure 2 shows the actual wealth distribution in the United States at the time of the survey, respondents' overall estimate of that distribution, and respondents' ideal distribution. These results demonstrate two clear messages. First, respondents vastly underestimated the actual level of wealth inequality in the United States, believing that the wealthiest quintile held about 59% of the wealth when the actual number is closer to 84%. More interesting, respondents constructed ideal wealth distributions that were far more equitable than even their erroneously low estimates of the actual distribution, reporting a desire for the top quintile to own just 32% of the wealth. These desires for more equal distributions of wealth took the form of moving money from the top quintile to the bottom three quintiles, while leaving the second quintile unchanged, evincing a greater concern for the less fortunate than the more fortunate (Charness & Rabin, 2002). We next explored how demographic characteristics of our respondents affected these estimates. Figure 3 shows these estimates broken down by three levels of income, by whether respondents voted for George W. Bush (Republican) or John Kerry (Democrat) for United States president in 2004, and by gender. Males, Kerry voters, and wealthier individuals estimated that the distribution of wealth was relatively more unequal than did women, Bush voters, and poorer individuals. For estimates of the ideal distribution, women, Kerry voters, and the poor desired relatively more equal distributions than did their counterparts. Despite these (somewhat predictable) differences, what is most striking about Figure 3 is its demonstration of much more consensus than disagreement among these different demographic groups. All groups—even the wealthiest respondents—desired a more equal distribution of wealth than what they estimated the current United States level to be, and all groups also desired some inequality—even the poorest respondents. In addition, all groups agreed that such redistribution should take the form of moving wealth from the top quintile to the bottom three quintiles. In short, although Americans tend to be relatively more favorable toward economic inequality than members of other countries (Osberg & Smeeding, 2006), Americans' consensus about the ideal distribution of wealth within the United States appears to dwarf their disagreements across gender, political orientation, and income.

Fig. 1. Relative preference among all respondents for three distributions: Sweden (upper left), an equal distribution (upper right), and the United States (bottom). Pie charts depict the percentage of wealth possessed by each quintile; for instance, in the United States, the top wealth quintile owns 84% of the total wealth, the second highest 11%, and so on.

Fig. 2. The actual United States wealth distribution plotted against the estimated and ideal distributions across all respondents. Because of their small percentage share of total wealth, both the "4th 20%" value (0.2%) and the "Bottom 20%" value (0.1%) are not visible in the "Actual" distribution.

Fig. 3. The actual United States wealth distribution plotted against the estimated and ideal distributions of respondents of different income levels, political affiliations, and genders. Because of their small percentage share of total wealth, both the "4th 20%" value (0.2%) and the "Bottom 20%" value (0.1%) are not visible in the "Actual" distribution.

Overall, these results demonstrate two primary messages. First, a large nationally representative sample of Americans seems to prefer to live in a country more like Sweden than like the United States. Americans also construct ideal distributions that are far more equal than they estimated the United States to be—estimates which themselves were far more equal than the actual level of inequality. Second, there was much more consensus than disagreement across groups from different sides of the political spectrum about this desire for a more equal distribution of wealth, suggesting that Americans may possess a commonly held "normative" standard for the distribution of wealth despite the many disagreements about policies that affect that distribution, such as taxation and welfare (Kluegel & Smith, 1986). We hasten to add, however, that our use of "normative" is in a descriptive sense—reflecting the fact that Americans agree on the ideal distribution—but not necessarily in a prescriptive sense. Although some evidence suggests that economic inequality is associated with decreased well-being and health (Napier & Jost, 2008; Wilkinson & Pickett, 2009), creating a society with the precise level of inequality that our respondents report as ideal may not be optimal from an economic or public policy perspective (Krueger, 2004). Given the consensus among disparate groups on the gap between an ideal distribution of wealth and the actual level of wealth inequality, why are more Americans, especially those with low income, not advocating for greater redistribution of wealth? First, our results demonstrate that Americans appear to drastically underestimate the current level of wealth inequality, suggesting they may simply be unaware of the gap.
Second, just as people have erroneous beliefs about the actual level of wealth inequality, they may also hold overly optimistic beliefs about opportunities for social mobility in the United States (Benabou & Ok, 2001; Charles & Hurst, 2003; Keister, 2005), beliefs which in turn may drive support for unequal distributions of wealth. Third, despite the fact that conservatives and liberals in our sample agree that the current level of inequality is far from ideal, public disagreements about the causes of that inequality may drown out this consensus (Alesina & Angeletos, 2005; Piketty, 1995). Finally, and more broadly, Americans exhibit a general disconnect between their attitudes toward economic inequality and their self-interest and public policy preferences (Bartels, 2005; Fong, 2001), suggesting that even given increased awareness of the gap between ideal and actual wealth distributions, Americans may remain unlikely to advocate for policies that would narrow this gap. Acknowledgments We thank Jordanna Schutz for her many contributions; George Akerlof, Lalin Anik, Ryan Buell, Zoe¨ Chance, Anita Elberse, Ilyana Kuziemko, Jeff Lee, Jolie Martin, Mary Carol Mazza, David Nickerson, John Silva, and Eric Werker for their comments; and surveysampling.com for their assistance administering the survey. Declaration of Conflicting Interests The authors declared that they had no conflicts of interest with respect to their authorship or the publication of this article. Notes 1. We used the survey organization Survey Sampling International (surveysampling.com) to conduct this survey. As a result, we do not have direct access to panelist response rates. 2. We used Sweden’s income rather than wealth distribution because it provided a clearer contrast to the other two wealth distribution examples; although more equal than the United States’ wealth distribution, Sweden’s wealth distribution is still extremely top heavy. References Alesina, A., & Angeletos, G.M. (2005). Fairness and redistribution. American Economic Review, 95, 960–980. Bartels, L.M. (2005). Homer gets a tax cut: Inequality and public policy in the American mind. Perspectives on Politics, 3, 15–31. Benabou, R., & Ok, E.A. (2001). Social mobility and the demand for redistribution: The POUM hypothesis. Quarterly Journal of Economics, 116, 447–487. Charles, K.K., & Hurst, E. (2003). The correlation of wealth across generations. Journal of Political Economy, 111, 1155–1182. Charness, G., & Rabin, M. (2002). Understanding social preferences with simple tests. Quarterly Journal of Economics, 117, 817–869. Davies, J.B., Sandstrom, S., Shorrocks, A., & Wolff, E.N. (2009). The global pattern of household wealth. Journal of International Development, 21, 1111–1124. Fong, C. (2001). Social preferences, self-interest, and the demand for redistribution. Journal of Public Economics, 82, 225–246. Keister, L.A. (2000). Wealth in America. Cambridge, England: Cambridge University Press. Keister, L.A. (2005). Getting rich: America’s new rich and how they got that way. Cambridge, England: Cambridge University Press. Kluegel, J.R., & Smith, E.R. (1986). Beliefs about inequality: Americans’ views of what is and what ought to be. New York: Aldine de Gruyter. Krueger, A.B. (2004). Inequality, too much of a good thing. In J.J. Heckman & A.B. Krueger (Eds.), Inequality in America: What role for human capital policies (pp. 1–75). Cambridge, MA: MIT Press. McCarty, N., Poole, K.T., & Rosenthal, H. (2006). Polarized America: The dance of ideology and unequal riches. 
Cambridge, MA: MIT Press. Napier, J.L., & Jost, J.T. (2008). Why are conservatives happier than liberals? Psychological Science, 19, 565–572. Osberg, L., & Smeeding, T. (2006). "Fair" inequality? Attitudes to pay differentials: The United States in comparative perspective. American Sociological Review, 71, 450–473. Piketty, T. (1995). Social mobility and redistributive politics. Quarterly Journal of Economics, 110, 551–584. Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press. Wilkinson, R., & Pickett, K. (2009). The spirit level: Why greater equality makes societies stronger. New York: Bloomsbury. Wolff, E.N. (2002). Top heavy: The increasing inequality of wealth in America and what can be done about it. New York: New Press.
Payout Taxes and the Allocation of Investment *
Bo Becker, Harvard University and NBER, bbecker@hbs.edu
Marcus Jacob, EBS European Business School, marcus.jacob@ebs.edu
Martin Jacob, WHU – Otto Beisheim School of Management, martin.jacob@whu.edu
Working Paper 11-040. This draft: September 27, 2011.

ABSTRACT. When corporate payout is taxed, internal equity (retained earnings) is cheaper than external equity (share issues). High taxes will favor firms who can finance internally. If there are no perfect substitutes for equity finance, payout taxes may thus change the investment behavior of firms. Using an international panel with many changes in payout taxes, we show that this prediction holds well. Payout taxes have a large impact on the dynamics of corporate investment and growth. Investment is “locked in” in profitable firms when payout is heavily taxed. Thus, apart from any aggregate effects, payout taxes change the allocation of capital. JEL No. G30, G31, H25.

* We thank Chris Allen and Baker Library Research Services for assistance with data collection. We are grateful to James Poterba, Raj Chetty, Fritz Foley, Jochen Hundsdoerfer, Richard Sansing, Kristian Rydqvist and seminar participants at European Business School, Harvard Business School, Harvard Economics Department, the UNC Tax Symposium, the Nordic Workshop on Tax Policy and Public Economics, and the Stockholm Institute for Financial Research (SIFR) for helpful comments.

1. Introduction

Corporate payout, in the form of dividends or as repurchases of shares, is subject to taxation in most countries. Such taxes on corporate payout drive a wedge between the cost of internal and external equity (retained earnings and equity issues, respectively). 1 Therefore, higher payout taxes are expected to “lock in” investment in profitable firms, at the expense of firms with good investment opportunities which would require external equity financing to undertake. The empirical relevance of this simple prediction has not been well tested. Despite the large amount of theoretical and empirical research about the effect of dividend taxes on the level of investment and on the valuation of firms (see, e.g., Auerbach 1979a, Bradford 1981, Chetty and Saez 2010, Feldstein 1970, Guenther and Sansing 2006, Harberger 1962, King 1977, Korinek and Stiglitz 2009, Poterba and Summers 1984 and 1985), little is known about the effects of such taxes on the allocation of investment across firms. Yet, the theoretical prediction is very clear: higher payout taxes will increase the wedge between the cost of internal and external equity, and firms with more costly external financing will exhibit greater investment cash flow sensitivities. Put differently, payout taxes favor investment financed by retained earnings over investment financed by equity issues. 2 This can matter for the productivity and nature of investment if a) debt finance is an imperfect substitute for equity (in other words, if the Miller Modigliani propositions do not hold), b) different firms have different investment opportunities, c) the marginal investor is subject to taxation, and d) firms make equity payouts while the tax is in effect. All these conditions have some empirical support.
3 But are such frictions important enough in practice for investment levels? This paper aims to test the extent to which the “lock in” effect of payout taxes matters empirically. There are several challenges in testing how payout taxes affect the cross-firm allocation of investment. First, large changes in the US tax code are rare. The 2003 tax cut has provided a suitable natural experiment for testing how dividend levels responded to taxes (see Chetty and Saez 2005 and Brown, Liang, and Weisbenner 2007), but investment is a more challenging dependent variable than dividends, so the experiment may not provide sufficient statistical power for examining investment responses. First, unlike dividends, investment is imperfectly measured by accounting data which, for example, leaves out many types of intangible investment such as that in brands and human capital. This means that available empirical proxies (e.g. capital expenditures) are noisy estimates of the true variable of interest. Second, much investment is lumpy and takes time to build, so any response to tax changes is likely slow and more difficult to pinpoint in time. This suggests that a longer time window may be necessary (the payout studies used quarters around the tax change). Third, however, investment is affected by business cycles and other macro-economic trends, so extending the window around a single policy change introduces more noise from other sources, and may not provide better identification. We address these challenges by using an international dividend and capital gains tax data set covering 25 countries over the 19-year period 1990-2008 (Jacob and Jacob 2011). This data set contains fifteen substantial tax reforms and 67 discrete changes in the dividend or capital gains tax rate. With so many tax changes, we have sufficient variation to study the effects of payout taxes on the investment allocation. 4 We use this tax database to test if the allocation of investment across firms with and without access to internal equity depends on payout taxes. 5

1 To see the tax difference, consider a firm facing a dividend tax rate of t and which has the opportunity to invest one dollar now in order to receive δ in the future. If the firm issues equity, it can pay a dividend of 1+δ. The initial investment is paid-in capital and not subject to dividend taxes, so the shareholders will receive 1+δ(1-t) in after-tax payoff. Alternatively, investors can invest the dollar at a tax-free return (1+r). This firm should invest if δ(1-t) > r. Now consider another firm, which has retained earnings, so that it faces the choice between paying out one dollar, producing (1-t) in after-tax payoff to investors today, which will be worth (1-t)(1+r) tomorrow, or investing, producing (1+δ)(1-t) in after-tax dividend for investors. This firm should invest if δ > r. The tax wedge is the difference between the two firms’ investment criteria. Put differently, the after-tax cost of capital is lower for firms with inside equity. Lewellen and Lewellen (2006) develop this intuition and further results in a richer model. We sometimes refer to this prediction as the tax wedge theory.
2 The debate about the impact of payout taxes on the level of investment between the “old view” (Harberger 1962, 1966, Feldstein 1970, Poterba and Summers 1985) and the “new view” (Auerbach 1979a, Bradford 1981, King 1977) can be understood in terms of different assumptions about the marginal source of investment financing. To simplify, the old view assumes that marginal investment is financed by equity issues, so that payout taxes raise the cost of capital and reduce investment. The new view assumes that marginal investment is financed by retained earnings, so that payout taxes do not reduce investment. In practice, firms are likely to differ in their ability to finance investment with internal resources (e.g. Lamont 1997). If they do, the tax rate will affect the allocation of investment. Auerbach (1979b) makes a related point about how firms with and without internal funds should respond differently to dividend taxes.
3 Regarding the imperfect substitutability between debt and equity, see e.g. Myers (1977), Jensen and Meckling (1976). Regarding the variation in investment opportunities across firms, see e.g. Coase (1937) and Zingales (2000). Firms with limited access to internal equity may include entrepreneurial firms and firms with strong growth opportunities. Regarding the taxability of the marginal investor, see e.g. our Section 4.4. and note also that in many countries outside the U.S. and the U.K. (for example, in Germany and Austria) investment funds managing private investors’ money are ultimately taxed like private investors. Regarding payout, many firms pay dividends or repurchase shares every year. Others may plan to do so in the future. Korinek and Stiglitz (2010) consider firms’ ability to time their payout around tax changes.
4 Because dividends and share repurchases are treated very differently for tax purposes, we construct a measure of the overall tax burden on payout. We do this by weighting the tax rates on dividends and on capital gains by the observed quantity of each in a country (using amounts of dividends and repurchases from our sample firms over the sample period). We also report results using the dividend tax and using an average payout tax measure adjusted for effective capital gains taxation. We vary assumptions about the amount of taxable capital gains caused by repurchases. Variations of the measurement of taxes produce similar results. See Section 3 (Data) for details.
5 As discussed in detail in the empirical section below, we use a range of variables to classify firms into those with and without access to internal equity, including net income, operating cash flow, and even cash holdings. Neither measure is perfect, since a firm’s perceived access to internal equity must depend on (unobservable) expectations about future years.

We first run non-parametric tests that contrast the investment by the two groups of firms around tax reforms. We focus on events where payout taxes changed by at least three percentage points and compare the five years preceding the tax change with the two years following it. There are fifteen events with payout tax reductions. The mean tax drop is 9.8 percentage points (median 5.5). There are fourteen tax increase events with a tax change of 8.4 percentage points (median 5.6). 6 We sort firms into quintiles of the ratio of cash flow to assets in each country-year cell. We then calculate average investment over lagged assets for each quintile. There is no trend in investment for any of the quintiles during the five-year period preceding the tax events. After the tax cuts, we observe a significant convergence of the investment rate of high and low cash flow firms (top and bottom quintiles). In other words, firms with limited internal equity increase their investment relative to firms with plenty of internal equity.
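To make the quintile comparison just described concrete, here is a minimal sketch of the computation. It assumes a hypothetical firm-year panel in a pandas DataFrame with columns country, year, cash_flow, investment, and lagged_assets; the column names and bookkeeping are our assumptions, not the authors' code.

import pandas as pd

def investment_by_cash_flow_quintile(df):
    # df: hypothetical firm-year panel with columns
    # country, year, cash_flow, investment, lagged_assets
    df = df.copy()
    df["inv_rate"] = df["investment"] / df["lagged_assets"]
    df["cf_ratio"] = df["cash_flow"] / df["lagged_assets"]
    # quintiles of cash flow / assets within each country-year cell
    df["cf_quintile"] = df.groupby(["country", "year"])["cf_ratio"].transform(
        lambda s: pd.qcut(s, 5, labels=False, duplicates="drop")
    )
    # average investment rate per country, year, and quintile
    return df.groupby(["country", "year", "cf_quintile"])["inv_rate"].mean()

# The event comparison then tracks the top-minus-bottom quintile spread in the
# five years before and the two years after each qualifying tax change.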
This is consistent with the tax wedge theory, and suggests that low taxes favor firms with limited access to internal equity. In contrast, following increases in payout taxes there is a divergence of investment of high and low cash flow firms. The estimated effects appear large in both sets of tax reforms. On average, the difference in investment between low and high cash flow firms increases from 5.33% (of assets) to 7.59% following a payout tax increase – a 42% increase. When payout taxes are cut, the difference in investment falls from 7.27% to 5.54% – a decrease of 31%. In other words, for the typical large tax change, a large quantity of investment is estimated to get displaced (when taxes go up, investment flows from firms with limited access to internal equity to those with more internal equity, and vice versa for tax reductions). These non-parametric results are consistent with the predictions of the tax wedge theory: tax increases raise the cost of capital wedge between firms with and without access to internal equity financing, and thereby increase the investment of internally funded firms relative to firms that have limited access to internal equity. Because the panel data set contains multiple tax change events, we can estimate not just the mean treatment effect of a tax change, but also ranges. Only two (three) of the fifteen (fourteen) tax decreases (increases) have difference-in-difference effects that are in conflict with our hypothesis. The other estimates agree with the tax wedge hypothesis, and many point estimates are large: one third of tax decrease events reduce the difference in the investment rate of high and low cash flow firms by at least 2.5 percentage points. About 40% of the tax raises are associated with a point estimate for the increase in the wedge between high and low cash flow firms of more than 2.5 percentage points. In other words, the effect of tax changes on the relative investment of firms varies quite a bit across events, and is sometimes large. 7

6 We report results for the country-average payout tax rate here, but results are similar with alternative measures, described below.
7 We can also use the individual diff-in-diff point estimates to do non-parametric tests. For example, a sign test of the frequencies with which estimates are positive and negative suggests that we can reject that an increase and a decrease of the investment rate difference are equally likely after a tax increase (decrease) at the 5% (1%) level of statistical significance.
8 The weights placed on different observations also differ between linear regression tests and non-parametric tests. Because of the many differences, it is useful to verify that both methods deliver similar results.

We next turn to parametric tests in the form of linear regressions. The regressions use data from all years, and can integrate both tax increases and decreases in the same specifications. 8 For our baseline tests, we regress investment on firm controls, fixed effects for firms and for country-year cells, and the interaction of the payout tax rate with cash flow. Thanks to the panel structure of the data set, we can allow the coefficient on cash flow to vary across countries and years, in essence replicating the identification strategy of the many studies exploiting the 2003 tax cut in the US, but for the whole panel of 25 countries times 19 years. The estimated coefficient for the tax-cash flow interaction variable is consistently positive and significant.
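A stylized version of the baseline specification just described might look as follows. This is only a sketch under assumptions, not the authors' estimator: the column names are hypothetical, it reuses the illustrative DataFrame df from the sketch above, and it absorbs the fixed effects with plain dummy variables (a real implementation would use an absorbing panel estimator for speed).

import statsmodels.formula.api as smf

# Hypothetical columns: inv_rate, cash_flow, payout_tax, firm_id, country_year.
# The level of the payout tax is absorbed by the country-year fixed effects,
# so only its interaction with cash flow is identified.
model = smf.ols(
    "inv_rate ~ cash_flow + cash_flow:payout_tax"
    " + C(firm_id) + C(country_year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
print(model.params["cash_flow:payout_tax"])  # the tax wedge theory predicts > 0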
In other words, the higher payout taxes are, the stronger is the tendency for investment to occur where cash flows are high. As predicted by the tax wedge theory, payout taxes “lock in” investment in firms generating earnings and cash flow. The estimated magnitudes are large. For example, going from the 25th percentile of payout tax (15.0%) to the 75th percentile (32.2%) implies that the effective coefficient on cash flow increases by 0.029, an increase of 33% over the conditional estimate at the 25th percentile. Like the non-parametric results, this implies that payout taxes have an important effect on the allocation of capital across firms. We report extensive robustness tests for our results. For most tests, we report regression results with three alternative tax rates, with similar results. The results also hold for alternative measures of the ability to finance out of internal resources (e.g. net income instead of cash flow), as well as when controlling for the corporate income tax rate and its interaction with cash flow. We also collect economic policy controls from the World Development Indicators (World Bank 2010). This is to address endogeneity concerns, i.e. to ensure that tax changes are not just fragments of wider structural changes in an economy that change firms' investment behavior around tax reforms. This test shows that payout tax changes appear to have their own distinct and economically significant effect on the allocation of investment (assuming we have identified the relevant set of policy variables). We next examine in greater detail the predictions of the old and new view. A key distinguishing feature of models belonging to the old and new view is whether the marginal source of investment funds is assumed to be internal cash flow or external equity. We hypothesize that both these assumptions may be valid for a subset of firms at any given time. Some firms behave as predicted by the old view, and reduce investment when payout taxes increase. Others behave more like the new view predicts, and respond less. This has two implications. First, this difference in responsiveness to taxes generates the within-country, within-year, cross-firm prediction our paper focuses on. By comparing different firms in the same country and at the same time, we get rid of concerns about omitted aggregate time-series variables. This prediction is what we examine with all our main tests (regressions and non-parametric tests). A second implication is that it becomes interesting to try to identify the relevant groups of firms in the data, and to test their responses. We go about this by differentiating between firms based on three alternative measures. First, we define firms as old view firms if predicted equity sales are above 2% of lagged assets. Second, we look at historical equity issuance by firms. We exploit the fact that such issuance is persistent, so that classifying firms by recent equity issuance likely indicates their ability to issue in the future. 9 Firms with recent equity issuance activity, which are more likely to consider external equity their marginal source of investment funds, correspond most closely to the assumptions of the old view. Third, we classify firms as new view firms if the Kaplan and Zingales (1997) index of financial constraints is above 0.7, and as old view firms otherwise. For all three classifications, there is a sizable difference in the effect of taxation on the marginal source of funds for investment between old view firms and new view firms.
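The three alternative classifications can be expressed compactly. In this sketch only the thresholds (2% of lagged assets, a Kaplan-Zingales index of 0.7) come from the text; the column names and the availability of a recent-issuance variable are our own assumptions.

# Hypothetical old view / new view flags on the illustrative firm-year panel df.
df["old_view_pred_issue"] = df["predicted_equity_sales"] > 0.02 * df["lagged_assets"]
df["old_view_recent_issue"] = df["equity_issued_last_year"] > 0
df["old_view_kz"] = df["kz_index"] <= 0.7  # new view if the KZ constraint index exceeds 0.7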
For old view firms, the cash flow coefficient is always sensitive to tax rates, as predicted. For new view firms, the coefficient estimate is positive but smaller and insignificant in all specifications. This suggests that both the old and the new view have predictive power, and exactly for the set of firms which match the critical assumptions of the two views. This confirms the mechanism behind the differential responses of investment to tax rates that we have documented earlier: high tax rates drive a wedge between the cost of internal and external equity.

We also examine the effect of governance. Chetty and Saez (2010) predict that a dividend tax cut will not affect poorly governed firms in the same way it will well governed firms. In poorly governed firms with much cash, investment is inflated by CEOs who derive private benefits from investment (or from firm size). A tax cut reduces the incentive for cash-rich firms to (inefficiently) over-invest in pet projects because it becomes more attractive for the CEO to receive dividends on his shareholdings. It is important to note that the same result does not apply to well governed firms in the model: a tax cut raises equity issues and productive (as well as unproductive) investment by such firms. If Chetty and Saez's mechanism is important, the pattern we have established in the data between taxes, cash flow, and investment will in fact be driven by the set of well governed firms. 10

To proxy for governance across multiple countries, where laws, practices, and financial development vary substantially, we use the ownership stake of insiders (i.e., corporate directors and officers). This is based on the notion that managers and directors with large stakes have both the power and the incentive to make sure the firm is maximizing value (Shleifer and Vishny 1986, Jensen and Murphy 1990). The insider ownership variable is available for many of our sample firms, and measured fairly consistently across countries. When sorting by insider ownership, we find that firms with very low insider ownership show a less significant response to taxes, whereas firms with strong insider ownership have larger and more significant responses to taxes. This is consistent with the Chetty and Saez predictions. Since individual owners (such as insiders) are more likely to be taxable than owners in general (which include tax exempt institutions), this result also highlights that the tax effects are stronger where the marginal shareholder is more likely to be a taxable investor. Finally, we examine how quantities of equity raised respond to taxes.

9 In our data, firms that issued any equity in the previous year are 3.9 times as likely to issue again next year. Firms issuing more than 5% of assets over the last year are 7.7 times as likely to do so again this year. These numbers probably reflect capital needs as well as access to the market. There are several possible reasons for this. Issuing costs are high for equity (see Asquith and Mullins 1986, and Chen and Ritter 2000). However, some firms find it less costly to issue equity, for example because they have a favorable stock valuation (see Baker, Stein, and Wurgler 2003).
10 The tests of the US tax cut in 2003 have found that governance variables have strong predictive power for firms' responses to the tax cuts. See, e.g., Chetty and Saez (2005) and Brown, Liang, and Weisbenner (2007).
If our identifying assumptions are valid, and if we have identified real variation in the effective taxation as perceived by firms, we would expect to see a drop in equity issuance when taxes go up. We find exactly this: When taxes are high, equity issuance tends to be low. This supports the interpretation that the tax variation we pick up is meaningful. Our results have three main implications. First, it appears that payout taxes influence the allocation of capital across firms. High taxes lock in capital in those firms that generate internal cash flows, ahead of those firms that need to raise outside equity. If firms have different investment opportunities, this means that tax rate changes alter the type of investments being made. For example, high payout taxes may favor established industries. 11 Second, the effect of payout taxes is related to both access to the equity market and governance. Firms which can access the equity market, “old view” firms, are the most affected by tax changes. Firms whose only source of equity finance is internal are less affected by taxes, as predicted by the “new view”. A final source of heterogeneity is governance. Firms where decision makers have low financial stakes are less affected by tax changes, reflecting their propensity to make investment decisions for reasons unrelated to the cost of capital. 12 Third, the relation between cash flow and investment (see e.g. Fazzari, Hubbard, and Petersen 1988, Kaplan and Zingales 1997) appears to partially reflect the difference in the after-tax cost of capital between firms with and without access to inside equity. 2. Taxes on corporate payout across countries 2.1 Tax systems The prerequisite for a useful study of the relationship between payout tax policies and the allocation of investment across countries is a sufficient degree of identifying variation in dividend and capital gains tax regimes and tax rates both across countries and within countries across time. Tables 1 and 2, and Figures 1, 2, and 3 illustrate that this is the case for the 25 countries scrutinized in this study. 11 We consider the allocation across firms an important topic in itself, but there may also be some suggestive implications for aggregate investment. While we do not estimate the impact of taxes on the level of corporate investment directly, our main result is inconsistent with a standard new view model of payout taxes. Hence, our results generally point to the relevance of payout taxes for investment 12 Although, to be precise, our findings do not necessarily support an empire building agency problem. See e.g. Malmendier and Tate (2005) for other possibilities. 7 We count five major tax systems in our data set: classical corporate tax systems, shareholder relief systems, dividend tax exemption systems, and full and partial imputation systems. Classical corporate taxation systems (for example, currently used in Ireland, and previously in the Netherlands or Spain) are characterized by double taxation of corporate profits, that is, income, before it is distributed as dividends, is taxed at the corporate level, and later taxed again as dividend income at the individual shareholder level. This contrasts with shareholder relief systems (for example, currently used in the US, Japan, and Spain) which aim to reduce the full economic burden of double taxation that applies under a pure classical system. 
For example, at the individual shareholder level, reduced tax rates on dividends received or exclusion of a proportion of dividend income from taxation are common forms of shareholder tax relief. Under an imputation system (for example, used currently in Australia and Mexico, and previously in France), taxes paid by a corporation are considered as paid on behalf of its shareholders. As a result, shareholders are entitled to a credit (the “imputation credit”) for taxes already paid at the corporate level. That is, shareholders are liable only for the difference between their marginal income tax rate and the imputation rate. Full and partial imputation systems are distinguished by the nature of the imputation credit, which may be the full corporate tax or only a fraction thereof. In dividend tax exemption systems (currently only Greece in our sample) dividend income is generally not taxed. 13 Table 1 shows that there have been many changes in payout tax systems over the last two decades. While in the first half of our sample period the classical corporate tax system dominates, from 2005 the shareholder relief system is the most widespread tax system. While there are only five shareholder relief systems in place in 1990, shareholder relief systems can be found in almost 70% of the countries (17) in our sample at the end of the sample period. The reduction in the prevalence of full and partial imputation systems from 11 in 1990 to only 6 in 2008 is largely due to the harmonization of European tax laws that necessitated an abolition of differences in the availability of imputation credits for domestic and foreign investors across EU member states. 2.2 Tax rates The significant trend from imputation systems and classical corporate tax systems to shareholder relief systems naturally coincides with the development of the absolute taxation of dividend income and capital gains. Yet, as Tables 1 and 2 illustrate, tax reforms are not necessarily accompanied by changes in the effective taxation of dividends and capital gains. Rather, much of the dynamics in dividend and capital gains taxation relate to pure rate changes. Changes occur frequently absent any tax system reforms. 13 See La Porta, Lopez-de-Silanes, Shleifer, and Vishny (2000) for additional information on characteristics of the various tax systems. 8 In this study, we are interested in the effective tax burden on dividend income and capital gains faced by individual investors. One concern with our analysis is that the tax rates we measure do not have sufficiently close correspondence with actual share ownership of our sample firms. Rydqvist, Spizman and Strebulaev (2010) point to the reduced role of the taxable investors in recent decades. They suggest that the influence of private investors’ taxes has likely been falling through time. In the extreme, if the marginal investor for every firm is a (tax neutral) institution, individual shareholder taxation should not matter. If this is true for our sample firms, we would find no effect. To the extent that we identify an effect of payout taxes, we can conclude taxable investors have some impact on firm prices (at least for a subset of firms). 14 Similarly, the increasing role of cross-country stock holdings might affect our ability to isolate true tax rates faced on payout by equity owners through the tax rules for domestic investors. Our data do not allow us to identify the fraction of foreign ownership in a company. 
However, since there is strong evidence of a substantial home bias in national investment portfolios (see, for example, French and Poterba 1991, Mondria and Wu 2010), we believe domestic tax rules are likely the most important source of time series variation in tax rates. The tax rates applicable to domestic investors is the most plausible approximation for the typical investor’s tax burden, especially for smaller firms, where international ownership is likely lower. The first, immediate, observation from Table 2 is that the level of taxation on dividends and share repurchases varies considerably across countries and time. As we report in Panel A of Table 2, the highest average tax rates on dividend income over the sample period can be observed in the Netherlands, Denmark, Switzerland, France, and Ireland. Peak values range from 66.2% in Sweden (1990), to 60.9% in Denmark (1990), to 60.0% in the Netherlands (1990-2000), to 47.3% in Korea (1990-1993), to 46% in Spain (1990/1991, 1993/1994). Over the same period investors faced the lowest average tax burden in Greece – a dividend tax exemption country and the only mandatory dividend country in our sample – and in Mexico, Finland, New Zealand, and Norway. The within-country standard deviation ranges from 10.8% to 20.5%, and the within-country differences between maximum and minimum tax rates from 25% to 38%, for Norway, Sweden, the Netherlands, Japan, the US and Finland, which provide the most variation in dividend tax rates over the sample period (Table 3, Panel A, and Figure 1). In contrast, we observe the most stable tax treatment of dividends in Greece, Mexico, Austria, Poland, and Portugal, where the personal income tax rate fluctuates within a narrow band of at most 5 percentage points 14 The Rydqvist et al prediction seems to be borne out in US dividend policy: Chetty and Saez (2005) and Perez-Gonzalez (2003) show that firms with a large share of institutional (tax exempt) ownership exhibit smaller changes in policy after the 2003 tax cut. For our sample, which contains many non-US firms, tax exempt investors may be a smaller factor. Unfortunately, we lack the requisite ownership data to test whether there is a similar pattern in our sample. 9 difference between peak and lowest taxation over the sample period. On average, the difference between maximum and minimum dividend tax rate in our sample countries in 1990-2008 is 19.9%, thus underpinning the substantial time-variant differences in dividend tax rates. Capital gains taxation across countries is special in many respects and often strongly intertwined with the legal treatment of share repurchases. For example, in some European countries share repurchases were either difficult to implement (for example, France) or illegal (for example, Germany and Sweden) until the turn of the 3rd millennium (Rau and Vermaelen 2002, DeRidder 2009). Moreover, in some countries with high taxes on dividends and low capital gains taxes (such as in Belgium, in the Netherlands before 2001, and in Switzerland since 1998), specific tax provisions existed to discourage share repurchases. In Japan, restrictions on corporate share repurchases thwarted corporations from buying back their own shares until enactment of a special law in 1995. Since the mid-1990s, the Japanese government has gradually relaxed and removed restrictions on share repurchases, originally as a part of emergency economic measures to revitalize the economy and its tumbling stock market (Hashimoto 1998). 
In Panel B of Table 2 we report capital gains tax rates across our sample countries that take these effects into consideration. The tax rates are applicable to investors with non-substantial shareholdings and holding periods that qualify as long-term investments in accordance with country-specific tax legislation. We show that over the sample period, on average, the most unfavorable tax environment for capital gains prevailed in Denmark, the UK, Australia, the Netherlands, and Canada, while in eight countries capital gains are generally tax exempt. We observe peak capital gains tax rates in the Netherlands (1990-2000), Australia (1990-1999), Poland (1994-1996), and Switzerland (1998-2007). The range of capital gains tax rates is substantial – from 0.0% to 60.0%. With standard deviation greater than 14.5% and differences between maximum and minimum tax rate of 31% to 60%, the Netherlands, Switzerland, Belgium, and Poland exhibit the largest within-country variation in capital gains tax rates across countries (Table 2, Panel B, and Figure 2). In contrast, capital gains taxation is constant in 1990-2008 in Austria, Germany, Greece, Korea, Mexico, New Zealand, and Portugal. On average, the within-country difference between maximum and minimum capital gains tax rate in our sample countries in 1990-2008 is 18.7%, thus providing further ample identifying variation in corporate payout taxation. 3. Data sample 3.1 Firm data We source our firm-level data from the July 2009 edition of the WorldScope database and restrict our analysis to those countries for which conclusive tax data for the full sample period could be obtained. To ensure a meaningful basis for the calculation of our country-level statistics we also exclude from our sample firms from countries for which we have less than 10 observations after the below sample 10 adjustments. The start year of our analysis is 1990. 15 Since accounting data are often reported and collected with a delay, we use data through 2008. We collect data on active as well as dead and suspended listings that fulfill our data requirements to avoid survivorship bias. Table 3 Panel A summarizes the composition of our sample. Financial and utility firms have motives to pay out cash that are different from non-financial firms (see e.g., Dittmar 2000 and Fama and French 2001). We therefore restrict our sample to non-financial and also non-utility firms, defined as firms with SIC codes outside the intervals of 4,900-4,949 and 6,000-6,999. We also exclude firms without an SIC code. We further restrict our sample to firms with non-missing values for dividends to common and preferred shareholders, net income, sales, and total assets for at least 4 consecutive years in the 1988- 2008 period. From the original set of firms, we finally eliminate the following firms: firms with erroneous or missing stock price, dividends, or share repurchase information, firms whose dividends exceed sales, firms with an average weekly capital gain of over 1,000% in one year and finally, firms with closely held shares exceeding 100% or falling short of 0%. To prevent extreme values and outliers from distorting our results we further eliminate, when appropriate, observations of our dependent and independent variables that are not within the 1st and the 99th percentile of observations, and we also drop firm observations with total assets less than USD 10 million (see Baker, Stein, and Wurgler 2003). This returns our basic sample of 7,661 companies (81,222 firm-year observations) from 25 countries. 
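The screening steps above can be summarized in a short pandas sketch. The input file and column names are assumptions for illustration, not WorldScope field names, and some screens (the four consecutive years of data, erroneous prices, extreme weekly returns) are only noted in comments.

```python
# Illustrative sketch of the sample filters described above (not the authors' code).
import pandas as pd

raw = pd.read_csv("worldscope_panel.csv")   # hypothetical extract, one row per firm-year

df = raw[raw["sic"].notna()]
df = df[~df["sic"].between(4900, 4949) & ~df["sic"].between(6000, 6999)]  # drop utilities, financials
df = df[df["dividends"].notna() & df["net_income"].notna()
        & df["sales"].notna() & df["total_assets"].notna()]
df = df[df["dividends"] <= df["sales"]]                  # dividends may not exceed sales
df = df[df["closely_held_pct"].between(0, 100)]          # plausible closely held shares
df = df[df["total_assets_usd_real"] >= 10]               # at least USD 10 million (real, base 2000)
# Additional screens omitted for brevity: at least 4 consecutive years of data,
# erroneous price/repurchase records, average weekly capital gains above 1,000%.

# Drop observations outside the 1st-99th percentile of key variables.
for col in ["investment", "cash_flow"]:
    lo, hi = df[col].quantile([0.01, 0.99])
    df = df[df[col].between(lo, hi)]
```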
We obtain annual personal income tax and capital gains tax data for the 25 countries in our sample from Jacob and Jacob (2010). This comprehensive tax data set allows a heretofore unavailable, thorough analysis of payout taxes and the allocation of investment within a multi-country, multi-year framework. We also cross-check our tax classifications and rates against those reported in Rydqvist, Spizman, and Strebulaev (2010), who examine the effect of equity taxation on the evolution of stock ownership patterns in many countries. As in this paper, Rydqvist et al use the top statutory tax rate on dividends and the tax rate on capital gains that qualify as long-term to conduct their analysis.

15 We start our analysis in 1990 for two reasons. First, WorldScope provides less than comprehensive coverage of individual data items for non-U.S. firms before 1990. An earlier start may thus have biased our results for earlier sub-periods away from international evidence towards evidence from North America. Second, 1990 is a historically logical year to begin. With the transformation into capitalist, democratic systems in 1990, many former communist countries have only begun to incorporate dividend and capital gains taxation in their tax laws.

3.2 Investment variables

Table 3 Panel B presents summary statistics for our investment variables. Our proxies for firm investment are threefold. First, we create the variable Investment, defined as additions to fixed assets other than those associated with acquisitions (capital expenditure), normalized by total assets. 16 Second, we include PPE Growth, the growth in plant, property, and equipment from t-1 to t divided by end-of-year t-1 assets. Our final measure of investment intensity is Asset Growth, the growth in total assets normalized by total assets of the firm. The numerator in our investment variables is measured one year after our total assets variable, the denominator. Before computing investment, we translate capital expenditures, PPE, and total assets in US dollars into real terms (base year 2000) by using the US GNP deflator (World Development Indicators, Worldbank 2010). In our sample, firms on average have capital expenditures amounting to 5.9% of the value of their prior year total assets. The average growth rate in plant, property, and equipment is 8.1%, and the average growth rate in total assets is 7.9%. The range of values of investment is considerable – from 0.8% (10th percentile) to 12.7% (90th percentile) (Investment), -13.8% to 29.0% (PPE Growth), or -17.0% to 30.8% (Asset Growth).

16 It includes additions to property, plant and equipment, and investments in machinery and equipment.

3.3 Tax variables

Summary statistics for tax variables and controls are presented in Panel C of Table 3. All tax rates that we employ apply to investors with non-substantial shareholdings and holding periods that qualify as long-term investments in accordance with country-specific tax legislation. We construct three tax variables. Dividend Tax is the personal income tax rate on dividends in a country and year (in %). 17 Its range of values is wide, from 0% to 66.2%, with a mean dividend tax burden of 27.8% and standard deviation of 12.6%, reflecting the considerable variation of payout taxes across countries and over time. Effective Tax C is the country-specific weighted effective corporate payout tax rate (in %).
It is calculated by weighting the effective tax rate on dividends and share repurchases by the importance of dividends and share repurchases as payout channels in a country over the 1990-2008 period. With this measure, we follow prior analyses of effective capital gains taxation and assume the effective tax rate on capital gains from share repurchases to be one-fourth of the statutory tax rate (see La Porta et al 2000 and Poterba 1987). This way, we control for the effect that capital gains are taxed only at realization and that the effective capital gains tax rate may thus be significantly lower than the statutory rate. 18 The importance weight of dividends in a country is calculated by averaging the dividend-to-assets ratio across firms and years, and then dividing by the average total payout ratio (sum of dividends and share repurchases normalized by total assets) across firms and years. The share repurchase weight is calculated analogously. 19

17 Imputation credits and country-specific tax exemptions available to investors have been taken into account when calculating this "effective" rate. For example, as per the definition of imputation systems above, if the tax rate on dividend income is 50% and the available imputation credit is 20%, then the "effective" rate we employ is 30%. If, as for example in Germany from 2001-2008, 50% of dividend income is tax exempt, then the effective rate is half the statutory tax rate.
18 The assumption that the true tax rate is a quarter of the stated rate is not important to our conclusions. We get very similar magnitudes using other assumptions (including anything in the [0,1] range).

Average Tax C, the country weighted average tax, is an alternative measure of the average corporate payout tax rate (in %). It is obtained by weighting each year's dividend and statutory capital gains tax rate by the relative importance of dividends and share repurchases as payout channels in a country over the sample period. 20 In principle, there are reasons to prefer either of the measures. The dividend tax rate disregards the tax burden of repurchases, but requires no assumptions about the capital gains taxes incurred when firms retain earnings (i.e., retaining earnings makes the share price higher, thereby increasing current capital gains taxes for sellers of shares while reducing future capital gains taxes for buyers). We have also rerun all our regressions with a weighted average of tax rates where we allowed weights to vary not only by country but also by year (i.e., there is one set of weights for each country-year, which is applied to tax rates that also may vary by country-year). The country-average tax rate may be unrepresentative if the mix of payout varies a lot, but it raises fewer endogeneity concerns. In practice, country average tax rates and country-year average tax rates are very similar, and the regression results are very close, so we do not report results for the latter. The mean values of our Effective Tax C and Average Tax C variables are 18.3% and 24.5%, with standard deviations of 9.1% and 10.3%. Figure 3 illustrates the inverse cumulative distribution function (CDF) of tax rates across observations in our sample. As is evident, the variation in tax rates is considerable by any of our three tax measures, reflecting the substantial tax experimentation taking place during our sample period. Because of the uneven number of firms across countries, long-lived tax systems in large countries (the US and Japan) produce lots of data.
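To make the weighting scheme described above concrete, here is a minimal sketch of the two country-level tax measures with made-up example inputs; the function and variable names are ours, not the paper's.

```python
# Illustrative sketch of the country payout-tax measures described above.
# div_weight and rep_weight are the country's average dividend and repurchase
# importance weights (they sum to one); rates are in percent.
def average_tax_c(div_tax, cg_tax_statutory, div_weight, rep_weight):
    # Weighted average of the dividend tax and the statutory capital gains tax.
    return div_weight * div_tax + rep_weight * cg_tax_statutory

def effective_tax_c(div_tax_effective, cg_tax_statutory, div_weight, rep_weight):
    # Effective capital gains rate assumed to be one-fourth of the statutory rate,
    # following the text (La Porta et al 2000, Poterba 1987).
    return div_weight * div_tax_effective + rep_weight * 0.25 * cg_tax_statutory

# Example with made-up numbers: dividend tax of 40% after imputation credits,
# statutory capital gains tax of 20%, dividends 70% of total payout.
print(average_tax_c(40.0, 20.0, 0.7, 0.3))    # 34.0
print(effective_tax_c(40.0, 20.0, 0.7, 0.3))  # 29.5
```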
3.4 Other variables Our firm-level variables measure internal funds, capital structure, Tobin’s Q, and growth. The availability of internal funds for investment is measured with three alternative variables: a) Cash Flow is the funds from operations of the company measured as the ratio of cash flow relative to total assets, b) Cash is defined cash holdings over total assets, and c) EBITDA measures earnings before interest, tax, and depreciation as a fraction of total assets. Unlike cash flow, EBITDA does not include tax payments, or increases in working capital. 19 Throughout we use cash dividends only, to avoid that differences in the tax treatment of cash and stock dividends infect our results. Our share repurchase variable is measured by the actual funds used to retire or redeem common or preferred stock and comes from the cash flow statement. 20 Weighing the capital gains tax by the prevalence of repurchases has the important advantage of automatically dealing with limitations on repurchases. If a country has high taxes on dividends and low taxes on repurchases, but severely restricts repurchases through laws and regulations, it is not fair to say that payout faces low taxes. Because we weight by actual quantities, we will put a small weight on the low payout tax rate. 13 We measure capital structure through leverage, defined as total book debt over total book assets. We include Tobin’s Q, the ratio between the market value and replacement value of the physical assets of a firm (Q). This variable can measure future profitability, that is, the quality of investment opportunities, as well as measurement error arising from accounting discrepancies between book capital and economic replacement costs. We include the natural logarithm of growth in sales from year t-2 to t (Sales Growth) and the relative size of a firm (Size) to control for the fact that smaller, high growth firms have greater profitable investment opportunities than bigger and more mature companies. We measure the relative size of a firm as the percentage of sample firms smaller than the firm for each country in each year. The numerator in our firm-level controls is measured one year after our total assets variable, the denominator. All values for these control variables in US dollars are converted into real terms (base year 2000) by using the US GNP deflator. 4. Tests and results 4.1 Internal resources and investment under different taxes: non-parametric results The simplest way of testing how payout taxes impact investment of firms with and without access to internal equity is to track firm investment around tax reforms. We do this in our panel sample by sorting firms in each country-year into quintiles based on the ratio of cash flow to assets. This is meant to capture firms’ ability to finance investment internally. 21 We then calculate average investment over assets for each group in each country-year cell. We demean these ratios by country-year, to account for crosscountry and time variation in average investment levels. Next we identify tax changes, using the countryweighted average payout tax rate (Average Tax C, results are similar with the two alternative measures). We focus on events where payout taxes changed by at least three percentage points. We exclude any events with fewer than thirty observations (firms) in the first year of the tax change. 
To avoid overlapping periods, and following Korinek and Stiglitz (2009), we further exclude events where a substantial tax cut (increase) is followed by a tax increase (cut) within two years of the original reform (Sweden 1994/1995, Australia 2000/2001, Norway 2001/2002, and Korea 1999/2001). As Korinek and Stiglitz show, where firms perceive tax changes as only temporary, tax changes may generate smaller effects. Since tax reform is often debated extensively, it seems possible that these tax reversals can be predicted by some firms and investors. We further exclude an event where the effects of the payout tax change overlap with a substantial corporate tax reform (Korea 1994). The remaining 29 events include fifteen events with an average tax drop of 9.8 percentage points (median 5.5) and fourteen events with an average tax increase of 8.4 percentage points (median 5.6).

21 Sorting on related variables such as Net Income/Assets gives very similar results.

For every event, we track the average ratio of investment to lagged assets for firms in each quintile in the three years leading up to the tax change, the first year when the new rules apply, and the two years following the tax change. Average differences in investment between high and low cash flow firms around the tax events are shown in Figure 4. This graph shows the difference between the average investment of the low and high cash flow quintiles. The point estimate is positive in all years, i.e., firms with high internal cash flows tend to invest more. There is no apparent trend in the investment rate difference prior to a tax reform. After a tax reform, however, the investment difference follows the direction of the tax change (e.g., the difference increases when taxes are raised and falls when taxes are reduced).

In Table 4, we provide a detailed analysis of the relative investment of high and low cash flow firms. The table shows average investment (demeaned by country-year) for both pre- and post-reform periods, and for the two groups of firms. The difference and difference-in-difference estimates are shown as well. The time period analyzed around tax events is from four years before to two years after the reform. The effects are in line with the hypothesis that higher taxes should be associated with relatively higher investment in those firms that have access to internal cash (Column 3, Panels A and B). After payout tax increases (decreases) the importance of the availability of internal resources for high investment increases (decreases) significantly. On average, the difference in investment between low and high cash flow firms increases from 5.33% to 7.59% following a payout tax increase. When payout taxes are cut, the difference in investment falls from 7.27% to 5.54%. These results are consistent with the prediction that corporate payout taxes drive a wedge between the cost of inside and outside equity and that high payout taxes favor investment by firms with internal resources.

The tax-based theory of the cost of capital wedge suggests that firms with inside funding should not respond to tax incentives (they are "new view" firms). Nevertheless, there is movement in the high cash flow group of firms in Table 4 (after a tax increase, they increase investment relative to the median firm), disagreeing with this prediction. There are four possible explanations for the investment changes observed for high cash flow firms.
First, countercyclical fiscal policy could generate patterns in aggregate investment consistent with Table 4. In principle, forces of political economy could produce endogeneity in either direction: tax increases may be more likely in contractions, when the government budget is in deficit, or in expansions, when there is less political pressure to stimulate the economy with fiscal expansion. Investment tends to fall after tax reductions and rise after tax increases, which might be due to countercyclical tax policy (i.e., taxes are raised at times when investment is temporarily low and can be expected to increase). This type of endogeneity is a key motivator for our approach of using difference-in-difference tests with demeaned investment. By looking at relative cross-firm differences in investment within a country and year, we difference out aggregate level effects. 22 A second possibility is that agency problems are a driver of investment in our sample firms in a way consistent with Chetty and Saez (2010): when tax rates go up, pressure to pay out cash is reduced, permitting managers to undertake excessive investment. Unlike the new view, this theory predicts that cash rich firms will respond to tax changes, and that aggregate investment may respond perversely to payout taxes. Third, cash rich firms may experience increased investment opportunities when cash poor firms withdraw. Finally, the aggregate patterns may be related to the permanence of tax changes. Korinek and Stiglitz (2009) predict that a tax cut which is expected (by firms) to be temporary can lead to inter-temporal tax arbitrage: firms want to take advantage of the temporarily low tax by paying out more cash, and do so in part by reducing investment. This tax arbitrage is done by mature (i.e., cash rich) firms who generate the bulk of payout.

Thus, there are four reasons that the investment of cash rich firms is correlated with tax changes in the direction evident in Table 4. Importantly, under all four scenarios, our inferences based on the relative investment of high and low cash flow firms remain valid, i.e., the difference-in-difference result tells us that low payout taxes favor cash poor firms in a relative sense. Interpreting aggregate correlations is much more complicated, and we do not attempt to tell the possible explanations of the aggregate pattern apart. We believe the lessons learned from the cross-sectional differences are less ambiguous and of great potential importance for understanding corporate investment and for setting public policy.

The estimated difference-in-difference effect varies considerably across events. Figure 5 plots the empirical densities of difference-in-difference estimates for tax decrease and increase events. Two (three) of the fifteen (fourteen) tax decreases (increases) have difference-in-difference effects that conflict with our hypothesis. In contrast, one third of the tax decreases reduce the difference in the ratio of investment to assets between high and low cash flow firms by more than 2.5 percentage points – more than one third of the pre-tax change differences. 40% of the tax increases raise the wedge in investment between high and low cash flow firms by more than 2.5 percentage points, i.e., more than 50% of the pre-tax change differences.
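A minimal sketch of the event-level calculation behind Figure 5 (and the difference-in-difference numbers in Table 4) is given below, assuming a firm-year DataFrame with hypothetical columns country, year, cash_flow, and investment; it only illustrates the quintile sorting, demeaning, and pre/post comparison described in this section.

```python
# Illustrative sketch (not the authors' code) of the non-parametric test:
# sort firms into cash-flow quintiles within each year of a given country,
# demean investment, and compare the high-minus-low investment gap before
# and after a payout tax change.
import pandas as pd

def diff_in_diff(panel: pd.DataFrame, country: str, reform_year: int) -> float:
    df = panel[panel["country"] == country].copy()
    df["cf_quintile"] = df.groupby("year")["cash_flow"].transform(
        lambda s: pd.qcut(s, 5, labels=False)
    )
    df["inv_demeaned"] = df["investment"] - df.groupby("year")["investment"].transform("mean")

    def high_low_gap(sub):
        by_q = sub.groupby("cf_quintile")["inv_demeaned"].mean()
        return by_q[4] - by_q[0]          # high minus low cash-flow quintile

    pre = df[df["year"].between(reform_year - 4, reform_year - 1)]
    post = df[df["year"].between(reform_year, reform_year + 2)]
    return high_low_gap(post) - high_low_gap(pre)
```

Positive values after a tax increase, and negative values after a tax cut, correspond to the pattern reported above.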
4.2 Internal resources and investment under different taxes: OLS results 22 We expect that endogeneity between payout tax changes and the dispersion of investment (as opposed to the level) is much less likely to be important. The correlation table in Appendix A.IX supports this expectation. It also highlights that tax changes are at best weakly related to other macroeconomic determinants that affect the level of investment in an economy. Tax changes are only weakly correlated with current and prior year GDP growth and not significantly related to other macroeconomic variables with the potential to influence investment: inflation, and cost for setting up businesses (see e.g., Djankov et al. 2010), and government spending measured by subsidies, military expenditures and R&D expenditures. We also implement several robustness tests to control for government policy in various ways (see Section 5). 16 Compared to the non-parametric tests, the regressions have several advantages. They use more of the variation in the data, and can easily integrate both tax increases and decreases in the same specifications. They also allow for more detailed controls of firm heterogeneity. However, it is harder to study the detailed time patterns in the regression tests. By construction, regressions put more weight on those events that happen in countries with many firms (i.e., Japan and the US), 23 although in principle that can be changed by using GLS (we do not do this, although we always cluster errors by country-year, so that we properly take into account the amount of statistical power we have). 24 The regressions exploit all of the variation in tax rates that is visible in Figure 3. For our baseline tests, we regress investment on firm controls, fixed effects for firms and for country-year cells, and the interaction of the payout tax rate with cash flow (we do not include the level of the tax, since this is absorbed by the country-year fixed effects). 25 We control for relative size, Tobin’s q, cash flow, and leverage. We include firm and country-year fixed effects in all our regressions. These help control for business cycles and other macro-economic factors. The main variable of interest is the interaction of internal resources (cash flow) and taxes. If taxes raise the relative cost of external equity, we expect high taxes to coincide with a stronger effect of cash flow on investment (since high cash flow means a firm can finance more investment with cheap internal equity). We therefore predict that the interaction coefficient should be positive. Regression results are reported in Table 5, for each of the three tax variables. The estimated coefficient for the tax-equity interaction variable is consistently positive and significant. In other words, the higher payout taxes are, the stronger is the tendency for investment to occur where retained earnings are high. As predicted by the tax wedge theory, payout taxes “lock in” investment in firms generating earnings and cash flow. The estimated magnitudes are large. For example, going from the 25 th percentile of the country-weighted average tax rate (15.0%) to the 75 th percentile (32.2%) implies that the effective coefficient on cash flow increases by 0.029, an increase by 32.8% over 23 We get similar results when excluding Japanese and U.S. firms (Table A.I of the Appendix). 24 We also test the robustness of our results to regression specifications in which we cluster standard errors at the country level and at the country-industry level. 
Standard errors for the cash flow*tax interactions obtained from these additional specifications are very similar to those in our baseline tests. They are reported in Table A.II of the Appendix. 25 For brevity, in what follows we only discuss the results obtained by using our Investment dependent variable. The results using our alternative measures of investment, PPE Growth and Asset Growth, align very closely with the results reported in this section. The results are displayed in Table A.III of the Appendix. Of the six coefficient estimates for the interaction of the payout tax rate with cash flow, five are significantly different from zero. We also ensure robustness of our results to alternative ways of scaling our measures of investment. In what follows, we use book assets to scale investment. As our sample includes smaller and nonmanufacturing firms with modest fixed assets and varying degrees of intangible assets this appeared the logical approach (cf. Baker, Stein, and Wurgler 2003). Nevertheless, following Fazzari, Hubbard, and Petersen (1988) and Kaplan and Zingales (1997) we also investigate robustness of our results to using the alternative denominators property, plant, and equipment (PPE) and the book value of fixed assets to scale investment. The estimated coefficients for the tax-cash flow interaction variable are again consistently positive and significant when we use these alternative scale variables for investment. 17 the conditional estimate at the 25 th percentile. Using the country-weighted effective tax rate, the effect is slightly larger. Going from the 25 th percentile (7.8%) to the 75 th percentile (25.2%) implies that the effective coefficient on cash flow increases by 0.037, 36.6% more than the baseline estimate in Table 5. One implication of this is that it appears a large part of the cash flow coefficient in investment regressions may reflect the differential cost of capital for firms with and without access to internal funds (the literature has mainly focused on financial constraints and varying investment opportunities as explanations of such coefficients). The high R-squared in the regressions in Table 5 stems largely from the many firm fixed effects included. On their own, these explain about 52% of the variation in investment rates. This suggests that they may be important to include, and we maintain them in all regressions. In fact, their inclusion does not change our estimates for the tax-cash flow interaction noticeably. We next use alternative measures of internal equity to check the robustness of our results thus far. We use the ratio of EBITDA to lagged assets as an alternative flow measure, and cash to lagged assets as a stock measure. Conceptually, a stock measure may be more natural than a flow measure, but cash may be financed on the margin by debt, in which case this becomes less informative about whether the firm has internal equity. In Table 6, both measures are interacted with all three tax variables. Of the six coefficient estimates, five are significantly different from zero. The magnitudes are smaller than those reported for cash flow in Table 5. We have also used further measures of internal resources, such as net income, or operating income. Results are similar (Table A.IV of the Appendix). In a next step, we consider more flexible econometric specifications. 
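Before turning to those specifications, the magnitude calculation above can be checked with simple arithmetic; the interaction coefficient below is implied by the quoted numbers rather than read directly from Table 5.

```python
# Back-of-the-envelope check of the reported economic magnitude: moving the
# country-weighted average tax rate from its 25th to its 75th percentile.
tax_p25, tax_p75 = 15.0, 32.2      # percentiles quoted in the text (%)
delta_cf_coeff = 0.029             # change in the cash-flow coefficient quoted in the text

implied_interaction = delta_cf_coeff / (tax_p75 - tax_p25)
print(round(implied_interaction, 5))     # ~0.00169 per percentage point of tax

# The quoted 32.8% relative increase implies a cash-flow coefficient of roughly
# 0.029 / 0.328 ~ 0.088 at the 25th percentile of the tax rate.
print(round(delta_cf_coeff / 0.328, 3))
```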
Thanks to the panel structure of the data set, we can allow the coefficient on cash flow to vary across countries and years, in essence replicating the identification strategy of the many studies exploiting the 2003 tax cut in the US (for seventy nine changes across 25 countries). In Table 7 we report regressions including interactions of cash flow with both country and year indicator variables. Allowing the slope on cash flow to vary by country, we can rule out any time-invariant differences in the relation between payout taxes and the allocation of investment in different countries. For example, accounting differences could make cash flow less precisely measured (reported) in some countries, where we would therefore see a smaller slope on cash flow due to attenuation bias. As long as such issues are time-invariant, we can eliminate any effect on our results by including the interaction of country fixed effects with cash flow. The coefficient estimates for the cash flow-payout tax interaction remain statistically significant, and are somewhat large across the board (the firm controls have coefficients that are very similar to base line specifications). In fact, allowing these extra controls the estimated magnitudes are larger than those estimated in Table 5. The effective coefficient on the cash flow*tax interaction increases by 0.0002 (dividend tax), 0.0006 (Effective Tax C), and 0.0004 (Average Tax C) when compared to the coefficients reported in Table 4. 18 The R-squared increases by about twenty-five basis points. Thus, a more conservative estimation technique gives a more precise result in line with the predictions of the tax wedge theory. With the more demanding flexible specifications we address one additional concern. We want to repeat our analysis using cash flow percentile ranks rather than the raw cash flow measure. This addresses concerns that despite our eliminating extreme observations of our key independent variables our results may be sensitive to outliers or to cross-country variation in the standard deviation of cash flow. 26 The results using cash flow percentile ranks are reported in Table 8. Coefficient estimates are more significant than those for the raw CF variables. T-statistics for the coefficients on our cash flow * tax interactions are very high. An auxiliary prediction of the theory of tax-induced cost differences between internal and external equity is that high taxes reduce the need to reallocate resources from profitable to unprofitable firms. Therefore, high taxes should reduce the amount of equity issues. 27 This provides an additional falsification test. We test this by using firm-level data on payout tax and quantities of equity raised. If we cannot see a negative correspondence between payout tax and amount of equity issues, it becomes less plausible that our tax measure properly captures variation in the cost of equity. Table 9 presents tests of the predicted negative relation between taxes and equity issues in our sample. To help control for market timing (as opposed to payout tax timing), we control for recent stock return in the equity issues regressions. As predicted, the coefficient estimate is negative for all three measures of taxes. A ten percentage point increase in the dividend tax rate (the country average payout tax rate) predicts a drop in equity issuance by 9% (12%) of the unconditional mean. High payout taxes are associated with both low investment and low equity issuance among firms with low profits. 
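A compact sketch of this equity issuance test follows, under the same hypothetical column-name assumptions as the earlier regression sketch (net_equity_issues scaled by assets, recent_return for the stock-return control); the exact fixed-effects structure of Table 9 is not spelled out in the text, so firm and year effects are used here only for illustration.

```python
# Illustrative sketch of a Table 9-style test: equity issuance regressed on the
# payout tax, controlling for recent stock returns (market timing) and firm
# controls, with firm and year fixed effects shown as dummies for clarity.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("firm_panel.csv")   # hypothetical firm-year panel

model = smf.ols(
    "net_equity_issues ~ payout_tax + recent_return + cash_flow + q + leverage + size"
    " + C(firm_id) + C(year)",
    data=df,
)
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["country_year"]})
print(res.params["payout_tax"])      # expected to be negative if higher taxes depress issuance
```

In the reported estimates, a ten percentage point increase in the dividend tax rate corresponds to roughly a 9% drop in issuance relative to the unconditional mean, and issuance is low when payout taxes are high.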
This is consistent with taxes as a driver of the cost of capital. It also suggests one channel through which the differential investment responses to taxes come about: with lower taxes, domestic stock markets reallocate capital to firms without access to internal cash.

26 Dependent variables are truncated, so to some extent this is already addressed.
27 The same prediction applies to payout: lower taxes should be associated with more payout. However, this prediction is less unique. If firms perceive tax changes as predictable, they may attempt to time payout to times when taxes are low (e.g., Korinek and Stiglitz 2009). It therefore seems that testing equity issues provides better discrimination among theories than testing payout volumes.

4.3 Difference-in-difference analysis: old view firms vs. new view firms

We next sort firms by their likely access to the equity market. This is an important distinguishing feature between new view and old view models. According to the new view, all firms finance internally (on the margin), and therefore do not respond to taxes on payout. According to the old view, all firms finance their investment externally (again, on the margin), and therefore respond to taxes on payout (their cost of capital increases in such taxes). We hypothesize that the two assumptions fit different firms. By sorting firms by access to the equity market, we may be able to test the two theories.

We attempt to sort firms into those that can source funds in the equity markets (old view) and firms that have to rely more on internal resources to finance investment (new view). To classify firms, we use three methods: predicted equity issues, actual equity issues in preceding years, and the KZ index of financial constraints (Kaplan and Zingales 1997). 28 We estimate the effect of taxation on the cash flow sensitivity of investment separately for the groups of firms. In Table 10, Panel A, we sort firms based on the predicted probability that a firm issues shares, using common share free float, share turnover, sales growth, leverage, market capitalization, and market-to-book. We define firms as old view firms if predicted equity sales are above 2% of lagged assets. In Panel B, we define firms as old view firms if the sum of the net proceeds from the sale/issue of common and preferred stock over the preceding year exceeded zero, and as new view firms otherwise. 29 In Panel C, we classify firms as new view if the KZ index of financial constraints is above 0.7, and otherwise as old view firms. For all three classifications, there is a sizable difference in the effect of taxation on the marginal source of funds for investment between old view firms and new view firms. The differences between the coefficients are statistically significant at the 5% level or better in each pair of regressions. For old view firms, the cash flow coefficient is always sensitive to tax rates, as predicted. For new view firms, the coefficient estimate is positive but smaller and insignificant in all cases.

4.4 Governance and the impact of taxation on the cash flow sensitivity of investment

Studies of the 2003 US tax cut found that governance variables tended to have a large impact on firm responses to the tax cut (Brown et al 2007 and Chetty and Saez 2005). Chetty and Saez (2010) model this, and suggest that poorly governed firms have CEOs who invest for reasons unrelated to the marginal cost and value of investment (i.e., they are unresponsive to the cost of capital).
When taxes fall, such CEOs switch from excessive investment to payout, and so lower taxes have important welfare benefits. One prediction of their model is that poorly governed firms will not respond as much to tax changes as well governed firms. To identify governance, we look at directors' ownership stakes (including officers') in the company. This is based on the notion that only owners with large stakes have both the power and the incentive to make sure the firm is maximizing value (Shleifer and Vishny 1986, Jensen and Murphy 1990). Additionally, the measure seems plausibly institution-independent, i.e., we expect it to be meaningful across countries and time. Our sample countries vary substantially in terms of legal institutions, ownership structure, and other factors. Finally, this measure can be calculated for many of our sample firms (about three quarters of observations).

28 Note that we cannot condition on payout to distinguish financially constrained vs. unconstrained firms, since payout may be determined simultaneously with investment, which is our dependent variable.
29 Our results are robust to using the dividend tax rate and the country-weighted effective tax rate instead of the country-weighted average tax rate for this analysis (Tables A.V and A.VI of the Appendix).

To calculate the fraction of shares held by insiders we use the sum of the outstanding shares of a company held by directors and officers (if above the local legal disclosure requirement) relative to total shares outstanding. 30 The median ownership stake held by insiders is 4.4% for the firms in our sample. With a standard deviation of 18.9% and an interquartile range of 16.6%, the variation of insider ownership across firms and years is substantial. Particularly low insider ownership stakes are observed, for example, for companies such as Johnson & Johnson (US, 0.1%), Samsung Fine Chemicals (KOR, 0.1%), and Rentokil (UK, 0.1%). High ownership concentration is observed, for example, for Archon (US, 89%), Grupo Embotella (MEX, 72.4%), and Maxxam (US, 65%). As a comparison, currently over 12% of shares in Microsoft are held by corporate insiders. We observe the lowest insider ownership stakes in Austria (median value of 0.2%), the Netherlands (0.4%), and Japan (0.4%). High ownership concentration is found in Greece (42.6%), Italy (35.9%), and Belgium (11.1%). In the U.S., approximately 8.9% of the shares in a company are held by directors and officers in our sample. We sort firms into quartiles, with respective averages of 0.27%, 2.5%, 10.7%, and 41.8% insider ownership. 31

When sorting by insider ownership, and running separate regressions for each subsample, we find that firms with very low insider ownership show much less response to taxes (Table 11). The coefficient estimate is insignificant for the three groups of firms with the lowest ownership and significant for the group with high insider ownership. 32 This is consistent with the Chetty and Saez (2010) predictions that CEOs with incentives more in line with investors make decisions that are more responsive to tax incentives. More generally, the results may suggest that some firms are more responsive to changes in the cost of capital. However, the differences in the coefficient estimates across groups are not statistically significant, and are therefore only suggestive. Since insiders are individuals, this result also highlights that where the marginal shareholder is more likely to be a taxable investor, the tax effects may be stronger.
30 We obtain insider ownership data from the September 2010 version of the Worldscope database. The disadvantage of Worldscope is that it reports current insider ownership at any given time (or latest available) only. Thus, we have to assume that the fraction of shares held by directors and officers at the time we accessed the data is informative about the fraction of shares historically held by insiders. Prior evidence in the literature suggests that this aspect of the ownership structure usually changes slowly (Zhou 2001).
31 We get similar results when sorting for each country separately.
32 We get very similar results when we use the dividend tax rate and the country-weighted effective tax rate instead of the country-weighted average tax rate for this analysis (Tables A.VII and A.VIII of the Appendix).

5. Robustness to endogeneity concerns about payout taxes

We next turn to several important additional robustness tests. One central concern about our results is that tax changes are just fragments of larger policy changes in an economy which coincide with tax reforms and change firms' investment behavior. After all, governments are unlikely to set their tax policies completely independently of other developments in an economy. In particular, our tests (regressions and non-parametric tests) might be biased if tax changes were motivated by factors related to the relative investment of cash-rich and cash-poor firms. If, for example, taxation, cash flow, and investment all change simultaneously in response to other macroeconomic determinants or government policies, then we need to be concerned about endogeneity. Throughout our analyses we have used a number of checks to ensure robustness of our results to endogeneity concerns. For example, in our non-parametric test we have relied on differences in investment across firms instead of investment levels. Similarly, in all regressions we include country-year dummies to ensure that average investment is taken out (and, likewise, any particular government investment initiative that may inflate investment in a given year). Nevertheless, we turn to several important additional robustness checks below. They address concerns that tax rates change in response to policy variables or macroeconomic determinants that might also affect the allocation of investment across firms (thus causing false positive conclusions about taxation).

We now consider further features of the tax system. We first want to control for the corporate tax rate. Corporate taxes may be connected to payout taxes for many reasons, including government budget trade-offs and political preferences (i.e., pro-business). Corporate taxes might also affect how important internal resources are for firms. 33 Therefore, if different features of the tax code are correlated, an empirical link between payout taxes and relative investment across firms might be reflective of a true relationship between corporate taxes and relative investment. To make sure our results are not biased in either direction, we include an interaction of the corporate tax rate and cash flow in the regression. Here, we need to make a distinction between imputation systems and other tax regimes. In imputation systems, corporate and payout taxes are particularly strongly intertwined, as corporate tax at the firm level is "pre-paid" on behalf of shareholders and can be credited against payout taxes at the individual shareholder level.
Thus, the corporate tax rate is in some way a measure of investor taxes. To distinguish tax systems, we thus also add an interaction of cash flow*corporate tax with the dummy variable Imp, which takes the value of 1 for imputation systems, and zero otherwise. The results are reported in Table 12. The interaction of corporate tax with cash flow is insignificant in all specifications, suggesting that outside of imputation systems, 34 the corporate tax rate is not related to our findings. The triple interaction with the imputation system dummy is positive and significant, suggesting that in imputation systems, 35 internal cash flow is a stronger predictor of investment when taxes are high. In other words, internal resources appear to matter more when corporate taxes are high. One interpretation of this coefficient is that when taxes are high, financial constraints bind more than at other times (see e.g. Rauh 2006). Importantly for our purposes, the interaction of cash flow and payout tax is not much affected. The coefficient estimates remain significant (although the significance is somewhat lower for the dividend tax rate), and very close to the baseline regressions in magnitude.

33 For example, if many firms are financially constrained, they may be unable to respond to lower corporate tax rates by investing more. In that case, lower tax rates may coincide with lower coefficients on internal resources.

Apart from corporate income taxes, we are also concerned about other features of the tax system. Changes to payout taxes may coincide with modifications to the tax code apart from the corporate tax rate. We therefore introduce a set of broad measures of public sector policy as covariates, which may make investment more profitable. More generally, this way we can address legislative endogeneity concerns: if firms with little internal equity increase investment following a payout tax reduction, is that because of the tax cut, or did these firms just lobby to make the investment they were planning to do anyway more profitable? We collect alternative indicators of policy preferences for the economies in our sample from the World Development Indicators (World Bank, 2010). We opt for four indicators that measure government policy in three distinct dimensions: government stimulus, consumption climate, and legal environment. We sequentially include each policy control and its interaction with cash flow. To control for the effect of government stimulus programs that may affect investment, we use the control variables Subsidies, Grants, Social Benefits and Military Expenditure. The former measures government transfers on current account to private and public enterprises, and social security benefits in cash and in kind (relative to total government expense) (Table 13, Panel A). The latter includes all current and capital expenditures on the armed forces (relative to GDP) (Panel B). We measure governments' stance on consumption through the control variable Sales and Turnover Tax. It measures the tax burden on goods and services relative to the value added of industry and services (Panel C). 36 Finally, we measure public spending on research through R&D Expenditures as a fraction of GDP. It measures expenditures on basic research, applied research, and experimental development (Panel D). We use the more demanding flexible specifications to perform this additional check. Coverage for the world development indicators is generally poorer than for our tax variables over the sample period.
Results are reported in Table 13. Despite the reduction in sample size and the additional policy controls, the coefficient on the cash flow*tax interaction remains strong and significant in all but two specifications. 37

6. Conclusions
Our results have three main implications. First, it appears that payout taxes drive the allocation of capital across firms. High taxes lock capital into those firms that generate internal cash flows, ahead of those firms that need to raise outside equity. If firms have different investment opportunities, this means that tax rates change the type of investments being made. For example, high payout taxes may favor established industries. Taxes on payout may be as important for investment decisions and the cost of capital as the corporate income tax. 38

Second, the effect of payout taxes is related to both access to the equity market and governance. Firms which can access the equity market, "old view" firms, are the most affected by tax changes. Firms whose only source of equity finance is internal are less affected by taxes, as predicted by the "new view". A final source of heterogeneity is governance. Firms where decision makers have low financial stakes are less affected by tax changes, reflecting their propensity to make investment decisions for reasons unrelated to the cost of capital.

Third, the relation between cash flow and investment (see e.g. Fazzari, Hubbard, and Petersen 1988, Kaplan and Zingales 1997, Lamont 1997) appears to partially reflect the difference in the cost of capital between firms with and without access to inside equity. Firms invest more if they have easy access to more resources (see e.g. Lamont 1997 and Rauh 2006), especially internal cash flows. There is a potentially important tax channel through which internal resources affect investment: having internal cash flows implies a lower after-tax cost of equity capital. Thus, tax policy offers one important potential channel for affecting access to investment resources for firms without retained earnings.

37 When we include all four policy controls the reduction in the number of observations is immense – 77%. Nevertheless, for two of our three tax variables the influence of taxation on the cash flow sensitivity of investment remains statistically significant.
38 In fact, U.S. tax receipts data suggest that payout taxes are quite relevant. From 1960 to 2009, the share of corporate income taxes in U.S. federal tax receipts fell from 24% to 10% (IRS 2009). A study by the Department of the Treasury, Office of Tax Analysis, suggested that individual income taxes on dividends were 13% of federal tax receipts in 2005. In other words, payout-related taxes may currently raise more revenue than corporate income taxes.

References
Asquith, Paul and David W. Mullins, 1986, "Equity issues and offering dilution", Journal of Financial Economics, 15 (1-2): 61–89.
Auerbach, Alan J., 1979a, "Wealth maximization and the cost of capital", Quarterly Journal of Economics, 93 (3): 433–446.
Auerbach, Alan J., 1979b, "Share Valuation and Corporate Equity Policy", Journal of Public Economics, 11 (3): 291–305.
Baker, Malcolm P., Jeremy C. Stein, and Jeffrey A. Wurgler, 2003, "When Does the Market Matter? Stock Prices and the Investment of Equity-Dependent Firms", Quarterly Journal of Economics, 118 (3): 969–1006.
Becker, Bo, Zoran Ivkovic, and Scott Weisbenner, 2011, "Local Dividend Clienteles", Journal of Finance, 66 (2), April.
Bernheim, B. Douglas, 1991, "Tax Policy and the Dividend Puzzle", RAND Journal of Economics, 22 (4): 455–476.
Bradford, David F., 1981, "The incidence and allocation effects of a tax on corporate distributions", Journal of Public Economics, 15 (1): 1–22.
Brown, Jeffrey R., Nellie Liang, and Scott Weisbenner, 2007, "Executive Financial Incentives and Payout Policy: Firm Responses to the 2003 Dividend Tax Cut", Journal of Finance, 62 (4): 1935–1965.
Chen, Hsuan-Chi, and Jay Ritter, 2000, "The Seven Percent Solution", Journal of Finance, 55 (3): 1105–1131.
Chetty, Raj and Emmanuel Saez, 2005, "Dividend Taxes and Corporate Behavior: Evidence from the 2003 Dividend Tax Cut", Quarterly Journal of Economics, 120 (3): 791–833.
Chetty, Raj and Emmanuel Saez, 2010, "Dividend and Corporate Taxation in an Agency Model of the Firm", American Economic Journal: Economic Policy, 2 (3): 1–31.
Coase, Ronald H., 1937, "The Nature of the Firm", Economica, 4 (16): 386–405.
De Ridder, Adri, 2009, "Share Repurchases and Firm Behaviour", International Journal of Theoretical and Applied Finance, 12 (5): 605–631.
Dittmar, Amy, 2000, "Why do Firms Repurchase Stock?", Journal of Business, 73 (3): 331–355.
Djankov, Simeon, Tim Ganser, Caralee McLiesh, Rita Ramalho, and Andrei Shleifer, 2010, "The Effect of Corporate Taxes on Investment and Entrepreneurship", American Economic Journal: Macroeconomics, 2 (July): 31–64.
Fama, Eugene F. and Kenneth R. French, 2001, "Disappearing dividends: changing firm characteristics or lower propensity to pay?", Journal of Financial Economics, 60 (1): 3–43.
Fazzari, Steven M., R. Glenn Hubbard, and Bruce Petersen, 1988, "Financing Constraints and Corporate Investment", Brookings Papers on Economic Activity, 1: 141–195.
Feldstein, Martin S., 1970, "Corporate Taxation and Dividend Behaviour", Review of Economic Studies, 37 (1): 57–72.
French, Kenneth, and James Poterba, 1991, "Investor Diversification and International Equity Markets", American Economic Review, 81 (2): 222–226.
Gordon, Roger and Martin Dietz, 2006, "Dividends and Taxes", NBER Working Paper No. 12292, forthcoming in Alan J. Auerbach and Daniel Shaviro, editors, Institutional Foundations of Public Finance: Economic and Legal Perspectives, Harvard University Press, Cambridge, MA.
Guenther, David A., and Richard Sansing, 2006, "Fundamentals of shareholder tax capitalization", Journal of Accounting and Economics, 42 (3): 371–383.
Harberger, Arnold C., 1962, "The Incidence of the Corporation Income Tax", Journal of Political Economy, 70 (3): 215–240.
Harberger, Arnold C., 1966, "Efficiency effects of taxes on income from capital", in Marian Krzyzaniak, editor, Effects of corporation income tax, Wayne State University Press, Detroit.
Hashimoto, Masanori, 1998, "Share Repurchases and Cancellation", Capital Market Trend Report 1998-17, Capital Market Research Group, Nomura Research Institute.
Internal Revenue Service, 2009, IRS Data Book 2009.
Jacob, Marcus and Martin Jacob, 2011, "Taxation, Dividends, and Share Repurchases: Taking Evidence Global", SSRN Working Paper.
Jensen, Michael C. and Kevin J. Murphy, 1990, "CEO Incentives: It's Not How Much You Pay, But How", Harvard Business Review, 3 (3): 138–153.
Jensen, Michael C., and William H. Meckling, 1976, "Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure", Journal of Financial Economics, 3 (4): 305–360.
Jensen, Michael C. and Kevin J. Murphy, 1990, "Performance Pay and Top-Management Incentives", Journal of Political Economy, 98 (2): 225–264.
Kaplan, Steven N. and Luigi Zingales, 1997, "Do Investment-Cash Flow Sensitivities Provide Useful Measures of Financing Constraints?", Quarterly Journal of Economics, 112 (1): 169–215.
King, Mervyn A., 1977, Public Policy and the Corporation, Chapman and Hall, London.
Korinek, Anton and Joseph E. Stiglitz, 2009, "Dividend Taxation and Intertemporal Tax Arbitrage", Journal of Public Economics, 93 (1-2): 142–159.
La Porta, Rafael, Florencio Lopez-de-Silanes, Andrei Shleifer, and Robert W. Vishny, 2000, "Agency Problems and Dividend Policies around the World", Journal of Finance, 55 (1): 1–33.
Lamont, Owen, 1997, "Cash Flow and Investment: Evidence from Internal Capital Markets", Journal of Finance, 52 (1): 83–109.
Lewellen, Jonathan and Katharina Lewellen, 2006, "Internal Equity, Taxes, and Capital Structure", Working Paper, Dartmouth.
Malmendier, Ulrike and Geoffrey Tate, 2005, "CEO Overconfidence and Corporate Investment", Journal of Finance, 60 (6): 2661–2700.
Mondria, Jordi, and Thomas Wu, 2010, "The puzzling evolution of the home bias, information processing and financial openness", Journal of Economic Dynamics and Control, 34 (5): 875–896.
Myers, Stewart C., 1977, "Determinants of Corporate Borrowing", Journal of Financial Economics, 5 (2): 147–175.
Perez-Gonzalez, Francisco, 2003, "Large Shareholders and Dividends: Evidence From U.S. Tax Reforms", Working Paper, Columbia University.
Poterba, James M., 1987, "Tax Policy and Corporate Savings", Brookings Papers on Economic Activity, 2: 455–503.
Poterba, James M., 2004, "Taxation and Corporate Payout Policy", American Economic Review, 94 (2): 171–175.
Poterba, James M. and Lawrence H. Summers, 1984, "New Evidence That Taxes Affect the Valuation of Dividends", Journal of Finance, 39 (5): 1397–1415.
Poterba, James M. and Lawrence H. Summers, 1985, "The Economic Effects of Dividend Taxation", in Edward Altman and Marti Subrahmanyam, editors, Recent Advances in Corporate Finance: 227–284, Dow Jones-Irwin Publishing, Homewood, IL.
Rau, P. Raghavendra and Theo Vermaelen, 2002, "Regulation, Taxes, and Share Repurchases in the United Kingdom", Journal of Business, 75 (2): 245–282.
Rauh, Joshua, 2006, "Investment and Financing Constraints: Evidence from the Funding of Corporate Pension Plans", Journal of Finance, 61 (1): 33–71.
Rydqvist, Kristian, Joshua Spizman, and Ilya Strebulaev, 2010, "The Evolution of Aggregate Stock Ownership", Working Paper.
Shleifer, Andrei and Robert W. Vishny, 1986, "Large Shareholders and Corporate Control", Journal of Political Economy, 94 (3): 461–488.
Zhou, Xianming, 2001, "Understanding the determinants of managerial ownership and the link between ownership and performance: Comment", Journal of Financial Economics, 62 (3): 559–571.
Zingales, Luigi, 2000, "In Search of New Foundations", Journal of Finance, 55 (4): 1623–1653.
Figure 1
Personal Tax Rates on Dividend Income – High Variation Countries
This figure shows dividend tax rates for the six countries in our sample with the largest within-country variation in personal income tax rates on dividend income over the 1990-2008 period.
[Line chart: Tax Rate (%) by year, 1990-2008, for Finland, Japan, Netherlands, Norway, Sweden, and the United States.]

Figure 2
Capital Gains Tax Rates – High Variation Countries
This figure shows taxation of share repurchases for the six countries in our sample with the largest within-country variation in tax rates on capital gains over the 1990-2008 period.
[Line chart: Tax Rate (%) by year, 1990-2008, for Canada, Netherlands, Australia, Spain, Poland, and Switzerland.]

Figure 3
Tax Rates – Distribution over Sample
This figure illustrates the distribution of tax rates across 81,222 observations in our sample over the 1990-2008 period. The graph is a transposed cumulative distribution function with the number of observations on the x-axis and tax rates on the y-axis. Dividend Tax is the personal income tax rate on dividends (in %). Effective Tax C is the country-weighted effective corporate payout tax rate (in %). It is obtained by weighting each year's dividend and effective capital gains tax rates by the relative importance of dividends and share repurchases as payout channels (relative to total corporate payout) in a country over the sample period. The effective tax rate on share repurchases equals one-fourth of the statutory capital gains tax rate. Average Tax C is an alternative measure of the average corporate payout tax rate (in %). It is calculated by weighting each year's dividend and statutory capital gains tax rates by the relative importance of dividends and share repurchases as payout channels (relative to total corporate payout) in a country over the sample period.
[Chart: tax rate (%) against number of observations (0 to 80,000) for Dividend Tax, Effective Tax C, and Average Tax C; annotated points include US 2003-2008: 15%, US 1993-2000: 39.6%, and Japan 2004-2008: 10%.]

Figure 4
Average Investment by High and Low Cash Flow Firm Quintiles Around Payout Tax Changes of at Least 3 Percentage Points, 1992-2006
This figure shows the average investment by cash flow group for three years around 15 payout tax decreases and 14 payout tax increases in 1992-2006 with at least 30 observations in the country-year. We measure investment by capital expenditures normalized by prior-year total assets (CapEx/A) and demean investment by country-year cell. We then sort firms in each country-year cell into five quintiles according to their cash flow, and calculate average investment for each quintile. The 14 payout tax increase events are Australia 1993, Canada 1993, Denmark 1993, Denmark 2001, Germany 1994, Germany 1995, Finland 2005, Finland 2006, France 1997, Japan 2000, Norway 2006, Poland 2004, Switzerland 1998, and the US 1993. The 15 tax decrease events include Belgium 2002, Canada 1996, Canada 2001, Canada 2006, Germany 2001, France 2002, Italy 1998, Japan 2004, Netherlands 2001, Poland 2001, Spain 1996, Spain 1999, Spain 2003, US 1997, and the US 2003.
[Line chart: average investment (CapEx/A, roughly 0.04 to 0.08) in event years -3 to +2 relative to the tax change, shown separately for tax increase events and tax decrease events.]
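For concreteness, the weighting scheme behind Effective Tax C and Average Tax C described in the Figure 3 notes (and again in Table 3) can be written as a small helper. The function name, argument names, and the numbers in the example are ours and purely illustrative.

```python
def payout_tax_measures(div_tax, cg_tax, div_share):
    """Country-weighted payout tax measures as described in the Figure 3 notes.

    div_tax   : personal income tax rate on dividends (in %)
    cg_tax    : statutory capital gains tax rate (in %)
    div_share : dividends / (dividends + repurchases) for the country over the
                sample period, i.e., the weight on the dividend channel
    """
    effective_cg_tax = 0.25 * cg_tax  # effective repurchase tax = 1/4 of the statutory rate
    effective_tax_c = div_share * div_tax + (1 - div_share) * effective_cg_tax
    average_tax_c = div_share * div_tax + (1 - div_share) * cg_tax
    return effective_tax_c, average_tax_c

# Illustrative numbers only (not from the paper): a country where dividends are
# 70% of total payout, with a 30% dividend tax and a 20% capital gains tax.
print(payout_tax_measures(30.0, 20.0, 0.70))  # -> (22.5, 27.0)
```

The only difference between the two measures is whether the statutory or the one-fourth effective capital gains rate enters the repurchase channel.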
Figure 5
Difference-in-Difference Estimates, Empirical Distribution
This figure presents the empirical distribution of difference-in-difference estimates around tax increase and decrease events. Events are included if they represent a 3 percentage point or larger change in the tax rate, if there are at least 30 firm observations for each year around the change, and if they occur during 1992-2006. For each event, we sort firms in each year into five groups based on cash flows. For each year, the difference in the average investment to lagged assets between the firm quintiles with the highest and lowest cash flows is calculated. The difference-in-difference estimate for each event is defined as the change in this difference from the three years before to the three years after the tax change. The graph presents tax decreases and increases separately.
[Histogram: number of tax events by investment rate difference in percentage points (bins from -7.5 to 7.5), shown separately for tax decreases (<-3%; mean -1.42, median -1.98, std. dev. 2.40, N=15) and tax increases (>3%; mean 1.68, median 1.36, std. dev. 2.53, N=14).]

Table 1
Tax Regimes Across 25 Countries (1990-2008)
This table reports prevailing tax regimes across 25 countries over the 1990-2008 period. CL, FI, PI, SR, and TE abbreviate classical corporate taxation system, full imputation system, partial imputation system, shareholder relief system, and dividend tax exemption system, respectively. 1 – Split-rate system for distributed and retained earnings. 2 – Individuals had the option to accumulate the dividend grossed up applying a factor of 1.82 combined with a tax credit of 35% on the grossed up dividend. This mechanism is similar to a full imputation system (Source: OECD).
Country 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 Australia FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI Austria SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR Belgium SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR Canada PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI Denmark CL CL CL CL CL CL CL CL CL CL CL CL CL CL CL SR SR SR SR Finland PI PI PI FI FI FI FI FI FI FI FI FI FI FI FI SR SR SR SR France FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI SR SR SR SR Germany FI 1 FI 1 FI 1 FI 1 FI 1 FI 1 FI 1 FI 1 FI 1 FI 1 FI 1 SR SR SR SR SR SR SR SR Greece - - TE TE TE TE TE TE TE TE TE TE TE TE TE TE TE TE TE Hungary SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR Ireland PI PI PI PI PI PI PI PI PI PI CL CL CL CL CL CL CL CL CL Italy FI FI FI FI FI FI FI FI SR SR SR SR SR SR SR SR SR SR SR Japan CL CL CL CL CL CL CL CL SR SR SR SR SR SR SR SR SR SR SR Korea PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI Mexico FI 2 FI 2 TE TE TE TE TE TE TE FI FI FI FI FI FI FI FI FI FI Netherlands CL CL CL CL CL CL CL CL CL CL CL SR SR SR SR SR SR SR SR New Zealand FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI FI Norway SR SR FI FI FI FI FI FI FI FI FI PI FI FI FI FI SR SR SR Poland - - - SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR Portugal SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR Spain CL CL CL CL CL PI PI PI PI PI PI PI PI PI PI PI PI SR SR Sweden CL SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR SR Switzerland CL CL CL CL CL CL CL CL CL CL CL CL CL CL CL CL CL SR SR United Kingdom PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI PI United States CL CL CL CL CL CL CL CL CL CL CL CL CL SR SR SR SR SR SR 32 Table 2 Personal Income Tax Rates and Capital Gains Tax Rates Across 25 Countries (1990-2008) This table shows effective corporate payout tax rates across 25 countries over the 1990-2008 period. Panel A reports personal income tax rates on dividend income (in %). Panel B reports capital gains tax rates (in %). All capital gains tax rates reported are effective rates incurred by investors with non-substantial shareholdings and holding periods that qualify as long-term investments in accordance with country-specific tax legislation. For example in Denmark, Germany or the United States, capital gains from long-term shareholdings are taxed at the lower rate reported in Panel B. Austria, Italy, and Netherlands are examples for countries where capital gains from substantial shareholdings are taxed at higher rates. A shareholding qualifies as substantial if it exceeds a certain threshold in share capital (for example 5% in the Netherlands). See Jacob and Jacob (2010) for a detailed description of applied tax rates. 
Panel A: Personal Income Tax Rates on Dividend Income (in %) Country 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 Australia 15.2 15.2 15.2 23.0 23.0 19.5 19.5 19.5 19.5 19.5 22.0 26.4 26.4 26.4 26.4 26.4 23.6 23.6 23.6 Austria 25.0 25.0 25.0 25.0 22.0 22.0 22.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 Belgium 25.0 25.0 25.0 25.0 25.0 15.0 15.0 15.0 15.0 15.0 15.0 15.0 15.0 15.0 15.0 15.0 15.0 15.0 15.0 Canada 38.3 39.1 40.1 43.5 44.6 44.6 37.0 35.8 34.6 33.6 33.2 31.9 31.9 31.9 31.9 31.9 24.4 24.1 23.6 Denmark 60.9 45.0 45.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 43.0 43.0 43.0 43.0 43.0 43.0 43.0 45.0 Finland 59.5 55.6 55.9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 16.0 19.6 19.6 19.6 France 39.9 39.9 39.9 41.8 41.8 42.6 39.0 43.4 41.9 41.9 40.8 40.1 35.6 33.5 33.9 32.3 32.7 32.7 32.7 Germany 26.6 29.7 29.7 26.6 32.9 38.5 38.5 38.5 37.0 37.0 34.0 25.6 25.6 25.6 23.7 22.2 22.2 23.7 26.4 Greece - - 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Hungary 20.0 20.0 10.0 10.0 10.0 10.0 10.0 10.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0 25.0 25.0 10.0 10.0 Ireland 35.8 35.7 32.0 30.7 30.7 32.0 32.5 34.4 39.3 39.3 44.0 42.0 42.0 42.0 42.0 42.0 42.0 41.0 41.0 Italy 21.9 21.9 23.4 23.4 23.4 23.4 22.2 22.2 12.5 12.5 12.5 12.5 12.5 12.5 12.5 12.5 12.5 12.5 12.5 Japan 35.0 35.0 35.0 35.0 35.0 35.0 35.0 35.0 35.0 35.0 43.6 43.6 43.6 43.6 10.0 10.0 10.0 10.0 10.0 Korea 47.3 47.3 47.3 47.3 38.4 37.0 33.4 33.4 33.4 22.7 22.7 33.4 28.1 28.1 28.1 31.1 31.1 31.1 31.1 Mexico 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Netherlands 60.0 60.0 60.0 60.0 60.0 60.0 60.0 60.0 60.0 60.0 60.0 25.0 25.0 25.0 25.0 25.0 25.0 22.0 25.0 New Zealand 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 8.9 8.9 8.9 8.9 8.9 8.9 9.0 8.9 12.9 Norway 25.5 23.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 11.0 0.0 0.0 0.0 0.0 28.0 28.0 28.0 Poland - - - 20.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0 15.0 15.0 15.0 19.0 19.0 19.0 19.0 19.0 Portugal 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0 Spain 46.0 46.0 43.0 46.0 46.0 38.4 38.4 38.4 38.4 27.2 27.2 27.2 27.2 23.0 23.0 23.0 23.0 18.0 18.0 Sweden 66.2 30.0 30.0 30.0 0.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 Switzerland 40.9 40.9 41.5 42.4 42.4 42.4 42.4 42.4 42.4 42.4 42.1 41.5 41.0 40.4 40.4 40.4 40.4 40.4 25.7 United Kingdom 20.0 20.0 20.0 22.6 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 United States 28.0 31.0 31.0 39.6 39.6 39.6 39.6 39.6 39.6 39.6 39.6 39.1 38.6 15.0 15.0 15.0 15.0 15.0 15.0 33 Panel B: Capital Gains Tax Rates (in %) Country 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 Australia 48.5 48.5 48.5 48.5 48.5 48.5 48.5 48.5 48.5 48.5 24.3 24.3 24.3 24.3 24.3 24.3 23.3 23.3 23.3 Austria 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Belgium 41.0 39.0 39.0 40.2 40.2 40.2 40.2 40.2 40.2 40.2 40.2 40.2 10.0 10.0 10.0 10.0 10.0 10.0 10.0 Canada 35.1 35.7 36.3 38.6 39.3 39.3 39.0 37.1 36.3 35.9 31.9 23.2 23.2 23.2 23.2 23.2 23.2 23.2 23.2 Denmark 0.0 0.0 0.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 43.0 43.0 43.0 43.0 43.0 43.0 43.0 45.0 Finland 23.8 27.8 27.9 25.0 25.0 25.0 28.0 28.0 28.0 28.0 29.0 29.0 29.0 29.0 29.0 28.0 28.0 28.0 28.0 France 19.4 19.4 19.4 19.4 19.4 19.4 19.4 19.9 19.9 26.0 26.0 26.0 26.0 26.0 26.0 27.0 27.0 27.0 30.1 Germany 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Greece - - - 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Hungary 20.0 20.0 20.0 20.0 20.0 10.0 10.0 10.0 20.0 20.0 20.0 20.0 20.0 20.0 0.0 0.0 20.0 20.0 20.0 Ireland 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0 Italy 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 12.5 12.5 12.5 12.5 12.5 12.5 12.5 12.5 12.5 12.5 12.5 Japan 35.0 35.0 35.0 35.0 35.0 26.0 26.0 26.0 26.0 26.0 26.0 26.0 26.0 26.0 10.0 10.0 10.0 10.0 10.0 Korea 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Mexico 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Netherlands 60.0 60.0 60.0 60.0 60.0 60.0 60.0 60.0 60.0 60.0 60.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 New Zealand 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Norway 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 28.0 28.0 28.0 Poland - - 40.0 40.0 45.0 45.0 45.0 44.0 40.0 0.0 0.0 0.0 0.0 0.0 19.0 19.0 19.0 19.0 19.0 Portugal 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Spain 11.2 11.2 10.6 37.3 37.3 37.3 20.0 20.0 20.0 20.0 18.0 18.0 18.0 18.0 15.0 15.0 15.0 18.0 18.0 Sweden 33.1 30.0 25.0 25.0 12.5 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 30.0 Switzerland 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 42.4 42.4 42.1 41.5 41.0 40.4 40.4 40.4 40.4 40.4 25.7 United Kingdom 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 40.0 18.0 United States 28.0 28.0 28.0 28.0 28.0 28.0 28.0 20.0 20.0 20.0 20.0 20.0 20.0 15.0 15.0 15.0 15.0 15.0 15.0 34 Table 3 Sample Overview and Summary Statistics The sample consists of 7,661 firms in 25 countries for 1990-2008 presented in Panel A. Summary statistics for investment variables are presented in Panel B. Investment refers to capital expenditure in year t divided by the endof-year t-1 assets. PPE Growth refers to growth in plant, property, and equipment from t-1 to t divided by the endof-year t-1 assets, and Asset Growth is defined as the growth rate of assets over the prior year. Summary statistics for independent variables are presented in Panel C. Dividend Tax is the personal income tax rate on dividends (in %). Effective Tax C is the country-weighted effective corporate payout tax rate (in %). It is obtained by weighting each year’s dividend and effective capital gains tax rates by the relative importance of dividends and share repurchases as payout channels (relative to total corporate payout) in a country over the sample period. The effective tax rate on share repurchases equals one-fourth of the statutory capital gains tax rate. Average Tax C is an alternative measure of the average corporate payout tax rate (in %). It is calculated by weighting each year’s dividend and statutory capital gains tax rates by the relative importance of dividends and share repurchases as payout channels (relative to total corporate payout) in a country over the sample period. Cash Flow is the ratio of cash flow in year t relative to prior year total assets. Cash is defined as cash holdings over prior year assets. EBITDA measures earnings before interest, tax, and depreciation in year t as a fraction of t-1 total assets. Q is defined as the market-to-book ratio, that is, the market value divided by the replacement value of the physical assets of a firm. Sales Growth is the logarithm of the growth rate of sales from t-2 to t. 
Leverage is the ratio of year t total debt to prior year total assets, and Size is the relative firm size measured as the percentage of firms in the sample that are smaller than this firm. All variables are in real USD (base year 2000). Panel A: Sample Overview Country N(Firms) N(Obs) Country N(Firms) N(Obs) Country N(Firms) N(Obs) Australia 261 1,879 Hungary 13 111 Poland 70 403 Austria 26 332 Ireland 18 252 Portugal 28 269 Belgium 38 463 Italy 66 925 Spain 41 577 Canada 320 2,525 Japan 2,071 22,347 Sweden 100 1,112 Denmark 65 867 Korea 477 4,528 Switzerland 85 1,136 Finland 57 727 Mexico 39 401 UK 470 6,054 France 212 2,608 Netherlands 68 894 USA 2,720 28,439 Germany 245 3,067 New Zealand 31 272 Total 7,661 81,222 Greece 99 519 Norway 41 515 Panel B: Summary Statistics for Investment N Mean Standard Deviation 10 th Percentile Median 90 th Percentile Investment 81,222 0.0594 0.0676 0.0083 0.0398 0.1271 PPE Growth 77,626 0.0805 0.2364 -0.1377 0.0514 0.2898 Asset Growth 81,222 0.0785 0.3128 -0.1702 0.0338 0.3079 Panel C: Summary Statistics for Independent Variables N Mean St. Dev. 10 th % Median 90 th % Dividend Tax 81,222 27.7640 12.5679 10.0000 30.0000 43.6000 Effective Tax C 81,222 18.2530 9.1225 7.6536 17.5143 31.9932 Average Tax C 81,222 24.1584 10.3002 10.0000 26.9082 38.0938 Cash Flow 81,222 0.0696 0.1043 -0.0217 0.0720 0.1767 Cash 81,222 0.1480 0.1883 0.0127 0.0922 0.3409 EBITDA 81,222 0.0957 0.1139 -0.0066 0.1008 0.2138 Q 81,222 2.1270 2.9255 0.7524 1.2183 4.0391 Sales Growth 81,222 0.1114 0.3924 -0.2719 0.0896 0.5080 Leverage 81,222 0.2607 0.2345 0.0031 0.2276 0.5313 Size 81,222 0.6306 0.2404 0.2800 0.6571 0.9363 35 Table 4 Average Investment and Cash Flow around Payout Tax Changes Panel A of this table shows the average investment for bottom and top quintiles of cash flow to assets around 14 payout tax increases (Average Tax C) in 1990-2008 of at least 3 percentage points and with at least 30 observations in the country-year. Panel B illustrates the difference in investment between top and bottom cash flow quintiles around 15 payout tax decreases. We measure investment by capital expenditure in year t divided by the end-of-year t-1 assets. The table also shows the difference between groups and periods, and the difference-in-difference estimate. Standard errors are in parentheses. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. The 31 tax events are listed in Figure 4. Panel A: 14 Tax Increase Events Low Cash Flow Firms High Cash Flow Firms Difference between Groups (1) (2) (3) Pre-reform Periodt-4;t-1 -0.0230*** 0.0307*** 0.0533*** (0.0015) (0.0038) (0.0046) Post-reform Period t;t+2 -0.0278** 0.0481*** 0.0759*** (0.0025) (0.0037) (0.0051) Difference between Periods -0.0048* 0.0173*** 0.0226*** (0.0029) (0.0053) (0.0069) Panel B: 15 Tax Decrease Events Low Cash Flow Firms High Cash Flow Firms Difference between Groups (1) (2) (3) Pre-reform Periodt-4;t-1 -0.0232*** 0.0495*** 0.0727*** (0.0024) (0.0035) (0.0046) Post-reform Period t;t+2 -0.0163*** 0.0390*** 0.0554*** (0.0029) (0.0030) (0.0042) Difference between Periods 0.0068* -0.0105** -0.0173*** (0.0038) (0.0046) (0.0062) 36 Table 5 Firm Investment and Internal Resources under Various Tax Regimes This table reports linear regression results for firm investment behavior, estimated over the 1990-2008 period. The dependent variable is Investment, defined as capital expenditure in year t divided by the end-of-year t-1 assets. 
We use Cash Flow as a measure of firm’s availability of internal resources for investment. Cash Flow is the ratio of cash flow in year t relative to prior year total assets. See Table 3 for a description of the other independent variables included in the regressions. In column (1) we measure firms’ tax burden on corporate payouts (Tax) as the personal income tax rate on dividends (Dividend Tax). Column (2) uses the country-weighted effective tax rate (Effective Tax C), and column (3) employs the country-weighted average tax rate (Average Tax C). Country-year interaction indicator variables are included in all specifications. Standard errors (shown in parentheses) allow for heteroskedasticity and are clustered by country-years. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Dividend Tax Rate Country-Weighted Effective Tax Rate Country-Weighted Average Tax Rate (1) (2) (3) Cash Flow*Tax 0.0009** 0.0021*** 0.0017*** (0.0004) (0.0006) (0.0005) Cash Flow 0.0749*** 0.0644*** 0.0599*** (0.0115) (0.0101) (0.0123) Sales Growth 0.0157*** 0.0156*** 0.0156*** (0.0011) (0.0011) (0.0011) Leverage 0.0374*** 0.0373*** 0.0373*** (0.0029) (0.0029) (0.0029) Size 0.0025 0.0031 0.0030 (0.0040) (0.0040) (0.0040) Q 0.0011*** 0.0011*** 0.0010*** (0.0001) (0.0001) (0.0001) Firm FE Yes Yes Yes Country-year FE Yes Yes Yes Observations 81,222 81,222 81,222 R-squared 0.5779 0.5781 0.5781 37 Table 6 Firm Investment and Internal Resources under Various Tax Regimes – Alternative Measures This table reports linear regression results for firm investment behavior, estimated over the 1990-2008 period. The dependent variable is Investment, defined as capital expenditure in year t divided by the end-of-year t-1 assets. We use two alternative measures of firm’s availability of internal resources for investment. Cash is defined as cash holdings over prior year assets (columns (1), (3), (5)). EBITDA measures earnings before interest, tax, and depreciation in year t as a fraction of t-1 total assets (columns (2), (4), (6)). See Table 3 for a description of the other independent variables included in the regressions. In columns (1) and (2) we measure firms’ tax burden on corporate payouts (Tax) as the personal income tax rate on dividends (Dividend Tax). Columns (3) and (4) use the country-weighted effective tax rate (Effective Tax C), and columns (5) and (6) employ the country-weighted average tax rate (Average Tax C). Countryyear interaction indicator variables are included in all specifications. Standard errors (shown in parentheses) allow for heteroskedasticity and are clustered by country-years. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. 
Dividend Tax Rate Country-Weighted Effective Tax Rate Country-Weighted Average Tax Rate (1) (2) (3) (4) (5) (6) Cash*Tax 0.0005** 0.0006* 0.0005* (0.0002) (0.0003) (0.0002) EBITDA*Tax 0.0003 0.0010** 0.0009** (0.0003) (0.0004) (0.0003) Cash 0.0014 0.0060 0.0028 (0.0060) (0.0054) (0.0063) EBITDA 0.0395*** 0.0319*** 0.0283*** (0.0085) (0.0075) (0.0089) Sales Growth 0.0213** 0.0188*** 0.0213** 0.0188*** 0.0213** 0.0188*** (0.0011) (0.0012) (0.0011) (0.0012) (0.0011) (0.0012) Leverage 0.0331** 0.0366*** 0.0331** 0.0366*** 0.0332** 0.0365*** (0.0030) (0.0031) (0.0029) (0.0030) (0.0029) (0.0030) Size 0.0062 0.0038 0.0060 0.0042 0.0062 0.0041 (0.0041) (0.0040) (0.0041) (0.0040) (0.0041) (0.0040) Q 0.0013** 0.0013*** 0.0013** 0.0013*** 0.0013** 0.0013*** (0.0001) (0.0001) (0.0001) (0.0001) (0.0001) (0.0001) Firm FE Yes Yes Yes Yes Yes Yes Country-year FE Yes Yes Yes Yes Yes Yes Observations 81,222 81,222 81,222 81,222 81,222 81,222 R-squared 0.5688 0.5707 0.5687 0.5708 0.5687 0.5708 38 Table 7 Firm Investment and Internal Resources under Various Tax Regimes – Flexible Specifications This table reports linear regression results for firm investment behavior, estimated over the 1990-2008 period. The dependent variable is Investment, defined as capital expenditure in year t divided by the end-of-year t-1 assets. We use Cash Flow to measure firms’ availability of internal resources for investment. Cash Flow is the ratio of cash flow in year t relative to prior year total assets. See Table 3 for a description of the other independent variables included in the regressions. In column (1) we measure firms’ tax burden on corporate payouts (Tax) as the personal income tax rate on dividends (Dividend Tax). Column (2) uses the country-weighted effective tax rate (Effective Tax C), and column (3) employs country-weighted average tax rate (Average Tax C). Country-year interaction indicator variables are included in all three specifications. We also include the interaction of Cash Flow with both country and year indicator variables. Standard errors (shown in parentheses) allow for heteroskedasticity and are clustered by country-years. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Dividend Tax Rate Country-Weighted Effective Tax Rate Country-Weighted Average Tax Rate (1) (2) (3) Cash Flow*Tax 0.0011** 0.0027*** 0.0021*** (0.0005) (0.0008) (0.0006) Sales Growth 0.0158*** 0.0157*** 0.0157*** (0.0011) (0.0011) (0.0011) Leverage 0.0373*** 0.0372*** 0.0372*** (0.0029) (0.0029) (0.0029) Size 0.0035 0.0040 0.0038 (0.0040) (0.0040) (0.0040) Q 0.0009*** 0.0009*** 0.0009*** (0.0001) (0.0001) (0.0001) Firm FE Yes Yes Yes Country-year FE Yes Yes Yes Year FE*CashFlow Yes Yes Yes Country FE*CashFlow Yes Yes Yes Observations 81,222 81,222 81,222 R-squared 0.5803 0.5805 0.5804 39 Table 8 Firm Investment and Internal Resources under Various Tax Regimes – Cash Flow Percentile Ranks This table reports linear regression results for firm investment behavior, estimated over the 1990-2008 period. The dependent variable is Investment, defined as capital expenditure in year t divided by the end-of-year t-1 assets. We use the interaction of payout tax with the cash flow percentile rank (CF Rank) as explanatory variable. See Table 3 for a description of the other independent variables included in the regressions. Country-year interaction indicator variables are included in all specifications. 
In columns (2), (4), and (6) we also include the interaction of Cash Flow with both country and year indicators for the more demanding flexible specifications. Standard errors (shown in parentheses) allow for heteroskedasticity. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Dividend Tax Rate Country-Weighted Effective Tax Rate Country-Weighted Average Tax Rate (1) (2) (3) (4) (5) (6) CF Rank*Tax 0.0008*** 0.0008*** 0.0012*** 0.0013*** 0.0010*** 0.0010*** (0.0001) (0.0001) (0.0002) (0.0002) (0.0001) (0.0001) Baseline Controls Yes Yes Yes Yes Yes Yes Firm FE Yes Yes Yes Yes Yes Yes Country-year FE Yes Yes Yes Yes Yes Yes Year FE*CashFlow No Yes No Yes No Yes Country FE*CashFlow No Yes No Yes No Yes Observations 81,222 81,222 81,222 81,222 81,222 81,222 R-squared 0.5795 0.5818 0.5795 0.5817 0.5796 0.5818 40 Table 9 External Equity Financing and Tax Regimes This table presents linear regression results for external financing behavior, estimated over the 1990-2008 period. The dependent variable is the value of new equity issues to start-of-year book value of assets. Observations where the dependent variable exceeds 0.15 are excluded. See Table 3 for a description of the independent variables included in the regressions. In column (1) we measure firms’ tax burden on corporate payouts (Tax) as the personal income tax rate on dividends (Dividend Tax). Column (2) uses the country-weighted effective tax rate (Effective Tax C), and column (3) employs the country-weighted average tax rate (Average Tax C). Coefficient estimates are based on baseline specifications with country-fixed effects and year-fixed effects. Standard errors (shown in parentheses) are heteroskedasticity-robust and clustered by country-years. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Dividend Tax Rate Country-Weighted Average Tax Rate Country-Weighted Average Tax Rate (1) (2) (3) Tax -0.0001*** -0.0002*** -0.0002*** (0.0000) (0.0001) (0.0001) Cash Flow -0.0088*** -0.0089*** -0.0088*** (0.0031) (0.0031) (0.0031) Stock Price Appreciation 0.0112*** 0.0112*** 0.0112*** (0.0009) (0.0009) (0.0009) Sales Growth 0.0048*** 0.0047*** 0.0047*** (0.0006) (0.0006) (0.0006) Leverage 0.0085*** 0.0085*** 0.0085*** (0.0017) (0.0017) (0.0017) Size 0.0073*** 0.0072*** 0.0072*** (0.0025) (0.0025) (0.0025) Q 0.0006*** 0.0006*** 0.0006*** (0.0001) (0.0001) (0.0001) Year FE Yes Yes Yes Firm FE Yes Yes Yes Observations 33,280 33,280 33,280 R-squared 0.3819 0.3815 0.3819 41 Table 10 Old and New View Firms and the Link between Payout Taxes and Cash Flow Table 11 Corporate Governance and the Link between Payout Taxes and Cash Flow This table presents coefficient estimates for Cash Flow*Tax interaction using the country-weighted average tax rate (Average Tax C). Firms are sorted into quartiles of insider ownership, and regressions are estimated separately for each quartile. b is the coefficient estimate, (se) is the heteroskedasticity-robust standard error clustered by country-years, tstat is the t-statistic of the significance of coefficient b, and n is the number of observations.***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. This table presents coefficient estimates for Cash Flow*Tax interaction using the country-weighted average tax rate (Average Tax C). 
We define firms as old view firms if predicted net proceeds from the sale/issue of common and preferred stock to lagged assets exceeds 2% (Panel A) or if previous years’ sales of shares divided by lagged book assets exceeded zero (Panel B) or if the firm has low financial constraints (using the KZ Index of financial constraints, with a cutoff of 0.7, see text for detail). We predict issues of common stock by common share free float, share turnover, sales growth, leverage, market capitalization and Tobin's q. b is the coefficient estimate, (se) is the heteroskedasticity-robust standard error clustered by country-years, t-stat is the t-statistic of the significance of coefficient b, and n is the number of observations. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Panel A: Predicted Equity Issues Category b (se) [t-stat] N New view firms; predicted equity issues < 2% 0.1012 (0.0847) [1.19] 21,614 Old view firms; predicted equity issues > 2% 0.2042** (0.0952) [2.14] 13,770 Panel B: Previous year Equity Issues Category B (se) [t-stat] n New view firms; last year equity issues = 0 0.1159 (0.0764) [1.52] 24,734 Old view firms; last year equity issues > 0 0.2588*** (0.0879) [2.94] 32,663 Panel C: KZ Index of Financial Constraints Category b (se) [t-stat] n New view firms; low financial constraints 0.0787 (0.0733) [1.07] 25,004 Old view firms; high financial constraints 0.1991*** (0.0671) [2.97] 25,003 Quartile of insider ownership Range of ownership B (se) [t-stat] n Low ownership 0-0.8% 0.0012 (0.0010) [1.19] 15,338 2 0.8%-5.0% 0.0016 (0.0010) [1.62] 14,942 3 5.0%-19.4% 0.0014 (0.0009) [1.55] 14,011 High ownership 19.4%- 0.0021** (0.0009) [2.46] 12,657 42 Table 12 Firm Investment and Internal Resources under Various Tax Regimes – Control for Corporate Income Tax This table replicates regressions for investment behavior from Table 4, estimated over the 1990-2008 period, but features the corporate tax rate as an additional explanatory variable for investment. Corporate Tax is the statutory tax rate on corporate income. We additionally interact CashFlow, CashFlow*CorporateTax, and CorporateTax with the indicator variable Imp, which is equal to 1 for imputation tax systems and zero otherwise. Baseline regression controls are as in Table 4. Country-year interaction indicator variables and interactions between the corporate tax rate and cash flow are included in all specifications. Standard errors (shown in parentheses) allow for heteroskedasticity and are clustered by country-years. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Dividend Tax Rate Country-Weighted Average Tax Rate Country-Weighted Average Tax Rate Cash Flow*Tax 0.0007* 0.0012** 0.0015*** (0.0004) (0.0006) (0.0005) CashFlow* CorporateTax 0.0016 0.0016 0.0017 (0.0013) (0.0014) (0.0014) CashFlow*Imp* CorporateTax 0.0048** 0.0045** 0.0044** (0.0019) (0.0020) (0.0020) Baseline Controls Yes Yes Yes Firm FE Yes Yes Yes Country-year FE Yes Yes Yes Observations 81,222 81,222 81,222 R-squared 0.5788 0.5788 0.5788 43 Table 13 Impact of Taxation on the Cash Flow Sensitivity of Investment – Robustness to Other Macroeconomic Determinants of Investment This table reports coefficients for the cash flow*tax interaction in the linear regressions for firm investment behavior, estimated over the 1990-2008 period. Regression specifications are as in Table 8 but additional macroeconomic determinants of investment are included as controls. 
Those are Subsidies, Grants, Social Benefits, which include all government transfers on current account to private and public enterprises, and social security benefits in cash and in kind (Panel A); Military Expenditure as a fraction of GDP, which includes all current and capital expenditures on the armed forces (Panel B), Sales and Turnover Tax, which measure taxes on goods and services as a fraction of value added of industry and services (Panel C); and the R&D Expenditure as a fraction of GDP, which includes all expenditures for research and development covering basic research, applied research, and experimental development (Panel D). Standard errors (shown in parentheses) allow for heteroskedasticity and are clustered by country-years. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Dividend Tax Rate Country-Weighted Effective Tax Rate Country-Weighted Average Tax Rate (1) (2) (3) Panel A: Subsidies, Grants, Social Benefits Cash Flow *Tax 0.0012 0.0026*** 0.0018** (0.0007) (0.0007) (0.0007) Observations 41,577 41,577 41,577 R-squared 0.6044 0.6048 0.6045 Panel B: Military Expenditure Cash Flow *Tax 0.0008** 0.0021*** 0.0016*** (0.0004) (0.0006) (0.0005) Observations 81,222 81,222 81,222 R-squared 0.5780 0.5781 0.5781 Panel C: Sales and Turnover Tax Cash Flow *Tax 0.0009 0.0024** 0.0012* (0.0007) (0.0010) (0.0007) Observations 39,608 39,608 39,608 R-squared 0.6019 0.6021 0.6019 Panel D: R&D Expenditure Cash Flow *Tax 0.0004 0.0011* 0.0009* (0.0003) (0.0005) (0.0005) Observations 61,963 61,963 61,963 R-squared 0.6128 0.6128 0.6128 44 Appendix Table A.I Firm Investment and Internal Resources under Various Tax Regimes – Tests without U.S. and Japan This table replicates regressions for investment behavior from Table 4, estimated over the 1990-2008 period, but excludes firms from U.S. and Japan. Baseline regression controls are as in Table 4. Country-year interaction indicator variables are included in all specifications. In columns (2), (4), and (6) we also include the interaction of cash flow with both country and year indicator variables. Standard errors (shown in parentheses) allow for heteroskedasticity and are clustered by country-years. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Dividend Tax Rate Country-Weighted Effective Tax Rate Country-Weighted Average Tax Rate (1) (2) (3) (4) (5) (6) Cash Flow *Tax 0.0017** 0.0044*** 0.0021** 0.0055*** 0.0013* 0.0040*** (0.0007) (0.0010) (0.0009) (0.0011) (0.0007) (0.0010) Baseline Controls Yes Yes Yes Yes Yes Yes Firm FE Yes Yes Yes Yes Yes Yes Country-year FE Yes Yes Yes Yes Yes Yes Year*CashFlow No Yes No Yes No Yes Country*CashFlow No Yes No Yes No Yes Observations 30,436 30,436 30,436 30,436 30,436 30,436 R-squared 0.5214 0.5262 0.5213 0.5262 0.5212 0.5261 Table A.II Firm Investment and Internal Resources under Various Tax Regimes – Different Clusters This table replicates regressions for investment behavior from Table 4, estimated over the 1990-2008 period, but with different clusters. Baseline regression controls are as in Table 4. Country-year interaction indicator variables and interactions between the corporate tax rate and cash flow are included in all specifications. Standard errors (shown in parentheses) allow for heteroskedasticity. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. 
25 Country Clusters 220 Country-Industry Clusters (1) (2) (3) (4) (5) (6) DivTax EffTaxC AvgTaxC DivTax EffTaxC AvgTaxC Cash Flow*Tax 0.0011 0.0027** 0.0021** 0.0011* 0.0027*** 0.0021*** (0.0006) (0.0011) (0.0009) (0.0006) (0.0009) (0.0008) Baseline Controls Yes Yes Yes Yes Yes Yes Firm FE Yes Yes Yes Yes Yes Yes Country-year FE Yes Yes Yes Yes Yes Yes Year*CashFlow Yes Yes Yes Yes Yes Yes Country*CashFlow Yes Yes Yes Yes Yes Yes Observations 81,222 81,222 81,222 81,222 81,222 81,222 R-squared 0.5803 0.5805 0.5804 0.5803 0.5805 0.5804 45 Table A.III Firm Investment and Internal Resources under Various Tax Regimes – Alternative Measures of Investment This table replicates regressions for investment behavior from Table 4, estimated over the 1990-2008 period, but uses growth in plant, property, and equipment from t-1 to t as dependent variable (columns (1) to (3), Panel A). In Column (4) to (6), Panel A assets growth from t-1 to t is the dependent variable. Regressions in columns (1) to (3), Panel B use capital expenditure in year t divided by the end-of-year t-1 plant, property, and equipment (Capex/PPE) as dependent variable. In Column (4) to (6), Panel B, capital expenditure in year t divided by the end-of-year t-1 fixed assets (Capex/FA) is the dependent variable. Baseline regression controls are as in Table 4. Country-year interaction indicator variables and interactions between the corporate tax rate and cash flow are included in all specifications. Standard errors (shown in parentheses) allow for heteroskedasticity and are clustered by country-years. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Panel A: PPE Growth and Assets Growth PPE Growth Assets Growth (1) (2) (3) (4) (5) (6) DivTax EffTaxC AvgTaxC DivTax EffTaxC AvgTaxC Cash Flow*Tax 0.0041* 0.0097*** 0.0081*** 0.0043 0.0118** 0.0097** (0.0022) (0.0036) (0.0030) (0.0033) (0.0052) (0.0044) Baseline Controls Yes Yes Yes Yes Yes Yes Firm FE Yes Yes Yes Yes Yes Yes Country-year FE Yes Yes Yes Yes Yes Yes Year*CashFlow Yes Yes Yes Yes Yes Yes Country*CashFlow Yes Yes Yes Yes Yes Yes Observations 77,626 77,626 77,626 81,222 81,222 81,222 R-squared 0.4392 0.4394 0.4394 0.5501 0.5502 0.5502 Panel B: Capex/PPE and Capex/FA Capex/PPE Capex/FA (1) (2) (3) (4) (5) (6) DivTax EffTaxC AvgTaxC DivTax EffTaxC AvgTaxC Cash Flow*Tax 0.2605** 0.6234*** 0.5105*** 0.0039* 0.0079** 0.0061** (0.1189) (0.1626) (0.1346) (0.0022) (0.0031) (0.0025) Baseline Controls Yes Yes Yes Yes Yes Yes Firm FE Yes Yes Yes Yes Yes Yes Country-year FE Yes Yes Yes Yes Yes Yes Year*CashFlow Yes Yes Yes Yes Yes Yes Country*CashFlow Yes Yes Yes Yes Yes Yes Observations 78,911 78,911 78,911 80,969 80,969 80,969 R-squared 0.4350 0.4351 0.4351 0.4490 0.4491 0.4491 46 Table A.IV Firm Investment and Internal Resources under Various Tax Regimes – Alternative Measures of Internal Resources This table reports linear regression results for firm investment behavior, estimated over the 1990-2008 period. The dependent variable is Investment, defined as capital expenditure in year t divided by the end-of-year t-1 assets. We use another alternative measure of firm’s availability of internal resources for investment. NetIncome is defined as net income over prior year assets. OpIncome is defined as operating income over prior year assets. See Table 3 for a description of the other independent variables included in the regressions. Country-year interaction indicator variables are included in all specifications. 
We additionally include the interaction of NetIncome and OpIncome respectively with both country and year indicator variables. Standard errors (shown in parentheses) allow for heteroskedasticity and are clustered by country-years. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Dividend Tax Rate Country-Weighted Effective Tax Rate Country-Weighted Average Tax Rate (1) (2) (3) (4) (5) (6) NetIncome *Tax 0.0005 0.0012** 0.0010** (0.0003) (0.0006) (0.0005) OpIncome *Tax 0.0005 0.0014** 0.0011** (0.0004) (0.0006) (0.0005) Baseline Controls Yes Yes Yes Yes Yes Yes Firm FE Yes Yes Yes Yes Yes Yes Country-year FE Yes Yes Yes Yes Yes Yes Year* Income Yes Yes Yes Yes Yes Yes Country*Income Yes Yes Yes Yes Yes Yes Observations 81,188 81,120 81,188 81,120 81,188 81,120 R-squared 0.5723 0.5747 0.5723 0.5747 0.5723 0.5747 47 Table A.V Old and New View Firms and the Link between Payout Taxes and Cash Flow – Dividend Tax Rate This table presents coefficient estimates for Cash Flow*Tax interaction using the dividend tax rate (Dividend Tax C). We define firms as old view firms if predicted net proceeds from the sale/issue of common and preferred stock to lagged assets exceeds 2% (Panel A) or if previous years’ sales of shares divided by lagged book assets exceed zero (Panel B) or if the firm has low financial constraints (using the KZ Index of financial constraints, with a cutoff of 0.7, see text for details). We predict issues of common stocks by past issuances, free float, stock turnover, sales growth, leverage, size and Tobin's q. b is the coefficient estimate, (se) is the heteroskedasticity-robust standard error clustered by country-years, tstat is the t-statistic of the significance of coefficient b, and n is the number of observations. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Panel A: Predicted Equity Issues Category b (se) [t-stat] N New view firms; predicted equity issues < 2% 0.0893 (0.0589) [1.52] 21,614 Old view firms; predicted equity issues > 2% 0.1215* (0.0625) [1.94] 13,770 Panel B: Previous year Equity Issues Category B (se) [t-stat] n New view firms; last year equity issues = 0 0.1029 (0.0682) [1.51] 24,734 Old view firms; last year equity issues > 0 0.1138 (0.0700) [1.63] 32,663 Panel C: KZ Index of Financial Constraints Category b (se) [t-stat] n New view firms; low financial constraints 0.0315 (0.0689) [0.46] 25,004 Old view firms; high financial constraints 0.1261** (0.0509) [2.48] 25,003 48 Table A.VI Old and New View Firms and the Link between Payout Taxes and Cash Flow – Country-Weighted Effective Tax Rate Table A.VII Corporate Governance and the Link between Payout Taxes and Cash Flow– Dividend Tax Rate This table presents coefficient estimates for Cash Flow*Tax interaction using the statutory dividend tax rate (Dividend Tax). Firms are sorted into quartiles of insider ownership, and regressions are estimated separately for each quartile. b is the coefficient estimate, (se) is the heteroskedasticity-robust standard error clustered by country-years, t-stat is the tstatistic of the significance of coefficient b, and n is the number of observations.***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. This table presents coefficient estimates for Cash Flow*Tax interaction using the country-weighted effective tax rate (Effective Tax C). 
We define firms as old view firms if predicted net proceeds from the sale/issue of common and preferred stock to lagged assets exceeds 1% (Panel A) or if precious years’ sales of shares divided by lagged book assets exceed zero (Panel B) or if the firm has low financial constraints (using the KZ Index of financial constraints, with a cutoff of 0.7, see text for details). We predict issues of common stocks by past issuances, free float, stock turnover, sales growth, leverage, size and Tobin's q. b is the coefficient estimate, (se) is the heteroskedasticity-robust standard error clustered by country-years, t-stat is the t-statistic of the significance of coefficient b, and n is the number of observations. ***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Panel A: Predicted Equity Issues Category b (se) [t-stat] N New view firms; predicted equity issues < 2% 0.1125 (0.0945) [1.19] 21,614 Old view firms; predicted equity issues > 2% 0.1899* (0.1114) [1.70] 13,770 Panel B: Previous year Equity Issues Category b (se) [t-stat] n New view firms; last year equity issues = 0 0.1698* (0.0976) [1.74] 24,734 Old view firms; last year equity issues > 0 0.2759*** (0.0878) [3.14] 32,663 Panel C: KZ Index of Financial Constraints Category b (se) [t-stat] n New view firms; low financial constraints 0.1188 (0.0799) [1.49] 25,004 Old view firms; high financial constraints 0.2330*** (0.0786) [2.96] 25,003 Quartile of insider ownership Range of ownership B (se) [t-stat] n Low ownership 0-0.8% 0.0009 (0.0009) [1.0296] 15,338 2 0.8%-5.0% 0.0013* (0.0007) [1.7725] 14,942 3 5.0%-19.4% 0.0005 (0.0007) [0.6666] 14,011 High ownership 19.4%- 0.0009 (0.0006) [1.5839] 12,657 49 Table A.VIII Corporate Governance and the Link between Payout Taxes and Cash Flow– Country-Weighted Effective Tax Rate This table presents coefficient estimates for Cash Flow*Tax interaction using the country-weighted effective tax rate (Effective Tax C). Firms are sorted into quartiles of insider ownership, and regressions are estimated separately for each quartile. b is the coefficient estimate, (se) is the heteroskedasticity-robust standard error clustered by country-years, tstat is the t-statistic of the significance of coefficient b, and n is the number of observations.***, **, * indicate statistical significance at 1%, 5%, and 10% level, respectively. Quartile of insider ownership Range of ownership b (se) [t-stat] n Low ownership 0-0.8% 0.0009 (0.0012) [0.78] 15,338 2 0.8%-5.0% -0.0001 (0.0011) [-0.10] 14,942 3 5.0%-19.4% 0.0018* (0.0010) [1.91] 14,011 High ownership 19.4%- 0.0031*** (0.0009) [3.50] 12,657 50 Table A.IX Correlation between Tax Changes and Macroeconomic Factors This table reports correlation coefficients for 444 country-year observations. ?DivTax is the change in the dividend tax rate from t-1 to t. ?AvgTax (?EffTax) represents the change in country-weighted average (effective) payout tax rate. As macroeconomic variables we include GDP Growth, subsidies, cost for startups (Cost Startup), inflation, military expenditures and R&D expenditures by the government. P-values are shown in parentheses. Insignificant correlations (p = 0.1) are reported in italics. 
Columns, in order: ΔDivTax, ΔAvgTax, ΔEffTax, GDP Growth (t), GDP Growth (t-1), Subsidies, Cost Startup, Inflation, Military Expenditures, R&D Expenditures.
ΔDivTax: 1
ΔAvgTax: 0.936 (0.000), 1
ΔEffTax: 0.985 (0.000), 0.970 (0.000), 1
GDP Growth (t): 0.112 (0.018), 0.094 (0.048), 0.117 (0.014), 1
GDP Growth (t-1): 0.153 (0.001), 0.116 (0.015), 0.145 (0.002), 0.516 (0.000), 1
Subsidies: -0.023 (0.685), -0.011 (0.849), -0.016 (0.778), -0.238 (0.000), -0.263 (0.000), 1
Cost Startup: -0.022 (0.785), -0.022 (0.790), -0.043 (0.603), 0.236 (0.004), 0.158 (0.054), 0.088 (0.311), 1
Inflation: 0.019 (0.688), 0.010 (0.826), 0.015 (0.749), -0.108 (0.019), -0.055 (0.243), -0.201 (0.000), 0.164 (0.045), 1
Military Expenditures: -0.024 (0.617), -0.021 (0.667), -0.022 (0.652), -0.029 (0.535), -0.056 (0.235), -0.150 (0.009), 0.086 (0.293), 0.067 (0.143), 1
R&D Expenditures: -0.020 (0.746), -0.003 (0.968), -0.001 (0.987), -0.218 (0.000), -0.165 (0.007), 0.336 (0.000), -0.568 (0.000), -0.515 (0.000), 0.038 (0.541), 1

Exploring the Duality between Product and Organizational Architectures: A Test of the "Mirroring" Hypothesis
Copyright © 2007, 2008, 2011 by Alan MacCormack, John Rusnak, and Carliss Baldwin
Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.

Exploring the Duality between Product and Organizational Architectures: A Test of the "Mirroring" Hypothesis
Alan MacCormack
John Rusnak
Carliss Baldwin
Working Paper 08-039

Exploring the Duality between Product and Organizational Architectures: A Test of the "Mirroring" Hypothesis
Corresponding Author: Alan MacCormack
MIT Sloan School of Management
50 Memorial Drive E52-538
Cambridge MA 02142
alanmac@mit.edu
John Rusnak, Carliss Baldwin
Harvard Business School
Soldiers Field Park
Boston, MA 02163
cbaldwin@hbs.edu; jrusnak@hbs.edu

Abstract
A variety of academic studies argue that a relationship exists between the structure of an organization and the design of the products that this organization produces. Specifically, products tend to "mirror" the architectures of the organizations in which they are developed. This dynamic occurs because the organization's governance structures, problem solving routines and communication patterns constrain the space in which it searches for new solutions. Such a relationship is important, given that product architecture has been shown to be an important predictor of product performance, product variety, process flexibility and even the path of industry evolution. We explore this relationship in the software industry. Our research takes advantage of a natural experiment, in that we observe products that fulfill the same function being developed by very different organizational forms. At one extreme are commercial software firms, in which the organizational participants are tightly-coupled with respect to their goals, structure and behavior. At the other are open source software communities, in which the participants are much more loosely-coupled by comparison. The mirroring hypothesis predicts that these different organizational forms will produce products with distinctly different architectures. Specifically, loosely-coupled organizations will develop more modular designs than tightly-coupled organizations. We test this hypothesis using a sample of matched-pair products. We find strong evidence to support the mirroring hypothesis. In all of the pairs we examine, the product developed by the loosely-coupled organization is significantly more modular than the product from the tightly-coupled organization. We measure modularity by capturing the level of coupling between a product's components. The magnitude of the differences is substantial – up to a factor of eight, in terms of the potential for a design change in one component to propagate to others. Our results have significant managerial implications, in highlighting the impact of organizational design decisions on the technical structure of the artifacts that these organizations subsequently develop.
Keywords: Organizational Design, Product Design, Architecture, Modularity, Open Source Software.

1. Introduction
The architecture of a product can be defined as the scheme by which the functions it performs are allocated to its constituent components (Ulrich, 1995).
Much prior work has highlighted the critical role of architecture in the successful development of a firm's new products, the competitiveness of its product portfolio and the evolution of its organizational capabilities (e.g., Eppinger et al, 1994; Ulrich, 1995; Sanderson and Uzumeri, 1995; Sanchez and Mahoney, 1996; Schilling, 2000; Baldwin and Clark, 2000; MacCormack, 2001). For any given set of functional requirements, however, a number of different architectures might be considered viable. These designs will possess differing performance characteristics in terms of important attributes such as cost, quality, reliability and adaptability. Understanding how architectures are chosen, how they are developed and how they evolve is therefore a critical topic for academic research.

A variety of studies have examined the link between a product's architecture and the characteristics of the organization that develops it (Conway, 1968; Henderson and Clark, 1990; Brusoni and Prencipe, 2001; Sosa et al, 2004; Cataldo et al, 2006). Most examine a single project, focusing on the need to align team communications with the technical interdependencies in a design. In many situations, however, these interdependencies are not predetermined, but are the product of managerial choices. Furthermore, how these choices are made can have a direct bearing on a firm's success. For example, Henderson and Clark (1990) show that leading firms in the photolithography industry stumbled when faced with innovations that required radical changes to the product architecture. They argue that these dynamics occur because designs tend to reflect the organizations that develop them. Given that organizations are slow to change, the designs they produce can quickly become obsolete in a changing marketplace. Empirical evidence of such a relationship, however, has remained elusive.

In this study, we provide evidence to support the hypothesis that a relationship exists between product and organizational designs. In particular, we use a network analysis technique called the Design Structure Matrix (DSM) to compare the design of products developed by different organizational forms. Our analysis takes advantage of the fact that software is an information-based product, meaning that the design comprises a series of instructions (or "source code") that tell a computer what tasks to perform. Given this feature, software products can be processed automatically to identify the dependencies that exist between their component elements (something that cannot be done with physical products). These dependencies, in turn, can be used to characterize a product's architecture, by displaying the information visually and by calculating metrics that capture the overall level of coupling between elements in the system.

We chose to analyze software because of a unique opportunity to examine two distinct organizational forms. Specifically, in recent years there has been a growing interest in open source (or "free") software, which is characterized by: (a) the distribution of a program's source code along with the binary version of the product, 1 and (b) a license that allows a user to make unlimited copies of and modifications to this product (DiBona et al, 1999).

1 Commercial software is distributed in a binary form (i.e., 1's and 0's) that is executed by the computer.
Successful open source software projects tend to be characterized by large numbers of volunteer contributors, who possess diverse goals, belong to different organizations, work in different locations and have no formal authority to govern development activities (Raymond, 2001; von Hippel and von Krogh, 2003). In essence, they are "loosely-coupled" organizational systems (Weick, 1976). This form contrasts with the organizational structures of commercial firms, in which smaller, collocated teams of individuals sharing common goals are dedicated to projects full-time and given formal decision-making authority to govern development. In comparison to open source communities, these organizations are much more "tightly-coupled." The mirroring hypothesis suggests that the architectures of the products developed by these contrasting forms of organization will differ significantly: in particular, open source software products are likely to be more modular than commercial software products. Our research seeks to examine the magnitude and direction of these differences.

Our paper proceeds as follows. In the next section, we describe the motivation for our research and prior work in the field that pertains to understanding the link between product and organizational architectures. We then describe our research design, which involves comparing the level of modularity of different software products by analyzing the coupling between their component elements. Next, we discuss how we construct a sample of matched product pairs, each consisting of one open source and one commercially developed product. Finally, we discuss the results of our analysis and highlight the implications for practitioners and the academy.

2. Research Motivation

The motivation for this research comes from work in organization theory, where it has long been recognized that organizations should be designed to reflect the nature of the tasks that they perform (Lawrence and Lorsch, 1967; Burns and Stalker, 1961). In a similar fashion, transaction cost economics predicts that different organizational forms are required to solve the contractual challenges associated with tasks that possess different levels of interdependency and uncertainty (Williamson, 1985; Teece, 1986). To the degree that different product architectures require different tasks to be performed, it is natural to assume that organizations and architectures must be similarly aligned. To date, however, there has been little systematic empirical study of this relationship.

Research seeking to examine this topic has followed one of two approaches. The first explores the need to match patterns of communication within a development project to the interdependencies that exist between different parts of a product's design. For example, Sosa et al (2004) examined a single jet engine project, and found a strong tendency for communications to be aligned with key design interfaces. The likelihood of "misalignment" was shown to be greater when dependencies spanned organizational and system boundaries. Similarly, Cataldo et al (2006) explored the impact of misalignment in a single software development project, and found tasks were completed more rapidly when the patterns of communication between team members were congruent with the patterns of interdependency between components.
Finally, Gokpinar et al (2006) explored the impact of misalignment in a single automotive development project, and found subsystems of higher quality were associated with teams that had aligned their communications to the technical interfaces with other subsystems.

The studies above begin with the premise that team communication must be aligned to the technical interdependencies between components in a system, the latter being determined by the system's functionality. A second stream of work, however, adopts the reverse perspective. It assumes that an organization's structure is fixed in the short term, and explores the impact of this structure on the technical designs that emerge. This idea was first articulated by Conway, who stated, "any organization that designs a system will inevitably produce a design whose structure is a copy of the organization's communication structure" (Conway, 1968). The dynamics are best illustrated in Henderson and Clark's study of the photolithography industry, in which they show that market leadership changed hands each time a new generation of equipment was introduced (Henderson and Clark, 1990). These observations are traced to the successive failure of leading firms to respond effectively to architectural innovations, which involve significant changes in the way that components are linked together. Such innovations challenge existing firms, given that they destroy the usefulness of the architectural knowledge embedded in their organizing structures and information-processing routines, which tend to reflect the current "Dominant Design" (Utterback, 1996). When this design is no longer optimal, established firms find it difficult to adapt.

The contrast between the two perspectives can be clarified by considering the dynamics that occur when two distinct organizational forms develop the same product. Assuming the product's functional requirements are identical, the first stream of research would assume that the patterns of communication between participants in each organization should be similar, driven by the nature of the tasks to be performed. In contrast, the second stream of research would predict that the resulting designs would be quite different, each reflecting the architecture of the organization from which it came. We define the latter phenomenon as "mirroring." A test of the mirroring hypothesis can be conducted by comparing the designs of "matched-pair" products – products that fulfill the same function, but that have been developed by different organizational forms. To conduct such a test, we must characterize these different forms, and establish a measure by which to compare the designs of products that they produce.

2.1 Organizational Design and "Loosely-Coupled" Systems

Organizations are complex systems comprising individuals or groups that coordinate actions in pursuit of common goals (March and Simon, 1958). Organization theory describes how the differing preferences, information, knowledge and skills of these organizational actors are integrated to achieve collective action. Early "classical" approaches to organization theory emphasized formal structure, authority, control, and hierarchy (i.e., the division of labor and specialization of work) as distinguishing features of organizations, building upon work in the fields of scientific management, bureaucracy and administrative theory (Taylor, 1911; Fayol, 1949; Weber, 1947; Simon, 1976).
Later scholars, however, argued that organizations are best analyzed as social systems, given that they comprise actors with diverse motives and values that do not always behave in a rational economic manner (Mayo, 1945; McGregor, 1960). As this perspective gained popularity, it was extended to include the link between an organization and the environment in which it operates. With this lens, organizations are seen as open systems, comprising "interdependent activities linking shifting coalitions of participants" (Scott, 1981). A key assumption is that organizations can vary significantly in their design; the optimal design for a specific mission is established by assessing the fit between an organization and the nature of the tasks it must accomplish (Lawrence and Lorsch, 1967).

Weick was the first to introduce the concept that organizations can be characterized as complex systems, comprising many elements with different levels of coupling between them (Weick, 1976; Orton and Weick, 1990). Organizational coupling can be analyzed along a variety of dimensions; however, the most important of these fall into three broad categories: goals, structure and behavior (Orton and Weick, 1990). Organizational structure, in turn, can be further decomposed to capture important differences in terms of membership, authority and location. All these dimensions represent a continuum along which organizations vary in the level of coupling between participants. When aligned, they generate two distinct organizational forms, representing opposite ends of this continuum (see Table 1). While prior work had assumed that the elements in organizational systems were coupled through dense, tight linkages, Weick argued that some organizations (e.g., educational establishments) were only loosely-coupled. Although real-world organizations typically fall between these "canonical types," they remain useful constructs for characterizing the extent to which organizations resemble one extreme or the other (Brusoni et al, 2001).

Table 1: Characterizing Different Organizational Forms

              Tightly-Coupled             Loosely-Coupled
Goals         Shared, Explicit            Diverse, Implicit
Membership    Closed, Contracted          Open, Voluntary
Authority     Formal, Hierarchy           Informal, Meritocracy
Location      Centralized, Collocated     Decentralized, Distributed
Behavior      Planned, Coordinated        Emergent, Independent

The software industry represents an ideal context within which to study these different organizational forms, given the wide variations in structure observed in this industry. At one extreme, we observe commercial software firms, which employ smaller, dedicated (i.e., full-time), collocated development teams to bring new products to the marketplace. These teams share explicit goals, have a closed membership structure, and rely on formal authority to govern their activities. At the other, we observe open source (or "free" software) communities, which rely on the contributions of large numbers of volunteer developers, who work in different organizations and in different locations (von Hippel and von Krogh, 2003). The participants in these communities possess diverse goals and have no formal authority to govern development, instead relying on informal relationships and cultural norms (DiBona et al, 1999). These forms of organization closely parallel the canonical types described above, with respect to the level of coupling between participants. They provide for a rich natural experiment, in that we observe products that perform the same function being developed in each.
2.2 Product Design, Architecture and Modularity

Modularity is a concept that helps us to characterize different designs. It refers to the way that a product's architecture is decomposed into different parts or modules. While there are many definitions of modularity, authors tend to agree on the concepts that lie at its heart: the notion of interdependence within modules and independence between modules (Ulrich, 1995). The latter concept is often called "loose-coupling." Modular designs are loosely-coupled in that changes made to one module have little impact on the others. Just as there are degrees of coupling, there are degrees of modularity.

The costs and benefits of modularity have been discussed in a stream of research that has sought to examine its impact on the management of complexity (Simon, 1962), product line architecture (Sanderson and Uzumeri, 1995), manufacturing (Ulrich, 1995), process design (MacCormack, 2001), process improvement (Spear and Bowen, 1999) and industry evolution (Baldwin and Clark, 2000). Despite the appeal of this work, however, few studies have used robust empirical data to examine the relationship between measures of modularity, the organizational factors assumed to influence this property or the outcomes that it is thought to impact (Schilling, 2000; Fleming and Sorenson, 2004). Most studies are conceptual or descriptive in nature.

Studies that attempt to measure modularity typically focus on capturing the level of coupling that exists between different parts of a design. In this respect, the most promising technique comes from the field of engineering, in the form of the Design Structure Matrix (DSM). A DSM highlights the inherent structure of a design by examining the dependencies that exist between its constituent elements in a square matrix (Steward, 1981; Eppinger et al, 1994; Sosa et al, 2003). These elements can represent design tasks, design parameters or the actual components. Metrics that capture the degree of coupling between elements have been calculated from a DSM and used to compare different architectures (Sosa et al, 2007). DSMs have also been used to explore the degree of alignment between task dependencies and project team communications (Sosa et al, 2004). Recent work extends this methodology to show how design dependencies can be automatically extracted from software code and used to understand architectural differences (MacCormack et al, 2006). In this paper, we use this method to compare designs that come from different forms of development organization.

2.3 Software Design

The measurement of modularity has gained most traction in the software industry, given that the information-based nature of the product lends itself to analytical techniques that are not possible with physical products. The formal study of software modularity began with Parnas (1972), who proposed the concept of information hiding as a mechanism for dividing code into modular units. Subsequent authors built on this work, proposing metrics to capture the level of "coupling" between modules and "cohesion" within modules (e.g., Selby and Basili, 1988; Dhama, 1995). This work complemented studies that sought to measure the complexity of software, to examine its effect on development productivity and quality (e.g., McCabe, 1976; Halstead, 1976). Whereas measures of software complexity focus on characterizing the number and nature of the elements in a design, measures of modularity focus on the patterns of dependencies between these elements.
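To make that distinction concrete, the following minimal sketch (our illustration, not the authors' code; the file names and the density summary are assumptions) counts the elements of a small design and, separately, summarizes the pattern of dependencies between them, which is what DSM-based modularity measures focus on.

    # Illustrative only: a design can have many elements (one notion of complexity)
    # yet few dependencies between them (high modularity). File names are made up.

    dependencies = [            # directed (user, used) pairs between source files
        ("boot.c", "kernel.c"),
        ("kernel.c", "sched.c"),
        ("kernel.c", "memory.c"),
        ("sched.c", "memory.c"),
    ]
    files = sorted({name for pair in dependencies for name in pair})

    num_elements = len(files)                       # "complexity" lens: how many parts
    possible = num_elements * (num_elements - 1)    # ordered pairs of distinct files
    density = len(dependencies) / possible          # "modularity" lens: dependency pattern

    print(f"{num_elements} elements, {len(dependencies)} dependencies, density {density:.2f}")

Two designs with the same number of elements can therefore differ sharply on the dependency-based measure, which is the property the studies below attempt to capture.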
Software can be complex (i.e., have many parts) and modular (i.e., have few dependencies between these parts). In prior work, this distinction is not always clear. 2

2 In some fields, complexity is defined to include inter-element interactions (Rivkin and Siggelkow, 2007).

Efforts to measure software modularity generally follow one of two approaches. The first focuses on identifying specific types of dependency between components in a system, for example, the number of non-local branching statements (Banker et al, 1993), global variables (Schach et al, 2002), or function calls (Banker and Slaughter, 2000; Rusovan et al, 2005). The second infers the presence of dependencies by assessing which components tend to be changed concurrently. For example, Eick et al (1999) show that code decays over time, by looking at the number of files that must be altered to complete a modification request, while Cataldo et al (2006) show that modifications involving files that tend to change along with others take longer to complete. While the inference approach avoids the need to specify the type of dependency being examined, it requires access to maintenance data that is not always captured consistently across projects. In multi-project research, dependency extraction from source code is therefore preferred.

With the rise in popularity of open source software, interest in the topic of modularity has received further stimulus. Some authors argue that open source software is inherently more modular than commercial software (O'Reilly, 1999; Raymond, 2001). Others have suggested that modularity is a required property for this method of development to succeed (Torvalds, as quoted in DiBona, 1999). Empirical work to date, however, yields mixed results. Some studies criticize the number of dependencies between critical components in systems such as Linux (Schach et al, 2002; Rusovan et al, 2005). Others provide quantitative and qualitative data that open source products are easier to modify (Mockus et al, 2002; Paulsen et al, 2004) or have fewer interdependencies between components (MacCormack et al, 2006). None of these studies, however, conducts a rigorous apples-to-apples comparison between open source and commercially developed software; the results may therefore be driven by idiosyncrasies of the systems examined.

In this paper, we explore whether organizations with distinctly different forms – as captured by the level of coupling between participants – develop products with distinctly different architectures – as captured by the level of coupling between components. Specifically, we conduct a test of the "mirroring" hypothesis, which can be stated as follows: Loosely-coupled organizations will tend to develop products with more modular architectures than tightly-coupled organizations. We use a matched-pair design to control for differences in architecture that are related to differences in product function. We build upon recent work that highlights how DSMs can be used to visualize and measure software architecture (Lopes and Bajracharya, 2005; MacCormack et al, 2006).

3. Research Methods 3

There are two choices to make when applying DSMs to a software product: the unit of analysis and the type of dependency.
With regard to the former, there are several levels at which a DSM can be built: the directory level, which corresponds to a group of source files that pertain to a specific subsystem; the source file level, which corresponds to a collection of related processes and functions; and the function level, which corresponds to a set of instructions that perform a specific task. We analyze designs at the source file level for a number of reasons. First, source files tend to contain functions with a similar focus. Second, tasks and responsibilities are allocated to programmers at the source file level, allowing them to maintain control over all the functions that perform related tasks. Third, software development tools use the source file as the unit of analysis for version control. And finally, prior work on design uses the source file as the primary unit of analysis (e.g., Eick et al, 1999; Rusovan et al, 2005; Cataldo et al, 2006). 4

There are many types of dependency between source files in a software product. 5 We focus on one important dependency type – the "Function Call" – used in prior work on design structure (Banker and Slaughter, 2000; Rusovan et al, 2005). A Function Call is an instruction that requests a specific task to be executed. The function called may or may not be located within the source file originating the request. When it is not, this creates a dependency between two source files, in a specific direction. For example, if FunctionA in SourceFile1 calls FunctionB in SourceFile2, then we note that SourceFile1 depends upon (or "uses") SourceFile2. This dependency is marked in location (1, 2) in the DSM. Note this does not imply that SourceFile2 depends upon SourceFile1; the dependency is not symmetric unless SourceFile2 also calls a function in SourceFile1.

To capture function calls, we input a product's source code into a tool called a "Call Graph Extractor" (Murphy et al, 1998). This tool is used to obtain a better understanding of system structure and interactions between parts of the design. 6 Rather than develop our own extractor, we tested several commercial products that could process source code written in both procedural and object oriented languages (e.g., C and C++), capture indirect calls (dependencies that flow through intermediate files), run in an automated fashion and output data in a format that could be input to a DSM. A product called Understand C++ 7 was selected given it best met all these criteria.

The DSM of a software product is displayed using the Architectural View. This groups each source file into a series of nested clusters defined by the directory structure, with boxes drawn around each successive layer in the hierarchy. The result is a map of dependencies, organized by the programmer's perception of the design. To illustrate, the Directory Structure and Architectural View for Linux v0.01 are shown in Figure 1. Each "dot" represents a dependency between two particular components (i.e., source files).
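As a rough illustration of the procedure just described, the sketch below builds a source-file DSM from a list of (caller file, callee file) pairs of the kind a call graph extractor produces. The file names and call records are hypothetical, and real extractors such as the tool mentioned above also capture indirect calls, which this sketch ignores.

    # Hypothetical call records: (file containing the calling function,
    # file containing the called function). In practice these would come
    # from a call graph extractor run over the product's source code.
    calls = [
        ("SourceFile1.c", "SourceFile2.c"),  # e.g., FunctionA calls FunctionB
        ("SourceFile1.c", "SourceFile3.c"),
        ("SourceFile3.c", "SourceFile2.c"),
    ]

    files = sorted({name for pair in calls for name in pair})
    index = {name: i for i, name in enumerate(files)}
    n = len(files)

    # dsm[i][j] = 1 means file i depends upon ("uses") file j; the matrix is
    # directional, so dsm[i][j] = 1 does not imply dsm[j][i] = 1.
    dsm = [[0] * n for _ in range(n)]
    for caller, callee in calls:
        if caller != callee:             # same-file calls create no inter-file dependency
            dsm[index[caller]][index[callee]] = 1

    for name, row in zip(files, dsm):
        print(f"{name:15s} {row}")

Coupling metrics of the kind discussed above can then be computed from this matrix, for example by counting the share of file pairs that are directly or indirectly connected.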
3 The methods we describe here build on prior work in this field (see MacCormack et al, 2006; 2007).
4 Metaphorically, source files are akin to the physical components in a product, whereas functions are akin to the nuts and bolts that comprise these components.
5 Several authors have developed comprehensive categorizations of dependency types (e.g., Shaw and Garlan, 1996; Dellarocas, 1996). Our work focuses on one important type of dependency.
6 Function calls can be extracted statically (from the source code) or dynamically (when the code is run). We use a static call extractor because it uses source code as input, does not rely on program state (i.e., what the system is doing at a point in time) and captures the system structure from the designer's perspective.
7 Understand C++ is distributed by Scientific Toolworks, Inc.

Reinventing Savings Bonds
Harvard Business School Working Paper Series, No. 06-017
Copyright © 2005

Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.

Reinventing Savings Bonds

Peter Tufano, Harvard Business School and NBER and D2D Fund
Daniel Schneider, Harvard Business School

Reinventing Savings Bonds*

Savings Bonds have always served multiple objectives: funding the U.S. government, democratizing national financing, and enabling families to save. Increasingly, this last goal has been ignored. A series of efficiency measures introduced in 2003 make these bonds less attractive and less accessible to savers. Public policy should go in the opposite direction: U.S. savings bonds should be reinvigorated to help low and moderate income (LMI) families build assets. More and more, these families' saving needs are ignored by private sector asset managers and marketers. With a few relatively modest changes, the Savings Bond program can be reinvented to help these families save, while still increasing the efficiency of the program as a debt management device. Savings bonds provide market-rate returns, with no transaction costs, and are a useful commitment savings device. Our proposed changes include (a) allowing Federal taxpayers to purchase bonds with tax refunds; (b) enabling LMI families to redeem their bonds before twelve months; (c) leveraging private sector organizations to market savings bonds; and (d) contemplating a role for savings bonds in the life cycles of LMI families.

Peter Tufano
Harvard Business School
Soldiers Field
Boston, MA 02163
ptufano@hbs.edu

Daniel Schneider
Harvard Business School
Soldiers Field
Boston, MA 02163
dschneider@hbs.edu

* We would like to thank officials at the Bureau of Public Debt (BPD) for their assistance locating information on the Savings Bonds program. We would also like to thank officials from BPD and the Department of the Treasury, Fred Goldberg, Peter Orszag, Anne Stuhldreher, Bernie Wilson, Lawrence Summers, Jim Poterba and participants at the New America Foundation/Congressional Savings and Ownership Caucus and the Consumer Federation of America/America Saves programs for useful comments and discussions. Financial support for this research project was provided by the Division of Research of the Harvard Business School. Any opinions expressed are those of the authors and not those of any of the organizations above. For the most up-to-date version of this paper, please visit http://www.people.hbs.edu/ptufano.

I. Introduction

In a world in which financial products are largely sold and not bought, savings bonds are a quaint oddity. First offered as Liberty Bonds to fund World War I and then as Baby Bonds 70 years ago, savings bonds seem out of place in today's financial world. While depository institutions and employers nominally market these bonds, they have few incentives to actively sell them. As financial institutions move to serve up-market clients with higher profit margin products, savings bonds receive little if any marketing or sales attention. Even the Treasury seems uninterested in marketing them.
In 2003, the Treasury closed down the 41 regional marketing offices for savings bonds and zeroed out the budget for the marketing office, staff, and ad buys, from $22.4 million to $0 (Block (2003)). No one seems to have much enthusiasm for selling savings bonds.

Maybe this lack of interest is sensible. After all, there are many financial institutions selling a host of financial products in a very competitive financial environment. The very name "Savings Bonds" is out of touch; it is unfashionable to think of ourselves as "savers." We are now "investors." We buy investment products and hold our "near cash" in depository institutions or money market mutual funds. Saving is simply passé, and American families' savings rate has dipped to its lowest point in recent history.

Even if we put aside the macro-economic debate on the national savings rate, there is little question that lower income Americans would be well served with greater savings. Families need enough savings to withstand temporary shocks to income, but a shockingly large fraction don't even have enough savings to sustain a few months of living expenses (see Table I). Financial planners often advise that families have sufficient liquid assets to replace six months of household income in the event of an emergency. Yet only 22% of households, and only 19% of LMI households, meet this standard. Fewer than half (47%) of US households, and only 29% of LMI households, have sufficient liquid assets to meet their own stated emergency savings goals. Families do somewhat better when financial assets in retirement accounts are included, but even then more than two-thirds of households do not have sufficient savings to replace six months of income.

And while the financial landscape may be generally competitive, there are low-profit pockets where competition cannot be counted upon to solve all of our problems. While it may be profitable to sell low income families credit cards, sub-prime loans, payday loans or check cashing services, there is no rush to offer them savings products. A not insubstantial number of them may have prior credit records that lead depository institutions to bar them from opening even savings accounts. Many do not have the requisite minimum balances of $2,500 or $3,000 that most money market mutual funds demand. Many of them are trying to build assets, but their risk profile cannot handle the potential principal loss of equities or equity funds. Many use alternative financial services, or check cashing outlets, as their primary financial institution, but these firms do not offer asset building products.

For these families, old-fashioned U.S. savings bonds offer an investment without any risk of principal loss due to credit or interest rate moves, while providing a competitive rate of return with no fees. Bonds can be bought in small denominations, rather than requiring the saver to wait until he or she has amassed enough money to meet some financial institution's minimum investment requirements. And finally, bonds have an "out-of-sight and out-of-mind" quality, which fits well with the mental accounting consumers use to artificially separate spending from saving behavior.

Despite all of these positives, we feel the savings bond program needs to be reinvigorated to enhance its role in supporting family saving. In the current environment, the burden is squarely on these families to find and buy the bonds. Financial institutions and employers have little or no incentive to encourage savers to buy bonds.
The government has eliminated its bond marketing program. Finally, by pushing the minimum holding period up to twelve months, the program is discouraging low-income families, who might face a financial emergency, from investing in bonds. We feel these problems can and should be solved, so that savings bonds can once again become a strong part of families' savings portfolios.

At one point in American history, savings bonds were an important tool for families to build assets to get ahead. They were "designed for the small investor – that he may be encouraged to save for the future and receive a fair return on his money" (US Department of the Treasury (1935)). While times have changed, this function of savings bonds may be even more important now. Our set of recommendations is designed to make savings bonds a viable asset building device for low to moderate income Americans, as well as reduce the cost to sell them to families.

The proposal reflects an important aspect of financial innovation. Often financial innovations from a prior generation are reinvented by a new generation. The convertible preferred stock that venture capitalists use to finance high tech firms was used to finance railroads in the nineteenth century. Financiers of these railroads invented income bonds, which have been refined to create trust preferred securities, a popular financing vehicle. The "derivatives revolution" began centuries ago, when options were bought and sold on the Amsterdam Stock Exchange. Wise students of financial innovation realize that old products can often be re-invented to solve new problems. Here, we lay out a case for why savings bonds, an invention of the 20th century, can and should be re-imagined to help millions of Americans build assets now.

In section 2, we briefly describe why LMI families might not be fully served by private sector savings opportunities. In section 3, we briefly recount the history of savings bonds and fast forward to discuss their role in the current financial services world. In section 4, we discuss our proposal to reinvent savings bonds as a legitimate device for asset building for American families. An important part of our proposal involves the tax system, but our ideas do not involve any new tax provisions or incentives. Rather, we make proposals about how changes to the "plumbing" of the tax system can help revitalize the savings bond program and support family savings.

2. An Unusual Problem: Nobody Wants My Money! 1

In our modern world, where many of us are bombarded by financial service firms seeking our business, why would we still need or want a seventy year old product like savings bonds? To answer this question, we have to understand the financial services landscape of low and moderate income Americans, which for our discussion includes the 41 million American households who earn under $30,000 a year, or the 24 million households with total financial assets under $500, or the more than 18 million US households making less than $30,000 a year and holding less than $500 in financial assets (Survey of Consumer Finances (2001) and Current Population Survey (2002)). In particular, we need to understand asset accumulation strategies for these families, their savings goals, and their risk tolerances. But we also need to understand the motives of financial service firms offering asset-building products. In generic terms, asset gatherers and managers must master a simple profit equation: revenues must exceed costs.
Costs include customer acquisition, customer servicing and the expense of producing the investment product. Customer acquisition and servicing costs are not necessarily any less for a small account than for a large one. Indeed, if the smaller accounts are sufficiently "different," they can be quite costly: for example, if held by people who speak different languages, require more explanations, or are not well understood by the financial institution. The costs of producing the product would include the investment management expenses for a mutual fund or the costs of running a lending operation for a bank. On the revenue side, the asset manager could charge the investor a fixed fee for its services. However, industry practice is to charge a fee that is a fraction of assets under management (as in the case of a mutual fund, which charges an expense ratio) or to give the investor only a fraction of the investment return (in the classic "spread banking" practiced by depository institutions). The optics of the financial service business are to take the fee out of the return earned by the investor as an "implicit fee," to avoid the sticker shock of having to charge an explicit fee for services. Financial services firms can also earn revenues if they can subsequently sell customers other high margin products and services, the so-called "cross-sell."

At the risk of oversimplifying, our asset manager can earn a profit on an account if:

Size of Account x (Implicit Fee in %) – Marginal Costs to Serve > 0

Because implicit fees are netted from the gross investment returns, they are limited by the size of these returns (because otherwise investors would suffer certain principal loss). If an investor is risk averse and chooses to invest in low-risk/low-return products, fees are constrained by the size of the investment return. For example, when money market investments are yielding less than 100 bp, it is infeasible for a money market mutual fund to charge expenses above 100 bp. Depository institutions like banks or credit unions face a less severe problem, as they can invest in high risk projects (loans) while delivering low risk products to investors by virtue of government supplied deposit insurance.

Given even relatively low fixed costs per client and implicit fees that must come out of revenue, the importance of having large accounts (or customers who can purchase a wide range of profitable services) is paramount. At a minimum, suppose that statements, customer service costs, regulatory costs, and other "sundries" cost $30 per account per year. A mutual fund that charges 150 bp in expense ratios would need a minimum account size of $30/.015 = $2000 to just break even. A bank that earns a net interest margin between lending and borrowing activities of 380 bp would need a minimum account size of $30/.038 = $790 to avoid a loss (Carlson and Perli (2004)). Acquisition costs make having large and sticky accounts even more necessary. The cost per new account appears to vary considerably across companies, but is substantial. The industry-wide average for traditional banks is estimated at $200 per account (Stone (2004)). Individual firms have reported lower figures. TD Waterhouse spent $109 per new account in the fourth quarter of 2001 (TD Waterhouse (2001)). T Rowe Price spent an estimated $195 for each account it acquired in 2003. 2 H&R Block, the largest retail tax preparation company in the United States, had acquisition costs of $130 per client (Tufano and Schneider (2004)).
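The break-even arithmetic quoted above follows directly from the profit condition. The short sketch below (ours; the helper name is arbitrary) simply rearranges that condition to solve for the minimum account size and reproduces the two figures in the text.

    # Profit condition: account_size * implicit_fee - cost_to_serve > 0
    # Rearranged:       account_size > cost_to_serve / implicit_fee

    def breakeven_account_size(cost_to_serve, implicit_fee):
        """Smallest account size at which annual fee revenue covers the cost to serve."""
        return cost_to_serve / implicit_fee

    annual_cost = 30.0                                   # statements, service, regulatory "sundries"
    print(breakeven_account_size(annual_cost, 0.015))    # mutual fund charging 150 bp -> 2000.0
    print(breakeven_account_size(annual_cost, 0.038))    # bank with a 380 bp net interest margin -> ~789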
One can justify such an acquisition outlay only if the account is large, will purchase other follow-on services, or will be in place for a long time.

Against this backdrop, an LMI family that seeks to build up its financial assets faces an uphill battle. Given the risks that these families face and the thin margin of financial error they perceive, they seem to prefer low risk investments, which have more constrained fee opportunities for financial service vendors. By definition, their account balances are likely to be small. With respect to cross-sell, financial institutions might be leery of selling LMI families profitable products that might expose the financial institutions to credit risk. Finally, what constitute inconveniences for wealthier families (e.g., a car breakdown or a water heater failure) can constitute emergencies for LMI families that deplete their holdings, leading to less sticky assets.

These assertions about LMI financial behavior are borne out with scattered data. Table II and Table III report various statistics about U.S. financial services activity by families sorted by income. The preference of LMI families for low-risk products is corroborated by their revealed investment patterns, as shown by their substantially lower ownership rates of equity products. Low income families were less likely to hold every type of financial asset than high income families. However, the ownership rate for transaction accounts among families in the lowest income quintile was 72% of that of families in the highest income decile, while the ownership rate among low-income families for stocks was only 6% and for mutual funds just 7% of the rate for high-income families. The smaller size of financial holdings by the bottom income quintile of the population is quite obvious. Even if they held all of their financial assets in one institution, the bottom quintile would have a median balance of only $2,000 (after excluding the 25.2% with no financial assets of any kind).

The likelihood that LMI family savings will be drawn down for emergency purposes has been documented by Schreiner, Clancy, and Sherraden (2002) in their national study of Individual Development Accounts (matched savings accounts intended to encourage asset building through savings for homeownership, small business development, and education). They find that 64% of participants made a withdrawal to use funds for a non-asset building purpose, presumably one pressing enough that it was worth foregoing matching funds. In our own work (Beverly, Schneider, and Tufano (2004)), we surveyed a selected set of LMI families about their savings goals. Savings for "emergencies" was the second most frequent savings goal (behind unspecified savings), while long horizon saving for retirement was a goal for only 5% of households. A survey of the 15,000 participants in the America Saves program found similar results, with 40% of respondents listing emergency savings as their primary savings goal (American Saver (2004)).

1 Portions of this section are adapted from an earlier paper, Schneider and Tufano, 2004, "New Savings from Old Innovations: Asset Building for the Less Affluent," New York Federal Reserve Bank, Community Development Finance Research Conference.
2 The cost per new account estimate is based on a calculation using data on the average size of T Rowe Price accounts, the amount of new assets in 2003, and annual marketing expenses. Data is drawn from T Rowe Price (2003), Sobhani and Shteyman (2003), and Hayashi (2004).
The lower creditworthiness of LMI families is demonstrated by the lower credit scores of LMI individuals and the larger shares of LMI families reporting having past due bills. 3

3 Bostic, Calem, and Wachter (2004) use data from the Federal Reserve and the Survey of Consumer Finances (SCF) to show that 39% of those in the lowest income quintile were credit constrained by their credit scores (score of less than 660), compared with only 2.8% of families in the top quintile and only 10% of families in the fourth quintile. A report from Global Insight (2003), also using data from the SCF, finds that families in the bottom two quintiles of income were more than three times as likely to have bills more than 60 days past due than families in the top two quintiles of income.

Given the economics of LMI families and of most financial services firms, a curious equilibrium has emerged. With a few exceptions, firms that gather and manage assets are simply not very interested in serving LMI families. While their "money is as green as anyone else's," the customers are thought too expensive to serve, their profit potential too small, and, as a result, the effort better expended elsewhere. While firms don't make public statements to this effect, the evidence is there to be seen.

• Among the top ten mutual funds in the country, eight impose minimum balance restrictions upwards of $250. Among the top 500 mutual funds, only 11% had minimum initial purchase requirements of less than $100 (Morningstar (2004)). See Table IV.

• Banks routinely set minimum balance requirements or charge fees on low balances, in effect discouraging smaller savers. Nationally, minimum opening balance requirements for statement savings accounts averaged $97, and required a balance of at least $158 to avoid average yearly fees of $26. These fees were equal to more than a quarter of the minimum opening balance, a management fee of 27%. Fees were higher in the ten largest Metropolitan Statistical Areas (MSAs), with average minimum opening requirements of $179 and an average minimum balance to avoid fees of $268 (Board of Governors of the Federal Reserve (2003)). See Table V. While these numbers only reflect minimum opening balances, what we cannot observe is the level of marketing activity (or lack thereof) directed to raising savings from the poor.

• Banks routinely use credit scoring systems, like ChexSystems, to bar families from becoming customers, even from opening savings accounts, which pose minimal, if any, credit risks. Over 90% of bank branches in the US use the system, which enables banks to screen prospective clients for problems with prior bank accounts and to report current clients who overdraw accounts or engage in fraud (Quinn (2001)). Approximately seven million people have ChexSystems records (Barr (2004)). While ChexSystems was apparently designed to prevent banks from making losses on checking accounts, we understand that it is not unusual for banks to use it to deny customers any accounts, including savings accounts. Conversations with a leading US bank suggest that this policy arises from the inability of bank operational processes to restrict a customer's access to just a single product. In many banks, if a client with a ChexSystems record were allowed to open a savings account, she could easily return the next day and open a checking account.
• Banks and financial services firms have increasingly been going "up market" and targeting the consumer segment known as the "mass affluent," generally those with over $100,000 in investible assets. Wells Fargo's director of investment consulting noted that "the mass affluent are very important to Wells Fargo" (Quittner (2003)), and American Express Financial Advisors' chief marketing officer stated that "Mass affluent clients have special investment needs… Platinum and Gold Financial Services (AEFA products) were designed with them in mind" ("Correcting and Replacing" (2004)). News reports have detailed similar sentiments at Bank of America, Citi Group, Merrill Lynch, Morgan Stanley, JP Morgan, Charles Schwab, Prudential, and American Express.

• Between 1975 and 1995, the number of bank branches in LMI neighborhoods declined by 21%. While declining population might explain some of that reduction (per capita offices declined by only 6.4%), persistently low-income areas, those that were poor over the period 1975-1995, experienced the most significant decline, losing 28% of offices, or a loss of one office for every 10,000 residents. Low income areas with relatively high proportions of owner-occupied housing did not experience loss of bank branches, but had very few to begin with (Avery, Bostic, Calem, and Caner (1997)).

• Even most credit unions pay little attention to LMI families, focusing instead on better compensated occupational groups. While this tactic may be profitable, credit unions enjoy tax free status by virtue of provisions in the Federal Credit Union Act, the text of which mandates that credit unions provide credit "to people of small means" (Federal Credit Union Act (1989)). Given their legislative background, it is interesting that the median income of credit union members is approximately $10,000 higher than the median income of all Americans (Survey of Consumer Finances (2001)) and that only 10% of credit unions classify themselves as "low income," defined as half of members having incomes of less than 80% of the area median household income (National Credit Union Administration (2004) and Tansey (2001)).

• Many LMI families have gotten the message, and prefer not to hold savings accounts, citing high minimum balances, steep fees, low interest rates, problems meeting identification requirements, denials by banks, and a distrust of banks (Berry (2004)).

• Structurally, we have witnessed a curious development in the banking system. The traditional payment systems of banks (e.g., bill paying and check cashing) have been supplanted by non-banks in the form of alternative financial service providers such as check cashing firms. These same firms have also developed a vibrant set of credit products in the form of payday loans. However, these alternative financial service providers have not chosen to offer asset building or savings products. Thus, the most active financial service players in many poor communities do not offer products that let poor families save and get ahead.

This stereotyping of the financial service world obviously does not do justice to a number of financial institutions that explicitly seek to serve LMI populations' asset building needs. This includes Community Development Credit Unions, financial institutions like ShoreBank in Chicago, and the CRA-related activities of the nation's banks.
However, we sadly maintain that these are exceptions to the rule, and the CRA-related activities, while real, are motivated by regulation rather than arising intrinsically from the financial institutions.

We are reminded about one subtle—but powerful—piece of evidence about the lack of interest of financial institutions in LMI asset building each year. At tax time, many financial institutions advertise financial products to help families pay less in taxes: IRAs, SEP-IRAs, and KEOGHs. These products are important—for taxpayers. However, LMI families are more likely refund recipients, by virtue of the refundable portions of the Earned Income Tax Credit (EITC), the Child Tax Credit (CTC), and refunds from other sources, which together provided over $78 billion to LMI families in 2001, mostly early in the year around February (refund recipients tend to file their taxes earlier than payers) (Internal Revenue Service (2001)). With the exception of H&R Block, which has ongoing pilot programs to help LMI families save some of this money, financial institutions seem unaware of—and uninterested in—the prospect of gathering some share of a $78 billion flow of assets (Tufano and Schneider (2004)).

"Nobody wants my money" may seem like a bit of an exaggeration, but it captures the essential problem of LMI families wanting to save. "Christmas Club" accounts, where families deposited small sums regularly, have all but disappeared. While they are not barred from opening bank accounts or mutual fund accounts, LMI families could benefit from a low risk account with low fees, which delivers a competitive rate of return, with a small minimum balance and initial purchase price, and which is available nationally and portable if the family moves from place to place. The product has to be simple, the vendor trustworthy, and the execution easy—because the family has to do all the work. Given these specifications, savings bonds seem like a good choice.

3. U.S. Savings Bonds: History and Recent Developments

A. A Brief History of Savings Bonds

Governments, including the U.S. government, have a long tradition of raising monies by selling bonds to the private sector, including large institutional investors and small retail investors. U.S. Treasury bonds fall into the former group and savings bonds the latter. The U.S. is not alone in selling small denomination bonds to retail investors; since the 1910s, Canada has offered its residents a form of Canada Savings Bonds. 4 Generally, huge demands for public debt, occasioned by wartime, have given rise to the most concerted savings bond programs. The earliest bond issue by the US was conducted in 1776 to finance the Revolutionary War. Bonds were issued again to finance the War of 1812, the Civil War, and the Spanish American War, and with the onset of World War I the Treasury Department issued Liberty Bonds, mounting extensive marketing campaigns to sell the bonds to the general public (Cummings (1920)). The bond campaign during World War II is the best known of these efforts, though bonds were also offered in conjunction with the Vietnam War and, soon after the terrorist attacks in 2001, the government offered the existing EE bonds as "Patriot Bonds" in order to allow Americans to "express their support for anti-terrorism efforts" (US Department of the Treasury (2002)). During these war-time periods, bond sales have been tied to patriotism.

4 Brennan and Schwartz (1979) provide an introduction to Canadian Savings Bonds as well as the savings bond offerings of a number of European countries. For current information on Canadian Savings Bonds see http://www.csb.gc.ca/eng/resources_faqs_details.asp?faq_category_ID=19 (visited September 26, 2004).
World War I campaigns asked Americans to "buy the 'Victorious Fifth' Liberty Bonds the way our boys fought in France – to the utmost" (Liberty Loan Committee (1919)). World War II era advertisements declared, "War bonds mean bullets in the bellies of Hitler's hordes" (Blum (1976)). The success of these mass appeals to patriotism was predicated on bonds being accessible and affordable to large numbers of Americans. Both the World War I and World War II bond issues were designed to include small savers. While the smallest denomination Liberty Bond was $100, the Treasury also offered Savings Stamps for $5, as well as the option to purchase "Thrift Stamps" in increments of 25 cents that could then be redeemed for a Savings Stamp (Zook (1920)). A similar system was put in place for the World War II era War Bonds. While the smallest bond denomination was $25, Defense Stamps were sold through Post Offices and schools for as little as 10 cents and were even given as change by retailers (US Department of the Treasury (1984), US Department of the Treasury (1981)). Pasted in albums, these stamps were redeemable for War Bonds.

The War Bonds campaign went further than Liberty Bonds to appeal to small investors. During World War II, the Treasury Department oriented its advertising to focus on small savers, choosing popular actors and musicians that the Treasury hoped would make the campaign "pluralistic and democratic in taste and spirit" (Blum (1976)). In addition to more focused advertising, changes to the terms of War Bonds made them more appealing to these investors. The bonds were designed to be simple. Unlike all previous government bond issues, they were not marketable and were protected from theft (US Department of the Treasury (1984)). Many of these changes to the bond program had actually been put in place before the war. In 1935, the Treasury had introduced the "Savings Bond" (the basis for the current program) with the intention that it "appeal primarily to individuals with small amounts to invest" (US Department of the Treasury (1981)).

The Savings Bond was not the first effort by the Treasury to encourage small investors to save during a peacetime period. Following World War I and the Liberty Bond campaigns, the Treasury decided to continue its promotion of bonds and stamps. It stated that in order to:

Make war-taught thrift and the practice of saving through lending to the Government a permanent and happy habit of the American people, the United States Treasury will conduct during 1919 an intensive movement to promote wise spending, intelligent saving, and safe investment (US Department of the Treasury (1918)).

The campaign identified seven principal reasons to encourage Americans to save, including: (1) "Advancement," which was defined as savings for "a definite concrete motive, such as buying a home…an education, or training in trade, profession or art, or to give children educational advantages," (2) "Motives of self interest," such as "saving for a rainy day," and (3) "Capitalizing part of the worker's earnings," by "establishing the family on 'safety lane' if not on 'easy street'" (US Department of the Treasury (1918)).
Against this background, it seems clear that the focus of savings bonds on the "small saver" was by no means a new idea, but rather drew inspiration from the earlier "thrift movement" while attempting to tailor the terms of the bonds more precisely to the needs of small savers. However, even on these new terms, the new savings bonds (also called "baby bonds") did not sell quickly. In his brief, but informative, summary of the 1935 bond introduction, Blum details how:

"At first sales lagged, but they picked up gradually under the influence of the Treasury's promotional activities, to which the Secretary gave continual attention. By April 18, 1936, the Department had sold savings bonds with a maturity value of $400 million. In 1937 [Secretary of the Treasury] Morgenthau enlisted the advertising agency of Sloan and Bryan, and before the end of that year more than 1,200,000 Americans had bought approximately 4 1/2 million bonds with a total maturity value of over $1 billion" (Blum (1959)).

Americans planned to use these early savings bonds for much the same things that low-income Americans save for now, first and foremost, for emergencies (Blum (1959)). The intent of the program was not constrained to just providing a savings vehicle. The so-called "Baby-bond" allowed all Americans the opportunity to invest even small amounts of money in a government-backed security, which then-Secretary of the Treasury Morgenthau saw as a way to:

"Democratize public finance in the United States. We in the Treasury wanted to give every American a direct personal stake in the maintenance of sound Federal Finance. Every man and woman who owned a Government Bond, we believed, would serve as a bulwark against the constant threats to Uncle Sam's pocketbook from pressure blocs and special-interest groups. In short, we wanted the ownership of America to be in the hands of the American people" (Morgenthau (1944)).

In theory, the peacetime promotion of savings bonds as a valuable savings vehicle with both public and private benefits continues. From the Treasury's web site, we can gather that its "pitch" to would-be buyers of bonds focuses on the private benefits of owning bonds:

"There's no time like today to begin saving to provide for a secure tomorrow. Whether you're saving for a new home, car, vacation, education, retirement, or for a rainy day, U.S. Savings Bonds can help you reach your goals with safety, market-based yields, and tax benefits" (US Department of the Treasury (2004a)).

But the savings bond program, as it exists today, does not seem to live up to this rhetoric, as we discuss below. Recent policy decisions reveal much about the debate over savings bonds as merely one way to raise money for the Treasury versus their unique ability to help families participate in America and save for their future. As we keep score, the idea that savings bonds are an important tool for family savings seems to be losing.

B. Recent debates around the Savings Bond program and program changes

Savings bonds remain an attractive investment for American families. In Appendix A we provide details on the structure and returns of bonds today.
In brief, the bonds offer small investors the ability to earn fairly competitive, tax-advantaged returns on a security with no credit risk and no principal loss due to interest rate exposure, in exchange for a slightly lower yield relative to large denomination bonds and the possible loss of some interest in the event the investor needs to liquidate her holdings before five years. As we argue below and discuss in Appendix B, the ongoing persistence of the savings bond program is testimony to the bonds' attractiveness to investors. As we noted, both current and past statements to consumers about savings bonds suggest that the Treasury is committed to making them an integral part of household savings. Unfortunately, the changes to the program over the past two years seem contrary to this goal. Three of these changes may make it more difficult for small investors, and those least well served by the financial service community, to buy bonds and save for the future. More generally, the structure of the program seems to do little to promote the sale of the bonds.

First, on January 17, 2003, the Department of the Treasury promulgated a rule amending its regulations under title 31 of the CFR to increase the minimum holding period before redemption for Series EE and I Bonds from 6 months to 12 months for all newly issued bonds (31 CFR part 21 (2003)). In rare cases, savings bonds may be redeemed before 12 months, but generally only in the event of a natural disaster (US Department of the Treasury (2004b)). This increase in the minimum holding period essentially limits the liquidity of a bondholder's investment, a constraint that matters most for LMI savers who might be confronted with a family emergency that requires them to liquidate their bonds within a year. By lengthening the minimum initial holding period, the Department of the Treasury makes its bonds less attractive to low-income families. The effect this policy change seems likely to have on small investors, particularly those with limited means, appears to be unintended. Rather, this policy shift arises out of concern over rising numbers of bondholders keeping their bonds for only the minimum holding period in order to maximize their returns in the short term. Industry observers have noted that, given the low interest rates available on such investment products as CDs or money market funds, individuals have been purchasing Series EE bonds and I bonds, holding them for 6 months, paying the interest penalty for cashing out early, and still clearing a higher rate of interest than they might find elsewhere (Pender (2003)). The Department of the Treasury cited this behavior as the primary factor in increasing the minimum holding period. Officials argue that it amounts to "taking advantage of the current spread between savings bond returns and historically low short-term interest rates," an activity which they believe contravenes the nature of the savings bond as a long-term investment vehicle (US Department of the Treasury (2003a)).

Second, marketing efforts for savings bonds have been eliminated. Congress failed to authorize $22.4 million to fund the Bureau of Public Debt's marketing efforts, and on September 30, 2003, the Treasury closed all 41 regional savings bond marketing offices and cut 135 jobs. This funding cut represents the final blow to what was once a large and effective marketing strategy.
Following the Liberty Bond marketing campaign, as part of the "thrift movement," the Treasury continued to advertise bonds, working through existing organizations such as schools, "women's organizations," unions, and the Department of Agriculture's farming constituency (Zook (1920)). Morgenthau's advertising campaign for baby bonds continued the marketing of bonds through the 1930s, preceding the World War II era expansion of advertising in print and radio (Blum (1959)). Much of this war-time advertising was free to the government, provided as a volunteer service through the Advertising Council beginning in 1942. Over the next thirty years, the Ad Council arranged for contributions of advertising space and services worth hundreds of millions of dollars (US Department of the Treasury, Treasury Annual Report (1950-1979)). In 1970, the Treasury discontinued the Savings Stamps program, which it noted was one of "the Bond program's most interesting (and promotable) features" (US Department of the Treasury (1984)). The Advertising Council ended its affiliation with the Bond program in 1980, leaving the job of marketing bonds solely to the Treasury (Advertising Council (2004)). In 1999, the Treasury began a marketing campaign for the newly introduced I bonds. However, that year the Bureau spent only $2.1 million on the campaign directly and received just $13 million in donated advertising, far short of the $73 million it received in donated advertising in 1975 (James (2000) and US Department of the Treasury, Treasury Annual Report (1975)).

Third, while not a change in policy, the current program provides little or no incentive for banks or employers to sell bonds. Nominally, the existing distribution outlets for bonds are quite extensive, including financial institutions, employers, and the TreasuryDirect system. There are currently more than 40,000 financial institutions (banks, credit unions, and other depositories) eligible to issue savings bonds (US Department of the Treasury (2004b)). In principle, someone can walk up to a teller and ask to buy a bond. As anecdotal evidence, one of us tried to buy a savings bond in this way and had to go to a few different bank branches before the tellers could find the necessary forms, an experience similar to that detailed by James T. Arnold Consultants (1999) in their report on the Savings Bonds program. This lack of interest in selling bonds may reflect the meager profit potential available to a bank selling bonds. The Treasury pays banks fees of $.50 to $.85 per purchase to sell bonds, and the bank receives no other revenue from the transaction. 5 In off-the-record discussions, bank personnel have asserted that these payments cover less than 25% of the cost of processing a savings bond purchase transaction. The results of an in-house evaluation at one large national bank showed that there were 22 steps and four different employees involved in the processing of a bond purchase. Given these high costs and minuscule payments, our individual experience is hardly surprising, and neither is banks' lack of interest in the bond program. In addition, savings bonds can be purchased via the Payroll Savings Plan, which the Treasury reports as available through some 40,000 employer locations (US Department of the Treasury (2004c)). 6 Again, by way of anecdote, one of us called our employer to ask about this program and waited weeks before hearing back about this option.

5 Fees paid to banks vary depending on the exact role the bank plays in the issuing process. Banks which process savings bond orders electronically receive $.85 per bond, while banks which submit paper forms receive only $.50 per purchase (US Department of the Treasury (2000); Bureau of the Public Debt, 2005, Private Correspondence with Authors).

6 This option allows employees to allocate a portion of each paycheck towards the purchase of savings bonds. Participating employees are not required to allocate sufficient funds each pay period for the purchase of an entire bond but rather can allot smaller amounts that are held until reaching the value of the desired bond (US Department of the Treasury (1993) and US Department of the Treasury (2004d)).
A search of the University intranet for the term "savings bonds" yielded no hits, even though the program was officially offered.

Fourth, while it is merely a matter of taste, we may not be alone in thinking that the "front door" to savings bonds, the U.S. Treasury's Savings Bond web site, 7 is complicated and confusing for consumers (though the BPD has now embarked on a redesign of the site geared toward promoting the online TreasuryDirect system). This is particularly important in light of the fact that the Treasury has eliminated its marketing activities for these bonds. Financial service executives are keenly aware that cutting all marketing from a product, even an older product, does not encourage its growth. Indeed, commercial firms use this method to quietly "kill" products.

Fifth, on May 8, 2003, the Department of the Treasury published a final rule on the "New Treasury Direct System." This rule made Series EE bonds available through the TreasuryDirect system (Series I bonds were already available) (31 CFR part 315 (2003)). This new system represents the latest incarnation of TreasuryDirect, which was originally used for selling marketable Treasury securities (US GAO (2003)). In essence, the Treasury proposes that a $50 savings bond investor follow the same procedures as a $1 million investor in Treasury bills. The Department of the Treasury aims to eventually phase out paper bonds completely (Block (2003)) and to that end has begun closing down certain aspects of the Savings Bond program that rely on paper bonds, such as promotional giveaways of bonds. The Treasury also recently stopped the practice of allowing savers to buy bonds using credit cards. These changes seem likely to reduce low-income families' access to savings bonds and to depress demand for the bonds overall. By moving towards an online-only system of savings bond distribution, the Department of the Treasury risks closing out those individuals without Internet access. Furthermore, in order to participate in TreasuryDirect, the Treasury Department requires users to have a bank account and routing number. This distribution method effectively disenfranchises the people living in the approximately 10 million unbanked households in the US (Aizcorbe, Kennickell, and Moore (2003) and US Census (2002)). While there have been a few small, encouraging pilot programs in the BPD to experiment with making Treasury Direct more user-friendly for poorer customers, the overall direction of current policy seems to make bonds less accessible to consumers. 8

7 http://www.publicdebt.treas.gov/sav/sav.htm

8 Working with a local bank partner in West Virginia, the Bureau has rolled out "Over the Counter Direct" (OTC Direct). The program is designed to allow Savings Bond customers to continue to purchase bonds through bank branches, while substantially reducing the processing costs for banks. Under the program, a customer arrives at the bank and dictates her order to a bank employee, who enters it into the OTC Direct website. Clients receive a paper receipt at the end of the transaction and then generally are mailed their bonds (in paper form) one to two weeks later. In this sense, OTC Direct represents an intermediate step; the processing is electronic, while the issuing is paper-based. While not formally provided for in the system, the local bank partner has developed protocols to accommodate the unbanked and those who lack web access. For instance, the local branch manager will accept currency from an unbanked bond buyer, set up a limited-access escrow account, deposit the currency into the account, and effect the debit from the escrow account to the BPD. In cases where bond buyers lack an email address, the branch manager has used his own. A second pilot program, with Bank of America, placed kiosks that could be used to buy bonds in branch lobbies. The kiosks were linked to the Treasury Direct website, and thus enabled bond buyers without their own method of internet access to purchase bonds. However, the design of this initiative was such that the unbanked were still precluded from purchasing bonds.
Critics of the Savings Bonds program, such as Representative Ernest Istook (R-OK), charge that the expense of administering the US savings bond program is disproportionate to the amount of federal debt covered by the program. These individuals contend that while savings bonds represent only 3% of the Federal debt that is owned by the public, some three quarters of the budget of the Bureau of Public Debt is dedicated to administering the program (Berry (2003)). Thus they argue that the costs of the savings bond program must be radically reduced. Representative Istook summed up this perspective with the statement:

"Savings Bonds no longer help Uncle Sam; instead they cost him money…Telling citizens that they help America by buying Savings Bonds, rather than admitting they have become the most expensive way for our government to borrow, is misplaced patriotism" (Block (2003)).

However, some experts have questioned this claim. In testimony, the Commissioner of the Public Debt described calculations showing that Series EE and I savings bonds were less costly than Treasury marketable securities. 9 Nevertheless, the BPD itself seems to have subscribed to this cost-focused perspective, consistent with the Treasury's debt financing objective of borrowing the money needed to operate the federal government at the lowest cost. In May 2005, the Treasury substantially changed the terms of EE bonds. Instead of having interest on these bonds float with the prevailing five-year Treasury yield, they became fixed-rate bonds, with their interest rate set for the life of the bond at the time of purchase. 10 While this may be prudent debt management policy from the perspective of lowering the government's cost of borrowing, consumers have responded negatively to this news. 11 We would hope that policy makers took into consideration the impact this decision might have on the usefulness of bonds in helping families meet their savings goals.

9 See testimony by Van Zeck (Zeck (2002)). However, a recent GAO study requested by Rep. Istook cast doubt on the calculations that the Treasury used to estimate the costs of the program (US GAO (2003)).

10 See http://www.publicdebt.treas.gov/com/comeefixedrate.htm.

11 See http://www.bankrate.com/brm/news/sav/20050407a1.asp for one set of responses.
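To make the May 2005 change concrete, the sketch below contrasts the two rate rules described here and detailed in Appendix A: before May 2005, a paper EE bond earned a variable rate reset semiannually at 90% of the average five-year Treasury yield over the prior six months, while bonds issued afterward carry a single fixed rate set at purchase. The function names and the sample yields are our own illustrative assumptions, not Treasury data or Treasury code.

```python
# Illustrative sketch (our own, not Treasury code) of the EE bond rate rules
# discussed in the text and in Appendix A.

def ee_variable_rate(five_year_yields):
    """Pre-May-2005 rule: 90% of the average five-year Treasury yield
    over the prior six months, reset semiannually."""
    return 0.90 * sum(five_year_yields) / len(five_year_yields)

def ee_fixed_rate(rate_set_at_purchase):
    """Post-May-2005 rule: the administratively set rate at purchase
    applies for the life of the bond."""
    return rate_set_at_purchase

# Hypothetical five-year Treasury yields for the prior six months (decimals).
recent_five_year_yields = [0.039, 0.040, 0.041, 0.040, 0.042, 0.043]

print(f"Old rule, next semiannual reset: {ee_variable_rate(recent_five_year_yields):.2%}")
print(f"New rule, fixed for the bond's life: {ee_fixed_rate(0.0325):.2%}")
```

Under the old rule, the holder's return tracked market rates; under the new rule, the rate quoted at purchase (3.25% for EE bonds in Table VI) applies for the life of the bond.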
Focusing decisions of this sort solely on the cost of debt to the federal government misses a larger issue: the Savings Bond program was not created only to provide a particularly low-cost means of financing the federal debt. Rather, the original rationale for the savings bond program was to provide a way for individuals of limited means to invest small amounts of money and to allow more Americans to become financially invested in government. While this is not to say that the cost of the Savings Bonds program should be disregarded, the current debate seems to overlook one real public policy purpose of savings bonds: helping families save. And so while none of these recent developments (a longer holding period, elimination of marketing, and changes to the bond buying process) or the ongoing problems of few incentives to sell bonds and a lackluster public image seems intentionally designed to discourage LMI families from buying bonds, their likely effect is to make the bonds less attractive to own, more difficult to learn about, and less easy to buy. These decisions about bonds were made on the basis of the costs of raising money through savings bonds versus through large denomination Treasury bills, notes, and bonds. 12 This discussion, while appropriate, seems to lose sight of the fact that savings bonds also have served—and can serve—another purpose: to help families save. The proposals we outline below are intended to reinvigorate this purpose, in a way that may make savings bonds even more efficient to run and administer.

4. Reinventing the Savings Bond

The fundamental savings bond structure is sound. As a "brand," it is impeccable. The I Bond experience has shown that tinkering with the existing savings bond structure can broaden its appeal while serving a valuable public policy purpose. Our proposals are designed to make the savings bond a valuable tool for low- and moderate-income families, while making savings bonds a more efficient debt management tool for the Treasury. Our goal is not to have savings bonds substitute for or crowd out private investment vehicles, but rather to provide a convenient, efficient, portable, national savings platform available to all families.

1. Reduce the Required Holding Period for Bondholders Facing Financial Emergencies

While the Treasury legitimately lengthened the savings bond holding period to discourage investors seeking to arbitrage the differential between savings bond rates and money market rates, the lengthening of the holding period makes bonds less attractive to LMI families. The current minimum required holding period of 12 months is a substantial increase from the original 60 days required of baby bond holders. This longer period essentially requires investors to commit to saving for at least one year. A new Bureau of Public Debt program suggests that this may not be a problem for some investors. In an effort to encourage bond holders to redeem savings bonds that have passed maturity, the Bureau of Public Debt is providing a search service (called "Treasury Hunt") to find the holders of these 33 million bonds worth $13.5 billion (Lagomarsino (2005)).

12 For a cost-based view of the Savings Bond program from the perspective of the Bureau of Public Debt see US Department of the Treasury (2002). For an opposing view, also from this cost-based perspective, see GAO (2003).
The program reveals that bonds can be an extremely efficient mechanism to encourage long-term saving because they have an "out of sight, out of mind" quality—perhaps too much so. So, while many small investors may intend to save for the long term, and many may have no trouble doing so, this new extended commitment could still be particularly difficult for LMI families in that they would be prohibited from drawing on these funds even if faced with a financial emergency. If we want to encourage bond savings by LMI families, the Treasury could either (a) exempt small withdrawals from the required holding period or (b) publicize the simple emergency withdrawal rules that already exist. Under the first approach, the Treasury could allow a holder to redeem some amount (say $5,000 per year) earlier than twelve months, with or without an interest penalty. While this design would most precisely address the need for emergency redemption, it could be difficult to enforce, as redeeming banks do not have a real-time link to BPD records, and so a determined bond holder could conceivably "game the system" by redeeming $5,000 bundles of bonds at several different banks. Alternatively, while current rules allow low-income bondholders who find themselves in a natural disaster or financial emergency to redeem their bonds early, the financial emergency provision receives virtually no publicity; the BPD does publicize the rule that allows bond holders who have been affected by natural disasters to redeem their bonds early. Were the BPD to provide a similar level of disclosure of the financial emergency rules, LMI savers might be encouraged to buy savings bonds. Whether by setting some low limit of allowable early redemptions for all, or merely publicizing existing emergency withdrawal rules, it seems possible to meet the emergency needs of LMI savers while continuing to discourage arbitrage activity.

2. Make Savings Bonds Available to Tax Refund Recipients

The IRS allows filers to direct nominal sums to funding elections through the Federal Election Campaign Fund and permits refund recipients to direct their refunds to pay future estimated taxes. We propose that taxpayers be able to direct that some of their refunds be invested in savings bonds. The simplest implementation of this system—merely requiring one additional line on the 1040 form—would permit the refund recipient to select the series (I or EE) and the amount; the bonds would be issued in the primary filer's name. Slightly more elaborate schemes might allow the filer to buy multiple series of bonds, buy them for other beneficiaries (e.g., children), or allow taxpayers not receiving refunds to buy bonds at the time of paying their taxes. 13 The idea of letting refund recipients take their refund in the form of savings bonds is not a radical one, but rather an old one. Between 1962 and 1968 the IRS allowed refund recipients to purchase savings bonds with their refunds. Filers directed less than 1% of refunds to bond purchases during this period (Internal Revenue Service (1962-1968)). On its face, it might appear that allowing filers to purchase savings bonds with their refunds has little potential, but we feel this historical experience may substantially underestimate the opportunity to build savings at tax time via our refund-based bond sales, for two reasons.

13 Our proposal would allow taxpayers to purchase bonds with after-tax dollars, so it would have no implications for tax revenues.
First, the size of low-income filers’ tax refunds has increased from an average of $636 in 1964 (in 2001 dollars) to $1,415 in 2001 allowing more filers to put a part of their refund aside as savings (Internal Revenue Service (2001, 1964)). 14 These refunds tend to be concentrated among low-income families, where we would like to stimulate savings. Second, the historical experiment was an all or nothing program, it did not GAO (2003). 13 Our proposal would allow taxpayers to purchase bonds with after-tax dollars, so it would have no implications for tax revenues. 19 allow refund recipients to direct only a portion of their refunds to bonds. We expect our proposal will be more appealing since filers would be able to split their refunds, and direct only a portion towards savings bonds while receiving the remainder for current expenses. By allowing this option, the Department of the Treasury would enable low-income filers to couple a large cash infusion with the opportunity to invest in savings bonds. Perhaps the largest single pool of money on which low-income families can draw for asset building and investment is the more than $78 billion dollars in refundable tax-credits made available through federal and state government each year (Internal Revenue Service (2001)). Programs across the country have helped low-income taxpayers build assets by allowing filers to open savings accounts and Individual Development Accounts when they have their taxes prepared. A new program in Tulsa, Oklahoma run by the Community Action Project of Tulsa County and D2D has allowed tax-filers to split their refund, committing some to savings and receiving the remainder as a check. This program allowed families to precommit to saving their refunds, instead of having to make a saving decision when the refund was in hand and temptation to spend it was strong. While these small sample results are difficult to extrapolate, the program seemed to increase savings initially and families reported that the program helped them their financial goals. Since the short-lived bond-buying program in the 1960’s, the BPD has introduced other initiatives to encourage tax refund recipients to purchase bonds. The first of these, beginning in the 1980s, inserted marketing materials along with the refund checks sent to refund recipients. Though only limited data has been collected, it appears that these mailings were sent at random points throughout the tax season (essentially depending on availability as the BPD competed for “envelope space” with other agencies) and that no effort was made to segment the market, with all refund recipients (low income and higher income) receiving the materials. In all, the BPD estimates that between 1988 and 1993, it sent 111,000,000 solicitations with a response rate of little less than .1%. While rate may appear low, it is comparable to the .4% response rate on credit card mailings and some program managers at BPD deemed the mailings cost effective (Anonymous (2004)). Considering that the refund recipient had to take a number of steps to effect the bond transaction (cash the refund, etc.) these results are in some sense fairly encouraging. A second related venture was tried for the first time in tax season 2004. The BPD partnered with a volunteer income tax preparation (VITA) site in West Virginia to try to interest low-income refund recipients in using the Treasury Direct System. The tax site was located in a public library and was open for approximately 12 hours per week, during tax season. 
In 2004, the site served approximately 500 people. The program consisted of playing a PowerPoint presentation in the waiting area of the free tax preparation site and making available brochures describing the Treasury Direct system. Informal evaluation by tax counselors who observed the site suggests that tax filer interest was extremely limited and that most filers were preoccupied with ensuring that they held their place in line and were able to get their taxes completed quickly. While both of these programs attempt to link tax refunds with savings, they do so primarily through advertising, not through any mechanism that would make such savings easier. The onus is still on the tax refund recipient to receive the funds, convert them to cash (or personal check), fill out a purchase order, and obtain the bonds. In the case of the 2004 experiment, the refund recipient had to set up a Treasury Direct account, which would involve having a bank account, etc. These programs remind tax filers that saving is a good idea, but they do not make saving simple. We remain optimistic, in part based on data collected during the Tulsa experiment described above. While the experiment did not offer refund recipients the option of receiving savings bonds, we surveyed them on their interest in various options. Roughly 24% of participants expressed an interest in savings bonds, and nearly three times as large a fraction were interested when the terms of savings bonds were explained (Beverly, Schneider, and Tufano (2004)). Our sample is too small to draw a reliable inference from this data, but it certainly suggests that the concept of offering savings bonds is not completely ungrounded. Currently, a family wanting to use their refund to buy savings bonds would have to receive the refund, possibly pay a check casher to convert it to cash, make an active decision to buy the bond, and go online or to a bank to complete the paperwork. Under our proposal, the filer would merely indicate the series and amount, the transaction would be completed, and the money would be safely removed from the temptation of spending. Most importantly, since the government does not require savings bond buyers to pass a ChexSystems hurdle, this would open up savings to possibly millions of families excluded from opening bank accounts. While we would hope that refund recipients could enjoy a larger menu of savings products than just bonds, offering savings bonds seamlessly on the tax form has practical advantages over offering other products at tax time. By putting a savings option on the tax form, all filers, including self-filers, could be reminded that tax time is potentially savings time. Paid and volunteer tax sites wishing to offer other savings options on site would face a few practical limitations. First, certain products (like mutual funds) could only be offered by licensed broker-dealers, which would require either on-site integration of a sales force or putting the client in touch, via phone or other means, with an appropriately licensed agent. More generally, tax preparers—especially volunteer sites—would be operationally challenged by the prospect of opening accounts on site. However, merely asking the question "How much of your refund—if any—would you like in savings bonds?" could be incorporated relatively easily into the process flow.
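To illustrate how little machinery the proposal requires, the sketch below models the single extra 1040 line described above: the filer names a series and a dollar amount, the amount is capped at the refund and rounded down to a whole number of minimum-denomination purchases, and the remainder is paid out as the usual refund. The $25 minimum purchase, the rounding rule, and the function name are our illustrative assumptions, not IRS or Treasury specifications.

```python
# Hypothetical sketch of the proposed 1040 refund-to-savings-bond election.
# Denominations and rounding behavior are illustrative assumptions only.

PAPER_BOND_MIN_PURCHASE = 25  # smallest purchase price assumed for illustration ($25)

def split_refund(refund, bond_request, series="I"):
    """Split a tax refund between savings bond purchases and a cash refund.

    refund       -- total refund due to the filer, in dollars
    bond_request -- amount the filer asks to receive as savings bonds
    series       -- "EE" or "I", as elected on the (proposed) extra 1040 line
    """
    if series not in ("EE", "I"):
        raise ValueError("series must be 'EE' or 'I'")
    requested = max(0, min(bond_request, refund))
    # Round down to a whole number of minimum-denomination purchases.
    bond_amount = (requested // PAPER_BOND_MIN_PURCHASE) * PAPER_BOND_MIN_PURCHASE
    cash_refund = refund - bond_amount
    return {"series": series, "bonds": bond_amount, "cash": cash_refund}

# Example: a $1,415 refund (the average low-income refund for 2001 cited earlier)
# with $300 directed to Series I bonds.
print(split_refund(1415, 300, series="I"))
# {'series': 'I', 'bonds': 300, 'cash': 1115}
```

The point of the sketch is that the split uses only information already on the return; no bank account, ChexSystems screen, or separate purchase order is needed.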
Not only would a refund-driven savings bond program make saving easier for families, it would likely reduce the cost of marketing and administering the savings bond program for the Treasury. All of the information needed to purchase a bond is already on the filer's tax return, so there would be less likelihood of error. It should not require substantial additional forms, but merely a single additional line or two on the 1040. The Treasury would not need to pay banks fees of $.50 to $.85 per purchase to sell bonds. 15 Furthermore, the refund monies would never leave the federal government. If subsequent investigation uncovered some tax compliance problem for a refund recipient, some of the contested funds would be easily traceable. Given LMI refunds of $78 billion annually, savings bond sales could increase by 9.8% for each 1% of these refunds captured. 16 Ultimately, whether or not refund recipients are interested in buying bonds will only be known if one makes a serious attempt to market to them at refund time. We are attempting to launch an experiment this coming tax season which will test this proposition.

3. Enlist private sector social marketing for savings bonds

Right now, banks and employers have little incentive to market savings bonds. If an account is likely to be profitable, a bank would rather open the account than sell the person a savings bond. If an account is unlikely to be profitable, the bank is not likely to expend much energy selling bonds to earn $.50 or $.85. With a reinvented Savings Bond program, the Treasury could leverage other private sector marketing. First, one can imagine a very simple advertising program for the tax-based savings bond program focusing its message on the simplicity of buying bonds at tax time and the safety of savings bond investments. We envision a "RefundSaver" program. Groups like the Consumer Federation of America and America Saves might be enlisted to join in the public service effort if the message were sufficiently simple. 17 With a tax-centered savings bond marketing program, the IRS could leverage paid and volunteer tax preparers to market bonds. If these tax preparers could enhance the "value proposition" they have with their clients by offering them a valuable asset-building service at tax time, they might have a strong incentive to participate, possibly without any compensation. If the Treasury paid preparers the same amount that it offered to banks selling bonds, this would create even greater incentives for the preparers to offer the bonds, although it might create some perverse incentives for preparers as well.

4. Consider savings bonds in the context of a family's financial life cycle

As they are currently set up, savings bonds are the means and end of household savings. Bonds are bought and presumably redeemed years (if not decades) later.

15 Fees paid to banks vary depending on the exact role the bank plays in the issuing process. Banks that only accept bond orders and payment from customers but send those materials to regional Federal Reserve Banks for final processing are paid $.50 per purchase. Banks that do this final level of processing (inscription) themselves receive $.85 per bond issue (US Department of the Treasury (2000)).

16 Savings Bond sales of EE and I bonds through payroll and over-the-counter channels were $7.9 billion in 2004. Total refunds to LMI filers in 2001 were $78 billion. Each $780 million in refunds captured would be a 9.8% increase in Savings Bond sales.

17 The Bureau of Public Debt commissioned Arnold Consultants to prepare a report on marketing strategy in 1999. They also cite the potential for a relationship between the BPD and non-profit private sector groups dedicated to encouraging savings (James T. Arnold Consultants (1999)).
Data from the Treasury Department partially bear out this assumption. Of the bonds redeemed between 1950 and 1980, roughly half were redeemed prior to maturity. Through the mid-1970s, redemptions of unmatured bonds made up less than half of all bond redemptions (41% on average); in the late 1970s, however, this ratio changed, with unmatured bonds making up an increasingly large share of redemptions (up to 74% in 1981, the last year for which the data is reported). Yet even without this increase in redemptions (perhaps brought on by the inflationary environment of the late 1970s), early redemptions seem to have been quite frequent. This behavior is in line with the use of bonds as described by the Treasury in the 1950s, as a means of "setting aside liquid savings out of current income" (US Department of the Treasury, Treasury Annual Report (1957)). Under our proposal, savings bonds would be a savings vehicle for LMI families who have small balances and low risk tolerances. Over time, these families might grow to have larger balances and greater tolerance for risk; in addition, their investing horizons might lengthen. At that point, our savings bond investors might find that bonds are no longer the ideal investment vehicle, and our reinvented savings bonds should recognize this eventuality. We propose that the Treasury study the possibility of allowing Savings Bond holders to "roll over" their savings bonds to other investment vehicles. In the simplest form, the Treasury would allow families to move their savings bonds directly into other investments. These investments might be products offered by the private sector (mutual funds, certificates of deposit, etc.). If the proposals to privatize Social Security became reality, these "rollovers" could be into the new private accounts. Finally, it might be possible to roll over savings bond amounts into other tax-deferred accounts, although this concept would add complexity, as one would need to consider the ramifications of mixing after-tax and pre-tax investments. The proposal for Retirement Savings Bonds (R-Bonds) takes a related approach. These bonds would allow employers to set aside small amounts of retirement savings for employees at a lower cost than would be incurred through using traditional pension systems. R-Bonds would be specifically earmarked for retirement and could only be rolled over into an IRA (Financial Services Roundtable (2004)).

5. Make the process of buying savings bonds more user friendly

There has been a shift in the type of outlets used to distribute US Savings Bonds. While there are still more than 40,000 locations at which individuals can purchase savings bonds, these are now exclusively financial institutions. Post Offices, the original distribution mode for baby bonds, no longer retail bonds. This shift is of particular concern to low-income small investors. Over the past 30 years a number of studies have documented the relationship between bank closings and the racial and economic make-up of certain neighborhoods.
In a study of five large US cities, Caskey (1994) finds that neighborhoods with large African American or Hispanic populations are less likely to have a bank branch and that in several of the cities, "low-income communities are significantly less likely to have a local bank than are other communities." Post Offices, on the other hand, remain a ubiquitous feature of most neighborhoods and could again serve as an ideal location for the sale of savings bonds. Our tax-intermediated bond program should make savings bonds more accessible for most Americans. In addition, just as the Treasury allows qualified employers to offer savings bonds, retailers like Wal-Mart or AFS providers like ACE might prove to be effective outlets to reach LMI bond buyers. Further, the Department of the Treasury could work with local public libraries and community-based organizations to facilitate access to TreasuryDirect for the millions of Americans without Internet access.

* * * * * * * * * * *

Our proposals are very much in the spirit of RE-inventing the savings bond. As a business proposition, one never wants to kill a valuable brand. We suspect that savings bonds – conjuring up images of old-fashioned savings – may be one of the government's least recognized treasures. The savings bond was—and can be again—a valuable device to increase household savings while simultaneously becoming a more efficient debt management tool. The U.S. Savings Bond program, when first introduced in the early twentieth century, was a tremendous innovation that created a new class of investors and enabled millions of Americans to buy homes and durable goods and to pursue higher education (Samuel (1997)). In the same way, a revitalized Savings Bond program, aimed squarely at serving LMI families, can again become a pillar of family savings. In mid-September of 2005, Senators Mary Landrieu and David Vitter proposed a renewed savings bond marketing effort, aimed at raising funds for the reconstruction of areas damaged by Hurricane Katrina (Stone (2005)). The Senators alluded to the success of the 1940s War Bond program as inspiration. We think that they should focus on bonds not only to raise funds to rebuild infrastructure and homes, but also to use the opportunity to help families rebuild their financial lives. These "rebuilding bonds" could be used to help families affected by the hurricane to save and put their finances in order, perhaps by offering preferred rates on the bonds or by offering matching on all bond purchases. Non-affected families could simply use the occasion to save for their futures or emergencies. A national bond campaign might emphasize that bond purchasers can rebuild not only critical infrastructure, homes, and businesses, but also families' savings.

Sources

31 CFR Part 21 et al., United States Savings Bonds, Extension of Holding Period; Final Rule, Federal Register, January 17, 2003.

31 CFR Part 315 et al., Regulations Governing Treasury Securities, New Treasury Direct System; Final Rule, Federal Register, May 8, 2003.

Advertising Council, 2004, Historic Campaigns: Savings Bonds, http://www.adcouncil.org/campaigns/historic_savings_bonds/ (last accessed October 12, 2004).

America Saves, 2004, Savings strategies: The importance of emergency savings, The American Saver, http://www.americasaves.org/back_page/winter2004.pdf (last accessed October 12, 2004).
Anonymous, “Behind 2003’s Direct-Mail-Numbers,” Credit Card Management, 17(1), April 2004, ABI/INFORM Global. Aizcorbe, Ana M., Arthur B. Kennickell, and Kevin B. Moore, 2003, Recent Changes in U.S. Family Finances: Evidence from the 1998 and 2001 Survey of Consumer Finances, Federal Reserve Bulletin, 1 -32 http://www.federalreserve.gov/pubs/oss/oss2/2001/bull0103.pdf Avery, Robert B., Raphael W. Bostic, Paul S. Calem, and Glenn B. Canner, 1997, Changes in the distribution of banking offices, Federal Reserve Bulletin http://www.federalreserve.gov/pubs/bulletin/1997/199709LEAD.pdf (last accessed October 6th, 2004). Avery, Robert B., Gregory Elliehausen, and Glenn B. Canner, 1984, Survey of Consumer Finances, 1983, Federal Reserve Bulletin, 89; 679-692, http://www.federalreserve.gov/pubs/oss/oss2/83 /bull0984.pdf Bankrate.com, 2005, Passbook/Statement Savings Rates, http://www.bankrate.com/brm/publ/passbk. asp. Barr, Michael S., 2004, Banking the poor, Yale J. on Reg. 21(1). Berry, Christopher, 2004, To bank or not to bank? A survey of low-income households, Harvard University, Joint Center for Housing Studies, Working Paper BABC 04-3 http://www.jchs. harvard.edu/publications/finance/babc/babc_04-3.pdf (last accessed March 12th, 2004). Berry, John M., 2003, Savings Bonds under siege, The Washington Post, 19 January 2003, http://global.factiva.com/ene/Srch/ss_hl .asp (last accessed October 12, 2004). Beverly, Sondra, Daniel Schneider, and Peter Tufano, 2004, Splitting tax refunds and building savings: An empirical test, Working Paper. Blum, John Morton, 1959, From the Morgenthau Diaries: Years of Crisis, 1928-1938 (Houghton Mifflin Company, Boston, MA). Blum, John Morton, 1976, V was for Victory: Politics and American Culture During World War II (Harvest/HBJ, San Diego, CA). 26 Block, Sandra, 2003, An American tradition too unwieldy?, USA Today, September 8 th , 2003 http://global.factiva.com/ene/Srch/ss_hl .asp (last accessed September 28, 2004). Board of Governors of the Federal Reserve, 2003, Annual Report to Congress on Retail Fees and Services of Depository Institutions, http://www.federalreserve.gov/boarddocs/rptcongress/2003 fees.pdf Bostic, Raphael W., Paul S. Calem, and Susan M. Wachter, 2004, Hitting the wall: Credit as an impediment to homeownership, Harvard University, Joint Center for Housing Studies, Working Paper BABC 04-5 http://www.jchs.harvard.edu/publications/finance/babc/babc_04-5.pdf (last accessed September 29th, 2004). Brennan, Michael J. and Eduardo S. Schwartz, 1979, Savings Bonds: Theory and Empirical Evidence, New York University Graduate School of Business Administration, Monograph Series in Finance and Economics, Monograph 1979-4. Caskey, John, 1994, “Bank Representation in Low-Income and Minority Urban Communities,” Urban Affairs Review 29, 4 (June 1994): 617. Carlson, Mark and Roberto Perli, 2004, Profits and balance sheet development at US commercial banks in 2003, Federal Reserve Bulletin, Spring 2004, 162-191, http://www.federalreserve.gov /pubs/bulletin/2004/spring04profit.pdf (last accessed September 9th, 2004). Correcting and replacing: New ad campaign from American Express Financial Advisors speaks from investor’s point of view, 2004, Business Wire, September 27 th , 2004, http://global.factiva.com/ene/Srch/ss_hl .asp (last accessed October 7 th , 2004). Cummings, Joseph, 1920, United States government bonds as investments, in The New American Thrift, ed. Roy G. Blakey, Annals of the American Academy of Political and Social Science, vol. 87. 
Current Population Survey, 2002 March Supplement to the Current Population Survey Annual Demographic Survey, http://ferret.bls.census.gov/macro/032003/hhinc/new06_000.htm Federal Credit Union Act, 12 U.S.C. §1786, http://www.ncua.gov/Regulations OpinionsLaws/fcu_act/fcu_act.pdf (last accessed October 8th, 2004). Federal Deposit Insurance Corporation (FDIC), 2004, Historical Statistics on Banking, Table CB15, Deposits, FDIC- -Insured Commercial Banks, United States and Other Areas, Balances at Year End, 1934 – 2003, http://www2.fdic.gov/hsob/HSOBRpt.asp?state=1&rptType=1&Rpt_Num=15. Federal Reserve Board, 2005, Federal Reserve Statistics, Selected Interest Rates, Historical Data, http://www.federalreserve.gov/releases/h15/data.htm. Financial Services Roundtable, 2004, The Future of Retirement Security in America, http://www.fsround.org/pdfs/RetirementSecurityFuture12-20-04.pdf (last accessed March 3rd, 2004) Global Insight, 2003, Predicting Personal Bankruptcies: A Multi-Client Study, http://www.globalinsight.com/publicDownload/genericContent/10-28-03_mcs.pdf (last accessed 10/12/04). 27 Hanc, George, 1962, The United States Savings Bond Program in the Postwar Period, Occasional Paper 81 (National Bureau of Economic Research, Cambridge, MA). Hayashi, Yuka, 2004, First-quarter earnings for T. Rowe Price nearly double, Dow Jones Newswires, April 27 th , 2004, http://global.factiva.com/ene/Srch/ss_hl .asp (last accessed October13th, 2004). Imoneynet.com, 2005, Money Market Mutual Funds Data Base, data file in possession of authors. Internal Revenue Service Statistics of Income, 2001, Individual income tax statistics – 2001, Table 3.3 – 2001 Individual income tax, all returns: Tax liability, tax credits, tax payments, by size of adjusted gross income, http://www.irs.gov/pub/irs-soi/01in33ar.xls. Internal Revenue Service Statistics of Income, 1960-1969, Individual income tax statistics, Table 4 – Individual income tax, all returns: Tax liability, tax credits, tax payments, by size of adjusted gross income, http://www.irs.gov/pub/irs-soi/01in33ar.xls. Internal Revenue Service, 2003, Investment Income and Expenses (including Capital Gains and Losses), Publication 550, (Department of the Treasury, Internal Revenue, Washington, D.C.). James, Dana, 2000, Marketing bonded new life to “I” series, Marketing News, 34(23). James E. Arnold Consultants, (1999), Marketing Strategy Development for the Retail Securities Programs of the Bureau of Public Debt, Report to the Bureau of Public Debt, on file with the authors. Kennickell, Arthur B., Martha Starr-McLuer, and Brian J. Surette, 2000, Recent changes in US family finances: Results from the 1998 Survey of Consumer Finances, Federal Reserve Bulletin, 88; 1-29, http://www.federalreserve.gov/pubs/oss/oss2/98/bull0100.pdf Kennickell, Arthur and Janice Shack-Marquez, 1992, Changes in family finances from 1983 to 1989: Evidence from the Survey of Consumer Finances, Federal Reserve Bulletin, 78; 1-18, http://www.federalreserve.gov/pubs/oss/oss2/89/bull0192.pdf. Kennickell, Arthur B., Martha Starr-McLuer, 1994, Canges in US family finances from 1989 to 1992: Evidence from the Survey of Consumer Finances, Federal Reserve Bulletin, 80; 861-882, http://www.federalreserve.gov/pubs/oss/oss2/92/bull1094.pdf Kennickell, Arthur B., Martha Starr-McLuer, and Annika E. 
Sunden, 1997, Family finances in the US: Recent evidence from the Survey of Consumer Finances, Federal Reserve Bulletin, 83;, 1-24, http://www.federalreserve.gov/pubs/oss/oss2/95/bull01972.pdf Deborah Lagomarsino, “Locating Lost Bonds Only a ‘Treasury Hunt’ Away,” The Wall Street Journal, September 20, 2005, http://global.factiva.com (accessed September 23, 2005). Liberty Loan Committee of New England, 1919, Why Another Liberty Loan (Liberty Loan Committee of New England, Boston, MA). Morningstar Principia mutual funds advanced, 2004, CD-ROM Data File (Morningstar Inc., Chicago, Ill.). Morgenthau, Henry, 1944, War Finance Policies: Excerpts from Three Addresses by Henry Morgenthau, (US Government Printing Office, Washington D.C.). 28 National Credit Union Administration, 2004, NCUA Individual Credit Union Data http://www.ncua.gov/indexdata.html (last accessed October 12th, 2004). Pender, Kathleen, 2003, Screws Tightened on Savings Bonds,” San Francisco Chronicle, 16 January 2003, B1 Projector, Dorothy S., Erling T. Thorensen, Natalie C. Strader, and Judith K. Schoenberg, 1966, Survey of Financial Characteristics of Consumers (Board of the Federal Reserve System, Washington DC). Quinn, Jane Bryant, 2001, Checking error could land you on blacklist, The Washington Post, September 30 th , 2001, http://global.factiva.com/ene/Srch/ss_hl .asp (last accessed March 12, 2004). Quittner, Jeremy, 2003, Marketing separate accounts to the mass affluent, American Banker, January 8 th , 2003, http://global.factiva.com/ene/Srch/ss_hl .asp (last accessed October 7 th , 2004). Samuel, Lawrence R., 1997, Pledging Allegiance: American Identity and the Bond Drive of WWII (Smithsonian Institution Press, Washington DC). Schneider, Daniel and Peter Tufano, 2004, “New Savings from Old Innovations: Asset Building for the Less Affluent,” New York Federal Reserve Bank, Community Development Finance Research Conference, http://www.people.hbs.edu/ptufano/New_from_old.pdf. Schreiner, Mark, Margaret Clancy, and Michael Sherraden, 2002, “Final report: Saving performance in the American Dream Demonstration, a national demonstration of Individual Development Accounts 9 (Washington University in St. Louis, Center for Social Development, St. Louis, MO). Sobhani, Robert and Maryana D. Shteyman, 2003, T. Rowe Price Group, Inc. (TROW): Initiating Coverage with a Hold; Waiting for an Entry Point, http://onesource.com (last accessed October 7 th , 2004) (Citigroup Smith Barney, New York, NY). Stone, Adam, 2004, After some well-placed deposits in media, bank campaign shows positive returns, PR News, March 1 st , 2004, http://global.factiva.com/ene/Srch/ss_hl .asp (last accessed October 13 th , 2004). Stone, Andrea, “Republicans Offer Spending Cuts,” USA Today, September 20, 2005 available online at www.usatoday.com (last accessed September 23, 2005). Survey of Consumer Finances, 2001, Federal Reserve Board, 2003, Electronic Data File, http://www.federalreserve.gov/pubs/oss/oss2/2001/scf2001home.html#scfdata2001 (last accessed June, 2003). T.D. Waterhouse, 2001, TD Waterhouse Group, Inc. Reports Cash Earnings of $.01 per Share for the Fiscal Quarter Ended October 31, 2001 www.tdwaterhouse.com (last accessed October 13 th , 2004). T. Rowe Price, 2003, T. Rowe Price 2003 Annual Report: Elements of Our Success, www.troweprice.com (last accessed October 13 th , 2004). 
Tansey, Charles D., 2001, Community development credit unions: An emerging player in low income communities, Capital Xchange, Brookings Institution Center on Urnabn and Metropolitan Policy 29 and Harvard University Joint Center for Housing Studies http://www.brook.edu/metro /capitalxchange/article6.htm (last accessed October 1st, 2004). Tufano, Peter and Daniel Schneider, 2004, H&R Block and “Everyday Financial Services,” Harvard Business School Case no. 205-013 (Harvard Business School Press, Boston, MA). US Census, 2002, http://ferret.bls.census.gov/macro/032002/hhinc/new01_001.htm. United States Department of the Treasury, 1915-1980, Annual Report of the Secretary of the Treasury on the State of the Finances for the Year (Department of the Treasury, Washington, DC). United States Department of the Treasury, 1918, To Make Thrift a Happy Habit (US Treasury, Washington D.C.). United States Department of the Treasury, 1935-2003, Treasury Bulletin (Department of the Treasury, Washington, DC). United States Department of the Treasury, 1935, United States Savings Bonds (US Department of Treasury, Washington, DC). United States Department of the Treasury, 1981, United States Savings Bond Program, A study prepared for the Committee on Ways and Means, US House of Representatives (US Government Printing Office, Washington, DC). United States Department of the Treasury, U.S. Savings Bonds Division, 1984, A History of the United States Savings Bond Program (US Government Printing Office, Washington, DC). United States Department of the Treasury, 1993, Help Your Coworkers Secure Their Future Today, Take Stock in America, U.S. Savings Bonds, Handbook for Volunteers, (United States Department of the Treasury, Washington, D.C.). United States Department of the Treasury, 2000, Statement: Payment of Fees for United States Savings Bonds, ftp://ftp.publicdebt.treas.gov/forms/sav4982.pdf (last accessed October 7th, 2004). United States Department of the Treasury, 2002, Terrorist attack prompts sale of Patriot Bond, The Bond Teller, 31(1). United States Department of the Treasury, 2003a, Minimum holding period for EE/I bonds extended to 12 months, The Bond Teller, January 31 st , 2003, http://www.publicdebt.treas.gov/sav/ savbtell.htm (last accessed October 12th, 2004). United States Department of the Treasury, Fiscal Service, Bureau of the Public Debt, July 2003b, Part 351- Offering of United States Savings Bonds, Series EE, Department Circular, Public Debt Series 1-80. United States Department of the Treasury, Fiscal Service, Bureau of the Public Debt, July 2003c, Part 359- Offering of United States Savings Bonds, Series I, Department Circular, Public Debt Series 1-98. United States Department of the Treasury, 2004a, 7 Great Reasons to Buy Series EE bonds, http://www.publicdebt.treas.gov/sav/savbene1.htm#easy (last visited September 26 th , 2004). United States Department of the Treasury, 2004b, The U.S. Savings Bonds Owner’s Manual, ftp://ftp.publicdebt.treas.gov/marsbom.pdf (last accessed March 12th, 2004). 30 United States Department of the Treasury, 2004c, http://www.publicdebt.treas.gov/mar/marprs.htm (last accessed October 12th, 2004). United States Department of the Treasury Bureau of Public Debt, 2004d, FAQs: Buying Savings Bonds Through Payroll Savings, www.publicdebt.treas.gov. United States Department of the Treasury, Bureau of Public Debt, 2005, Current Rates (through April 2005), http://www.publicdebt.treas.gov/sav/sav.htm. 
United States Department of the Treasury, Bureau of Public Debt, 2005, "EE Bonds Fixed Rate Frequently Asked Questions," available online at http://www.treasurydirect.gov/indiv/research/indepth/eefixedratefaqs.htm, last accessed June 23, 2005. United States Department of the Treasury, Bureau of Public Debt, 2005, Private Correspondence with Authors, on file with authors. United States Government Accounting Office, 2003, Savings Bonds: Actions Needed to Increase the Reliability of Cost-Effectiveness Measures (United States Government Accounting Office, Washington, D.C.). Zeck, Van, 2002, Testimony before House subcommittee on Treasury, Postal Service, and General Government Appropriations, March 20, 2002. Zook, George F., 1920, Thrift in the United States, in The New American Thrift, ed. Roy G. Blakey, Annals of the American Academy of Political and Social Science, vol. 87.

Table I
Fraction of U.S. Households Having "Adequate" Levels of Emergency Savings [1]

                                                   Financial Assets (Narrow) [2]   Financial Assets (Broad) [3]
All Households; savings adequate to:
  Replace six months of income                                  22%                             44%
  Replace three months of income                                32%                             54%
  Meet emergency saving goal [4]                                47%                             63%
Household Income < $30,000; savings adequate to:
  Replace six months of income                                  19%                             28%
  Replace three months of income                                25%                             35%
  Meet stated emergency saving goal                             29%                             39%

Source: Authors' tabulations from the 2001 Survey of Consumer Finances (SCF (2001)).
Notes: [1] This chart compares different levels of financial assets to different levels of precautionary savings goals. If a household's financial assets met or exceeded the savings goal, it was considered adequate. The analysis was conducted for all households and for households with incomes less than $30,000 per year. [2] Financial Assets (Narrow) includes checking, saving, and money market deposits, call accounts, stock, bond, and combination mutual funds, direct stock holdings, US savings bonds, and Federal, State, Municipal, corporate, and foreign bonds. [3] Financial Assets (Broad) includes all assets under Financial Assets (Narrow) as well as certificates of deposit, IRA and Keogh accounts, annuities and trusts, and the value of all 401(k), 403(b), SRA, Thrift, and Savings pension plans, as well as the assets of other plans that allow for emergency withdrawals or borrowing. [4] Respondents were asked how much they felt it was necessary to have in emergency savings. This row reports the percentage of respondents with financial assets greater than or equal to that emergency savings goal.

Table II
Percent Owning Select Financial Assets, by Income and Net Worth (2001)

Columns: Savings Bonds / Certificates of Deposit / Mutual Funds / Stocks / Transaction Accounts / All Financial Assets

Percentile of income
  Less than 20       3.8%    10.0%    3.6%    3.8%    70.9%    74.8%
  20 - 39.9         11.0%    14.7%    9.5%   11.2%    89.4%    93.0%
  40 - 59.9         14.1%    17.4%   15.0%   16.4%    96.1%    98.3%
  60 - 79.9         24.4%    16.0%   20.6%   26.2%    99.8%    99.6%
  80 - 89.9         30.3%    18.3%   29.0%   37.0%    99.7%    99.8%
  90 - 100          29.7%    22.0%   48.8%   60.6%    99.2%    99.7%
  Lowest quintile ownership rate as a percent of top decile ownership rate:
                    12.8%    45.5%    7.4%    6.3%    71.5%    75.0%

Percentile of net worth
  Less than 25       4.3%     1.8%    2.5%    5.0%    72.4%    77.2%
  25 - 49.9         12.8%     8.8%    7.2%    9.5%    93.6%    96.5%
  50 - 74.9         23.5%    23.2%   17.5%   20.3%    98.2%    98.9%
  75 - 89.9         25.9%    30.1%   35.9%   41.2%    99.6%    90.8%
  90 - 100          26.3%    26.9%   54.8%   64.3%    99.6%   100.0%
  Lowest quintile ownership rate as a percent of top decile ownership rate:
                    16.3%     6.7%    4.6%    7.8%    72.7%    77.2%

Source: Aizcorbe, Kennickell, and Moore (2003).
Table III
Median Value of Select Financial Assets among Asset Holders, by Income and Net Worth (2001)

Columns: Savings Bonds / Certificates of Deposit / Mutual Funds / Stocks / Transaction Accounts / All Financial Assets

Percentile of income
  Less than 20     $1,000   $10,000    $21,000    $7,500      $900     $2,000
  20 - 39.9          $600   $14,000    $24,000   $10,000    $1,900     $8,000
  40 - 59.9          $500   $13,000    $24,000    $7,000    $2,900    $17,100
  60 - 79.9        $1,000   $15,000    $30,000   $17,000    $5,300    $55,500
  80 - 89.9        $1,000   $13,000    $28,000   $20,000    $9,500    $97,100
  90 - 100         $2,000   $25,000    $87,500   $50,000   $26,000   $364,000

Percentile of net worth
  Less than 25       $200    $1,500     $2,000    $1,300      $700     $1,300
  25 - 49.9          $500      $500     $5,000    $3,200    $2,200    $10,600
  50 - 74.9        $1,000   $11,500    $15,000    $8,300    $5,500    $53,100
  75 - 89.9        $2,000   $20,000    $37,500   $25,600   $13,700   $201,700
  90 - 100         $2,000   $40,000   $140,000  $122,000   $36,000   $707,400

Source: Aizcorbe, Kennickell, and Moore (2003). Medians represent holdings among those with non-zero holdings.

Table IV
Minimum Initial Purchase Requirements among Mutual Funds in the United States

                                                       Min = $0   Min <= $100   Min <= $250
Among all funds listed by Morningstar
  Number allowing                                         1,292         1,402         1,785
  Percent allowing                                           8%            9%           11%
Among the top 500 mutual funds by net assets
  Number allowing                                            49            55            88
  Percent allowing                                          10%           11%           18%
Among the top 100 index funds by net assets
  Number allowing                                            30            30            30
  Percent allowing                                          30%           30%           30%
Among the top 100 domestic stock funds by net assets
  Number allowing                                            11            13            24
  Percent allowing                                          11%           13%           24%
Among the top 100 money market funds by net assets
  Number allowing                                             6             6             6
  Percent allowing                                           6%            6%            6%

Source: Morningstar (2004) and imoneynet.com (2005).

Table V
Average Savings Account Fees and Minimum Balance Requirements, Nationally and in the Ten Largest Consolidated Metropolitan Statistical Areas (CMSAs) (2001)

Columns: Minimum Balance to Open Account / Monthly Fee / Minimum Balance to Avoid Monthly Fee / Annual Fee / Annual Fee as a Percent of Min Balance Requirement

  All Respondent Banks          $97    $2.20   $158   $26   27%
  New York                     $267    $3.10   $343   $37   14%
  Los Angeles                  $295    $2.80   $360   $34   11%
  Chicago                      $122    $3.50   $207   $43   35%
  District of Columbia         $100    $3.20   $152   $38   38%
  San Francisco                $275    $2.80   $486   $34   12%
  Boston                        $44    $2.70   $235   $33   75%
  Dallas                       $147    $3.20   $198   $38   26%
  Average, 10 Largest CMSAs    $179    $2.90   $268   $35   20%

Source: Board of Governors of the Federal Reserve (2002)

Table VI
Attributes of Common Savings Vehicles, February 9, 2005

Sources: bankrate.com, imoneynet.com, US Department of the Treasury (2005).
* Rate assuming early redemption in month 12 (first redemption date) and penalty of loss of three months of interest.

Yield
  Savings Bonds: Series EE 3.25% (2.44%*); Series I 3.67% (2.75%*)
  Savings Accounts: 1.59%
  Certificates of Deposit: 1-month 1.16%; 3-month 1.75%; 6-month 2.16%
  Money Market Mutual Funds: taxable 1.75%; non-taxable 1.25%
Preferential Tax Treatment
  Savings Bonds: Federal taxes deferred until time of redemption; state and local tax exempt
  Savings Accounts: None
  Certificates of Deposit: None
  Money Market Mutual Funds: None
Liquidity
  Savings Bonds: Required 12-month holding period; penalty for redemption before 5 years equal to loss of the prior three months of interest
  Savings Accounts: On demand
  Certificates of Deposit: Penalties for early withdrawal vary: all interest on a 30-day CD, 3 months of interest on an 18-month CD, 6 months of interest on a 2-year or longer CD
  Money Market Mutual Funds: On demand, but fees are assessed upon exit from the fund
Risk "Full faith and credit of US" No principal risk FDIC insurance to $100,000 FDIC insurance to $100,000 Risk to principal, although historically absent for Money Market Funds Minimum Purchase $25 Minimum opening deposit average $100 Generally, $500 Generally, $250 or more Credit Check None ChexSystems sometimes used ChexSystems sometimes used None 36 Table VII Savings Bonds (all series) Outstanding as a Percent of Total Domestic Deposits and Total Domestic Savings Deposits at Commerical Banks 0% 20% 40% 60% 80% 100% 120% 140% 160% 180% Fiscal Year 1937 1940 1943 1946 1949 1952 1955 1958 1961 1964 1967 1970 1973 1976 1979 1982 1985 1988 1991 1994 1997 2000 Savings Bonds Outstanding as a Percent of Commercial Bank Deposits Total Domestic Deposits Total Savings Deposits Source: US Treasury Department, Treasury Bulletin (1936-2003), FDIC (2004) 37 Table VIII Ownership of Select Financial Assets (1946 – 2001) 1946 1951 1960 1963 1970 1977 1983 1989 1992 1995 1998 2001 Checking Accounts 34% 41% 57% 59% 75% 81% 79% 81% 84% 85% 87% 87% Savings Accounts 39% 45% 53% 59% 65% 77% 62% n/a n/a n/a n/a 55% Transaction Account n/a n/a n/a n/a n/a n/a n/a 85% 88% 87% 91% 91% Savings Bonds 63% 41% 30% 28% 27% 31% 21% 24% 23% 23% 19% 17% Corporate Stock n/a n/a 14% 14% 25% 25% 19% 16% 18% 15% 19% 21% Mutual Funds n/a n/a n/a 5% n/a n/a n/a 7% 11% 12% 17% 18% Source: Aizcorbe, Kennickell, and Moore, (2003); Avery, Elliehausen, and Canner, (1984); Kennickell, Starr McLuer, and Surette, (2000); Kennickell and Shack -Marquez, (1992); Kennickell and Starr-McLuer, (1994); Kennickell, Starr-McLuer, and Sunden, (1997); Projector, Thorensen, Strader, and Schoenberg, (1966). Table IX Savings Bond Ownership by Income Quintile, 1957 and 2001. 1957 2001 Percent Decrease Bottom 20 12.8% 3.8% 70.3% Second 21.3% 11.0% 48.4% Third 27.4% 14.1% 48.5% Fourth 35.9% 17.4% 51.5% Top 20 44.9% 17.2% 61.8% Source: Hanc (1962) and Aizcorbe, Kennickell, and Moore (2003) 38 Appendix A: Savings Bonds Today Series EE and the Series I bonds are the two savings bonds products now available (Table VI summarizes the key features of the bonds in comparison to other financial products). 18 Both are accrual bonds; interest payments accumulate and are payable on redemption of the bond. Series EE bonds in paper form are sold at 50% of their face value (a $100 bond sells for $50) and, until May of 2005, accumulated interest at a variable “market rate” reset semiannually as 90% of the five-year Treasury securities yield on average over the prior 6 month period. However, as of May, the interest rate structure for EE bonds changed. Under the new rules, EE bonds earn a fixed rate of interest, set bi-annually in May and October. The rate is based on the 10 year Treasury bond yield, but the precise rate will be set “administratively” taking into account the tax privlidges of savings bonds and the early redemption option. 19 EE bonds are guaranteed to reach face value after 20 years, but continue to earn interest for an additional 10 years before the bond reaches final maturity (US Department of the Treasury (2003b)). Inflation-indexed I Bonds are sold at face value and accumulate interest at an inflation-adjusted rate for 30 years (Treasury (2003c)). 20 In terms of their basic economic structure of delivering fixed rates, EE savings bonds resemble fixed rate certificates of deposit (CDs). Backed by the “full faith and credit of the United States Government,” savings bonds have less credit risk than any private sector investment. 
(Bank accounts are only protected by the FDIC up to $100,000 per person.) Holders face no principal risk: rises in interest rates do not lead to a revaluation of principal, because the holder may redeem the bonds without penalty (after a certain point). Also, interest on I bonds is indexed to inflation rates. The holder does, however, face substantial short-term liquidity risk, as current rules do not allow a bond to be redeemed earlier than one year from the date of purchase (although this requirement may be waived in rare circumstances involving natural disasters or, on a case-by-case basis, individual financial problems). Bonds redeemed less than five years from the date of purchase are subject to a penalty equal to three months of interest. In terms of liquidity risk, savings bonds are therefore more similar to certificates of deposit than to MMMF or MMDA accounts. Interest earnings on EE and I Bonds are exempt from state and local taxes, but federal taxes must be paid either 1) when the bond is redeemed, 2) 30 years from the date of purchase, or 3) yearly.21 With respect to tax treatment, savings bonds are attractive relative to many private sector products.

Comparing the actual yields of savings bonds with those of other savings products is not simple. The rates of return on savings bonds vary, as do those on short-term CDs. Further, the true yield of savings bonds is influenced by their partially tax-exempt status as well as the penalties associated with early redemption. In order to get as accurate an estimate of yields as possible, we model realized returns over a five-year period with various assumptions regarding early redemption, yields, and taxes. Generally, EE bonds' performance is on par with that of average certificates of deposit with a 6-month, 2.5-year, or 5-year term, or a NOW account.22 While their returns are 10% less than the Treasury securities to which they are pegged, savings bond holders do not face the interest rate exposure and principal risk that holders of Treasury securities face, and they are able to buy the bonds in small, convenient denominations. It is more difficult to evaluate the Series I bonds, as U.S. private sector analogues for these instruments are scarce.

18 The current income bond, the Series HH, was discontinued in August of 2004 (United States Department of the Treasury, Bureau of Public Debt, Series HH/H Bonds, available online at www.publicdebt.treas.gov).
19 United States Department of the Treasury, Bureau of Public Debt, "EE Bonds Fixed Rate Frequently Asked Questions," available online at http://www.treasurydirect.gov/indiv/research/indepth/eefixedratefaqs.htm, last accessed June 23, 2005.
20 This inflation-adjusted rate is determined by a formula which is essentially the sum of a fixed real rate (set on the date of the bond issue) and the lagging rate of CPI inflation.
21 Under the Education Savings Bond Program, bondholders making qualified higher education expenditures may exclude some or all of the interest earned on a bond from their Federal taxes. This option is income-tested and is only available to joint filers making less than $89,750 and to single filers making less than $59,850 (IRS (2003)).
22 Historically, when they were offered in the 1940s to support World War II, Savings Bonds earned better rates than bank deposits (Samuel (1997)). Savings Bonds retained this advantage over savings accounts and over corporate AAA bonds (as well as CDs following their introduction in the early 1960s) through the late 1960s.
However, while rates on CDs and corporate bonds rose during the inflationary period of the late 1970s, yields on Savings Bonds did not keep pace and, even by the late 1990s, had not fully recovered their competitive position (Federal Reserve (2005)).

Appendix B: Patterns and Trends in Bond Ownership

Patterns of bond ownership and purchase have changed over time. Savings Bond sales rose rapidly in the early 1940s with the onset of World War II but then slowed substantially in the postwar period. Savings bond holdings benchmarked against domestic deposits in US commercial banks fell from 39% of total domestic deposits in 1949 to 5% of domestic deposits in 2002 (Table VII). In 1946, 63% of households held savings bonds. Over the next 60 years, savings bond ownership declined steadily, dropping to around 40% of households in the 1950s, to approximately 30% through the 1960s and 1970s, and then to near 20% for much of the 1980s and 1990s. The 16.7% ownership rate in 2001 appears to be the lowest since World War II. See Table VIII for savings bond and other saving product ownership rates over time.

In 2001, high-income and high-wealth households were far more likely to hold savings bonds than low-income and low-wealth households. While gaps nearly as wide or wider appear between income and wealth quintiles for stocks, mutual funds, and CDs (transaction account ownership is more nearly equal), the gap for savings bonds is of particular note given the product's original purpose of appealing to the "small saver." Historically, ownership of savings bonds was far more equal. While savings bond ownership is down from the 1950s' levels across households of all incomes, this shift is most pronounced among lower-income households. Table IX summarizes ownership rates by income in 1957 and 2001. Overall, in 2001 savings bond ownership was down 42% from 1957. For those in the lowest income quintile, savings bond ownership declined by 70%. Interestingly, savings bond ownership was off 62% in the highest income quintile. However, while large shares of high-income households now own stocks and mutual funds, ownership rates for these products are quite low (3-4%) among low-income households. If low-income households have moved savings from savings bonds to other products, it has most likely been into transaction accounts, not the more attractive investment vehicles more common among high-income households.
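Appendix A notes that comparing savings bond yields with those of other products requires modeling realized returns under assumptions about holding periods, redemption penalties, and taxes. As a rough, hypothetical illustration of the early-redemption penalty alone (not the authors' model), the sketch below uses the February 2005 rates from Table VI to compute the annualized return on a Series EE bond redeemed at different horizons.

```python
# Illustrative sketch only: annualized EE-bond return under the rules summarized in
# Appendix A and Table VI -- a 12-month minimum holding period and, for redemptions
# before 5 years, forfeiture of the final 3 months of interest. The 3.25% rate is the
# February 2005 figure from Table VI; monthly compounding is an assumption.

def ee_realized_yield(annual_rate: float, months_held: int) -> float:
    """Annualized return after the early-redemption penalty."""
    if months_held < 12:
        raise ValueError("EE bonds cannot be redeemed before 12 months")
    monthly = (1 + annual_rate) ** (1 / 12) - 1
    credited_months = months_held - 3 if months_held < 60 else months_held
    value = (1 + monthly) ** credited_months
    return value ** (12 / months_held) - 1

print(f"{ee_realized_yield(0.0325, 12):.2%}")  # ~2.4%, close to the 2.44% shown in Table VI
print(f"{ee_realized_yield(0.0325, 60):.2%}")  # 3.25% once the penalty no longer applies
```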
Copyright © President & Fellows of Harvard College

Social Enterprise Initiative Kick-Off
Tuesday, September 6, 2011

What is Social Enterprise at HBS?
• A multi-disciplinary approach to addressing societal issues through a managerial lens
• Applying innovative business practices
• Driving sustained, high-impact social change
• Grounded in the mission of HBS

Supporting a Dynamic Community Grounded in Practice
• Faculty and administrative leadership
• Alumni engagement
• Student engagement

Key Ingredient: Faculty Engagement
Faculty Co-Chairs: Herman B. "Dutch" Leonard and V. Kasturi "Kash" Rangan
SE Faculty Group and Extended SE Faculty Group: Allen Grossman, Warren McFarlan, Joshua Margolis, Michael Chu, Forest Reinhardt, Nava Ashraf, Julie Battilana, Joseph Bower, Dennis Campbell, Shawn Cole, Bill Sahlman, Amy Edmondson, Stephen Greyser, Andre Hagiu, Regina Herzlinger, James Heskett, Robert Higgins, Rosabeth Kanter, Rob Kaplan, Christopher Marquis, Alnoor Ebrahim, Karthik Ramanna, Luis Viceira, Arthur Segel, Andreas Nilsson, John J-H Kim, Michael Toffel, Youngme Moon, Eric Werker
Other Faculty Engaged in Specific SE Activities: Nicolas Retsinas, Jim Austin, Bob Eccles, Ray Goldberg, Kathleen McGinn, Ramana Nanda, Rebecca Henderson, Bob Kaplan, George Serafeim

Knowledge Generation
Since 1993, HBS faculty members have published more than 500 cases, 100 articles, and several books, including:
• Joining a Nonprofit Board: What You Need to Know (2011, McFarlan, Epstein)
• Leading for Equity: The Pursuit of Excellence in the Montgomery County Public Schools (2009, Childress, Doyle, and Thomas)
• SuperCorp: How Vanguard Companies Create Innovation, Profits, Growth, and Social Good (2009, Kanter)
• Business Solutions for the Global Poor (2007, Rangan, Quelch, Herrero, Barton)
• Entrepreneurship in the Social Sector (2007, Wei-Skillern, Austin, Leonard, Stevenson)
• Managing School Districts for High Performance (2007, Childress, Elmore, Grossman, Moore Johnson)

Key Ingredient: Administrative Engagement
SE Administrative Group: Director: Laura Moon; Director of Programs: Margot Dushin; Assistant Director: Keri Santos; Coordinator: Liz Cavano
Key Administrative Partners: Knowledge & Library Services, MBA Program Office, Admissions/Financial Aid, Executive Education, Registrar Services, Student and Academic Services, Donor Relations, Alumni Relations, MBA Career & Professional Development, Other Initiatives (BEI, HCI, Entrepreneurship, Global, Leadership), and other HBS administrative departments

A Little Bit About You
• Approximately 12% of the Class of 2013 has prior experience working in the nonprofit or public sectors (with about two-thirds coming to HBS directly from these sectors)
• You and your colleagues represent a breadth of experience
  - Including entrepreneurial ventures, for-profit efforts focused on social impact, funding organizations, government agencies, and nonprofit organizations
  - In issue areas including arts, education, economic development, environment, healthcare, human services, and international development
  - In countries and regions around the world
• Colleagues in the Class of 2012 reflect a similar profile
  - Approximately 8% of the class pursued Social Enterprise Summer Fellowships with organizations in 20+ countries around the world

Key Ingredient: Student Engagement
Catalyzing Student Involvement

SEI MBA Career Support Programs
Support spans four sectors (Private Sector, Social Entrepreneurship, Nonprofit Sector, Public Sector) and four stages (RC Year, Summer, EC Year, Post
HBS). Programs include the Summer Fellowship, Independent Project, Leadership Fellows, Goldsmith Fellowship, Social Entrepreneurship Fellowship, Bplan Contest, Loan Support, and Loan Repayment Assistance.

Social Enterprise Focused Student Clubs
• Social Enterprise Club
• Social Enterprise Conference
• Board Fellows
• Harbus Foundation
• Volunteer Consulting Organization
• Volunteers

Information, Resources and Staying Connected
• SEI Website—a Gateway to Information: www.hbs.edu/socialenterprise
• Periodic email announcements for SEI
• Mainstream HBS communications: MBA Events Calendar, MyHBS
• Student clubs: SEC Weekly e-Newsletter, other club communications
• Follow us on Twitter: HBSSEI

Next Month and Beyond
• Student Club Fair, September 8
• CPD Super Day—Social Enterprise Industry 101, September 23
• Social Enterprise Professional Perspectives Session, September 27
• Club kick-offs and events
• Social Enterprise Community Engagement Lunches
• …and more…
• And, now… join us for an ice-cream reception!
Copyright © 2010 by William R. Kerr, Josh Lerner, and Antoinette Schoar. Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.

The Consequences of Entrepreneurial Finance: A Regression Discontinuity Analysis
William R. Kerr, Josh Lerner, and Antoinette Schoar*
Working Paper 10-086

Abstract: This paper documents the role of angel funding for the growth, survival, and access to follow-on funding of high-growth start-up firms. We use a regression discontinuity approach to control for unobserved heterogeneity between firms that obtain funding and those that do not. This technique exploits the fact that a small change in the collective interest levels of the angels can lead to a discrete change in the probability of funding for otherwise comparable ventures. We first show that angel funding is positively correlated with higher survival, additional fundraising outside the angel group, and faster growth measured through growth in web site traffic. The improvements typically range between 30% and 50%. When using the regression discontinuity approach, we still find a strong, positive effect of angel funding on the survival and growth of ventures, but not on access to additional financing. Overall, the results suggest that the bundle of inputs that angel investors provide has a large and significant impact on the success and survival of start-up ventures.

* Harvard University; Harvard University; and MIT. All three authors are affiliates of the National Bureau of Economic Research. We thank James Geshwiler of CommonAngels, Warren Hanselman and Richard Sudek of Tech Coast Angels, and John May of the Washington Dinner Club for their enthusiastic support of this project and willingness to share data. We also thank the many entrepreneurs who responded to our inquiries. Harvard Business School's Division of Research and the Kauffman Foundation supported this research. Andrei Cristea provided excellent research assistance. All errors and omissions are our own.

One of the central and most enduring questions in the entrepreneurial finance literature is the extent to which early-stage financiers such as angels or venture funds have a real impact on the firms in which they invest. An extensive theoretical literature suggests that the combination of intensive monitoring, staged investments, and powerful control rights in these types of deals should alleviate agency problems between entrepreneurs and institutional investors (examples include Admati and Pfleiderer, 1994; Berglöf, 1994; Bergemann and Hege, 1998; Cornelli and Yosha, 2003; and Hellmann, 1998). This bundle of inputs, the works suggest, can lead to improved governance and operations in the portfolio firms, lower capital constraints, and ultimately stronger firm growth and performance. But the empirical documentation of this claim has been challenging. Hellmann and Puri (2000) provide a first detailed comparison of the growth path of venture-backed versus non-venture-backed firms.1
This approach, however, faces the natural challenge that unobserved heterogeneity across entrepreneurs, such as ability or ambition, might drive the growth path of the firms as well as the venture capitalists' decision to invest. These problems are particularly acute for evaluating early-stage investments. An alternative approach is to find exogenous shocks to the level of venture financing. Examples of such exogenous shocks are public policy changes (Kortum and Lerner, 2000), variations in endowment returns (Samila and Sorenson, 2010), and differences in state pension funding levels (Mollica and Zingales, 2007). These studies, however, can only examine the impact of entrepreneurial finance activity at an aggregate level. Given the very modest share that high-potential growth firms represent of all entrepreneurial ventures and economic activity overall, these studies face a "needle in the haystack" type of challenge to detect any results.

1 A similar approach is taken in Puri and Zarutskie (2008) and Chemmanur et al. (2009), who employ comprehensive Census Bureau records of private firms to form more detailed control groups based on observable characteristics.

This paper takes a fresh look at the question of whether entrepreneurial financiers affect the success and growth of new ventures. We focus on a neglected segment of entrepreneurial finance: angel investments. Angel investors have received much less attention than venture capitalists, despite the fact that some estimates suggest that these investors are as significant a force for high-potential start-up investments as venture capitalists, and even more significant investors elsewhere (Shane, 2008; Goldfarb et al., 2007; Sudek et al., 2008). Angel investors are increasingly structured as semi-formal networks of high net worth individuals, often former entrepreneurs themselves, who meet at regular intervals (usually once a month for breakfast or dinner) to hear aspiring entrepreneurs pitch their business plans. The angels then decide whether to conduct further due diligence and ultimately whether to invest in some of these deals, either individually or in subgroups of the members. Similar to traditional venture capital investments, angel investment groups often adopt a very hands-on role in the deals they get involved in and provide entrepreneurs with advice and contacts to potential business partners.

In addition to their inherent interest as funders of early-stage companies, angel investment groups are distinguished from the majority of traditional venture capital organizations by the fact that they make their investment decisions through well-documented collections of interest and, in some cases, formal votes. By way of contrast, the venture firms that we talked to all employ a consensual process, in which controversial proposals are withdrawn before coming up for a formal vote or disagreements are resolved in conversations before the actual voting takes place. In addition, venture firms also rarely document the detailed voting behind their decisions. Angel investors, in contrast, express their interest in deals independently from one another and based upon personal assessment. This allows us to observe the level of support, or lack thereof, for the different deals that come before the angel group. These properties allow us to undertake a regression discontinuity design using data from two angel investment groups.
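In spirit, the design amounts to comparing ventures that fall just above and just below the funding threshold. The minimal sketch below illustrates that comparison with made-up data and hypothetical column names (the actual deal-level records are confidential); the 20-angel cutoff and the 10-34 window anticipate the Tech Coast Angels analysis in Section 2.

```python
# Minimal sketch of the border comparison, with made-up data and hypothetical
# column names; the cutoff (20 interested angels) and the 10-34 window follow
# the Tech Coast Angels discussion in Section 2.
import pandas as pd

deals = pd.DataFrame({
    "interest_count": [11, 14, 17, 19, 21, 23, 27, 33],  # angels expressing interest
    "funded":         [0,  0,  1,  0,  1,  1,  1,  1],
    "alive_2010":     [0,  1,  1,  0,  1,  1,  0,  1],
})

CUTOFF = 20
border = deals[deals["interest_count"].between(10, 34)]        # drop the extremes
border = border.assign(above=border["interest_count"] >= CUTOFF)

# Funding rates and outcomes just above vs. just below the cutoff.
print(border.groupby("above")[["funded", "alive_2010"]].mean())
```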
This approach, while today widely used in program evaluations by economists (Lee and Lemieux, 2009), remains underutilized in financial economics (exceptions include Rauh, 2006; and Cherenko and Sunderam, 2009). We essentially compare firms that fall just above and just below the criteria for funding by the angel group. The underlying identification relies on the idea that firms that fall just around the cut-off level have very similar ex ante characteristics, which allows us to estimate the causal effect of obtaining angel financing. After showing the ex ante comparability of the ventures in the border region, we examine differences in their long-run performance. In this way, we can employ micro-data on firm outcomes while minimizing the problem of unobserved heterogeneity between the funded and rejected transactions.

Several clear patterns emerge from our analysis. First, and maybe not surprisingly, companies that elicit higher interest in initial voting at the angel meeting are far more likely to be ultimately funded by the angel groups. More importantly, angel groups display break points, or discontinuities, where a small change in the collective interest levels of the angels leads to a discrete change in the probability of funding among otherwise comparable ventures. This provides a powerful empirical foothold for overcoming quality differences and selection bias between funded and unfunded ventures. Second, we look at the impact of angel funding on performance and access to follow-on financing for firms that received angel funding compared to those that did not. We first compare the outcomes for the full sample of firms that pitched to the angels and then narrow our identification strategy to the firms that fall just above and below the funding breakpoint we identified. We show that funded firms are significantly more likely to survive for at least four years (or until 2010) and to raise additional financing outside the angel group. They are also more likely to show improved venture performance and growth as measured through growth in web site traffic and web site rankings. The improvement gains typically range between 30% and 50%. An analysis of ventures just above and below the threshold, which removes the endogeneity of funding and many omitted variable biases, confirms the importance of receiving angel investments for the survival and growth of the venture. However, we do not see an impact of angel funding on accessing additional financing using this regression discontinuity approach. This may suggest that access to additional financing might often be a by-product of how angel-funded firms grow but that this path may not be essential for venture success as we measure it. In addition, the result on follow-on venture funding might underline that, in the time period we study, prior angel financing was not an essential prerequisite to accessing follow-on funding. However, the results overall suggest that the bundle of inputs that angel investors provide has a large and significant impact on the success and survival of the firms.

Finally, we also show that the impact of angel funding on firm outcomes would be overstated if we looked at the full distribution of ventures that approach the angel groups, since there is a clear correlation between the quality of the start-up and the level of interest. Simply restricting the treatment and control groups to a narrow range around the border discontinuity reduces the measured effects by a quarter from the raw correlations.
This result reinforces the need to focus on the regression discontinuity approach we follow in this paper. Thus, this paper provides a fresh look and new evidence on an essential question in entrepreneurial finance. It quantifies the positive impact that angel investors make to the companies that they fund in a way that simultaneously exploits novel, rich micro-data and addresses concerns about unobserved heterogeneity. Our work is closest in spirit to the papers in the entrepreneurial finance literature that focus on the investment process of venture capitalists. For example, Sorensen (2007) assesses the returns to being funded by different tiers of investors. Our work instead focuses on the margin of obtaining initial funding or not. Kaplan and Strömberg (2004) and Kaplan et al. (2009) examine the characteristics and dimensions that venture capitalists rely on when making investment decisions.

The plan of this paper is as follows. Section 1 reviews the angel group investment process. Section 2 introduces our angel investment data and describes our methodology. Section 3 introduces our outcomes data. Section 4 presents the analysis. The final section concludes the paper.

1. The Angel Group Investment Process

Angel investments—or equity investments by individuals into high-risk ventures—are among the oldest of human commercial activities, dating back at least as far as the investment agreements recorded in the Code of Hammurabi circa 1790 B.C. For most of American economic history, angels represented the primary way in which entrepreneurs obtained high-risk capital for start-up businesses (e.g., Lamoreaux, Levenstein and Sokoloff, 2004), whether directly through individuals or through the offices that managed the wealth of high net worth individuals beginning in the last decades of the nineteenth century. Wealthy families such as the Phippses, Rockefellers, Vanderbilts, and Whitneys invested in and advised a variety of business enterprises, including the predecessor entities to AT&T, Eastern Airlines, McDonnell Douglas, and W.R. Grace. The first formal venture capital firm, however, was not established until after World War II: American Research and Development (ARD) was formed by MIT President Karl Compton, Harvard Business School Professor Georges F. Doriot, and Boston business leaders in 1946. Over time, a number of the family offices transformed as well into stand-alone venture firms, including such groups as Bessemer, Venrock, and J.H. Whitney.

While angel investors have a long history, angel investment groups are a quite recent phenomenon. Beginning in the mid-1990s, angels began forming groups to collectively evaluate and invest in entrepreneurial ventures. These groups are seen as having several advantages by the angels. First, angels can pool their capital to make larger investments than they could otherwise. Second, each angel can invest smaller amounts in individual ventures, allowing participation in more opportunities and diversification of investment risks. Third, they can undertake costly due diligence of prospective investments as a group, reducing the burdens for individual members. Fourth, these groups are generally more visible to entrepreneurs and thus receive a superior deal flow. Finally, the groups frequently include some of the most sophisticated and active angel investors in the region, which results in superior decision-making. The Angel Capital Association (ACA) lists 300 American groups in its database.
The average ACA member angel group had 42 member angels and invested a total of $1.94 million in 7.3 deals in 2007. Between 10,000 and 15,000 angels are believed to belong to angel groups in the U.S.2

Most groups follow a template that is more or less similar. Entrepreneurs typically begin the process by submitting to the group an application that may also include a copy of their business plan or executive summary. The firms, after an initial screening by the staff, are then invited to give a short presentation to a small group of members, followed by a question-and-answer session. Promising companies are then invited to present at a monthly meeting (often a weekday breakfast or dinner). The presenting companies that generate the greatest interest then enter a detailed due diligence process, although the extent to which due diligence and screening leads or follows the formal presentation varies across groups. A small group of angel members conducts this additional, intensive evaluation. If all goes well, this process results in an investment one to three months after the presentation. Figure 1 provides a detailed template for Tech Coast Angels (Sudek et al. 2008).

2 Statistics are based on http://www.angelcapitalassociation.org/ (accessed February 15, 2010).

2. Angel Group Data and Empirical Methodology

This section jointly introduces our data and empirical methodology. The discussion is organized around the two groups from which we have obtained large datasets. The unique features of each investment group, their venture selection procedures, and their data records require that we employ conceptually similar, but operationally different, techniques for identifying group-specific discontinuities. We commence with Tech Coast Angels, the larger of our two investment groups, and we devote extra time in this first data description to also convey our empirical approach and the biases it is meant to address. We then describe our complementary approach with CommonAngels and how we ultimately join the two groups together to analyze their joint behavior.

2.1. Tech Coast Angels

Tech Coast Angels is a large angel investment group based in southern California. They have over 300 angels in five chapters seeking high-growth investments in a variety of high-tech and low-tech industries. The group typically looks for funding opportunities of $1 million or less. Additional details on this venture group are available at http://www.techcoastangels.com/. Tech Coast Angels kindly provided us with access to their database regarding prospective ventures under explicit restrictions that the confidentiality of individual ventures and angels remain secure. For our study, this database was exceptional in that it allowed us to fully observe the deal flow of Tech Coast Angels. Our analysis considers ventures that approached Tech Coast Angels between 2001 and 2006. We thus mainly build upon data records that existed in early 2007. At this time, there were over 2,500 ventures in the database. The database is also exceptional in that it has detailed information about many of the companies that are not funded by Tech Coast Angels.

We first document in Table 1 the distribution of interest from the angel investors across the full set of potential deals. This description sets the stage for identifying a narrower group of firms around a funding discontinuity that offers a better approach for evaluating the consequences of angel financing.
Table 2 then evaluates the ex ante comparability of deals around the border, which is essential for the identification strategy.

The central variable for the Tech Coast Angels analysis is a count of the number of angels expressing interest in a given deal. This indication of interest does not represent a financial commitment, but instead expresses a belief that the venture should be pursued further by the group. The decision to invest ultimately depends upon a few angels taking the lead and championing the deal. While this strength of conviction is unobserved, we can observe how funding relates to obtaining a critical mass of interested angels.

Table 1 documents the distribution of deals and angel interest levels. The first three columns of Table 1 show that 64% of ventures receive no interest at all. Moreover, 90% of all ventures receive interest from fewer than ten angels. This narrowing funnel continues until the highest bracket, where there are 44 firms that receive interest from 35 or more angels. The maximum observed interest is 191 angels. This funnel shares many of the anecdotal traits of venture funding—such as selecting a few worthy ventures out of thousands of business plans—but it is exceptionally rare to have the interest level documented consistently throughout the distribution and independent of actual funding outcomes.

The shape of this funnel has several potential interpretations. It may reflect heterogeneity in quality among companies that are being pitched to the angels. It could also reflect simple industry differences across ventures. For example, the average software venture may receive greater interest than a medical devices company if there are more angels within the group involved in the software industry. There could also be an element of herding around "hot deals." But independent of what exactly drives this investment behavior of angels, we want to explore whether there are discontinuities in interest levels such that small changes in angels expressing interest among otherwise comparable deals result in material shifts in funding probability.

The central idea behind this identification strategy is that angel interest in ventures does not map one-to-one into quality differences across ventures, which we verify empirically below. Instead, there is some randomness or noise in why some firms receive n votes and others receive n+1. It is reasonable to believe that there are enough idiosyncrasies in the preferences and beliefs of angels so that the interest count does not present a perfect ranking of the quality of the underlying firms. Certainly, the 2% of ventures with 35 or more interested angels are not comparable to the 64% of ventures with zero interest. But we will show that ventures with 18 votes and 22 votes are much more comparable, except that the latter group is much more likely to be funded.

We thus need to demonstrate two pieces. First, we need to identify where in the distribution small changes in interest levels lead to a critical mass of angels, and thus a substantial increase in funding probability. As Tech Coast Angels does not have explicit funding rules that yield a mandated cut-off, we must identify from observed behavior where de facto breaks exist. We then need to show that deals immediately above and below this threshold appear similar at the time that they approached Tech Coast Angels.
To investigate the first part, the last column of Table 1 documents the fraction of ventures in each interest group that are ultimately funded by Tech Coast Angels. None of the ventures with zero interest are funded, whereas over 40% of deals in the highest interest category are. The rise in funding probability is monotonic with interest level, excepting some small fluctuations at high interest levels. There is a very stark jump in funding probability between interest levels of 15-19 angels and 20-24 angels, where the funded share increases from 17% to 38%. This represents a distinct and permanent shift in the relationship between funding and interest levels. We thus identify this point as our discontinuity for Tech Coast Angels.

In most of what follows, we discard deals that are far away from this threshold, and instead look around the border. We specifically drop the 90% of deals with fewer than ten interested angels, and we drop the 44 deals with very high interest levels. We designate our "above border" group as those ventures with interest levels of 20-34; our "below border" group is defined as ventures with interest levels of 10-19.

Having identified the border discontinuity from the data, we now verify the second requirement: that ventures above and below the border look ex ante comparable, except that they received funding from Tech Coast Angels. This step is necessary to assert that we have identified a quasi-exogenous component of angel investing that is not merely reflecting underlying quality differences among the firms. Once established, a comparison of the outcomes of above-border versus below-border ventures will provide a better estimate of the role of angel financing in venture success, as the quality differences inherent in Table 1's distribution will be removed.
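The de facto break can be read directly off the selection funnel. The sketch below, which uses only the bracket-level figures reported in Table 1 (the venture-level records themselves are confidential), tabulates the funded share by interest bracket and flags the largest jump.

```python
# Sketch: locate the de facto funding break from the Table 1 selection funnel by
# finding the largest jump in the share of ventures funded across interest brackets.
import pandas as pd

funnel = pd.DataFrame({
    "bracket":      ["0", "1-4", "5-9", "10-14", "15-19", "20-24", "25-29", "30-34", "35+"],
    "ventures":     [1640, 537, 135, 75, 52, 42, 33, 21, 44],
    "share_funded": [0.000, 0.007, 0.037, 0.120, 0.173, 0.381, 0.303, 0.286, 0.409],
})

funnel["jump"] = funnel["share_funded"].diff()
print(funnel.loc[funnel["jump"].idxmax(), "bracket"])   # "20-24": the 17% -> 38% jump
```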
The first four rows show that basic characteristics like the amount of funding 13 requested, the documents provided by the venture to the angels, and the firm‘s number of managers and employees are not materially different for the firms above and below the discontinuity. The same is true for industry composition and stage of the business (e.g., is the firm in the idea stage, in its initial marketing and product development stage, or already revenue generating). For all of these traits, the null hypothesis that the two groups are similar is not rejected. While there are no observable differences in the characteristics of the ventures in the first three panels, the fourth panel of Table 2 shows that there are significant differences in how angels engage with ventures above and below the cut-off. With just a small adjustment in interest levels, angels assemble many more documents regarding the venture (evidence of due diligence), have more discussion points in their database about the opportunity, and ultimately are 60% more likely to fund the venture. All of these differences are statistically significant. 2.2. CommonAngels CommonAngels is the leading angel investment group in Boston, Massachusetts. They have over 70 angels seeking high-growth investments in high-tech industries. The group typically looks for funding opportunities between $500 thousand and $5 million. Additional details on this venture group are available at http://www.commonangels.com. CommonAngels kindly provided us with access to their database regarding prospective ventures under explicit restrictions that the confidentiality of individual ventures and angels remain secure. The complete database for CommonAngels as of early 2007 contains over 2000 ventures. The funnel process is again such that a small fraction of ventures receive funding. 14 Unlike the Tech Coast Angels data, however, CommonAngels does not record interest for all deals. We thus cannot explicitly construct a distribution similar to Table 1. CommonAngels does, however, conduct a paper-based poll of members following pitches at its monthly breakfast meetings. Most importantly, attending angels give the venture an overall score. Angels also provide comments about ventures and potential investments they might make in the company. Figure 2 provides a recent evaluation sheet. We focus on the overall score provided by angels for the venture as this metric is collected on a consistent basis throughout the sample period. CommonAngels provided us with the original ballots for all pitches between 2001 and 2006. After dropping two poor quality records, our sample has 63 pitches in total. One potential approach would be to order deals by the average interest levels of angels attending the pitch. We find, however, that the information content in this measure is limited. Instead, the data strongly suggest that the central funding discontinuity exists around the share of attending angels that award a venture an extremely high score. During the six years covered, CommonAngels used both a five and ten point scale. It is extremely rare that an angel awards a perfect score to a pitch. The breaking point for funding instead exists around the share of attending angels that award the pitch 90% or more of the maximum score (that is, 4.5 out of 5, 9 out of 10). This is close in spirit to the dichotomous expression of interest in the Tech Coast Angels database. Some simple statistics describe the non-linear effect. 
Of the 63 pitches, 14 ventures receive a 90% or above score from at least one angel; no deal receives such a score from more than 40% of attending angels. Of these 14 deals, 7 deals are ultimately funded by CommonAngels. Of the 49 other deals, only 11 are funded. This stark discontinuity is not present when looking at lower cut-offs for interest levels. For example, all but 12 ventures receive at least one vote that is 80% of the maximum score (that is, 4 out of 5, or 8 out of 10). There is further no material difference in funding probability based upon receiving more or fewer 80% votes. The same applies to lower cut-offs for interest levels.

We restrict the sample to the 43 deals that have at least 20% of the attending angels giving the presentation a score that is 80% of the maximum possible score or above. As a specific example, a venture is retained after presenting to a breakfast meeting of 30 angels if at least six of those angels score the venture as 8 out of 10 or higher. This step removes the weakest presentations and ventures. We then define our border groups based upon the share of attending angels that give the venture a score greater than or equal to 90% of the maximum possible score. To continue our example, a venture is considered above border if it garners six or more angels awarding the venture 9 out of 10 or better. A venture with only five angels at this extreme value is classified as below border. While distinct, this procedure is conceptually very similar to the sample construction and culling undertaken with the Tech Coast Angels data. We only drop 20 CommonAngels pitches that receive low scores, but that is because the selection into providing a formal pitch to the group itself accomplishes much of the pruning. With Tech Coast Angels, we drop 90% of the potential deals due to low interest levels. We implicitly do the same with CommonAngels by focusing only on the 63 pitches out of over 2,000 deals in the full database.

Our formal empirical analyses jointly consider Tech Coast Angels and CommonAngels. To facilitate this merger, we construct simple indicator variables for whether a venture is funded or not. We likewise construct an indicator variable for above and below the border discontinuity. We finally construct uniform industry measures across the groups. This pooling produces a regression sample of 130 ventures.

3. Outcome Data

This section documents the data that we collect on venture outcomes. This is the most significant challenge for this type of project, as we seek comparable data for both funded and unfunded ventures. In many cases, the prospective deals are small and recently formed, and may not even be incorporated. We develop three broad outcomes: venture survival, venture growth and performance as measured by web site traffic data, and subsequent financing events.

3.1. Venture Survival

Our simplest measure is firm survival as of January 2010. This survival date is a minimum of four years after the potential funding event with the angel group. We develop this measure through several data sources. We first directly contacted as many ventures as possible to learn their current status. Second, we looked for evidence of the ventures' operations in the CorpTech and VentureXpert databases. Finally, we examined every venture's web site if one exists. Existence of a web site is not sufficient for being alive, as some ventures leave a web site running after closing operations. We thus based our measurement on how recent various items like press releases were. In several cases, ventures were acquired prior to 2010. We coded whether the venture was alive or not through a judgment of the size of the acquisition. Ventures are counted as alive if the acquisition or merger was a successful exit that included major announcements or large dollar amounts. If the event was termed an "asset sale" or similar, we code the venture as not having survived. The results below are robust to simply dropping these cases.

3.2. Venture Performance and Web Site Traffic

Our second set of metrics quantifies whether ventures are growing and performing better in the period after the potential financing event. While we would ideally consider a range of performance variables like employment, sales, and product introductions, obtaining data on private, unfunded ventures is extremely challenging. A substantial number of these ventures do not have employees, which limits their coverage even in comprehensive datasets like the Census Bureau surveys. We are able to make traction, however, through web traffic records. To our knowledge, this is the first time that this measure has been employed in an entrepreneurial finance study.

We collected web traffic data from www.alexa.com, one of the largest providers of this type of information. Alexa collects its data primarily by tracking the browsing patterns of web users who have installed the Alexa Toolbar, a piece of software that attaches itself to a user's Internet browser and records the user's web use in detail. According to the company, there are currently millions of such users. The statistics are then extrapolated from this user subset to the Internet population as a whole. The two "building block" pieces of information collected by the toolbar are web reach and page views. Web reach is a measure of what percentage of the total number of Internet users visit a website in question, and page views measures how many pages, on average, they visit on that website. Multiple page views by the same user in the same day only count as one entry in the data. The two usage variables are then combined to produce a variable known as site rank, with the most visited sites like Yahoo and Google having lower ranks.

We collected web traffic data in the summer of 2008 and in January 2010. We identify 91 of our 130 ventures in one of the two periods, and 58 ventures in both periods. The absolute level of web traffic and its rank are very dependent upon the specific traits and business models of ventures. This is true even within broad industry groups, as degrees of customer interaction vary. Some venture groups may also wish to remain "under the radar" for a few years until they are ready for product launch or have obtained intellectual property protection for their work. Moreover, the collection method by Alexa may introduce biases for certain venture types. We thus consider the changes in web performance for the venture between the two periods. These improvements or declines are more generally comparable across ventures. One variable simply compares the log ratio of the web rank in 2010 to that in 2008. This variable is attractive in that it measures the magnitudes of improvements and declines in traffic. A limitation, however, is that it is only defined for ventures whose web sites are active in both periods. We thus also define a second outcome measure as an indicator variable for improved venture performance on the web.
In several cases, ventures have been acquired prior to 2010. We coded whether the venture was alive or not through a judgment of the size of the acquisition. Ventures are counted as alive if the acquisition or merger was a successful exit that included major announcements or large dollar amounts. If the event was termed an ?asset sale? or similar, we code the venture as not having survived. The results below are robust to simply dropping these cases.17 3.2. Venture Performance and Web Site Traffic Our second set of metrics quantify whether ventures are growing and performing better in the period after the potential financing event. While we would ideally consider a range of performance variables like employment, sales, and product introductions, obtaining data on private, unfunded ventures is extremely challenging. A substantial number of these ventures do not have employees, which limits their coverage even in comprehensive datasets like the Census Bureau surveys. We are able to make traction, however, through web traffic records. To our knowledge, this is the first time that this measure has been employed in an entrepreneurial finance study. We collected web traffic data from www.alexa.com, one of the largest providers of this type of information. Alexa collects its data primarily by tracking the browsing patterns of web users who have installed the Alexa Toolbar, a piece of software that attaches itself onto a user‘s Internet browser and records the user‘s web use in detail. According to the company, there are currently millions of such users. The statistics are then extrapolated from this user subset to the Internet population as a whole. The two =building block‘ pieces of information collected by the toolbar are web reach and page views. Web reach is a measure of what percentage of the total number of Internet users visit a website in question, and page views measures how many pages, on average, they visit on that website. Multiple page views by the same user in the same day only count as one entry in the data. The two usage variables are then combined to produce a variable known as site rank, with the most visited sites like Yahoo and Google having lower ranks. We collected web traffic data in the summer of 2008 and January 2010. We identify 91 of our 130 ventures in one of the two periods, and 58 ventures in both periods. The absolute level of web traffic and its rank are very dependent upon the specific traits and business models of 18 ventures. This is true even within broad industry groups as degrees of customer interaction vary. Some venture groups may also wish to remain ?under the radar? for a few years until they are ready for product launch or have obtained intellectual property protection for their work. Moreover, the collection method by Alexa may introduce biases for certain venture types. We thus consider the changes in web performance for the venture between the two periods. These improvements or declines are more generally comparable across ventures. One variable simply compares the log ratio of the web rank in 2010 to that in 2008. This variable is attractive in that it measures the magnitudes of improvements and declines in traffic. A limitation, however, is that it is only defined for ventures whose web sites are active in both periods. We thus also define a second outcome measure as an indicator variable for improved venture performance on the web. 
If we observe the web ranks of a company in both 2008 and 2010, the indicator variable takes a value of one if the rank in 2010 is better than that in 2008. If we only observe the company on the web in 2008, we deem its web performance to have declined by 2010. Likewise, if we only observe a company in 2010, we deem its web performance to have improved. This technique allows us to consider all 91 ventures for which we observe web traffic at some point, while sacrificing the granularity of the other measure. 3.3. Subsequent Financing Events Our final measures describe whether the venture received subsequent financing external to the angel group. We define this measure through data collected from CorpTech and VentureXpert, cross-checked with as many ventures directly as possible. We consider a simply indicator variable for a subsequent, external financing and a count of the number of financing rounds.19 4. Results This section documents our empirical results. We first more closely examine the relationship between border investments and angel funding. We then compare the subsequent outcomes of funded ventures with non-funded ventures; we likewise compare above border ventures with those below the discontinuity. 4.1. Border Discontinuities and Angel Funding Table 3 formally tests that there is a significant discontinuity in funding around the thresholds for the ventures considered by Tech Coast Angels and CommonAngels. The dependent variable is an indicator variable that equals one if the firm received funding and zero otherwise. The primary explanatory variable is an indicator variable for the venture being above or below the interest discontinuity. Column 1 controls for angel group fixed effects, year fixed effects, and industry fixed effects. Year fixed effects are for the year that the venture approached the angel group. These regressions combine data from the two angel groups. Across these two groups, we have 130 deals that are evenly distributed above and below the discontinuity. We find that there is a statistically and economically significant relationship between funding likelihood and being above the border: being above the border increases funding likelihood by about 33%. Clearly, the border line designation is not an identity or perfect rule, but it does signify a very strong shift in funding probability among ventures that are ex ante comparable as shown in Table 2. Column 2 shows similar results when we add year*angel group fixed effects. These fixed effects control for the secular trends of each angel group. The funding jump also holds for20 each angel group individually. Column 3 repeats the regression controlling for deal characteristics like firm size and number of employees at the time of the pitch. The sample size shrinks to 87 as we only have this information for Tech Coast Angel deals. But despite the smaller sample size, we still find a significant difference in funding probability. The magnitude of the effect is comparable to the full sample at 29%. Unreported regressions find a groupspecific elasticity for CommonAngels of 0.45 (0.21). These patterns further hold in a variety of unreported robustness checks. These results suggest that the identified discontinuities provide a reasonable identification strategy. 4.2. The Impact of Funding on Firm Outcomes We now look at the relationship between funding and firm outcomes. 
In the first column of Table 4, we regress a dummy variable for whether the venture was alive in 2010 on the indicator for whether the firm received funding from the angel group. We control for angel group, year, and industry fixed effects. The coefficient on indicator variable is 0.27 and is statistically significant at the 1% level. Firms that received angel funding are 27% more likely to survive for at least 4 years. Columns 2 through 5 repeat this regression specification for the other outcomes variables. Funded companies show improvements in web traffic performance. Funded ventures are 16% more likely to have improved performance, but this estimate is not precisely measured. On the other hand, our intensive measure of firm performance, the log ratio of web site ranks, finds a more powerful effect. Funded ventures show on average 39% stronger improvements in web rank than unfunded ventures.21 Finally, we estimate whether angel funding promotes future funding opportunities. We only look at venture funding external to the angel group in question. Column 4 finds a very large effect: angel funding increases the likelihood of subsequent venture investment by 44%. This relationship is very precisely measured. Column 5 also shows a positive relationship to a count of additional venture rounds. Funded firms have about 3.8 more follow-on funding rounds than those firms that did not get angel funding in the first place. Of course, we cannot tell from this analysis whether angel-backed companies pursue different growth or investment strategies and thus have to rely on more external funding. Alternatively, the powerful relationships could reflect a supply effect where angel group investors and board members provide networks, connections, and introductions that help ventures access additional funding. We return this issue below after viewing our border discontinuity results. 4.3. The Role of Sample Construction The results in Table 4 suggest an important association between angel funding and venture performance. In describing our data and empirical methodology, we noted several ways that our analysis differed from a standard regression. We first consider only ventures that approach our angel investors, rather than attempting to draw similar firms from the full population of business activity to compare to funded ventures. This step helps ensure ex ante comparable treatment and control groups in that all the ventures are seeking funding. Second, we substantially narrow even this distribution of prospective deals (illustrated in Table 1) until we have a group of funded and unfunded companies that are ex ante comparable (show in Table 2). 22 This removes heterogeneous quality in the ventures that approach the angel investors. Finally, we introduce the border discontinuity to bring exogenous variation in funding outcomes. Before proceeding to the border discontinuity, it is useful to gauge how much the second step— narrowing the sample of ventures to remove quality differences inherent in the selection funnel—influences our regression estimates. Table 5 presents this analysis for one outcome variable and the Tech Coast Angels data. We are restricted to only one outcome variable by the intense effort to build any outcomes data for unfunded ventures. The likelihood of receiving subsequent venture funding is the easiest variable to extend to the full sample. The first column repeats a modified, univariate form of Column 4 in Table 4 with just the Tech Coast Angels sample. 
The elasticities are very similar. The second column expands the sample to include 2,385 potential ventures in the Tech Coast Angels database. The elasticity increases 25% to 0.56. The difference in elasticities between the two columns demonstrates the role of sample construction in assessing angel funding and venture performance. The narrower sample provides a more comparable control group. Our rough estimate of the bias due to not controlling for heterogeneous quality is thus about a quarter of the true association.

4.4. Border Discontinuities and Firm Outcomes

Table 6 considers venture outcomes and the border discontinuity. Even after eliminating observable heterogeneity through sample selection, the results in Table 4 are still subject to the criticism that ventures are endogenously funded. Omitted variables may also be present. Looking above and below the funding discontinuity helps us to evaluate whether the ventures that looked ex ante comparable, except in their probability of being funded, are now performing differently. This test provides a measure of exogeneity to the relationship between angel financing and venture success. Table 6 has the same format as Table 5; the only difference is that the explanatory variable is the indicator variable for being above the funding border. The results are similar in direction and magnitude for the first three outcomes, although the coefficients in Tables 5 and 6 are not directly comparable in a strict sense. Being above the border is associated with stronger chances for survival and better operating performance as measured by web site traffic. This comparability indicates that endogeneity in funding choices and omitted variable biases are not driving these associations for the impact of angel financing. On the other hand, the last two columns show no relationship between being above the border discontinuity and improved funding prospects in later years. Our experiment thus does not confirm that angel financing leads to improved future investment flows to portfolio companies. This may indicate that the least-squares association between current financing and future financing reflects the investment and growth strategies of the financiers, but that this path is not necessary for venture success as measured by our outcome variables. This interpretation, however, should be treated with caution, as we are not able to measure a number of outcomes that would be of interest (e.g., the ultimate value of the venture at exit).

5. Conclusions and Interpretations

The results of this study, and our border analysis in particular, suggest that angel investments improve entrepreneurial success. By looking above and below the discontinuity in a restricted sample, we remove the most worrisome endogeneity problems and the sorting between ventures and investors. We find that the localized increases in interest by angels at break points, which are clearly linked to obtaining critical mass for funding, are associated with discrete jumps in future outcomes like survival and stronger web traffic performance. Our evidence regarding the role of angel funding for access to future venture financing is more mixed. The latter result could suggest that start-up firms during that time period had a number of funding options and thus could go to other financiers when turned down by our respective angel groups. Angel funding per se was not central in whether the firm obtained follow-on financing at a later point.
However, angel funding by one of the groups in our sample does positively affect the long-run survival and web traffic of the start-ups. We do not want to push this asymmetry too far, but one might speculate that access to capital per se is not the most important value added that angel groups bring. Our results suggest that some of the "softer" features, such as their mentoring or business contacts, may help new ventures the most. Overall, we find that the interest levels of angels at the stages of the initial presentation and due diligence are predictive of investment success. However, additional screening and evaluation do not substantially improve the selection and composition of the portfolio further. These findings suggest that the selection and screening process is efficient at sorting proposals into approximate bins: complete losers, potential winners, and so on. The process has natural limitations, however, in further differentiating among the potential winners (e.g., Kerr and Nanda, 2009). At the same time, this paper has important limitations. Our experiment does not allow us to identify the costs to ventures of angel group support (e.g., Hsu, 2004), as equity positions in the counterfactual, unfunded ventures are not defined. We thus cannot evaluate whether taking the money was worth it from the entrepreneur's perspective after these costs are considered. On a similar note, we have looked at just a few of the many angel investment groups that are active in the US. Our groups are professionally organized and managed, and it is important for future research to examine a broader distribution of investment groups and their impact on venture success. This project demonstrates that angel investments are important and also offer an empirical foothold for analyzing many important questions in entrepreneurial finance.

References

Admati, A., and Pfleiderer, P. 1994. Robust financial contracting and the role for venture capitalists. Journal of Finance 49, 371–402.
Berglöf, E. 1994. A control theory of venture capital finance. Journal of Law, Economics, and Organization 10, 247–67.
Bergemann, D., and Hege, U. 1998. Venture capital financing, moral hazard, and learning. Journal of Banking and Finance 22, 703–35.
Chemmanur, T., Krishnan, K., and Nandy, D. 2009. How does venture capital financing improve efficiency in private firms? A look beneath the surface. Unpublished working paper, Center for Economic Studies.
Cherenko, S., and Sunderam, A. 2009. The real consequences of market segmentation. Unpublished working paper, Harvard University.
Cornelli, F., and Yosha, O. 2003. Stage financing and the role of convertible debt. Review of Economic Studies 70, 1–32.
Goldfarb, B., Hoberg, G., Kirsch, D., and Triantis, A. 2007. Are angels preferred Series A investors? Unpublished working paper, University of Maryland.
Hellmann, T. 1998. The allocation of control rights in venture capital contracts. RAND Journal of Economics 29, 57–76.
Hellmann, T., and Puri, M. 2000. The interaction between product market and financing strategy: the role of venture capital. Review of Financial Studies 13, 959–84.
Hsu, D. 2004. What do entrepreneurs pay for venture capital affiliation? Journal of Finance 59, 1805–44.
Kaplan, S., and Strömberg, P. 2004. Characteristics, contracts, and actions: evidence from venture capitalist analyses. Journal of Finance 59, 2177–210.
Kaplan, S., Sensoy, B., and Strömberg, P. 2009. Should investors bet on the jockey or the horse? Evidence from the evolution of firms from early business plans to public companies. Journal of Finance 64, 75–115.
Kerr, W., and Nanda, R. 2009. Democratizing entry: banking deregulations, financing constraints, and entrepreneurship. Journal of Financial Economics 94, 124–49.
Kortum, S., and Lerner, J. 2000. Assessing the contribution of venture capital to innovation. RAND Journal of Economics 31, 674–92.
Lamoreaux, N., Levenstein, M., and Sokoloff, K. 2004. Financing invention during the second industrial revolution: Cleveland, Ohio, 1870–1920. Working paper no. 10923, National Bureau of Economic Research.
Lee, D., and Lemieux, T. 2009. Regression discontinuity designs in economics. Working paper no. 14723, National Bureau of Economic Research.
Mollica, M., and Zingales, L. 2007. The impact of venture capital on innovation and the creation of new businesses. Unpublished working paper, University of Chicago.
Puri, M., and Zarutskie, R. 2008. On the lifecycle dynamics of venture-capital- and non-venture-capital-financed firms. Unpublished working paper, Center for Economic Studies.
Rauh, J. 2006. Investment and financing constraints: evidence from the funding of corporate pension plans. Journal of Finance 61, 31–71.
Samila, S., and Sorenson, O. 2010. Venture capital, entrepreneurship and economic growth. Review of Economics and Statistics, forthcoming.
Shane, S. 2008. The importance of angel investing in financing the growth of entrepreneurial ventures. Unpublished working paper, U.S. Small Business Administration, Office of Advocacy.
Sorensen, M. 2007. How smart is the smart money? A two-sided matching model of venture capital. Journal of Finance 62, 2725–62.
Sudek, R., Mitteness, C., and Baucus, M. 2008. Betting on the horse or the jockey: the impact of expertise on angel investing. Academy of Management Best Paper Proceedings.

Figure 1: Tech Coast Angels Investment Process
Figure 2: CommonAngels Pitch Evaluation Sheet

Angel group      Number of    Cumulative share    Share funded
interest level   ventures     of ventures         by angel group
0                1640         64%                 0.000
1-4              537          84%                 0.007
5-9              135          90%                 0.037
10-14            75           93%                 0.120
15-19            52           95%                 0.173
20-24            42           96%                 0.381
25-29            33           97%                 0.303
30-34            21           98%                 0.286
35+              44           100%                0.409

Table 1: Angel group selection funnel
Notes: Table documents the selection funnel for Tech Coast Angels. The vast majority of ventures proposed to Tech Coast Angels receive very little interest, with 90% of plans obtaining the interest of fewer than ten angels. A small fraction of ventures obtain extremely high interest levels with a maximum of 191 angels expressing interest. We identify an interest level of 20 angels as our border discontinuity. Our "below border" group consists of ventures receiving 10-19 interested angels.
Our "above border" group consists of ventures receiving 20-34 interested angels.Traits of ventures above and Above border Below border Two-tailed t-test below border discontinuity ventures ventures for equality of means Basic characteristics Financing sought ($ thousands) 1573 1083 0.277 Documents from company 3.0 2.5 0.600 Management team size 5.8 5.4 0.264 Employee count 13.4 11.2 0.609 Primary industry (%) Biopharma and healthcare 23.9 29.3 0.579 Computers, electronics, and measurement 15.2 17.1 0.817 Internet and e-commerce 39.1 39.0 0.992 Other industries 21.7 14.6 0.395 Company stage (%) Good idea 2.2 2.4 0.936 Initial marketing and product development 34.8 46.3 0.279 Revenue generating 63.0 51.2 0.272 Angel group decisions Documents by angel members 10.5 5.1 0.004 Discussion items by angel members 12.0 6.7 0.002 Share funded 63.0 39.0 0.025 Table 2: Comparison of groups above and below border discontinuity Notes: Table demonstrates the ex ante comparability of ventures above and below the border discontinuity. Columns 2 and 3 present the means of the above border and below border groups, respectively. The fourth column tests for the equality of the means, and the t-tests allow for unequal variance. The first three panels show that the two groups are very comparable in terms of venture traits, industries, and venture stage. The first row tests equality for log value of financing sought. For none of these ex ante traits are the groups statistically different from each other. The two groups differ remarkably, however, in the likelihood of receiving funding. This is shown in the fourth panel. Comparisons of the subsequent performance of these two groups thus offers a better estimate of the role of angel financing in venture success as the quality heterogeneity of ventures inherent in the full distribution of Table 1 is removed.(1) (2) (3) (0,1) indicator variable for venture being 0.328 0.324 0.292 above the funding border discontinuity (0.089) (0.094) (0.110) Angel group, year, and industry fixed effects Yes Yes Yes Year x angel group fixed effects Yes Additional controls Yes Observations 130 130 87 Table 3: Border discontinuity and venture funding by angel groups Notes: Regressions employ linear probability models to quantify the funding discontinuity in the border region. Both Tech Coast Angels and CommonAngels data are employed excepting Column 3. Additional controls in Column 3 include stage of company and employment levels fixed effects. A strong, robust increase in funding probability of about 30% exists for ventures just above the border discontinuity compared to those below. Robust standard errors are reported. (0,1) indicator variable for being funded by angel group(0,1) indicator (0,1) indicator Log ratio of (0,1) indicator Count variable for variable for 2010 web rank variable for of subsequent venture being improved web to 2008 rank receiving later venture financing alive in January rank from 2008 (negative values funding external rounds external 2010 to 2010 are improvements) to angel group to angel group (1) (2) (3) (4) (5) (0,1) indicator variable for venture 0.276 0.162 -0.389 0.438 3.894 funding being received from angel group (0.082) (0.107) (0.212) (0.083) (1.229) Angel group, year, and industry fixed effects Yes Yes Yes Yes Yes Observations 130 91 58 130 130 Table 4: Analysis of angel group financing and venture performance Notes: Linear regressions quantify the relationship between funding and venture outcomes. 
Both Tech Coast Angels and CommonAngels data for 2001-2006 are employed in all regressions. Differences in sample sizes across columns are due to the availability of outcome variables. The first column tests whether the venture is alive in 2010. The second and third columns test for improved venture performance through web site traffic data from 2008 to 2010. Column 2 is an indicator variable for improved performance, while Column 3 gives log ratios of web traffic (a negative value indicates better performance). The last two columns test whether the venture received subsequent financing outside of the angel group by 2010. Across all of these outcomes, funding by an angel group is associated with stronger subsequent venture performance. Robust standard errors are reported.Outcome variable is (0,1) indicator Simple TCA Full TCA variable for receiving later funding univariate univariate external to angel group regression with regression with (see Column 4 of Table 4) border sample complete sample (1) (2) (0,1) indicator variable for venture 0.432 0.562 funding being received from angel group (0.095) (0.054) Observations 87 2385 Table 5: Border samples versus full samples Notes: Linear regressions quantify the role of sample construction in the relationship between funding and venture outcomes. Column 1 repeats a modified, univariate form of the Column 4 in Table 4 with just the Tech Coast Angels sample. Column 2 expands the sample to include all of the potential ventures in the Tech Coast Angels database, similar to Table 1. The difference in elasticities between the two columns demonstrates the role of sample construction in assessing angel funding and venture performance. The narrower sample provides a more comparable control group. Robust standard errors are reported.(0,1) indicator (0,1) indicator Log ratio of (0,1) indicator Count variable for variable for 2010 web rank variable for of subsequent venture being improved web to 2008 rank receiving later venture financing alive in January rank from 2008 (negative values funding external rounds external 2010 to 2010 are improvements) to angel group to angel group (1) (2) (3) (4) (5) (0,1) indicator variable for venture being 0.229 0.232 -0.382 0.106 -0.318 above the funding border discontinuity (0.094) (0.120) (0.249) (0.100) (1.160) Angel group, year, and industry fixed effects Yes Yes Yes Yes Yes Observations 130 91 58 130 130 Table 6: Analysis of border discontinuity and venture performance Notes: See Table 4. Linear regressions quantify the relationship between the border discontinuity and venture outcomes. Companies above the border are more likely to be alive in 2010 and have improved web performance relative to companies below the border. These results are similar to the funding relationships in Table 4. The border discontinuity in the last two columns, however, is not associated with increased subsequent financing events.The Cycles of Theory Building in Management Researc
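To make the comparisons behind Tables 2, 3, and 6 concrete, the sketch below shows how one could test the ex ante comparability of ventures above and below the border with a t-test that allows unequal variances, as the table notes describe. It is a hypothetical illustration on synthetic data; the arrays, sample sizes, and magnitudes are placeholders, not the study's data.

# Hypothetical Welch t-test for equality of means between above- and below-border ventures.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder trait, e.g., log financing sought, for the two groups (synthetic draws).
above_border = rng.normal(loc=7.2, scale=1.0, size=46)
below_border = rng.normal(loc=7.0, scale=1.1, size=41)

# Two-tailed t-test allowing unequal variances (Welch), as in the Table 2 notes.
t_stat, p_value = stats.ttest_ind(above_border, below_border, equal_var=False)
print(round(t_stat, 3), round(p_value, 3))

Failing to reject equality for the ex ante traits, while funding rates differ sharply across the border, is what motivates treating the border as a source of near-exogenous variation in funding.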
05-057
Copyright ©
Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.

The Cycles of Theory Building in Management Research

Paul R. Carlile, School of Management, Boston University, Boston, MA 02215, carlile@bu.edu
Clayton M. Christensen, Harvard Business School, Boston, MA 02163, cchristensen@hbs.edu

October 27, 2004, Version 5.0

The Cycle of Theory Building in Management Research

Theories thus become instruments, not answers to enigmas, in which we can rest. We don't lie back upon them, we move forward, and, on occasion, make nature over again by their aid. (William James, 1907: 46)

Some scholars of organization and strategy expend significant energy disparaging and defending various research methods. Debates about deductive versus inductive theory-building and the objectivity of information from field observation versus large-sample numerical data are dichotomies that surface frequently in our lives and those of our students. Despite this focus, some of the most respected members of our research profession (e.g., Simon (1976), Solow (1985), Staw and Sutton (1995), and Hayes (2002)) have continued to express concerns that the collective efforts of business academics have produced a paucity of theory that is intellectually rigorous, practically useful, and able to stand the tests of time and changing circumstances. The purpose of this paper is to outline a process of theory building that links questions about data, methods and theory. We hope that this model can provide a common language about the research process that helps scholars of management spend less time defending the style of research they have chosen, and build more effectively on each other's work. Our unit of analysis is at two levels: the individual research project and the iterative cycles of theory building in which a community of scholars participates. The model synthesizes the work of others who have studied how communities of scholars cumulatively build valid and reliable theory, such as Kuhn (1962), Campbell & Stanley (1963), Glaser & Strauss (1967) and Yin (1984). It has normative and pedagogical implications for how we conduct research, evaluate the work of others, and train our doctoral students. While many feel comfortable in their own understanding of these perspectives, it has been our observation that those who have written about the research process and those who think they understand it do not yet share even a common language. The same words are applied to very different phenomena and processes, and the same phenomena can be called by many different words. Papers published in reputable journals often violate rudimentary rules for generating cumulatively improving, reliable and valid theory.
While recognizing that research progress is hard to achieve at a collective level, we assert here that if scholars and practitioners of management shared a sound understanding of the process by which theory is built, we could be much more productive in doing research that doesn’t just get published, but meets the standards of rigorous scholarship and helps managers know what actions will lead to the results they seek, given the circumstances in which they find themselves. We first describe a three stage process by which researchers build theory that is at first descriptive, and ultimately normative. Second, we discuss the role that discoveries of anomalies play in the building of better theory, and describe how scholars can build theory whose validity can be verified. Finally, we suggest how scholars can define research questions, execute projects, and design student coursework that lead to the building of good theory. 2 The Theory Building Process The building of theory occurs in two major stages – the descriptive stage and the normative stage. Within each of these stages, theory builders proceed through three steps. The the theory-building process iterates through these stages again and again. 1 In the past, management researchers have quite carelessly applied the term theory to research activities that pertain to only one of these steps. Terms such “utility theory” in economics, and “contingency theory” in organization design, for example, actually refer only to an individual stage in the theory-building process in their respective fields. We propose that it is more useful to think of the term “theory” as a body of understanding that researchers build cumulatively as they work through each of the three steps in the descriptive and normative stages. In many ways, the term “theory” might better be framed as a verb, as much as it is a noun – because the body of understanding is continuously changing as scholars who follow this process work to improve it. The Building of Descriptive Theory The descriptive stage of theory building ia a preliminary stage because researchers must pass through it in order to develop normative theory. Researchers who are building descriptive theory proceed through three steps: observation, categorization, and association. Step 1: Observation In the first step researchers observe phenomena and carefully describe and measure what they see. Careful observation, documentation and measurement of the phenomena in words and numbers is important at this stage because if subsequent researchers cannot agree upon the descriptions of phenomena, then improving theory will prove difficult. Early management research such as The Functions of the Executive (Barnard, 1939) and Harvard Business School cases written in the 1940s and 50s was primarily descriptive work of this genre – and was very valuable. This stage of research is depicted in Figure 1 as the base of a pyramid because it is a necessary foundation for the work that follows. The phenomena being explored in this stage includes not just things such as people, organizations and technologies, but processes as well. Without insightful description to subsequently build upon, researchers can find themselves optimizing misleading concepts. As an example: For years, many scholars of inventory policy and supply chain systems used the tools of operations research to derive ever-more-sophisticated optimizing algorithms for inventory replenishment. 
Most were based on an assumption that managers know what their levels of inventory are. Ananth Raman’s pathbreaking research of the phenomena, however, obviated much of this research when he showed that most firms’ computerized inventory records were broadly inaccurate – even when they used state-of-the-art automated tracking systems (Raman 199X). He and his colleagues have carefully described how inventory replenishment systems work, and what variables affect the accuracy of those processes. Having laid this foundation, supply chain scholars have now begun to build a body of theories and policies that reflect the real and different situations that managers and companies face. 1 This model is a synthesis of models that have been developed by scholars of this process in a range of fields and scholars: Kuhn (1962) and Popper (1959) in the natural sciences; Kaplan (1964), Stinchcombe (1968), Roethlisberger (1977) Simon (1976), Kaplan (1986), Weick (1989),Eisenhardt (1989) and Van de Ven (2000) in the social sciences. 3 Researchers in this step often develop abstractions from the messy detail of phenomena that we term constructs. Constructs help us understand and visualize what the phenomena are, and how they operate. Joseph Bower’s Managing the Resource Allocation Process (1970) is an outstanding example of this. His constructs of impetus and context, explaining how momentum builds behind certain investment proposals and fails to coalesce behind others, have helped a generation of policy and strategy researchers understand how strategic investment decisions get made. Economists’ concepts of “utility” and “transactions cost” are constructs – abstractions developed to help us understand a class of phenomena they have observed. We would not label the constructs of utility and transactions cost as theories, however. They are part of theories – building blocks upon which bodies of understanding about consumer behavior and organizational interaction have been built. Step 2: Classification With the phenomena observed and described, researchers in the second stage then classify the phenomena into categories. In the descriptive stage of theory building, the classification schemes that scholars propose typically are defined by the attributes of the phenomena. Diversified vs. focused firms, and vertically integrated vs. specialist firms are categorization examples from the study of strategy. Publicly traded vs. privately held companies is a categorization scheme often used in research on financial performance. Such categorization schemes attempt to simplify and organize the world in ways that highlight possibly consequential relationships between the phenomena and the outcomes of interest. Management researchers often refer to these descriptive categorization schemes as frameworks or typologies. Burgelman (1986), for example, built upon Bower’s (1970) construct of context by identifying two different types of context – organizational and strategic. Step 3: Defining Relationships In the third step, researchers explore the association between the category-defining attributes and the outcomes observed. In the stage of descriptive theory building, researchers recognize and make explicit what differences in attributes, and differences in the magnitude of those attributes, correlate most strongly with the patterns in the outcomes of interest. Techniques such as regression analysis typically are useful in defining these correlations. Often we refer to the output of studies at this step as models. 
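For readers who want to see what such a statement of association looks like in practice, here is a deliberately simple, hypothetical sketch: the attribute categories (diversified, vertically_integrated) and the outcome (roa) are invented for illustration and do not come from the studies cited here.

# Hypothetical descriptive-theory association: regress an outcome on attribute-based categories.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "diversified": rng.integers(0, 2, n),            # attribute-based category (0/1)
    "vertically_integrated": rng.integers(0, 2, n),  # attribute-based category (0/1)
})
# Synthetic outcome of interest with made-up average tendencies plus noise.
df["roa"] = (0.05 + 0.02 * df["diversified"]
             - 0.01 * df["vertically_integrated"]
             + rng.normal(0, 0.04, n))

fit = smf.ols("roa ~ diversified + vertically_integrated", data=df).fit()
print(fit.params)  # average associations only; they say nothing about any specific firm

The estimated coefficients are exactly the kind of probabilistic, on-average statements described here: useful as descriptive theory, but silent about what will happen in any one firm's specific circumstance.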
Descriptive theory that quantifies the degree of correlation between the category-defining attributes of the phenomena and the outcomes of interest are generally only able to make probabilistic statements of association representing average tendencies. For example, Hutton, Miller and Skinner (2000) have examined how stock prices have responded to earnings announcements that were phrased or couched in various terms. They coded types of words and phrases in the statements as explanatory variables in a regression equation, with the ensuing change in equity price as the dependent variable. This analysis enabled the researchers then to assert that, on average across the entire sample of companies and announcements, delivering earnings announcements in a particular way would lead to the most favorable (or least unfavorable) reaction in stock price. Research such as this is important descriptive theory. However, at this point it can only assert on average what attributes are associated with the best 4 results. A specific manager of a specific company cannot know whether following that average formula will lead to the hoped-for outcome in her specific situation. The ability to know what actions will lead to desired results for a specific company in a specific situation awaits the development of normative theory in this field, as we will show below. The Improvement of Descriptive Theory When researchers move from the bottom to the top of the pyramid in these three steps – observation, categorization and association – they have followed the inductive portion of the theory building process. Theory begins to improve when researchers cycle from the top back to the bottom of this pyramid in the deductive portion of the cycle – seeking to “test” the hypothesis that had been inductively formulated. This most often is done by exploring whether the same correlations exist between attributes and outcomes in a different set of data than the data from which the hypothesized relationships were induced. When scholars test a theory on a new data set (whether the data are numbers in a computer, or are field observations taken in a new context), they might find that the attributes of the phenomena in the new data do indeed correlate with the outcomes as predicted. When this happens, this “test” confirms that the theory is of use under the conditions or circumstances observed. 2 However, the researcher returns the model to its place atop the pyramid tested but unimproved. It is only when an anomaly is identified – an outcome for which the theory can’t account – that an opportunity to improve theory occurs. As Figure 1 suggests, discovery of an anomaly gives researchers the opportunity to revisit the categorization scheme – to cut the data in a different way – so that the anomaly and the prior associations of attributes and outcomes can all be explained. In the study of how technological innovation affects the fortunes of leading firms, for example, the initial attribute-based categorization scheme was radical vs. incremental innovation. The statements of association that were built upon it concluded that the leading established firms on average do well when faced with incremental innovation, but they stumble in the face of radical change. But there were anomalies to this generalization – established firms that successfully implemented radical technology change. To account for these anomalies, Tushman & Anderson (1986) offered a different categorization scheme, competence-enhancing vs. 
competence-destroying technological changes. This scheme resolved many of the anomalies to the prior scheme, but subsequent researchers uncovered new ones for which the Tushman-Anderson scheme could not account. Henderson & Clark's (1990) categories of modular vs. architectural innovations; Christensen's (1997) categories of sustaining vs. disruptive technologies; and Gilbert's (2001) threat-vs.-opportunity framing each uncovered and resolved anomalies for which the work of prior scholars could not account. This body of understanding has improved and become remarkably useful to practitioners and subsequent scholars (Adner, 2003; Danneels, 2005) because these scholars followed the process in a disciplined way: they uncovered anomalies, sliced the phenomena in different ways, and articulated new associations between the attributes that defined the categories and the outcome of interest.

2 Popper asserts that a researcher in this phase, when the theory accurately predicted what he observed, can only state that his test or experiment of the theory "corroborated" or "failed to dis-confirm" the theory.

Figure 1: The Process of Building Theory (pyramid: observe, describe & measure the phenomena (constructs); categorization based upon attributes of phenomena (frameworks & typologies); statements of association (models); linked by inductive and deductive processes, with predict, confirm, and anomaly loops)

Figure 1 suggests that there are two sides to every lap around the theory-building pyramid: an inductive side and a deductive side. In contrast to either/or debates about the virtues of deductive and inductive approaches to theory, this suggests that any complete cycle of theory building includes both.3

3 Kant, Popper, Feyerabend and others have noted that all observations are shaped, consciously or unconsciously, by cognitive structures, previous experience or some theory-in-use. While it is true that individual researchers might start their work at the top of the pyramid, we believe that the hypotheses that deductive theorists test generally had been derived consciously or unconsciously, by themselves or others, from an inductive source. There are few blue-sky hypotheses that were formulated in the complete absence of observation.

Descriptive theory-building efforts typically categorize by the attributes of the phenomena because they are easiest to observe and measure. Likewise, correlations between attributes and outcomes are easiest to hypothesize and quantify through techniques such as regression analysis. Kuhn (1962) observed that confusion and contradiction typically are the norm during descriptive theory-building. This phase is often characterized by a plethora of categorization schemes, as in the sequence of studies of technology change cited above, because the phenomena generally have many different attributes. Often, no model is irrefutably superior: each seems able to explain anomalies to other models, but suffers from anomalies to its own.

The Transition from Descriptive to Normative Theory

The confusion and contradiction that often accompany descriptive theory become resolved when careful researchers – often through detailed empirical and ethnographic observation – move beyond statements of correlation to define what causes the outcome of interest. As depicted in Figure 2, they leap across to the top of the pyramid of causal theory. With their understanding of causality, researchers then work to improve theory by following the same three steps that were used in the descriptive stage.
Hypothesizing that their statement of causality is correct, they cycle deductively to the bottom of the pyramid to test the causal statement: If we observe these actions being taken, these should be the outcomes that we observe. When they encounter an anomaly, they then delve into the categorization stage. Rather than using schemes based on attributes of the phenomena, however, they develop categories of the different situations or circumstances in which managers might find themselves. They do this by asking, when they encounter an anomaly, "What was it about the situation in which those managers found themselves that caused the causal mechanism to yield a different result?" By cycling up and down the pyramid of normative theory, researchers will ultimately define the set of situations or circumstances in which managers might find themselves when pursuing the outcomes of interest. This allows researchers to make contingent statements of causality – to show how and why the causal mechanism results in a different outcome in the different situations. A theory completes the transition from descriptive to normative when it can give a manager unambiguous guidance about what actions will and will not lead to the desired result, given the circumstance in which she finds herself.

Figure 2: The Transition from Descriptive Theory to Normative Theory (two pyramids – descriptive theory, built on observing, describing & measuring the phenomena, categorization by the attributes of the phenomena, and preliminary statements of correlation; and normative theory, built on observing, describing & measuring the phenomena, categorization of the circumstances in which we might find ourselves, and a statement of causality – each cycled through inductive and deductive processes with predict, confirm, and anomaly loops, and linked by careful field-based research)

The history of research into manned flight is a good way to visualize how this transition from descriptive to normative theory occurs, and how it is valuable. During the Middle Ages, would-be aviators did their equivalent of best-practices research and statistical analysis. They observed the many animals that could fly well, and compared them with those that could not. The vast majority of the successful fliers had wings with feathers on them; and most of those that couldn't fly had neither. This was quintessential descriptive theory. Pesky outliers like ostriches had feathered wings but couldn't fly; bats had wings without feathers and were very good at it; and flying squirrels had neither and got by. But the R² was so high that aviators of the time copied the seemingly salient characteristics of the successful fliers in the belief that the visible attributes of the phenomena caused the outcome. They fabricated wings, glued feathers on them, jumped off cathedral spires, and flapped hard. It never worked. For centuries they assumed that the prior aviators had failed because they had bad wing designs; hadn't bulked up their muscles enough; or hadn't flapped hard enough. There were substantial disagreements about which of the birds' attributes truly enabled flight. For example, Roger Bacon in about 1285 wrote an influential paper asserting that the differentiating attribute was birds' hollow bones (Clegg, 2003). Because man had solid bones, Bacon reasoned, we could never fly. He then proposed several machine designs that could flap their wings with sufficient power to overcome the disadvantage of solid bones. But it still never worked.
Armed with the correlative statements of descriptive theory, aviators kept killing themselves. Then through his careful study of fluid dynamics Daniel Bernoulli identified a shape that we call an airfoil – a shape that, when it cuts through air, creates a mechanism that we call lift. Understanding this causal mechanism, which we call Bernoulli’s Principle, made flight possible. But it was not yet predictable. In the language of this paper, the theory predicted that aviators would fly successfully when they built machines with airfoils to harness lift. But while they sometimes flew successfully, occasionally they did not. Crashes were anomalies that Bernoulli’s theory could not explain. Discovery of these anomalies, however, allowed the researchers to revisit the categorization scheme. But this time, instead of slicing up the world by the attributes of the good and bad fliers, researchers categorized their world by circumstance – asking the question, “What was it about the circumstance that the aviator found himself in that caused the crash?” This then enabled them to improve equipment and techniques that were based upon circumstance-contingent statements of causality: “This is how you should normally fly the plane. But when you get in this situation, you need to fly it differently in order to get the desired outcome. And when you get in that situation, don’t even try to fly. It is impossible.” When their careful studies of anomalies allowed researchers to identify the set of circumstances in which aviators might find themselves, and then modified the equipment or developed piloting techniques that were appropriate to each circumstance, manned flight became not only possible, but predictable. Hence, it was the discovery of the fundamental causal mechanism that made flight possible. And it was the categorization of the salient circumstances that made flight predictable. This is how this body of understanding about human flight transitioned from descriptive to normative theory. Dsciplined scholars can achieve the same transition in management research. The discovery of the fundamental causal mechanisms makes it possible for managers purposefully to pursue desired outcomes successfully and predictably. When researchers categorize managers’ world according to the circumstances in which they might find themselves, they can make circumstance-contingent statements of cause and effect, of action and result. Circumstance-based categories and normative theory Some cynical colleagues despair of any quest to develop management theories that make success possible and predictable – asserting that managers’ world is so complex that there are an 8 infinite number of situations in which they might find themselves. Indeed, this is very nearly true in the descriptive theory phase. But normative theory generally is not so confusing. Researchers in the normative theory phase resolve confusion by abstracting up from the detail to define a few categories – typically two to four – that comprise salient circumstances. Which boundaries between circumstances are salient, and which are not? Returning to our account of aviation research, the boundaries that defined the salient categories of circumstance are determined by the necessity to pilot the plane differently. If a different circumstance does not require different methods of piloting, then it is not a meaningful category. The same principle defines the salience of category boundaries in management theory. 
If managers find themselves in a circumstance where they must change actions or organization in order to achieve the outcome of interest, then they have crossed a salient boundary. Several prominent scholars have examined the improvement in predictability that accompanies the transition from the attribute-based categorization of descriptive theory, to the circumstance-based categorization of normative theory. Consider, for example, the term “Contingency Theory” – a concept born of Lawrence & Lorsch’s (1967) seminal work. They showed that the best way to organize a company depended upon the circumstances in which the company was operating. In our language, contingency is not a theory per se. Rather, contingency is a crucial element of every normative theory – it is the categorization scheme. Rarely do we find one-size-fits-all answers to every company’s problem. The effective course of action will generally “depend” on the circumstance. Glaser and Strauss’s (1967) treatise on “grounded theory” actually is a book about categorization. Their term substantive theory corresponds to the attribute-defined categories in descriptive theory. And their concept of formal theory matches our definition of normative theory that employs categories of circumstance.. Thomas Kuhn (1962) discussed in detail the transition of understanding from descriptive to normative theory in his study of the emergence of scientific paradigms. He described a preliminary period of confusion and debate in theory building, which is an era of descriptive theory. His description of the emergence of a paradigm corresponds to the transition to normative theory described above. We agree with Kuhn that even when a normative theory achieves the status of a broadly believed paradigm, it continues to be improved through the process of discovering anomalies, as we describe above. Indeed, the emergence of new phenomena – which probably happens more frequently in competitive, organizational and social systems than in the natural sciences – ensures that there will always be additional productive laps up and down the theory pyramid that anomaly-seeking researchers can run. The observation that management research is often faddish has been raised enough that it no longer seems shocking (Micklethwait and Wooldridge, 1996; Abrahamson, 1998). Fads come and go when a researcher studies a few successful companies, finds that they share certain characteristics, concludes that he has seen enough, and then skips the categorization step entirely by writing a book asserting that if all managers would imbue their companies with those same characteristics, they would be similarly successful. When managers then apply the formula and find that it doesn’t work, it casts a pall on the idea. Some faddish theories aren’t uniformly bad. It’s just that their authors were so eager for their theory to apply to everyone that they never took the care to distinguish correlation from causality, or to figure out the circumstances in which their 9 statement of causality would lead to success, and when it would not. Efforts to study and copy “the best practices of successful companies” almost uniformly suffer from this problem. Unfortunately, it is not just authors-for-profit of management books that contribute to the problem of publishing theory whose application is uncertain. Many academics contribute to the problem by taking the other extreme – articulating tight “boundary conditions” outside of which they claim nothing. 
Delimiting the applicability of a theory to the specific time, place, industry and/or companies from which the conclusions were drawn in the first place is a mutation of one of the cardinal sins of research – sampling on the dependent variable. In order to be useful to managers and to future scholars, researchers need to help managers understand the circumstance that they are in. Almost always, this requires that they also be told about the circumstances that they are not in. The Value of Anomalies As indicated before, when researchers in both the descriptive and normative stages use statements of association or causality to predict what they will see, they often observe something that the theory did not lead them to expect; thus identifying an anomaly—something the theory could not explain. This discovery forces theory builders to cycle back into the categorization stage with a puzzle such as “there’s something else going on here” or “these two things that we thought were different, really aren’t.” The results of this effort typically can include: 1) more accurately describing and measuring what the phenomena are and are not; 2) changing the definitions by which the phenomena or the circumstances are categorized – adding or eliminating categories or defining them in different ways; and/or 3) articulating a new theoretical statement of what is associated with, or causes what, and why, and under what circumstances. The objective of this process is to revise theory so that it still accounts for both the anomalies identified and the phenomena as previously explained. Anomalies are valuable in theory building because the discovery of an anomaly is the enabling step to identifying and improving the categorization scheme in a body of theory – which is the key to being able to apply the theory with predictable results. Researchers whose goal is to “prove” a theory’s validity are likely to view discovery of an anomaly as failure. Too often they find reasons to exclude outlying data points in order to get more significant measures of statistical fit. There typically is more information in the points of outlying data than in the ones that fit the model well, however, because understanding the outliers or anomalies is generally the key to discovering a new categorization scheme. This means that journal editors and peer reviewers whose objective is to improve theory should embrace papers that seek to surface and resolve anomalies. Indeed, productive theory-building research is almost invariably prompted or instigated by an anomaly or a paradox (Poole & Van de Ven, 1989). The research that led to Michael Porter’s (1991) Competitive Advantage of Nations is an example. Before Porter’s work, the theory of international trade was built around the notion of comparative advantage. Nations with inexpensive electric power, for example, would have a competitive advantage in those products in which the cost of energy was high; those with low labor costs would enjoy an advantage in making and selling products with high labor content; and so on. Porter saw anomalies for which this theory could not account. Japan, with little iron ore and coal, became a successful steel 10 producer. Italy became the world’s dominant producer of ceramic tile even though it had high electricity costs and had to import much of the clay used in making the tile. Porter’s work categorized the world into two circumstances – situations in which a factor-based advantage exists, and those in which it does not. 
In the first situation the reigning theory of comparative advantage still has predictive power. But in the latter circumstance, Porter’s theory of competitive industrial clusters explained the phenomena that had been anomalous to the prior theory. Porter’s theory is normative because it gives planners clear guidance about what they should do, given the circumstance in which they find themselves. The government of Singapore, for example, attributes much of that country’s prosperity to the guidance that Porter’s theory has provided. Yin (1984) distinguishes between literal replications of a theory, versus theoretical replications. A literal replication occurs when the predicted outcome is observed. A theoretical replication occurs when an unusual outcome occurs, but for reasons that can be explained by the model. Some reviewers cite “exceptions” to a theory’s predictions as evidence that it is invalid. We prefer to avoid using the word “exception” because of its imprecision. For example, the observation that airplanes fly is an exception to the general assertion that the earth’s mass draws things down toward its core. Does this exception disprove the theory of gravity? Of course not. While falling apples and flaming meteors are literal replications of the theory, manned flight is a theoretical replication. It is a different outcome than we normally would expect, but Bernoulli’s Principle explains why. An anomaly is an outcome that is neither a literal or theoretical replication of a theory. How to Design Anomaly-Seeking Research Although some productive anomalies might be obvious from the outset, often the task of theory-building scholars is to design their research to maximize the probability that they will be able to identify anomalies. Here we describe how to define research questions that focus on anomalies, and outline three ways to design anomaly-seeking research. We conclude this section by describing how literature reviews might be structured to help readers understand how knowledge has accumulated in the past, and position the present paper in the stream of scholarship. Anomaly-Seeking Research Questions Anomaly-seeking research enables new generations of researchers to pick up even wellaccepted theories, and to run the theory-building cycle again – adding value to research that already has earned broad praise and acceptance. Consider Professor Porter’s (1991) research mentioned above. In Akron, Ohio there was a powerful cluster of tire manufacturers whose etiologies and interactions could be explained well by Porter’s theory. That group subsequently vaporized – in part because of the actions of a company, Michelin, that operated outside of this cluster (Sull, 2000). This anomaly suggests that there must situations in time or space in which competing within a cluster is competitively important; in other situations it must be less important. When an improved categorization scheme emerges from Sull’s and others’ work, the community of scholars and policy makers will have an even clearer sense for when the competitive crucible of clusters is critical for developing capabilities, when it is not, and why. 11 In this spirit, we outline below some examples of “productive” questions that could be pursed by future researchers that potentially challenge many current categories used in management research: • When might process re-engineering or lean manufacturing be bad ideas? • When could sourcing from a partner or supplier something that is not your core competence lead to disaster? 
• Are there circumstances in which pencil-on-paper methods of vendor management yield better results than using supply-chain management software? • When and why is a one-stop-shopping or “portal” strategy effective and when would we expect firms using focused specialist strategies to gain the upper hand? • When are time-based competition and mass customization likely to be critical and when might they be competitively meaningless? • Are SIC codes the right categories for defining “relatedness” in diversification research? • When should acquiring companies integrate a firm they have just purchased into the parent organization, and when should they keep it separate? Much published management research is of the half-cycle, terminal variety – hypotheses are defined and “tested.” Anomaly-seeking research always is focused on the categorization step in the pyramid. Many category boundaries (such as SIC codes) seem to be defined by the availability of data, rather than their salience to the underlying phenomena or their relation to the outcome – and questioning their sufficiency is almost always a productive path for building better theory. “When doesn’t this work?” and “Under what conditions might this gospel be bad news?” are simple questions that can yield breakthrough insights – and yet too few researchers have the instinct to ask them. The Lenses of Other Disciplines One of Kuhn’s (1962) most memorable observations was that the anomalies that led to the toppling of a reigning theory or paradigm almost invariably were observed by researchers whose backgrounds were in different disciplines than those comprising the traditional training of the leaders in the field. The beliefs that adherents to the prior theory held about what was and was not possible seemed to shape so powerfully what they could and could not see that they often went to their graves denying the existence or relevance of the very anomalous phenomena that led to the creation of improved theory. Researchers from different disciplines generally use different methods and have different interests toward their object of study. Such differences often allow them to see things that might not be recognized or might appear inconsequential to an insider. It is not surprising, therefore, that many of the most important pieces of breakthrough research in the study of management, organization and markets have come from scholars who stood astride two or more academic disciplines. Porter’s (1980, 1985, 1991) work in strategy, for 12 example, resulted from his having combined insights from business policy and industrial organization economics. The insights that Robert Hayes and his colleagues (1980, 1984, 1985, 1988) derived about operations management combined insights from process research, strategy, cost accounting and organizational behavior. Baldwin & Clark’s (2000) insights about modularity were born at the intersection of options theory in finance with studies of product development. Clark Gilbert ((2001) looked at Christensen’s (1997) theory of disruptive innovation through the lenses of prospect theory and risk framing (Kahnemann & Tversky 1979, 1984), and saw explanations of what had seemed to be anomalous behavior, for which Christensen’s model could not account. Studying the Phenomena within the Phenomena The second method to increase the probability that researchers will identify anomalies is to execute nested research designs that examine different levels of phenomena. 
Rather than study just industries or companies or divisions or groups or individuals, a nested research design entails studying how individuals act and interact within groups; and how the interaction amongst groups and the companies within which they are embedded affect the actions of individuals. Many anomalies will only surface while studying second-order interactions across levels within a nested design. The research reported in Johnson & Kaplan’s Relevance Lost (1987) which led to the concept of activity-based costing, is a remarkable example of the insights gained through nested research designs. Most prior researchers in managerial accounting and control had conducted their research at a single level—the numbers printed in companies’ financial statements. Johnson and Kaplan saw that nested beneath each of those printed numbers was a labyrinth of political, negotiated, judgmental processes that could systematically yield inaccurate numbers. Spear and Bowen (1999) developed their path-breaking insights of the Toyota Production System through a nested research design. Researchers in the theory’s descriptive stage had studied Toyota’s production system at single levels. They documented visible artifacts such as minimal inventories, kanban scheduling cards and rapid tool changeovers. After comparing the performance of factories that did and did not possess these attributes, early researchers asserted that if other companies would use these same tools, they could achieve similar results (see, for example, Womack et.al., 1990). The anomaly that gripped Spear and Bowen was that when other firms used these artifacts, they still weren’t able to achieve Toyota’s levels of efficiency and improvement. By crawling inside to study how individuals interacted with individuals, in the context of groups interacting with other groups, within and across plants within the company and across companies, Spear and Bowen were able to go beyond the correlative statements of descriptive theory, to articulate the fundamental causal mechanism behind the Toyota system’s self-improving processes – which they codified as four “rules-in-use” that are not written anywhere but are assiduously followed when designing processes of all sorts at Toyota. Spear is now engaged in search of anomalies on the deductive side of the cycle of building normative theory. Because no company besides Toyota has employed this causal mechanism, Spear cannot retrospectively study other companies. Like Johnson & Kaplan did when they used 13 “action research” to study the implementation problems of activity-based costing, Spear is helping companies in very different circumstances to use his statements of causality, to see whether the mechanism of these four rules yields the same results. To date, companies in industries as diverse as aluminum smelting, hospitals, and jet engine design have achieved the results that Spear’s theory predicts – he has not yet succeeded in finding an anomaly. The categorization step of this body of normative theory still has no salient boundaries within it. Observing and Comparing a Broad Range of Phenomena The third mechanism for maximizing the probability of surfacing an anomaly is to examine, in the deductive half of the cycle, a broader range of phenomena than prior scholars have done. 
As an example, Chesbrough’s (1999) examination of Japanese disk drive makers (which Christensen had excluded from his study) enabled Chesbrough to surface anomalies for which Christensen’s theory of disruptive technology could not account—leading to an even better theory that then explains a broader range of phenomena. The broader the range of outcomes, attributes and circumstances that are studied at the base of the pyramid, the higher the probability that researchers will identify the salient boundaries among the categories. Anomaly-Seeking Research and the Cumulative Structure of Knowledge When interviewing new faculty candidates who have been trained in methods of modeling, data collection and analysis as doctoral students, we observe that many seem almost disinterested in the value of questions that their specialized techniques are purporting to answer. When asked to position their work upon a stream of scholarship, they recite long lists of articles in “the literature,” but then struggle when asked to diagram within that body of work which scholar’s work resovles anomalies to prior scholars’ theories; whose results contradicted whose, and why. Most of these lists of prior publications are simply lists, sometimes lifted from prior authors’ lists of prior articles. They are listed because of their relatedness to the topic. Few researchers have been taught to organize citations in a way that describes the laps that prior researchers have taken, to give readers a sense for how theory has or has not been built to date. Rather, after doffing the obligatory cap to prior research, they get busy testing their hypotheses in the belief that if nobody has tested these particular ones before, using novel analytical methods on a new data set, it breaks new ground. Our suggestion is that in the selection of research questions and the design of research methods, authors physically map the literature on a large sheet of paper in the format of Figure 2 above, and then answer questions like these: • Is this body of theory in the descriptive or normative stage? • What anomalies have surfaced in prior authors’ work, and which pieces of research built on those by resolving the anomaly? In this process, how have the categorization schemes in this field improved? • At what step am I positioning my work? Am I at the base of the pyramid defining constructs to help others abstract from the detail of the phenomena what really is going on? Am I strengthening the foundation by offering better ways to examine and measure 14 the phenomena more accurately? Am I resolving an anomaly by suggesting that prior scholars haven’t categorized things correctly? Am I running half a lap or a complete cycle, and why? Similarly, in the “suggestions for future research” section of the paper, we suggest that scholars be much more specific about where future anomalies might be buried. “Who should pick up the baton that I am setting down at the end of my lap, and in what direction should they run?” We have attempted to construct such maps in several streams of research with which we are familiar (See, for example, Gilbert 2005). It has been shocking to see how difficult it is to map how knowledge has accumulated within a given sub-field. In many cases, it simply hasn’t summed up to much, as the critics cited in our first paragraph have observed. We suggest that the pyramids of theory building might constitute a generic map, of sorts, to bring organization to the collective enterprises within each field and sub-field. 
The curriculum of doctoral seminars might be organized in this manner, so that students are brought through the past into the present in ways that help them visualize the next steps required to build better theory. Literature reviews, if constructed in this way at the beginning of papers, would help readers position the work in the context of this stream, in a way that adds much more value than listing articles that are topically related. Here’s just one example of how this might be done. Alfred Chandler’s (1977, 1990) landmark studies essentially proposed a theory: that the “visible hand” of managerial capitalism was a crucial enabling factor that led not just to rapid economic growth between 1880 and 1930, but also to the dominance of industry after industry by large, integrated corporations that had the scale and scope to pull everything together. In recent years, much has been written about “virtual” corporations and “vertical dis-integration;” indeed, some of today’s most successful companies such as Dell are specialists in just one or two slices of the vertical value-added chain. To our knowledge, few of the studies that focus on these new virtual forms of industrial organization have even hinted that the phenomena they are focusing upon actually constitute an anomaly for which Chandler’s theory of capitalism’s visible hand cannot adequately account. If these researchers were to build their work on this anomaly, it would cause them to delve back into the categorization process. Such an effort would define the circumstances in which technological and managerial integration of the sort that Chandler observed are crucial to building companies and industries, while identifying other circumstances in which specialization and market-based coordination are superior structures. A researcher who structured his or her literature review around this puzzle, and then executed that research, would give us a better contingent understanding of what causes what and why.

Establishing the Validity of Theory

A primary concern of every consumer of management theory is to understand where it applies, and where it does not apply. Yin (1984) helps us with these concerns by defining two types of validity for a theory – internal and external validity – which are the dimensions of a body of understanding that help us gauge whether and when we can trust it. In this section we’ll discuss how these concepts relate to our model of theory building, and describe how researchers can make their theories valid on both of these dimensions.

Internal Validity

Yin asserts that a theory’s internal validity is the extent to which: 1) its conclusions are logically drawn from its premises; and 2) the researchers have ruled out all plausible alternative explanations that might link the phenomena with the outcomes of interest. The best way we know to ensure the internal validity of a theory is to examine the phenomena through the lenses of as many disciplines and parts of the company as possible – because the plausible alternative explanations almost always are found in the workings of another part of the company, as viewed through the lenses of other academic disciplines. We offer here two illustrations. Intel engineered a remarkable about-face in the early 1980s, as it exited the industry it built – Dynamic Random Access Memories (DRAMs) – and threw all of its resources behind its microprocessor strategy.
Most accounts of this impressive achievement attribute its success to the leadership and actions of its visionary leaders, Gordon Moore and Andy Grove (see, for example, Yoffie et al., 2002). Burgelman’s careful ethnographic reconstruction of the resource allocation process within Intel during those years of transition, however, reveals a very different explanation of how and why Intel was able to make this transition. As he and Grove have shown, it had little to do with the decisions of the senior-most management (Burgelman, 2002). One of the most famous examples of research that strengthens its internal validity by examining a phenomenon through the lenses of several disciplines is Graham Allison’s (1971) The Essence of Decision. Allison examined the phenomena in a single situation—the Cuban missile crisis—using the assumptions of three different theoretical lenses (rational actor, organizational, and bureaucratic). He surfaced anomalies in the current understanding of decision making that could not have been seen had he only studied the phenomenon from a single disciplinary perspective. Through the use of multiple lenses he contributed significantly to our understanding of decision making in bureaucratic organizations. As long as there’s the possibility that another researcher could say, “Wait a minute. There’s a totally different explanation for why this happened,” then we cannot be assured of a theory’s internal validity. If scholars will patiently examine the phenomena and outcomes of interest through the lenses of these different perspectives, they can incorporate what they learn into their explanations of causality. And one by one, they can rule out other explanations so that theirs is the only plausible one left standing. It can then be judged to be internally valid.

External Validity

The external validity of a theory is the extent to which a relationship that was observed between phenomena and outcomes in one context can be trusted to apply in different contexts as well. Many researchers have come to believe that a theory’s external validity is established by “testing” it on different data sets. This can never conclusively establish external validity, however, for two reasons. First, researchers cannot test a theory on every conceivable data set; and second, data exist only about the past. How can we be sure a model applies in the future, when there is no data to test it on? Consider, for illustration, Christensen’s experience after publishing the theory of disruptive innovation in The Innovator’s Dilemma (Christensen, 1997). This book presented in its first two chapters a normative theory, built upon careful empirical descriptions of the history of the disk drive industry. It asserted that there are two circumstances – sustaining and disruptive situations – in which innovating managers might find themselves. Then it defined a causal mechanism – the functioning of the resource allocation process in response to the demands of customers and financial markets – that caused leading incumbent firms and entrants to succeed or fail at different types of innovations in those circumstances. Christensen’s early papers summarized the history of innovation in the disk drive industry, from which the theory was inductively derived.
Those who read these papers instinctively wondered, “Does this apply outside the disk drive industry?” In writing The Innovator’s Dilemma, Christensen sought to establish the generalizability or external validity of the theory by “testing” it on data from as disparate a set of industries as possible – including hydraulic excavators, steel, department stores, computers, motorcycles, diabetes care, accounting software, motor controls and electric vehicles. Despite the variety of industries in which the theory seemed to have explanatory power, executives from industries that weren’t specifically studied kept asking, “Does it apply to health care? Education? Financial services?” When Christensen published additional papers that applied the model to these industries, the response was, “Does it apply to telecommunications? Relational database software? Does it apply to Germany?” The killer question, from an engineer in the disk drive industry, was, “It clearly applies to the history of the disk drive industry. But does it apply to its future as well? Things are very different now.” As these queries illustrate, it is simply impossible to establish the external validity of a theory by testing it on data sets – because there will always be another one upon which it hasn’t yet been tested, and the future will always lie just beyond the reach of data. When researchers have defined what causes what, and why, and have shown how the result of that causal mechanism differs by circumstance, then the scope of the theory, or its external validity, is established. In the limit, we could only say that a theory is externally valid when the process of seeking and resolving anomaly after anomaly results in a set of categories that are collectively exhaustive and mutually exclusive. Mutually exclusive categorization would allow managers to say, “I am in this circumstance and not that one.” And collectively exhaustive categorization would assure us that all situations in which managers might find themselves, with respect to the phenomena and outcomes of interest, are accounted for in the theory. No theory’s categorization is likely to achieve the ultimate status of mutually exclusive and collectively exhaustive, of course. But the accumulation of insights and improvements from cycles of anomaly-seeking research can improve theory asymptotically towards that goal. This raises an interesting paradox for large-sample research that employs “mean” analyses to understand ways to achieve the optimum result or best performance. One would think that a theory derived from a large data set representing an entire population of companies would have greater external validity than a theory derived from case studies of a limited number of situations within that population. However, when the unit of analysis is a population of companies, the researcher can be specific only about the entire population of companies – the population comprises one category, and other sources of variance or differences that exist within that population are potentially lost as explanations. Some managers will find that following the formula that works best on average works best in their situation as well, of course. However, sometimes the course of action that is optimal on average will not yield the best outcome in a specific situation. Hence, researchers who derive a theory from statistics about a population still need to establish external validity through circumstance-based categorization.
Some large-sample, quantitative studies in strategy research have begun to turn to analyses that estimate simultaneously the expected value (a mean analysis) and the variance associated with performance-oriented dependent variables using a “variance decomposition” approach (Fleming and Sorensen, 2001; Sorensen and Sorensen, 2001). The simultaneous nature of this methodological approach allows a deeper understanding of the mean as well as the variance associated with a firm over time (Sorensen, 2002) or a population of firms (Hunter, 2002). What such analysis suggests is that when there is significant heterogeneity in a given strategic environment, not only will there be variance in firm performance, but what a firm needs to do to be successful will also differ based on the niche it pursues. This reminds us that explanations for strategic questions are not only contingent, but more importantly are based on an understanding of what sources of variance, and what relations across different variables, matter most and why. From a methodological point of view, this also reminds us of how our abilities (i.e., tools, methods) to represent data shape how we are able to describe what “strategic action” is possible. The value of progressing from descriptive to normative theory can be illustrated in the case of Jim Collins’ (2001) popular book, Good to Great. Collins and his research team found 15 companies that had gone from a period of mediocre performance to a period of strong performance. They then found a matching set of companies in similar industries that had gone from mediocre performance to another period of mediocre performance, identified attributes that the “good-to-great” companies shared in common, and found that the “good-to-good” companies did not share these attributes. Greater success is associated with the companies that possess these attributes. They have done a powerful piece of descriptive theory building, built on a categorization scheme of companies that share these attributes vs. companies that do not. The research in this book has been very helpful to many executives and academics. As descriptive theory, however, there is still uncertainty about whether a specific company in a specific situation will succeed if it acquires the attributes of the good-to-great, because the theory has not yet gone through the process of circumstance-based categorization. For example, one of those attributes is that the good-to-great companies were led by relatively humble CEOs who generally have shunned the limelight, whereas the mediocre companies tended to be led by more egocentric, hired-in “superstar” executives. There might indeed be situations in which an egocentric superstar executive is crucial to success, however. Such a precise, situation-specific statement will be possible – and the theory can be judged to be externally valid – only when this body of understanding has progressed to the normative theory stage.

What is Good Data?

The dichotomy between subjectivity and objectivity is often used as a cleavage point to judge the scientific quality of data – with many seeing objective data as more legitimate than subjective data. Case- or field-derived data versus large-sample data sets is a parallel dichotomy that often surfaces in academic discourse. Much like theory, the only way we can judge the value of data is by their usefulness in helping us understand how the world works, identifying categories, making predictions and surfacing anomalies.
Research that employs a nested design often reveals how illogical these dichotomies are. Christensen’s (1997) research, for example, was built upon a history of the disk drive industry derived from analysis of tens of thousands of data points about markets, technologies and products that were reported in Electronic Business and Disk/Trend Report. In the context of the industry’s history, the study then recounted the histories of individual companies, which were assembled partially from published statistics and partially from interviews with company managers. The study also included histories of product development projects within these companies, based upon a few numbers and extensive personal interviews. Finally, the study included many accounts of individuals’ experiences in developing and launching new products, comprised exclusively of information drawn from interviews – with no numbers included whatsoever. So what is a case study? Because a case is a description and assessment of a situation over a defined period of time, every level in Christensen’s study was a case – industry, company, group and individual. And what is data? Each level of this study involved lots of data of many sorts. Each of these descriptions – from the industry’s history to the individuals’ histories – captured but a fraction of the richness in each of the situations. Indeed, the “hardest” numbers on product performance, company revenues and competitors’ market shares really were after-the-fact proxy manifestations of all the processes, prioritizations and decisions amongst the groups and individuals that were observed in the nested, “subjective” portions of the study. Let’s drill more deeply into this question of where much quantitative data comes from. For example, the data used in many research projects comes directly or indirectly from the reported financial statements of publicly traded companies. Is this objective data? Johnson & Kaplan (1987) showed quite convincingly that the numbers representing revenues, costs and profits that appear in companies’ financial statements are typically the result of processes of estimation, allocation, debate and politics that can produce grossly inaccurate reflections of true cost and profit. The subjective nature of financial statement data, and the skills and methods used by those who made those judgments, however, are hidden from the view of researchers who use the published numbers. The healthiest and probably the most accurate mindset for researchers is that nearly all research – whether presented in the form of large data sample analysis, a mathematical optimization model, or an ethnographic description of behavior – is a description of a situation and is, therefore, a case. And all data are subjective. Each form of data is a higher-level abstraction from a much more complex reality, out of which the researcher attempts to pull the most salient variables or patterns for examination. Generally, the subjectivity of data is glaringly apparent in field-based, ethnographic research, whereas it tends to be hidden behind numerical data. Researchers of every persuasion ought always to strive to examine phenomena not just through the lenses of different academic or functional disciplines, but through the lenses of multiple forms of data as well. And none of us ought to be defensive or offensive about the extent to which the data in our or others’ research are subjective.
We are all in the same boat, and are obligated to do our best to be humble and honest with ourselves and our colleagues as we participate individually within and collectively across the theory building cycle. [Footnote 4: An excellent account that has helped us understand how pervasive the exercise of subjectivity is in the generation of “facts” is E.H. Carr’s (1961) treatise, What Is History? Carr describes how even the most complete historical accounts simply summarize what those who recorded events decided were important or interesting enough to record. In most processes that generate numerical data, the subjectivity that was exercised in the process of recording or not recording lies hidden.]

Implications for Course Design

Schools of management generally employ two methods of classroom instruction: case-based classes and lecture-based classes. These are descriptive categorizations of the phenomena. Attempts to assess which method of instruction is associated with the best outcomes are fraught with anomaly. We suggest that there is a different, circumstance-based categorization scheme that may constitute a better foundation for a theory of course design: whether the instructor is using the course to develop theory, or to help students practice the use of theory. When designing a course on a subject about which normative theory has not yet emerged, designing the course to move up the inductive side of the theory pyramid can be very productive. For example, Harvard Business School professor Kent Bowen decided several years ago that because a significant portion of HBS graduates end up running small businesses, he ought to create a course that prepares students to do that. He then discovered that the academic literature was amply stocked with studies of how to structure deals and start companies, but that there wasn’t much written about how to run plain old low-tech, slow-growth companies. Bowen tackled the problem with an inductive course-design strategy. He first wrote a series of cases that simply described what managers in these sorts of companies worry about and do. In each class Bowen led the students in case discussions whose purpose was to understand the phenomena thoroughly. After a few classes, Bowen paused, and orchestrated a discussion through which they sought to define patterns in the phenomena – to begin categorizing by type of company, type of manager, and type of problem. Finally, they explored the association between these types and the outcomes of interest. In other words, Bowen’s course had an inductive architecture that moved up the theory pyramid. Then, armed with their preliminary body of theory, Bowen and his students cycled down the deductive side of the pyramid to examine more companies in a broader range of circumstances. This allowed them to discover things that their initial theory could not explain; and to improve their constructs, refine their classification scheme, and improve their understanding of what causes what, and why. There is another circumstance – where well-researched theories pertaining to a field of management already exist. In this situation, a deductive course architecture can work effectively. For example, Clayton Christensen’s case-based course, Building a Sustainable Enterprise, is designed deductively. For each class, students read a paper that summarizes a normative theory about a dimension of a general manager’s job. The students also study a case about a company. They then look through the lenses of the theory, to see if it accurately explains what historically happened in the company.
They also use the theory to discuss what management actions will and will not lead to the desired outcomes, given the situation the company is in. Because the cases are complicated, students often discover an anomaly that then enables the class to revisit the categorization scheme and the associated statement of causality. Students follow this process, theory after theory, class after class, for the semester – and in the process, learn not just how to use theory, but how to improve it. [Footnote 5: At one point Christensen attempted to teach his course through an inductive architecture. Case by case, he attempted to lead his students to discover well-documented theories that prior scholars already had discovered. The course was a disaster – the wrong architecture for the circumstance. Students could tell that Christensen already had the answer, and his attempts to orchestrate a case discussion seemed like the professor was asking the students to guess what was on his mind. The next year, Christensen revised his course to the deductive architecture described above, and students reacted very positively to the same material.]

As the experiences of Professors Bowen and Christensen suggest, the dichotomy that many see between teaching and research need not create conflict. It may be better to view developing and teaching courses as course research. And there are two circumstances in which professors might find themselves. When a body of theory has not yet coalesced, an inductive architecture is productive. When useful theory already has emerged, then a deductive architecture can make sense. In both circumstances, however, instructors whose interest is to build theory and help students learn how to use theory can harness the brainpower of their students by leading them through cycles up and down the theory-building pyramid.

Implications: Theory as Method

Building theory in management research is how we define and measure our value and usefulness as a research community to society. We have focused on specific examples from management research to illustrate how our approaches to the empirical world shape what we can represent and can value and, more broadly, how theory collectively shapes the field of management research. This reminds us that building theory at an individual or collective level, handing off or picking up the baton, is not a detached or neutral process, yet the model developed here gives us a method to guide these efforts. From this model we recognize, first, the importance of both the inductive and deductive sides of the pyramid; second, how subsequent cycles move us from attributes and substantive categories toward a circumstance-based understanding and more formal theory; and third, eventually, an understanding of the relational properties that are of consequence and define the boundary conditions wherein the theory is of value. This is our ultimate aim: As students of business we readily accept that if employees in manufacturing and service companies follow robust processes they can predictably produce outputs of quality and value. When reliable processes are followed, success and failure in producing the desired result become less dependent upon the capabilities of individual employees, because they are embedded in the process. We assert that the same can be true for management researchers. If we follow a robust, reliable process, even the most “average” of us can produce and publish research that is of high value to academics and practitioners.
Parking Lot for Important Ideas that Need to Go Somewhere:

So a major question that arises in conducting research is: how do we know we are categorizing or measuring the best things to help us understand the phenomena of interest? Glaser and Strauss state that the elements of theory are, first, the conceptual categories with their conceptual properties and, second, the generalized relations among categories and their properties (1967: 35-43). A way to proceed with combining these elements is to emphasize a “relational” approach to theorizing (Bourdieu and Wacquant, 1992: 224-233) rather than just a substantialist approach. As already alluded to, a substantialist approach emphasizes “things” to be counted and categorized, such as people, groups, products, or organizations. A relational approach, however, emphasizes the properties between things in a given area of interest, or what determines the relative positions of force or power between people, groups or organizations. The reason that most research follows a substantialist approach is that most methodological tools are focused on, and best suited to, identifying convenient sources of data that can be counted and categorized more readily than the relational properties that exist between individuals, groups or organizations in a given social space over time (Bourdieu, 1989). Given the methodological focus toward convenient sources of data to collect, it is not surprising that a substantialist approach dominates most of management research, as well as the social sciences. For example, the concept of “core competency” (Selznick, 1957) was developed to account for organizations that were successful in their environments. This concept became very useful in the field of strategy in the late 1980s and the 90s (Prahalad and Hamel, 1990). However, the limitation of this category is that it was used to identify only successful companies; less successful companies were seen as lacking a core competency. The field of strategy did not begin to look more closely at the concept until Dorothy Leonard’s research (1992, 1995) focused on the processes and outcomes that identified how a core competency can turn into a source of core rigidity. Leonard found that changes in a firm’s “relations” to its suppliers and customers determine whether the firm can remain competitive. The corollary of this is that a core competency can become a core rigidity, diminishing competitive strength. By identifying these consequential “relations,” Leonard not only provided a deeper formalization of “competency,” but also proved helpful to managers in suggesting how to apply their firm’s resources to avoid this competency-rigidity tendency. While a relational approach can push research to a deeper level of formalization, it raises methodological challenges. Because relations among individuals, groups or organizations are most telling as they change over time, a relational approach requires both the means of collecting data over time and a method of analyzing and representing the insights that such data can reveal. In one of the most influential ethnographic studies of technology implementation in management research, Barley’s careful ethnographic analysis (1986; 1988; 1990) provided a comparative and temporal window into the implementation of the same technology in similar hospital settings.
Despite these similarities, Barley documented very different outcomes in how radiologists and technicians jointly used the CT scanning technology implemented. Based on these different outcomes, he asserted that technological and social structures mutually adapted differently over time. Barley’s observations over time helped to replace the either/or debate between the static view of technological determinism and the situated view of technology. Using Barley’s empirical documentation, Black, Carlile and Repenning (2003) formalized his observation at a more specific causal level through the use of a system dynamics method. This allowed them to specify the relation between radiologists and technicians and how their relative expertise in using the technology explains the different outcomes that Barley documented. Even though Barley recognized the importance of the “distribution of expertise” (Barley, 1986) between the two groups, he lacked a methodology to represent how, over time, the relative accumulations of expertise accounted for the different outcomes he observed. With this more formalized approach, Black et al. could state that a balance in “relative expertise” in using the new technology was essential in developing collaboration around a new technology. The specification of these relational properties was an improvement upon Barley’s managerial suggestion that a more decentralized organization is better able to successfully implement a new technology than a centralized one. This more formalized theory and relational understanding provides specific guidance to a practitioner about what to do when faced with the challenge of implementing a new technology when collaboration is desired. This relational approach goes farther than a “contingency theory” approach (Lawrence and Lorsch, 1967)—because it recognizes not only that things are contingent, but that in any situation some things, some relations, matter more than others in explaining the contingent (different) outcomes possible. The development of contingency theory has provided significant insight into the field of organizational behavior and design because it has identified that circumstances do affect outcomes. However, the fact that contingency theory is viewed by many as a stand-alone theory rather than a further reason to search for the particular sources of contingency limits the theory-building effort. This points to the proclivity of many researchers to leap directly from phenomena to theory and back again. If we continue around the theory building cycle, what we at first call contingent (e.g., decentralization versus centralization) upon further analysis reveals the underlying relational properties and why those relations are most consequential (e.g., how and why relative expertise matters).

References

Allison, G. (1971), The Essence of Decision. Glenview, IL: Scott, Foresman & Co.
Argyris, C. (1993), On Organizational Learning. Cambridge, MA: Blackwell.
Argyris, C. & Schon, D. (1976), Theory in Practice. San Francisco: Jossey-Bass.
Baldwin, C. and Clark, K.B. (2000), Design Rules: The Power of Modularity. Cambridge, MA: MIT Press.
Barley, S.R. (1986), “Technology as an occasion for structuring: Evidence from observations of CT scanners and the social order of radiology departments.” Administrative Science Quarterly, 31, 1: 78-108.
Black, L., Repenning, N. and Carlile, P.R. (2002), “Formalizing theoretical insights from ethnographic evidence: Revisiting Barley’s study of CT-scanning implementations.” Under revision, Administrative Science Quarterly.
Bourdieu, P. (1989/1998), Practical Reason. Stanford: Stanford University Press.
Bourdieu, P. and Wacquant, L. (1992), An Invitation to Reflexive Sociology. Chicago: University of Chicago Press.
Bower, Joseph (1970), Managing the Resource Allocation Process. Englewood Cliffs, NJ: Irwin.
Bower, J.L., and Gilbert, C.G., eds. (2005), From Resource Allocation to Strategy. Oxford University Press.
Burgelman, Robert & Leonard Sayles (1986), Inside Corporate Innovation. New York: The Free Press.
Burgelman, Robert (2002), Strategy Is Destiny. New York: The Free Press.
Campbell, D.T. and Stanley, J.C. (1963), Experimental and Quasi-Experimental Designs for Research. Boston: Houghton Mifflin.
Carlile, P.R. (2003), “Transfer, translation and transformation: Integrating approach in sharing and assessing knowledge across boundaries.” Under revision, Organization Science.
Carr, E.H. (1961), What Is History? New York: Vintage Books.
Chandler, A.D. Jr. (1977), The Visible Hand: The Managerial Revolution in American Business. Cambridge, MA: Belknap Press.
Chandler, A.D. Jr. (1990), Scale and Scope: The Dynamics of Industrial Capitalism. Cambridge, MA: The Belknap Press.
Christensen, C.M. (1997), The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. Boston: Harvard Business School Press.
Chesbrough, H.W. (1999), “The differing organizational impact of technological change: A comparative theory of institutional factors.” Industrial and Corporate Change, 8: 447-485.
Clegg, Brian (2003), The First Scientist: A Life of Roger Bacon. New York: Carroll & Graf Publishers.
Daneels, Erwin (2005), “The Effects of Disruptive Technology on Firms and Industries,” Journal of Product Innovation Management (forthcoming special issue that focuses on this body of theory).
Gilbert, C.G. (2001), A Dilemma in Response: Examining the Newspaper Industry’s Response to the Internet. Unpublished DBA thesis, Harvard Business School.
Gilbert, C.G., and Christensen, C.M. (2005), “Anomaly Seeking Research: Thirty Years of Development in Resource Allocation Theory.” In Bower, J.L., and Gilbert, C.G., eds., From Resource Allocation to Strategy. Oxford University Press, forthcoming.
Fleming, L. and Sorensen, O. (2001), “Technology as a complex adaptive system: Evidence from patent data.” Research Policy, 30: 1019-1039.
Glaser, B. & Strauss, A. (1967), The Discovery of Grounded Theory: Strategies for Qualitative Research. London: Weidenfeld and Nicolson.
Hayes, R. (1985), “Strategic Planning: Forward in Reverse?” Harvard Business Review, November-December: 111-119.
Hayes, R. (2002), “The History of Technology and Operations Research.” Harvard Business School working paper.
Hayes, R. and Abernathy, W. (1980), “Managing our Way to Economic Decline.” Harvard Business Review, July-August: 67-77.
Hayes, R. and Wheelwright, S.C. (1984), Restoring our Competitive Edge. New York: John Wiley & Sons.
Hayes, R., Wheelwright, S. and Clark, K. (1988), Dynamic Manufacturing. New York: The Free Press.
Henderson, R.M. & Clark, K.B. (1990), “Architectural Innovation: The Reconfiguration of Existing Systems and the Failure of Established Firms.” Administrative Science Quarterly, 35: 9-30.
Hunter, S.D. (2002), “Information Technology, Organizational Learning and Firm Performance.” MIT/Sloan Working Paper.
Hutton, A., Miller, G., and Skinner, D. (2000), “Effective Voluntary Disclosure.” Harvard Business School working paper.
James, W. (1907), Pragmatism. New York: The American Library.
Johnson, H.T. & Kaplan, R. (1987), Relevance Lost. Boston: Harvard Business School Press.
Kaplan, A. (1964), The Conduct of Inquiry: Methodology for Behavioral Research. Scranton, PA: Chandler.
Kaplan, R. (1986), “The Role for Empirical Research in Management Accounting.” Accounting, Organizations and Society, 4: 429-452.
Kuhn, T. (1962), The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Lawrence, P.R. and Lorsch, J.W. (1967), Organization and Environment. Boston: Harvard Business School Press.
Leonard, D. (1995), Wellsprings of Knowledge. Boston: Harvard Business School Press.
Poole, M. & Van de Ven, A. (1989), “Using Paradox to Build Management and Organization Theories.” Academy of Management Review, 14: 562-578.
Popper, K. (1959), The Logic of Scientific Discovery. New York: Basic Books.
Porter, M. (1980), Competitive Strategy. New York: The Free Press.
Porter, M. (1985), Competitive Advantage. New York: The Free Press.
Porter, M. (1991), The Competitive Advantage of Nations. New York: The Free Press.
Raman, Ananth, (need citation)
Roethlisberger, F. (1977), The Elusive Phenomena. Boston: Harvard Business School Press.
Rumelt, Richard P. (1974), Strategy, Structure and Economic Performance. Cambridge, MA: Harvard University Press.
Selznick, P. (1957), Leadership in Administration: A Sociological Interpretation. Berkeley: University of California Press.
Simon, H. (1976), Administrative Behavior (3rd edition). New York: The Free Press.
Solow, R.M. (1985), “Economic History and Economics.” The American Economic Review, 75: 328-331.
Sorensen, O. and Sorensen, J. (2001), “Research note: Finding the right mix: Franchising, organizational learning, and chain performance.” Strategic Management Journal, 22: 713-724.
Sorensen, J. (2002), “The Strength of Corporate Culture and the Reliability of Firm Performance.” Administrative Science Quarterly, 47: 70-91.
Spear, S.C. and Bowen, H.K. (1999), “Decoding the DNA of the Toyota production system.” Harvard Business Review, September-October.
Stinchcombe, Arthur L. (1968), Constructing Social Theories. New York: Harcourt, Brace & World.
Sull, D.N. (2000), “Industrial Clusters and Organizational Inertia: An Institutional Perspective.” Harvard Business School working paper.
Van de Ven, A. (2000), “Professional Science for a Professional School.” In Beer, M. and Nohria, N. (Eds), Breaking the Code of Change. Boston: Harvard Business School Press.
Weick, K. (1989), “Theory Construction as Disciplined Imagination.” Academy of Management Review, 14: 516-532.
Womack, J.P., Jones, D.T. & Roos, D. (1990), The Machine that Changed the World. New York: Rawson Associates.
Yin, R. (1984), Case Study Research. Beverly Hills: Sage Publications.
Yoffie, David, Sasha Mattu & Ramon Casadesus-Masanell (2002), “Intel Corporation, 1968-2003,” Harvard Business School case #9-703-427.
The Psychological Costs of Pay-for-Performance: Implications for the Strategic Compensation of Employees

Ian Larkin (1), Lamar Pierce (2), and Francesca Gino (1)

Working Paper 11-056. Forthcoming, Strategic Management Journal.

Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.

(1) Harvard Business School, Soldiers Field Road, Boston, MA 02163; ilarkin@hbs.edu, 617-495-6884; fgino@hbs.edu, 617-495-0875
(2) Olin Business School, Washington University in St. Louis, One Brookings Drive, Box 1133, St. Louis, MO 63130; pierce@wustl.edu, 314-935-5205

Abstract

Most research linking compensation to strategy relies on agency theory economics and focuses on executive pay. We instead focus on the strategic compensation of non-executive employees, arguing that while agency theory provides a useful framework for analyzing compensation, it fails to consider several psychological factors that increase costs from performance-based pay. We examine how psychological costs from social comparison and overconfidence reduce the efficacy of individual performance-based compensation, building a theoretical framework predicting more prominent use of team-based, seniority-based, and flatter compensation. We argue that compensation is strategic not only in motivating and attracting the worker being compensated, but also in its impact on peer workers and the firm’s complementary activities. The paper discusses empirical implications and possible theoretical extensions of the proposed integrated theory.

Keywords: compensation; pay; incentives; principal-agent models; motivation; psychology

Compensation is a critical component of organizational strategy, influencing firm performance by motivating employee effort and by attracting and retaining high-ability employees. Compensation is the largest single cost for the average company (Gerhart, Rynes and Fulmer, 2009), with employee wages accounting for 60 to 95 percent of average company costs excluding a firm’s physical cost of goods sold (Bureau of Labor Statistics, 2009). Although literatures across disciplines including economics, social psychology and human resource management take different approaches to studying compensation, the strategy literature on compensation is dominated by one theory and one focus: the use of agency theory and a focus on executive compensation. Indeed, by our count, over 80 percent of recent papers on compensation in leading strategy journals explicitly or implicitly use agency theory as the dominant lens of analysis. [Footnote 3: Between 2004 and 2009, one hundred fifty-two papers in five of the leading strategy journals – Strategic Management Journal, Organization Science, Management Science, Academy of Management Journal and Academy of Management Review – contained the word “compensation” in the topic listed in the Social Sciences Citation Index. 82 of these explicitly used the lens of agency theory, and a further 45 clearly used the basic predictions of agency theory in the research. Over 83 percent of the papers on compensation therefore rested on agency theory. In contrast, only 16 of the papers, or just more than 10 percent, discussed any concepts from social psychology or behavioral decision research. Similarly, a recent review article on compensation by Gerhart, Rynes and Fulmer (2009) contained over 220 citations, 60 of which were in strategy journals. Of these 60 articles, 52 explicitly or implicitly used agency theory as the dominant lens of analysis, and only three discussed social psychology in a significant way. Across these two data sources, 72 percent of compensation papers in strategy journals focused on executive pay.] Nearly three-quarters of these papers also examine executive compensation, rather than focusing on compensation for “non-boardroom” employees. The impact of executive compensation on firm strategy is undeniable
(e.g., Dalton et al., 2007; Wowak and Hambrick, 2010), given the importance of attracting top executive talent and financially motivating strong effort and profitable choices. Yet pay for top executives averages only a few percentage points of the total compensation costs of the firm (Whittlesey, 2006), meaning the bulk of a company’s wage bill represents pay to non-executives. Furthermore, employee compensation is intimately tied to firm decisions regarding technology, diversification, market position, and human capital (Balkin and Gomez-Mejia, 1990; Nickerson and Zenger, 2008), and has widespread implications for organizational performance (Gomez-Mejia, 1992). Non-executive compensation therefore remains an important but under-explored topic in the strategy literature. In this paper, we examine the strategic implications of compensation choices for non-executive employees. We argue that agency theory falls short in providing fully accurate predictions of strategic compensation choices by firms for non-executive employees. [Footnote 4: The question of the extent to which agency theory is an adequate framework for explaining strategic executive compensation is outside the scope of this paper. We believe, however, that the theory developed in the paper will prove useful in examining executive compensation choices as well.] The prominent use of agency theory by strategy scholars 35 years after its introduction by Jensen and Meckling (1976) and Holmstrom (1979) suggests that this theoretical approach has substantial merit. Yet, most firms’ compensation strategies for non-executive employees do not fully align with the predictions of agency theory. As detailed below, in fact, agency theory predicts the use of individualized performance-based pay far more frequently than is actually observed for non-executive employees. We argue that the predictions of agency theory often fail because performance-based pay is less effective than the theory predicts. We propose a more realistic theory of strategic compensation for non-executive employees that uses the basic framework of agency theory but incorporates important insights from social psychology and behavioral decision research. We argue that while these insights impact compensation strategy in many ways, two main factors are of first-order importance: social comparison processes and overconfidence. We concentrate on these factors because they
most dramatically affect the underlying differences in the objectives and information on which agency theory is based. Also, these factors strongly influence firm performance due to their impact not only on the behavior of the employee being compensated, but also on the decisions and actions of other employees. We first incorporate these factors into an agency theory framework, and then argue that the true costs of individual performance-based systems are far greater than predicted by agency theory. We use our theory to derive a set of testable propositions regarding how psychological factors, economic factors, and information influence both the efficacy and prevalence of certain strategic compensation choices. Our main argument is that psychological factors raise the cost of individual pay-for-performance, leading firms to rely on team-based, seniority-based and flatter compensation strategies such as hourly wages or salaries. Although several notable studies in the management literature have examined the effect of individuals’ psychology on compensation (e.g., Gerhart and Rynes, 2003; Gerhart, Rynes and Fulmer, 2009), to the best of our knowledge our paper is the first to integrate economic and psychological factors into a theory of how strategic employee compensation impacts firm strategy and performance. The role psychology plays in compensation choice is by no means a new topic. Gerhart, Rynes and Fulmer (2009) cite 42 articles in psychology journals that examined compensation issues, yet most of these studies ignore or even dismiss the relevance of economic theory, in our opinion making the same mistake as agency theory research in neglecting relevant factors from other disciplines. Additionally, these studies do not attempt to fully assess the costs and benefits to firms of different compensation choices, and tend to be more narrowly focused on partial effects. Similarly, while some economists acknowledge the importance of psychological factors such as fairness in wages (Akerlof and Yellen, 1990; Fehr and Gachter, 2000; Card et al., 2010) and the non-pecuniary costs and benefits such as shame (Mas and Moretti, 2009), social preferences (Bandiera, Barankay, and Rasul, 2005), and teamwork (Hamilton, Nickerson, and Owan, 2003), these papers primarily focus on social welfare or individual or team performance. Only Nickerson and Zenger (2008) discuss the strategic implications of psychological processes for employee compensation but, unlike the current paper, focus exclusively on the role of employee envy in the firm. Our work seeks to build theory that integrates the predictions of agency theory and insights from the psychology literature in a comprehensive way. Agency theory is a natural lens by which to study strategic compensation because it approaches the setting of compensation from a cost-benefit viewpoint, with the firm’s principals, or owners, as the fundamental unit of analysis. By using agency theory as a base, our integrated framework leads to a rich set of testable predictions around the methods by which firms strategically set compensation policy. We further seek to illustrate the impact of non-executive compensation on the broader strategy of the firm, explaining how our framework can inform other complementary activities and choices made by the firm. The paper is laid out as follows.
In the next section, we briefly introduce the approach we take to building an integrated theory of strategic compensation. We then review agency theory as well as the literatures on social psychology and behavioral decision making for relevant and empirically supported insights regarding social comparison processes and overconfidence. Next, we combine insights from these literatures into an integrated theory of strategic compensation. We end the paper by examining the implications of our theory for strategic compensation decisions by firms, and by discussing empirical implications, testable propositions and next steps.

The Implications of the Infrequency of Individual Pay-for-Performance

Our research is primarily motivated by the disconnect between the broad effectiveness of individual pay-for-performance predicted by agency theory and the relative infrequency with which it is observed. [Footnote 5: Note that “pay-for-performance” includes pay based on subjective measures of performance as well as objective ones. Agency theory holds that even when output is not observable or measurable, firms will often use performance-based, subjective measures of performance (e.g., Baker, 1992).] We hold that agency theory is correct in broadly equating the effectiveness of different compensation regimes with their prevalence. Compensation systems that tend to be more effective will be used more often. Although firms often deviate from the most efficient systems and can make mistakes, in general the prevalence of systems and decisions is highly correlated with efficiency and effectiveness (Nelson, 1991; Rumelt and Schendel, 1995). We note that the theory we propose in this paper is focused on effectiveness, but due to the above correlation we will often make reference to the prevalence of certain schemes as prima facie evidence of effectiveness. Indeed, the infrequent use of individual performance-based pay for non-executives casts doubt on its overall efficacy (Zenger, 1992). A 2010 international survey of 129,000 workers found only 40 percent received pay tied to performance at any level (individual, team, firm) (Kelly, 2010), and over half of Fortune 1000 companies report using individual performance-based pay for “some,” “almost none” or “none” of their work force (Lawler, 2003). Even when performance-based pay is used, the proportion contingent on performance is typically low. The median bonus for MBA graduates, whose employment skews toward professional services that frequently use performance pay, represents only 20 percent of base salary (VanderMay, 2009). Performance pay based on team metrics – such as division profitability, product market share, or other non-individual measures – is far more common than individual performance-based pay. This unexpectedly low prevalence suggests higher costs or lower performance from individual incentives than agency theory predicts. Still, this discrepancy does not mean that agency theory fails to garner empirical support. Many of the core predictions of agency theory have been empirically validated in experimental and real-world settings (Gerhart, Rynes and Fulmer, 2009; Prendergast, 1999). Our theory takes the insights from agency theory that have received strong empirical support and integrates them with empirically validated insights from social psychology. We argue that only by using an integrated cost-benefit lens can accurate predictions around compensation be made at the level of the firm.
Agency Theory and Strategic Compensation

At its core, agency theory posits that compensation is strategic in that firms will use the compensation program that maximizes profits based on its unique costs and benefits. In agency theory, costs arise due to differences between firms and employees in two crucial areas: objectives and information. Two potential costs arise from these differences: an employee may not exert maximum effort (or effort may be inefficiently allocated), and the firm may pay workers more than they are worth (i.e., their expected marginal product). In this section we detail the key differences between employees and firms in objectives and information, and the resulting predictions from agency theory about a firm’s compensation strategy. Figure 1 summarizes the arguments described below.

*** Insert Figure 1 here ***

Objectives

The fundamental tension in agency theory arises from differences in the objectives of firms and employees. Firms seek to maximize profits, and increased compensation affects profitability by motivating employee effort (+) and attracting more highly skilled employees (+) while increasing wage costs (-) (Prendergast, 1999). Employees, on the other hand, seek to maximize utility. Increased compensation affects utility by increasing income (+), yet employees must balance utility from income with the disutility (or cost) of increasing effort (-). Agency theory argues that effort is costly to employees at the margin; employees may intrinsically enjoy effort at small or moderate levels, but dislike increases in effort at higher levels (Lazear and Oyer, 2011). Agency theory further argues that firms must pay workers a premium for taking on any risk in pay uncertainty, since employees are risk averse. This creates distortion with risk-neutral firm owners, who can use financial markets to optimally hedge against risk (Jensen and Meckling, 1976). However, we limit our discussion of risk in this paper for the sake of brevity, and because agency theory’s predictions on risk have demonstrated very little if any empirical support (Prendergast, 1999). In contrast, agency theory’s prediction on the relationship between effort and pay has been largely supported in the empirical literature (Prendergast, 1999; Lazear and Oyer, 2011).

Information

Two information asymmetries, where the worker knows more than the firm, drive compensation choices in agency theory. Workers know their own effort exertion and skill level, while firms have imperfect information about both. Agency theory holds that firms overcome these asymmetries by providing incentives for workers to exert effort and self-select by skill level. For example, by offering a low guaranteed wage with a large performance element, a firm can incentivize higher effort from all workers, but it can also attract and retain workers with high skills, while “sorting away” those with low skills (Lazear, 1986; Lazear and Oyer, 2011).

Predictions of standard agency theory

The basic tradeoffs in agency theory are around effort (good for the firm but bad for the employee) and pay (bad for the firm but good for the employee). Given the information problems described above, and ignoring psychological factors, firms should pay employees for performance if the productivity gains from the effort it motivates are greater than the cost of the pay.
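To make this cost-benefit logic concrete, here is a minimal sketch of the canonical linear principal-agent model in the tradition of Holmstrom (1979) and Jensen and Meckling (1976) that the paper draws on; the functional forms and the symbols q, e, alpha, beta, c, and r are illustrative textbook assumptions, not notation used by the authors. The worker produces noisy output and is paid a base wage plus a piece rate:

\[
q = e + \varepsilon, \quad \varepsilon \sim N(0,\sigma^{2}), \qquad w = \alpha + \beta q .
\]

A risk-averse worker with effort cost \( \tfrac{c}{2}e^{2} \) and risk-aversion coefficient \( r \) has certainty-equivalent utility

\[
CE = \alpha + \beta e - \tfrac{c}{2}e^{2} - \tfrac{r}{2}\beta^{2}\sigma^{2},
\]

so the privately optimal effort is \( e^{*} = \beta / c \): stronger performance pay induces more effort, which corresponds to Insight 1 below. Because the participation constraint forces the firm to reimburse both the effort cost and the risk premium, the profit-maximizing piece rate balances induced productivity against these costs:

\[
\beta^{*} = \frac{1}{1 + r c \sigma^{2}} .
\]

In this stylized setup, performance pay is attenuated only by risk and measurement noise; with those set aside, as the paper largely does, the model pushes toward heavy use of individual pay-for-performance, which is precisely the prediction whose limited real-world prevalence motivates the argument here.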
Secondarily, pay-for-performance systems separate skilled employees who earn more under such schemes from unskilled ones better off in settings where performance does not matter. Basic agency theory holds that there are two basic alternatives firms take when setting pay: paying a flat wage, or paying for performance. The most obvious way to pay for performance is to base pay on some observed output of the worker or company, but firms can also base pay on subjective measures not tied to observed output. [Footnote 6: Agency theory holds that firms are more likely to use subjective measures as the correlation between observed output and effort is lower (Baker, 1992).] The tradeoffs noted above lead to three fundamental insights on information and individual pay-for-performance that emerge from agency theory:

Insight 1: Employees work harder when their pay is based on performance.
Insight 2: Firms are more likely to use performance-based pay (vs. flat pay) when they have less information about actual employee effort.
Insight 3: Firms are more likely to use performance-based pay (vs. flat pay) as they have less information about employee skill level, and/or as employee skill level is more heterogeneous.

Team-based compensation

Agency theory also approaches team-based compensation with a cost-benefit lens; team-based compensation improves performance when benefits from coordination outweigh costs from the reduced effort of free-riding (Bonin et al., 2007). Notably, standard agency theory views team-based compensation as important only when the firm chooses a production process requiring close integration across a team to internalize production externalities from individual workers. Consequently, when coordination is unnecessary, team-based incentives are unlikely to be efficient and firms set compensation strategy largely based on the observability of output, effort, and skill. If high-powered incentives are particularly important but individual effort is not observable, firms may use team-based compensation, although the costs of free-riding make this an exception rather than the rule. Furthermore, team-based pay on average may attract lower-skilled or less productive workers than individual-based pay due to lower earning potential and lower costs to shirking. [Footnote 7: Results from Hamilton, Nickerson, and Owan’s (2003) study of garment factory workers cast some doubt on these predictions. They found that high-ability workers prefer to work in teams, despite earning lower wages. This is consistent with recent work on how the social preferences of workers can overwhelm financial incentives (Bandiera, Barankay, and Rasul, 2005).] This leads to a fourth insight from standard agency theory:

Insight 4: Firms are more likely to use team-based performance pay vs. individual-based pay when coordination across workers is important, when free-riding is less likely, or when monitoring costs are low.

Basic predictions of agency theory

Given these four insights from agency theory, we present the likely compensation choices of firms under an agency theory model in Figure 2, where coordination by employees is not required and the primary determinants of pay are observability of output, effort, and ability. As noted in the left-hand figure, when ability is observable, individual performance-based pay is more likely to be used as a firm better observes individual output, but is less able to observe actual effort. When both effort and output are highly observable, firms prefer to use a set salary, where an employee is given a set wage regardless of performance. [Footnote 8: This prediction also stems from the assumed risk aversion of employees.] It is important to note that with effort and output both observable, this salary is inherently based on average performance.
They found that high-ability workers prefer to work in teams, despite earning lower wages. This is consistent with recent work on how the social preferences of workers can overwhelm financial incentives (Bandiera, Barankay, and Rasul, 2005). 8 This prediction also stems from the assumed risk aversion of employees.Strategic Compensation 12 While the worker can reduce effort for short periods, the observability of this effort means that the firm can adjust compensation or terminate the employee in response to observed output. *** Insert Figure 2 here *** As noted in the right-hand figure, the situation changes dramatically when individual skill is not observable. In such cases, compensation not only motivates employees, but also attracts types of employees to the firm. Individual performance-based pay is more likely across both margins on the graph: at a given level of output or effort observability, firms are more likely to use performance-based pay when employee skills are not observable compared to when they are. When it is important for employees to coordinate effort across tasks, a third compensation strategy comes into play: team performance-based pay. This refers to a pay system that measures and rewards performance at a level other than the individual, such as the division, product line or company. As depicted in Figure 3, assuming imperfect (but not zero) observability of individual output, team performance-based pay is more likely as coordination across employees increases and observability of an individual effort decreases. Finally, as individual effort observability increases, firms again prefer salaries as they are the most efficient form of compensation. As before, individual-based performance pay becomes more important as the need for sorting due to skill unobservability grows high. *** Insert Figure 3 here *** Agency theory provides a compact, plausible theory that predicts the profitability and use of performance-based pay in a wide number of settings. It is therefore surprising that individual performance-based pay is used so little (Camerer et al., 2004; Baker, Jensen and Murphy, 1988), given the strong empirical evidence of its impact on employee effort (e.g., Lazear, 1986; Paarsch and Shearer, 2000). Part of this inconsistency may be due to the fact that the induced effort is Strategic Compensation 13 directed toward non-productive or detrimental activities (Kerr, 1975; Oyer, 1998; Larkin, 2007). However, even considering these “gaming costs,” the magnitude in performance differences in the above empirical studies makes it difficult to believe gaming alone explains the dearth of performance-based pay. 9 Incorporating Insights from Psychology and Decision Research into Agency Theory We argue that the low prevalence of individual performance-based pay in firms reflects several important relationships between the psychology of employees and their pay, utility, and resulting actions. In each case, the psychological mechanism we suggest to be at work makes performance-based pay more costly for firms, which may help explain why performance-based pay is less common than agency theory predicts. However, we also argue that the basic structure of agency theory is still a useful lens for examining how insights from psychology and behavioral decision research affect compensation predictions. 
9 Note that the existence of costs from performance-based pay, as demonstrated in the studies above, does not mean that these pay systems are suboptimal. Agency theory would hold that the net benefits of the system, even including the identified costs, must be greater than the net benefits of any other system.

Like agency theory, our framework decomposes the strategic element of compensation into differences between firms and employees in objectives and information, and recognizes that there is a "work-shirk" tradeoff for the average employee. Integrating psychological insights into this agency-based framework allows us to put forward an integrated theory of strategic compensation that considers both economic and psychological factors, and a testable set of propositions. As with all models, we abstract away from many variables that are relevant to compensation, and focus on the two psychological factors which, in our view, have the largest impact on the methods by which firms compensate workers: overconfidence and social comparison processes. In this section, we discuss how these psychological factors add costs to performance-based compensation systems, using the framework developed in Section 2. These additions are depicted in Figure 4. Throughout the section, we will refer back to this figure to explain clearly how the consideration of these psychological costs modifies some of the main predictions of standard agency theory.

*** Insert Figure 4 here ***

Performance-based pay and social comparison

Social comparison theory (Festinger, 1954) introduces considerable costs associated with individual pay-for-performance systems because it argues that individuals evaluate their own abilities and opinions in comparison to referent others. Psychologists have long suggested that individuals have an innate desire to self-evaluate by assessing their abilities and opinions. Because objective, nonsocial standards are commonly lacking for most such assessments, people typically look to others as a standard. Generally, individuals seek and are affected by social comparisons with people who are similar to them (Festinger, 1954), gaining information about their own performance. As noted in Figure 4, social comparison theory adds a fourth information set to the three studied in agency theory: firms' and employees' knowledge about the pay of other employees. When deciding how much effort to exert, workers not only respond to their own compensation, but also respond to pay relative to their peers as they socially compare. In individual pay-for-performance systems, pay will inevitably vary across employees, generating frequent pay comparisons between peers. As suggested by equity theory (Adams, 1965), workers are not necessarily disturbed by such differences, since they consider information about both the inputs (performance) and outcomes (pay) in such comparisons. If workers were to rationally perceive pay inequality to be fairly justified by purely objective and easily observable performance differences, then such pay differences would generate few (if any) psychological costs. Yet pay comparisons can lead to distress resulting from perceptions of inequity if inputs or performance are either unobservable or perceptions of those inputs are biased. For example, employees might believe they are working longer hours or harder than referent coworkers, and if their pay level is relatively low, they will likely perceive inequity.
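Adams's (1965) equity comparison invoked here is conventionally written as a ratio test; the formalization below is a textbook rendering rather than anything from this paper.

\[
\frac{O_i}{I_i} \;\text{ versus }\; \frac{O_j}{I_j},
\]

where O_i denotes outcomes (pay) and I_i denotes inputs (effort or performance) for focal employee i, and j is the referent coworker. Perceived inequity, and the distress described above, arises when the two ratios are believed to diverge, which is especially likely when the referent's inputs I_j are unobservable or when perceptions of inputs are biased.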
Theoretical work in economics and strategy has followed psychology in arguing that such comparisons can lead to reduced effort (Solow, 1979; Akerlof and Yellen, 1990) and behavior grounded in envy, attrition, and the tendency to sabotage other workers within the same organization (Nickerson and Zenger, 2008; Bartling and von Siemens, 2010). 10 Empirical studies show that social comparisons are indeed important to workers (Blinder and Choi, 1990; Campbell and Kamlani, 1990; Agell and Lundborg, 2003), and can hurt morale (Mas, 2008), stimulate unethical behavior (Cropanzano et al., 2003; Pruitt and Kimmel, 1977; Gino and Pierce, 2010; Edelman and Larkin, 2009), and reduce effort (Greenberg, 1988, Cohn et al., 2011; Nosenzo, 2011). Perceived inequity can also increase turnover and absenteeism (Schwarzwald et al., 1992) and lower commitment to the organization (Schwarzwald et al., 1992). While this negative impact is typically stronger when the employee is disadvantaged (Bloom, 1999), costly behavior can also occur when the employee is advantaged and feels compelled to help others (Gino and Pierce, 2009). Perceived inequity in pay can furthermore have a costly asymmetric effect. Recent evidence suggests that below-median earners suffer lower job satisfaction and are more likely to search for a new job, while above-median earners generate no productivity benefits from 10 Social psychology’s work on equity and social comparison has slowly disseminated into the economics literature, having a profound impact on experimental economics (Rabin 1996), particularly in the literature on fairness (e.g., Camerer, 2003; Fehr & Gachter, 2000; Fehr & Schmidt, 1999).Strategic Compensation 16 superior pay (Card et al., 2010) and may even engage in costly actions to assuage guilt (Gino and Pierce, 2009). While not all below-median earners perceive unfairness, this evidence is certainly consistent with a substantial frequency of inequity perception, and may also reflect dissatisfaction with the procedures used to allocate pay across workers. Social comparison across firms by CEO’s has also been shown to lead to costly escalations in executive salaries, a phenomenon that can also occur between employees in the same firms (Faulkender and Yang, 2007; DiPrete, Eirich, and Pittinsky, 2008). As noted in Figure 4, social comparison theory adds two insights to the costs of performance-based pay: Insight 5a: Perceived inequity through wage comparison reduces the effort benefits of individual pay-for-performance compensation systems. Insight 5b: Perceived inequity through wage comparison introduces additional costs from sabotage and attrition in individual pay-for-performance compensation systems. Furthermore, employees may believe “random shocks” to performance-based pay as being unfair, especially if these shocks do not occur to other workers. If a regional salesperson’s territory suffers an economic downturn, for example, this may impact their pay despite no change in their effort or ability. Other shocks, such as weather, equipment malfunctions, customer bankruptcies, or changing consumer preferences, may negatively impact worker compensation outside the employee’s control. Resulting perceptions of unfairness can lead to the same problems noted above: lack of effort, sabotage and attrition. 
As noted in Figure 4, this generates an additional insight:

Insight 6: Perceived inequity arising through random shocks in pay introduces additional costs from effort, sabotage, and attrition in individual pay-for-performance compensation systems.

Therefore, social comparison theory essentially adds another information set to agency theory: the pay of others. The firm, of course, knows everyone's pay, but the effects of social comparison on pay are greater as workers have more information about the pay of referent others. The psychology literature has until recently placed less emphasis on tying the importance of social comparisons to employee actions which benefit or cost firms, and the strategy literature has, with the exception of Nickerson and Zenger (2008), not yet integrated this construct into studies of organizational strategy. As we show in a later section, the failure of agency theory to include social comparison costs means that many of the firm-wide costs of performance-based pay are missed.

Overconfidence and performance-based pay

Psychologists and decision research scholars have long noted that people tend to be overconfident about their own abilities and too optimistic about their futures (e.g., Weinstein, 1980; Taylor and Brown, 1988). Overconfidence is thought to take at least three forms (Moore and Healy, 2008). First, individuals consistently express unwarranted subjective certainty in their personal and social predictions (e.g., Dunning et al., 1990; Vallone et al., 1990). Second, they commonly overestimate their own ability; and finally, they tend to overestimate their ability relative to others (Christensen-Szalanski and Bushyhead, 1981; Russo and Schoemaker, 1991; Zenger, 1992; Svenson, 1981). Recent research has shown that overconfidence is less an individual personality trait than a bias that affects most people, depending on the task at hand (e.g., Moore and Healy, 2008). People tend to be overconfident about their ability on tasks they perform very frequently, find easy, or are familiar with. Conversely, people tend to be underconfident on difficult tasks or those they seldom carry out (e.g., Moore, 2007; Moore and Kim, 2003). This tendency has large implications for overconfidence in work settings, since work inherently involves tasks with which employees are commonly very familiar. 11 We suggest that overconfidence changes the informational landscape by which firms determine compensation structure, as noted in Figure 4.

When overconfident, employees' biased beliefs about their own ability and effort alter the cost-benefit landscape of performance-based pay. First and foremost, performance-based pay may fail to efficiently sort workers by skill level, reducing one of the fundamental benefits of performance-based pay. Overconfident workers will tend to select into performance-based compensation systems, particularly preferring individual-based pay-for-performance (Cable and Judge, 1994; Larkin and Leider, 2011). This implies that workers may no longer accurately self-select into optimal workplaces based on the incentives therein. Instead, overestimating their ability, they may select into performance-based positions that are suboptimal for their skill set. If workers overestimate the speed with which they can complete tasks (Buehler et al., 1994), for instance, they may expect much higher compensation than they will ultimately receive, leading to repeated turnover as workers seek their true vocation.
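This sorting distortion can be restated in the notation sketched earlier (again ours, not the authors'): a worker of true ability a, holding the biased belief \hat{a} = a + b with b > 0, accepts a performance-based job over a flat outside wage \bar{w} whenever

\[
\mathbb{E}\big[w_{\text{pfp}} \mid \hat{a}\big] > \bar{w},
\]

so the upward bias b lowers the true-ability threshold at which workers enter performance-based jobs. Some entrants whose realized pay \mathbb{E}[w_{\text{pfp}} \mid a] falls short of both their expectations and \bar{w} would have been screened out under unbiased beliefs, which is one way to read the turnover pattern just described.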
While this sorting problem may impact some firms less due to superior capability to identify talent, considerable evidence suggests that hiring lower-ability workers is a widespread problem (Bertrand and Mullainathan, 2004). A similar sorting problem may occur when overconfident workers are promoted more frequently under a tournament-based promotion system, exacerbating problems as they rise to managerial positions (Goel and Thakor, 2008). These overconfident managers may in turn attract similar overconfident employees, amplifying future problems (Van den Steen, 2005). Based on this reasoning, we propose the following insight: 11 Economists have begun to study the effect of overconfidence on firm and employee actions, finding overconfidence influences individuals’ market-entry decisions (Camerer and Lovallo, 1999), investment decisions (e.g., Barber and Odean, 2001), and CEOs’ corporate decisions (e.g., Malmendier and Tate, 2005).Strategic Compensation 19 Insight 7: Overconfidence bias reduces the sorting benefits of individual pay-forperformance compensation systems. Overconfidence not only has immediate implications for the optimal sorting of workers across jobs, but it also may lead to reduced effort when combined with social comparison. A worker, believing himself one of the most skilled (as in Zenger, 1992), will perceive lower pay than a peer as inequitable, despite that peer’s true superior performance. This perceived inequity would be particularly severe when there is imperfect information equating effort and ability to measurable and thus compensable performance. We thus suggest that: Insight 8a: Overconfidence bias increases perceived inequity in wage comparison and thereby decreases the effort benefits of individual pay-for-performance compensation systems. Insight 8b: Overconfidence bias increases perceived inequity in wage comparison and thereby aggravates costs from sabotage and attrition in individual pay-for-performance compensation systems. Reducing Psychological Costs through Team-Based and Scaled Compensation Although psychological costs of social comparison and overconfidence make individual pay-for-performance systems less attractive than under a pure agency theory model, firms may still wish to harness the effort-improvement from performance-based pay. We argue that firms frequently use intermediate forms of compensation that combine some level of pay-forperformance with the flatter wages of fixed salaries. In this section we use an integrated agency and psychology lens to analyze the costs and benefits of two of these intermediate forms: teambased and scale-based wages. While both team-based and scale-based systems can be costly due to decreased effort, they present clear psychological benefits. Under a team-based system, an employee is compensated based on the performance of multiple employees, not just their individual performance. The primary psychological benefit of team-based performance pay is that it reduces the costs of social comparison, making it relatively Strategic Compensation 20 more attractive than predicted by agency theory, which holds that team-based pay will be used only when there are benefits to coordination across employees that are greater than the costs of free-riding. Under a scaled wage system, employees are compensated in relatively tight “bands” based largely on seniority. 
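Schematically, and in hypothetical notation of our own, the two intermediate forms just defined can be written as

\[
w_i^{\text{team}} = \alpha + \beta \cdot \frac{1}{n}\sum_{j \in T} y_j, \qquad
w_i^{\text{scale}} = s(\text{grade}_i,\ \text{seniority}_i),
\]

where T is worker i's team of size n. Team-based pay equalizes wages within the team, removing within-team wage comparisons, but dilutes each member's marginal incentive by roughly 1/n, which is the free-riding cost noted earlier; scaled pay drops measured performance from the wage altogether and ties it to observable grade and seniority bands.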
As with team-based systems, scaled wages result in lower costs from social comparison and overconfidence, and are therefore more attractive than standard agency theory would predict, even if effort is somewhat attenuated due to weakened incentives. Reducing social comparison costs through intermediate forms of compensation In team-based compensation systems, the firm retains performance-based incentives, but instead of tying them to individual performance they link them with the performance of teams of employees. These teams may be extremely large, such as at the business unit or firm level, or may be based in small work groups. In general, smaller groups present higher-powered incentives and reduce free-riding, while larger groups present weaker incentives. Team-based compensation can reduce one dimension of social comparison: wage comparison. By equalizing earnings across workers within teams, team-based compensation removes discrepancies in income among immediate coworkers that might be perceived as sources for inequity or unfairness. Employees, however, examine the ratio of inputs to outcomes when judging equity (Adams, 1965). The evening of wages within teams reduces social comparison on wages (outcomes) and not comparisons of contribution through perceived ability or effort (inputs). Team members will therefore perceive equivalent pay among members as truly equitable only if they perceive each employee’s contribution to the team to be equal, so some problems of social comparison remain. Although overconfidence may magnify perceptions of own contributions, existing studies, while limited, suggest that perceptions of fairness depend Strategic Compensation 21 much more on outcomes than inputs (Oliver and Swan, 1989; Siegel et al., 2008; Kim et al., 2009), with employees more focused on compensation than inputs (Gomez-Mejia and Balkin, 1992). 12 Team-based compensation would best resolve the social comparison problem in teams where contribution is homogeneous, but given the lesser weight of inputs in equity evaluations, even widely heterogeneous differences in ability or effort are unlikely to produce the social comparison costs that wage inequality will. This reasoning leads to the following proposition: Proposition 1: Team-based compensation reduces costs of social comparison when individual contribution is not highly heterogeneous within the team. Team-based compensation fails to reduce an additional social comparison costs, however: it cannot address wage comparisons across teams. Workers in some teams may believe earnings in higher-paid teams are inequitable, which may lead to psychological costs similar to individual-based systems. This problem may be exacerbated by workers’ perception that their team assignment was inherently unfair, and thereby may create a new dimension for comparison. Firms can reduce this potential social comparison cost by implementing scaled wages. Scaled wages will severely reduce equity and envy-based problems associated with wage comparisons across teams by creating uniformity throughout the firm for given job and seniority levels. While workers may still perceive outcome and effort to be unfair, this perception will be less personal given the firm’s consistent policy of scale-based wages. The worker may view the policy as unfair, but will not feel personally affronted by a managerial decision to underpay them. Costs from inequity and envy will therefore be reduced, reducing psychological costs relative to performance-based pay. 
Scaled wages will of course motivate the highest-ability workers to leave the firm, because their contribution will not be adequately remunerated, but this 12 Gachter, Nosenzo, and Sefton (2010) find that laboratory participants socially compare on effort, and that this reduces the efficacy of increases in flattened financial incentives in inducing effort. This suggests team-based compensation may be less effective relative to flat wages in motivating effort. Strategic Compensation 22 is a cost already accounted for in economic theories of agency. Similarly, scaled wages may also involve larger administrative and bureaucratic costs, since firms must determine and communicate the appropriate basis on which the scaled system is based. These administrative costs, however, may actually deepen employee trust in the fairness of the system. We thus propose that: Proposition 2: Scaled wages have lower social comparison costs than team-based and individual-based compensation systems. We illustrate our model’s predicted impact of social comparison on the likely compensation choices of the firm in Figure 5. For reference, the left-hand box shows the standard predictions of agency theory, based on Figure 3 and the assumption of a moderate degree of task coordination across employees. As noted in the figure, agency theory assumes that compensation choice does not depend on the ability of employees to observe the pay of peers. The right-hand box shows how the incorporation of social comparison costs changes the model’s predicted compensation choice. As seen in the figure, individual-performance-based pay is predicted far less often when social comparison is present, and team-based and salary-based pay are predicted more often. Also, scale-based pay is predicted with social comparison, but not under agency theory. The model’s predictions therefore change dramatically with the incorporation of psychology. *** Insert Figure 5 here *** At high levels of pay observability by peers, performance-based pay is very costly, and firms are predicted to turn towards scale-based pay or flat salaries. As employee observability of peer pay goes down, pay based on team performance becomes more likely as the motivational benefits of pay for performance begin to outweigh the costs of social comparison. Still, if peers have some view of peer pay, the model holds that firms are unlikely to base pay primarily on Strategic Compensation 23 individual performance. Hence, team-based pay is used far more frequently than predicted in agency theory because of its lower social comparison costs. Finally, individual-based performance pay is predicted only when peers have very poor visibility of others’ pay, and when effort cannot be perfectly observed. This is analogous to the prediction of standard agency theory, which does not take social comparison costs into consideration. Reducing overconfidence costs through flattening compensation Overconfidence creates considerable problems for individual-based compensation in its aggravation of social comparison and its undermining of efficient sorting processes. It creates similar problems in team-based compensation. Overconfident employees, unless they can observe the actual contribution of teammates, will usually interpret underperformance by the team as reflective of other workers’ deficiencies, while attributing strong team performance to themselves. 
These biased conclusions, which result from biases in the attribution of performance, will create erroneous perceptions of inequity that may lead to reduced effort, attrition, and reduced cooperation. Similarly, overconfident workers will perceive assignments to lower-quality teams as unfair, because they will perceive their teammates as below their own ability. This can result in workers constantly trying to switch into better teams that match the level at which they perceive themselves. Thus, we introduce the following proposition:

Proposition 3: Team-based compensation only resolves problems of overconfidence in individual pay-for-performance systems if the actual contribution of teammates is observable.

Introducing scaled compensation within teams may not completely alleviate costs of overconfidence, but scaled wage systems can prove much less costly when overconfidence is present. With flatter wages across the firm, workers are less likely to socially compare with peers in other teams, and are less likely to expend political effort attempting to transfer into other teams. Instead, overconfident workers under scale-based wages will potentially observe workers at other firms earning higher wages and attempt to leave the firm in order to remedy the perceived inequity. The most overconfident workers are unlikely to even sort into the firm, given their perception that they will never be paid what they are truly worth. Scale-based wages therefore address the psychological costs of overconfidence by sorting out the most overconfident workers. This reasoning leads to our next proposition:

Proposition 4: Scale-based wages reduce costs of overconfidence in individual- and team-based pay-for-performance.

We present the impact of overconfidence on likely pay choices in Figure 6, which shows our model's predictions about how a firm's compensation policy changes when employees are overconfident. For comparison, the left-hand box of Figure 6 repeats the right-hand box in Figure 5, where overconfidence is not considered. As noted in the figure, overconfidence increases the need for team-based and scale-based wages because they sort out overconfident workers, who are more likely to perceive inequity in pay. Correspondingly, firms are less likely to use salaries even when individual effort is observable, because employees do not have unbiased views on their own or others' effort. Even when employees cannot see one another's pay, firms are more likely to use team-based pay because an overconfident employee has biased views about her own contributions and effort and overestimates the pay of peers (Lawler, 1965; Milkovich and Anderson, 1972). A team performance-based system can provide positive effort motivation while weeding out highly overconfident workers. Therefore, when overconfidence is most severe, scale-based and team performance-based wages will drive out the most overconfident and potentially destructive workers, and are much more likely to be used than salaries or individual performance-based wages. Compared to the predictions from standard agency theory shown in the left-hand side of Figure 5, which does not take into account the costs of social comparison or overconfidence, our model shows that scale- and team-performance-based pay are far more likely than agency theory predicts.
*** Insert Figure 6 here ***

Implications for Firm Strategy

Reflecting agency theory, work on strategic compensation has focused almost exclusively on the improved effort and sorting that firms enjoy when using an optimal compensation strategy. While these direct effects are undeniably relevant, an important implication of our model is that indirect effects of compensation also have strategic implications. Indeed, employee compensation is not an isolated firm policy. It broadly impacts the other choices and activities of the firm, and must be complementary with them in order to support the firm's strategic position (Porter, 1996). Also, social comparison theory suggests that compensation for one employee can spill over and affect decisions made by other employees within a firm. Social comparison costs can dramatically impact the overall strategy of the firm by limiting the firm's ability to apply high-powered incentives or a wide variance in compensation levels across employees. Williamson (1985) explained how this can affect a firm's corporate strategy by limiting gains from mergers and acquisitions in his discussion of Tenneco's acquisition of Houston Oil and Minerals Corporation. Agency theory would predict that premerger firms having considerably different pay structures would have little impact on the postmerger firm. Yet Tenneco was forced to standardize pay across employees to avoid social comparison costs. A similar adjustment cost USAir 143 million USD in the year following its acquisition of Piedmont Aviation (Kole and Lehn, 2000). This reflects how firm boundaries can change reference groups among employees and force firms to elevate the wages of the lowest-paid peer group to improve perceptions of pay equity among new coworkers (Kwon and Meyersson-Milgrom, 2009). Similarly, Dushnitsky and Shapira (2010) suggest that a firm's strategic decision to implement a corporate venture capital program may create problems of social comparison, since the efficacy of high-powered incentives in such programs necessitates pay-for-performance. Since the considerable upside of such compensation contracts can generate huge pay inequalities within the firm, such programs may generate conflict across personnel. Similar problems have limited the ability to implement individual pay-for-performance for internal pension fund managers in firms and state governments (Young, 2010; Wee, 2010). In enterprise software, aggressive pay-for-performance in sales – a single job function – has been shown to be correlated with high turnover and low employee satisfaction in other job functions such as marketing and product development (Larkin, 2008). Overconfidence can also impact the strategic implications of compensation policy. Investment banks frequently take highly-leveraged positions in the marketplace, creating tremendous profit potential but also greater risk. The high-powered performance-based incentives of investment banking attract many high-ability individuals, but these compensation schemes also attract some of the most overconfident workers in the world (Gladwell, 2009). While this overconfidence may yield some benefits in bluffing and credible commitment, it also produced considerable problems at firms like Bear Stearns, which collapsed early in the recent banking crisis. First, persistent overconfidence led the bank toward aggressive, highly-leveraged derivatives that ultimately yielded liquidity problems.
Second, envy and comparison of bonus pay led to increasingly aggressive behavior in investment banks. Furthermore, recent work suggests that overconfident CEOs are more likely to pursue innovation, particularly in highly competitive industries (Galasso and Simcoe, forthcoming). While the focus of our paper is non-executive pay, the same rule may apply at lower levels in the firm, whether in research and development, product development, operations, or finance. Experimental evidence suggests that overconfident technical managers are much more likely to pursue aggressive R&D strategy (Englmaier, 2010). Under individual pay-for-performance, which is inherently highly competitive, non-executive employees may also pursue extensive innovation for financial or career gains. The decision to grant such employees wide discretion in applying innovation and change within the firm may require flatter compensation structures to reduce the risk of attracting overconfident workers and incentivizing them toward excessive risk. Similarly, many firms position their products in ways that require personal and customized sales channels. Because effort is difficult to monitor among these salespeople, firms typically employ pay-for-performance commission schemes, which motivate effort but can provide few sorting benefits. One leading management consulting company used extensive surveys to find that enterprise software salespeople's expected commissions averaged $800,000 per year. Yet these expectations were nearly eight times the actual median compensation, suggesting high overconfidence about their own sales abilities. Larkin (2007) notes that the annual attrition rate of similar software salespeople was nearly 30 percent, and average tenure was only two years, suggesting that salesperson failure to meet excessive expectations motivated attrition. Given that industry sales cycles are a year or more and customer relationships are critical, high salesperson attrition is extremely costly to software vendors (Sink, 2006).

Empirical Implications and Directions for Future Research

The agency theory approach to strategic compensation has proved very robust: it makes simple, testable predictions, many of which have held up to considerable empirical testing. The three major predictions with strong empirical support are that 1) employees increase effort in response to incentives; 2) employees put effort into "gaming" incentive systems, which can negatively affect performance; and 3) incentives can lead employees to sort by skill level. Our integrated framework suggests a number of new predictions regarding the role of psychological costs in the study of strategic compensation. We identified two sets of psychological costs: social comparison costs and overconfidence costs. A first set of predictions focuses on social comparison costs. Our theory predicts that social comparison costs reduce the efficacy of individual performance-based pay as a compensation strategy. Consequently, firms will take one of two actions when social comparisons are prevalent among employees: dampen the use of performance-based incentives, or attempt to keep wages secret. Although many firms have strict wage secrecy policies, these are frequently ineffective due to workers' overestimation of peer wages (Lawler, 1965; Milkovich and Anderson, 1972) or are explicitly illegal (Card et al., 2010).
The difficulty of imposing and maintaining wage secrecy makes flattening wages through scale- or team-based pay a frequently necessary solution. One approach to testing these propositions is to collect data from surveys or industry reviews to examine how and when the prevalence and costs of social comparisons vary across industry and company environments. Instruments developed in the psychology literature provide guidance on how to measure social comparison processes using survey items or field interventions in organizations. Such an analysis would be inherently cross-sectional, however, and would merely establish correlations between social comparison and compensation practices. Strategic Compensation 29 One fruitful avenue for empirical testing may be publicly-funded organizations such as universities and hospitals. In many jurisdictions, salary-disclosure laws have produced natural experiments that allow for the study of behavioral responses to newly observed peer compensation, and the organizational responses to them. Recent work by Card et al. (2010), which exploits the public disclosure of California employee salaries, is an example of the potential of this approach. Similarly, acquiring data on firms that change compensation structure or acquire another firm with different wage levels can allow for examining how increased variance in pay may reduce worker productivity. Such findings would be particularly striking if productivity decreased despite absolute pay increases. Exploiting variation in worker assignment (Chan et al. 2011; Mas and Moretti, 2009) or exogenous organizational change for workers (Dahl, 2011) could allow for estimating the effect of relative pay on performance while controlling for absolute pay. Similar changes between team- and individual- based compensation systems could potentially identify how individuals react to social comparison in different incentive structures, and how that influences performance. Where data on such changes are not available, field experiments that change compensation systems for a random set of employees and study resulting behavior and performance may prove useful (for a recent example, see Hossain and List, 2009). A second set of new predictions resulting from our theoretical framework centers around overconfidence. If overconfidence plays a negative role in the job function, we predict that firms will either dampen incentive intensity, or set up a compensation scheme which sorts against overconfidence. As noted, overconfidence can exacerbate the perceived inequity of pay-forperformance schemes in settings where social comparisons matter. We would therefore expect that industries and job settings marked by strong social comparison effects will strategically use Strategic Compensation 30 compensation to screen against overconfidence. Furthermore, theoretical work suggests it can considerably reduce sorting benefits from individual pay-for-performance and even generate an escalating attraction and promotion of overconfident employees (Van den Steen, 2005; Goel and Thakor, 2008). However, we still have limited empirical evidence on how compensation sorts by confidence, so future research needs to focus on this question first. In job functions where confidence is important for success, such as in the sales setting, we predict that firms will strategically use compensation to sort by confidence. 
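The social comparison designs described above ultimately aim to separate relative pay from own pay. As a purely schematic illustration (the variable names are ours, not a specification from the paper or the cited studies), they support estimating equations of the form

\[
\text{outcome}_{it} = \beta_1 \ln w_{it} + \beta_2 \ln\!\left(\frac{w_{it}}{\bar{w}_{-i,t}}\right) + \gamma' X_{it} + \mu_i + \tau_t + \varepsilon_{it},
\]

where outcome_{it} is effort, productivity, satisfaction, or quitting for worker i in period t, \bar{w}_{-i,t} is average pay in i's referent group, and X_{it}, \mu_i, and \tau_t are controls, worker effects, and time effects. Identification of \beta_2 would come from the salary-disclosure events, worker reassignment, or randomized compensation changes described above, which move relative pay while holding own pay fixed or controlled.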
Data on sales commission structure by industry are available (e.g., Dartnell, 2009); a researcher could test whether industries with lower "lead-to-sales" ratios, and/or industries with longer sales cycles, have commission schedules which appear to cater to overconfident employees. For example, in enterprise software, an industry with low "lead-to-sales" ratios and an 18-24 month sales cycle, salespeople are paid by convex commission schedules that can differ by a factor of 20 times or more depending on the salesperson's other sales in the quarter (Larkin, 2007). Our theoretical framework predicts a relationship between convex compensation (or other schemes that would sort by confidence) and the industry sales cycle and/or lead-to-sales ratio. However, we still need a better understanding of the role confidence plays in job functions outside sales. There is considerable research yet to be done on psychological factors causing employees to sort into different job functions. Future research might also benefit from extending our theoretical framework to include new factors influencing strategic compensation, such as employee attitudes towards risk and uncertainty, or to relax some of the assumptions made in our model, for example around the fixed nature of production and technology. These extensions are likely to provide opportunities for future research on the boundary conditions of the influences identified in our model.

Managerial implications

We believe our work has a number of immediate implications for managers in both the private and public sector. The first, and most obvious, implication is that the efficacy of individual pay-for-performance is powerfully influenced by psychological factors which, if not considered a priori, could have considerable unintended consequences for the firm. In choosing whether to implement such a pay system, managers must consider not only easily quantifiable economic costs related to the observability of worker pay and productivity, but also psychological costs due to social comparisons and overconfidence. Under increasing global pressure for worker performance in the private sector, managers are reevaluating traditional scale-based and other flat compensation systems and experimenting with high-powered incentive systems. Similarly, in the public sector, managers facing tightened budgets and public perceptions of ineffectiveness are implementing pay-for-performance schemes to improve effort in settings where these schemes have rarely been used before, such as education (e.g., Lavy, 2009) and aviation regulation (Barr, 2004). While in many cases these increased incentives may prove effective, our work suggests that there may be a sound basis for many of the existing flat compensation systems. Focusing exclusively on increasing effort through high-powered incentives may ignore many of the social and psychological benefits that existing compensation systems provide. In addition, social networking and related phenomena have made information about peer effort, performance, and compensation more readily available. We would argue that the costs of performance-based systems are heightened as employees share information across social networks, similar to the impact of online salary information for public employees observed in Card et al. (2010).
With pay secrecy increasingly difficult to enforce, and the private lives of coworkers increasingly observable, social comparison costs seem even more likely to play an important role in compensation in the future.

Limitations

Our theoretical framework needs to be qualified in light of various limitations. One limitation is our focus on financial incentives as the major driver of effort and job choice. Research in psychology and organizational behavior has proposed that individuals are intrinsically motivated by jobs or tasks (Deci and Ryan, 1985; Deci, 1971). While many scholars agree that money is a strong motivator (Jurgensen, 1978; Rynes, Gerhart, and Minette, 2004), powerful pecuniary incentives may be detrimental by reducing an individual's intrinsic motivation and interest in the task or job. As Deci and Ryan (1985) argue, this reduction occurs because when effort is exerted in exchange for pay, compensation becomes an aspect controlled by others that threatens the individual's need for self-determination. In the majority of cases, the effects of extrinsic or pay-based motivators on intrinsic motivation are negative (Deci, Koestner, and Ryan, 1999; Gerhart and Rynes, 2003). This stream of research highlights the importance of distinguishing between extrinsic and intrinsic motivation, a distinction that is increasingly being incorporated into the personnel economics literature (Hamilton, Nickerson, and Owan, 2003; Bandiera et al., 2005; Mas and Moretti, 2009).

An additional limitation of this work is that we ignore other psychological factors that can impact the role of employee compensation in firm strategy. Loss aversion, for example, could greatly impact the efficacy of individual pay-for-performance. Considerable work in psychology and behavioral decision research has shown that many individuals are asymmetrically loss-averse, where losses have greater impact than same-sized gains (Kahneman and Tversky, 1979; Tversky and Kahneman, 1991, 1992). These models present individuals as having psychologically important reference points: target income levels based on previous earnings, social expectations, cash-flow requirements, or arbitrary numbers. Workers below the target suffer tremendous losses from this sub-reference income, and will respond with increased effort (Camerer et al., 1997; Fehr and Goette, 2007), misrepresentation of performance or gaming (Schweitzer, Ordóñez, and Douma, 2004), and increased risk-taking. This loss-averse behavior could particularly hurt the firm when the income of the pay-for-performance worker depends on economic returns to the firm. Since such workers typically earn more when returns are high, the direct implication is that workers will put forth less effort when it is most beneficial to the firm and more effort when it is least beneficial (Koszegi and Rabin, 2009).

Conclusion

Compensation is inherently strategic. Organizations use different compensation strategies and have considerable discretion in choosing their reward and pay policies (Gerhart and Milkovich, 1990). As the human resource and personnel economics literatures explain, these policies directly affect employee performance, but they are also highly complementary with the other activities of the firm. Compensation is not an isolated choice for the firm. It is inextricably linked to the technology, marketing, operations, and financial decisions of the firm.
Furthermore, in a world with imperfect information, differing risk attitudes and behavioral biases, achieving an efficient, “first best” compensation scheme is impossible, thereby creating the opportunity for firms to gain strategic advantage through compensation strategies complementary to their market position. Given the important effects of compensation for both firm performance and employee behavior, it is important to understand what factors managers should consider when designing their firms’ compensation systems and what elements should be in place for compensation systems to produce desirable worker behavior. Strategic Compensation 34 This paper proposed an integrated framework of strategic compensation drawing from both the economics and psychology literatures. The dominant theoretical perspective for the majority of studies of compensation has been the economics theory of agency (e.g., Jensen and Meckling, 1976; Holmstrom, 1979). Agency theory, with the later extensions of personnel economics, provides powerful insight into the strategic role of compensation by clearly defining the mechanisms that affect employee and firm performance, namely effort provision and sorting. In economic theory, the three observability problems of effort, skill, and output are key to the efficacy of compensation systems in incentivizing effort and sorting workers. We argued that, while providing useful insights on how to design compensation systems, the economic perspective on strategic compensation captures only some of the factors that can affect compensation policy performance. We described an integrated theoretical framework that relies on the effort provision and sorting mechanisms of agency theory, but that introduces psychological factors largely neglected in economics. We focused on the psychology of information, specifically incorporating social comparison costs and overconfidence costs, and their effects on the performance and likely frequency of specific compensation strategies. We demonstrated that firms that account for these psychological costs will likely enact flatter compensation policies or else suffer costs of lower effort, lower ability, and sabotage in their workers. We believe our theoretical framework offers guidance on the main factors managers should consider when determining compensation strategy. At the same time, it offers guidance to researchers interested in advancing and deepening our understanding of the economic and psychological foundations of strategic compensation. Strategic Compensation 35 Acknowledgments We thank Editor Will Mitchell, Todd Zenger, and three anonymous reviewers for insightful feedback on earlier versions of this paper. References Adams JS. 1965. Inequity in social exchange. In Advances in Experimental Social Psychology, Berkowitz L (ed). Academic Press: New York; 2, 267–299. Agell J, Lundborg P. 2003. Survey evidence on wage rigidity and unemployment: Sweden in the 1990s. Scandinavian Journal of Economics 105(1): 15-29. Akerlof GA, Yellen JL. 1990. The fair-wage effort hypothesis and unemployment. Quarterly Journal of Economics 105: 255-283. Baker GP. 1992. Incentive Contracts and Performance Measurement. Journal of Political Economy 100 (3): 598-614. Baker GP, Jensen MC, Murphy KJ. 1988. Compensation and incentives: theory and practice. The Journal of Finance 43 (3): 593-616. Balkin D, Gomez-Mejia L. 1990. Matching compensation and organizational strategies. Strategic Management Journal 11: 153-169. Bandiera O, Barankay I, Rasul I. 2005. 
Social preferences and the response to incentives: evidence from personnel data. Quarterly Journal of Economics 120: 917-962. Barber BM, Odean T. 2001. Boys will be boys: gender, overconfidence, and common stock investment. The Quarterly Journal of Economics 116 (1): 261-292. Barr, S. (2004). At FAA, some lingering discontent over pay system. The Washington Post, November 30, 2004. Metro; B02. Bartling B, von Siemens FA. 2010. The intensity of incentives in firms and markets: Moral hazard with envious agents. Labour Economics 17 (3): 598-607. Bertrand M, Mullainathan S. 2004. Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. American Economic Review 94 (4): 991-1013. Blinder AS, Choi DH. 1990. A shred of evidence on theories of wage stickiness. Quarterly Journal of Economics 105: 1003-1015. Bloom M. 1999. The performance effects of pay dispersion on individuals and organizations. Academy of Management Journal 42: 25-40. Bonin H, Dohmen T, Falk A, Huffman D, Sunde U. 2007. Cross-sectional earnings risk and occupational sorting: the role of risk attitudes. Labour Economics 14: 926-937. Buehler R, Griffin D, Ross M. 1994. Exploring the “planning fallacy”: why people underestimate their task completion times. Journal of Personality and Social Psychology 67: 366-381. Bureau of Labor Statistics. 2009. National Compensation Survey. http://www.bls.gov/eci/ Last accessed May 1, 2011. Cable DM, Judge TA. 1994. Pay preferences and job search decisions: a person-organization fit perspective. Personnel Psychology 47: 317–348. Camerer C. 2003. Strategizing in the brain. Science 300: 1673-1675.Strategic Compensation 36 Camerer C, Babcock L, Loewenstein G, Thaler R. 1997. Labor supply of New York City cabdrivers: one day at a time. Quarterly Journal of Economics 112 (2):407-441. Camerer C, Lovallo D. 1999. Overconfidence and excess entry: an experimental approach. American Economic Review 89 (1): 306-318. Camerer C, Loewenstein G, Rabin M. 2004. Advances in Behavioral Economics,. Princeton University Press: Princeton, NJ. Campbell C, Kamlani K. 1990. The reasons for wage rigidity: Evidence from a survey of firms. Quarterly Journal of Economics 112: 759-789. Card D, Mas A, Moretti E, Saez E. 2010. Inequality at work: The effect of peer salaries on job satisfaction. NBER Working Paper No. 16396. Chan TY, Li J, Pierce L. 2011. Compensation and peer effects in competing sales teams. Unpublished Working Paper. Christensen-Szalanski JJ, Bushyhead JB. 1981. Physician’s use of probabilistic information in a real clinical setting. Journal of Experimental Psychology: Human Perception and Performance 7: 928-935. Cohn A, Fehr E, Herrmann B, Schneider F. 2011. Social comparison in the workplace: Evidence from a field experiment. IZA Discussion Paper No. 5550. Cropanzano R, Rupp DE, Byrne ZS. 2003. The relationship of emotional exhaustion to work attitudes, job performance, and organizational citizenship behaviors. Journal of Applied Psychology 88(1): 160-169. Dahl M. 2011. Organizational change and employee stress. Management Science 57 (2): 240- 256. Dalton DR, Hitt MA, Certo ST, Dalton C. 2007. The fundamental egency problem and its mitigation: Independence, equity, and the market for corporate control. Academy of Management Annals 1: 1-65. Dartnell Corp. 2009. Dartnell’s 30 th Sales Force Compensation Survey. The Dartnell Corporation: Chicago. Deci E. 1971. Effects of externally mediated rewards on intrinsic motivation. 
Figures
Figure 1: Agency Theory Framework
Figure 2: Compensation Predictions from Agency Theory (With No Task Coordination Benefits)
Figure 3: Compensation Predictions from Agency Theory (With Task Coordination Benefits and Imperfect Observability of Individual Output)
Figure 4: Insights from Psychology and Decision Research on the Agency Theory Framework
Figure 5: Compensation Implications of Social Comparison
Figure 6: Compensation Implications of Overconfidence

To Groupon or Not to Groupon: The Profitability of Deep Discounts
To Groupon or Not to Groupon: The Profitability of Deep Discounts
Benjamin Edelman (Harvard Business School; bedelman@hbs.edu)
Sonia Jaffe (Department of Economics, Harvard University; sjaffe@fas.harvard.edu)
Scott Duke Kominers (Department of Economics, Harvard University, and Harvard Business School; skominers@hbs.edu)
Working Paper 11-063
June 16, 2011

Abstract

We examine the profitability and implications of online discount vouchers, a new marketing tool that offers consumers large discounts when they prepay for participating merchants' goods and services. Within a model of repeat experience good purchase, we examine two mechanisms by which a discount voucher service can benefit affiliated merchants: price discrimination and advertising. For vouchers to provide successful price discrimination, the valuations of consumers who have access to vouchers must systematically differ from, and typically be lower than, those of consumers who do not have access to vouchers. Offering vouchers is more profitable for merchants that are patient or relatively unknown, and for merchants with low marginal costs. Extensions to our model accommodate the possibilities of multiple voucher purchases and merchant price re-optimization.

Keywords: voucher discounts, Groupon, experience goods, repeat purchase.

The authors appreciate the helpful comments and suggestions of Peter Coles, Clayton Featherstone, Alvin Roth, and participants in the Harvard Workshop on Research in Behavior in Games and Markets. Kominers gratefully acknowledges the support of a National Science Foundation Graduate Research Fellowship, a Yahoo! Key Scientific Challenges Program Fellowship, and a Terence M. Considine Fellowship in Law and Economics funded by the John M. Olin Center.

1 Introduction

A variety of web sites now sell discount vouchers for services as diverse as restaurants, skydiving, and museum visits. To consumers, discount vouchers promise substantial savings, often 50% or more. To merchants, discount vouchers offer opportunities for price discrimination as well as exposure to new customers and online "buzz." Best known among voucher vendors is Chicago-based Groupon, a two-year-old startup that purportedly rejected a $6 billion acquisition offer from Google (Surowiecki (2010)) in favor of an IPO at a yet-higher valuation. Meanwhile, hundreds of websites offer discount schemes similar to that of Groupon. [1]

The rise of discount vouchers presents many intriguing questions: Who is liable if a merchant goes bankrupt after issuing vouchers but before performing its service? What happens if a merchant simply refuses to provide the promised service? Since vouchers entail prepayment of funds by consumers, do buyers enjoy the consumer protections many states provide for gift certificates (such as delayed expiration and the right to a cash refund when value is substantially used)? Must consumers using vouchers remit tax on merchants' ordinary menu prices, or is tax due only on the voucher-adjusted prices consumers actually pay? What prevents consumers from printing multiple copies of a discount voucher and redeeming those copies repeatedly?

To merchants considering whether to offer discount vouchers, the most important question is the basic economics of the offer: Can providing large voucher discounts actually be profitable?
Voucher discounts are worthwhile if they predominantly attract new customers who regularly return, paying full price on future visits. But if vouchers prompt many long-time customers to use discounts, offering vouchers could reduce profits. For most merchants, the effects of offering vouchers lie between these extremes: vouchers bring in some new customers, but also provide discounts to some regular customers. In this paper, we offer a model to explore how consumer demographics and offer details interact to shape the profitability of voucher discounts.

We illustrate two mechanisms by which a discount voucher service can benefit affiliated merchants. First, discount vouchers can facilitate price discrimination, allowing merchants to offer distinct prices to different consumer populations. In order for voucher offers to yield profitable price discrimination, the consumers who are offered the voucher discounts must be more price-sensitive (with regard to participating merchants' goods or services) than the population as a whole. Second, discount vouchers can benefit merchants through advertising, by informing consumers of a merchant's existence. For these advertising effects to be important, a merchant must begin with sufficiently low recognition among prospective consumers.

The remainder of this paper is organized as follows. We review the related literature in Section 2. We present our model of voucher discounts in Section 3, exploring price discrimination and advertising effects. In Section 4, we extend our model to consider the possibility of consumers purchasing multiple vouchers and of merchants adjusting prices in anticipation of voucher usage. Finally, in Section 5, we discuss implications of our results for merchants and voucher services.

[Footnote 1: Seeing these many sites, several companies now offer voucher aggregation. Yipit, one such company, tracked over 400 different discount voucher services as of June 2011.]

2 Related Literature

The recent proliferation of voucher discount services has garnered substantial press: a multitude of newspaper articles and blog posts, and even a short feature in The New Yorker (Surowiecki (2010)). However, voucher discounts have received little attention in the academic literature.

The limited academic work on online voucher discounts is predominantly empirical. Dholakia (2011) surveys businesses that offered Groupon discounts. [2] Echoing sentiments expressed in the popular press, [3] Dholakia (2011) finds mixed empirical results: some business owners speak glowingly of Groupon, while others regret their voucher promotions. Byers et al. (2011) develop a data set of Groupon deal purchases, and use this data to estimate Groupon's deal-provision strategy. To the best of our knowledge, the only other theoretical work on voucher discounting is that of Arabshahi (2011), which considers vouchers from the perspective of the voucher service, whereas we operate from the perspective of participating merchants. Unlike the other academic work on voucher discounting, we (1) seek to understand voucher discount economics on a theoretical level, and (2) focus on the decision problem of the merchant, rather than that of the voucher service provider. Our results indicate that voucher discounts are naturally good fits for certain types of merchants, and poor fits for others; these theoretical observations can help us interpret the range of reactions to Groupon and similar services.
Although there is little academic work on voucher discounts, a well-established literature explores the advertising and pricing of experience goods, i.e. goods for which some characteristics cannot be observed prior to consumption (Nelson (1970, 1974)). The parsimonious framework of Bils (1989), upon which we base our model, studies how prices of experience goods respond to shifts in demand. Bils (1989) assumes that consumers know their conditional valuations for a firm's goods, but do not know whether that firm's goods "fit" until they have tried them. [4] Analyzing overlapping consumer generations, Bils (1989) measures the tradeoff between attracting more first-time consumers and extracting surplus from returning consumers.

Meanwhile, much of the work on experience goods concerns issues of information asymmetry: if a merchant's quality is unknown to consumers but known to the merchant, then advertising (Nelson (1974); Milgrom and Roberts (1986)), introductory offers (Shapiro (1983); Milgrom and Roberts (1986); Bagwell (1990)), or high initial pricing (Bagwell and Riordan (1991); Judd and Riordan (1994)) can provide signals of quality. Of this literature, the closest to our subject is the work on introductory offers. Voucher discounts, a form of discounted initial pricing, may encourage consumers to try experience goods they otherwise would have ignored. However, we identify this effect in a setting without asymmetric information regarding merchant quality; consumer heterogeneity, not information asymmetries, drives our main results. [5] Additionally, our work differs from the classical literature on the advertisement of experience goods, as advertising in our setting serves the purpose of awareness, rather than signaling. [6]

[Footnote 2: In a related case study, Dholakia and Tsabar (2011) track a startup's Groupon experience in detail.]
[Footnote 3: For example, Overly (2010) reports on Washington merchants' mixed reactions to the LivingSocial voucher service.]
[Footnote 4: Firms know the distribution of consumer valuations and the (common) probability of fit.]

A substantial literature has observed that selective discounting provides opportunities for price discrimination. In the settings of Varian (1980), Jeuland and Narasimhan (1985), and Narasimhan (1988), for example, merchants engage in promotional pricing in order to attract larger market segments. [7] Similar work illustrates how promotions may draw new customers (Blattberg and Neslin (1990); Lewis (2006)), and lead those customers to become relational customers (Dholakia (2006)). These results have been found to motivate the use of coupons (Neslin (1990)), especially cents-off coupons (Cremer (1984); Narasimhan (1984)). We harness the insights of the literature on sale-driven price discrimination to analyze voucher discounting, a new "sale" technology. Like the price-theoretic literature which precedes our work, we find that price discrimination depends crucially upon the presence of significant consumer heterogeneity.

Our work also importantly differs from antecedents in that the prior literature, including the articles discussed above, has considered only marginal pricing decisions. In particular, the previous work on experience goods and price discrimination does not consider deep discounts of the magnitudes now offered by voucher services.

3 Model

Offering a voucher through Groupon has two potential advantages: price discrimination and advertising. We present a simple model in which a continuum of consumers have the opportunity to buy products from a single firm.
The consumers are drawn from two populations, one of which can be targeted by voucher discount offers. First, in Section 3.1, we consider the case in which all consumers are aware of the firm and vouchers serve only to facilitate price discrimination. Then, in Section 3.2, we introduce advertising effects. We present comparative statics in Section 3.3.

[Footnote 5: Of course, our treatment of advertising includes a very coarse informational asymmetry: some consumers are simply not aware of the merchant's existence. However, conditional upon learning of the merchant, consumers in our model receive more information than the merchant does about their valuations. This is in sharp contrast to much of the previous work on experience goods, in which merchants can in principle exploit private quality information in order to lead consumers to purchase undesirable (or undesirably costly) products (e.g., Shapiro (1983); Bagwell (1987)).]
[Footnote 6: In the classical theory of experience goods, advertising serves a "burning money" role. Merchants with high-quality products can afford to advertise more than those with low-quality products can, as consumers recognize this fact in equilibrium and flock to merchants who advertise heavily (Nelson (1974); Milgrom and Roberts (1986)). In our model, advertising instead serves to inform consumers of a merchant's existence; these announcements are a central feature of the service voucher vendors promise.]
[Footnote 7: In other models, heterogeneity in consumer search costs (e.g., Salop and Stiglitz (1977)) or reservation values (e.g., Sobel (1984)) motivate sales. Bergemann and Valimaki (2006) study the pricing paths of "mass-market" and "niche" experience goods, finding that initial sales are essential in niche markets to guarantee traffic from new buyers.]

Our model has two periods, and the firm ex ante commits to a price p for both periods. The firm and consumers share a common discount factor δ. Following the setup of Bils (1989), consumers share a common probability r that the firm's product is a "fit." Conditional on fit, the valuation of consumer i for the firm's offering is v_i. A consumer i purchases in the first period if either the single-period value, r·v_i − p, or the expected discounted future value, r·v_i − p + δr(v_i − p), is positive, i.e. if

max{ r·v_i − p, r·v_i − p + δr(v_i − p) } ≥ 0.

For δ > 0, there is an informational value to visiting in the first period: if a consumer learns that the firm's product is a fit, then the consumer knows to return. As a result, all consumers with values at least

v(p) ≡ ((1 + δr) / (r + δr)) · p

purchase in the first period.

To consider the effects of offering discounts to a subset of consumers, we assume there are two distinct consumer populations. Proportion λ of consumers have valuations drawn from a distribution with cumulative distribution function G, while proportion 1 − λ have valuations drawn from a distribution with cumulative distribution function F. We denote by V ≡ supp(F) ∪ supp(G) the set of possible consumer valuations. We assume that G(v) ≥ F(v) for all v ∈ V, i.e. that the valuations of consumers in the G population are systematically lower than those of consumers in the F population.

The firm faces demand λ(1 − G(v(p))) + (1 − λ)(1 − F(v(p))) in the first period, and fraction r of those consumers return in the second period. The firm maximizes profits given by

π(p) ≡ (1 + δr) · [λ(1 − G(v(p))) + (1 − λ)(1 − F(v(p)))] · (p − c),

where c is the firm's marginal cost. The first-order condition of the firm's optimization problem is

λ(1 − G(v*)) + (1 − λ)(1 − F(v*)) − ((1 + δr) / (r + δr)) · (p* − c) · [λ·g(v*) + (1 − λ)·f(v*)] = 0,   (1)

where p* is the optimal price and v* ≡ v(p*).
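To see how these pieces fit together numerically, the short sketch below evaluates the threshold v(p), first-period demand, and the profit function π(p) defined above, and then searches a price grid for the profit-maximizing committed price p*. It is only an illustration: the uniform valuation distributions for the G and F populations and the parameter values (δ, r, λ, c) are assumptions chosen for the example, not values taken from the paper.

import numpy as np

# Hypothetical parameters (not from the paper):
# delta = discount factor, r = fit probability, lam = share of consumers in the
# lower-valuation G population, c = the firm's marginal cost.
delta, r, lam, c = 0.9, 0.5, 0.4, 0.1

def threshold(p):
    # First-period purchase threshold v(p) = (1 + delta*r) / (r + delta*r) * p.
    return (1 + delta * r) / (r + delta * r) * p

def G(v):
    # CDF of the voucher-reachable, lower-valuation population (Uniform[0, 1], assumed).
    return np.clip(v, 0.0, 1.0)

def F(v):
    # CDF of the higher-valuation population (Uniform[0, 2], assumed).
    return np.clip(v / 2.0, 0.0, 1.0)

def profit(p):
    # pi(p) = (1 + delta*r) * [lam*(1 - G(v(p))) + (1 - lam)*(1 - F(v(p)))] * (p - c).
    v = threshold(p)
    demand = lam * (1 - G(v)) + (1 - lam) * (1 - F(v))
    return (1 + delta * r) * demand * (p - c)

# Grid search for the profit-maximizing committed price p*.
prices = np.linspace(c, 2.0, 2001)
p_star = prices[np.argmax(profit(prices))]
print(f"p* ~= {p_star:.3f}, profit ~= {profit(p_star):.3f}")

A grid search is used here rather than solving first-order condition (1) directly; when profits are single-peaked, as assumed in the text, the two approaches agree.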
We assume that the distribution of consumers is such that profits are single-peaked, so that p* is uniquely defined.

3.1 Discount Vouchers

After setting its optimal price p*, the firm is given the opportunity to offer a discount voucher. [8] Only a fraction of consumers in the G population have access to the discount

[Footnote 8: For now, we assume the firm did not consider the possibility of a voucher when setting its price. In Section 4.2, we consider the possibility of re-optimization.]

Social Enterprise Series No. 32: Value Creation in Business – Nonprofit Collaborations
Social Enterprise Series No. 32: Value Creation in Business – Nonprofit Collaborations
James E. Austin and M. May Seitanidi
Working Paper 12-019
September 26, 2011

Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.

VALUE CREATION IN BUSINESS – NONPROFIT COLLABORATIONS

James E. Austin, Eliot I. Snider and Family Professor of Business Administration, Emeritus, Harvard Business School
M. May Seitanidi, Senior Lecturer in CSR, Director of the Centre for Organisational Ethics, Hull University Business School, University of Hull, UK

PURPOSE & CONTENT

This focused review of theoretical and empirical research findings in the corporate social responsibility (CSR) and business-nonprofit collaboration literature aims to develop an analytical framework for, and a deeper understanding of, the interactions between nonprofit organizations and businesses that contribute to the co-creation of value. Our research question is: How can collaboration between businesses and NPOs most effectively co-create significant economic and social value, including environmental value, for society, organizations, and individuals? More specifically, we will:
- elaborate a Collaborative Value Creation (CVC) framework for analyzing social partnerships between businesses and nonprofits;
- review how the evolving CSR literature has dealt with value creation and collaboration;
- analyze how collaborative value creation occurs across different stages and types of collaborative relationships: philanthropic, transactional, integrative, and transformational;
- examine the nature of value creation processes in collaboration formation and implementation and the resultant outcomes for the societal [macro], organizational [meso], and individual [micro] levels;
- identify knowledge gaps and research needs.

IMPORTANCE OF THE COLLABORATION PHENOMENON

The growing magnitude and complexity of socioeconomic problems facing societies throughout the world transcend the capacities of individual organizations and sectors to deal with them. As Visser (2011, p. 5) stated, "Being responsible also does not mean doing it all ourselves. Responsibility is a form of sharing, a way of recognizing that we're all in this together. 'Sole responsibility' is an oxymoron." Cross-sector partnering, and in particular collaboration between businesses and NPOs, has increased significantly and is viewed by academics and by business and nonprofit practitioners as an inescapable and powerful vehicle for implementing CSR and for achieving social and economic missions. Our starting premise is that creating value for collaborators and society is the central justification for such cross-sector partnering, and closer scrutiny and greater knowledge of the processes for and extent of value creation in general, and co-creation more specifically, are required for needed theoretical advancement and practitioner guidance.

ANALYTICAL FRAMEWORK: COLLABORATIVE VALUE CREATION

The CVC Framework is a conceptual and analytical vehicle for viewing more clearly and understanding more systematically the phenomenon of value creation through collaboration (Austin, 2010).
We define collaborative value as the transitory and enduring benefits, relative to the costs, that are generated due to the interaction of the collaborators and that accrue to organizations, individuals, and society. Thus, the focus is on the value creating processes of, and results from, partnering, in this case between businesses and nonprofits. There are two main types of value, economic and social (including environmental), but to examine value creation within the collaboration context more thoroughly, the Framework elaborates further dimensions. The four components of the Framework are: the Value Creation Spectrum, Collaboration Stages, Partnering Processes, and Collaboration Outcomes. Each component provides a different window through which to examine the co-creation process. We will elaborate the Value Creation Spectrum because it is a new conceptualization and is a reference point for the other three components, which have received attention in the literature and will only be briefly described here and expanded on in their subsequent respective sections.

CVC Component I: Value Creation Spectrum

Within the construct of collaboration, value can be created by the independent actions of one of the partners, which we label "sole creation," or it can be created by the conjoined actions of the partners, which we label "co-creation." While there is always some level of interaction within a collaborative arrangement, the degree and form can vary greatly, and this carries significant implications for value creation. To provide a richer understanding of the multiple dimensions of social and economic value, the Framework posits four potential sources of value and identifies four types of collaboration value that reflect different ways in which benefits arise. Our overall hypothesis is that greater value is created at the meso, micro, and macro levels as collaboration moves across the Value Creation Spectrum from sole creation toward co-creation. The four sources of value are:

Resource Complementarity – The Resource Dependency literature stresses that a fundamental basis for collaboration is obtaining access to needed resources that are different from those an organization already possesses. However, the realization of the potential value of resource complementarity depends on achieving organizational fit. The many sectoral differences between businesses and nonprofits are simultaneously impediments to collaboration and sources of value creation. Organizational fit helps overcome barriers and enables collaboration. We hypothesize that the greater the resource complementarity and the closer the organizational fit between the partners, the greater the potential for co-creation of value.

Resource Type – The partners can contribute to the collaboration either generic assets, i.e., those that any company has, e.g., money, or any nonprofit has, e.g., a positive reputation; or they can mobilize and leverage more valuable organization-specific assets, such as knowledge, capabilities, infrastructure, and relationships, i.e., those assets key to the organization's success. We hypothesize that the more an organization mobilizes its distinctive competencies for the collaboration, the greater the potential for value creation.

Resource Directionality and Use – Beyond the type of the resources brought to the partnership is the issue of how they are used.
The resource flow can be largely unilateral, coming primarily from one of the partners, or it could be a bilateral and reciprocal exchange between the partners, or it could be a conjoined intermingling of their resources. Parallel but separate inputs or exchanges can each create value, but combining complementary and distinctive resources to produce a new service or activity that neither organization could have created alone or in parallel co-creates new value. The most leveraged form of these resource combinations produces economic and social innovations. We hypothesize that the more the partners integrate their key resources into distinctive combinations, the greater the potential for value creation.

Linked Interests – Although collaboration motivations are often a mixture of altruism and utilitarianism, self-interest, whether organizational or individual, is a powerful shaper of behaviour. Unlike single-sector partnerships, collaborators in cross-sector alliances may have distinct objective functions; there is often no common currency or price with which to assess value. The value is dependent on its particular utility to the recipient. Therefore, it is essential to understand clearly how partners view value, both benefits and costs, and to reconcile any divergent value creation frames. The collaborators must perceive that the value exchange, their respective shares of the co-created value, is fair; otherwise, the motivation for continuing the collaboration erodes. We hypothesize that the more collaborators perceive their self-interests as linked to the value they create for each other and for the larger social good, and the greater the perceived fairness in the sharing of that value, the greater the potential for co-creating synergistic economic and social value.

The combinations of the above value sources produce the following four different types of value in varying degrees:

"Associational Value" is a derived benefit accruing to a partner simply from having a collaborative relationship with the other organization. For example, one global survey of public attitudes revealed that over two-thirds of the respondents agreed with the statement "My respect for a company would go up if it partnered with an NGO to help solve social problems" (GlobeScan, 2003).

"Transferred Resource Value" is the benefit derived by a partner from the receipt of an asset from the other partner. The significance of the value will depend on the nature of the assets transferred and how they are used. Some assets are depreciable, for example, a cash or product donation gets used up, and other assets are durable, for example, a new skill learned from a partner becomes an on-going improvement in capability. In either case, once the asset is transferred, to remain an attractive on-going value proposition the partnership needs to repeat the transfer of more or different assets that are perceived as valuable by the receiving partner. In effect, value renewal is essential to longevity.

"Interaction Value" refers to the benefits that derive from the processes of interacting with one's partner. It is the actual working together that produces benefits in the form of intangibles. Co-creating value both requires and produces intangibles. In effect, these special assets are both enablers of and benefits from the collaborative value creation process.
Intangibles are a form of economic and social value and include, for example, reputation, trust, relational capital, learning, knowledge, joint problem-solving, communication, coordination, transparency, accountability, and conflict resolution.

"Synergistic Value" arises from the underlying premise of all collaborations that combining partners' resources enables them to accomplish more together than they could have separately. Our more specific focus is the recognition that the collaborative creation of social value can generate economic value and vice versa, either sequentially or simultaneously. Innovation, as an outcome of synergistic value creation, is perhaps one of the highest forms of value creation because it produces a completely new form of change due to the combination of the collaborators' distinctive assets, thereby holding the potential for significant organizational and systemic advancement at the micro, meso, and macro levels. There is a virtuous value circle. Kanter (1983, p. 20) states that all innovations require change associated with the disruption of pre-existing routines, and defines innovation as "the generation, acceptance, and implementation of new ideas, processes, products, or services."

CVC Component II: Relationship Stages

Value creation is a dynamic process that changes as the relationship between partners evolves. To describe the changing nature of the collaborative relationship across the spectrum we draw on Austin's Collaboration Continuum, with its three relationship categories of philanthropic, transactional, and integrative (Austin, 2000a; 2000b), and we add a fourth stage: transformational. Within each stage there can exist different types of collaboration with varying value creation processes. We hypothesize that the further the relationship moves toward the integrative and transformational stages, the greater the potential for co-creation of value, particularly societal value.

CVC Component III: Partnering Processes

The realization of the potential collaborative value depends on the partnering processes that occur during the formation, selection, and implementation phases. It is these processes that tap the four sources of value and produce the four forms of value. The dynamic nature of social problems (McCann, 1983) on the one hand, and the complexities of partnership implementation on the other, can result in a multitude of problems, including early termination and hence an inability to realize partnerships' potential to provide solutions to social problems. Understanding the formation and implementation process in partnerships is important in order to overcome value creation difficulties during the implementation stage (Seitanidi & Crane, 2009) but also to unpack the process of co-creation of synergistic value.

CVC Component IV: Partnering Outcomes

The focus in this element of the framework is on who benefits from the collaboration. Collaborations generate value at multiple levels (meso, micro, and macro), often simultaneously. For our purpose of examining value, we distinguish two loci: within the collaboration and external to it. Internally, we examine value accruing at the meso and micro levels for the partnering organizations and the individuals within those organizations. Externally, we focus on the macro or societal level, where social welfare is improved by the collaboration in the form of benefits at the micro (individual recipients), meso (other organizations), and macro (systemic changes) levels.
The benefits accruing to the partnering organizations and their individuals internal to the collaboration are ultimately largely due to the value created external to the social alliance. Ironically, while societal betterment is the fundamental justification for cross-sector collaborative value creation, this is the value dimension that is least thoroughly dealt with in the literature and in practice.

CSR & VALUE CREATION

As a precursor to our examination of collaborative value creation, it is relevant to examine how the evolving CSR literature has positioned value creation and collaboration with nonprofits. Corporate Social Responsibility can be defined as discretionary business actions aimed at increasing social welfare, but CSR has been in a state of conceptual evolution for decades (Bowen, 1953; Carroll, 2006). This is reflected in the variety of additional labels that have emerged, such as Corporate Social Performance, Corporate Citizenship, Triple Bottom Line, and Sustainability, which incorporated environmental concerns (Elkington, 1997; 2004). The bibliometric analysis of three decades of CSR research by de Bakker, Groenewegen and den Hond (2005), which builds on earlier reviews of the literature (Rowley & Berman, 2000; Carroll, 1999; Gerde & Wokutch, 1998; Griffin & Mahon, 1997), provides a comprehensive view of the evolving theoretical, prescriptive, and descriptive work in this field. Garriga and Melé (2004) categorize CSR theories and approaches into four categories: instrumental, political, integrative, and ethical. These and other more recent CSR reviewers (Lockett, Moon, & Visser, 2006; Googins, Mirvis & Rochlin, 2007; Egri & Ralston, 2008) conclude that CSR is deeply established as a field of study and practice but still lacks definitional and theoretical consensus. The field continues to evolve conceptually and in implementation. Our purpose is not to add yet another general review of the CSR literature but rather to focus on the following five central themes that emerged from the literature review on CSR and how it has dealt with collaborative value creation: Primacy of Business Value vs. Stakeholder Approach, Empirical Emphasis, Evolving Practice and Motivations, Integration of Economic and Social Value, and CSR Stages.

Primacy of Business Value vs. Stakeholder Approach

The most referenced anchor argument against CSR is that set forth by Friedman, which pitted social actions and their moral justifications by managers against the primary function of generating profits and returns to shareholders. His stated position is: "there is one and only one social responsibility of business - to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud" (Friedman, 1962; 1970). The intellectual current flowing against this argument of the incompatibility of social and business value came from the broadening conceptualization of relevant stakeholders beyond investors to include consumers (Green & Peloza, 2011), employees, communities, governments, and the environment, among others (Freeman, 1984; Neville & Menguc, 2006). This approach also opened the relational door for nonprofits as a type of stakeholder from communities or civil society.
While for some academics this theory placed stakeholders as alternative claimants on company value (wealth redistribution), embedded in this approach was the argument that attending to stakeholders other than just investors was not incompatible with profitability but rather contributed to it in a variety of ways. Various researchers stressed the instrumental value of stakeholder engagement (Donaldson & Preston, 1995; Jones & Wicks, 1999; Freeman, 1999). In effect, creating social value (benefits to other stakeholders) produced business value, such as better risk management; enhanced reputation, legitimacy, and license to operate; improved employee recruitment, motivation, retention, skill development, and productivity; consumer preference and loyalty; product innovation and market development; and preferential regulatory treatment (Makower, 1994; Burke & Logsdon, 1996; Googins, Mirvis & Rochlin, 2007). This is what we have labelled in the CVC Framework "Synergistic Value Creation."

Jensen (2002), a pioneering thinker on agency theory, recognized that "we cannot maximize the long-term market value of an organization if we ignore or mistreat any important constituency," but he also specified that under "enlightened value maximization" "managers can choose among competing stakeholder demands" by spending "an additional dollar on any constituency to the extent that the long-term value added to the firm from such expenditure is a dollar or more." Jensen adds, "enlightened stakeholder theorists can see that although stockholders are not some special constituency that ranks above all others, long-term stock value is an important determinant…of total long-term firm value. They would see that value creation gives management a way to assess the tradeoffs that must be made among competing constituencies, and that it allows for principled decision making independent of the personal preferences of managers and directors." Recognizing the complexity of value measurement, Jensen notes that "none of the above arguments depend on value being easily observable. Nor do they depend on perfect knowledge of the effects on value of decisions regarding any of a firm's constituencies" (Jensen, 2002). This approach to value creation and assessment through CSR and stakeholder interaction is primarily instrumental (Jones, 1995; Hill & Jones, 1985). Even though there has been this broadening view of the business benefits derived from benefitting other stakeholders, Halal (2001, p. 28) asserts that "corporations still favour financial interests rather than the balanced treatment of current stakeholder theory". Margolis and Walsh (2003, p. 282) express the concern that "if corporate responses to social misery are evaluated only in terms of their instrumental benefits for the firm and its shareholders, we never learn about their impact on society, most notably on the intended beneficiaries of these initiatives."

Empirical Emphasis: Corporate Social Performance & Corporate Financial Performance

The emergence of the asserted "Business Case" for CSR (Makower, 1994) led to a stream of research aimed at empirically testing whether in the aggregate Corporate Social Performance (CSP) contributed positively or negatively to Corporate Financial Performance (CFP), i.e., the link between social value and economic value (Margolis & Walsh, 2003).
While this literature over the decades yielded ambiguous and conflicting conclusions, the most recent and comprehensive meta-analysis, covering 52 studies with a sample size of 33,878 observations (Orlitzky, Schmidt and Rynes, 2003), found a positive association. Barnett (2007) asserts that assessing the business case for CSR must recognize that financial results are dependent on the specific historical relationship pathways between companies and their stakeholders, and thus will vary across firms and time. The special capabilities of a firm "to identify, act on, and profit from opportunities to improve stakeholder relationships through CSR" (Barnett, 2007, p. 803) and the perceptions and responses of stakeholders, including consumers (Schuler & Cording, 2006), to new CSR actions produce unique value outcomes. Looking at the macro level of value creation, Barnett (2007, p. 805) also adds: "'Does CSR improve social welfare?' Oddly enough, this question is seldom asked or answered." This consolidated view of CSR does not disaggregate the value contributed from collaborative activities in particular, but it is important in moving the debate from the "should we" to the "how" and "so what" perspectives, which is where collaborations enter the socio-economic value equation. As Margolis and Walsh (2003, p. 238) put it: "the work leaves unexplored questions about what it is firms are actually doing in response to social misery and what effects corporate actions have, not only on the bottom line but also on society." However, they also state that examples of partnering with nonprofits abound and are increasing, and that partnering "may be the option of choice when the firm has something to give and gain from others when it makes its social investments" (p. 289). Andrioff and Waddock (2002, p. 42) stress the mutual dependency in their definition: "Stakeholder engagements and partnerships are defined as trust-based collaboration between individuals and/or social institutions with different objectives that can only be achieved together." Finn (1996) emphasizes how stakeholder strategies can create collaborative advantage.

Evolving Practice & Multiple Motivations

Even in advance of the researchers' empirical validation, practitioners perceived value in CSR and broadly and increasingly have been taking actions to implement it, although the degree and form vary across firms and over time. Recent surveys of more than a thousand executives by Boston College's Center for Corporate Community Relations revealed that over 60% saw it "as very important that their company treat workers fairly and well, protect consumers and the environment, improve conditions in communities, and, in larger companies, attend to ethical operation of their supply chain" (Googins, Mirvis & Rochlin, 2007, p. 22). Research exploring the motivations behind this increased practice suggests that it is not entirely instrumental, but rather is a varying mix of altruism ("doing the right thing") and utilitarianism (Galaskiewicz, 1997; Donnelly, 2001; Austin, Reficco, Berger, Fischer, Gutierrez, Koljatic, Lozano, Ogliastri & SEKN team, 2004; Goodpaster & Matthews, 1982).
Aguilera, Rupp, Williams and Ganapathi (2007) present an integrative theoretical model that contends that "organizations are pressured to engage in CSR by many different actors, each driven by instrumental, relational, and moral motives." Among these actors are nonprofit organizations acting as societal watchdogs to counter adverse business practices and agitate for positive corporate social actions, which we elaborate on in a subsequent section. Marquis, Glynn and Davis (2007) point to institutional pressures at the community level as key shapers of the nature and level of corporations' social actions. Campbell (2007) also stresses contextual factors but emphasizes economic and competitive conditions as the determiners of CSR, with the effects being mediated by actions of stakeholders. Some have asserted that societies' growing expectations (GlobeScan, 2005) that business should assume a more significant responsibility for solving social problems have created a "new standard of corporate performance, one that encompasses both moral and financial dimensions" (Paine, 2003). The argument is that values, personal and corporate, have intrinsic and social worth but are also a source of economic value for the company. Martin (2002) asserts that the potential for value creation is greater when the motivation is intrinsic rather than instrumental.

Integrating Economic and Social Value

This movement toward a merged value construct has most recently been extended into a repositioning of the very purpose of corporations and capitalism. Porter and Kramer (2011), while putting forth the same premise of producing economic and social value previously discussed extensively in the literature and referred to in our CVC Framework as "Synergistic Value Creation," emphasize making this central to corporate purpose, strategy, and operations. It is asserted that such an approach will stimulate and expand business and social innovation and value, as well as restore credibility in business, in effect reversing the Friedman position from "Thou shalt not!" to "Thou must!" Walsh, Weber and Margolis (2003) also signalled the growing importance of double value: "Attending to social welfare may soon match economic performance as a condition for securing resources and legitimacy." Growing investor interest in social along with economic returns has been manifested by the emergence of several social rating indicators, such as the Dow Jones Sustainability Indexes, FTSE4Good Indexes, Calvert Social Index, and Social Investment Index. This dual value perspective is found in companies around the world, such as the Mexican-headquartered multinational FEMSA: "our commitment to social responsibility is an integral part of our corporate culture. We recognize the importance of operating our businesses to create economic and social value for our employees and the communities where we operate, and to preserve the planet for future generations" (www.femsa.com/es/social). In the 2009 'Report to Society' (De Beers, 2009, p. 2), the Chairman of the De Beers Group highlights the company's search for the "new normal" that will stem from exploiting the synergies between "running a sustainable and responsible business, and a profitable one," which, he admits, will in some cases represent a departure from past practices.
Such an open plea for change is neither an isolated nor a surprising statement, as companies gradually realize that the ability to anticipate, manage, and mitigate long-term risks, address difficult situations in exceptionally challenging and turbulent times (Selsky & Parker, 2011), and develop new capabilities will be achieved through deepening their collaboration with stakeholders, including employees, customers, governments, and local communities, and through developing inter-organizational capabilities (Porter & Kramer, 2011; Austin, 2000a). Central to the development of the 'new normal' of intense interactions is the call for business to demonstrate strong intent in playing a substantial role not only in managing social issues but also in co-creating solutions with wide and deep impacts. NPOs are key actors with deep levels of expertise in fields such as health, education, biodiversity, poverty, and social inclusion. In addition, their expertise is embedded across local communities (Kolk, Van Tulder & Westdijk, 2006) and global networks on social issues (Crane & Matten, 2007; Pearce & Doh, 2005; Heath, 1997; Salamon & Anheier, 1997). Hence, NPOs represent substantial opportunities for corporations intentionally to co-create local and potentially global value by providing solutions to social problems (Van Tulder & Kolk, 2007) or by designing social innovations that will deliver social betterment (Austin & Reavis, 2002). Porter and Kramer (2011) see this happening by (1) developing new and profitable products, services, and markets that meet societal needs in superior ways; (2) improving processes in the value chain related to, for example, worker welfare, the environment, and resource use in ways that simultaneously enhance productivity and social well-being; and (3) strengthening the surrounding community's physical and service infrastructure that is essential for cluster and company competitiveness. They, along with several business leaders, have also emphasized the need for business to escape the narrow-sightedness caused by fixation on short-term financial results and shift to a longer-term orientation within which to build mutually reinforcing social and economic value (Barton, 2011). Porter and Kramer's conception contends that "Not all profit is equal. Profits involving social purpose represent a higher form of capitalism, one that creates a positive cycle of company and community prosperity" (p. 15). To achieve this they emphasize as a critical element the "ability to collaborate across profit/nonprofit boundaries" (p. 4). Unilever's CEO Paul Polman (2010) has called for a shift to "collaborative capitalism." Halal (2001) earlier had urged "viewing stakeholders as partners who create economic and social value through collaborative problem-solving." Zadek (2001) similarly called for collaboration with the increasingly important nonprofit sector as the way to move beyond traditional corporate philanthropy. Ryuzaburo Kaku, the former chairman of Canon, stated that the way for companies to reconcile economic and social obligations is kyosei, a "'spirit of cooperation,' in which individuals and organizations live and work together for the common good" (1997, p. 55).
This approach of integrating social and economic value generation into business strategy and operations is also the central premise of the "Base of the Pyramid" movement, which has emerged over the last decade and aims at incorporating the low-income sector into the value chain as consumers, suppliers, producers, distributors, and entrepreneurs (Prahalad, 2005; Prahalad & Hammond, 2002; Prahalad & Hart, 2002; Rangan, Quelch, Herrero & Barton, 2007; Hammond, Kramer, Katz, Tran & Walker, 2007). The fundamental socioeconomic value being sought is poverty alleviation through market-based initiatives. Recent research has shifted the focus from "finding a fortune" in the business opportunities of the mass low-income markets to "creating a fortune" with the low-income actors (London & Hart, 2011). Recent research has also highlighted the critical roles that not-for-profit organizations frequently play in building these ventures and co-creating value (Márquez, Reficco & Berger, 2010). Portocarrero and Delgado (2010), based on 33 case studies throughout Latin America and Spain, provide further elaboration of the concept of social value produced by socially inclusive, market-based initiatives involving the low-income sector, starting from the Social Enterprise Knowledge Network's earlier definition of social value (Social Enterprise Knowledge Network, 2006): "the pursuit of societal betterment through the removal of barriers that hinder social inclusion, the assistance to those temporarily weakened or lacking a voice, and the mitigation of undesirable side effects of economic activity." They posit four categories of social value: (1) increasing income and expanding life options resulting from inclusion as productive agents into market value chains; (2) expanding access to goods and services that improve living conditions; (3) building political, economic, and environmental citizenship through restoring rights and duties; and (4) developing social capital through constructing networks and alliances.

CSR Stages

The foregoing movement toward integration is part of the evolution of theory and practice. Various scholars have attempted to categorize into stages the wide and evolving range of corporate approaches to CSR. These stage conceptualizations are relevant to our co-creation model because where a corporation has been and is heading is a precursor conditioning factor shaping the potential for and nature of collaborative value creation.

Zadek (2004) conceptualized corporations' learning about CSR as passing through five stages: (1) Defensive (Deny practices, outcomes, or responsibilities), (2) Compliance (Adopt a policy-based compliance approach as a cost of doing business), (3) Managerial (Embed the societal issue in their core management processes), (4) Strategic (Integrate the societal issue into their core business strategies), and (5) Civil (Promote broad industry participation in corporate responsibility). Googins, Mirvis and Rochlin (2007), based on examination of company practices, have created a more elaborated five-stage model, with each stage having a distinct "Strategic Intent" that expresses the value being sought at that stage: (1) Elementary (Legal Compliance) -> (2) Engaged (License to Operate) -> (3) Innovative (Business Case) -> (4) Integrated (Value Proposition) -> (5) Transforming (Market Creation or Social Change). Across these five stages, stakeholder relationships also evolve: Unilateral -> Interactive -> Mutual Influence -> Partnership/Alliances -> Multi-Organization.
The authors assert that for the emerging generation of partnerships between businesses and nonprofits "the next big challenge is to co-create value for business and society" (p. 8). In effect, at higher levels of CSR, collaboration becomes more important in the value creation process. As creating synergistic value becomes integrated and institutionalized into a company's mission, values, strategy, and operations, engaging in the co-creation of value with nonprofits and other stakeholders becomes an imperative. Hence, co-creation of value indicates a higher degree of CSR institutionalization.

NONPROFITS' MIGRATION TOWARD ENGAGEMENT WITH BUSINESS

Just as businesses have increasingly turned to nonprofits as collaborators to implement their CSR and to produce social value, several factors have also been moving nonprofits toward greater engagement with companies. Parallel to the increasing integration of social value into business strategy, there has emerged a growing emphasis in nonprofits on incorporating economic value into their organizational equation. The field of social enterprise and social entrepreneurship emerged as an organizational concept, with some conceptualizations referring to the application of business expertise and market-based skills to the social sector, such as when nonprofit organizations operate revenue-generating enterprises (Reis, 1999; Thompson, 2008; Boschee & McClurg, 2003). Broader conceptualizations of social entrepreneurship refer to innovative activity with a social purpose in either the business or nonprofit sectors, or as hybrid structural forms which mix for-profit and nonprofit activities (Dees, 1998a; 1998b; Austin, Stevenson & Wei-Skillern, 2006; Bromberger, 2011). Social entrepreneurship has also been applied to corporations and can include cross-sector collaborations (Austin, Leonard, Reficco & Wei-Skillern, 2006). Emerson (2003) has emphasized the generation of "blended" social and economic value. The field of social marketing emerged as the application of marketing concepts and techniques to change behaviour to achieve social betterment (Kotler & Zaltman, 1971). It is a set of tools that can be used independently by either businesses or nonprofits as part of their strategies. However, Kotler and Lee (2009) have recently highlighted the importance of cross-sector collaboration in its application.

Many academics and practitioners have commented on the "blurring of boundaries" between the sectors (Dees & Anderson, 2003; Glasbergen, Biermann & Mol, 2007; Crane, 2010), and some researchers have empirically documented this "convergence" (Social Enterprise Knowledge Network, 2006; Austin, Gutiérrez, Ogliastri & Reficco, 2007). While this overlap of purposes reflects an increasingly common appreciation and pursuit of social and economic value creation and fosters collaboration across the sectors, it is not a comfortable move for all nonprofits. Many advocacy nonprofits, in fact, view themselves as in opposition to corporations and fight against practices that they deem detrimental to society (Grolin, 1998; Waygood & Wehrmeyer, 2003; Rehbein, Waddock & Graves, 2004; Hendry, 2006). While this can serve as a healthy social mechanism of checks and balances, it is interesting to note that many nonprofits that have traditionally been antagonists of corporations have increasingly discovered common ground and joint benefits through alliances with companies (Yaziji & Doh, 2009; Ählström & Sjöström, 2005; Stafford, Polonsky & Hartman, 2000).
Heugens (2003) found that even from adversarial relationships with NGOs a company could develop “integrative and communication skills.” Similarly, many business leaders have shifted away from a conflictive posture toward activist nonprofits and now view them as important stakeholders with whom constructive interaction is possible and desirable (Argenti, 2004). John Mackey, founder and CEO of Whole Foods Market, stated, “I perceived them as our enemies. Now the best way to argue with your opponents is to completely understand their point of view,” adding, “To extend our love and care beyond our narrow self-interest is antithetical to neither our human nature nor our financial success. Rather, it leads to the further fulfilment of both” (Koehn & Miller, 2007). Porter and Kramer (2006) contend: “Leaders in both business and civil society have focused too much on the friction between them and not enough on the points of intersection. The mutual dependence of corporations and society implies that both business decisions and social policies must follow the principle of shared value. That is, choices must benefit both sides. If either a business or a society pursues policies that benefit its interests at the expense of the other, it will find itself on a dangerous path. A temporary gain to one will undermine the long-term prosperity of both.” A recent illustration of this interface is when Greenpeace attacked the outdoor apparel maker Timberland with the accusation that leather for its boots came from Brazilian cattle growers who were deforesting the Amazon. CEO Jeff Swartz, who received 65,000 emails from Greenpeace supporters, engaged with the nonprofit and worked with its suppliers to ensure that none of its leather would be sourced from the Amazon area. Nike made a similar agreement. Reflecting on the experience with the activist NGO, Swartz observed, “You may not agree with their tactics, but they may be asking legitimate questions you should have been asking yourself. And if you can find at least one common goal - in this case, a solution to deforestation - you’ve also found at least one reason for working with each other, not against” (Swartz, 2010, p. 43). Eccles, Newquist and Schatz’s (2007) advice on managing reputational risk echoed Swartz’s perspective: “Many executives are skeptical about whether such organizations are genuinely interested in working collaboratively with companies to achieve change for the public good. But NGOs are a fact of life and must be engaged. Interviews with them can also be a good way of identifying issues that may not yet have appeared on the company’s radar screen” (p. 113). In a similar vein, Yaziji (2004) documents the valuable types of resources that nonprofits can bring: legitimacy, awareness of social forces, distinct networks, and specialized technical expertise that can head off trouble for the business, accelerate innovation, spot future shifts in demand, shape legislation, and set industry standards. One of the bridging areas between nonprofit advocacy and collaboration with businesses has been corporate codes of conduct. Arya and Salk (2006) point out how nonprofits have compelled the adoption of such codes but also help corporations by providing knowledge that enables compliance.
Conroy (2007) has labeled this phenomenon the “Certification Revolution,” wherein nonprofits and companies have established standards and external verification systems across a wide array of socially desirable business practices and sectors, e.g., forestry, fishing, mining, textiles, and apparel. The resulting Fair Trade movement has experienced rapid and significant growth, delivering improved economic and social benefits to producers and workers while also giving companies a vehicle for differentiating and enriching their brands due to the social value they are co-creating. Providing consumers with more information on a company’s social practices, such as labor conditions for apparel products, can positively affect “willingness-to-pay” (Hustvedt & Bernard, 2010). Various more general standards and social reporting systems have emerged, such as AA 1000 on Stakeholder Management (www.accountability21.net), SA 8000 on Labor Issues (www.sa-intl.org), the ISO 14000 Series of Standards on Environmental Management and ISO 26000 on Corporate Social Responsibility (www.ISO.org), and the Global Reporting Initiative (GRI) on economic, environmental, and social performance (www.globalreporting.org).

NPO-BUSINESS COLLABORATION AND VALUE CREATION

Businesses and nonprofit organizations can and do create economic and social value on their own. However, as is clear from the stakeholder literature discussed, from resource dependency theory (Pfeffer & Salancik, 1978; Wood & Gray, 1991), and from various major articles and books with ample examples of practice, cross-sector collaboration is the organizational vehicle of choice for both businesses and nonprofits to create more value together than they could have done separately (Kanter, 1999; Austin, 2000a, b; Sagawa & Segal, 2000; Googins & Rochlin, 2000; Jackson & Nelson, 2004; Selsky & Parker, 2005; Galaskiewicz & Sinclair Colman, 2006; Googins, Mirvis & Rochlin, 2007; Seitanidi, 2010; Austin, 2010). For companies, as the foregoing sections have revealed, collaborating with NPOs is a primary means of implementing their CSR. For nonprofits, alliances with businesses increase their ability to pursue their missions more effectively. The calls for heightened social legitimacy for corporations (Porter & Kramer, 2011; Manusco Brehm, 2001; Wood, 1991), corporate accountability (Newell, 2002; Bendell, 2004; Bendell, 2000b), and increased accountability for nonprofit organizations (Meadowcroft, 2007; Ebrahim, 2003; Najam, 1996) signalled the equal importance of process and outcomes (Seitanidi & Ryan, 2007) while paying attention to the role of multiple stakeholders, such as employees and beneficiaries (Le Ber & Branzei, 2010a; Seitanidi & Crane, 2009). Interestingly, Mitchell, Agle and Wood (1997: 862) remarked in their chronology and stakeholder identification rationales that there was no stakeholder definition “emphasising mutual power”, a balance required for the process of co-creation. The previous role of NPOs as influence seekers (Oliver, 1990) has moved beyond the need to demonstrate power, legitimacy, and urgency to business managers (Mitchell, Agle & Wood, 1997), as their newfound salience stems from their ability to be value producers (Austin, 2010; Le Ber & Branzei, 2010b) and from the extreme urgency of social problems (Porter & Kramer, 2011).
The involvement of nonprofit organizations as a source of value creation ranges from their potential to co-produce intangible resources, such as new capabilities through employee volunteering programmes (Muthuri, Matten & Moon, 2009), to new production methods resulting from the adoption of advanced technology held by nonprofit organizations (Stafford & Hartman, 2001). Salamon (2007) stresses the role of the nonprofit sector as a “massive economic force, making far more significant contributions to the solution of public problems than existing official statistics suggest” based on mobilizing millions of volunteers, engaging grass-roots energies, building cross-sector partnerships, and reinvigorating democratic governance and practice. All of the above constitute the untapped potential of the nonprofit sector. Salamon (2007) provides country-level evidence suggesting that the nonprofit sector “exceeds the overall growth of the economy in many countries. Thus, between 2000 and 2003, the sector's average annual rate of growth in Belgium outdistanced that of the overall economy by a factor of 2:1 (6.7 versus 3.2 per cent). In the United States, between 1996 and 2004, the non-profit sector grew at a rate that was 20 per cent faster than the overall GDP.” The above demonstrate the value potential but also the difficulties in understanding and unpacking the value creation that stems from the nonprofit sector during partnership implementation. The role of the partners is to act as facilitators and enablers of the value creation process, understand how to add value to their partner (Andreasen, 1996), and design appropriate mechanisms to enhance the co-creation processes. The fundamental reason for the proliferation of nonprofit-business partnerships is the recognition that how businesses interact with nonprofits can have a direct effect on their success due to the connection of social and financial value (Austin, 2003). Equally, nonprofits are required to work with other organizations to achieve and defend their missions against financial cuts, a shrinking pool of donors, and fierce competition, by demonstrating efficiency and effectiveness in delivering value for money. Coupled with the realization that nonprofits are of significant value to business is the acceptance that nonprofits can also achieve mutual benefit through collaboration with companies (Austin, 2003). The Corporate-NGO Partnership Barometer Summary Report (C&E, 2010) confirms the above, indicating that 87% of NGOs consider partnerships important, particularly for the generation of resources; similarly, 96% of businesses consider partnerships with NGOs important in order to meet their CSR agendas (ibid., pp. 4-5). Interestingly, 59% of the respondents confirmed that they are engaged in approximately 11-50 or more partnerships (C&E, 2010, p. 7), indicating the necessity for partnership portfolio management in order to achieve portfolio balance (Austin, 2003). The most frequently identified (52%) challenge for business in a partnership is “the lack of clear processes for reviewing and measuring performance” (C&E, 2010: 13). Only 21% of nonprofit organizations consider this a key challenge, as their most pressing challenge remains (52%) “lack of resources on our part” (ibid.).
There is a significant literature on economic value creation and capture for businesses dealing with other businesses or even co-creating value with their consumers (Brouthers, Brouthers & Wilkerson, 1995; Bowman & Ambrosini, 2000; Foresstrom, 2005; O’Cass & Ngo, 2010; Lepak, Smith & Taylor, 2007), and similarly on nonprofits collaborating with other nonprofits (Cairns, Harris & Hutchinson, 2010; McLaughlin, 1998). Additionally, there is much written about cross-sector collaborations by business and/or nonprofits with government (Bryson, Crosby & Middleton Stone, 2006; Cooper, Bryer & Meek, 2006). While there are commonalities and differences in value creation processes across all types of intra- and inter-sector collaborations that are worthy of analysis (Selsky & Parker, 2005; Milne, Iyer & Gooding-Williams, 1996), the scope of our inquiry is limited to business-nonprofit dyads. We will now examine collaborative value creation from three dimensions: collaboration relationship stages, partnering processes, and collaboration outcomes.

Relationship Stages and Value Creation

The collaborative relationships between NPOs and businesses take distinct forms and can evolve over time through different stages. Our focus is on understanding how the value creation process can vary across these stages. To facilitate this analysis, we will use Austin’s (2000a; 2000b) conceptualization of a Collaboration Continuum, given that this work is amply referenced by various cross-sector scholars in significant reviews and publications (e.g., Selsky & Parker, 2010, 2005; Le Ber & Branzei, 2010b, 2010c; Seitanidi & Lindgreen, 2010; Bowen, Newenham-Kahindi & Herremans, 2010; Seitanidi, 2010; Kourula & Laasonen, 2010; Jamali & Keshishian, 2009; Seitanidi & Crane, 2009; Glasbergen, Biermann & Mol, 2007; Brickson, 2007; Googins, Mirvis & Rochlin, 2007; Seitanidi & Ryan, 2007; Galaskiewicz & Sinclair Colman, 2006; Berger, Cunningham & Drumwright, 2004; Rondinelli & London, 2003; Wymer & Samu, 2003; Margolis & Walsh, 2003). Seitanidi (2010, p. 13) explained that Austin (2000) “positioned previous forms of associational activity between the profit and the non-profit sectors in a continuum... This was an important conceptual contribution, as it allowed for a systematic and cohesive examination of previously disparate associational forms. The ‘Collaboration Continuum’ is a dynamic conceptual framework that contains two parameters of the associational activity: the degree, referring to the intensity of the relationship, and the form of interaction, referring to the structural arrangement between nonprofits and corporations (ibid, p. 21), which he based on the recognition that cross-sector relationships come in many forms and evolve over time. In fact, he termed the three stages that a relationship between the sectors may pass through as: philanthropic, transactional and integrative.” We will present this conceptualization, relate it to other scholars’ takes on relationship stages and typologies, and then examine the nature of value creation in each stage. The Collaboration Continuum (CC) has three relationship stages: Philanthropic (charitable corporate donor and NPO recipient, largely a unilateral transfer of resources), Transactional (the partners exchange more valuable resources through specific activities, such as sponsorships, cause-related marketing, and personnel engagements), and Integrative (where missions, strategies, values, personnel, and activities experience organizational integration and co-creation of value).
Figure 1 suggests how the nature of the relationship changes across those stages in terms of the following descriptors: level of engagement, importance to mission, magnitude of resources, scope of activities, interaction level, managerial complexity, strategic value, and co-creation of value.

Figure 1. The Collaboration Continuum

Nature of relationship: Philanthropic (Stage I) >>> Transactional (Stage II) >>> Integrative (Stage III)
- Level of engagement: Low >>> High
- Importance to mission: Peripheral >>> Central
- Magnitude of resources: Small >>> Big
- Type of resources: Money >>> Core competencies
- Scope of activities: Narrow >>> Broad
- Interaction level: Infrequent >>> Intensive
- Trust: Modest >>> Deep
- Managerial complexity: Simple >>> Complex
- Strategic value: Minor >>> Major
- Co-creation of value: Sole >>> Conjoined

Source: Derived from James E. Austin, The Collaboration Challenge (San Francisco: Jossey-Bass, 2000)

The use of a continuum is important analytically because it recognizes that the stages are not discrete points; conceptually and in practice a collaborative relationship is multifaceted, and some characteristics may be closer to one reference stage while other traits are closer to another. Nor does a relationship automatically pass from one stage to another; movement, in either direction, is a function of decisions and actions by the collaborators. Furthermore, a relationship need not pass through each stage, but could begin at a different stage, e.g., creating a transactional relationship without having had a prior philanthropic relationship. A continuum captures more usefully the dynamic nature and heterogeneity of evolving relationships and the corresponding value creation process. Several researchers have also found the concept of a continuum useful, although they have depicted its content somewhat differently than in the CC. Bryson, Crosby and Middleton Stone (2006) use a collaboration continuum construct, with one end for organizations that only barely relate to each other regarding a social problem (as in the CC’s Philanthropic stage), and the other end for “organizations that have merged into a new entity to handle problems through merged authority and capabilities” (p. 44), as in the CC’s Integrative stage. Rondinelli and London (2003) similarly use a continuum of the relationship’s “intensity,” moving from low-intensity “arm’s-length” relationships (similar to the CC’s Philanthropic stage), to moderate-intensity “interactive collaborations” (similar to the Transactional stage), to high-intensity “management alliances” (similar to the Integrative stage). Bowen, Newenham-Kahindi and Herremans’ (2010) review of 200 academic and practitioner sources on cross-sector collaboration uses a “continuum of community engagement” concept and offers a typology of three engagement strategies: transactional, transitional, and transformational. Their descriptions of the three strategies differ in definition from those in the CC. Their “transactional” strategy of “giving back” is close to the definition of the Philanthropic stage in the CC.
Their “transitional” strategy points to increasing collaborative behaviour but lacks definitional power, as it is seen as a phase of moving from philanthropic activities to a “transformational” phase, which has some of the characteristics of the CC’s Integrative stage of joint problem-solving, decision-making, management, learning, and creating conjoined benefits. They point to difficulties in “distinguishing between ‘collaboration and partnership’ and truly transformational engagement” (p. 307). Googins, Mirvis and Rochlin (2007) characterize company relationships with stakeholders as moving from unilateral, which corresponds to Austin’s Philanthropic stage, to mutual influence, which is close to the Transactional stage, to partnerships and alliances, which have integrative characteristics, and then to multi-organization, which is “transforming” and seems to depict a more aspirational stage that achieves significant social change. The identification by these researchers of a transformational stage offers an opportunity to enrich the CC, so we will make that elaboration below. Galaskiewicz and Sinclair Colman’s (2006) major review of business-NPO collaboration does not explicitly use a continuum, but the underlying differentiator in its typology is the motivation for and destination of the benefits generated. Their collaboration types can be connected to the CC framework. The review’s primary focus and exhaustive treatment is on the philanthropic relationship. This first stage in the CC is the most common collaborative relationship and is characterized as predominantly motivated by altruism, although some indirect benefits for the company are hoped for. Additionally, but with much less elaboration, they pointed to “strategic collaborations” involving event sponsorships and in-kind donations aimed at generating direct benefits for the company and the NPO. Similarly, they point to “commercial collaborations” involving cause-related marketing, licensing, and scientific cooperation, also aimed at producing direct benefits. The asserted distinctions between these two categories are that in the latter the benefits are easier to measure and the “activity is unrelated to the social mission.” It is unclear why the former would be “strategic” but not the latter, as both could be part of an explicit strategy. Some researchers have even labelled philanthropy as “strategic” based on how it is focused (Porter & Kramer, 2002). In relationship to the CC, the strategic and the commercial categories correspond to the Transactional Stage. Galaskiewicz and Sinclair Colman also refer to “political collaboration” that aims at influencing other entities, social or governmental; depending on the precise nature of the relationship in carrying out this purpose, this type could be placed in any of the three stages of the CC, but would seem closest to a transactional relationship. We will now examine value creation in each of the three stages of the CC: Philanthropic, Transactional, and Integrative, and also add a fourth stage, Transformational.

Philanthropic Collaborations

As Lim (2010) points out in introducing his helpful review on assessing the value of corporate philanthropy, “How to measure the value and results of corporate philanthropy remains one of corporate giving professionals’ greatest challenges. Social and business benefits are often long-term or intangible, which make systematic measurement complex.
And yet: Corporate philanthropy faces increasing pressures to show it is as strategic, cost-effective, and value-enhancing as possible.” In philanthropic collaborations, the directionality of the resource flow is primarily unilateral, flowing from the company to the nonprofit. In the USA, corporations donated $14.1 billion in cash and goods in 2009, up 5.9% from 2008 in inflation-adjusted dollars (Giving USA Foundation, 2010). About 31% of these donations come via company foundations, of which there were an estimated 2,745 in 2009 (Lawrence & Mukai, 2010). This “transferred resource value” accrues to the nonprofit. It is an economic value that enables the nonprofit to pursue its mission, the completion of which creates social value. Margolis and Walsh (2003, p. 289) depict these donations as the “buy” option for implementing CSR. The nonprofit has the organizational capabilities lacking in the company to address a particular social need, and the company has the funds that the nonprofit lacks. This is basic resource complementarity, but the resource type is generic: cash. It enables the nonprofit to do more of what it already does, but it does not add any more value than what would come from any other cash donor. Legally, corporate donations made via their foundations cannot directly benefit the corporation, although Levy (1999) revealed many ways to capture synergies between the company and its foundation. Still, it has been asserted that beyond the tax deduction, donations are largely altruistic; the benefit flows in one direction to the nonprofit and the hoped-for generation of social value. However, a variety of benefits can, in fact, accrue to the business. There is the potential for associational value, whereby the company’s reputation and goodwill with various stakeholders, including communities and regulators affecting its “License to Operate,” is enhanced due to its philanthropic association with the nonprofit and its social mission. This is in part due to the generally higher levels of trust associated with nonprofits and the value created for the business when that asset is transferred through association (Seitanidi, 2010). One survey (Deloitte, 2004) indicated that 92% of Americans think that it is important for companies to make charitable contributions or donate products and/or services to nonprofit organizations in the community. It has been calculated that 14% of U.S. companies’ reputations is attributable to citizenship efforts (Reputation Institute, 2011). Similarly, the nonprofit can gain credibility and enhance its reputation by having been vetted and selected as a donation recipient of an important company (Galaskiewicz & Wasserman, 1989). Managing reputational risk is an important task for companies and nonprofits. Several researchers have documented that companies’ philanthropic activities provide an “insurance policy” that helps mitigate the repercussions of negative events (Godfrey, Merrill & Hansen, 2009). Both partners run the risk of being tainted by their partner’s negative actions and the corresponding bad publicity (Galaskiewicz & Sinclair Colman, 2006). When the donation is a company’s product, it is more distinctive than a cash contribution; product donations are sometimes preferred as a way of moving inventories or promoting product usage and brand recognition.
There is evidence that a company that is perceived as collaborating with nonprofits and contributing to the resolution of social problems will garner greater respect and preference from consumers (GlobeScan, 2003). However, consumers’ pathway from intention to buy to actual purchase is circuitous and requires other explicit companion actions (Bhattacharya & Sen, 2004) that are more likely to occur in the more structured collaborations found in the Transactional stage, such as Cause-Related Marketing, which we will discuss in the next section. Another stakeholder group of particular relevance in philanthropic collaborations is employees, with the perceived benefits of attracting, retaining, and motivating them (Boston College Center for Corporate Citizenship & Points of Light Foundation, 2005). Survey and experimental work has revealed that almost three-quarters of those surveyed would choose to work for a company with a good philanthropic record, all other things being equal (Deloitte, 2004; Greening & Turban, 2000). CEOs have also pointed to attracting talent as a significant motivation for their corporate philanthropy (Bishop & Green, 2008; Bhattacharya, Sen & Korschun, 2008). If the company moves beyond cash donations, including matching employee grants, and engages in employee volunteerism through outreach programs with nonprofit groups, then additional benefits can be expected. The Deloitte (2004) survey revealed that:
- 87% of Americans believe it is important for companies to offer volunteer opportunities to their employees;
- 73% say that workplace volunteer opportunities help companies contribute to the well-being of communities;
- 61% think that they help to communicate a company’s values;
- 58% believe that workplace volunteer opportunities improve morale.
A survey of 131 major U.S. corporations revealed that 92% had formal employee volunteer programs (Lim, 2010). Research has identified benefits in terms of increased employee identification with the company and enhanced job performance (Bartel, 2001; Jones, 2007). Volunteering and interacting with the nonprofits can also foster new skill development (Peterson, 2004; Sagawa & Segal, 2000). Corporate volunteering can be relatively informal but sometimes develops into highly structured collaborative projects with the nonprofit with specific objectives, time frames, and expected exchanges of assets. For example, Timberland has a highly developed community service program with City Year and other nonprofits, including giving employees 40 hours of paid release time to work with nonprofits (Austin, 2000a; Austin, Leonard & Quinn, 2004; Austin & Elias, 2001). Many corporations encourage their management employees to volunteer as board members of nonprofits, and some have supported formal governance training and placement (Epstein & McFarlan, 2011; Korngold, 2005; Austin, 1998). In these more elaborated forms, the collaboration migrates from the philanthropic stage towards the transactional stage. This reveals that as the partners broaden the Resource Type from just cash to also include their employees, they can create new opportunities for value creation. The benefits accrue at the meso level for both partnering organizations and at the micro level for the employees. However, a critical determinant of how much value is created is the type of skills the employee volunteers bring to the collaboration.
If they bring specialized skills rather than just their time and manual labor, then the potential value added is greater (Kanter, 1999; Vian, Feeley, Macleod, Richards & McCoy, 2007). To conclude this subsection, we note that traditional philanthropic collaboration largely involves sole creation rather than co-creation of value. Each partner provides inputs: the corporation gives funds and the nonprofit delivers a social service. The degree of interaction is generally quite limited and the functions rather independent. There is synergistic value in that complementary resources come together that enable the nonprofit to produce social value, which in turn gives rise indirectly to economic value for the company. There are benefits at the meso, micro, and macro levels, but they are relatively less robust than at the subsequent stages in the CC. The search for greater value gave rise to a move toward “strategic philanthropy” as part of the CSR evolution. While it has taken many different forms, one of the most noted was that put forth by Porter and Kramer (2002), which was an intellectual precursor to their 2006 analysis of the links between CSR and competitive advantage (Porter & Kramer, 2006) and their 2011 conceptualization of shared value discussed above in the CSR Evolution section. They emphasize the importance of having corporate philanthropy be “context focused,” aimed at strengthening the social, economic, and political operating environments that greatly determine a company’s ability to compete. In effect, they are seeking what our CVC Framework labels linked interests between companies and communities. This is tied to the creation of synergistic value, as they contend that “social and economic goals are not inherently conflicting but integrally connected.” Two further value elements in their concept concern the type of resources deployed and how they are used. They stress the importance of giving not only money but also leveraging organizations’ special capabilities to strengthen each other and their joint efforts, asserting, “Philanthropy can often be the most cost-effective way to improve its competitive context, enabling companies to leverage the efforts and infrastructure of nonprofits and other institutions” (Porter & Kramer, 2002, p. 61). These shifts move collaborations further along the value creation spectrum and toward higher stages of engagement on the Collaboration Continuum.

Transactional Collaborations

Transactional relationships differ from philanthropic ones along several dimensions, as elaborated previously, but we focus here on the value aspects. Salient among these is that the directionality of the resource flow shifts from unilateral to bilateral. There is an explicit exchange of resources and reciprocal value creation (Googins & Rochlin, 2000). There is higher resource complementarity, and the resources the partners deploy are often more specialized assets with greater value-generating potential (Waddell, 2000). The partners have linked interests in that creating value for oneself is dependent on creating it for the other. Associational value is more salient and organizational fit is more essential to value creation. The value creation tends to be more quantifiable and the benefits to the organizations more direct; however, there is less certainty regarding the realization of improved societal welfare.
The types of collaborations that characterize the Transactional stage include Cause-Related Marketing (CRM), event and other sponsorships, name and logo licensing agreements, and other specific projects with clear objectives, assigned responsibilities, programmed activities, and predetermined timetables. The various certification arrangements between businesses and nonprofits would also be encompassed within the transactional collaboration category. Selsky and Parker (2010) consider these transactional collaborations as arising from a “Resource Dependency Platform,” in which the partners’ motivation is primarily self-interest and secondarily the social issue. Varadarajan and Menon’s (1988) early article on CRM indicated many benefits from CRM, but pointed to revenue enhancement as the “main objective.” IEG, the leading advisory agency on event sponsorships, estimated that sponsorships in 2010 were $17.2 billion in North America and $46.3 billion globally, with Europe and Asia Pacific being the other primary areas. While sponsorship of sporting events is the largest category, social cause sponsorships grew the fastest at 6.7% and arts at 2.7% (IEG, 2011). Cone’s (2004) longitudinal consumer survey revealed that 91% indicated they would have a more positive attitude toward a product or a company when it supports a social cause, up from 83% in 1993, because it wins their trust. Furthermore, 84%, compared to 66% in 1993, indicated that they would be likely to switch brands of similar quality and price if one product was associated with a social cause. These respondents also stated that a company’s commitment to a social issue was relevant to their decisions regarding which companies to work for, have in their communities, recommend to others, and invest in. Hoeffler and Keller (2002) assert that these campaigns can increase brand awareness, image, credibility, feelings, community, and engagement. Heal’s (2008) as well as Marin, Ruiz and Rubio’s (2009) research revealed that identification with the company, emotional connection, and buyer brand loyalty increased when associated with a social cause. Associational Value is the central benefit accruing to the company, and the various forms of CRM, sponsorships, and certifications aim to make that association more salient, with the hope that sales will be enhanced. However, intermediating variables can affect the realization of the potential associational value, such as product type, perceived motivation of the campaign, the company’s CSR record, and size of contribution (Smith & Langford, 2009; Bhattacharya & Sen, 2004; Strahilevitz & Myers, 1998; Strahilevitz, 1999; Strahilevitz, 2003). Although buyer intentions are often not realized, some survey evidence revealed that UK consumers actually did switch brands, try a new product, or increase purchases of a product due to its association with a charity’s cause (Farquason, 2000). Hiscox and Smyth (2008) researched the following question: “A majority of surveyed consumers say they would be willing to pay extra for products made under good working conditions rather than in sweatshops, but would they really do so?” The results from experiments that they conducted in a major retail store in New York City showed that “Sales rose by 12-26% for items labelled as being made under good labor standards. Moreover, demand for the labelled products actually rose when prices were increased.
Raising prices of labelled goods by 10% actually increased their sales by an additional 21-31%.” Castaldo, Perrini, Misani and Tencati (2009) confirmed the importance of trust to consumers’ decision-making in the purchase of Fair-Trade labelled products. Certified products can even elicit a willingness to pay a premium price from environmentally conscious consumers (Thompson, Anderson, Hansen & Kahle, 2010). Collaboration with certifying organizations is one mechanism for gaining consumer trust, but the company’s CSR reputation also proved to be a key source of trust. The strength of that reputation also provides some “insurance” in the form of resistance by consumers to negative information about its CSR activities (Eisingerich, Rubera, Seifert & Bhardwaj, 2011). The effectiveness of a CRM campaign can be enhanced or decreased depending on the specific methods used to implement it, e.g., the frequency of repetition of the CRM claims as a means of overcoming consumer skepticism (Singh, Kristensen & Villaseñor, 2009). The primary benefit being sought by the nonprofits is the revenue from the company, often a percentage of sales if a product is being promoted around the cause in a special campaign, or a prearranged fee. American Express’s affinity marketing campaign, which donated a percentage of sales or a fee for new card applications, resulted in a 28% increase in card usage in the first month and a 45% rise in applications, producing $1.7 million for the restoration of the Statue of Liberty. Coca-Cola’s six-week promotion to support Mothers Against Drunk Driving boosted sales 490% and provided the nonprofit with 15 cents for each case sold (Gray & Hall, 1998). The associated publicity of the cause and the collaborating nonprofit can also be valuable to the nonprofit and generate some social value in the form of greater public awareness of the need. Because the associational relationship is closer and more visible in these transactional relationships, the risks to the partners’ respective brands, i.e., the creation of negative value, are also greater (Wymer & Samu, 2003; Andreasen, 1996; Haddad & Nanda, 2001). Basil and Herr (2003) point to the risk of negative attitudes toward the nonprofit arising from inappropriate organizational fit between the partners. Berger, Cunningham and Drumwright (2004) also stress the importance of alignment of missions, resources, management, work force, target market, product/cause, culture, business cycle, and evaluation if the partners are to realize the full benefits of their social alliance. Gourville and Rangan (2004) present a model and examples that show how appropriate fit allows the partners to generate value beyond the “first order” direct benefits of enhanced revenues for the company and fees for the nonprofit, to produce “second order” benefits. For the firm these could include strengthening relationships with employees, investors, and the larger community, and for the nonprofit they could include greater name recognition and a widening of its donor base. Good fit enables the generation of synergistic value, and the better the fit, the greater the value creation. Beyond these benefits accruing at the meso level to the partnering organizations, there remains the issue of to what extent these transactional collaborations generate societal benefits. Some have asserted that these are largely commercial undertakings rather than social purpose alliances (Galaskiewicz & Sinclair Colman, 2006; Porter & Kramer, 2006).
It is a fact that many CRM undertakings are funded from corporate marketing budgets rather than their philanthropic funds, and their effects on consumer intentions and actions are measured. This is evidence that companies recognize the business case for supporting nonprofits in this manner, and it also creates access for nonprofits to a much larger pool of corporate resources for social causes than just the philanthropy budget. However, there is little parallel effort documented in the literature to measure the presumed resultant societal benefit, although environmental collaborations seem to assess impact outcomes more often. As in the Philanthropic Stage, there exists the assumption that by channelling resources to the nonprofit, social value creation will be enabled. To the extent that more resources are generated for the nonprofit via the transactional arrangements than would have occurred from a traditional donation, the potential for greater value exists. In assessing social value generation, it is important to differentiate among types of transactional collaborations. Seitanidi and Ryan (2007), for example, distinguish between “commercial sponsorship” and “socio-sponsorship” based on predominant purpose, with the former aimed primarily at generating revenues for the partners and the latter at meeting social needs, although benefits also accrue to the partnering organizations. At the macro level, the heightened publicity for the cause may create larger awareness of a problem and steps for remediation. For example, Avon’s social cause partnerships with breast cancer organizations in over 50 countries have resulted in $700 million being donated since 1992 to these nonprofits and over 100,000 women being educated about breast cancer early detection, diagnosis, and treatment (Avon Foundation for Women, 2011). Gourville and Rangan (2004) provide a useful methodology that is aimed at assessing the first and second order benefits of CRM to both business and nonprofit partners, which facilitates more constructive discussions in the value capture negotiations; however, they do not provide guidance for assessing the societal value generated. Lim’s useful review (2010) also provides very helpful methodologies for assessing the corporate value of transactional and other CSR efforts, but the focus is primarily on the business benefits, direct and derived. Nonetheless, he also describes a variety of approaches and methodologies for measuring social impact, including some references with examples applied to collaborations in different social sectors, to which we will return in our subsequent outcomes section.

Integrative Collaborations

A collaboration that evolves into the integrative stage changes the relationship in many fundamental ways, including the value creation process. Organizational fit becomes more synchronous: partners’ missions, values, and strategies find much greater congruency as a result of working together successfully and developing deeper relationships and greater trust. The discovery of linked interests and synergistic value creation provides an incentive for collaborating ever more closely to co-create even more value. The strategic importance of the collaboration becomes significant and is seen as integral to the success of each organization, but beyond this, greater priority is placed on producing societal betterment. Good collaboration produces better collaboration, creating a virtuous cycle.
But arriving at this state requires much effort and careful relational processes on many fronts, including reconciling their different value creation logics (Le Ber and Branzei, 2010a). Achieving this value frame fit can occur progressively as a relationship evolves through the stages or over time within the integrative stage on the Collaboration Continuum. The value creation equation changes in the integrative relationship compared to the more common transactional relationships, particularly in terms of the type of resources and how they are used. The partners increasingly use more of their key assets and core competencies, but rather than just using them in an isolated fashion to perform an activity that produces value for the collaboration (as often occurs in transactional collaborations), they combine these key resources. The directionality of the resource flow is conjoined. Jeff Swartz, CEO of Timberland and also formerly Chair of the Board of its NPO partner City Year, described their integrative relationship: “Our organization and their organization, while not completely commingled, are much more linked.... While we remain separate organizations, when we come together to do things we become one organization” (Austin, 2000a, p. 27). The importance of this intermingling is that it creates an entirely new constellation of productive resources, which in turn holds potential for co-creating greater value for the partners and for society through synergistic innovative solutions. Kanter (1999) cited examples of each partner combining their complementary competencies to create innovative solutions, e.g., in welfare-to-work programs: “while Marriott provides uniforms, lunches, training sites, program management, on-the-job training, and mentoring, its partners help locate and screen candidates and assist them with housing, child care, and transportation” (p. 129). In IBM’s Reinventing Education collaboration with schools, the company’s staff had their offices in the schools and they interacted constantly with the teachers in a continuous co-creation process of feedback and development. Whereas transactional collaborations tend to be clearly defined and for a specified time period, in the integrative stage innovative co-creation has a different dynamic, as Kanter noted: “Like any R&D project, new-paradigm partnerships require sustained commitment. The inherent uncertainty of innovation - trying something that has never been done before in that particular setting - means that initial project plans are best guesses, not firm forecasts” (p. 130). Rondinelli and London (2003) provide several examples of “highly intensive” collaborations between environmental NPOs and companies in which the partners integrated their respective expertise to co-create innovative solutions aimed at improving the environmental performance of company products and processes. The Alliance for Environmental Innovation worked in integrated, cross-functional teams with UPS and its suppliers, combining their respective technical expertise on material usage lifecycles in a collective discovery process that “created new designs and technologies, resulting in an almost 50 percent reduction in air pollution, a 15 percent decline in wastewater discharge, and 12% less in energy usage” (p. 72). These outcomes are societal benefits that simultaneously generate economic benefits to the company. The Alliance’s aspiration is to create best practices that will be emulated throughout a sector, thereby multiplying the social value creation.
There were clearly linked interests giving rise to synergistic value. Holmes and Moir (2007) suggest that when the collaboration has a narrow scope, the innovation is likely to be incremental, whereas a more open-ended search would potentially produce more radical and even unexpected results. In the integrative stage, while benefits to the partners remain a priority, generating societal value takes on greater importance. This emerges from the company’s values when generating social value has become an integral part of its core strategy. A company cannot undertake an integrative collaboration until its CSR has reached an integrative state. For example, as Googins, Mirvis and Rochlin (2007) report, one of IBM’s values is “innovation that matters for the world,” with its corollary “collaboration that matters.” The company holds that in its “socio-commercial efforts, the community comes first. Only when the company proves its efforts in society… does it… leverage marketing or build commercial extensions.” IBM’s CEO Sam Palmisano explained, “It’s who we are; it’s how we do business; it’s part of our values; it’s in the DNA of our culture” (p. 123). The more CSR is institutionalized, the more co-creation becomes part of the value creation process, i.e., it moves from sole creation to co-creation. It is in the integrative stage that interaction value emerges as a more significant benefit derived from the closer and richer interrelations between partners. Bowen, Newenham-Kahindi and Herremans (2010) assert that “value is more likely to be created through engagement which is relational rather than transactional” (p. 311). The intangible assets that are produced (e.g., trust, learning, knowledge, communication, transparency, conflict management, social capital, social issues sensitivity) have intrinsic value to partnering organizations, individuals, and the larger society, but in addition are enablers of integrative collaboration. While these intangibles and processes will be further discussed in the subsequent section on collaboration implementation, it is worth noting that various researchers have pointed to these elements as essential to co-creation of value (Austin, 2000a, 2000b; Berger, Cunningham & Drumwright, 2004; Bowen, Newenham-Kahindi & Herremans, 2010; Bryson, Crosby & Middleton Stone, 2006; Googins, Mirvis & Rochlin, 2007; Googins & Rochlin, 2000; Le Ber & Branzei, 2010b; 2011; Selsky & Parker, 2005; Selsky & Parker, 2010; Rondinelli & London, 2003; Sagawa & Segal, 2000; Seitanidi, 2010; Seitanidi & Ryan, 2007). Integrative collaborations are much more complex and organic than transactional arrangements. They require deployment of more valuable resources and demand more managerial and leadership effort, and therefore entail a much deeper commitment. The compensation for these greater investments in co-creation is greater value for the partners and society. The substantiating evidence from the literature comes primarily via case studies, which is an especially appropriate methodology for describing, analyzing, and understanding the partnering processes. However, the specific pathways for the co-creation of value have not received the thorough scrutiny that their importance merits, particularly, as we elaborate subsequently, the outcomes for societal welfare at the macro, meso, and micro levels.

Transformational Collaborations

We now briefly offer a possible extension of Austin’s Collaboration Continuum with the addition of this fourth stage: Transformational Collaborations.
This is a theoretical rather than an empirically based conceptualization. It would build on but move beyond the integrative stage and emerge as a yet higher level of convergence. The primary focus in this stage is to co-create transformative change at the societal level. There is shared learning about social needs and partners’ roles in meeting those needs, which Selsky and Parker (2010) refer to as a “Social Issues Platform” for the collaboration. Partners not only agree on the social issue they want to address because it affects them both (Waddock, 1989), but they also agree that their intention is to transform their own processes or to deliver transformation through a social innovation that will change for the better the lives of those affected by the social problem. The end beneficiaries take a more active role in the transformation process (Le Ber & Branzei, 2010b). The aim is to create “disruptive social innovations” (Christensen, Baumann, Ruggles & Sadtler, 2006). This stage represents collaborative social entrepreneurship, which “aims for value in the form of large-scale, transformational benefit that accrues either to a significant segment of society or to society at large” (Martin & Osberg, 2007; Nelson & Jenkins, 2006). Interdependence and collective action are the operational modality. One form might be the joint creation of an entirely new hybrid organization. For example, Pfizer and the Edna McConnell Clark Foundation joined together to create the International Trachoma Initiative as a way to most effectively achieve their goal of eliminating trachoma (Barrett, Austin & McCarthy, 2000). As the social problems being addressed become more urgent or complex, the need to involve other organizations in the solution also increases, giving rise to multi-party, multi-sector collaborations. The transformative effects would not only be in social, economic, or political systems, but would also be transformational for the partnering organizations. The collaboration would change each organization and its people in profound, structural, and irreversible ways. We will now examine the third component of the CVC Framework, partnership processes, where the potential for and the creation of value will be discussed.

Partnership Processes

This section of the paper reviews the literature on nonprofit-business partnership processes that contribute importantly to the co-creation of value in the partnership formation and implementation phases. Understanding the partnership formation phase is important because it provides indications of the potential for co-creation of value, which is likely to be realized during the subsequent partnership implementation phase, in which partners’ resources are deployed and the key interactions for the co-creation of value occur. We discuss first the key processes that indicate the potential for the co-creation of value in partnership formation. Next we examine partner selection as the connecting process between partnership formation and implementation. Finally, we discuss the micro-processes and dynamics that contribute to the co-creation of value in the implementation phase, where value is created by the partners.
Partnership Formation: Potential for Co-creation of Value

Partnership formation (Selsky & Parker, 2005) is usually expressed in the literature as initial conditions (Bryson, Crosby & Middleton Stone, 2006), problem-setting processes (McCann, 1983; Gray, 1989), coalition building (Waddock, 1989), and preconditions for partnerships (Waddell & Brown, 1997). Some scholars present formation as part of the partnership selection process (McCann, 1983; Gray, 1989; Waddock, 1989), hence the processes of formation and implementation appear to “overlap and interact” (McCann, 1983, p. 178), while others suggest that partnership formation constitutes a distinct phase or a set of preconditions (Waddell & Brown, 1997; Seitanidi, Koufopoulos & Palmer, 2010). We propose that the selection stage is positioned in a grey area, functioning as a bridge between partnership formation and implementation. Conceptually and analytically we follow Seitanidi, Koufopoulos and Palmer (2010) and Seitanidi and Crane (2009) by separating the two in order to discuss the co-creation of value. McCann (1983, p. 178), however, suggests that “processes greatly overlap and interact”, which is observed in the extension of processes across formation, selection, and implementation. For example, pre-selection of partners and due diligence are not always easy or clear-cut, nor are they positioned within a discrete stage. As Vurro, Dacin and Perrini (2010) remark, the time dimension in the analysis of cross-sector social partnerships (Selsky & Parker, 2005) is represented by studies that examine the static characteristics of partnerships (Bryson, Crosby & Middleton Stone, 2006) and by process-based views (Seitanidi & Crane, 2009) that “extend the debate to the variety of managerial challenges and conditions affecting collaborations as they progress through stages” (Vurro, Dacin & Perrini, 2010, p. 41). Partnership formation is a process originating either prior to or during previous interactions (Bryson, Crosby & Middleton Stone, 2006) with the same or other partners, in either philanthropic or transactional relationships (Austin, 2000b). Hence, formation can be seen as an early informal assessment mechanism that evaluates the suitability of a collaboration to evolve into an integrative or transformational relationship, where the long-term value creation potential of the partnership for the partners and society is higher (Austin, 2000a). Underestimating the costs and negative effects of poor organizational pairing can be the result of insufficient experience in co-creation of value, planning and preparation (Berger, Cunningham & Drumwright, 2004; Jamali & Keshishian, 2009). Often managers “think about it” but they do not usually invest “a huge amount of time in that process” (Austin, 2000a, p. 50). Such neglect carries consequences, as due diligence and relationship building are key process variables that can determine the fit between the partners. This process will increase managers’ ability to anticipate and capture the full potential of the partnership for both the business and the nonprofit partner. More importantly, the steps that we discuss below will provide early indications of the benefits that are likely to be produced by both organizations collectively (i.e., at the partnership level) (Gourville & Rangan, 2004; Clarke & Fuller, 2010), indicating the co-creation of value and the potential to externalize the value to society.
However, deciding which partner holds the highest potential for the production of synergistic value is time-consuming and challenging. The difficulties in undertaking cross-sectoral partnering, and particularly in developing integrative and transformational collaborations, are extensively documented in the literature (Kolk, Van Tulder & Kostwinder, 2008; Bryson, Crosby & Middleton Stone, 2006; Teegen, Doh & Vachani, 2004; Austin, 2000a; Crane, 2000; 1998), as are the misunderstandings and power imbalances (Berger, Cunningham & Drumwright, 2004; Seitanidi & Ryan, 2007). Achieving congruence in their mission, strategy and values during the partnership relationship has been deemed particularly significant (Austin, 2000a); however, sectoral differences between for-profit and nonprofit organizations create barriers. Differences in goals and characteristics (McFarlan, 1999), values, motives and types of constituents (DiMaggio & Anheier, 1990; Crane, 1998; Milne, Iyer & Gooding-Williams, 1996; Alsop, 2004), objectives (Heap, 1998; Stafford & Hartman, 2001), missions (Shaffer & Hillman, 2000; Westley & Vredenburg, 1997), and organizational characteristics and structures (Berger, Cunningham & Drumwright, 2004) require early measures of fit that can provide indications of the potential for co-creation of value. The partners’ differences constitute at the same time “both obstacles and advantages to collaboration” (Austin, 2010, p. 13) and can be the source of potential complementary value creation (Yaziji & Doh, 2009). Bryson, Crosby and Middleton Stone (2006, p. 46) suggest: “As a society, we rely on the differential strengths of the for-profit, public and non-profit sectors to overcome the weaknesses or failures of the other sectors and to contribute to the creation of public value”. Berger, Cunningham and Drumwright (2004) suggest that many of the partnership problems, but not all, can be predicted and dealt with. Such problems include: misunderstandings, misallocation of costs and benefits, mismatches of power, lack of complementarity in skills, resources and decision-making styles, mismatching of time scales, and mistrust. They propose a useful set of nine measures of fit and compatibility that can assist the partners to assess the existing and potential degree of fit, including mission, resources, management, work force, target market, product/cause, cultural, cycle and evaluation fit (ibid., pp. 69-76). However, they assert that the measures of fit most crucial for the initial stages are the mission fit, resource fit, management fit and evaluation fit. In the case of a new partnership it would be rather difficult to examine the management fit at the formation phase; hence we discuss this issue in partnership implementation. We extend this fit framework by adding further measures of fit that contribute to the anticipation of problems while focusing on the maximization of the potential of the co-creation of value at the partnership formation stage.

Partnership Fit Potential

Partnership fit refers to the degree to which organizations can achieve congruence in their perceptions, interests, and strategic direction. As pointed out by Weiser, Kahane, Rochlin and Landis (2006, p. 6), “the correct partnership is everything”; hence, when organizations are in the process of either deepening an existing collaboration (previously philanthropic or transactional) or experimenting with a new collaboration, they should seek early indications of partnership fit.
An important mechanism (Bryson, Crosby & Middleton Stone, 2006) that offers an indication of value co-creation potential is the initial articulation of the social problem that affects both partners (Gray, 1989; Waddock, 1986). Examining the partners' social problem frames reveals commonalities or differences in how they perceive the dimensions of a social problem (McCann, 1983). The process of articulation can identify incompatibilities, signalling the need either for frame realignment or for abandoning the collaborative effort. Provided there is sufficient common ground, the partners will next identify whether their individual interests are sufficiently linked (Logsdon, 1991). This process will assist partners in understanding how they view value (both benefits and costs) and, if required, in reconciling any divergent value creation frames. Part of this process is developing an early understanding of how the social problem might be addressed through the partners' capabilities and developing an insight into how the benefits of the partnership will escalate from the meso to the macro level, i.e., how society is going to be better off due to the partnering efforts of the business and nonprofit organizations (Austin, 2000b). This moves the concerns "beyond how the benefit pie is divided among the collaborators … to the potential of cross sector partnerships to be a significant transformative force in society" (Austin, 2010, p. 13). Importantly, moving beyond the social problem focus to the societal level encourages the partners to look at the partnership's "broader political implications" (Crane, 2010, p. 17), elevating social partnerships to global governance mechanisms (Crane, 2010). In effect, if the partners are able to link their interests, and also draw links with broader societal betterment, this provides an early indication of high potential for co-creation of value for the social good, i.e., synergistic value capture at the societal level. The more the social problem is linked to the interests of the organizations, the higher the potential to institutionalize the co-creation process within the organizations, which will lead to better value capture by the partners and by intended or unintended beneficiaries (Le Ber & Branzei, 2010a).

Resource fit is a further step and refers to resource complementarity, a precondition for collaboration. The compatibilities and differences across the partners allow for diverse combinations of tangible and intangible resources into unique resource amalgamations that can benefit not only the partners in new ways but, more importantly, externalize the socio-economic innovation value produced to society. In order to assess the complementarity of the resources it is important to recognize the resource types that each partner has the potential to contribute, including tangible resources (money, land, facilities, machinery, supplies, structures, natural resources) and intangible resources (knowledge, capabilities, management practices and skills). As early as 1987, intangibles were considered the most valuable assets of a company (Itami & Roehl, 1987), together with core competencies (Prahalad & Hamel, 1990), with a high potential to increase the value of the company (Sanchez, Chaminade & Olea, 2000) or the nonprofit organization. Galbreath (2002) suggests that the change in what constitutes value, and in the rules of value creation, is one of the most far-reaching changes of the twenty-first century.
Moving from the tradition of tangible assets to intangible and relationship assets constitutes a change in perceiving where the value of organizations is positioned today: "what becomes easily apparent is that the firm's success is ultimately derived from relationships, both internal and external" (Galbreath, 2002, p. 118). An issue interlinked with resource fit is the resource flow across the partners, i.e., the extent to which the exchange of resources is unilateral or bilateral and reciprocal. During the co-creation of value the exchange of resources is required to be reciprocal and multi-directional, involving both tangible and intangible resources. Familiarizing oneself with the partner organizations and their resource availability is a requirement in order to assess the type and complementarity of resources. The directionality of resources will not be easily assessed at the formation phase unless the partners have had previous interactions (Goffman, 1983) or information is available from their previous interactions with other partners. Differences across the partners include misunderstandings of each other's motivations due to unfamiliarity (Long & Arnold, 1995; Kolk, Van Tulder, & Westdijk, 2006; Huxham & Vangen, 2000), often leading to distrust (Rondinelli & London, 2003) that can undermine the formation and implementation processes (Rondinelli & London, 2003). Examining the partners' motivations can provide an early indication of partners' intentions and expected benefits (Seitanidi, 2010), offering some evidence of the transformative intention of the partnership (Seitanidi, Koufopoulos & Palmer, 2010). Due to the time horizon required (Austin, 2000a; Rondinelli & London, 2003) for such integrative and transformational relationships, it is important to include in the formation analysis instances of previous value creation through the production of "first order" benefits (direct transfer of monetary funds) and "second order" benefits (e.g., improved employee morale, increased productivity, better motivated sales force) (Gourville & Rangan, 2004). This process will safeguard a more appropriate fit between the organizations and will enable the generation of synergistic value, which is likely to lead to greater value creation.

Linked to the motives is the mission of each partner organization. A particularly important measure for assessing whether the organizations are compatible is the mission fit. When the mission of each organization is strongly aligned with the partnership (Berger, Cunningham & Drumwright, 2004; Gourville & Rangan, 2004), the relationship has more potential to be important to both organizations. In the case of co-creation of value, organizations might even use the partnership as a way to redefine their mission (Berger, Cunningham & Drumwright, 2004), which will develop a stronger connection with the partnership and with each other. Hence the first step in assessing organizational fit is to examine the mission fit across the partner organizations.

The previous experience of the partners (Hardy, Lawrence & Phillips, 2006), including their unique organizational histories (Barnett, 2007) in developing value relations, is an important determinant of potential partnership fit, indicating the ability of the partners to uncover novel capabilities and improve their prospects for social value creation (Brickson, 2007; Plowman, Baker, Kulkarni, Solansky & Travis, 2007). This will indicate the degree of "structural embeddedness"
(Bryson, Crosby & Middleton Stone, 2006, p. 46), i.e., how positively the partners have interacted in the past (Jones, Hesterly & Borgatti, 1997; Ring & Van de Ven, 1994) in producing value. Therefore, in order for the partners not to rely "on the shadow of the future" (Rondinelli & London, 2003, p. 71), the history of interactions between the two organizations or with previous partners will provide an indication of the partners' relevant value creation experience for integrative or transformative relations (Seitanidi, Koufopoulos & Palmer, 2010). Because organizations exist in turbulent environments, their history is dynamic and reassessment becomes a continual exercise (Selsky, Goes, & Babüroglu, 2007).

One of the central motives for the formation of partnerships for both partners is to gain visibility (Gourville & Rangan, 2004), which can be expressed as reputation (Tully, 2004), public image (Heap, 1998; Rondinelli & London, 2003; Alsop, 2004), and the desire to improve public relations (Milne, Iyer & Gooding-Williams, 1996). Visibility contributes to the social license to operate, access to local communities (Heap, 1998; Greenall & Rovere, 1999) for high-risk industries, credibility (Gourville & Rangan, 2004), and increased potential for funding from the profit sector (Heap, 1998; Seitanidi, 2010). In effect, positive visibility is a highly desired outcome for the partners. Although positive reputation is an intangible resource, we consider visibility a fit measure that is assessed either explicitly or implicitly during the formation phase. Organizations consider the degree of their partners' visibility, and the extent to which it is positive or negative, at a very early stage. In some cases a corporation may consider appropriate a partner with medium or low visibility in order to avoid attracting unnecessary publicity to its early attempts at setting up a partnership, as was the case with the Rio Tinto-Earthwatch partnership (Seitanidi, 2010). On the other hand, negative visibility might create a unique opportunity for the co-creation of value for the partners and for society, as it holds the potential for social innovation and change (Le Ber & Branzei, 2010a; Seitanidi, 2010). It is essential that both partners are comfortable with the potential benefits and costs of their partner's visibility, which will contribute to the organizational fit and the potential for co-creation of value. Finally, Rondinelli and London (2003) refer to the importance of identifying pre-partnership champions, particularly senior executives with a long-term commitment who will play a key role in developing cross-functional teams within and across the partnership. The compatibility of the partnership champions in both organizations is a key determinant of potential partnership fit, which will extend to the people each organization selects as members of its partnership team. Below we summarize the measures of fit discussed above.
INSERT FIGURE 2 HERE
Figure 2: Partnership formation: Partnership fit potential. The figure lists the measures of partnership fit potential:
- Initial articulation of the social problem
- Identify linked interests and resources across partners and for social betterment
- Identify partners' motives and missions
- Identify stakeholders affected by each of the partners
- Identify the history of interactions and visibility fit
- Identify pre-partnership champions

Partnership Implementation: Selection, Design, and Institutionalization for Synergistic Value Partnerships

In order to examine the value creation processes in the implementation phase we employ the micro-stage model of Seitanidi and Crane (2009), which responded to previous calls (Godfrey & Hatch, 2007; Clarke, 2007a; 2007b; Waddock, 1989) for more studies on the processes of interaction required in order to deepen our understanding. The model moves beyond the chronological progression models that define broad stages (Bryson, Crosby & Middleton Stone, 2006; Berger, Cunningham & Drumwright, 2004; Googins & Rochlin, 2000; Wilson & Charlton, 1997; Westley & Vredenburg, 1997; McCann, 1983), providing a process-based dynamic view (Vurron, Dacin & Perrini, 2010) by introducing micro-processes as a way of overcoming implementation difficulties (Pressman & Wildavsky, 1973), demonstrating the quality of partnering and allowing for a deeper understanding of partnership implementation (McCann, 1983). As Godfrey and Hatch (2007, p. 87) remark: "in a world that is increasingly global and pluralistic, progress in our understanding of CSR must include theorizing around the micro-level processes practicing managers engage in when allocating resources toward social initiatives". Following the selection-design-institutionalization stages, the model focuses only on the implementation of partnerships rather than incorporating outcomes as part of the examination of partnership processes (Clarke & Fuller, 2010; Hood, Logsdon & Thompson, 1993; Dalal-Clayton & Bass, 2002). We extend the model of Seitanidi and Crane (2009) by discussing processes that relate to the co-creation of synergistic value. More specifically, we focus on the opportunities for the co-creation of socio-economic value during the implementation phase of partnerships and we discuss how the dynamics between the partners can facilitate these processes. We further indicate the two levels of implementation, organizational and collaborative, responding to the call of Clarke and Fuller (2010) for such a separation.

Partner Selection

Organizations often collect information or engage in preliminary discussions during the formation stage with several potential partners (Seitanidi, 2010). Only in the selection stage do they decide to proceed with a more in-depth collection of information about the organization they wish to partner with. Despite being a common reason for partnership failure, poor partner selection (Holmberg & Cummings, 2009) has received relatively limited attention even in the more advanced strategic alliances literature (Geringer, 1991). Selecting the most appropriate partner is a decision that to a large extent determines the success of the partnership. Having identified during the formation stage the key social issue of interest (Waddock, 1989; Selsky & Parker, 2005), the organizations theoretically make a decision whether to embark on an integrative or transformational collaboration or to evolve their philanthropic or transactional relationship into these more intense strategic alliances.
In the case of a transformational collaboration the partners need to affirm the intent of potential partners to co-create change that will transform their own processes and deliver transformation externally through social innovation that will change for the better the lives of those affected by the social problem. In this case additional criteria need to be met by both organizations, which we discuss below. Simonin (1997) refers to "collaborative know-how", encompassing "knowledge, skills and competences" (Draulans, deMan & Volberda, 2003), a distinctive set of skills that are important for the selection of partners. This "alliance process knowledge" requires skills in searching, negotiating, and terminating early on relations that do not hold the potential for the co-creation of value (Kumar & Nti, 1998). Partner selection might consist of a long process that can take years or a brief process that lasts a few months (Seitanidi, 2010; London & Rondinelli, 2003). Depending on the existence of previous interactions, familiarity and trust between the partners (Selsky & Parker, 2005; 2010; Austin, 2000a), the selection can be either emergent or planned (Seitanidi & Crane, 2009). Inadequate attention to the selection of partners due to lack of detailed analysis is associated with organizational inexperience (Harbison & Pekar, 1998), which can result in short-lived collaborations. The highest potential for capturing partnership benefits is associated with long-term collaborations, which balance the initial costs and time required during the partner selection process (Pangarkar, 2003). Developing partnership-specific criteria facilitates the process of assessing potential partners; selection criteria may include: industry of interest, scope of operations, cost effectiveness (investment required vs. generation of potential value), time-scales of operation, personal affiliations, and availability and type of resources (Holmberg & Cummings, 2009; Seitanidi & Crane, 2009; Seitanidi, 2010). The development of selection criteria will make visible the complementarity potential and point towards a strategic approach (Holmberg & Cummings, 2009) to the creation of value. When the aim is to co-create synergistic value, the more compatible the criteria identified by both partners, the higher the potential for operational complementarity. A transformational collaboration would require additional criteria, such as identifying the operational area for process changes and identifying the domain for innovation. Despite partnerships being presented as mechanisms for the mitigation of risk (Selsky & Parker, 2005; Tully, 2004; Warner & Sullivan, 2004; Wymer & Samu, 2003; Bendell, 2000b; Heap, 2000; Andrioff & Waddock, 2002; Heap, 1998), and the important role of risk, coupled with social value creation, in enabling the momentum for partnership success (Le Ber & Branzei, 2010b), models of partnership implementation do not usually incorporate risk assessment (for exceptions see Seitanidi, 2010; Seitanidi & Crane, 2009; Le Ber & Branzei, 2010b; Andrioff, 2000). Risk assessment would be a necessary micro-process, particularly in the case of high negative visibility of one of the partners, in order to assess the potential value loss either due to exposure to public criticism or due to early termination of the partnership as a result of failure to adjust their value creation frames (Le Ber & Branzei, 2010c).
Although it is the nonprofit organization's credibility that may be more at stake in forging a partnership with a business, both are exposed to negative affiliation value (Utting, 2005). We propose a formal and an informal risk assessment process for both partners, elaborated through internal and external processes. The formal internal risk assessment process aims to collect interaction intelligence across the potential partner organizations by requesting material such as internal reports, both process and output reports, also referred to as process-centric and plan-centric (Clarke & Fuller, 2010), press releases, and external assessments of previous collaborative projects (Utting, 2005). The formal external process aims to collect intelligence from previous partners in order to develop an awareness of any formal incidents that took place or any serious formal concerns that may be voiced by previous partner organizations. Moving to the informal risk assessment process, we follow the suggestions of Seitanidi and Crane (2009), which include an internal process consisting of open dialogue among the constituents of each partner organization (in the case of the nonprofit organization: employees, trustees, members of the board, beneficiaries) and informal meetings between the partners, particularly the potential members of the partnership teams. The informal external process consists of open dialogue of each partner with its peer organizations within its own sector and across other sectors in order to collect intelligence such as positive or negative 'word of mouth' and anecdotal evidence related to the potential partner. The above processes allow for accountable decision-making mechanisms through the voicing of internal and external concerns (Hamman & Acutt, 2003), identifying sources of potential value loss, and developing an appreciation of the types of resources available from partners and the outcomes that were previously achieved; hence each partner would be in a much better position to develop a strategy on how to manage potential problems during the value creation processes (London & Rondinelli, 2003), both informally and formally (Seitanidi & Crane, 2009). Figure 3 offers an overview of the process of partnership selection. We incorporate feedback loops (Clarke & Fuller, 2010) to demonstrate the role of the risk assessment in informing the final options of potential partners. The partnership selection consists predominantly of micro-processes that take place at the organizational level of each partner. Furthermore, interactions across multiple stakeholder groups are encouraged during partnership selection as a way of managing power distribution, thereby asserting that collaboration can be a different model of political behaviour rather than being devoid of political dynamics (Gray, 1989). It is only in the next stage (partnership design) that we identify two levels of analysis: the organizational level and the 'coalition framing' level, as referred to by Croteau and Hicks (2003), or, as referred to by others, the 'inter-organizational collective' (Astley, 1984) or collaborative level (Huxham, 1993; Clarke & Fuller, 2010).
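To make the logic of criteria-based screening and risk assessment more concrete, the following is a minimal illustrative sketch, in Python, of how an assessing team might combine weighted selection criteria with penalties for concerns surfaced through the formal and informal risk assessment processes described above. The criteria names, weights, scores, and penalty value are hypothetical and offered only as one possible operationalization; they are not drawn from, or prescribed by, the frameworks cited above.

```python
# Purely illustrative sketch: a hypothetical weighted screening of candidate
# partners against selection criteria, with risk-assessment concerns reducing
# the score. All criteria, weights, and ratings below are invented for
# illustration and are not part of the cited partnership frameworks.

from dataclasses import dataclass, field

@dataclass
class Candidate:
    name: str
    # Criterion ratings on a 1-5 scale (hypothetical ratings by the assessing team).
    scores: dict
    # Concerns surfaced by the formal/informal risk assessment (e.g., negative
    # word of mouth from previous partners); each unresolved concern is penalized.
    risk_concerns: list = field(default_factory=list)

# Hypothetical selection criteria and weights (weights sum to 1.0 here).
WEIGHTS = {
    "mission_fit": 0.30,
    "resource_complementarity": 0.25,
    "operational_scope": 0.15,
    "cost_effectiveness": 0.15,
    "transformational_intent": 0.15,
}
RISK_PENALTY = 0.5  # points deducted per unresolved risk concern

def screen(candidate: Candidate) -> float:
    """Return a weighted fit score (0-5) net of risk-assessment penalties."""
    weighted = sum(WEIGHTS[c] * candidate.scores.get(c, 0) for c in WEIGHTS)
    return max(0.0, weighted - RISK_PENALTY * len(candidate.risk_concerns))

if __name__ == "__main__":
    candidates = [
        Candidate("NPO A", {"mission_fit": 5, "resource_complementarity": 4,
                            "operational_scope": 3, "cost_effectiveness": 4,
                            "transformational_intent": 5}),
        Candidate("NPO B", {"mission_fit": 4, "resource_complementarity": 5,
                            "operational_scope": 4, "cost_effectiveness": 3,
                            "transformational_intent": 2},
                  risk_concerns=["negative word of mouth from a previous partner"]),
    ]
    # Rank candidates by net fit score, highest first.
    for c in sorted(candidates, key=screen, reverse=True):
        print(f"{c.name}: {screen(c):.2f}")
```

In practice, of course, the criteria and their relative weights would be partnership-specific and negotiated by each organization, as discussed above; the sketch only illustrates how selection criteria and risk intelligence can feed a single screening decision.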
INSERT FIGURE 3 HERE
Figure 3: Partnership selection for co-creation of value (adapted from Seitanidi & Crane, 2009). The figure depicts: assessing the different NPO or BUS options; developing partnership criteria; assessing operational complementarity; assessing co-creation potential and transformational intent; the risk assessment processes, formal (internal: collecting interaction intelligence across partners; external: collecting intelligence from previous partners) and informal (internal: open dialogue among employees and informal meetings between NPO and BUS employees; external: open dialogue among similar organizations within the sector and collecting intelligence from organizations outside the sector); assessing potential sources of value loss; and deciding the associational form, i.e., an integrative/transformational partnership.

Partnership Design & Operations

Partnership design and operations encompass formal processes that influence partnership implementation and are considered necessary to ensure desirable behavior (Geringer & Hebert, 1989) in order to arrive at the anticipated outcomes. The literature has pointed to several design parameters and operating actions that contribute to partnering effectiveness. In social partnerships, Austin, Leonard, Reficco and Wei-Skillern (2006) suggested that social value is created by missions and design. The partnership design includes experimentation with the procedural and substantive partnership issues (Gray, 1989) by setting objectives and structural specifications (Glasbergen, 2007; Arya & Salk, 2006; Bryson, Crosby & Middleton Stone, 2006; Andreasen, 1996; Halal, 2001; Austin, 2000b; Googins & Rochlin, 2000), including rules and regulations (Das & Teng, 1998; Gray, 1989); deciding upon the commitment of resources (Bryson, Crosby & Middleton Stone, 2006; Berger, Cunningham & Drumwright, 2004; Austin, 2000a; Googins & Rochlin, 2000; Waddock, 1988); establishing leadership positions (Austin, 2000a; Waddock, 1986); deciding upon the organizational structures of the partnership (Berger, Cunningham & Drumwright, 2004; McCann, 1983), including decisions regarding the teams of each partner; drafting a Memorandum of Understanding (MoU); and agreeing on the partnership management (Seitanidi & Crane, 2009; Austin & Reavis, 2002). The above processes add structural and purpose congruency (Andreasen, 1996) to the partnership and take place both at the organizational and at the collective level. Each organization internally debates its own priorities and interests and considers its own structures that will generate value at the organizational level. However, partners also discuss, debate and negotiate collective-level processes and structures (Clarke & Fuller, 2010; Bowen, Newenham-Kahindi, & Herremans, 2010; Bryson, Crosby & Middleton Stone, 2006) and co-design mechanisms (Seitanidi, 2008) that will collectively add value to the partnership. This is the first instance in which they embark on the collective implementation process, which requires co-ordination mechanisms (Bryson, Crosby & Middleton Stone, 2006; Selsky & Parker, 2005; Brinkerhoff, 2002; Milne, Iyer & Gooding-Williams, 1996).
The decisions gradually reach operationalization and structures are formed, passing through several adaptations due to internal or external factors (Austin, 2000a; Gray, 1989) that lead to the stabilization of partnership content, processes, and structures (Seitanidi & Crane, 2009) until the next cycle of iteration. The time required for the operationalization of processes and structures will depend in part on the resource complementarity between the partners; in the case of previous interactions across the partners, experimentation and adaptation might be incorporated in one step (Seitanidi & Crane, 2009; Seitanidi, 2010). Recently the literature on social partnerships has presented factors that determine the social change potential within the partnership relationship. Seitanidi (2008) suggested that in order for a partnership to increase its social change potential the partners are required to embrace their adaptive responsibilities, allowing them to move away from their limiting pre-defined roles and transcend a single dimension of responsibility in order to offer solutions to problems that require fundamental change. The above confirms our assertion that the company's CSR and its perception of its responsibilities need to have evolved in order for it to be in a position to co-produce synergistic value; similarly, Le Ber and Branzei (2010b) proposed that deliberate role recalibration can tighten the coupling between social value creation and risk. As such, the above research stresses the need for change within the relationship in order for the organizations to contribute to the potential for change outside the relationship.

The above processes constitute forms of formal control mechanisms in collaboration (Das & Teng, 1998). Informal measures of control, such as trust-based governance, may play a more important role in nonprofit-business partnerships (Rivera-Santos & Rufin, 2010), including managing alliance culture, which requires blending and harmonizing two different organizational cultures (Wilkof, Brown & Selsky, 1995). Other key processes include: charismatic leadership that can inspire employees to participate in the partnership (Bhattacharya, Sen & Korschun, 2008; Berger, Cunningham & Drumwright, 2004; Andreasen, 1996) and facilitate an emotional connection with the social cause (Austin, 2000a); forms of communication that enable the formation of trust (Austin, 2000a; Googins & Rochlin, 2000), mutual respect, openness and constructive criticism towards both external and internal audiences (Austin, 2000a); continual learning (Bowen, Newenham-Kahindi, & Herremans, 2010; Senge, Dow & Neath, 2006; London & Rondinelli, 2003; Austin, 2000a); managing conflict (Seitanidi, 2010; Covey & Brown, 2001; Gray, 1989); and encouraging open dialogue (Elkington & Fennell, 1998). The above informal processes determine the alliance viability (Arya & Salk, 2006) and contribute to the co-creation of value. Although the formal measures are likely to be introduced at an early stage and play an important role in developing familiarity across the organizations, the informal measures are more likely to be effective in tensions around indeterminacy, vagueness, balancing the interpretations between the partners (Ben, 2007; Orlitzky, Schmidt & Rynes, 2003), and uncertainty in the process of partnerships (Waddock, 1991), by exerting symbolic power that can influence individual organizations and industry macroculture (Harris & Crane, 2002).
The above informal measures are enablers of value, contributing to the creation and capture of value as it emerges, and hence play a role in preventing value erosion; they also align value more closely with the intangible resources, e.g., reputation, trust, relational capital, learning, knowledge, joint problem-solving, communication, coordination, transparency, accountability, and conflict resolution, contributing to the co-creation of value. As such, the above constitute processes that produce benefits for both partners and society and generate interaction value. In addition, the nonprofit sector has multiple bottom lines and accountabilities towards its own stakeholders (Anheier & Hawkes, 2008; Mowjee, 2001; Commins, 1997; Edwards & Hulme, 1995) that are required to be respected by the profit sector during the process of engagement. Both partners are required to move their sense of responsibility from reactive and pro-active to adaptive in order to facilitate transformational interactions (Seitanidi, 2008). Such process adaptations take place at the organizational level of each partner, during the interaction of the partners, and at the collaborative level (Clarke & Fuller, 2010).

Figure 4 below summarizes the partnership design and operations that set up the structures and processes, both formal and informal, that will generate value, and that identify and mobilize the resources across the partners in order to recognize the resource complementarities that will determine the co-creation of value. The partners experiment with the design both individually within each organization and collectively. This is the first instance in which partners identify the value distance between their resources, goals, perceptions, and capabilities. In the next step the partners will embark on value frame fusion in order to reconcile iteratively their divergent value creation frames (Le Ber & Branzei, 2010c) and co-create synergistic value. The partnership design may be the end point for some partnerships if the partners realize that their value distance is too great. The double arrows in Figure 4 demonstrate feedback loops across processes that lead to redesign and adaptations.

INSERT FIGURE 4 HERE
Figure 4: Partnership design and operations. The figure depicts: experimentation (setting up structures and processes for co-creation of value, both organizational and collective); adaptations (iterations of processes and structures, organizational and collective); operationalization (gradual stabilisation of processes and structures); and an exit strategy.

Partnership Institutionalization

A partnership has reached institutionalization when its structures, processes and programmes are accepted by the partner organizations (Seitanidi & Crane, 2009) and their constituents and are embedded within the existing strategy, values, structures, and administrative systems of the profit and nonprofit organizations. Following the gradual stabilization of structures and processes (partnership operationalization), organizational and personal familiarization leads to the gradual institutionalization of the partnership relationship within both organizations. The level of institutionalization can be tested in two ways: (1) the extent to which the partnership remains intact regardless of crisis situations it may face, and (2) whether the relationship sustains changes of key people in the partnership (e.g., the departure of the partnership manager) (Seitanidi & Crane, 2009).
Nonprofit-business partnerships represent contradictory value frames (Le Ber & Branzei, 2010b; Yaziji & Doh, 2009; Bryson, Crosby & Middleton Stone, 2006; Selsky & Parker, 2005; Teegen, Doh & Vachani, 2004; Austin, 2000; Gray, 1989; Waddock, 1988) due to the different sectors represented and their associated beliefs, motives, and logics. If the partners are to co-create socio-economic value, they are required to adjust their value frames to reach frame convergence (Noy, 2009) or frame fusion (Le Ber & Branzei, 2010b). Frame fusion is defined as "the construction of a new prognostic 1 frame that motivates and disciplines partners' cross sector interactions while preserving their distinct contribution to value creation", preserving the identity and differences of each partner (Le Ber & Branzei, 2010b, p. 164). Achieving value frame fusion (Le Ber & Branzei, 2010b) not only assists in overcoming the partners' differences but also allows for transformation of the "current means into co-created goals with others who commit to building a possible future" (Dew, Read, Sarasvathy, & Wiltbank, 2008, p. 983). Anticipating each partner's frame and intentionally adjusting one's own (Le Ber & Branzei, 2010c) consists of iterative processes, taking place in and as a result of interactions (Kaplan, 2008), that gradually allow for micro-adjustments leading to an alignment that increases the potential for identifying complementarities. The above process takes place through each partner perceiving the strategic direction of the partner's decisions (Kaplan, 2008), observing organizational change processes (Balogun & Johnson, 2004), participating in multiplayer interaction (Croteau & Hicks, 2003; Kaplan & Murray, 2008), and monitoring and interpreting each other's frames (Le Ber & Branzei, 2010c). Partners' conceptions of the environment and perceptions of their own role in the partnership can lead to variations in commitment (Crane, 1998). Hence value frame fusion plays an important role in the alignment of perceptions and the creation of a mutual language by developing a vocabulary of meaning (Crane, 1998). We position the co-creation of synergistic value within partnership institutionalization, as value frame fusion is likely to take place within an advanced relationship stage. Stafford, Polonsky and Hartman (2000, p. 122) provide evidence of how partners align their socio-economic value frames in order to co-create "entrepreneurial innovations that address environmental problems and result in operational efficiencies, new technologies and marketable 'green' products". They demonstrate that in some cases partners may consciously decide to embark on a transformational collaboration (Stafford & Hartman, 2001); however, we assume that in most cases the social change or social innovation potential emerges within the process (London & Rondinelli, 2003; Austin, 2000a). If frame fusion is not successful, then it is likely that frame divergence will shape the degree to which the organization will pursue its strategy, if at all, and the degree to which change will be created (Kaplan, 2008). In fact, "it is the interactions of individuals in the form of framing contests" that shape the outcomes (Kaplan, 2008, p. 744).

1 Diagnostic frames are encoders of individuals' experiences that assist in the assessment of a problem, and prognostic frames are the use of those experiences in order to assess a possible solution (Le Ber & Branzei, 2010c; Kaplan, 2008).
The plurality of frames and the existence of conflict (Glynn, 2000; Gray, 1989) within a partnership allow for divergent frames that can constitute opportunities for co-creation. Particularly novel tasks (Seitanidi, 2010; Le Ber & Branzei, 2010c; Heap, 2000) allow for balancing the potential bias associated with power dynamics (Utting, 2005; Tully, 2004; Millar, Choi & Chen, 2004; Hamman & Acutt, 2003; Crane, 2000; Bendell & Lake, 2000). Adaptations are essential for survival (Kaplan, 2008) and present opportunities at the individual, organizational and sectoral levels (Seitanidi & Lindgreen, 2010) to unlearn and (re)learn how to frame and act collectively in order to develop a synergistic framework, essential for providing solutions to social problems. The value capture will depend on the interlinked interests of the partners, which will influence the level of institutionalization of the co-creation of value (Le Ber & Branzei, 2010a). After the frame fusion and co-creation of value, the institutionalization process enters a point of emerged collective meaning between the partner organizations, which requires a re-institutionalization of partnership processes, structures and programs after each cycle of co-creation of value. When the partners have captured some value, either unilaterally or jointly (Le Ber & Branzei, 2010a; Makadok, 2001), a necessary prerequisite for the continuous co-creation of value, they are ready for the next iteration of co-creation of value. Innovation value is what reinvigorates and sustains the institutionalization of a partnership.

Despite improvements in procedural aspects of partnerships, including independent monitoring of partnership initiatives (Utting, 2005) and developing informal risk assessment processes (Seitanidi, 2010), partnerships still face concerns. Reed and Reed (2009) refer to: the accountability of partnerships, particularly to the beneficiaries; the appropriateness of the standards developed and the effectiveness and enforceability of the mechanisms they establish; and their role as mechanisms for greenwashing and for legitimizing self-regulation in order to keep state regulation at bay. Furthermore, the power asymmetries associated with NPO and BUS partners (Seitanidi & Ryan, 2007) and the exercise of control by corporate partners (Reed & Reed, 2009; Le Ber & Branzei, 2010a; Utting, 2005) in the process of interaction have fuelled concern from NPOs regarding the loss of control in decision making (Brown, 1991). Hence shared (Austin, 2000a; Ashman, 2000) and consensus (Elbers, 2004) decision making and co-regulation (Utting, 2005) have been suggested in order to balance the power dynamics across the partners. Decentralized control of the partnership implementation, by allowing multiple stakeholders to voice concerns within the partnership implementation process and incorporating feedback loops (Clarke & Fuller, 2010), can address the previous criticisms. As such, decentralized social accountability check-points would need to be incorporated in the implementation of partnerships in order to increase societal determination by inviting suggestions from the ground and facilitating answerability, enforceability, and universality (Newell, 2002; Utting, 2005).
In effect, the co-creation of socio-economic value would be the result of a highly engaged and decentralized community of voices and would also allow for the diffusion of outcomes, pointing towards a participative, network perspective (Collier & Esteban, 1999; Heuer, 2011), including engagement with fringe stakeholders as a means to achieve creative destruction and innovation for the partners and society (Gray, 1989; Murphy & Arenas, 2010). The above expands the prioritization of a few stakeholders to the engagement of many stakeholders associated directly or indirectly with the partners, pointing towards what Gray (1989) termed "global interdependence". Hence, while in the earlier philanthropic, transactional and, to a lesser extent, integrative stages partnerships concentrate on the nonprofit-business dyad, the more we move towards the transformational stage the more the partnership requires the consideration, involvement, and prioritization of a plurality of stakeholders, suggesting a network perspective of stakeholders (Collier & Esteban, 1999; Rowley, 1997; Donaldson & Preston, 1995; Nohria, 1992; Granovetter, 1985). The President and CEO of Starbucks testifies to the efforts of business to broaden the engagement with stakeholders (in Austin, Gutiérrez, Ogliastri, & Reficco, 2007, p. 28): "[Our stakeholders] include our partners (employees), customers, coffee growers, and the larger community". As Austin, Gutiérrez, Ogliastri, and Reficco (2007, p. 28) remark, other companies include in their broadening definition of stakeholders "representatives of nonprofits, workers, and grassroots associations in their governance bodies, or create ad hoc bodies for them, such as advisory boards or social councils". The more inclusive the engagement, the higher the potential for co-creation of value on multiple levels, achieving plurality of frames and decreasing the accountability deficit of partnerships.

As social betterment becomes more central in the integrative and transformational stages of collaboration, the role of engagement with multiple stakeholders becomes a key component of the co-creation process and of re-shaping the dialogue (Cornelious & Wallace, 2010; Fiol, Pratt & O'Connor, 2009; Barrett, Austin & McCarthy, 2002; Israel, Schulz, Parker, & Becker, 1998) by contributing diverse voices to the value frame fusion during the implementation process (Le Ber & Branzei, 2010c). Multi-stakeholder engagement during the partnership is the intentional maximization of interaction with diverse stakeholder groups, including latent and fringe groups (Le Ber & Branzei, 2010a; Murphy & Arenas, 2010; Mitchell, Agle & Wood, 1997), during the partnership implementation in order to increase the potential for value creation and allow for value capture on multiple levels. The co-creation process that aims to deliver social betterment (more at the transformational than the integrative stage) will assume a much larger and more diverse constituency. Embedding the partnership institutionalization across interested communities introduces a new layer of partnership institutionalization outside the dyad of the profit and nonprofit organizations. Figure 5 below presents the partnership institutionalization process based on the above discussion of the literature. The institutionalization process commences by embedding the partnership relationship within each organization.
After the partners reach value frame fusion, a re-institutionalization of partnership processes, structures and programmes between the partners is required, based on the newly emerged shared perceptions. The inner circle of process change demonstrates the iterative processes of internal value creation that lead to the development of new capabilities and skills, passing through the frame fusion, the identification of complementarities, and the value perceptions of each partner. The external circle demonstrates the institutionalization of stakeholder and beneficiary voice in the partnership process, appearing as co-creation value cycle 1. Partnerships have the potential to deliver several cycles of value creation depending on the quality of the processes, the evolution of the partners' interests and capabilities, and changes in the environment. Value renewal is a prerequisite for the co-creation and capture of value. Partnerships may end unexpectedly, before value capture by the partners or beneficiaries, or after one value creation cycle, due to their dynamic character or due to external changes. The above testify that the relationship process is the source of value for both partners and society.

INSERT FIGURE 5 HERE
Figure 5: Partnership institutionalization. The figure depicts: personal familiarization (developing personal relations and familiarization); relationship mastering (managing crises, accepting differences as a source of value); identifying complementarities (use of generic and distinctive competences, bilateral and reciprocal exchange of resources, linking interests, aligning value perceptions of benefits and costs); partner frames A and B passing through frame fusion (frame convergence while preserving differences); organizational and collective adaptations; the co-creation of synergistic socio-economic value, with social innovation as the outcome; partner value perceptions and partner value capture A and B within co-creation value cycle 1; stakeholder groups 1-3 as value beneficiaries; process change; and an exit strategy.

London and Rondinelli (2003) employ the HBS partnership case study by Austin and Reavis (2002) of Starbucks and Conservation International (CI) in order to describe the partnership phases: the formation and the first meeting of the partners; the negotiation period, which lasted four months; and the partnership design, i.e., setting up core partnership operations, including training provided to local growers in organic farming methods by CI and the provision of organic seeds and fertilizers to farmers at nominal prices, giving them access to high-quality resources made possible by the funding provided by Starbucks, as well as setting up quality control mechanisms to sustain the coffee quality required by Starbucks. The outcomes of the partnership were: a 40% average increase in the farmers' earnings, 100% growth in the cooperatives' international coffee sales, and the provision of $200,000 to farmers in the form of loans through the local cooperatives. We use their description to unpack and describe below the co-creation process in partnerships that aim to deliver synergistic value.
During the formation, selection and early design of the partnership the partners originally have only information about each other, i.e., who Starbucks and CI are, the industry and product/service proposition, and their interest in developing a collaboration with an organization from a different economic sector. The basic information about the key product/service proposition gradually increases, first within the members of the partnership team and later diffusing to other departments of the organization. Due to the intensification of the interactions, the information is gradually transformed into knowledge, i.e., the meetings and intensification of interactions facilitate the transformation of information to knowledge (e.g., why Starbucks is interested in CI, how they are planning to work with a partner, under what conditions, what is unique about the partner's product/service proposition, what are the constituent elements of the partner's identity/product/service). The explicit knowledge about each other gradually increases and is combined with the increased familiarity, due to the interactions, that incorporates tacit knowledge about each other (e.g., how the organization works, the mechanisms and processes they have in place, the culture of the organization). When tacit knowledge meets positive informal conditions that lock in the emotional involvement of the partners within the interactions, a higher level of knowledge is exchanged with enthusiasm and pride and with the explicit aim of sharing the unique resources of the organization. As the partnership progresses, the knowledge about the partner organization, its resources and its use of resources becomes deeper, and for the members of the partnership teams the knowledge about their partner turns into a capability, i.e., at this stage each partner is able to apply the knowledge in the context of its own organization. Having arrived at a deep mutual knowledge of each other's organizations and the development of new capabilities, the partners are able to speak the "same language" and embark on the co-creation process, which may involve the creation of new products and services and the co-creation of new skills that they will be able to apply in the domain of common interest where the collaborative strategy takes place, resulting in change or social innovation. Figure 6 below demonstrates the process we describe above: how the sector/organization-based information turns into concrete knowledge and then into a capability that can be applied in the context of the partner organization and, due to the multiple uses of such new capabilities, the partners are able to develop new products/services that constitute social innovation or change as they contribute positively to society or minimize previous harm.

INSERT FIGURE 6 HERE
Figure 6: Information to knowledge, to capability, to change and innovation (note: the change cloud is connected to the nonprofit capability with a standard shape connector that does not denote any particular meaning). The figure depicts NPO and BUS information turning into NPO and BUS knowledge, which in turn becomes NPO and BUS capabilities that combine into a new capability leading to change and social innovation.

The partnership implementation is the value creation engine of cross sector interactions, where the internal and external change and innovation can be either planned or emergent.
The co-creation process not only requires the partners' interests to be linked but also requires them to be embedded in the local communities of beneficiaries and stakeholders in order to incorporate perceptions of value beyond the partnership dyad and hence facilitate value capture and diffusion on different levels. In the next section we discuss the evaluation of the partnership implementation before proceeding to the partnership outcomes section.

Evaluation of Partnership Implementation

Process outcomes, in contrast to the programmatic outcomes that we discuss in the next section, concentrate on how to improve the efficiency and effectiveness of the partnership implementation process (Brinkerhoff, 2002). Continuous assessment during the implementation is an important part of the partnership process as it can improve service delivery, enhance efficiency (Brinkerhoff, 2002), assist in making tactical decisions (Schonberger, 1996), propose adjustments in the process, and, importantly, "explain what happened and why" (Sullivan & Skelcher, 2003). It can also encourage the involvement of beneficiaries and stakeholder groups in order to include their voices in the process (Sullivan & Skelcher, 2003). Furthermore, process assessment can provide indications of how to strengthen long-term partnership value creation (Kaplan & Norton, 1992) and in effect avoid delays in achieving impact (Weiss, Miller Anderson & Lasker, 2002). Difficulties associated with setting, monitoring, and assessing process outcomes include measurement (e.g., articulating the level of familiarization between members of the partnership, monitoring the evolution of relations, and assessing the level of partnership institutionalization) (Shah & Singh, 2001) and attribution, i.e., "how can we know that this particular process or institutional arrangement causes this particular outcome" (Brinkerhoff, 2002, p. 216). Hence, evaluation frameworks for the implementation of partnerships are relatively scarce (El Ansari & Weiss, 2005; Dowling, Powell & Glendinning, 2004; El Ansari, Phillips & Hammick, 2001). Frameworks exist for the evaluation of the performance of partnerships in general (Huxham & Vangen, 2000; Audit Commission, 1998; Cropper, 1996), for the assessment of public sector networks (Provan & Milward, 2001), for urban regeneration (Rendon, Gans & Calleroz, 1998), and, more frequently, in the health field (Markwell, Watson, Speller, Platt & Younger, 2003; Hardy, Hudson & Waddington, 2000; Watson, Speller, Markwell & Platt, 2000); no framework, to our knowledge, concentrates on the nonprofit-business dyad. Brinkerhoff (2002, p. 216) suggested that "we need to examine partnerships both as means and as end in itself". Provan and Milward (2001) proposed a framework for the evaluation of public sector networks at the level of (1) the community, (2) the network (e.g., number of partners, number of connections between organizations, range of services provided), and (3) the organization/participant. Brinkerhoff (2002) criticized the above framework, suggesting that it neither examines the quality of the relationship among the partners nor offers suggestions that can improve the outcomes.
Criteria for relationship evaluation in the health field include: "willingness to share ideas and resolve conflict, improve access to resources, shared responsibility for decisions and implementation, achievement of mutual and individual goals, shared accountability of outcomes, satisfaction with relationships between organizations, and cost effectiveness" (Leonard, 1998, p. 5). Interestingly, the Ford Foundation Urban Partnership Program, in the education field, provided an example of partnership relationship assessment (Rendon, Gans & Calleroz, 1998) in which the partner stakeholders agreed on their own indicators. Brinkerhoff's (2002) assessment approach addresses three aims: "1/ improve the partnership practice in the context of programme implementation; 2/ refine and test hypothesis regarding the contribution of the partnership in the partnership performance and outcomes and 3/ suggest lessons for future partnership work in order to maximise its potential to enhance outcomes" (Brinkerhoff, 2002, p. 216). Her framework, incorporating qualitative and quantitative indicators, emphasizes relationship outcomes and addresses the evaluation challenges of integrating both process and institutional arrangements in performance measurement, allowing for continuous assessment and encouraging dialogue and a shared understanding.

With regard to the synergistic results of partnerships, which are usually not well articulated and measured (Brinkerhoff, 2002; Dobbs, 1999), an interesting quantitative study on health partnerships (Weiss, Miller Anderson & Lasker, 2002) suggested that assessing the level of synergy in partnerships provides a useful way to determine the degree to which the implementation process is effective prior to measuring the impacts of partnerships. They conceptualized synergy at the partnership level as "combining the perspectives, knowledge and skills of diverse partners in a way that enables a partnership to (1) think in new and better ways about how it can achieve its goals; (2) plan more comprehensive, integrated programs; and (3) strengthen its relationship to the broader community" (Weiss, Miller Anderson & Lasker, 2002, p. 684). The study examined the following dimensions of partnership functioning hypothesized to be related to partnership synergy: leadership, administration and management, efficiency, nonfinancial resources, partner involvement challenges, and community-related challenges. The findings demonstrate that partnership synergy is closely associated with effective leadership and partnership efficiency. Regarding leadership, high levels of synergy were associated with "facilitating productive interactions among the partners by bridging diverse cultures, sharing power, facilitating open dialogue, and revealing and challenging assumptions that limit thinking and action" (Weiss, Miller Anderson & Lasker, 2002, p. 693). These findings are in agreement with previous research suggesting that leaders who are able to understand the differences across sectors and perspectives, empower partners, and act as boundary spanners are important for partnerships (Alter & Hage, 1993; Wolff, 2001; Weiner & Alexander, 1998). Furthermore, partnership efficiency, i.e., the degree to which partnership optimization is achieved through the partners' time, financial, and in-kind resources, also had a significant effect on synergy. The above are some of the factors that influence the implementation and can potentially be set up by design (Austin & Reavis, 2002).
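To illustrate how a synergy-oriented process assessment of this kind could be tabulated in practice, the following is a minimal, purely hypothetical sketch in Python. It averages Likert-style survey items into dimension scores (here only leadership and synergy, two of the dimensions named above) and computes a simple correlation between them. The respondent data, the scoring scale, and the use of a plain Pearson correlation are illustrative assumptions, not the statistical procedure reported by Weiss, Miller Anderson and Lasker (2002).

```python
# Illustrative sketch only: tabulating hypothetical partnership-functioning
# survey responses into dimension scores and relating a dimension (leadership)
# to a synergy score. Data and method are invented for illustration; other
# dimensions from the study (administration and management, efficiency,
# nonfinancial resources, partner involvement challenges, community-related
# challenges) could be handled the same way.

from statistics import mean
from math import sqrt

def dimension_score(item_responses):
    """Average the 1-5 item responses belonging to one dimension."""
    return mean(item_responses)

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

if __name__ == "__main__":
    # Hypothetical item responses (1-5) per partnership for two dimensions.
    partnerships = [
        {"leadership": [5, 4, 5], "synergy": [5, 4, 4]},
        {"leadership": [3, 3, 2], "synergy": [2, 3, 3]},
        {"leadership": [4, 5, 4], "synergy": [4, 5, 4]},
    ]
    leadership = [dimension_score(p["leadership"]) for p in partnerships]
    synergy = [dimension_score(p["synergy"]) for p in partnerships]
    print(f"leadership-synergy correlation: {pearson(leadership, synergy):.2f}")
```

The point of the sketch is simply that dimension-level scores make the process assessment comparable across partnerships and over time; any real assessment would use the validated instruments and analysis reported in the studies cited above.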
One of the most detailed assessment tools, by Markwell, Watson, Speller, Platt and Younger (2003), looks at six major areas of implementation: leadership, organization, strategy, learning, resources and programs; each element is divided into several sections, providing a well elaborated tool. It assesses issues such as: the level of representation of each partner within the partnership relationship; the extent to which the partnership builds on each partner's individual way of working; whether the partnership has in place a community involvement strategy; whether multidisciplinary training in partnership skills is considered; and whether partners have been able to manage conflict, among other issues. All of the above questions aim at addressing the process of co-creation of value and allow for redesigning the partnership operations in a more efficient and effective way. In the above section we looked at partnership processes, as constructive exchanges (King, 2007), that have the potential to be critically important in providing solutions to social problems. Partnerships are laboratories of social change as they have the ability to internalize externalities and transform them into solutions, innovation, and thereby social change. The basic assumption of most research in social partnerships is that organizations interact for private gain and to improve their own welfare (King, 2007). We contend that corporations and nonprofit organizations interact for private and societal gain, and that this interaction improves the welfare of both parties and society. In the next section we move to the discussion of partnership outcomes, the different levels of value capture, and methods for the evaluation of outcomes.

SYNERGISTIC VALUE OUTCOMES: LOCI, EXCELLENTIA AND DISTINCTUS OF VALUE

We are experiencing an unprecedented proliferation of "accelerated interdependence" (Austin, 2000b, p. 69) across the public, profit, and nonprofit sectors due to the double devolution of functions (from central governments to local authorities) and of sectors (from the public to the private and nonprofit) (Austin, 2000b). The increasing fiscal needs of the public and nonprofit sectors contribute to the diffusion of responsibilities, promoting cross sector collaboration as an effective and efficient approach to manage assets and provide solutions to social problems (Austin, 2000b). However, the intense need for resources can capture the critical role of the state and, in some cases, of the nonprofit sector (Seitanidi, 2010; Bendell, 2000a,b; Raftopoulos, 2000; Mitchell, 1998; Ndegwa, 1996). Hence, criticism of partnerships (Reed & Reed, 2009; Biermann, Chan, Mert & Pattberg, 2007; Hartwich, Gonzalez & Vieira, 2005) and of the outcomes achieved (Austin, 2010; Seitanidi, 2010; Brinkerhoff, 2007) is not a surprise, but rather a call for a paradigm change. The examination of nonprofit-business partnership outcomes (Selsky & Parker, 2005) is an evolving area in practice and research, particularly when the focus is not only on the benefits for the partners but also for society (Austin, 2010; Seitanidi & Lindgreen, 2010; Margolis & Walsh, 2003; Austin, 2000). Although what makes collaboration possible is "the need and the potential" for benefit (Wood & Gray, 1991, p. 161), given that social partnerships aim to address social issues (Waddock, 1988), the definition of what constitutes positive partnership outcomes "should encompass the social value generated by the collaboration" (Austin, 2000b, p. 77) on different levels.
The shift in the literature from social partnerships (Waddock, 1988) to strategic partnerships (Warner & Sullivan, 2004; Birch, 2003; Elkington & Fennell, 2000; Andrioff, 2000) seems to be turning full circle as newfound significance is assigned to collective impact (Kania & Kramer, 2010), social value measurement (Mulgan, 2010), and the very recent creation of a new class of assets, named ‘impact investments’ by JP Morgan and the Rockefeller Foundation, that aim to “create positive impact beyond the financial return” (O’Donohoe, Leijonhufvud, Saltuk, Bugg-Levine, & Brandenburg, 2010, p. 5): “… investors rejecting the notion that they face a binary choice between investing for maximum risk-adjusted returns or donating for social purpose, the impact investment market is now at a significant turning point as it enters the mainstream. … Impact investments are investments intended to create positive impact beyond financial return. As such, they require the management of social and environmental performance (for which early industry standards are gaining traction among pioneering impact investors) in addition to financial risk and return. We distinguish impact investments from the more mature field of socially responsible investments (“SRI”), which generally seek to minimize negative impact rather than proactively create positive social or environmental benefit.” The significance of impact investments, supported by two global institutions, a traditionally financial JP Morgan and an integrally social Rockefeller Foundation, lies in the institutionalization of the paradigm shift and in the change in the signification of the constitution of value. Reconfiguring the meaning of financial value by incorporating social value as a pre-condition for the inclusion of business in these assets is of critical importance. In the report by O’Donohoe, Leijonhufvud, Saltuk, Bugg-Levine, & Brandenburg (2010, p. 7) the pre-condition reads: “The business (fund manager or company) into which the investment is made should be designed with intent to make a positive impact. This differentiates impact investments from investments that have unintentional positive social or environmental consequences”. Socio-economic value creation enters the mainstream not only as a suggestion from philanthropy and the social sector, but also as a condition from the markets signalling what constitutes a priori an acceptable outcome. The re-constitution of value creates a unique opportunity for intentional social change mechanisms to provide opportunities for social impact as forms of superior value creation for economic and social returns, not only for a few but for many. In order to assess if nonprofit-business partnerships constitute such intentional mechanisms for social change and innovation we need to locate where value is created (loci 2 of value creation), how the value is assessed (excellentia 3 of value creation), and if the value created can make a difference to society (distinctus 4 of value creation), which we discuss in the following sections. Where Value is created: Loci of Value Creation An important constituent of our framework is establishing the loci of value creation while incorporating multi-level value assessment by introducing three levels of analysis: organizational, individual and societal. The focus in this element of the framework is on who benefits from the collaboration. Collaborations generate value, often simultaneously, at multiple levels: meso, micro, and macro.
For our purpose of examining value, we distinguish two loci: within the collaboration and external to it. Internally, we examine value accruing at the meso and micro levels for the partnering organizations and the individuals within those organizations. Externally, we focus on the macro or societal level where social welfare is improved by the collaboration in the form of benefits at the micro (to individual recipients), meso (other organizations), and macro (systemic changes) levels. Internal Value Creation Meso level - The most common focus in the literature and in practice is on the value accruing to the partners, that is, the organizational benefits that enhance the performance of the company or the nonprofit. Below we discuss in turn the benefits for the business and for nonprofits. For companies the cited business benefits of collaboration summarized here include enhancement of: company, brand reputation and image (Yaziji & Doh, 2009; Greenall & Rovere, 1999; Heap, 1998); legitimacy (Yaziji & Doh, 2009); corporate values (Austin, 2000b; Crane, 1997); community and government relations (Seitanidi, 2010; Pearce & Doh, 2005; Austin, 2000a); employee morale, recruitment, motivation, skills, productivity, and retention (Bishop & Green, 2008; Googins & Rochlin, 2000; Pearce & Doh, 2005; Turban & Greening, 1997); consumer preference (Heal, 1998; Brown & Dacin, 1997); market intelligence and development (Milne, Iyer & Gooding-Williams, 1996); market, product, process innovation and learning (Austin, 2000b; Googins & Rochlin, 2000; Kanter, 1999); stakeholder communication and accountability (Bowen, Newham-Kahindi & Herremans, 2010; Pearce & Doh, 2005; Andreasen, 1996); external risk management (Selsky & Parker, 2005; Tully, 2004; Wymer & Samu, 2003; Bendell, 2000a; Das & Teng, 1998); competitiveness (Porter & Kramer, 2002); innovation (Yaziji & Doh, 2009; Stafford, Polonsky, & Hartman, 2000; Austin, 2000a); and adaptation of new management practices due to the interaction with nonprofit organizations (Drucker, 1989). As a result, the financial performance and corporate sustainability can be strengthened. In the above cases the value of the partnership is located within the partner organizations. 2 In Latin locus refers to the place, location, situation, spot; loci is the plural, i.e., where we position the value creation. 3 In Latin excellentia refers to excellence, merit, worth, i.e., what is the worth of value creation. 4 In Latin distinctus refers to difference, i.e., the difference of the value creation. On the other hand, business can incur costs including: increased need for resource allocation and skills; increased risk of losing exclusivity in social innovation (Yaziji & Doh, 2009); internal and external scepticism and scrutiny (Yaziji & Doh, 2009); potential for reduced competitiveness due to open access innovation (Stafford, Polonsky, & Hartman, 2000); and increased credibility costs in the case of an unforeseen partnership exit, or reputational damage due to the missed opportunity of making a difference (Steckel, Simons, Simons & Tanen, 1999).
For nonprofits the summarized cited benefits of collaboration include: financial support received from the business (Yaziji & Doh, 2009; Brown & Kalegaonkar, 2002; Googins & Rochlin, 2000; Galaskiewicz, 1985); increased visibility (Seitanidi, 2010; Gourville & Rangan, 2004; Austin, 2000); credibility and opportunities for learning (Yaziji & Doh, 2009; Austin, 2000b; Googins & Rochlin, 2000; Huxham, 1996); development of unique capabilities and knowledge creation (Porter & Kramer, 2011; Yaziji & Doh, 2009; Hardy, Phillips & Lawrence, 2003; Googins & Rochlin, 2000; Gray, 1989; Huxham, 1996); increased public awareness of the social issue (Gourville & Rangan, 2004; Waddock & Post, 1995); increase in support for organizational mission (Pearce & Doh, 2005); access to networks (Millar, Choi & Chen, 2004; Yaziji & Doh, 2009; Heap, 1998); technical expertise (Vock, van Dolen & Kolk, 2011; Seitanidi, 2010; Austin, 2000a); increased ability to change behaviour (Gourville & Rangan, 2004; Waddock & Post, 1995); opportunities for innovation (Holmes & Moir, 2007; Stafford, Polonsky, & Hartman, 2000); opportunities for process-based improvements (Seitanidi, 2010); increased long-term value potential (Le Ber & Branzei, 2010a, b; Austin, 2000a, b); increase in volunteer capital (Vock, van Dolen & Kolk, 2011; Googins & Rochlin, 2000); positive organizational change (Seitanidi, 2010; Glasbergen, 2007; Waddock & Post, 2004; Murphy & Bendell, 1999); and sharing leadership (Bryson & Crosby, 1992). As a result, attainment of the nonprofit’s social mission can be strengthened. Costs for the nonprofit organizations are often reported to be greater than the costs for business (Seitanidi, 2010; Yaziji & Doh, 2009; Ashman, 2001) and may include: the decrease in potential donations due to the high visibility of a wealthy partner (Gourville & Rangan, 2004); increased need for resource allocation and skills (Seitanidi, 2010); internal and external scepticism ranging from a decrease in volunteer and trustee support to reputational costs (Yaziji & Doh, 2009; Millar, Choi & Chen, 2004; Rundall, 2000); decrease in employee productivity; increased costs due to an unforeseen partner’s exit from the partnership; concerns over the effectiveness and enforceability of the mechanisms developed; and serving as a legitimizing mechanism for “greenwashing” (Utting, 2005). Micro level - Collaborations can produce benefits within the partnering organizations for individuals. This value can be twofold: instrumental and psychological. From the practical side, working in cross-sector collaboration can, for example, provide new or strengthened managerial skills, leadership opportunities, technical and sector knowledge, and broadened perspectives. On the emotional side, the individual can gain psychic satisfaction from contributing to social betterment and developing new friendships with colleagues from the partnering organization. The micro level benefits are largely under-explored in the literature despite the broad acceptance that implementing CSR programmes should benefit a wide range of stakeholders beyond the partner organizations (Green & Peloza, 2011; Vock, van Dolen & Kolk, 2011; Bhattacharya & Sen, 2004), including employees and consumers. In a recent study Vock, van Dolen and Kolk (2011) argue that the participation of employees in partnerships can affect consumers either favorably or unfavorably.
The effect on consumers will depend on how they perceive the employees’ involvement with the cause, i.e., whether they perceive that during work hours the cause distracts employees from serving customer needs well. Bhattacharya, Sen and Korschun (2008) reported that a company’s involvement in CSR programs can satisfy several psychological needs including personal growth, the employees’ own sense of responsibility for the community, and reduction in levels of stress. A precondition of the above is that employees should get involved in the relevant programs. More instrumental benefits comprise the development of new skills; building a connection between the company and the employee, particularly when there are feelings of isolation due to physical distance between the employee and the central office; potential career advancement (Burchell & Cook, 2011); and using the resultant positive reputation as a “shield” for the employee when local populations are negative towards the company (Bhattacharya, Sen, & Korschun, 2008). Similar psychological mechanisms associated with the enthusiasm of employees have the potential to cause spillover effects, triggering favourable customer reactions (Kolk, Van Dolen & Vock, 2010). Employee volunteering, an important component of partnerships (Austin, 2000a), may improve work motivation and job performance (Bartel, 2001; Jones, 2007), customer orientation, and productivity, and in effect benefit consumers (Vock, van Dolen & Kolk, 2011). The partnership literature makes extensive reference to the partnership outcomes, concentrating more on the benefits rather than the costs, that contribute to the value creation internally, either for the profit or the nonprofit partners as demonstrated above. However, there is a notable lack of systematic in-depth analysis of outcomes beyond the descriptive level; in effect, the full appreciation of the benefits and costs remains unexplored. The majority of the literature discusses outcomes as part of a partnership conceptual framework or by reporting outcomes as one of the partnership findings. A limited number of studies address outcomes as a focal issue and offer an outcomes-centred conceptualization (Hardy, Phillips & Lawrence, 2003; Austin & Reavis, 2002). The above is surprising as partnerships are related to improved outcomes; furthermore, as an interdisciplinary setting partnerships have been associated with the potential to link different levels of analysis (Seitanidi & Lindgreen, 2010) and practices across sectors (Waddock, 1988), and to address how society is better off as a result of the cross sector interactions (Austin, 2000a). A precondition to address the above is to study the links across levels and loci of benefits. As Bhattacharya, Korschun, & Sen (2009) remark, in order to understand the full impact of CSR initiatives we first need to understand how CSR can benefit individual stakeholders. Similarly, Waddock (2011) refers to the individual level of analysis as the “difference makers” comprising the fundamental element for the development of institutional pressures. Hence, either the effects of initiatives on individuals or the role of individuals in affecting value creation requires further analysis on the micro level. Table 1 below presents the categorization of benefits on different levels of analysis and according to the loci of value. Understanding the links across the different levels of value creation and value capture is challenging.
Interestingly, the most recent research on the micro level of analysis is leading in capturing the interaction level across the internal/external dimension (employees/customers) of benefits (Vock, Van Dolen & Kolk, 2011; Kolk, Van Dolen & Vock, 2010). The conceptualization of the links between employees and customers heralds a new research domain that captures the missing links of cause and effect in partnerships either directly or indirectly and focuses on interaction as a level of analysis. In Table 1 value creation is also divided according to the production of ‘first order’ (direct transfer of monetary funds) and ‘second order’ benefits and costs (e.g., improved employee morale, increased productivity, better motivated sales force) (Gourville & Rangan, 2004), providing a time and value dimension in the categorization. INSERT TABLE 1 HERE External Value Creation Macro level - Beyond the partnering organizations and their individuals, collaborations aim to generate social and economic value for the broader external community or society. While actions that alleviate problems afflicting others can take countless forms, we define collaborative value creation at the macro level as societal betterment that benefits others beyond the collaborating organizations and that results from their joint actions. External to the partnering organizations, the collaboration can create social value for individuals – targeted beneficiaries with needs that are attended to by the collaborative action. It can also strengthen other social, economic, or political organizations that are producers of social value, and hence increase society’s capacity to create social well-being. At a broader societal level the collaboration may also contribute to welfare-enhancing systemic change in institutional arrangements, sectoral relationships, societal values and priorities, and social service and product innovations. The benefits accruing to the partnering organizations and their individuals internal to the collaboration are ultimately due to the value created external to the social alliance. Ironically, while societal betterment is the fundamental purpose for cross-sector collaborative value creation, this is the value dimension that is least thoroughly dealt with in the literature and in practice. We provide examples of value creation external to the partnership in Table 1. On the macro level the benefits for individuals or beneficiaries include the creation of value for customers as we have seen above, an indirect benefit (Vock, van Dolen & Kolk, 2011; Kolk, Van Dolen & Vock, 2010) mediated by the direct benefit that accrues to the employees as a result of partnerships. Creating direct value for customers is an important distinction between philanthropic and integrative/transformational interactions for socio-economic benefit (Reficco & Marquez, 2009). Rufin and Rivera-Santos (2008) pointed to the linearity that characterizes business value-chains, i.e., “a sequential process in which different actors members contribute to value creation in a chronological sequence, with each member receiving a product and enhancing it through the addition of value before handing to the next” (Reficco & Marquez, 2009, p. 6). However, in nonprofit-business partnerships the duality of the nature of benefits (economic and social) exhibits non-linearity (Reficco & Marquez, 2009) in the process of value creation. Hence, the isolation and attribution of socio-economic benefit is rather complex.
An example of a socio-economic customer benefit derives from the collaboration of HP and the African social enterprise mPedigree. The cloud and mobile technology solution they developed allows customers in Africa to check the genuineness of drugs and avoid taking counterfeit drugs, which in effect saves lives (Bockstette & Stamp, 2011). Individuals who may benefit from partnerships include the beneficiaries of the partnership programs, such as dairy farmers receiving support in rural areas, women in rural India for whom jobs were created (Bockstette & Stamp, 2011), or coffee farmers in Mexico whose earnings increased by 40% while the quality of coffee produced for Starbucks’s customers improved (Austin & Reavis, 2002). Costs might include accountability and credibility issues and possible problems with administering the solution. The benefits for other organizations result from the complexity that surrounds social problems and the interdependence across organizations and sectors. Addressing poverty requires tackling issues in education and health; hence, administering a solution crosses other organizational domains that interface with the central issue of the partnership. For example, when the partnership between Starbucks and Conservation International aimed at improving the quality of coffee for its customers and increasing the income of the Mexican farmers, it also increased the local cooperatives’ coffee sales by 100% and in addition resulted in the development of another partnership between the company and Oxfam (Austin & Reavis, 2003). Potential costs include expenses for the development of new markets and the appropriateness of the standards developed. The overall benefits of, for example, reduced pollution and deaths, increased recycling, and improved environmental standards result in value to society at large, benefiting many people and organizations either directly or indirectly. For example, by reducing drug abuse, society benefits through reduced work time loss, health problems, and drug-related crime (Waddock & Post, 1995). Moving to systemic benefits for other organizations, these can include the adoption of technological advances through available open innovation/intellectual property and changing processes of “doing business” that may result in industry-wide changes. For example, developing environmentally friendly technology between a firm and an environmental organization in order to decrease environmental degradation and in effect creating new industry standards (Stafford, Polonsky, & Hartman, 2000); changing a bank’s lending policies in order to facilitate job creation for socially disadvantaged young people, leading to change in banking industry policies (Seitanidi, 2008); contributing to the development of community infrastructure; increasing the paid-time allocation for employee community service; developing a foundation that supports community initiatives (Austin, 2000a). In all the above examples the value is located outside the partner organizations. In cases where partners raise claims that cannot be substantiated, possible costs can include a decrease in the credibility of the institution of partnerships to deliver societal benefits, an increase in cynicism, and a potential decrease in institutional trust in business and nonprofit organizations. Waddock and Post (1995) suggested that catalytic alliances focus their efforts for a brief period of time on generating public awareness through the media of complex and worsening social problems.
Some of the characteristics of catalytic alliances are quite different from those of nonprofit-business partnerships (temporary nature vs. long term; direct vs. indirect long-term shifts in public attitude). However, they have some unique characteristics that potentially can be beneficial for partnerships: they are driven by a core central vision rather than the instrumentality that predominantly characterizes cross-sector partnerships (Selsky & Parker, 2005). Hence, catalytic alliances successfully link the work of previously fragmented agencies that used to work on related issues (e.g., hunger and homelessness) (Waddock & Post, 1995, p. 959). Equally, they allow for an expectation gap to emerge “between the current state of action on an issue and the public’s awareness of the issue. The ‘expectations gap’ actually induces other organizations and institutions to take action on the issue. … the money paled by comparison to the organizational process stimulated” (Waddock & Post, 1995, p. 959). Social partnerships develop socio-economic value for a broad constituency. Hence, they address the societal level and function increasingly as governance mechanisms (Crane, 2010) while providing diverse and multiple benefits. In effect, they will be required to move from an instrumental to an encompassing normative approach focusing on a central vision which can assist in the engagement with internal and external stakeholders early on and produce a “catalytic- or ripple-effect” (Waddock & Post, 1995) that will be beneficial on all levels of analysis directly or through the virtuous circle of value creation. How Value is assessed: Excellentia (worth) of Value Creation “The perceived worth of an alliance is the ultimate determinant of, first whether it will be created and second whether it will be sustained” (Austin, 2000b, p. 87). A necessary prerequisite for the continuous co-creation of value is the ability of each partner to capture some of the value either unilaterally or jointly during value cycles (Le Ber & Branzei, 2010a; Makadok, 2001), not always proportionately to the value generation of each partner, as value capture is not dependent on the value generation (Lepak, Smith & Taylor, 2007). The co-creation of economic (EV) and social (SV) value in partnerships should be more than, and different from, the value originally created by each organization separately, as this remains a strong motivation for the partners to engage in long-term interactions. In order to assess the socio-economic value of the partnership outcomes created, the partners are required to define economic and social value. For both businesses and nonprofits EV is “defined as financial sustainability; i.e., an organization’s capacity to operate indefinitely” (Márquez, Reficco & Berger, 2010, p. 6). On the other hand, SV has been associated in the context of partnerships with “meet(ing) society’s broader challenges” (Porter & Kramer, 2011, p. 4); similarly, with “meeting social needs in ways that improve the quality of life and increase human development over time” (Hitt, Ireland, Sirmon & Trahms, 2011, p. 68), including attempts “that enrich the natural environment and/or are designed to overcome or limit others’ negative influences on the physical environment” (ibid). Although previously doing well and doing good were separate functions associated with different sectors, today they are seen as “manifestations of the blended value proposition” (Emerson, 2003, p. 35) or of the more recent “shared value” (Porter & Kramer, 2011).
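The synergy condition described above can be stated compactly. As a minimal sketch (the notation is ours, not drawn from the cited sources), if EV and SV denote the economic and social value created, the partnership P is worth creating and sustaining when the value it co-creates exceeds what the business B and the nonprofit N could have generated separately:

\[
EV_{P} + SV_{P} \;>\; \big(EV_{B} + SV_{B}\big) + \big(EV_{N} + SV_{N}\big)
\]

The difference between the two sides is one way of expressing the synergistic value that the following sections seek to locate (loci), assess (excellentia), and trace in its societal difference (distinctus).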
The value capture on the different levels of analysis is dependent on the source that initiates the creation of value (Lepak & Smith, 2007), as internal and external stakeholders of the partnership may hold different perceptions as to what is valuable due to different “knowledge, goals, context conditions that affect how the novelty and appropriateness of the new value will be evaluated” (ibid, p. 191). The outcome assessment in partnerships is likely to increase as cross sector collaborations proliferate (Sullivan & Skelcher, 2003) and there will be more pressure to understand the consequences of partnerships (Biermann, Mol, & Glasbergen, 2007). Different forms of collaboration will have varying degrees of evaluation difficulty associated with the availability and quality of data, and the experience of organizations in employing both qualitative and quantitative measures of assessment. Some of the difficulties in assessing the socio-economic value in partnerships are: 1/ the subjectivity associated with valuing the outcomes, i.e., what is considered acceptable, appropriate and of value for whom (Mulgan, 2010; Lepak & Smith, 2007; Austin, 2003; Amabile, 1996); 2/ the variation in the valuations of stakeholders of a company’s CSR implementation programs by country and culture (Endacott, 2003); 3/ the attribution to a particular program, particularly for companies that have a sophisticated CSR portfolio of activities (Peloza & Shang, 2010; Peloza, 2009) or a portfolio of partnerships (Austin, 2003; Hoffman, 2005); 4/ the lack of consistency in employing CSR metrics (Peloza & Shang, 2010; Peloza, 2009); 5/ the fact that many companies lack an explicit mission statement for their social performance activities against which they would have to perform (Austin, Gutiérrez, Ogliastri & Reficco, 2007); 6/ the attribution of a particular outcome to a specific partnership program (Brinkerhoff, 2002); 7/ combining all the elements of a partnership relationship; 8/ methodological challenges in the measurement due to the intangible character of many outcomes associated with partnerships, and requirements for documented, likely, and perceived effects of partnerships (Jorgensen, 2006; Sullivan & Skelcher, 2003). Austin, Stevenson and Wei-Skillern (2006) summarize the difficulties concisely: “The challenge of measuring social change is great due to nonquantifiability, multicausality, temporal dimensions, and perspective differences of the social impact created”. Despite the above difficulties, three key reasons why businesses should aim to strengthen their financial metrics regarding CSP are suggested by Peloza (2009): 1/ as a method to facilitate cost-effective decision making; 2/ as a measure to avoid interference in the allocation of resources due to the lack of hard data; 3/ as an instrument to enable inclusion of CSP budgets in the mainstream budgeting of companies. Similarly, demands for metrics for nonprofit organizations to measure social value have emerged due to the need to: 1/ demonstrate the effectiveness of programs to foundations; 2/ provide justification for the continuation of funding from public authorities; 3/ provide hard data to investors similar to those that are used to measure profit; 4/ demonstrate impact to all stakeholders (funders, beneficiaries, partners). Mulgan (2010) provides a synopsis of ten methods, out of the hundreds that exist, for calculating social value, which are often competing.
He notes that despite the enthusiasm that surrounds such methods, in reality they are used by few as guidance in decision making; furthermore, the fragmentation that exists in the use of different metrics by each group (group 1: NGOs and foundations; group 2: governments; group 3: academics) provides an explanation of why metrics are not used in practice. He further remarks that the tools used for assessment do not reflect the subjective nature of value and in effect are misaligned with the strategic and operational priorities of an organization. Mulgan (2010) points out that in business different tools are used for “accounting to external stakeholders, managing internal operations, and assessing societal impact” (p. 40); however, when social value is measured in nonprofit organizations, these measurements are conflated into one, comprising another reason for the failure of metrics to influence decisions. A further difficulty, when using social return on investment (SROI), is estimating the benefit that will be produced in the future as a result of a recent action relative to its cost. His advice, as the director of the Young Foundation, for constructing value is that metrics should be used for the three roles they can perform: “external accountability, internal decision making, and assessment of broader social impact” (Mulgan, 2010, p. 42). He suggests that funders must adapt their frameworks to the particular organization they are interested in assessing and, more importantly, that the metrics must disclose their inherent subjectivity and need to be employed in a proportionate way depending on the size of the nonprofit organization. Table 2 provides an overview of the methods to measure social value, providing a brief description, an example, and the problems usually associated with each method. INSERT TABLE 2 HERE Table 2: 10 Ways to Measure Social Value, adapted from Mulgan, 2010, p. 41. Implicit in the above list, but explicitly evident in the extant literature of social partnership and nonprofit/development, is the interchangeable use of the terms outcomes and impact (Jorgensen, 2006; Sullivan & Skelcher, 2003; Vendung, 1997), which results in difficulties in the categorization, comparison, and discussion of the issues around assessment and evaluation. Most of the available literature discusses evaluation parameters and provides frameworks for evaluation that are usually associated with the performance of the partnership. As Preskill and Jones (2009, p. 3) suggest: “evaluation is about asking and answering questions that matter - about programs, processes, products, policies and initiatives. When evaluation works well, it provides information to a wide range of audiences that can be used to make better decisions, develop greater appreciation and understanding, and gain insights for action”. We have discussed such issues in the section on the evaluation of partnership implementation. In this section we are concerned with the assessment of outcomes. Measurement is not a frequent topic thus far, as the discussion appears only recently to have turned to the outcomes of partnerships. This appears also to be the case in the nonprofit-government and government-business (PPPs) “arenas of partnerships” (Selsky & Parker, 2005). Andrews and Entwistle (2010, p.
680) suggest that in the context of public sector partnerships “very few studies, however, have examined whether the benefits assumed by sectoral rationales for partnership are actually realized (for partial exceptions, see Provan and Milward 1995; Leach, Pelkey, and Sabatier 2002; Arya and Lin 2007)”. On the other hand, assessing value in philanthropic and transactional approaches (sponsorship and cause related marketing) is a well-established practice that involves sophisticated metrics (Bennett, 1999; Irwin & Asimakopoulos, 1992; Meenaghan, 1991; Wright, 1988; Burke Marketing Research, 1980). This is due to the following reasons: historical data are available that assist in developing objective standards; the evolution of metrics has taken place through time; and the assessment involves less complicated metrics as the activities assessed can be attributed to the philanthropic or transactional interaction in more direct ways. The agency that has developed sophisticated metrics for transactional forms of interactions is IEG (2011), the leading provider of valuation and measurement research in the global sponsorship industry. Based on 25 years of experience they have developed a methodology that captures the value of sponsorship and cause related marketing, incorporating the assessment of tangible and intangible benefits (examples of the criteria used include: impressions in measured and non-measured media, program book advertising, televised signage, tickets, level of audience loyalty, degree of category exclusivity, level of awareness of logos), the geographic research/impact (estimation of the size and value of the market where the sponsor will promote its sponsored activity), the cost/benefit ratio (assessment of the costs and benefits, recognizing the risks and rewards associated with sponsorship), and price adjusters/market factors (allowing for the incorporation of factors that are unique to each sponsor, the length of the sponsor’s commitment and the fees for the sponsorship). As becomes evident from the above, the assessment uses ‘value-for-money’ analysis, which tends to employ a single criterion, usually quantitative, allowing the comparison across data (Sullivan & Skelcher, 2003), but leaves societal outcomes unaddressed (a stylized sketch of this logic is given below). Indicators for the synergistic outcomes may include: “aspects of program performance that relate to advantages beyond what the actors could have independently produced” (Brinkerhoff, 2002, pp. 225-226); developing links with other programs and actors; enhanced capacity of the individuals involved in the partnership and influence of individual partners; and multiplier effects (extension or development of new programs) (Brinkerhoff, 2002). Examples of multiplier effects could be building a degree of goodwill towards the business partner among important players in the environmental sector, hence creating a buffer zone between the business and the nonprofit sector where relationships were previously antagonistic (Seitanidi, 2010). Attribution, however, remains problematic for the CVC as it is difficult to provide evidence for the value-added that derives from the partnership. Brinkerhoff (2002) offers that it is usually perception- and consensus-based and subjective, and hence relates to each partner’s level of satisfaction with the relationship, which will also provide an indication of the relationship’s sustainability.
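As a stylized sketch of the single-criterion ‘value-for-money’ logic, and of the SROI calculation referred to earlier (the notation and the numbers are our own illustration, not drawn from IEG or Mulgan), the two measures can be written as:

\[
\text{Benefit-cost ratio} \;=\; \frac{\text{total monetized benefits}}{\text{total costs}}, \qquad
\text{SROI} \;=\; \frac{\sum_{t=1}^{T} SV_{t}\,/\,(1+r)^{t}}{\text{investment}}
\]

where \(SV_{t}\) is the monetized social value attributed to the intervention in year \(t\) and \(r\) is a chosen discount rate, both of which are assumptions of whoever performs the valuation. For instance, a partnership programme costing 100,000 whose attributed social benefits have a present value of 250,000 would report an SROI of 2.5:1. The example also makes the limitation visible: both measures compress all outcomes into a single monetary criterion, which is precisely why they leave the broader societal outcomes discussed here unaddressed.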
Although reference is often made to the synergistic results of partnerships, they are rarely fully articulated and measured (Dobbs, 1999; Brinkerhoff, 2002). Looking only at outcomes and ignoring the process dimension is usually linked with sacrificing long-term value creation for the benefit of short-term performance (Kaplan & Norton, 2001). Hence, by establishing single or pluralistic criteria (i.e., stakeholder-based evaluations) according to the interests involved in the partnership (Sullivan & Skelcher, 2003), deciding the standards of performance against each criterion, and measuring the performance of the collaboration, one constructs the logic of evaluation (Fournier, 1995). The achievement of process outcomes is linked with the program outcomes, which are concerned with the total benefits of the partnership minus the costs. Some methods predominantly prioritize cost/benefit analysis, often at the expense of “questions of quality and appropriateness” (Sullivan & Skelcher, 2003, p. 189). Alternative methods of bottom-up evaluation include stakeholder-based approaches such as a ‘theory of change’ approach, defined as “a systematic and cumulative study of the links between the activities, outcomes and contexts of the initiative” (Connell & Kubisch, 1998, p. 16). This approach aims to encourage stakeholders to participate in the evaluation, which assists in connecting the social problem with the context, specifying the strategies that can lead to long-term outcomes. An alternative approach is “interactive evaluation” (Owen & Rogers, 1999) where the views of the stakeholders represent the “local experience and expertise in delivering change and improvement, seeing evaluation as an empowering activity for those involved in the programme and attempting to address problems not addressed before or employing approaches that are new to the organisation. The value of interactive evaluation lies in the fact that it aims to encourage a learning culture and is appropriate for use in innovative programmes” (Sullivan & Skelcher, 2003, p. 197). The more nonprofit-business partnerships embrace their role as global governance mechanisms (Crane, 2010), the more they will be required to align their evaluation methods with those of public policy partnerships. We consider that partnerships require a three-point evaluation, as in social development and change (Oakley, Pratt & Clayton, 1998; Blankenberg, 1995): process outcomes, program outcomes, and impact. Process outcomes (also expressed as outputs in several frameworks) are associated with the effort in partnerships; we suggested the evaluation of partnership implementation as the point of assessment, which was discussed at the end of the implementation phase. When the point of measurement is effectiveness, we suggested above assessing the effectiveness of the partnership through its program outcomes. In the next section we examine the impact of partnerships and we associate the point of measurement with change, i.e., the difference from the original social problem that the partnership addressed. What is the impact: Distinctus (difference) of Value Creation Impact refers to the “long term and sustainable changes introduced by a given intervention in the lives of the beneficiaries” (Oakley, Pratt & Clayton, 1998, p. 36). These can relate to anticipated or unanticipated changes caused by the partnership to the beneficiaries or others and can range from positive to negative (Oakley, Pratt & Clayton, 1998; Blankenberg, 1995).
We adapt the definition of impact assessment for development interventions by Oakley, Pratt & Clayton (1998, p. 36) to the partnership context: Partnership impact assessment refers to the evaluation of how and to what extent partnership interventions cause sustainable changes in living conditions and behaviour of beneficiaries and the effects of these changes on others and the socio-economic and political situations in society. Impacts can emerge, for example, from hiring practices, emissions and production, and can respectively provide outcomes such as increased diversity in the workplace, reduction of emissions, and increased safety conditions in production. Capturing the impacts, however, often requires intelligence that exceeds the abilities of single organizations and mimics the data gathering process of the government for measuring large scale phenomena such as poverty, health pandemics, and so forth. The impacts of the above examples for a company would respectively refer to improved employment rates in the workforce across the world, improved air quality/biodiversity, and reduced accident rates. In the case of a nonprofit-business partnership, capturing the impact of the co-creation process would require distinguishing the partnership’s effects from other related efforts of the business and the nonprofit organization. In addition, it would require developing an understanding of the expectations of the stakeholders within context (social, political, economic and environmental) (Oakley, Pratt & Clayton, 1998). Hence, it is not a surprise that even in development interventions very few evaluations move beyond outcomes to impacts (cited in: Oakley, Pratt & Clayton, 1998, p. 37). Scholars in CSR have called for research to provide evidence not only for the existence of positive social change but to move towards how change is being achieved (Aguilera, Rupp, Williams, & Ganapathi, 2007; McWilliams & Wright, 2006). Companies initially responded by including in their CSR reports lists of their social programs and initiatives demonstrating their actions on social issues. However, CSR reports offered neither a coherent nor a strategic framework; “instead they aggregate anecdotes about uncoordinated initiatives to demonstrate a company’s social sensitivity” (Porter & Kramer, 2006, p. 3). In the majority of companies’ reports, “reductions in pollution, waste, carbon emissions, or energy use, for example, may be documented for specific divisions or regions but not for the company as a whole. Philanthropic initiatives are typically described in terms of dollars or volunteer hours spent but almost never in terms of impact” (ibid). Hence, it appears that companies have been at best reporting outcomes of social and environmental initiatives, and at times suggesting they represented impacts. This is more evident in the high profile rankings such as the FTSE4GOOD and the Dow Jones Sustainability Indexes. Although they intend to present rigorous impact indicators comprising social and environmental effects, in fact the lack of consistency, the variable weighting of criteria, the lack of external verification, the statistical insignificance of the answers provided in the surveys, and often the inadequate proxies employed reveal the difficulties associated with reporting impacts systematically and consistently even for professional and large organizations (Porter & Kramer, 2006; Chatterji & Levine, 2006).
Hence, the challenge is not only finding new ways to co-create socio-economic value through partnerships in order to achieve simultaneously positive impact for both business and society (Kolk, 2004), directly or indirectly (Maas & Liket, 2010), but also developing indicators and measuring it. Despite numerous papers that refer to impact (Atkinson, 2005), either by mentioning the call for business to report on the “numerous and complex social impacts of their operations” (Rondinelli & London, 2003, p. 62) or by suggesting key principles for successful collaborative social initiatives that can contribute to the impact (Pearce II & Doh, 2005), the fact remains that there is a lack of studies that focus on impact assessment of nonprofit-business partnerships. A recent study developed by the UK’s biggest organization for the promotion of CSR, Business in the Community, and Cranfield University’s Doughty Centre for Corporate Responsibility (BITC & Doughty Report, 2011) identified 60 benefits for business that are clustered in seven areas, one of which is the ‘direct financial impact’ of CSR activities. Also, one of the report’s increasingly important future trends is ‘macro-level sustainable development’, defined as: “the somewhat undefined benefits from contributing to sustainable development. This relates to the impact and responsibilities an organisation has in relation to a geographically wide level of economic, social and environmental issues – at a ‘macro level’. Here, ‘macro level’ means society and nature as a whole, encompassing not just an organisation and its immediate interactions, but sustainable development in its industry, country, region and indeed planet.” Although the report does not separate the outcome from the impact level in the presentation of benefits, it provides examples of macro-level issues such as “health inequalities or access to healthcare; poor education; ageing populations; lack of investment in sciences or arts and innovation generation; the rights of workers, children and sex/race equality; and environmental issues such as climate change, deforestation, pollution, ocean health, extinction of species, and urbanisation.” (ibid, p. 17). The report further suggests that ‘macro-level sustainable development’ is a recent (in 2008/09) addition to the reported business benefits. Studies that refer to the evaluation of collaborative networks suggest that, due to the multidimensionality of the programs involved, there is a need to combine randomized controlled trials with more flexible forms of evaluation that involve researchers and practitioners combining their knowledge during workshops in order to establish links between actions and outcomes, while using multiple criteria for measuring success based on local knowledge (Head, 2008; Schorr, 1988). In particular, Head (2008) cautions against premature judgements that may be drawn by funders who do not realize that initiatives may take 4-6 years to reach the beginning of the implementation phase. In collaborative environmental management, Koontz and Thomas (2006) suggest that the question “To what extent does collaboration lead to improved environmental outcomes?” (p. 111) remains unanswered. They offer suggestions for measuring environmental impact through outcomes such as perceptions of changes in environmental quality, land cover, biological diversity, and parameters appropriate to a specific resource (e.g., water biochemical oxygen demand, ambient pollution levels) (ibid, p. 115).
They also recommend that academics should not attempt to pursue large impact questions but rather collaborate with practitioners on the design, monitoring of outputs, and funding of the required longitudinal and cross-sectional studies (ibid, p. 117). In the context of partnerships for development, Kolk, Van Tulder & Kostwinder (2008, p. 271) group the changes, benefits, and results of partnerships to the wider society as “the final and ultimate outcomes”. Although the word outcome is used, impact is assumed, as they suggest that the best way to assess the outcomes is by their “direct and indirect impact on the Millennium Development Goals.” In fact, the Business in Development program of the Dutch national committee for international cooperation and sustainable development (NCDO) developed a methodology measuring the contribution of the private sector to the Millennium Development Goals (MDGs) (NCDO, 2006). The methodology of the report highlights the measurement of impact and indirect contributions of MNCs and stresses that the lack of availability of information was a significant factor in the scoring developed. Furthermore, it remarked that, due to the differences in the nature of the participating businesses testing the measurement framework, it would be impossible to compare their performance. Conclusions from the report included: that the contribution and attention given by companies to the MDGs can be measured; that it is clearer where, how and why companies contribute to the MDGs; and that understanding the focus of a company’s MDG efforts can help it choose which NGOs to partner with to achieve even better MDG impact (NCDO, 2006: 7-12). Currently the largest research and knowledge development initiative of the European Commission aiming to measure impact is under way. The “Impact Measurement and Performance Analysis of CSR” (IMPACT) project, which commenced in March 2010 and will conclude in March 2013, is hoping to break new ground in addressing questions across multiple levels and dimensions, combining four empirical methods: econometric analysis, in-depth case studies, network analysis, and Delphi analysis. The research will address how CSR impacts sustainability and competitiveness in the EU across 27 countries (Impact, 2010). Although impact is frequently used to denote effectiveness, outcomes, or performance, it is often the case that its contextual and temporal meaning is understood in an evolutionary and interactive way. Demonstrating the importance of impact, and its initial response to it, Nike stated in its 2005 CSR report: “A critical task in these last two years was to focus on impact and develop a systematic approach to measure it. We’re still working hard at this. How do we know if a worker’s experience on the contract factory floor has improved, or if our community investments helped improve a young person’s life? We’re not sure anyone has cornered the market in assessing real, qualitative social impact. We are grappling with those challenges now. In FY07-08, we will continue working with key stakeholders to determine the best measures. We aim to have a simple set of agreed upon indicators that form a baseline and then to measure in sample areas around the world” (Nike, 2005, p. 11). In its 2009 CSR report Nike acknowledged that solutions require industry-level and systemic change that will have to pass through ‘new approaches to innovation and collaboration’. Interestingly, the report states: “Our aim is to measure our performance and report accurate data.
At times, that means systems and methodology for gathering information need to change even as we collect data, as we learn more about whether we are asking the right questions and whether we are getting the information that will help us to answer them rather than just information” (p. 18). The company also reported that it aimed at developing targets and metrics around programs for excluded youth around the world, which demonstrates the policy-type thinking required for the development of impact indicators and of processes to monitor, report, and advocate, capabilities usually associated with nonprofit organizations that need to be developed as new competencies by business and their partners. Figure 7 below demonstrates the evolution of understanding in the process of monitoring and collection of data that contribute to the understanding and reporting of impacts. INSERT FIGURE 7 HERE Figure 7: Workplace impact in factories, adapted from the Nike CSR Report (Nike, 2009, p. 37). Unilever’s recent ‘Sustainable Living Plan’, launched in 2010, aims to capture holistically the company’s social, environmental and economic impacts around the world. The focus on multidimensional effects demonstrates that some companies are moving forward, for the moment, with aspirational impact targets that, if achieved, will represent a significant step forward in delivering socio-economic and environmental progress around the world. Unilever has developed 50 impact targets that are grouped under the following priorities of the plan, to be achieved by 2020: (1) To help more than one billion people take action to improve their health and well-being; (2) To halve the environmental footprint of the making and use of its products; (3) To source 100% of its agricultural raw materials sustainably. The impacts are associated with increasing the positive and reducing the negative impacts. Working in partnership is central, as NGOs provide the local connection that facilitates the implementation of the programs (Unilever, 2010). Also, impacts are captured at the local level; hence the role of local partners and government/local authorities in capturing, measuring and reporting impact is profound. However, the compilation of impact reports will be the responsibility of MNCs. Hence they will need to demonstrate transparency, accountability, and critical reflection if they wish the reports to play an important substantive role rather than just being cosmetic. Demonstrating, for example, missed impact targets and capturing the reasons behind the misses will be important not only in raising awareness of the difficulties associated with impact measurement and reporting but also in calling for the assistance of other actors in pursuing impact targets more effectively in the next value capture cycle. The nonprofit sector is similarly under pressure to demonstrate “its own effectiveness as well as that of their partners. They need to be able to identify the difference their efforts (and funds) have made to the poorest and most vulnerable communities; as well as to demonstrate that these efforts are effective in bringing about change” (O’Flynn, 2010, p. 1). The UK Charity Commission, for example, requires NGOs to report against their core strategic objectives. The above are in addition to the moral obligation of nonprofits to demonstrate accountability and appreciation of their impacts (ibid).
An interesting insight in O’Flynn’s (2010) paper from the nonprofit sector is that, due to the distance between an organization’s initial intervention and the systems it aims to affect, it is very difficult to claim with confidence a direct impact that is attributable. Changes in complex systems are likely to be influenced by a range of factors and hence it is impossible for a nonprofit to claim attribution (ibid). Moving away from this complexity, development organizations have started working on and documenting their contribution to change instead of their attribution (ibid). Based on the views of partnership practitioners that participated in a study of the ‘Partnering Initiative’, one of the priorities for the future regarding evaluation will be the need to develop tools for measuring the impact on the beneficiaries, the impact on partners, and the unexpected outcomes (Serafin, Stibbe, Bustamante, & Schramme, 2008). Moving to the transactional forms of interaction, as a proxy to examining partnership impact, Maas and Liket (2010) in a recent empirical study examined the extent to which the corporate philanthropy of 500 firms listed in the Dow Jones Sustainability Index (DJSI) is strategic, as indicated by the measurement of their philanthropic activities’ impact along three dimensions: society, business and reputation/stakeholder satisfaction. The authors suggested that despite the lack of common practice in how impact is measured, it appeared that 76% of the DJSI firms measure some sort of impact of their philanthropic activities, predominantly impact on society and on reputation and stakeholder satisfaction. More likely to measure impact are larger firms with substantial philanthropic budgets, from Europe and North America and from the financial sector. Following long-standing pressures for strategic corporate philanthropy to demonstrate value, Lim (2010) has produced a report that aims to offer guidance on the measurement of the value of corporate philanthropy. Transactional and integrative approaches of cross sector interaction share similar challenges in addressing questions about impact, including the long-term nature of the outcomes and impact, the complexity in measuring the results, the fact that they both aspire to effect social change, which is a lengthy process, and the context-specific character of the interventions (Lim, 2010). We provide below (Table 3) a brief overview of the measures that can be employed for impact assessment for corporate philanthropy and partnerships. Despite the lack of agreement on definitions of what constitutes social value and on ways to measure it, Lim (2010) suggests that the attempt to measure it is beneficial in itself, as it encourages rigour in the process, improvement, and making explicit the assumptions of the partners. Articulating impact requires developing a ‘baseline’ as a starting point; developing indicators is associated only with some of the methods; most of the methods involve a degree of monitoring and developing a final report on the impacts. The illustrative methods provide a soft approach but are not necessarily less effective in identifying the problems in delivering and increasing the impact of interventions. They are appropriate when it is impossible to develop indicators or experimental procedures. Experimental methods are conducted to be able to explain some kind of causation. Lim (2010, p.
9) suggests that experimental methods or formal methods should be employed for (1) “reasonably mature programs that represent an innovative solution and wherein the funder and /or grantee seeks to prove to other funders or NGOs that it should be scaled-up” and (2) “programs wherein the cost of risk of failure is high (e.g., those with highly vulnerable beneficiaries)”. These are the only methods that can prove definite causation and attribution. Alternatives to experimental methods are practical methods of measuring intermediate outcomes that allow for identifying improvement opportunities. The two practical methods that we list in Table 3 are presented by Lim (2010), each associated with different applications. Outcomes measurement is suited to: (1) “programs wherein the funder is involved in the program’s design and management and shares responsibility for its success. (2) Programs wherein funders and grantees desire frequent and early indicators in order to make real-time adjustments to interventions and strategy” (ibid). Regarding the impact achievement potential, Lim (2010) states that it is more appropriate for start-up programs in their early stages and for interventions in which the funder is not involved in the management. INSERT TABLE 3 HERE Due to the multidimensionality of nonprofit-business partnerships, operating on multiple levels and producing a wide range of effects, it is difficult in most cases to set up experiments in order to establish causality, because partnerships operate within dynamic adaptive systems of multiple interactions. We borrow the term ‘panarchy’ from Gunderson and Holling (2001) to refer to the evolving nature of complex adaptive systems as a set of adaptive cycles. Applied to partnerships, panarchy theory would suggest that the interlinked and never-ending cycles of value creation at each level, and the links between them, represent a nested set of adaptive cycles taking place in spatial and temporal scales. In order to increase the effectiveness of monitoring in dynamic environments, as Gunderson and Holling (2001) advocate, it might be possible to identify the points at which it is possible for the system to accept positive change. In this way the partners will acknowledge the interactive and non-linear effects of the dynamics of the different levels of change. Managers can get a more in-depth understanding of the role their actions play in influencing socio-economic and environmental systems and instil in the systems positive input to further encourage positive social change. Theories of social change might also prove useful as they examine the connection between the micro and macro levels (Hernes, 1976). In order for a partnership to address its impacts it requires identification of the social issue that the collaboration will address and the articulation of the effects of impacts on different targets. For example, following the categorization of our outcomes model (internal/external to the partnership and on the macro, meso and micro levels) can provide a systematic organization of the impacts. The extent to which a partnership delivers synergistic impacts is the critical test of the collaboration. The partners need to ask: did our collaboration make a difference, to whom and how? Following from our study, the next section provides brief conclusions and suggestions for future explorations.
Table 3: Impact assessment methodologies, adapted and compiled based on a wide range of sources, including: Cooperrider, Sorensen, Yaeger, & Whitnet, 2001; O’Flynn, 2010; Lim, 2010; Jorgensen, 2006.

ILLUSTRATIVE METHODS

Stories of Change (The Most Significant Change, MSC)
Description: Stories of change are used to illustrate change rather than measure change.
Usage: The method is employed to provide insights into the perceptions and expectations of stakeholders who participate in the process of evaluation. Selecting stories through the process allows expert panels to identify change/impact stories. MSC does not make use of pre-defined indicators, especially ones that have to be counted and measured. The technique is applicable in many different sectors, including agriculture, education and health, and especially in development programs. It is also applicable to many different cultural contexts and has been used in a wide variety of countries by a range of organizations.

Appreciative Enquiry
Description: Developing community maps, visualisation, and recording of changes in the lives of stakeholders.
Usage: Used in place of the traditional problem-solving approach (finding what is wrong and forging solutions to fix the problems), Appreciative Inquiry seeks what is "right" in an organization. It is a habit of mind, heart, and imagination that searches for the success, the life-giving force, the incidence of joy. It moves toward what the organization is doing right and provides a frame for creating an imagined future that builds on and expands the joyful and life-giving realities as the metaphor and organizing principle of the organization.

Future methods: Delphi survey technique
Description: Prediction based on experts.
Usage: A panel of experts judges the timing, probability, importance and implications of factors, trends, and events regarding the problem in question by creating a list of statements/questions and applying ratings; a first draft report is then developed, allowing for revisions based on feedback, which is incorporated in the final report.

EXPERIMENTAL METHODS

Experiments with randomized or matched controls
Description: Comparison between the control and experimental group.
Usage: A form of scientific experiment usually employed for testing the safety (adverse drug reactions) and effectiveness of healthcare services and health technologies. Before the intervention under study, subjects are randomly allocated to receive one or other of the alternative treatments. After randomization, the two (or more) groups of subjects are followed up in exactly the same way, and the only difference between the care they receive is, for example, the policy implementation of a partnership program. The method is also used in psychology and education. Matched-subject designs use separate experimental groups for each particular treatment, but rely upon matching every subject in one group with an equivalent in another; the idea behind this is that it reduces the chances of an influential variable skewing the results by negating it.

Shadow Controls
Description: Expert judgement.
Usage: The judgement of an expert is employed to assess the success of a programme. Such a design is useful when there is limited scope for a control group. The predictions (shadow controls) are followed by comparisons to the outcome data at the end of the programme, providing important feedback about the programme’s effectiveness. The method is used in healthcare.
PRACTICAL METHODS

Outcome Measurement
Description: Data collected and compared against national databases, in combination with mutually agreed assumptions between the partners.
Usage: Funder and grantee co-design the program and the measurement process. Experts may be consulted for advice; data is collected in house by the nonprofit organization with the assistance of the funder (technological or managerial). Instead of control groups, national databases may be used for comparison purposes. Most organizations appear to use this method.

Impact Achievement Potential
Description: Reliance on the grantee’s (nonprofit organization’s) measurement standards.
Usage: The funder accepts the self-reporting claims as reliable, particularly where the nonprofit organization has available measures and demographics.

FILLING THE GAPS & PUSHING THE FRONTIERS

We end by providing a few concluding observations and suggesting some avenues of further exploration to advance our collective knowledge. The Collaborative Value Creation Framework provided an analytical vehicle for reviewing the CSR and cross-sector collaboration literature relevant to the research question “How can collaboration between businesses and NPOs most effectively co-create significant economic and social value for society, organizations, and individuals?” The analytical framework for Collaborative Value Creation allows for a deeper understanding of the interactions that contribute to value creation. Building on earlier research, the purpose of the framework is twofold: first, it seeks to provide guidance to researchers and practitioners who would like to assess the success of their cross-sector interactions in producing value. Second, it aims to promote consistency and maximize comparability between processes and outcomes of collaboration. The Collaborative Value Creation framework is a conceptual and analytical vehicle for the examination of partnerships as multi-dimensional and multi-level value creation vehicles, and it aims to assist researchers and practitioners in positioning and assessing collaborative interactions. The intention of the framework is not to prescribe a fixed approach to value creation but to provide a frame for those seeking to maximize value creation across all levels of social reality. Practitioners should feel at liberty to adapt the framework to meet their particular requirements. Researchers should employ either the entire CVC framework or elements of it in order to examine the value creation spectrum, the relationship stages, partnering processes, and outcomes. The first CVC component aims to examine what sources of value the partners employ, how they are used, and to what effect (types of value produced); the second component aims to position partners’ cross-sector interactions within the collaboration continuum’s stages and examine the nature of the relationship according to the value descriptors (see Figure 1); the third component answers the question of how partnership processes contribute to the partners’ co-creation of value on the macro, meso, and micro levels. As such, it identifies who is involved in the partnership and how, and aims to maximize the interactive co-creation of value through processes. The final component, partnership outcomes, positions the value of each partner per level of analysis to facilitate the assessment of benefits and costs. It concludes with the examination of the outcomes and impact of partnerships in order to develop comparable mechanisms of value assessment, both qualitative and quantitative.
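As a purely illustrative aid, and not part of the published CVC framework, the sketch below shows one way a practitioner might record partnership outcomes using the categorization discussed above: each outcome is tagged by level of analysis (macro, meso, micro) and by whether it is internal or external to the partnership, so that benefits and costs can be grouped per level when assessing the outcomes component. The class and field names, and the example entry, are assumptions made for illustration.

```python
# Illustrative sketch only: a simple record structure for partnership outcomes,
# tagged by level of analysis and by internal/external locus, as the outcomes
# categorization in the text suggests. Names and example values are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class OutcomeRecord:
    description: str
    level: str             # "macro", "meso", or "micro"
    locus: str              # "internal" or "external" to the partnership
    beneficiary: str        # who experiences the benefit or cost
    qualitative_evidence: str = ""
    quantitative_indicator: str = ""

@dataclass
class PartnershipAssessment:
    partnership_name: str
    social_issue: str
    outcomes: List[OutcomeRecord] = field(default_factory=list)

    def by_level(self, level: str) -> List[OutcomeRecord]:
        """Group outcomes for reporting at a single level of analysis."""
        return [o for o in self.outcomes if o.level == level]

# Hypothetical usage
assessment = PartnershipAssessment("Example NPO-business partnership", "urban regeneration")
assessment.outcomes.append(OutcomeRecord(
    description="New employability skills among programme participants",
    level="micro", locus="external", beneficiary="programme participants",
    quantitative_indicator="number of participants completing training"))
print(len(assessment.by_level("micro")))
```

Structuring outcome records in this way would allow the same evidence to feed both qualitative reporting and simple quantitative summaries per level, in the spirit of the comparable value assessment mechanisms called for above.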
Figure 8 presents a summary view of the Framework’s Value Creation Spectrum’s key variables (collaboration stages, value sources, value types) and how they change as partnerships evolve from sole-creation to co-creation. The underlying general hypothesis is that greater value is produced the more one moves toward co-creation. INSERT FIGURE 8 HERE

Figure 8: COLLABORATIVE VALUE CREATION SPECTRUM
Form: Sole-Creation → Co-Creation
Stages: Philanthropic → Integrative/Transformational
Resource Complementarity: Low → High
Resource Type: Generic → Distinctive Competency
Resource Directionality: Unilateral → Conjoined
Linked Interests: Weak/Narrow → Strong/Broad
Associational Value: Modest → High
Transferred Resource Value: Depreciable → Renewable
Interaction Value: Minimal → Maximal
Synergistic Value: Least → Most
Innovation Value: Seldom → Frequent
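To make the spectrum variables in Figure 8 easier to work with, the following toy sketch records where a partnership sits on each variable using an arbitrary 0.0 (sole-creation pole) to 1.0 (co-creation pole) scale and averages the positions into a crude index of movement toward co-creation. The variable keys, the scale, and the aggregation rule are illustrative assumptions, not a method proposed in the framework itself.

```python
# Toy sketch (illustrative assumptions only): each Figure 8 variable is scored on
# an assumed 0.0 (sole-creation pole) to 1.0 (co-creation pole) scale, and a
# simple average indicates how far a partnership has moved toward co-creation.
SPECTRUM_VARIABLES = [
    "resource_complementarity",    # Low -> High
    "resource_type",               # Generic -> Distinctive competency
    "resource_directionality",     # Unilateral -> Conjoined
    "linked_interests",            # Weak/Narrow -> Strong/Broad
    "associational_value",         # Modest -> High
    "transferred_resource_value",  # Depreciable -> Renewable
    "interaction_value",           # Minimal -> Maximal
    "synergistic_value",           # Least -> Most
    "innovation_value",            # Seldom -> Frequent
]

def co_creation_index(scores: dict) -> float:
    """Average position across the spectrum variables (illustrative only)."""
    missing = [v for v in SPECTRUM_VARIABLES if v not in scores]
    if missing:
        raise ValueError(f"missing scores for: {missing}")
    return sum(scores[v] for v in SPECTRUM_VARIABLES) / len(SPECTRUM_VARIABLES)

# Hypothetical partnership leaning from the transactional toward the integrative stage
example_scores = {v: 0.5 for v in SPECTRUM_VARIABLES}
example_scores["linked_interests"] = 0.7
print(f"Co-creation index: {co_creation_index(example_scores):.2f}")
```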
It is clear from the literature review that value creation through collaboration is recognized as a central goal, but it is equally clear that it has not been analyzed by researchers and practitioners to the extent or with the systematic rigor that its importance merits. While many of the asserted benefits (and costs) of collaboration rest on strong hypotheses, there is a need for additional empirical research – quantitative and qualitative, case study and survey – to produce greater corroborating evidence. There has been in recent years an encouraging uptake in research in this direction, as well as growing attention by practitioners. The CVC Framework’s Value Creation Spectrum offers a set of variables and hypotheses in terms of sources and types of value that may help focus such research. Similarly, the models of the Partnering Processes identify multiple value determinants that merit additional study. There is a need for field-based research that documents specific value creation pathways. In all of this the focus is on the factors enhancing co-creation, particularly Synergistic Value. There is a need to demonstrate how and to what extent economic value creates social value and vice versa, whether simultaneously or sequentially. Understanding this virtuous value circle more deeply is at the heart of the paradigm change. It is hoped that such additional research will lead to further elaboration, revision, and refinement of the Framework’s theoretical construct. In terms of the Collaboration Continuum, there is a need to deepen our understanding of the enabling factors that permit collaborative relationships to enter the Integrative and Transformational stages. Within these higher-level collaborations, one needs to document how the co-creation process operates, renews, and grows. Given that these partnering forms are less common and more complex than earlier stages such as the philanthropic and transactional, in-depth case studies are called for, with longitudinal or retrospective analyses required to capture the evolutionary dynamics (Koza & Lewin, 2000). Of particular interest are the processes producing Innovation Value as a higher form of synergistic co-creation. In the Outcomes area it was evident from the literature review that impact at the societal level is relatively neglected in terms of documentation. Perhaps because of measurement complexity and costs, there is a tendency to assume societal betterment rather than assess it specifically. Consequently, the core question of “How is society better off due to the collaboration?” remains underdocumented. Collaborations do not always produce value: sometimes partners reach bad solutions, create new problems, and fail to solve the problems they originally aimed to address (Bryson et al., 2006; Austin, 2000a). The partnership literature is in the early stages of addressing issues of mapping the value creation road on different levels of analysis. Capturing the macro-level benefits and costs would require longitudinal studies by groups of researchers collaborating across interrelated fields and across multiple organizations, in order to capture how a direct social benefit has long-term economic effects across organizations. Such research teams have not yet emerged, as policy makers too have only recently demonstrated an interest in capturing impacts (ESRC, 2011). Furthermore, multi-level value assessment, i.e., introducing all three levels of analysis (organizational, individual, and social), is a recent focus in the literature (Seitanidi & Lindgreen, 2010). Examples include the study of the impact of social regeneration through partnership in disadvantaged communities (Cornelious & Wallace, 2010); the study of the orchestration of multilevel coordination that shapes relational processes of frame fusion in the process of value creation (Le Ber & Branzei, 2010c); and work addressing reciprocal multi-level change through the interplay between organizational, individual, and social levels of reality at the stage of partnership formation (Seitanidi, Koufopoulos & Palmer, 2010). The empirical studies that aim to capture social, societal, or systemic benefits (Seitanidi, 2010) employ the perceptions of organizational actors in the focal organizations without involving beneficiary voices, or, if they make reference to the beneficiaries, they employ a theoretical perspective (Le Ber & Branzei, 2010a). Overcoming the existing limitations of research that focuses on single organizations requires a shift in focus, means, and methods. Such changes will allow us to capture the interconnections of cross-sector social interactions on multiple levels and possibly unlock the secrets of our societies’ ability to achieve positive social change intentionally in a short period of time. Lastly, for CSR scholars there is the symmetry hypothesis that corporations must have advanced to the higher levels of CSR in order to engage effectively in the higher levels of collaborative value co-creation, with the latter being evidence of the former. Table 4 below offers new avenues for research within each CVC component, and it also contributes possible research questions that cut across the different components of our value creation framework.
INSERT TABLE 4 HERE

This literature review and conceptual paper are intended to help partnership professionals think systematically about their partnerships as internal and external value creation mechanisms. What partners do and how they implement partnerships will have an impact on the micro, meso, and macro levels, whether or not partners consider the co-creation of value explicitly or implicitly during the partnership processes. Similarly, value creation will have an effect on the partners and society. The CVC framework we propose can improve the understanding of value creation processes in partnerships and help anticipate the outcomes of partnerships on different levels of analysis. Given that our starting premise for this article was that value creation is the fundamental justification for cross-sector collaboration, our ending aspiration is that embedded in the minds of every collaboration scholar and practitioner be the following mandatory question: How will my research or my action contribute to the co-creation of value?

Table 4: RESEARCH AVENUES BY CVC COMPONENT

Component I: Value Creation Spectrum
- Is resource complementarity dependent on organizational fit? And what are the factors that affect resource complementarity for maximizing the co-creation of value?
- How do generic and organization-specific assets/competencies contribute to the co-creation of synergistic value?
- Which distinctive competencies of the organization contribute most to the co-creation of value? And how?
- How do different combinations of resource types across the partners produce economic and social value?
- What are the evolving patterns of value creation per resource type and resource directionality?
- How can partners link their interests with the social good?
- Does the co-creation of synergistic economic and social value depend on the degree to which the interests of the partners are linked with each other and with the social good?
- Are associational, transferred, interaction, and synergistic value produced in different degrees across the collaboration continuum?
- What is the relationship between the different types of value produced?
- What is the role of tangible and intangible resources in co-creating social value?
- How can partners achieve value renewal?

Component II: Relationship Stages
- How do the value descriptors associated with the nature of the relationship in the Collaboration Continuum relate to each stage of the continuum in different fields of partnerships?
- How can the Collaboration Continuum be associated with the evolution of the appreciation of social responsibilities in organizations?
- What forms of cross-sector social interactions can be grouped under the transformation stage of the Collaboration Continuum?
- What sources and types of value are associated with each stage of the Collaboration Continuum (Philanthropic, Transactional, Integrative, and Transformational)?
- What are the key enablers of moving to each higher level of collaboration in the Continuum?

Component III: Partnering Processes
- How can partners maximize their partnership fit potential?
- How do partners articulate social problems, and how do they develop frames that connect them with their interests and the social good?
- Do partners’ motives link with their partnership strategies?
- How can we examine systematically the history of the partners’ interactions in time?
- What is the role of partnership champions before and during the partnership?
- Should partners reconcile their value frames, to what extent, and how?
- How can partnership processes increase the potential for the co-creation of synergistic value?
- How can partnerships strengthen their accountability through their process mechanisms?
- How can partnership processes enhance societal outcomes?
- How can the processes in partnerships facilitate the development of new capabilities and skills?
- How can processes in partnerships facilitate value renewal?
- How can evaluation of the partnership implementation strengthen the value creation process?
- How can the evaluation of partnership implementation improve the benefits for both partners but also for society?

Component IV: Partnering Outcomes
- How do partners view their own and each other’s benefits and costs from the collaboration?
- How is social value generated as a result of the partnership outcomes?
- Do partnerships constitute intentional social change mechanisms? And how?
- How do the loci of value creation in partnerships interact?
- Are the multiple levels of value creation interdependent, and what are the links between the micro, meso, and macro levels?
- What is the relation between benefits and costs in partnerships?
- What are the links between social and economic value creation and the different types of benefits and costs in partnerships?
- What are the partnership benefits and costs for the stakeholders? And for the beneficiaries of partnerships?
- How can we conceptualize the links between the benefits and costs in cross-sector social partnerships?
- How does external value created in partnerships contribute to socio-economic value creation for the partners?
- How do partnerships’ direct and indirect benefits link to the different levels of value creation (macro, meso, micro)?
- What is the role of vision in producing socio-economic value in partnerships?
- How can the different types of value be assessed in partnerships?
- How can we develop a systematic and transparent value assessment in partnerships?
- How can assessment in partnerships strengthen decision making?
- How can indicators of value assessment in partnerships account for the different levels of value creation?
- How can we connect the different points of evaluation in partnerships (process outcomes, program outcomes, and impact) to strengthen value creation on different levels?
- How can we assess the long-term impact of partnerships? Which are the most appropriate methods to assess impact?
- To what extent do partnerships deliver synergistic impacts? For whom? And how?

Overarching themes across components
- How and to what extent does economic value create social value, and vice versa?
- Is social and economic value being created simultaneously or sequentially?
- Can we invent a new measure that assesses multidimensional (economic, social, environmental) and multilevel (macro, meso, micro) value?
- How do partnerships re-constitute value?
- How can partnerships function as global mechanisms of societal governance?

REFERENCES

Aguilera, R. V., Rupp, D. E., Williams, C. A., & Ganapathi, J. (2007). Putting the S back in corporate social responsibility: A multilevel theory of social change in organizations. Academy of Management Review, 32(3), 836-863.
Ählström, J., & Sjöström, E. (2005). CSOs and business partnerships: Strategies for interaction. Business Strategy and the Environment, 14(4), 230-240.
Alsop, R. J. (2004). The 18 immutable laws of corporate reputation. New York: Free Press.
Alter, C., & Hage, J. (1993). Organizations working together. Newbury Park, CA: Sage.
Amabile, T. M. (1996). Creativity in context. (Update to The social psychology of creativity.)
Boulder, CO: Westview Press. Andreasen, A. R. (1996). Profits for nonprofits: Find a corporate partner. Harvard Business Review, 74(6), 47-50, 55-59. Andrews, R., & Entwistle, T. (2010). Does cross-sectoral partnership deliver? An empirical exploration of public service effectiveness, efficiency, and equity. Journal of Public Administration Research and Theory, 20(3), 679–701. Andrioff, J. (2000). Managing social risk through stakeholder partnership building: Empirical descriptive process analysis of stakeholder partnerships from British Petroleum in Colombia and Hoechst in Germany for the management of social risk. PhD thesis, Warwick University. Andrioff, J., & Waddock, S. (2002). Unfolding stakeholder management. In J. Andriof & S. Waddock (Eds.), Unfolding Stakeholder Thinking (pp. 19-42). Sheffield: Greenleaf Publishing. Anheier, H. K., & Hawkes, A. (2008). Accountability in a globalised world. In F. Holland (Eds.), Global Civil Society 2007/08: Communicative power and democracy. Beverly Hills: Sage. Argenti, P. A. (2004). Collaborating with activists: how Starbucks works with NGOs. California Management Review, 47(1), 91-116. Arya, B., & Salk, J. E. (2006). Cross-sector alliance learning and effectiveness of voluntary codes of corporate social responsibility. Business Ethics Quarterly, 16(2), 211-234. Ashman, D. (2000). Promoting corporate citizenship in the global south: Towards a model of empowered civil society collaboration with business. IDR Reports, 16(3), 1-24. Ashman, D. (2001). Civil society collaboration with business: Bringing empowerment back in. World Development, 29(7), 1097-1113.69 Astley, W. G. (1984). Toward an appreciation of collective strategy. Academy of Management Review, 9, 526–535. Audit Commission (1998). A Fruitful Partnership. London: Audit Commission. Austin, J. E. (1998). Business leaders and nonprofits. Nonprofit Management and Leadership, 9(1), 39-51. Austin, J. E. (2000a). The collaboration challenge: How nonprofits and businesses succeed through strategic alliances. San Francisco: Jossey-Bass Publishers. Austin, J. E. (2000b). Strategic collaboration between nonprofits and businesses. Nonprofit and Voluntary Sector Quarterly, 29 (Supplement 1), 69-97. Austin, J. E. (2003). Strategic alliances: Managing the collaboration portfolio. Stanford Social Innovation Review, 1(2), 49-55. Austin, J. E. (2010). From organization to organization: On creating value. Journal of Business Ethics, 94 (Supplement 1), 13-15. Austin, J. E., & Elias, J. (2001). Timberland and community involvement. Harvard Business School Case Study. Austin, J. E., Gutiérrez, R., Ogliastri, E., & Reficco, E. (2007). Capitalizing on convergence. Stanford Social Innovation Review, Winter, 24-31. Austin, J. E., Leonard, H. B., & Quinn, J. W. (2004). Timberland: Commerce and justice. Boston: Harvard Business School Publishing. Austin, J. E., Leonard, H. B., Reficco, E., & Wei-Skillern, J. (2006). Social entrepreneurship: It’s for corporations, too. In A. Nicholls (Eds.), Social entrepreneurship: New models of sustainable social change (pp. 169-180). Oxford: Oxford University Press. Austin, J., & Reavis, C. (2002). Starbucks and conservation international. Cambridge, MA: Harvard Business School Case Services. Austin, J. E., Reficco, E., Berger, G., Fischer, R. M., Gutierrez, R., Koljatic, M., Lozano, G., Ogliastri, E., & SEKN team (2004). Social partnering in Latin America: Lessons drawn from collaborations of business and civil society organizations. Cambridge, MA: Harvard University Press. 
Austin, J. E., Stevenson, H., & Wei-Skillern, J. (2006). Social and commercial entrepreneurship: The same, different, or both? Entrepreneurship Theory and Practice, 30(1), 1-22. Avon Foundation for Women (2011). The avon breast cancer crusade. Retrieved from www.avonfoundation.org/breast-cancer-crusade. Balogun, J., & Johnson, G. (2004). Organizational restructuring and middle manager sensemaking. Academy of Management Journal, 47, 523–549.70 Barnett, M. L. (2007). Stakeholder influence capacity and the variability of financial returns to corporate social responsibility. The Academy of Management Review, 32(3), 794-816. Barrett, D., Austin, J. E., & McCarthy, S. (2000). Cross sector collaboration: Lessons from the international Trachoma Initiative. In M. R. Reich (Eds.), Public-private partnerships for public health. Cambridge, MA: Harvard University Press. Bartel, C. A. (2001). Social comparisons in boundary-spanning work: Effects of community outreach on members’ organizational identity and identification. Administrative Science Quarterly, 46, 379-413. Barton, D. (2011). Capitalism for the long term. Harvard Business Review, March. Basil, D. Z., & Herr, P. M. (2003). Dangerous donations? The effects of cause-related marketing on charity attitude. Journal of Nonprofit & Public Sector Marketing, 11(1), 59-76. Ben, S. (2007). New processes of governance: Cases for deliberative decision-making. Managerial Law, 49(5/6), 196-205. Bendell, J. (2000b). A no win-win situation? GMOs, NGOs and sustainable development. In J. Bendell (Eds.), Terms for endearment: Business, NGOs and sustainable development (pp. 96-110). Sheffield: Greenleaf Publishing. Bendell, J. (2000a). Working with stakeholder pressure for sustainable development. In J. Bendell (Eds.), Terms for endearment: Business, NGOs and sustainable development (pp. 15-110). Sheffield: Greenleaf Publishing. Bendell, J. (2004). Flags of convenience? The global compact and the future of the United Nations. ICCSR Research Paper Series, 22. Bendell, J., & Lake, R. (2000). New Frontiers: Emerging NGO activities and accountability in business. In J. Bendell (Eds.), Terms for endearment: Business, NGOs and sustainable development (pp. 226-238). Sheffield: Greenleaf Publishing. Bennett, R. (1999). Sports sponsorship, spectator recall and false consensus. European Journal of Marketing, 33(3/4), 291-313. Berger, I. E., Cunningham, P. H., & Drumwright, M. E. (2004). Social alliances: Company/nonprofit collaboration. California Management Review, 47(1), 58-90. Bhattacharya, C. B., Korschun, D., & Sen, S. (2009). Strengthening stakeholder-company relationships through mutually beneficial corporate social responsibility initiatives. Journal of Business Ethics, 85 (Supplement 2), 257–272. Bhattacharya, C. B. & Sen, S. (2004). Doing better at doing good: When, why and how consumers respond to social initiatives. California Management Review, 47(1), 9-24. 71 Bhattacharya, C. B., Sen, S., & Korschun, D. (2008). Using corporate social responsibility to win the war for talent. MIT Sloan Management Review, 49(2), 37-44. Biermann, F., Chan, M., Mert, A., & Pattberg, P. (2007). Multi-stakeholder partnerships for sustainable development: Does the promise hold? In P. Glasbergen, F. Biermann & A. P. J. Mol (Eds.), Partnerships, governance and sustainable development: Reflections on theory and practice (239-260). Cheltenham: Edward Elgar. Biermann, F., Mol, A. P. J., & Glasbergen, P. (2007). 
Conclusion: Partnerships for sustainability – reflections on a future research agenda. In P. Glasbergen, F. Biermann & A. P. J. Mol (Eds.), Partnerships, governance and sustainable development: Reflections on theory and practice (288-300). Cheltenham: Edward Elgar. Birch, D. (2003). Doing Business in New Ways. The Theory and Practice of Strategic Corporate Citizenship with Specific Reference to Rio Tinto’s Community Partnerships. A Monograph. Corporate Citizenship Unit, Deakin University, Melbourne Bishop, M., & Green, M. (2008). Philanthrocapitalism: How giving can save the world. New York: Bloomsbury Press. BITC & Doughty Report (2011). The Business Case of CSR for being a responsible business. Business in the Community and the Doughty Centre for Corporate Responsibility. Available from: www.bitc.org.uk/research Accessed June 2011. Blankenberg, F. (1995). Methods of impact assessment research programme, resource pack and discussion. The Hague: Oxfam UK/I and Novib. Bockstette, V., & Stamp, M. (2011). Creating shared value: A how-to guide for the new corporate (r)evolution. Retrieved from http://www.fsg.org/Portals/0/Uploads/Documents/PDF/Shared_Value_Guide.pdf?cpgn=WP%20DL%20- %20HP%20Shared%20Value%20Guide [Accessed May 5, 2011]. Boschee, J., & McClurg, J. (2003). Toward a better understanding of social entrepreneurship: some important distinctions. Retrieved from http://www.se-alliance.org/. Boston College Center for Corporate Citizenship & Points of Light Foundation (2005). Measuring employee volunteer programs: The human resources model. Retrieved from http://www.bcccc.net. Bowen, H. R. (1953). Social responsibilities of the businessman. New York: Harper & Row. Bowen, F., Newenham-Kahindi, A., & Herremans, I. (2010). When suits meets roots: The antecedents and consequences of community engagement strategy. Journal of Business Ethics, 95(2), 297-318.72 Bowman, C. & Ambrosini, V. (2000). Value creation versus value capture: Towards a coherent definition of value in strategy. British Journal of Management, 11, 1-15. Brammer, S. J. & Pavelin, S. (2006). Corporate Reputation and Social Performance: The Importance of Fit. Journal of Management Studies 43:3 May, pp. 435-454. Brickson, S. L. (2007). Organizational identity orientation: The genesis of the role of the firm and distinct forms of social value. Academy of Management Review, 32, 864-888. Brinkerhoff, J. M. (2002). Assessing and improving partnership relationships and outcomes: A proposed framework. Evaluation and Program Planning, 25 (3), 215-231. Brinkerhoff, J. M. (2007). Partnerships as a means to good governance: Towards an evaluation framework. In P. Glasbergen, F. Biermann & A. P. J. Mol (Eds.), Partnerships, governance and sustainable development: Reflections on theory and practice (68-92). Cheltenham: Edward Elgar. Bromberger, A. R. (2011). A new type of hybrid. Stanford Social Innovation Review, Spring, 48-53. Brown, L. D. (1991). Bridging organizations and sustainable development. Human Relations, 44(8), 807- 831. Brown, T. J., & Dacin, P. A., (1997). The Company and the Product: Corporate Associations and Consumer Product Responses. The Journal of Marketing Vol. 61, No. 1 (Jan., 1997), pp. 68-84. Brown, L. D., & Kalegaonkar, A. (2002). Support organizations and the evolution of the NGO Sector. Nonprofit and Voluntary Sector Quarterly, 31(2), 231-258. Bryson, J., & Crosby, B. (1992). Leadership for the common good: Tackling public problems in a shared power world. San Francisco: Jossey Bass. Bryson, J. M., Crosby, B. 
C., & Middleton Stone, M. (2006). The design and implementation of cross- sector collaborations: Propositions from the literature. Public Administration Review, 66, 44-55. Burchell, J, & Cook, J. (2011, July 6-9). Deconstructing the myths of employer sponsored volunteering schemes. Paper presented at the 27th EGOS Colloquium in Gothenburg, Sweden (Theme 16). Burke, L., & Logsdon, J. M. (1996). How corporate social responsibility pays off. Long RangePlanning, 29 (4), 495-502. Burke Marketing Research (1980). Day-after Recall TV Commercial Testing. Columbus: Burke Inc. C&E (2010). Corporate-NGO Partnership Barometer Summary Report. Retrieved from http://www.candeadvisory.com/sites/default/files/report_abridged.pdf [Accessed January, 2011]. Cairns, B., Harris, M., & Hutchison, R. (2010, June 29). Collaboration in the voluntary sector: A meta- analysis. IVAR Anniversary Event. 73 Campbell, J. L. (2007). Why would corporations behave in socially responsible ways? An institutional theory of corporate social responsibility. The Academy of Management Review, 32(3), 946-967. Carroll, A. B. (1999). Corporate social responsibility: Evolution of a definitional construct. Business & Society, 38(3), 268-295. Carroll, A. B. (2006). Corporate social responsibility: A historical perspective. In M. J. Epstein & K. O. Hanson (Eds.), The accountable corporation: Corporate social responsibility (pp. 3-30). Westport, CT: Praeger Publishers. Carrigan, M. (1997). The great corporate giveaway - can marketing do good for the do-gooders? European Business Journal, 9(4), pp. 40–46. Castaldo, S., Perrini, F., Misani, N., & Tencati, A. (2009). The missing link between corporate social responsibility and consumer trust: The case of fair trade products. Journal of Business Ethnics, 84(1), 1- 15. Croteau, D. & Hicks, L. (2003). Coalition Framing and the challenge of a consonant frame pyramid: The case of collaborative response to homelessness. Social Problems, 50(2), 251-272. Christensen, C. M., Baumann, H., Ruggles, R., & Sadtler, T. M. (2006). Disruptive innovation for social change. Harvard Business Review, 84(12), 96-101. Clarke, A. (2007a, May 24). Cross sector collaborative strategic management: Regional sustainable development strategies. Presentation at the Scoping Symposium: The future challenges of cross sector interactions, London, England. Clarke, A. (2007b, April 19-20). Furthering collaborative strategic management theory: Process model and factors per phase. Presented at the Sprott Doctoral Symposium, Ottawa, Canada. Clarke, A., & Fuller, M. (2010). Collaborative strategic management: Strategy formulation and implementation by multi-organizational cross-sector social partnerships. Journal of Business Ethics, 94 (Supplement 1), 85-101. Collier, J., & Esteban, R. (1999). Governance in the participative organization: Freedom, creativity and ethics. Journal of Business Ethics, 21, 173-188. Commins , S . (1997). World vision international and donors: Too close for comfort. In M. Edwards & D. Hulme (Eds.), NGOs, states and donors: Too close for domfort? (pp. 140-155). Basingstoke/London: The Save the Children Fund. Cone (2004). Corporate citizenship study: Building brand trust. Retrieved from: http://www.coneinc.com/content10862004. Connell, J. P. & Kubisch, A. C. (1998). Applying a theory of change approach to the evaluation of comprehensive community initiatives: Progress, prospects and problems, In: Fulbright-Anderson, K.,74 Kubisch, A.C. and Connell, J. P. 
(eds) (1998), New Approaches to Evaluating Community initiatives, vol. 2: Theory, Measurements and Analysis (Washington, Dc: Aspen Institute). Cook, J. & Burchell, J. (2011, July 6-9). Deconstructing the myths of employer sponsored volunteering schemes. Paper presented at the 27th EGOS Colloquium in Gothenburg, Sweden (Theme 16). Cooperrider, D. Sorensen, P.F. Yaeger, T.F & Whitnet, D. (2001) Appreciative Inquiry. An emerging direction for organization development. Stipes. Cooper, T. L., Bryer, T. A., & Meek, J. C. (2006). Citizen-centered collaborative public management. Public Administration Review, 66, 76-88. Cornelious, N., & Wallace, J. (2010). Cross-sector partnerships: City regeneration and social justice. Journal of Business Ethics, 94 (Supplement 1), 71-84. Covey, J., & Brown, L. D. (2001). Critical co-operation: An alternative form of civil society- business engagement. IDR Reports, 17(1), 1-18. Crane, A. (1997). Rhetoric and reality in the greening of organizational culture. In G. Ledgerwood (Eds.), Greening the boardroom: Corporate environmental governance and business sustainability (pp.130-144). Sheffield: Greenleaf Publishing. Crane, A. (1998). Exploring green alliances. Journal of Marketing Management, 14(6), 559-579. Crane, A. (2000). Culture clash and mediation: Exploring the culture dynamics of business-NGO collaboration. In J. Bendell (Eds), Terms for endearment: Business, NGOs and sustainable development (pp. 163-177). Sheffield: Greenleaf Publishing. Crane, A. (2010). From governance to governance: On blurring boundaries. Journal of Business Ethics, 94 (Supplement 1), 17-19. Crane, A., & Matten, D. (2007). Business ethics: Managing corporate citizenship and sustainability in the age of globalization. Oxford: Oxford University Press. Cropper, S. (1996). Collaborative working and the issue of sustainability. In C. Huxham (Eds.), Creating Collaborative Advantage (pp. 80-100). London: Sage. Croteau, D., & Hick, L. (2003). Coalition framing and the challenge of a consonant frame pyramid: The case of a collaborative response to homelessness. Social Problems, 50, 251–272. Dalal-Clayton, B., & Bass, S. (2002). Sustainable development strategies: A resource handbook. London, The International Institute for Environment and Development: Earthscan Publications Ltd. Das, T. K., & Teng, B. S. (1998). Between trust and control: Developing confidence in partner cooperation in alliances. The Academy of Management Review, 23(3), 491-512. 75 Davies, R. & Dart, J. (2005). The ‘Most Significant Change’ (MSC) Technique. A Guide to Its Use. Version 1.00 – April 2005. Available from: http://www.mande.co.uk/docs/MSCGuide.pdf De Bakker, F. G. A., Groenewegen, P., & Den Hond, F. (2005). A bibliometric analysis of 30 years of research and theory on corporate social responsibility and corporate social performance. Business & Society, 44(3), 283-317. De Beers Group (2009). Report to Society 2009. Living up to diamonds. Retrieved from http://www.debeersgroup.com/ImageVault/Images/id_2110/scope_0/ImageVaultHandler.aspx Dees, J. G. (1998a). Enterprising nonprofits. Harvard Business Review, January-February, 55-67. Dees, J. G. (1998b). The meaning of ‘social entrepreneurship. Comments and suggestions contributed from the Social Entrepreneurship Funders Working Group, Center for the Advancement of Social Entrepreneurship. Fuqua School of Business: Duke University. Dees, J. G., & Anderson, B. B. (2003). Sector-bending: Blurring lines between nonprofit and for-profit. Society, 40(4), 16-27. 
Deloitte (2004). Deloitte volunteer IMPACT survey. Retrieved from: http://www.deloitte.com/view/en_US/us/Services/additional-services/chinese-services- group/039d899a961fb110VgnVCM100000ba42f00aRCRD.htm. Dew, N., Read, S., Sarasvathy, S. D., & Wiltbank, R. (2008). Effectual versus predictive logics in entrepreneurial decision-making: Differences between experts and novices. Journal of Business Venturing, 24, 287–309. Di Maggio, P., & Anheier, H. (1990). The sociology of the non-profit sector. Annual Review of Sociology, 16, 137-159. Dobbs, J. H. (1999). Competition’s new battleground: The integrated value chain. Cambridge, MA: Cambridge Technology Partners. Donaldson, T., & Preston, L. E. (1995). The stakeholder theory of the corporation: Concepts, evidence, and implications. Academy of Management Review, 20(1), 65-91. Dowling, B., Powell, M., & Glendinning, C. (2004) Conceptualising successful partnerships. Health and Social Care in the Community, 12(4), 309-317 Draulans, J., deMan, A. P., & Volberda, H. W. (2003). Building alliance capability: Managing techniques for superior performance. Long Range Planning, 36(2), 151-166. Drucker, P., E. (1989). What Business can Learn from Nonprofits. Harvard Business Review, July-August: 88-93.76 Ebrahim, A . (2003). Making sense of accountability: Conceptual perspectives for northern and southern nonprofits. Nonprofit Management and Leadership, 14(2), 191-212. Eccles, R. G., Newquist, S. C., & Schatz, R. (2007). Reputation and its risks. Harvard Business Review, 85(2), 104-114, 156. Edwards, M., & Hulme, D. (1995). Performance and accountability: Introduction and overview. In M. Edwards & D. Hulme (Eds.), Beyond the magic bullet: Non-governmental organizations-performance and cccountability (pp. 3-16). London: Earthscan Publications. Egri, C. P., & Ralston, D. A. (2008). Corporate responsibility: A review of international management research from 1998 to 2007. Journal of International Management, 14, 319–339. Eisingerich, A. B., Rubera, G., Seifert, M., & Bhardwaj, G. (2011). Doing good and doing better despite negative information? The role of corporate social responsibility in consumer resistance to negative information. Journal of Service Research, 14(1), 60-75. El Ansari, W., Phillips, C., & Hammick, M. (2001). Collaboration and partnership: Developing the evidence base. Health and Social Care in the Community, 9, 215–227. El Ansari, W., & Weiss, E. S. (2005). Quality of research on community partnerships: Developing the evidence base. Health Education Research, 21(2), 175-180. Elbers, W. (2004). Doing business with business: Development NGOs interacting with the corporate sector. Retrieved from http://www.evertvrmeer.nl/download.do/id/100105391/cd/true/ Elkington, J. (1997). Cannibals with forks: The triple bottom line of 21st century business. Oxford: Capstone Publishing. Elkington, J. (2004). The triple bottom line: Sustainability’s accountants. In M. J. Epstein & K. O. Hanson (Eds.), The accountable corporation: Corporate social responsibility (pp. 97-109). Westport, CT: Praeger Publishers. Elkington, J., & Fennell, S. (2000). Partners for sustainability. In J. Bendell (Eds.), Terms for endearment: Business, NGOs and sustainable development (pp. 150-162). Sheffield: Greenleaf Publishing. Emerson, J. (2003). The blended value proposition: Integrating social and financial returns. California Management Review, 45(4), 35-51. Endacott, R. W. J. (2003). Consumers and CSRM: A national and global perspective. 
Journal of Consumer Marketing, 21(3), 183-189. Epstein, M. J., & McFarlan, F. W. (2011). Joining a nonprofit board: What you need to know. San Francisco: Jossey-Bass. Farquason, A. (2000, November 11). Cause and effect. The Guardian.77 Finn, C. B. (1996). Utilizing stakeholder strategies for positive collaborative outcomes. In C. Huxham (Eds.), Creating Collaborative Advantage (pp. 152-164). London: Sage. Fiol, C. M., Pratt, M. G., & O’Connor, E. J. (2009). Managing intractable identity conflicts. Academy of Management Review, 34, 32–55. Forsstrom, B. (2005). Value Co-Creation in Industrial Buyer-Seller Partnerships – Creating and Exploiting Interdependencies An Empirical Case Study. ABO AKADEMIS FORLAG – ABO AKADIMI UNIVERSITY PRESS Fournier, D. (1995). Establishing evaluative conclusions: A distinction between general and working logic. New Directions for Evaluation, 68, 15-32. Freeman, R. E. (1984). Strategic management: A stakeholder approach. Boston: Pitman Publishing. Freeman, R. E. (1999). Divergent stakeholder theory. Academy of Management Review, 24, 233-236. Friedman, M. (1962). Capitalism and Freedom. Chicago: University of Chicago Press. Friedman, M. (1970, September 13). The social responsibility of business is to increase its profits. New York Times Magazine, 122-126. Galaskiewicz, J. (1985). Interorganizational relations. Annual Review of Sociology, 11, 281-304. Galaskiewicz, J. (1997). An urban grants economy revisited: Corporate charitable contributions in the Twin Cities, 1979-81, 1987-89. Administrative Science Quarterly, 42, 445-471. Galaskiewicz, J., & Sinclair Colman, M. (2006). Collaboration between corporations and nonprofit organizations. In R. Steinberg & W. W. Powel (Eds.), The non-profit sector: A research handbook (pp. 180- 206). New Haven, CT: Yale University Press. Galaskiewicz, J., & Wasserman, S. (1989). Mimetic processes within an interorganizational field: An empirical test. Administrative Science Quarterly, 34, 454-479. Galbreath, J. R. (2002). Twenty first century management rules: The management of relationships as intangible assets. Management Decision, 40(2), 116-126. Garriga, E., & Melé, D. (2004). Corporate social responsibility theories: Mapping the territory. Journal of Business Ethics, 53, 51-71. Gerde, V. W., & Wokutch, R. E. (1998). 25 years and going strong: A content analysis of the first 25 years of the social issues in management division proceedings. Business & Society, 37(4), 414-446. Geringer, J. M. (1991). Strategic determinants of partner selection criteria in international joint ventures. Journal of International Business Studies, 22, 41-62.78 Geringer, J.M., & Herbert, L. (1989). Measuring performance of international joint ventures. Journal of International Business Studies, 22, 249-263. Giving USA Foundation, 2010, Giving USA 2010: The Annual report of Philanthropy for the Year 2009, Indianapolis, Indiana: The Center on Philanthropy at Indiana University Glasbergen, P. (2007). Setting the scene: The partnership paradigm in the making. In P. Glasbergen, F. Biermann & A. P. J. Mol (Eds.), Partnerships, governance and sustainable development: Reflections on theory and practice (pp. 1-28). Cheltenham: Edward Elgar. Glasbergen, P., Biermann, F., & Mol, A. P. J. (2007). Partnerships, governance and sustainable development: Reflections on theory and practice. Cheltenham: Edward Elgar Publishing Limited. GlobeScan (2003). Corporate Social Responsibility Monitor. 
Retrieved from http://www.deres.org.uy/home/descargas/guias/GlobalScan_Monitor_2003.pdf GlobeScan (2005). Corporate Social Responsibility Monitor. Retrieved from http://www.deres.org.uy/home/descargas/guias/GlobalScan_Monitor_2005.pdf Glynn, M. A. (2000). When cymbals become symbols: Conflict over organizational identity within a symphony orchestra. Organization Science, 11, 285–298. Godfray, P. C., & Hatch, N. W. (2007). Researching corporate social responsibility: An agenda for the 21st century. Journal of Business Ethics, 70, 87-98. Godfrey, P. C., Merrill, C. B., & Hansen, J. M. (2009). The relationship between corporate social responsibility and shareholder value: An empirical test of the risk management hypothesis. Strategic Management Journal, 30(4), 425-445. Goodpaster, K. E., & Matthews, J. B. (1982). Can a corporation have a conscience? Harvard Business Review, January-February, 132-141. Goffman, E. (1983). The interaction order. American Sociological Review, 48(1), 1–17. Googins, B. K., Mirvis, P. H., & Rochlin, S. A. (2007). Beyond good company: Next generation corporate citizenship. New York: Palgrave MacMillan. Googins, B. K., & Rochlin, S. A. (2000). Creating the partnership society: Understanding the rhetoric and reality of cross-sectoral partnerships. Business and Society Review, 105(1), 127-144. Gourville, J. T., & Rangan, V. K. (2004). Valuing the cause marketing relationship. California Management Review, 47(1), 38-57. Granovetter, M. (1985). Economic action and social structure: The problem of embeddedness. American Journal of Sociology, 91, 481-510.79 Gray, B. (1989). Collaborating. San Francisco: Jossey-Bass. Gray, S., & Hall, H. (1998). Cashing in on charity’s good name. The Chronicle of Philanthropy, 25, 27-29. Green, T., & Peloza, J. (2011). How does corporate social responsibility create value for consumers? Journal of consumer marketing, 28(1), 48-56. Greenall, D., & Rovere, D. (1999). Engaging stakeholders and business-NGO partnerships in developing countries. Ontario: Centre for innovation in Corporate Social Responsibility. Greening, D. W., & Turban, D. B. (2000). Corporate social performance as a competitive advantage in attracting a quality workforce. Business & Society, 39(3), 254-280. Griffin, J. J., & Mahon, J. F. (1997). The corporate social performance and corporate financial performance debate: Twenty-five years of incomparable research. Business & Society, 36(1), 5-31. Grolin, J. (1998). Corporate legitimacy in risk society: The case of Brent Spar. Business Strategy and the Environment, 7(4), 213-222. Gunderson, L.H. and Holling, C. S. (2001). Panarchy: understanding transformations in humans and natural systems. Washington DC: Island Press. Haddad, K. A., & Nanda, A. (2001). The American Medical Association-Sunbeam deal (A - D). Harvard Business School Case Study. Halal, W. E. (2001). The collaborative enterprise: A stakeholder model uniting probability and responsibility. Journal of Corporate Citizenship, 1(2), 27-42. Hamman, R., & Acutt, N. (2003). How should civil society (and the government) respond to ‘corporate social responsibility’? A critique of business motivations and the potential for partnerships. Development Southern Africa, 20(2), 255-270. Hammond, A. L., Kramer, W. J., Katz, R. S., Tran, J. T., & Walker, C. (2007). The next four billion: Market size and business strategy at the base of the pyramid. Washington DC: International Finance Corporation/ World Resources Institute. Harbison, J. R., & Pekar, P. (1998). 
Smart alliances: A practical guide to repeatable success. San Francisco: Jossey-Bass. Hardy, B., Hudson, B., & Waddington, E. (2000). What makes a good partnership? Leeds: Nuffield Institute. Hardy, C., Lawrence, T. B., & Phillips, N. (2006). Swimming with sharks: Creating strategic change through multi-sector collaboration. International Journal of Strategic Change Management, 1, 96-112. Hardy, C., Phillips, N., & Lawrence, T. B. (2003). Resources, knowledge and influence: The organizational effects of interorganizational collaboration. Journal of Management Studies, 40, 321–47.80 Harris, L. C., & Crane, A. (2002). The greening of organizational culture: Managers’ views on the depth, degree and diffusion of change. Journal of Organizational Change Management, 15(3), 214-234. Hartwich, F., Gonzalez, C., & Vieira, L. F. (2005). Public-private partnerships for innovation-led growth in agrichains: A useful tool for development in Latin America? ISNAR Discussion Paper, 1. Washington, DC: International Food Policy Research Institute. Head, B. W. (2008). Assessing Network-based collaborations. Effectiveness for whom? Public Management Review, 10(6), pp. 733-749. Heal, G. (2008). When principles pay: Corporate social responsibility and the bottom line. New York: Columbia University Press. Heap, S. (1998). NGOs and the private sector: Potential for partnerships? INTRAC Occasional Papers Series, 27. Hernes, G. (1976). Structural Change in social processes. The American journal of Sociology, 82(3),pp. 513-547. Heap, S. (2000). NGOs engaging with business: A world of difference and a difference to the world. Oxford: Intrac Publications. Heath, R. L. (1997). Strategic issues management: Organizations and public policy challenge. Thousand Oaks, CA: Sage. Hendry, J. R. (2006). Taking aim at business: What factors lead environmental non-governmental organizations to target particular firms? Business & Society, 45(1), 47-86. Heugens, P. P. M. A. R. (2003). Capability building through adversarial relationships: A replication and extension of Clarke and Roome (1999). Business Strategy and the Environment, 12, 300-312. Heuer, M. (2011). Ecosystem cross-sector collaboration: Conceptualizing an adaptive approach to sustainable governance. Business Strategy and the Environment, 20, 211-221. Hill, C. W. L., & Jones, T. M. (1985). Stakeholder agency theory. Journal of Management Studies, 29(2), 131-154. Hiscox, M. & Smyth, N. (2008). Is there Consumer Demand for Improved Labor Standards? Evidence from Field Experiments in Social Product Labeling Version. Harvard University Research Paper, 3/21/08. Hitt, M. A., Ireland, R. D., Sirmon, D. G., & Trahms, C. (2011). Strategic entrepreneurship: Creating value for individuals, organizations, and society. Academy of Management Perspectives, 25(2), 57-75.81 Hoeffler, S., & Keller, K. L. (2002). Building brand equity through corporate societal marketing. Journal of Public Policy & Marketing, 21(1), 78-89. Hoffman, W. H. (2005). How to manage a portfolio of alliances. Long Range Planning, 38(2), 121-143. Holmberg , S. R., & Cummings, J. L. (2009). Building successful strategic alliances: Strategic process and analytical tool for selecting partner industries and firms. Long Range Planning, 42(2), 164-193. Holmes, S., & Moir, L. (2007). Developing a conceptual framework to identify corporate innovations through engagement with non-profit stakeholders. Corporate Governance, 7(4), 414-422. Hood, J. N., Logsdon J. M., & Thompson J. K. (1993). 