NYU Stern Center for Business and Human Rights responds to the Corporate Human Rights Benchmark Draft List of Indicators

To the Corporate Human Rights Benchmark and associated groups:

Thank you for the invitation to provide comments on the July 2015 “Corporate Human Rights Benchmark – Draft List of Indicators.” Our Center’s perspective and approach emphasize the need for industry-specific, standards-based models for advancing human rights in business. In that context, the benchmark is a potentially useful tool to encourage companies in the same sector to abide by common standards and seek to distinguish themselves from their competitors on the basis of human rights performance.

The industry-specific approach of the benchmark is a welcome development. We emphasized the need to focus the benchmark exercise on specific industries in the 2014 consultation in New York and are pleased to see the benchmark evolve in this way. The focus on apparel manufacturing, extractives, and agriculture is timely and appropriate given the many well-documented human rights issues in each of these sectors.

Like you, we view the development of substantive standards on human rights in each industry as essential to providing investors and consumers with the information they need to make more informed and sustainable investment or purchasing decisions. Investors in particular are hungry for more and better information about the companies in which they invest, including on social factors like human rights.

We also welcome the spirit of collaboration in your process and the opportunity to participate in consultations throughout the development of the proposed benchmark. As you move to finalize the tool in its first iteration, the product will be made stronger by transparency about the choices you will have to make in responding to the feedback generated by the consultative process.

We see three key challenges in realizing the potential of the proposed benchmark as a tool for investors and consumers to distinguish companies on the basis of human rights performance:

Challenge 1: Prioritizing outcomes over process

We agree with the aim of the proposed benchmark to “incentivize better human rights performance over time” and the effort to measure and rank companies on their human rights performance. However, in its current draft, the benchmark places too much emphasis on process, policy, and statements of commitment and too little on assessing the outcomes of those efforts.

The proposed benchmark is broken down into five components, each given a relative weight: leadership (10%), governance (10%), management systems (30%), performance (40%), and reporting/transparency (10%). While we are pleased to see that performance receives the largest weight, our view is that “leadership,” “governance,” and “management systems” are distinctions without significant differences. The combined 50% weight of these three factors reflects an over-emphasis on procedural commitments rather than real-world effects.

Even within the performance category, the proposed benchmark rewards commitment, disclosure, description, and monitoring on the most difficult issues, rather than outcomes. See for example (emphasis added):

  • “The Company commits to the issues in policies or statements.” (D.1.5 – child labor)

  • “The Company describes the resulting actions (such as training, encouraging actions from under-represented groups) and related targets as relevant.” (D.1.6 – non-discrimination)

  • “The company discloses the percentage of operations (or plants, factories, subsidiaries, as relevant) with a representative union…” (D.1.8 – freedom of association)

  • “The Company discloses the percentage of total workers covered by collective bargaining agreements….” (D.1.9 – collective bargaining)

  • “The company discloses quantitative information on health and safety.” (D.1.10 – health and safety)

  • “The Company monitors trends in overtime throughout its activities and operations.” (D.1.11 – working hours)

This kind of disclosure does not identify a standard against which competitors can be measured, other than a binary assessment of whether a company has disclosed. If, for example, a company reports that 0% of its plants, factories, or subsidiaries have a representative union, it will have met a standard of disclosure, while indicating a troubling climate for the freedom of association rights of its workers. Will assessors make judgments about what companies disclose? And if they do so, on what basis?

In general, the proposal lacks specificity and relies on disclosure as a standard when the human rights issues at stake are particularly challenging. We are sympathetic to the difficulty of measuring performance on issues such as freedom of association, discrimination, child labor, and wages. These are the issues that have vexed the most well-intentioned stakeholders – ourselves included – for many years and continue to present formidable challenges.

These issues are exacerbated by the extension into sub-contracting and joint venture partners, as suggested in D.1.1 and D.1.2 – forced labor; D.1.4 – child labor; D.1.11 – working hours. We commend the effort to extend the benchmark further into the supply chain, where our research has highlighted extreme vulnerabilities for workers and communities. But the proposal lacks the kind of specificity that would guide companies or assessors in making judgments about the standard of human rights performance in a company’s extended supply chain.

We also recognize that the emphasis on disclosure reflects a broader trend in CSR and sustainability reporting, including reports required by governments (e.g. the U.S. State Department’s Burma Responsible Investment Reporting Requirements or the California Transparency in Supply Chains Act).

Many people have observed that measuring performance is the toughest nut to crack and argued that measurement or disclosure of policies, procedures, commitments, and management systems is a good enough starting place. We resist this view. As the business and human rights field has matured over the last two decades, it is time for more meaningful, outcome-based measurement of company performance.

We commend the objective of the proposed benchmark to encourage a race to the top among competitors in their human rights performance. But as currently framed, the race will be around the generation of more policies and processes, and more resources devoted to reporting, rather than competing for better outcomes. We encourage you to increase the relative weight of performance over process, and to develop significantly more detailed, outcome-based standards for measuring performance.

Challenge 2: Providing metrics that are “decision-useful” for investors and consumers

In order to achieve its objective of making corporate human rights performance easier to understand for a wide range of audiences, we would like to see the benchmark move toward a narrower set of metrics that provide more meaningful indicators of companies’ actual performance on human rights.

The current draft gives precedence to those things that companies already are doing and reporting on (many of which are related to policies and procedures, as discussed above). Part of the effort to improve assessment of performance should focus on the few metrics or indicators that distinguish the best companies within a sector from the middle of the pack (and the worst).

This kind of measurement may not explicitly relate to human rights. For example, in assessing the labor rights performance of apparel companies, degree of control in the supply chain is a distinguishing factor. Richard Locke’s innovative research on apparel and manufacturing companies indicates that companies with low turnover of suppliers have better outcomes on working conditions.

Assessing turnover in the supply chain would therefore be one meaningful basis on which to compare companies. We would like to see apparel companies competing to increase transparency about their supply chains and establish longer-term relationships with an increasing number of suppliers. In our view, this is a more meaningful metric than whether companies have made policy commitments on particular issues at this stage in the evolution of business and human rights.

This kind of metric will take time, research, and further consultation to develop. It will be important to limit the development of performance- and impact-based metrics to a few factors that are the most meaningful when it comes to assessing human rights in a given sector.

It also is important for the benchmarks to clearly differentiate between actions that are desirable and those that are essential or obligatory. Too often, discussions of these issues focus only on efforts to identify “best practices” by companies, without establishing baseline standards for all companies within a particular industry. While aspirational best practices are desirable, the primary focus of the benchmarks should be to set clear baselines that companies feel obligated to meet.

Other benchmarking initiatives have struggled to strike the right balance of being sufficiently detailed to assess human rights, while simply and straightforwardly conveying information to consumers and investors. One of the ways Behind the Brands has been successful is in defining a narrow set of metrics that are clearly communicated on an appealing and easily understood website. Behind the Brands is limited at this stage to assessing companies’ self-reported policy commitments, but the principle of simplicity and identifying a relatively narrow set of the most meaningful indicators should be applied to the proposed benchmark.

Challenge 3: Where the information comes from

One of the longstanding challenges in this field is how to balance what companies say about themselves against what others say about them. We think more work needs to be done to get this balance right in the proposed benchmark. We would like to see less emphasis on self-reported data on policies, procedures, leadership statements, commitments, and management systems. Companies already report this information in a dizzying number of formats and outlets, ranging from their own sustainability reports to the UN Global Compact, the Global Reporting Initiative, efforts to report their activities in line with the UN Guiding Principles, and an increasing number of government-mandated reports.

As drafted, much of the proposed benchmark relies on information that companies already self-report or that they will be able to voluntarily report through a portal, in addition to some reliance on information generated by third parties, such as NGOs, academic organizations, and journalists.

Moreover, companies are left with significant discretion to define and prioritize risk, rather than taking a standardized approach that would apply across a sector. For example, while companies are encouraged to assess risks and to prioritize those that are “salient” in C.2 – Due Diligence, it seems that each company has the latitude to make these determinations unilaterally, without an external determination or standard for assessing whether the risks they identify truly are the most serious or pressing.

We are eager to learn more about the proposed portal – how it will be managed, funded, and curated. The portal provides a potentially interesting avenue to collect information that is not reported elsewhere and that creates new opportunities to measure companies’ actual human rights impacts. If the portal generates information on the kind of metrics described above – such as supplier turnover – it would represent a new and innovative contribution to push companies to improve their performance on the most meaningful factors.

As an academic institution that does research on business and human rights, we are, of course, interested in contributing information about human rights in different business sectors that could help inform the rating. But we also are mindful of the challenges of curating information provided by third parties. Some companies are the targets of public advocacy campaigns that generate a great deal of information and allegations, including some information that is not accurate. What is the process for evaluating the credibility of allegations? Who will make these determinations and on what basis?

In conclusion, we look forward to hosting the New York consultation on September 25 and are eager to work with you as the benchmark develops to incorporate a stronger standards-based approach in the next iteration.


Michael Posner                                                Sarah Labowitz