The Justice Index is an online, data-intensive ranking system created by the National Center for Access to Justice in 2014. Justice Index findings promote policies that enable people who need the help of state courts to gain access and get fair treatment, even if they can’t afford a lawyer.
The Justice Index ranks the 50 states, the District of Columbia, and Puerto Rico on their adoption of selected best policies for ensuring access to justice. In doing so, it raises the profile of these policies, shows where they have and have not been adopted, and helps push for their replication and implementation.
In its 2021 edition, the Justice Index ranks states on five categories of access to justice policies: access to an attorney, self-representation, language access, disability access, and fines and fees. Within the attorney access category, the Justice Index also includes a count of civil legal aid attorneys in the United States.
A Composite Index, included in the Justice Index, shows the state rankings based on the combined findings from the traditional four Justice Index categories: attorney access, self-representation, language access, and disability access. An alternative visualization of the Composite Index ranks the states based on the combined findings from all five categories, including fines & fees.
The Justice Index is the product of an enormous amount of research carried out in reliance on multiple methodological approaches. In the following outline, we describe how the work was done. (In About the Justice Index, we describe the scope of the Justice Index, the principles that guide its vision, highlights from its findings, and sources of support for its research).
I. Attorney Access, Self-Representation Access, Language Access, Disability Access Research
We describe below NCAJ’s methodology for researching the benchmarked policies in the four traditional Justice Index categories: attorney access, self-representation access, language access and disability access. In Part II, we describe NCAJ’s methodology for researching the national civil legal aid attorney count. In Part III, we describe the methodology for researching policies concerning fines and fees.
A. Selecting Best Policies and Building a Survey Instrument. NCAJ consulted with subject matter experts to select and refine best policy benchmarks, and to create a survey instrument asking state justice system officials whether the policies are present in their respective states. (To learn about NCAJ’s consultation with experts, see About the Justice Index.) In April 2020, NCAJ sent the survey to court administrators and state access to justice commission staff in all 50 states, the District of Columbia, and Puerto Rico, inviting these individuals to respond to the survey with answers establishing the presence of the benchmarked policies. See Justice Index 2020 Update Survey.
B. Survey Answers from Participating and Non-Participating States. The research initiative was conducted from April 2020 through March 2021. Court officials in 36 jurisdictions (“participating states”) responded to the survey: AR, AZ, CA, CO, CT, DC, DE, GA, HI, IA, IL, IN, KY, LA, MA, MD, MI, MN, MS, MT, NC, NJ, NM, NY, OH, OK, OR, PA, PR, SC, TN, UT, VA, WA, WI, WY. In 16 jurisdictions (“non-participating states”) court officials did not respond, although in some of these jurisdictions survey answers were provided by other stakeholders. Those non-participating jurisdictions are AK, AL, FL, ID, KS, ME, MO, ND, NE, NH, NV, RI, SD, TX, VT, WV.
C. Pro Bono Research Teams. NCAJ enlisted pro bono research teams from the following six law firms to carry out the research in the four traditional Justice Index categories of policy: DLA Piper, Kirkland & Ellis, Latham & Watkins, Morgan Lewis, O’Melveny & Myers, Simpson Thacher. More than 30 volunteer attorneys participated in the project. Each firm undertook responsibility for researching policy for approximately seven to ten jurisdictions.
D. Initial Pro Bono Review. The firms reviewed the survey answers, relying on multiple research strategies to determine whether sufficient documentation exists to establish the presence of the benchmarked policies in each state. These strategies included analyzing state laws, rules, and practices, speaking with court officials, and reviewing proffered citations for accuracy. For non-participating jurisdictions, the pro bono attorneys carried out original research on each state’s laws, rules, and practices.
E. Establishing Yes, No, and No* Findings. For each benchmark, state court officials were invited to respond “Yes” if the state had adopted the law or practice on a statewide level, and to provide a source to support the “Yes” response. NCAJ reviewed these answers with the support of pro bono attorneys, and determined whether to establish a finding of Yes, No, or No*.
As stated on the Survey, a “Yes” finding was made where a policy was documented by a “statewide statute, rule, regulation, appropriation, or other written guidance.” We made exceptions to the requirement for certain benchmarks where written sources were less likely to be available (such as training questions or questions about employees or offices designated to perform specific tasks), accepting alternative documentation if it (i) identified the court official who oversees the practice or oversaw the training; (ii) included the official’s title and contact information; and (iii) included some indicator of definiteness such as when the training was held or the level of authority of the employee or office that conducted the specific task.
If a state did not respond “Yes” for a benchmark, or if a state responded “Yes” but did not provide sources found to be sufficient during our quality review process, a “No” finding was made. A “No” finding means only that a “Yes” finding was not established; it does not mean that NCAJ has affirmative support for a finding that the state has not adopted the subject law or practice. Nor does it mean that the state is not, in actuality, carrying out the practice in some form or in some place.
Finally, these four Justice Index categories include “No*” findings that carry an asterisk (“*”) where a state’s policy is neither the same as nor equivalent to a benchmark, but was found by NCAJ to hold potential interest to other justice system stakeholders.
F. Quality Assurance. NCAJ conducted meetings on Zoom with representatives of all six pro bono teams every two weeks for a period of three months. In these meetings, NCAJ staff provided guidance on research strategies, surfaced common questions from researchers, and responded to those questions. NCAJ also provided additional guidance to some of the research teams in off-line Zoom meetings. NCAJ staff carried out two (or more, depending on the state) rounds of review of the law firms’ recommended determinations and citations, and worked with the pro bono teams and with state officials to clarify findings as necessary to resolve open questions about accuracy and about the sufficiency of citations and other forms of documentation. In December 2020, NCAJ notified state judiciary officials of the proposed findings and invited comments, questions, and supplemental answers. For states that responded, NCAJ reviewed all comments and provided the states with supplemental responses explaining the rationale for the adverse findings, and inviting additional documentation for some of the policy benchmarks.
II. Civil Legal Aid Attorney Count Research
NCAJ carried out the Civil Legal Aid Attorney Count research with a different group of pro bono volunteers, and a different approach than was used to research the policy benchmarks, above.
A. Building the Attorney Count Research Team. NCAJ carried out the research for the Civil Legal Aid Attorney Count by working with pro bono volunteers at Pfizer, Deloitte, and DLA Piper. Pfizer and Deloitte had provided pro bono support to NCAJ to build the original Justice Index and Civil Legal Aid Attorney Count in 2014, and to build the expanded Justice Index and Count in 2016. For Justice Index 2021, these partners again supported the Count, this time with a third partner, the DLA Piper Law Firm. (DLA also participated as one of the six law firms supporting the policy research initiative, above).
B. Selecting a Definition of Civil Legal Aid Lawyers and Civil Legal Aid Organizations. To guide the Count, NCAJ consulted with experts and developed a definition that included lawyers in all salaried roles in civil legal aid organizations, thus including: a) staff; b) management; c) supervisors and supporters of students; d) supervisors and supporters of pro bono volunteers; e) full-time lawyers; and f) part-time lawyers (in full-time equivalent decimal format). NCAJ sought to include many types of civil legal aid organizations, ranging from those that provide only civil legal services, to those that provide civil legal aid among other types of services, and including organizations whose services ranged from provision of brief advice and assistance to those providing full traditional legal representation. The project included organizations providing services in a single practice area to those handling multiple practice areas. The core requirement was that the organization provided its services without charge to the service recipient. NCAJ sought to exclude from the Count organizations whose only staff were volunteers, law school clinics, and organizations carrying out exclusively systemic advocacy services in areas of law unrelated to the provision of free civil legal services.
C. Creating the Initial Database. NCAJ began the project with its own database of civil legal aid organizations and civil legal aid leaders (compiled for the Justice Index 2014 and 2016 editions). NCAJ supplemented its 2016 database with names of civil legal aid organizations and their contacts provided to NCAJ by the American Bar Association, Legal Services Corporation, and National Legal Aid & Defender Association.
D. Building the Database. Working with pro bono team members at Pfizer and Deloitte, NCAJ reached out to IOLTA programs, Access to Justice Commissions, and other state-based civil legal aid coordinating institutions to request, where available, state-specific lists of civil legal aid organizations, their executive directors, and their numbers of lawyers. Pfizer enlisted students from its 2019 summer internship program in the Count, engaging both college students and law students in making calls across the country to request copies of state-specific lists. A team of attorneys and staff at DLA Piper carried out extensive internet and then telephone research to gather names of additional civil legal aid organizations, to remove duplicates, and to track down emails and telephone numbers of civil legal aid leaders in every civil legal aid organization. NCAJ also posted notice on its website, ncaj.org, inviting participation in the civil legal aid attorney count, and emailed its own contacts in the civil legal aid community asking for their state-specific lists of civil legal aid organizations and their executive director contacts.
E. Collecting Data from Civil Legal Aid Coordinating Organizations. Team members at Deloitte, DLA and Pfizer designed and built a spreadsheet for seeking data from state-based coordinating entities in their respective states on the number of civil legal aid attorneys and names of organizations. 127 civil legal aid organizations were identified through this method. Team members used the design of the spreadsheet to build a comprehensive database in which to upload the data, and to which to add data that would be obtained from the field going forward. Deloitte carried out a rigorous process to “de-dupe” the names and contacts in the growing database.
F. Creating the Survey Instrument. NCAJ guided the creation of the Survey Instrument, with assistance from volunteers at Deloitte, Pfizer and DLA Piper. DLA staff designed the Survey instrument with Survey Monkey, producing a final version, available here [insert link]. The Survey Instrument asked civil legal aid leaders to provide their number of civil legal aid lawyers, the Employer Identification Numbers for their organization (as a means of screening out duplicate entries for organizations from the final database of civil legal aid organizations that would be created upon completion of the initiative), and data about client target populations, types of legal services provided, practice areas for which legal services are provided, and method (if any) for determining financial eligibility.
G. Collecting Survey Responses. NCAJ emailed the survey to 1,453 organization addresses, re-sending the email to non-responders five times over a period of approximately three months. Survey answers were submitted directly by respondents into Survey Monkey, compiled by the Survey Monkey software at DLA Piper, and made available to Deloitte for further sorting and tabulating of the data contained in the Survey responses. Volunteers at DLA Piper and at Pfizer placed follow-up calls to civil legal aid leaders over a period of several months to secure a broad response to the Survey. 392 non-LSC civil legal aid organizations responded to the survey in these rounds of data collection.
H. Additional Telephone Outreach by Deloitte Staff. At the end of 2020, Deloitte enlisted several additional volunteers to call civil legal aid organizations from which no Survey response had yet been obtained. The Deloitte team placed 883 calls, and collected data by phone from 188 additional non-LSC civil legal aid organizations through this method, bringing the total number of unique non-LSC civil legal aid organizations in the NCAJ attorney count to 580.
I. Including Data from the Legal Services Corporation (LSC). NCAJ requested publicly available data from the LSC about the names of civil legal aid grant recipient organizations and the numbers of attorneys employed within those organizations. LSC provided this information, which was incorporated into the Civil Legal Aid Attorney Count by Deloitte. 129 unique LSC-funded civil legal aid organizations were identified through this method. Adding the LSC-funded organizations to the 580 non-LSC organizations brought the total CLA organizations included in the NCAJ attorney count to 709.
J. Quality Assurance Review. Volunteers at Deloitte performed data analysis on the data gathered from state-based coordinating entities, Survey answers, and Deloitte’s additional telephone outreach. Deloitte created a comprehensive spreadsheet, showing the names of all civil legal aid organizations identified by tax identification numbers, showing the number of civil legal aid attorneys in each organization and in each state, and sorting the answers to survey questions that inquired about such factors as each organization’s method for determining financial eligibility, priorities among legal practice areas, and priorities among types of services provided.
III. Fines & Fees Research
A. Selecting Best Policies. Working with an expert advisory group of advocates and activists from across the country, NCAJ identified 17 policies every state should have in place to rein in fines and fees abuses. We also developed secondary policies, representing “second-best” policy approaches for many of the primary policies – short of the benchmarked primary goals, but still worthy of positive recognition. NCAJ then worked with Stroock & Stroock, and with Fordham Law students, to test the viability of the benchmark policy formulations in the field, and to refine them as appropriate to assure their clarity and viability in guiding the research.
B. Conducting the Research. NCAJ then engaged Hughes Hubbard & Reed to carry out research, pro bono, on whether the states had adopted the benchmarked policies. In contrast to research initiatives for the other Justice Index benchmarks, the Fines and Fees research was conducted as a traditional legal research project involving dialogue and review between the law firm and NCAJ. NCAJ staff worked closely with the law firm, reviewing the firm’s findings, providing feedback, and finalizing the findings. At the end of this process, NCAJ invited state justice system officials (court administrators and access to justice commission members) to review and offer feedback on the proposed final findings, and a small number of states responded to this invitation.
C. Establishing Yes, No and Partial Findings. For primary benchmarks, NCAJ determined whether to establish a finding of “Yes,” “No” or “Partial” credit. A finding of Yes was made where research established that the state had adopted the policy on a statewide level. Partial credit was provided where a secondary benchmark policy was met. For secondary benchmarks, research was carried out only when the finding for the associated primary benchmark was “No.” This approach differed from that taken with the four traditional Justice Index categories, where all benchmarks are researched, and where a finding of No, accompanied by an asterisk, denotes state policies that do not meet any benchmark but that are included in the Justice Index because of their potential interest to visitors to the Justice Index.
IV. Weights & Calculations
Each state’s Justice Index score in each category and in the Composite Index is provided on a 100-point scale. This allows for easy comparison among states in each category, and also allows category scores to be merged together to produce a composite score according equal weight to the scores in each category. A single approach is used to weight and score the four traditional categories of the Justice Index. A somewhat different approach is used to weight and score the Civil Legal Aid Attorney Count findings and the Fines and Fees findings, as described below.
A. Attorney Access, Self-Representation, Language Access, Disability Access Weights and Scores. In the traditional four Justice Index categories, the Policy benchmarks are each assigned a weight of either 1, 5, or 10 points. A state’s raw score for the category consists of the sum of the weighted “yes” answers for the state in that category. For example, if a state receives a “yes” for a benchmark that has a weight of five, then five points are added to the state’s raw score for the category. If the weight of the benchmark is one, then one point is added.
In setting weights for each benchmark, NCAJ relied on consultation with national subject matter specialists and on NCAJ’s expertise, taking into account such factors as (i) the importance of the law, rule, or practice for access to justice; (ii) the relative importance of the law, rule, or practice as compared to other benchmarked policies in the same subject matter index; and (iii) the cumulative weight of multiple indicators that track aspects of the same issue (for example, if a court’s website was covered by multiple benchmarks, the overall weight of the website benchmarks would be considered as a whole when setting the weight of each individual website benchmark). The final raw scores for each state were then normalized to a 100-point scale by dividing the raw score by the total points available in that category and multiplying by 100. For example, a state that achieved a raw score of X points in a category with Y total points would receive a scaled score of Z, where Z = (X ÷ Y) × 100.
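The normalization step described above can be sketched in a few lines of code. This is an illustrative sketch only; the function name and the sample point values are hypothetical, not drawn from actual Justice Index data.

```python
def scaled_score(raw_score: float, total_points: float) -> float:
    """Normalize a category raw score to a 100-point scale:
    (raw points earned) / (total points available) * 100."""
    return raw_score / total_points * 100

# Hypothetical example: 42 weighted points earned out of 60 available
# in a category yields a scaled category score of 70.
print(scaled_score(42, 60))  # 70.0
```

Because every category is rescaled to the same 100-point range, categories with different numbers of benchmarks can be compared and combined directly.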
B. Attorney Count Weights and Scores. To calculate the Civil Legal Aid Attorney Count score for each state, NCAJ began by treating each state’s number of civil legal aid attorneys (the number produced by our pro bono team outreach effort, and by Deloitte’s data analysis) as the state’s “raw score.” The raw score was then divided by the state’s number of people with incomes below 200% of the federal poverty guideline, expressed in units of 10,000 people, to produce a value representing “the ratio of civil legal aid lawyers to 10,000 people with low incomes” in each state – referred to in the Justice Index as “the ratio.”
The ratio was then compared to a benchmark ratio of 10 civil legal aid lawyers per 10,000 people with incomes at or below 200% of the federal poverty guideline. Thus a state with a ratio of 5 lawyers per 10,000 people with low incomes would have half as many lawyers as a state reaching the benchmark of 10, and would therefore have an Attorney Count Score of 50%. A state with a ratio of 10 or higher (matching or exceeding the benchmark) could earn the maximum Attorney Count scaled index score (for inclusion in the Composite Index) of 100. The benchmark of 10 was chosen because 10 is approximately 25% of 40, which is the national average number of active attorneys per 10,000 people in the general population based on census data and ABA data.
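The ratio-and-benchmark calculation described above can be expressed as a short sketch. The function name and sample figures below are hypothetical illustrations, not actual state data.

```python
def attorney_count_score(num_attorneys: float, low_income_pop: int,
                         benchmark: float = 10.0) -> float:
    """Compute a state's Attorney Count score: the state's ratio of
    civil legal aid lawyers per 10,000 people with low incomes,
    expressed as a percentage of the benchmark ratio, capped at 100."""
    ratio = num_attorneys / (low_income_pop / 10_000)
    return min(ratio / benchmark * 100, 100.0)

# Hypothetical state: 500 lawyers serving 1,000,000 people below
# 200% of the federal poverty guideline -> ratio of 5 per 10,000,
# which is half the benchmark of 10, so the score is 50.
print(attorney_count_score(500, 1_000_000))  # 50.0
```

A state at or above the benchmark ratio is capped at 100, so exceeding the benchmark yields no additional credit in the Composite Index.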
Several things should be noted about NCAJ’s use of the benchmark of 10 civil legal aid attorneys per 10,000 people with low incomes. The simplest approach would be to set the benchmark for the number of attorneys dedicated to serving those unable to afford one as equal to the total number of attorneys available to serve an equivalent number of people in the total population. This standard would be based on the notion that access to legal representation should be equally distributed across the population and determined by one’s legal needs, not one’s ability to pay. Some would instead argue that the benchmark ratio should be higher for civil legal aid attorneys than for attorneys generally because people with low incomes often face issues requiring legal assistance that people who are more affluent do not, or face more rarely. Others, however, would argue that the 40-to-10,000 ratio for the general population is an inapt baseline for a civil legal aid benchmark because it includes corporate lawyers, government lawyers, and others who do not serve individual clients.
The Justice Index’s benchmark of 25% of the national average – or 10 – reflects two critical points. First, it is a very conservative figure for estimating the need for civil legal aid lawyers. It is hard to argue credibly that those living at or below 200% of the federal poverty level do not need at least one-quarter of the access to counsel that others have. Second, no state is even close to reaching the index benchmark of 10 civil legal aid attorneys per 10,000 people in poverty (apart from the District of Columbia, which has a uniquely high concentration of attorneys because it is a large metropolitan center that is dissimilar to the typical state). The benchmark is demonstrative of a large unfilled need and, if anything, grossly understates that need.
C. Fines and Fees Weights and Scores. NCAJ’s 17 policy goals carry individual weights between 1 and 10, adding up to a total possible point value of 100. Where states did not meet a primary benchmark, a finding that a secondary benchmark was met earned a reduced number of points (between 1 and 5) relative to the potential value of the primary benchmark. For detailed information about the policy benchmarks, including the complete list of weights for all benchmarks, see the Fines & Fees Project Overview.
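The primary/secondary credit scheme described above can be sketched as follows. The policy names and weights here are hypothetical placeholders; the actual benchmarks and weights appear in the Fines & Fees Project Overview.

```python
def fines_fees_score(findings: dict, primary_weights: dict,
                     secondary_weights: dict) -> int:
    """Sum a state's Fines and Fees points: full primary weight for a
    'Yes' finding, the reduced secondary weight for 'Partial' credit,
    and no points for 'No'."""
    total = 0
    for policy, finding in findings.items():
        if finding == "Yes":
            total += primary_weights[policy]
        elif finding == "Partial":
            # Secondary benchmark met, primary not met: reduced credit.
            total += secondary_weights.get(policy, 0)
    return total

# Hypothetical benchmarks and weights (not the actual NCAJ list):
primary = {"ability_to_pay_hearing": 10, "no_jail_for_nonpayment": 8,
           "juvenile_fee_ban": 5}
secondary = {"no_jail_for_nonpayment": 3}
findings = {"ability_to_pay_hearing": "Yes",
            "no_jail_for_nonpayment": "Partial",
            "juvenile_fee_ban": "No"}
print(fines_fees_score(findings, primary, secondary))  # 13
```

Because all 17 primary weights sum to 100, the resulting total is already on the 100-point scale used across the Justice Index.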
V. Justice Index Score (“Composite Index” or “Overall Score”)
The Justice Index presents individual sub-category scores in five areas of policy, but also “rolls up” the categorical findings into a total ranking system that compares the states relative to one another based on their overall policy landscape for supporting access to justice. Justice Index 2021 offers two composite index scores: a) a traditional Justice Index score, which rolls up the findings in the four traditional categories of attorney access, self-representation access, language access, and disability access, and b) a Fines and Fees Composite Score, which combines the four traditional categories with the Fines and Fees findings that are new to the Justice Index in 2021.
A. The Composite Score. The math is simple. Each of the issue-area scores, normalized to a 100-point scale, is added together, with each making a proportionate contribution to the composite score for each state. Thus, with four sub-indexes – attorney access, self-representation, language access, and disability access – each sub-index contributes 25% of the value of the composite score for each state. More specifically, because the sub-index scores are each reported on a 100-point scale, the composite score is created by adding a state’s four separate scores from each sub-index, and then dividing by four. One detail to note in this context is that the greater the number of benchmarks in a given category, the smaller an individual benchmark’s contribution will be to the state’s ranking in that category and in the Composite.
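The averaging step described above can be sketched directly; the sample scores are hypothetical. The same function also handles the five-category variant, since it averages however many category scores it is given.

```python
def composite_score(category_scores: list) -> float:
    """Composite Index score: the equal-weight average of a state's
    100-point category scores."""
    return sum(category_scores) / len(category_scores)

# Hypothetical state scores in the four traditional categories
# (attorney access, self-representation, language access,
# disability access): each contributes 25% of the composite.
print(composite_score([70.0, 55.0, 80.0, 35.0]))  # 60.0
```

Because every category score is already on a 100-point scale, the composite is also a 100-point score and can be ranked directly across states.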
B. The Alternative Composite Index, including Fines and Fees. In Justice Index 2021, a fifth category, fines and fees, has been researched, allowing visualization of an alternative composite index that includes the fines and fees state scores. In this alternative data visualization, each sub-index contributes 20% of the total composite score for each state (as contrasted with 25% when only the four established Justice Index categories are considered in the Composite Index). More specifically, the Composite Index score is created by adding each state’s five separate scores from each sub-index, and then dividing by five.
We hope this methodology discussion helps visitors to the Justice Index understand the research strategies and activities leading to the creation of the Justice Index. Please also see About the Justice Index for more on the scope, principles, contributors, and other essential elements of Justice Index 2021. Justice Index 2021 was created during the height of the COVID-19 pandemic, which disrupted lives, created challenges for all of our research team volunteers, and posed unique burdens on the courts and other justice system stakeholders across the country who were responding to our research surveys. The work on Justice Index 2021 was made more difficult, but also took on greater importance, as it became clear to all that the constraints imposed by the pandemic would not only present new legal problems in people’s lives but would also place new financial, practical, and emotional pressures on people in the institutions responsible for assuring access to justice. At NCAJ, we are grateful to all who were able to help contribute to Justice Index 2021, with its capacity to help increase access to justice for all.