In recent years, there has been growing recognition of the profound impact that social determinants of health have on individual well-being and on the equitable distribution of opportunities for good health across populations. Factors such as access to quality health care, income, educational opportunities, trust in medical institutions, and the availability of community resources are increasingly understood to have a cumulative effect on health. Despite targeted efforts to address these determinants, disparities in mortality and other health indicators persist across the United States.
RAND Health Care researchers collaborate across a wide range of academic specialties both to understand unequal access to care throughout the U.S. health care system and to envision innovative solutions in health care practice, policy, and research. Drawing on expertise in health policy, quality measurement, statistical analysis, and data science, their work helps policymakers see where disparities exist in the health care system, how algorithms can introduce bias into programs and administrative practices, and, importantly, which strategies (including community-based and hyper-local initiatives) might mitigate these problems. RAND's work has also explored how artificial intelligence (AI) and similar technologies can both hinder and help efforts to make health care accessible to all.
Health care quality may be lower for certain groups of people, with disparities varying by condition, location, insurance type, and other factors. RAND researchers analyze data to disentangle differences in care from patient characteristics and document where disparities exist. Recent studies have focused on disparities among sexual minority veterans and non-veterans and among women with perinatal opioid use disorder, particularly Black, Hispanic, and American Indian/Alaska Native women. Another study assessed county-level provider bias toward Black and White Medicare patients. Bias in favor of White patients was associated with Black-White disparities in health outcomes, lower influenza immunization rates, and lower scores on patient experience of care measures, suggesting the need for interventions to prevent bias from affecting care for older adults.
Within Medicare Advantage, the latest iteration of multi-year analyses found that clinical care measures are worse for Black and Hispanic enrollees than for White enrollees; that individuals in rural areas receive worse clinical care than urban residents; and that enrollees with low incomes, particularly White enrollees, are more likely to have behavioral health conditions and receive worse-quality care than those with higher incomes. About 9 percent of Medicare Advantage enrollees report unfair treatment in health care settings, most often related to their health condition, disability status, or age.
Though disparities have proved difficult to eradicate, recent research offers some insights for future efforts to improve health across communities. RAND researchers developed and tested two new indexes of physician network segregation and then used those measures to assess the probability of receiving cardiac care at the best facilities within health care markets across the country. In demonstrating that cardiac care for Black patients is highly segregated, this study suggests that policies to improve Black patients’ access to existing networks and increase physician workforce diversity could have a meaningful impact on desegregating care.
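The study's specific network-segregation indexes are not detailed here; as a rough illustration of how segregation is typically quantified, the sketch below computes a standard dissimilarity index across hypothetical physician networks. The network names and patient counts are made up for the example.

```python
# Illustrative sketch only: a generic dissimilarity index of the kind used to
# quantify segregation. The patient counts below are hypothetical; the RAND
# study defines its own physician-network segregation measures.

def dissimilarity_index(group_a_counts, group_b_counts):
    """Share of either group that would need to switch networks for the two
    groups to be distributed identically (0 = no segregation, 1 = complete)."""
    total_a = sum(group_a_counts)
    total_b = sum(group_b_counts)
    return 0.5 * sum(
        abs(a / total_a - b / total_b)
        for a, b in zip(group_a_counts, group_b_counts)
    )

# Hypothetical counts of Black and White patients across four physician networks
black_patients = [120, 30, 15, 10]
white_patients = [100, 400, 350, 300]

print(f"Dissimilarity index: {dissimilarity_index(black_patients, white_patients):.2f}")
```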
Health care payment amounts can change based on whether providers meet certain quality metrics. Increasingly, these changes are driven by algorithms that link payments to quality, leading some to scrutinize the incentives embedded in quality measures. RAND is actively researching algorithmic bias and strategies for addressing the challenges of measuring racial and ethnic disparities in care. One challenge many researchers and policymakers face is working with data that lack information on individuals' race or ethnicity. To address this, RAND developed the publicly available Bayesian Improved Surname Geocoding (BISG) tool, which helps policymakers and researchers impute race and ethnicity when datasets are incomplete. Colorado recently adopted the method as a requirement for insurers that use machine learning–driven life insurance underwriting.
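In its published form, BISG combines a surname-based probability of race and ethnicity with the demographic composition of a person's neighborhood via Bayes' rule. The sketch below illustrates that update with made-up probabilities; the actual tool draws on Census surname tables and block-group data and handles many more details.

```python
# Illustrative sketch of the Bayesian update behind BISG, using made-up numbers.
# The real tool uses Census surname probabilities and block-group demographics.

# P(race | surname): hypothetical surname-based prior for one surname
prior_given_surname = {"Asian": 0.05, "Black": 0.10, "Hispanic": 0.70, "White": 0.15}

# P(block group | race): hypothetical share of each group's national population
# living in the person's census block group
geo_given_race = {"Asian": 0.0002, "Black": 0.0001, "Hispanic": 0.0008, "White": 0.0003}

# Bayes' rule: posterior is proportional to prior times likelihood, then normalize
unnormalized = {r: prior_given_surname[r] * geo_given_race[r] for r in prior_given_surname}
total = sum(unnormalized.values())
posterior = {r: p / total for r, p in unnormalized.items()}

for race, prob in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({race} | surname, block group) = {prob:.3f}")
```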
In April 2024, the Centers for Medicare & Medicaid Services (CMS) announced that, working with RAND researchers, it had incorporated the RAND-developed Medicare Bayesian Improved Surname Geocoding (MBISG) algorithm into its largest data warehouse. Embedding MBISG in the Chronic Conditions Data Warehouse gives researchers at all institutions access to accurate, cost-effective estimates of race and ethnicity for the Medicare population.
A newly developed weighting approach also offers a promising framework for addressing shortcomings in current quality measurement, which can inadvertently create incentives that are misaligned with, or even undermine, efforts to improve health care quality and access. Its developers demonstrate how the approach can correct these unintended consequences and how it can be adapted to different policy goals.
Concerns about the use of AI and other technologies extend beyond patient safety in clinical care to health care administration and public policy research. RAND has explored how issues of fairness and bias can emerge from the use of AI in health care and of machine learning in public policy, as well as strategies to mitigate these challenges. At the same time, RAND has demonstrated that AI can accelerate research while keeping personal data private. For instance, researchers have shown the potential of synthetic health data to aid modeling efforts while preserving patient privacy and reducing the need to use sensitive individual-level data in research projects. Machine learning can also be used to analyze massive sets of real-world data: one team used infant mortality data to predict risks and identify the most effective interventions, enabling health care and social service providers to develop tailored strategies that reduce disparities in care while also meeting patients' needs. And natural language processing can reduce the labor involved in coding the patient experience narratives often collected in qualitative research, potentially also reducing inadvertent bias.
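As a rough illustration of that last point, the sketch below trains a simple text classifier to assign thematic codes to patient comments. The labels, comments, and model choice are hypothetical and stand in for the more sophisticated natural language processing methods used in published work.

```python
# Minimal sketch of using NLP to assign thematic codes to patient experience
# narratives. Training examples, codes, and model are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled training set: narrative text -> thematic code
narratives = [
    "The front desk staff were rude and dismissive when I checked in.",
    "My doctor explained every option clearly and answered all my questions.",
    "I waited three hours past my appointment time before being seen.",
    "The nurse listened carefully and made sure I understood my medications.",
]
codes = ["staff_interaction", "communication", "access_timeliness", "communication"]

# TF-IDF features plus a simple linear classifier stand in for richer models
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(narratives, codes)

new_comment = ["Nobody told me why my procedure was delayed for two hours."]
print(model.predict(new_comment))
```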