For many, many years we’ve been hearing about gene therapy – the chance that we might get into people’s DNA and repair it to treat, or even cure, disease. In a recent piece in Science, Stuart Orkin and Philip Reilly discuss what finally achieving success might mean:

Imagine a young man with hemophilia A who no longer has to self-administer factor VIII replacement; an individual with sickle cell disease who is free of chronic pain and intermittent crises; a girl functionally blind since the age of 5 who can now see; or a baby rescued from a fatal, inherited neurodegenerative disease. For decades, gene therapy has tantalized us with such futuristic scenarios. However, these goals are now coming into focus, and it is the time to consider some of the consequences of success.

As they report, gene therapy has been forty-four years in the making. But gene therapy, which has cost billions of dollars in research and development, differs from the traditional pharmaceutical market. For one thing, most diseases that are the focus of gene therapy research are relatively rare, and most of them affect children. In addition, gene therapy is more like a procedure than a drug: you perform it once, and potentially achieve a lifetime cure. Unlike many pharmaceutical products, it offers no way to make profits on volume.

It’s important to note that at this time, we still don’t have a lot of promising results in human subjects. But we’re getting close – close enough that it’s best we consider how we might pay for this now, rather than wait until it’s here and we all start fighting about it.

We should expect prices for gene therapy to approach previously unseen levels. The only gene therapy currently approved for use in Europe is Glybera, which treats lipoprotein lipase deficiency, a rare illness. It’s priced at more than $1 million per patient, even though, as the authors point out, its efficacy is not without doubt.

This may seem ridiculously high, but it’s not. Organ transplants can run that high, and they sometimes offer less of a cure than gene therapy might. Even bone marrow transplant can cost more than $500,000, and we do that all the time. But when it comes to therapies that aren’t procedures, we often balk. The fights over Sovaldi, which was still arguably cost-effective compared to other treatments, might give us an inkling of what’s to come.

The authors offer some ideas on where to start. First, they present some estimates of the current cost of managing genetic disorders. Cystic fibrosis costs almost $6 million per patient over a lifetime, Gaucher disease about $5 million, sickle cell disease $1 million, and hemophilia A between $5 million and $10 million.

They also suggest we consider the modality of the specific gene therapy. If it’s as intensive as bone marrow transplant, it may be priced similarly. Development costs must be considered, and these are likely similar regardless of the prevalence of the disease being treated. That prevalence is its own factor, however: with fewer patients over whom to recoup those costs, the price will likely have to be higher when the disease is rarer.

Further considerations involve how much it costs to produce the therapy, and, of course, what outcome is expected. A full, definitive cure may be worth quite a bit.

Clearly, different countries will make these determinations in varying ways. The United States, more than any other, lets the market decide what it will pay. That doesn’t always turn out well. The authors suggest the following instead:

First, very expensive gene therapies with large up-front payments should require that the burden of retreatment be borne by the drugmaker or its successors… Second, some reasonable portion of the economic benefits that under the Orphan Drug Act flow to companies that develop therapies for “orphan” disorders might be redirected to reducing the price of the drug, perhaps by greatly reducing the pricing of copays… Third, the U.S. National Academy of Medicine (or a similar body) should commission a study to explore new methods to streamline the regulatory process for developing genetic and perhaps other therapies for ultrarare disorders.

These aren’t definitive solutions, but they’re a good place to start. It’s better that we have this discussion before decisions have to be made, rather than after. Go read their article.

Aaron


Earlier this month, AcademyHealth and partners Allergy & Asthma Network, Asthma and Allergy Foundation of America, GlaxoSmithKline, and Research!America hosted the third in a series of congressional briefings on the “research continuum” and how different types of research—basic, clinical, health services, and population—complement one another in preventing and treating diseases and conditions. The first briefing was on heart disease, the second, on cancer, and the third focused on asthma and allergies.

According to the Centers for Disease Control and Prevention (CDC), an estimated 24 million Americans (one in 12) are living with asthma; it costs $56 billion annually. Of those with severe asthma, two-thirds are unable to work full time. Allergies, which can trigger asthma symptoms, are among the most common chronic diseases, affecting more than 50 million people and costing in excess of $18 billion each year.

To kick things off, mother and patient Vernetta Santos spoke to attendees, reminding everyone of the need for and impact of research: the patients. Ms. Santos has been “dealing with asthma for 23 years.” Not only does she struggle with asthma, but her three children are affected as well. As she says, “It’s a scary ride from the beginning.”

Dr. Kirk Druey, National Institute of Allergy and Infectious Diseases (NIAID), then discussed the role of basic science, which increases knowledge about how living organisms work and what causes disease. Dr. Druey explained that, because of basic research, researchers have determined that asthma may have multiple causes and may represent at least two—and probably more—distinct diseases. Precision medicine has enhanced this work by allowing researchers to take a virtual snapshot of the human genome and work with partners to match each patient to an individualized, effective therapy.

Following Dr. Druey’s presentation, Dr. Catherine Bonuccelli of GlaxoSmithKline (GSK) noted that it’s an unprecedented time for medicine and science. She spoke of the exciting developments in precision medicine, such as a new therapy that provides a segment of severe asthma patients with a treatment customized to their specific condition. Unfortunately, she said, developing those medicines isn’t easy; the average time to develop a new medicine is between 10 and 15 years. Dr. Bonuccelli noted the important role of patient partnerships moving forward, since for patients it’s not just about having a disease—it’s the different experience the disease brings to each patient and his/her family, friends, and community.

Also speaking to the importance of the patient relationship was Dr. Michael Cabana, who presented on the role of health services research (HSR) to ensure that evidence is implemented in the right way at the right time. As we’re all too aware, patient visit time is often limited; during that encounter, physicians have to establish rapport, address the patient’s symptoms, complete a history and physical exam, and establish a plan, ideally one suited to patients’ different beliefs, concerns, and goals about their specific treatment. One way to improve those encounters, he explained, is through HSR-informed programs like the Physician Asthma Care Education (PACE) program—an educational seminar to improve physician awareness, ability, and use of communication and therapeutic techniques for reducing the effects of asthma on children and their families. Data has shown that this program has resulted in improved health care provider confidence; providers being more likely to use better communication and education techniques; decreased asthma symptoms reported by families; and decreased emergency department visits for asthma patients. Dr. Cabana described it as a critical kind of personalization beyond science.

The fourth and final type of research presented, population-based research, touches people where they live, learn, work, play, and pray. As Dr. Simpson reminded attendees at the beginning of the briefing, “Your zip code is often more important than your genetic code” in predicting health. Speaking on this perspective was Dr. Tyra Bryant-Stephens from The Children’s Hospital of Philadelphia. Dr. Bryant-Stephens began her research driven by the question of why children receiving care consistent with national guidelines were still ending up in the ER. She learned that biology is only part of the answer — as laid out in the CDC’s Determinants of Health, health outcomes are determined by lifestyle (50 percent), environment (20 percent), biology (20 percent), and the health system (10 percent).

Dr. Bryant-Stephens turned that knowledge into action, working with her team in the Community Asthma Prevention Program, which utilizes community health workers and residents to implement asthma interventions in underserved, poorly-resourced inner-city communities. Their program resulted in reduced hospitalizations, emergency room visits, sick visits, and asthma symptoms.

Pulling It All Together

Ultimately, these briefings continue to show that despite there being many different types of research, each plays an integral function in the larger health enterprise. Any one type of research on its own cannot effectively or appreciably improve health and health care. It takes basic research to teach us the fundamentals of disease, clinical research to make advancements in (targeted) medications and therapies, health services research to move what works to the right group of patients in the right setting and at the right time, and population research to work with communities to ensure that we’re taking the right steps to prevent, educate, and treat patients in the places where they live their lives.


According to one survey of pharmacists, drug shortages were the predominant challenge in hospital pharmacy in 2014. Among all the classes of drugs, generic sterile injectables — frequently delivered in the inpatient setting — are most prone to shortage. Seventy percent of drug shortages are of generic injectables, according to the FDA. These drugs are used, for example, as surgical anesthesia, in emergency medicine, or to treat cancer, infections, tuberculosis, syphilis, and other severe illnesses.

By all measures, incident shortages peaked in 2011 and have been on the decline, likely due to efforts by the FDA to avert them. This can be seen in data from the University of Utah Drug Information System (UUDIS), shown below. The bar chart below shows new (incident) shortages by year since 2001 from UUDIS data. The decline from 2011 is clear, though incident shortages are still above levels seen in 2001-2007.

[Figure: New drug shortages by year (UUDIS data)]

The line chart below shows active (ongoing) shortages by quarter since 2011. Because a shortage that began in an earlier period still counts as active until it is resolved, the numbers in the line chart are naturally higher than those in the bar chart above. Active shortages are also on the decline, having peaked in the third quarter of 2014.

[Figure: Active national drug shortages by quarter]
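As an aside, the relationship between the two measures is easy to verify with a toy calculation. Below is a minimal Python sketch using invented shortage spells purely for illustration: every shortage that begins in a given year is also active that year, so the active count can never fall below the incident count, and it additionally carries forward older, unresolved spells.

```python
# Invented shortage spells, purely for illustration: (year begun, year resolved).
# A spell counts as active in every year from its start up to, but not
# including, its resolution year.
shortages = [(2011, 2014), (2012, 2013), (2012, 2016), (2014, 2015)]

def incident(year):
    """New shortages beginning in `year` (what the bar chart counts)."""
    return sum(start == year for start, _ in shortages)

def active(year):
    """Shortages ongoing at some point in `year` (what the line chart counts)."""
    return sum(start <= year < end for start, end in shortages)

for year in range(2011, 2016):
    print(year, "incident:", incident(year), "active:", active(year))
# 2011 incident: 1 active: 1
# 2012 incident: 2 active: 3
# 2013 incident: 0 active: 2
# 2014 incident: 1 active: 2
# 2015 incident: 0 active: 1
```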

At this point, it is necessary to pause for an important technical note: UUDIS tracks shortages of drugs at the manufacturer level, while the FDA tracks market-level shortages. It is possible for one manufacturer to experience a shortage (i.e., a UUDIS-type shortage) but for there to be no market-wide shortage (i.e., no FDA-type one) because other manufacturers offer adequate supply. It is also possible for a shortage to be regional but not national (due to a distribution issue, for example), which UUDIS would track but the FDA would not.

UUDIS-type shortages that are not FDA-type shortages are still important because they require hospitals and clinicians to switch drugs, which is itself prone to problems and risks. Supply switches can take time, causing delayed care. The same or similar drugs from different manufacturers can be packaged differently, which can lead to dosing errors. In Health Affairs, Willson wrote that

[T]wo fatal events were reported involving doctors prescribing doses of hydromorphone ‘as if it were morphine,’ even though it’s seven times as powerful as morphine, which was in short supply. [...] [S]ome people have awakened during surgery because anesthesiologists were not as familiar with the alternative agents they had to use.

Though it varies by year, in recent years about 25-50% of incident UUDIS-reported shortages were also FDA-type (market-wide) shortages. These are clearly most problematic because patients must go without the right medication.

The literature describes many factors that contribute to shortages of generic injectables (see chart below), but, after reviewing that literature, in 2014 the GAO placed most of the blame on low profit margins stemming from the nature of the market.

[Figure: Causes of shortages of generic injectables]

Profit margins are low because these are generic drugs. As such, just like most oral generics, they experience market competition that drives down prices. In addition, hospitals — where sterile drugs are usually administered — have consolidated purchasing into group purchasing organizations (GPOs). GPOs play the same role for hospitals that pharmacy benefit management (PBM) organizations play for insurers: they increase purchasing power and lower prices. But when GPOs negotiate long-term contracts for generic injectables, those contracts constrain prices so they cannot easily respond (rise) when supply falls below demand.

Some have also pointed to Medicare Part B’s payment method for inpatient and clinician-administered drugs as a source of price pressure, though many are not convinced it plays a significant role. (One study found it responsible for one-quarter of shortage days.) Medicare pays hospitals a drug’s average sales price (ASP) plus 6%. If anything, this encourages hospitals to buy more expensive drugs (higher ASP), not cheaper ones, because a fixed percentage markup translates into a larger dollar margin on a pricier drug. This could have the indirect effect of shrinking the market for cheaper generics, as hospitals switch to more expensive brand drugs, for which they receive a higher reimbursement. A smaller (or less reliable) market for generics reduces incentives to invest in manufacturing capacity, which could increase the risk of shortages.
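To make that arithmetic concrete, here is a minimal sketch; the drug prices are hypothetical, and only the 6% add-on comes from the payment rule just described.

```python
# Medicare Part B pays hospitals a drug's average sales price (ASP) plus 6%.
# The percentage is fixed, so the dollar add-on scales with the price of
# whichever drug the hospital chooses to buy.

def part_b_payment(asp, markup=0.06):
    """Return (total payment, dollar add-on) under an ASP-plus-markup rule."""
    addon = asp * markup
    return asp + addon, addon

# Hypothetical ASPs: a cheap generic injectable vs. a pricier brand drug.
for name, asp in [("generic", 100.00), ("brand", 1000.00)]:
    total, addon = part_b_payment(asp)
    print(f"{name}: total payment ${total:,.2f}, add-on margin ${addon:,.2f}")
# generic: total payment $106.00, add-on margin $6.00
# brand: total payment $1,060.00, add-on margin $60.00
```

Same 6% markup, ten times the dollar margin on the drug that costs ten times as much.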

It’s clear that the price level and profit margins for generic injectables are low. But that’s true of most oral generics too. So why are shortages concentrated in the injectables market? The answer is that manufacturing injectables is much more difficult and costly. They must be produced in a sterile environment because these drugs are put directly into the blood, spine, or eye; they do not pass through the digestive system, which protects the body from contaminants that might exist in oral drugs. This requires much more care and comes with greater regulatory oversight, which itself takes time and adds costs. In short, the fixed costs are high, a barrier to entry above those that exist in oral generics markets. Just three companies sell 71 percent of sterile injectables.

Thus, when the quantity of generic injectables that can be supplied falls below the quantity demanded (e.g., because of a manufacturing plant problem, necessitating shutdown), it does not necessarily offer another manufacturer a good opportunity to step into the market, even if the price level rises. For new market entry, it would have to rise a lot and stay high for a long time — long enough for the new entrant to recoup its investment. That’s rare, so manufacturing capacity remains low, and shortages more common.

If we want to do something about shortages of generic injectables, we have to increase manufacturers’ profit margins, perhaps along with other changes that also incentivize supply resiliency. Yes, that means paying some drug companies more, the opposite of what most Americans want to do. But, in contrast to much of the rest of health care in the U.S., here is an area where prices may be too low.

Austin B. Frakt, PhD, is a health economist with the Department of Veterans Affairs, an Associate Professor at Boston University’s School of Medicine and School of Public Health, and a Visiting Associate Professor with the Department of Health Policy and Management at the Harvard T.H. Chan School of Public Health. He blogs about health economics and policy at The Incidental Economist and tweets at @afrakt. The views expressed in this post are that of the author and do not necessarily reflect the position of the Department of Veterans Affairs, Boston University, or Harvard University.


One of many reasons to blog about research is that blog posts can, and often do, reach a much wider audience than the studies they rely on. A paper by Jenny Hoang and colleagues is a case study that makes this point.

The investigators compared traffic (online page views) to two radiology journal articles and a blog post about them by their senior author, from April 2013 to September 2014. The articles — both on the topic of incidental thyroid nodule imaging — appeared in the American Journal of Neuroradiology (AJNR) in September 2013 (and ahead of print in April 2013) and the American Journal of Roentgenology (AJR) in January 2014 (and, I presume, ahead of print before then as well). Links to both articles were emailed to the journals’ email lists. The AJNR article was also tweeted and discussed on the AJNR Fellows’ Journal Club podcast. The AJR article was selected for the AJR Journal Club.

The Radiopaedia.org blog post referencing both the AJNR and AJR articles was originally posted on November 5, 2013, and updated on August 16, 2014. It was shared on Facebook, Tumblr, and Twitter to followers of Radiopaedia.org on those social media platforms in February 2014 and August 2014.

Which got more traffic, the journal articles or the blog post about them? You can tell by the title and first paragraph of this post that it was the blog post, right? OK, so try to guess how much more traffic the blog post received. Was it more by a factor of 2? 6? 9? 123?

The answer is 6. The journal articles received a combined 5,478 page views over the study period. The blog post received over 32,675, the vast majority from Facebook. An increase in AJNR traffic is apparent in the chart below in August and September 2014; it was probably spillover from the blog post.

[Figure: Page views over time for the journal articles and blog post]

There are lots of reasons why a blog post would get more traffic than a journal article. It’s shorter and easier to read. It’s free and ungated. In the case of this study, the Radiopaedia.org blog is very popular. Not every blog would garner the same level of traffic. We should also acknowledge that readership volume is a process measure that may not translate into any greater scientific, practice, or policy influence. And, to be fair, the journal articles were also distributed in print. If one accounts for print circulation, the blog’s readership was still higher (by how much, the authors did not say).

Finally, this analysis is a case study, so it’s unclear how much the findings can be generalized. That said, there are a few other studies to consider, as the authors summarize:

Allen et al found that blogging and social media promotion of 16 selected articles correlated with increases in daily rates of downloads for full-text articles by 3 to 4 times. Shema et al showed that articles that were promoted on a research blog (ResearchBlogging.org) increased journal citations compared with articles that did not receive blog citations. Finally, Thelwall et al found a positive correlation between altmetrics and eventual scholarly citations for >200,000 PubMed articles with at least 1 altmetric mention, compared with other articles published immediately before and after it in the same journal. However, a prospective randomized study recently published in the high-impact cardiology journal Circulation found no advantage for articles promoted via social media versus those not promoted within a 30-day interval.

By the way, Hoang et al. appeared in the Journal of the American College of Radiology, which I do not read. It was brought to my attention on Twitter, an increasingly useful means of sharing studies of interest, at least in my experience. So, now you know not only why some of us blog, but also why we tweet.

Austin B. Frakt, PhD, is a health economist with the Department of Veterans Affairs, an Associate Professor at Boston University’s School of Medicine and School of Public Health, and a Visiting Associate Professor with the Department of Health Policy and Management at the Harvard T.H. Chan School of Public Health. He blogs about health economics and policy at The Incidental Economist and tweets at @afrakt. The views expressed in this post are that of the author and do not necessarily reflect the position of the Department of Veterans Affairs, Boston University, or Harvard University.


By Clare Roche, AcademyHealth

Working with students and new professionals, I constantly hear phrases like, “I’m new to this field; how likely is it that I make a significant impact?” “It seems like an uphill battle to create change,” and even “How will the everyday tasks I work on translate and impact others?” From an individual’s perspective, especially for those just starting their professional careers, impacting the field in its entirety can seem daunting – even impossible. How could a single student or member of an organization impact legislation, inform key decision-makers, or even foster global development? Thankfully, I’m able to counter their dispirited comments with a proposition: what if you could join a specialized network to help you navigate these challenges?

AcademyHealth Interest Groups (IGs) serve this exact purpose! IGs facilitate interaction among individuals around specific topic areas related to health services research and health policy. Our 17 IGs act as villages within a community, helping individuals navigate the potentially overwhelming bustle that is the field of HSR. Together with other members who share a common goal, participants are able to address current and future needs of an evolving health system and ultimately translate evidence into measurable action.

AcademyHealth’s Interest Group members are among the leaders in the health care industry. They work on a number of current and topical issues, such as:

  • Measuring Health Care Quality Improvement and Safety;
  • Improving Access to Health Care; and
  • Engaging Marginalized Populations into Participatory Roles in Research

All of these topics are abuzz in the health services research and health policy fields. Several of our IGs are in the midst of planning web-based meetings to disseminate relevant, impactful information to our members. Topics include:

  • Microsimulation Models to Improve Health Care Delivery;
  • Enhancing Race, Ethnicity Data in Administrative Databases; and
  • Care Coordination across the Continuum of Care

Through the dissemination of research findings, networking, and building on research skills, each IG is working to positively impact not only their community, but ultimately the field of health services research.

One student’s comment at the end of our conversation summed up the value of AcademyHealth’s Interest Groups quite nicely. She said, “After all, the saying does go, ‘it takes a village.’”

Participation in Interest Groups is open to all individual members of AcademyHealth. Join now to become a part of a “village.”

Clare Roche is the Student Membership and Chapter Coordinator at AcademyHealth where she manages all student chapters and student members, as well as coordinates the scholarships and awards programs. She encourages you to get involved with AcademyHealth through the formation of student chapters which enhance the learning and professional development experience for students in health services research and health policy. Clare graduated from the George Washington University as a double major in Communication and Business. Contact her at [email protected].


Insurance is just a first step towards improving access. Also important is whether that insurance helps people to get the care that they need. In a recent report from the Urban Institute, John Holahan, Michael Karpman, and Stephen Zuckerman probed that question. They surveyed people between ages 18 and 64 in September of 2015 as part of the Health Reform Monitoring Survey. They focused on people who had incomes below 400% of the federal poverty line, and thus would be eligible for either subsidies or Medicaid.

It’s important to remember that this group makes up all of the people who get Medicaid and more than 80% of those who obtain private insurance through the exchanges. Most people who do not get insurance from their jobs qualify for some level of tax credits to help them obtain insurance.

First, the researchers queried whether people had a “usual source of care.” Only about half of people who were uninsured for part or all of the last year answered yes, while about three-quarters of those with employer-sponsored insurance (ESI), Medicaid, exchange private plans, or non-exchange private insurance answered in the affirmative. Insurance of any stripe improved things greatly. Only about 41% of the uninsured had a checkup in the last year, compared to 65%-70% of those with any type of insurance. On the other hand, people with Medicaid had the most trouble getting a doctor’s appointment (19%), compared to those with other types of insurance (9%-14%) and the uninsured (12%).

Almost 40% of the uninsured had an unmet health care need in the past year because of cost. Of those with private insurance, 27%-29% had such an unmet need. Of those with Medicaid, though, only 21% had an unmet health care need because of cost. This is likely because Medicaid has very low (if any) deductibles or co-pays, let alone premiums. Cost is not nearly as much of a barrier.

Similarly, while 28% of the uninsured had problems paying medical bills in the last year, only 22%-26% of people with private insurance did, and only 16% of those with Medicaid reported such problems. This is – again – because of out-of-pocket costs. Only 17% of the uninsured had out-of-pocket costs of more than $1,500 in the last year. Contrast that with 21% of those with exchange private plans, 24% of those with ESI, and 32% of those with non-exchange private plans. Only 8% of those with Medicaid hit that threshold. The uninsured and those with Medicaid also had no deductibles, while about 46% of those with exchange private plans, 33% of those with ESI, and 45% of those with non-exchange private plans had deductibles of at least $1,500.

When asked about satisfaction with their plans, answers differed as well. With respect to choice of doctors and providers, the highest levels of dissatisfaction were seen among those with private exchange plans (14%). Both Medicaid and non-exchange private plans had 10% dissatisfied with choice in this area. Only 6% of those with ESI were unhappy with their choice of doctors. With respect to premiums, more dissatisfaction was seen, including 31% of those with non-exchange private plans, 25% of those with exchange private plans, and 21% with ESI (compared to only 8% of those with Medicaid). When asked about satisfaction with protection from high medical bills, answers were similar, with dissatisfaction of 26% of those with non-exchange private plans, 25% of those with exchange private plans, and 17% with ESI, and 8% of those with Medicaid.

In many ways, this is good news. The vast majority of people with all types of plans are satisfied with their choice of physicians and providers, even those with Medicaid, contrary to many media reports. People are less satisfied with the costs of insurance, and with the protection they receive from high costs, but – again – Medicaid seems to do well there, too.

We do, however, still have too many people with unmet health care needs because of costs. Too many avoid care because of the cost, and too few people have a usual source of care. Ironically, in many of these areas as well, Medicaid seems to do better than private insurance.

Aaron


Although the ACA has significantly reduced the percentage of Americans who are uninsured, we have not yet come close to universal coverage. This has become a topic of focused debate among the Democratic primary candidates. Short of achieving full coverage by passing a single-payer plan (which seems very unlikely in the near future), further gains in insurance coverage will come through means available under the ACA.

It’s worth revisiting, therefore, exactly who constitute the uninsured at this point. A better understanding could allow policymakers and advocates to focus their efforts on those populations. A recent report from the Robert Wood Johnson Foundation and The Urban Institute covered just that:

Data collected from the 2015 Current Population Survey—Annual Social and Economic Supplement (CPS-ASEC) provides information on those with and without insurance coverage from a large, federal, nationally representative survey (most of the data are collected in March, although there are some interviews in February and April; hereafter we refer to the data as having been collected in March 2015). Although the CPS-ASEC questionnaire changed in significant ways in 2014 such that it should not be used to compare 2015 coverage to 2013, the data allow analysts to assess the characteristics of those remaining uninsured following the first year of implementation of the ACA’s main coverage provisions and after two marketplace open enrollment periods.

According to their most recent surveys, about 12.2% of the non-elderly, non-military, non-institutionalized population remains uninsured. That’s just under 33 million people. About half of them live in states that have refused the Medicaid expansion, and that has certainly made a difference: the rate of uninsurance in states that adopted the expansion is 10.1%, compared to 15.4% in states that refused it. It’s clear, therefore, that one way to reduce the number of uninsured at this point would be to increase the number of states participating in the program.

More than a quarter of the uninsured are eligible for Medicaid or CHIP, and about two-thirds of uninsured children fall into this category. These are all people who could have insurance if they could overcome the barriers and hurdles necessary to sign up for coverage. It’s also possible this is an information gap: many of them may not know they are eligible, and may not have tried to obtain Medicaid or CHIP for themselves or their children.

An additional 21% of the uninsured qualify for subsidies on the exchanges, but have not obtained plans. This, too, could be an information issue, where people do not know they are eligible for tax credits. It could be that they feel that, even with the tax credits, they still cannot afford coverage. It could also be that they simply do not want insurance, and would rather pay the penalty of the individual mandate.

Clearly, however, there is much to be gained from outreach. Efforts to increase enrollment in both Medicaid and CHIP, as well as through the exchanges, could significantly increase the number of people who are already eligible for coverage, but have not yet obtained it. More than 80% of the uninsured eligible for Medicaid or CHIP live in metropolitan areas. More than two-thirds of them live in families in which at least one family member is already receiving the earned income tax credit and some other public benefit. Nearly half have at least one school-aged child in their family. It’s possible to locate many of these people, and help them sign up for coverage.

In addition, furthering the Medicaid expansion is a straightforward way to decrease uninsurance. That will require more political efforts, and a different skill set.

Regardless, increasing the number of Americans who have health insurance is only one goal of improved access. Making sure that care is still affordable, and that underinsurance doesn’t become a bigger issue, is a whole different ball game.

Aaron


The budget process sometimes feels like elaborate theater with its procedures, budget battles, and standstills. Nevertheless, these decisions have to be made – and as fiscal constraints become ever more real, so do the threats. Over the years, we’ve fought to preserve funding for health services research, and in particular the Agency for Healthcare Research and Quality (AHRQ). Act One ended well; the agency survived. But for how long? As the curtain opens on Act Two, a season with its own challenges and threats, there’s an important page to be taken from the history books: a case study of the demise of the Office of Technology Assessment (OTA).

The Office of Technology Assessment (OTA), a former nonpartisan analytical support agency of the United States government, closed its doors on September 29, 1995, after Republican leadership in Congress sought to cut costs and, more generally, reduce the size and scope of the federal government. During its more than 20-year history, OTA provided Congress and the public with comprehensive analyses on science and technology issues, producing nearly 800 reports on everything from agriculture, education, and telecommunications to bioengineering, medicine, space, and energy.

This agency’s elimination came as a shock to many. After all, OTA was producing work that seemed befitting of bipartisan support and was operating under a modest $22 million budget, a fraction of the $2.4 billion legislative branch appropriation at the time. In spite of these considerations, the then-chairman of the Senate Appropriations Subcommittee on the Legislative Branch, Senator Connie Mack (R-FL), couldn’t be dissuaded from zeroing out the agency’s fiscal year 1996 funding. According to a June 1995 article from the American Institute of Physics, Senator Mack’s concerns about OTA centered on two points (note the first): “The first was that some of OTA’s research is performed elsewhere. He also criticized OTA for doing research on topics that did not have a strictly technological orientation.”

OTA’s proponents, which included Representative Rush Holt (D-NJ), Senator Orrin Hatch (R-UT), Senator Charles Grassley (R-IA), late Senator Edward Kennedy (D-MA), and a number of advocacy groups, such as the Union of Concerned Scientists, contended OTA enabled “members of Congress and the public to better understand the advantages and implications of the science and technologies in which they are asked to invest.” Some, including Representative Holt, who is a former physicist, argued the office was a fiscally sound investment; in a press release he called eliminating OTA to cut costs “foolish,” as “OTA had always saved taxpayers far more money than it had cost.”

Despite Representative Holt’s protests, OTA was dissolved, and although House Speaker Newt Gingrich claimed Congress could get help elsewhere, Representative Holt said those claims didn’t work. According to Representative Holt, what happened instead was that OTA’s elimination took a scientific toll on Capitol Hill:

“When OTA shut down, technological topics did not become less relevant to the work of Congress. They just became less understood. And scientific thinking lost its toehold on Capitol Hill, with troubling consequences for the ways Congress approaches all issues – not just those that are explicitly scientific.”

During its tenure, OTA produced a considerable amount of scientific and technological evidence and had support (albeit limited) from legislators on both sides of the aisle – so what led to its downfall?

Dr. Bruce Bimber, assistant professor in the Department of Political Science at the University of California, Santa Barbara, wrote an enlightening piece on the agency, titled “The Death of an Agency: Office of Technology Assessment & Budget Politics in the 104th Congress” (1996). In it, he highlights three characteristics that are central to understanding OTA’s eventual fate (again, pay attention):

“The first was its small internal constituency within the legislature. Unlike CRS, which provides services to virtually every legislator and committee, or GAO which produces dozens of studies annually for many legislators, OTA’s regular constituency numbered only a few dozen senior legislators at most…

Just as important was the fact that OTA had no regular role in the policy-making process…The influence of the agency’s work on policy was difficult for many to see…Legislators rarely drew the agency into the more publicly visible processes of debating bills, voting, and publicly explaining decisions. This fact contributed to OTA’s low profile inside Congress and especially outside of it.

The third important characteristic of the agency was its strategy toward publicity and visibility. OTA fostered its own low profile and committed itself to the avoidance of controversy. It never attempted significant institutional expansion or made a priority of establishing a secondary constituency for its work among rank-and-file legislators, the leadership, or the media. Instead, it focused a tremendous amount of organizational effort on balancing the often conflicting interests of its small, primary clientele.”

For those working in health policy, does this sound at all familiar? If not, it should.

The Agency for Healthcare Research and Quality (AHRQ) finds itself in eerily similar circumstances to those of OTA in 1994 and the lead-up to 1995. Not only is the political context virtually identical—a Republican-controlled Congress looking to cut costs in a tight and competitive fiscal environment—but AHRQ is also fighting the same uphill battle to demonstrate its value on Capitol Hill.

Since fiscal year 2012 (and really since its first near-death experience during this same period in 1994), AHRQ and its supporters have been fighting to prove that its mission is unique and fundamentally different from that of other research agencies – and therefore won’t be performed effectively elsewhere without clear authority.

To that point, AHRQ has a low profile inside—but especially outside—of Congress. AHRQ is rarely mentioned during hearings and testimonies, is habitually absent from constituents’ meetings with congressional staff, and, notably, is rarely mentioned by name or function (health services research) in the media. Yet AHRQ, as the only agency with a congressional mandate to conduct health services research, has made momentous contributions to health care, including groundbreaking work on central line infections, and is home to highly relevant and widely utilized tools and datasets, including MEPS (the Medical Expenditure Panel Survey), the most complete source of data on the cost and use of health care and health insurance coverage.

Nevertheless, despite AHRQ’s charge to generate the evidence to make health care safer, of higher quality, and more accessible, equitable, and affordable, the work of AHRQ and its researchers goes largely unnoticed. Being unnoticed is in many ways its own threat. Revisiting Bimber’s piece, he argues that OTA was essentially collateral damage of a larger legislative budget strategy:

“OTA was terminated precisely because it was a small and uncontroversial part of the federal budget, not in spite of this fact…Congress terminated the agency…not because it offered substantive financial savings to legislators hard pressed to wring dollars out of the budget, but because it offered a politically inexpensive way for legislators to signal their willingness to lead the way in making sacrifices in the name of budget reduction. Ultimately that strategy failed, leaving legislators without the symbol they sought or the services of the agency.”

Today, AHRQ is in real danger of meeting this same fate.

The House is expected to propose zeroing out the agency’s budget once again in FY17.

It is more important than ever for the health research and policy communities to demonstrate why terminating AHRQ isn’t politically expedient or in the best interest of the American people. Looking at history, we know that every threat of elimination requires a robust response. We cannot sit quietly, unnoticed, on the sidelines. Don’t let history repeat itself.



Today, U.S. Department of Health and Human Services Secretary Sylvia Mathews Burwell announced that Andrew Bindman, M.D., has been named as the next director of the Agency for Healthcare Research and Quality (AHRQ). Dr. Bindman will begin his appointment on Monday, May 2, 2016.

Dr. Bindman is an established leader in health services and policy research with over 130 peer-reviewed articles and has been a longtime friend to AcademyHealth and the field of health services research.

Currently, Dr. Bindman is a professor of medicine, health policy, epidemiology and biostatistics at the University of California, San Francisco (UCSF) and director of the University of California Medicaid Research Institute (CAMRI) and UCSF’s Primary Care Research Fellowship. As Director of CAMRI, he has been instrumental in helping AcademyHealth establish the State-University Partnership Learning Network (SUPLN), a network of 23 partnerships in 19 states that works collaboratively to support evidence-based state health policy and practice with a focus on transforming Medicaid-based health care, including improving the patient experience, improving the health of populations, and reducing the per capita cost of health care. He is also the recipient of two prominent AcademyHealth awards: the Alice S. Hersh New Investigator Award (1996) and the Article-of-the-Year Award (1996).

In addition to these roles, Dr. Bindman has served as a senior advisor in the Centers for Medicare and Medicaid Services (CMS), where he worked on using data analytics to accelerate health care transformation in Medicaid. He also has extensive knowledge of patient safety issues, having worked as a physician and as the director of UCSF’s Primary Care Research Center, as well as pertinent Hill experience, serving as a Robert Wood Johnson Health Policy Fellow from 2009-2010 on the staff of the U.S. House of Representatives Energy and Commerce Committee.

We commend those in HHS and beyond for recognizing the tremendous contributions Dr. Bindman has made to improving health care with high quality evidence, a cornerstone of AHRQ’s mission.

“I couldn’t be more thrilled to hear of Dr. Bindman’s appointment, especially at this critically important moment in time,” said AcademyHealth President and CEO Dr. Lisa Simpson. “AHRQ is uniquely positioned to fund and support the research we need to understand the impact of all the changes in health care delivery and bring that evidence forward to Capitol Hill. I have witnessed first-hand Dr. Bindman’s ability to explain and convey the findings of health services research in an accessible way, and I look forward to working with Dr. Bindman in communicating the value of investing in health services research.”

AcademyHealth is excited to begin working with Dr. Bindman as we continue our efforts to bring health services research to the forefront, illustrating how research, tools, and datasets – like those supported by AHRQ – can help us understand and improve a complex, costly health system and achieve better outcomes for more people at greater value.


As I wrote in my previous post, the vast majority of the two hours or so it usually requires to see a doctor is spent not seeing the doctor. Travel and waiting time take big chunks out of our day. With modern technology, for some kinds of care, that wasted time can be avoided.

Of course I’m talking about telemedicine (TM), which includes the use of the telephone, video conferencing, and secure messaging that can replace doctor visits. There are other forms of TM too, like remote patient monitoring and electronic ICUs, that aren’t intended to replace outpatient doctor visits; these help remote doctors advise the clinicians who are with the patient in the hospital or ED.

Does TM work? Is it at least as good as in-person patient care? The evidence I’ve found is strongly in the affirmative.

A recently completed systematic review and meta-analysis by Gerd Flodgren et al. (2015) examined 93 randomized controlled trials (N = 22,047 participants; 23 trials of heart failure patients, 16 of diabetes patients). Interventions varied by study: chronic condition monitoring (41 studies), provision of treatment (12 studies), education/support for self-management (23 studies), consults for diagnosis or treatment (8 studies), clinical assessment (8 studies), and screening (1 study). (The total adds to more than 93 because some studies examined more than one thing.)

Findings included no mortality differences between TM and non-TM heart failure patients, but improved quality of life for those who received TM; better glucose control for diabetes patients through TM; and lower LDL cholesterol and blood pressure with TM. There were no outcome differences for mental health and substance use disorder patients or dermatology patients. The review found inconsistent results on hospital admissions.

Many prior literature reviews and studies came to the same general conclusion: TM is associated with no worse, and often better, outcomes. With respect to heart failure, see this systematic review, this program at Partners Healthcare, and Polisena (2010), which found lower mortality, readmissions, and health care use for TM patients. Other reviews found “improved outcomes of using TM in the delivery of cardiac rehabilitation and diabetes care” (Balas 1997); “improved clinical outcomes of TM in hypertension, but conflicting evidence for the effectiveness in diabetes” (Hersh 2001); “no differences in quality of life or in the number of emergency department (ED) visits, but a lower hospitalization rate for patients receiving TM, compared with the control group” (McLean 2010); “fewer ED visits and hospitalizations in chronic obstructive pulmonary disease (COPD) patients receiving TM as compared to control, and improved quality of life” (McLean 2011; Polisena 2010a); and a “reduced number of patients admitted to hospital and fewer hospital bed days for patients receiving home TM” (Polisena 2009).

James Ralston and colleagues report that “recent trials [of TM] suggest a positive impact on control of blood pressure in patients with hypertension and glycemic control in patients with type 2 diabetes.” See also this study on improvements for PTSD patients using telemedicine.

Kaiser Permanente was one of the early leaders in offering online health services (portals) with which patients can view parts of their medical records, schedule appointments, refill prescriptions, or e-mail their doctors or pharmacists, among other functions. Kaiser Permanente Northern California (KPNC) offers additional TM services, including 10-15 minute telephone visits. According to Robert Pearl, 80% of dermatology cases involving rashes are resolved by digital communication at KPNC. Prior literature on teledermatology has demonstrated high rates of diagnostic accuracy, diagnostic concordance (i.e., level of agreement with face to face consultation), and patient and provider satisfaction.

KPNC’s after hours video visits provide a substitute for patients who might otherwise have to visit an ED. The vast majority of patients (85%) rated TM visits “very good” or “excellent” for meeting their needs. KPNC offers pregnant women at risk for substance use disorders support services via video visits. A 2008 study found this program associated with a lower rate of fetal death and preterm birth.

According to Pearl, there is no apparent impact of electronic communication on the number of office visits: as KPNC’s virtual visits have grown, in-person visits have held constant. Suzanne Leveille and colleagues found something similar, examining 45,000 adult primary care patients of a rural Pennsylvania health system (Geisinger Health System) and a Boston academic medical center (Beth Israel Deaconess Medical Center) over one year before electronic portal implementation and one year after (2009-2011). They concluded that patients use portals after visits, but portal use does not drive an increase in future visits, consistent with North et al. Another study, by Zhou et al., did find portal use associated with more pediatric visits at Kaiser Permanente.

Kenneth McConnochie and colleagues studied TM in Rochester, NY among 1,216 children with access to it, matched to children without access. The TM system was designed for “triage, diagnosis, and treatment of acute problems.” Children (or their families) with telemedicine access had 23.5% more overall visits (telemedicine plus in person) but 22.2% less ED use. Reduced ED use associated with TM has been found in other work as well.

Intravenous, clot-busting drugs are effective at treating acute ischemic strokes, but their application requires expertise that is less available in remote, rural areas. A 2010 systematic review of telestroke, which facilitates use of the drugs through videoconferencing between emergency department physicians in remote areas and distant stroke specialists who provide guidance on their administration, concluded that it “lead[s] to better functional health outcomes, including reduced mortality and dependency, compared with conventional care.” Richard Nelson and colleagues estimated the cost-effectiveness of telestroke: the approach would cost more than usual care but would improve outcomes, and it was estimated to be cost-effective over the long term.

In general, TM cost estimates vary, as does methodology. Systematic reviews do not consistently find telemedicine more cost-effective than face-to-face care, but they also find the cost-effectiveness methodology of most studies to be weak. Few studies reported the opportunity costs of travel and waiting time.

Julia Adler-Milstein and colleagues used the IT supplement to the 2012 AHA Annual Survey of Hospitals (2,891 acute care, non-federal hospitals), combined with the Area Resource File, Medicare inpatient claims, the Health Resources and Services Administration’s Health Professional Shortage Area files, and the Dartmouth Atlas. They found that 42% of hospitals used TM in 2012, but with wide variation by state: close to 70% or more in Maine, South Dakota, Arkansas, and Alaska; Rhode Island had 0%, and Utah just 13%.

Those that used TM tended to have greater technological capabilities more generally, were more likely to be part of a larger hospital system, more likely to be teaching hospitals, and more likely to be rural. TM use is more likely in states that require insurers to reimburse it the same as face-to-face visits, and less likely in states requiring out-of-state providers to obtain a license in the state in which they’re providing telemedicine care.

Studies have consistently found TM to be at least as effective as, and often more effective than, traditional care. Of course, it’s not applicable to every type of care, and it may not always reduce costs (estimates vary). Finally, there are barriers to its provision in some states, and not all insurers cover it (Medicare among them).

It seems inevitable that things will change, that TM will become more common. As it does, you and I will spend less time traveling to and waiting at doctors’ offices.

Austin B. Frakt, PhD, is a health economist with the Department of Veterans Affairs, an Associate Professor at Boston University’s School of Medicine and School of Public Health, and a Visiting Associate Professor with the Department of Health Policy and Management at the Harvard T.H. Chan School of Public Health. He blogs about health economics and policy at The Incidental Economist and tweets at @afrakt. The views expressed in this post are that of the author and do not necessarily reflect the position of the Department of Veterans Affairs, Boston University, or Harvard University.
