CER, HIT, and Women’s Health Research

Below is a video of my discussion with Phyllis Greenberger, President and CEO of the Society for Women’s Health Research, about the implications of comparative effectiveness research (CER) and information technology for women’s health and quality improvement.

What are your thoughts about CER and HIT?  Will they lead to higher quality, lower cost, or more efficient/better healthcare?  And if so, how soon?


FYI – The SWHR’s July 18-19 meeting mentioned in the video is “What a Difference an X Makes: The State of Women’s Health Research.”

Historical Perspectives on Health Policy: Part 3

I just found my copy of the book “Improving Health Policy and Management” edited by Stephen Shortell and Uwe Reinhardt.  The book’s eleven chapters address many of the hot-button issues in today’s health reform debate:

  1. Creating and Executing Health Policy
  2. Minimum Health Insurance Benefits
  3. Caring for the Disabled Elderly
  4. An Overview of Rural Health Care
  5. Effectiveness Research and the Impact of Financial Incentives and Outcomes
  6. Changing Provider Behavior: Applying Research on Outcomes and Effectiveness in Health Care
  7. Health Care Cost Containment
  8. Redesign of Delivery Systems to Enhance Productivity
  9. Medical Malpractice
  10. Prolongation of Life: The Issues and the Questions
  11. Challenges for Health Services Research

The observant reader will notice one critical issue from today’s debate missing from this list… Information technology.  That is because this book was published in 1992… and actually the titles of the first and last chapters also included “in the 1990s.”

What this points out is that the fundamental issues of controlling costs, defining benefits, and improving efficiency in care delivery and through financial incentives are not new to the health care debate.  Reinforcing this historical reality, I recently ran into Professor Stuart Altman from Brandeis – one of the most insightful, clear-thinking, non-ideological health policy experts I’ve ever had the pleasure of talking to and hearing testify before Congress. He told me on a rainy NYC sidewalk that he has been talking to people across the country about how the current debate is both similar to and different from those of the early 1990s, the 1980s, the 1970s… and back to even the 1930s… and despite the ongoing delays he is hopeful that legislation will be enacted this time.

So while the issues haven’t changed, and likely won’t change no matter what legislation is enacted in the coming months (and years), the hope is that this time around progress will be made so that health care becomes less of a national obsession (and drag on the economy), and people and politicians can focus on life, liberty, and the pursuit of happiness, rather than illness, access to needed treatments, and financial uncertainty.

Historical Perspective on Health Reform – Part 1, Medical Effectiveness

Since the time-line for health reform legislation has continued to be stretched, I recently spent some time cleaning out old files.  In my excavations I came across papers, articles, memos and briefing books which demonstrate that no matter how much things change, some aspects of health reform have stayed the same.  For example, below are a couple of snippets from memos about a proposed Medical Effectiveness Initiative from circa 1989:

Establishing a Medical Effectiveness Initiative at the OASH [Office of the Assistant Secretary of Health] level. (FY90 request = $52 million) This initiative would assess which medical treatments are cost-effective, and identify inappropriate and unnecessary medical practices. This knowledge would be used by reimbursing agencies in containing health care costs. [FYI – for budgetary comparisons, FY89 budget authority was $7.15 billion for the NIH, $536 million for the FDA, and $141 billion for HCFA – now CMS.]

The Secretary’s Effectiveness Initiative for promoting the public health has as its goals:

  • improving the quality of health care received by Americans through the provision of effective, appropriate care, and involving the consensus of the medical community;
  • control of health care costs through elimination of ineffective and unnecessary medical treatments and comparison of the cost-effectiveness of alternative treatment modalities, thus insuring access to care;
  • enhancing the scientific basis of medicine through application of current technology (e.g. meta analysis; mainframe and software design) to the issues of medical treatment effectiveness; and
  • enhancing the competitive basis of the health care industry through provision of information to patients and providers on risks and benefits, including cost-effectiveness of medical treatments.

While the budgetary size of the proposal is very small compared to current initiatives, (e.g. the $1.1 billion for Comparative Effectiveness Research enacted earlier this year in the stimulus legislation), the wording and rationale for the proposals sound very similar – except that this initiative would explicitly use the information to alter government reimbursement practices, which was precluded under the ARRA bill.

One difference that dates this language is the phrase “mainframe and software design.”  There have been significant advances in computer technology – which we now term IT – and these advances enable much better and more rapid monitoring of quality, as well as spending and utilization.  Such near real-time quality and cost monitoring is important for implementing programs that provide cost and quality information to clinicians, patients, payers and others.  Analyses based on information that is days, weeks or maybe a month or two old, and that reflects individual actions, are much more effective for changing behaviors and practice patterns than data that is years old and may be aggregated for a population or across a region.  In addition, IT advances have made risk adjustment a much more robust process – if not exactly precise.  This is critical for the success of quality improvement and cost control programs, because the first response from every clinician presented with information that their care is costlier, or somehow lower in quality, than their peers’ is that their patients are more severely ill than average, and that this explains why their costs are higher and outcomes poorer.
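To make the risk adjustment point concrete, here is a minimal sketch of the basic observed-to-expected comparison that indirect risk adjustment relies on; the severity categories, expected costs, and patient data are all invented for illustration and are not from any of the programs discussed above.

    # Hypothetical sketch of indirect risk adjustment: compare a clinician's
    # observed costs to the costs "expected" given their patients' severity mix.
    # The severity categories, dollar figures, and patient list are all invented.
    expected_cost_by_severity = {"low": 800, "moderate": 1500, "high": 3200}  # assumed $ per episode

    def observed_to_expected(patients):
        """patients: list of (severity, actual_cost) tuples for one clinician's panel."""
        observed = sum(cost for _, cost in patients)
        expected = sum(expected_cost_by_severity[severity] for severity, _ in patients)
        return observed / expected  # ratios above 1.0 mean costlier than the case mix predicts

    # A clinician with a genuinely sicker panel has a higher expected total,
    # so severity alone no longer makes them look like a high-cost outlier.
    clinician_a = [("high", 3400), ("high", 3100), ("moderate", 1600)]
    print(round(observed_to_expected(clinician_a), 2))  # 1.03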

Next Up: Part 2 – Historical Perspectives on Universal Coverage and Cost Containment

Encouraging Communications About Patients’ Goals

I attended a great event yesterday where experts discussed how to improve healthcare quality and safety by increasing patients’ involvement in making healthcare decisions.

This seminar, “Patient-Centeredness and Patient Safety: How Are They Interconnected,” was organized by the Kenneth B. Schwartz Center and sponsored by the Massachusetts Medical Society and CRICO/RMF.  Don Berwick (President & CEO of the Institute for Healthcare Improvement) was the main speaker, followed by a panel consisting of two patient safety leaders from local hospitals and a patient involved with promoting patient engagement in quality improvement.

To start the event, Dr. Berwick discussed how his thinking about healthcare quality had evolved over several decades, and his increasing belief in the importance of patient involvement. He discussed his Health Affairs article on Patient-Centered care, and summarized his current thinking about how to design patient-centered care in 8 bullets:

  1. Place the patient at the center
  2. Individualize
  3. Welcome family and loved ones
  4. Maximize health influences within care
  5. Maximize health influences outside of care
  6. Rely on sophisticated, disciplined evidence
  7. Use all relevant capabilities – waste nothing
  8. Connect helping influences with each other

Communications Is Crucial for Achieving Patient-Centeredness and Goal Sharing
The essence of the panel’s discussion was how to improve communications between patients and their clinicians so that each other’s goals were shared and understood.  One example raised by a panelist was initiatives to prevent patients from falling in the hospital.  Patients may see nurses being in bathrooms with them as intrusive or uncomfortable, but discussing their shared goal of not having patients fall and hurt themselves shifts the context of the nurse’s action and enables it to be embraced by the patient rather than resisted.

From the patient’s perspective, clinicians too often have their own ideas about what the goals of treatment should be, but without understanding the patient’s life interests and goals the two may be disconnected.  For example, clinicians often ask patients what they do for work to understand whether the treatment or the outcomes will be compatible with their jobs, but often a patient’s happiness or life fulfillment is related to something outside of work, such as playing the piano, playing with grandchildren, rollerblading, hiking with their dogs in the mountains, or hang-gliding.  Treating a patient’s injury or illness so they can do (or be able to try to do) those activities may be very different from what would be indicated if the goal was to enable them to work in an office.

Creating Policies to Promote Communications and Goal Sharing
Dr. Berwick’s presentation also included a brief discussion of how evidence based medicine (EBM) can improve patient safety by avoiding unnecessary care and setting realistic expectations about the outcomes for chosen treatments.  This is captured in his 6th bullet above. One of the challenges in the current push towards more EBM – and comparative effectiveness research (CER) – is what to actually measure in this research. Combining the health system’s desire for optimal outcomes with patient-centeredness, (i.e., his 2nd bullet – “Individualize”), could be achieved by including the patient’s goals for their treatment as one of the outcomes measured in EBM and CER programs.

Benefits of Measuring Achievement of Patients’ Goals as an “Outcome”
Process measures, (such as the percentage of patients who’ve received a recommended treatment), are usually easier to evaluate, but are really proxies for clinical outcomes.  Actual outcomes like mortality or hospitalization can be harder to evaluate, in part because of individual patient differences, and thus the raw data needs to be risk adjusted. However, measuring achievement of the patient’s goals could be a very important and valuable addition to these evaluations – and could be a rough way to inherently risk adjust the data, i.e. the “goals” of treating a broken hip may be different for a 50-year-old than for someone who is 70.  The actual measurement of such goal achievement could be based upon answering the question “how well were the patient’s goals met?”  Clearly this would have to be quantified in some way – perhaps by the patients themselves on an 11-point scale from 0 to 100%.
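As a purely hypothetical illustration, not an established instrument, a delivery system or clinician could then be scored on the average of patients’ self-reported goal achievement on that 11-point scale:

    # Hypothetical scoring of "patient goal achievement" as an outcome measure.
    # Patients rate how well their stated goals were met on an 11-point scale
    # (0%, 10%, ..., 100%); the provider is scored on the mean rating.
    ALLOWED_RATINGS = [i * 10 for i in range(11)]  # 0, 10, ..., 100

    def goal_achievement_score(ratings):
        """ratings: patient-reported percentages, each on the 11-point scale."""
        assert all(r in ALLOWED_RATINGS for r in ratings), "ratings must be multiples of 10 from 0 to 100"
        return sum(ratings) / len(ratings)

    # Example: five patients report these ratings, so the clinic averages 72%.
    print(goal_achievement_score([80, 60, 100, 70, 50]))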

Not only would measuring this “patient goal achievement” outcome add a useful dimension to some research, but it would also put the question of “what are the patient’s goals?” right at the front of the patient-clinician conversation.  And in the context of health reform and system improvement, by using the dictum that “we manage what we measure,” measuring how well delivery systems and clinicians are achieving patients’ goals could be an important force for transforming care delivery.

Bottom Line for Patients and Clinicians
The next time you’re a patient talking to a clinician, be sure to talk about your goals for treating whatever ailment caused you to see that clinician.  And clinicians need to tell their patients what goals they expect to achieve from the treatment they’re recommending.  This is the start of a conversation, since the patient’s expectations may not be realistic – such as a patient with a severe fracture who wants to run a marathon in three weeks.  But by understanding each other’s goals and expectations they can agree on what should be done and how to proceed.

Need for Continuity of Care and Primary Care Clinicians
Of course some patients may seek to “doctor shop,” looking for a clinician who will promise to achieve their goals.  This can be good if the first clinician isn’t attuned to the patient’s wishes, but it can also be bad if the patient’s expectations are unrealistic.  That is why having a trusted relationship with a primary care clinician can be so important, since a PCC can help the patient evaluate and digest other clinicians’ recommendations.  Again, it comes down to ongoing and two-way communications to understand goals and jointly develop treatment plans and decisions.

Diabetes Updates – New Diagnostics, Increasing Rates, and Implications for Health Reform, CER, etc.

Changes in the diagnosis and treatment of diabetes are a great example for understanding how healthcare delivery constantly evolves based upon new discoveries.  And the history of these changes may help illuminate some thinking about health reform and the development and use of comparative effectiveness research (CER).

First, a little background on diabetes.

Diabetes Background
Diabetes mellitus (or “sugar diabetes”) occurs when the body has problems regulating the level of sugar (specifically glucose) in the blood.  This can be because the body’s pancreas doesn’t produce enough insulin, or for some reason the person’s organs become resistant to the actions of the insulin that is present – or sometimes both occur simultaneously.  Impaired control of glucose means that the levels get too high, which produces problems in the eyes, (leading to blindness), in the kidney, (leading to kidney failure), and in the small blood vessels elsewhere in the body, which can lead to nerve damage and low oxygen delivery to the extremities – particularly the legs and feet, (leading to amputations).

In olden times, diabetes could be diagnosed by sugar in the urine.  (Medical lore says this was done by taste….)  However, until insulin was discovered in 1921 there were no therapies for severe insulin deficiency.  And even once insulin became available, sugar in the urine was still the way diabetes was diagnosed and monitored – usually with a dipstick that changed color depending on the sugar concentration.

It wasn’t until the 1960s that measuring blood glucose levels became possible – and only then in the doctors’ offices because the machines were large and expensive.  In the 1980s machines small and cheap enough for patients to monitor their blood sugar levels at home became available.  This enabled patients to start adjusting their own insulin dosages based upon their blood sugar levels.  (Before this it was too dangerous for patients to significantly alter their insulin dosages because while too little insulin leads to too high sugar levels causing long-term damage, too much insulin can drop sugar levels too low and lead to confusion, coma and death.)

In more recent years it was discovered that keeping diabetics’ sugar levels near normal could prevent essentially all the adverse consequences of diabetes, i.e. blindness, renal failure and amputations. But doing this based upon finger-stick blood sugar levels even 3 or 4 times a day was tricky – and those were just single data points.  So in the mid 1970s it was proposed that monitoring the amount of hemoglobin in the blood that had combined with glucose would give a measure of the average blood sugar level over the 2-3 month life of the red blood cells.  (It was known that glucose irreversibly connects to the hemoglobin in red blood cells in a way that directly correlates to the blood sugar level.)  This test, known as “glycosylated hemoglobin” (or HbA1C, or simply A1C), has been increasingly used over the past few decades to monitor diabetics and adjust their treatments, with the goal of keeping A1C levels below 7%, since the level in people without diabetes is 4-6%.
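For a feel of how an A1C percentage maps back to the glucose numbers patients see on their meters, here is the widely cited linear approximation from the ADAG study, included purely as an illustration of why a single A1C value summarizes months of blood sugar control:

    # Estimated average glucose (mg/dL) from A1C (%), using the ADAG study's
    # published linear approximation. Shown only to illustrate why one A1C
    # value summarizes roughly 2-3 months of blood sugar control.
    def estimated_average_glucose(a1c_percent):
        return 28.7 * a1c_percent - 46.7

    print(round(estimated_average_glucose(7.0)))  # ~154 mg/dL, near the treatment goal
    print(round(estimated_average_glucose(5.0)))  # ~97 mg/dL, in the non-diabetic range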

Care Lags Discovery and Development of Innovations
Despite improved ability to monitor diabetes, it is still underdiagnosed and poorly managed.  It is estimated that there are about 6 million people in the US who have diabetes but don’t know it – which is about 25% of all people with diabetes.  And in 2003-2004, only about 57% of people with diabetes had A1C levels <7%.  (The medical and lost productivity costs for all people with diabetes may be approaching $200 Billion.)

And the prevalence of diabetes is increasing – and with it so are the costs of treating people with diabetes. Last year I wrote about this, and now the CDC has updated information showing the continuing growth in the number of people in the US diagnosed with diabetes:

Increasing Rate of Diabetes in the US, 1980-2006
Source: http://www.cdc.gov/diabetes/statistics/prev/national/figpersons.htm

The treatment of diabetes has also changed.  After insulin was discovered, different forms and modifications were developed to change how quickly it acted, and beef and pork sources have been replaced with biotech “human” insulins grown in bacterial cultures. Many different types of non-insulin treatments for diabetes have also been developed – these act primarily by increasing insulin production from the pancreas or the action of the insulin in the body.

Which brings us back to the A1C test.  An International Expert Committee from the American Diabetes Association is now recommending that the A1C test be used to diagnose diabetes.  This would replace (or supplement) the traditional fasting blood glucose diagnostic test, and the A1C test would still be used for twice yearly monitoring of the adequacy of treatment for people with diabetes.

These developments in diagnosis and treatment have progressed in tandem – each leveraging the knowledge gained from the other – with the A1C test being part of the continuing evolution of tests for diagnosing diabetes.  For example, the fasting blood glucose level for diagnosing diabetes has changed over the years.  It was originally set at 140 mg/dl in 1979, and then lowered to 126 in 1997, when it was also decided that a level between 110 and 126 should be considered pre-diabetic, or “impaired fasting glucose.” And in 2003 the lower bound for “prediabetes” was lowered to 100.

Why A1C Now?
While A1C testing has been used for years, there have been problems in standardizing the measurement. (This is discussed in the ADA paper linked to above.) But now A1C measurement inconsistencies, (which occur for all lab tests), have been narrowed sufficiently so that the ADA committee is recommending that an A1C level of >6.5% be used to diagnose diabetes, (for patients who are not pregnant and do not have hemoglobin abnormalities – these can change HbA1C levels significantly), and that people with A1C levels >6.0% and <6.5% be considered to have “subdiabetic hyperglycemia” because they have a significant risk of progressing to diabetes.
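Restated as a tiny, hypothetical helper (not clinical software, and ignoring the pregnancy and hemoglobin caveats just mentioned), the proposed cut-points look like this:

    # Hypothetical restatement of the cut-points described above; values exactly
    # at a boundary (e.g. 6.5%) are left unhandled because the wording above
    # doesn't specify them.
    def classify_a1c(a1c_percent):
        if a1c_percent > 6.5:
            return "diabetes"
        if 6.0 < a1c_percent < 6.5:
            return "subdiabetic hyperglycemia (significant risk of progressing)"
        return "below the proposed diagnostic range"

    for value in (5.6, 6.2, 7.1):
        print(value, "->", classify_a1c(value))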

So Back to Health Reform and CER – The Challenges Ahead
The challenges ahead are to make sure that we continue to utilize future discoveries in a timely and intelligent way. Which finally brings us to health reform and CER. Health reform that expands insurance coverage should dramatically improve the diagnosis and treatment of people with diabetes – which should also help control other healthcare and societal costs because poorly controlled diabetes leads to many other costly problems.  However, immediate cost pressures present barriers to using the best diagnostic and therapeutic interventions.

Comparative effectiveness research is supposed to provide information about the best interventions, but as has been seen with advancements in diabetes, what is best often changes in progressive leaps based upon new discoveries.  And one of the limitations of CER, (and all research for that matter), is that it takes time to do the work and analyze the results.  Therefore, research really provides information about what was the best when the research started – which could have been several years before the results are known and disseminated.  And this time lag effect can be even longer when the research is based upon previously published studies or analyses of clinical records.

The lesson here is that while CER and similar research can provide very important and useful information, it must be put into the proper historical and clinical contexts.  What was state-of-the-art when the research protocols were developed may be 2, 3, 4 or more years out of date when the data is analyzed.  This reality needs to be considered when such information is used for coverage and reimbursement, and decisions about health delivery and financing system redesign.

I am confident that most insurers are not paying for A1C tests to screen people for diabetes – and that it will likely take a year or more for even the most progressive insurers to do so…. but they eventually will.  Which raises the question, what did they gain by waiting?  And what did they, (and the patients), lose?

Addendum: The hospital lab my doctor uses charges $59 for a HbA1C test.  So assuming that price doesn’t come down as more people get the test, the calculation that needs to be made is: what is the ROI for using HbA1C as a screening test?  And the CER questions are how to identify the people most likely to benefit from HbA1C screening, and how frequently the screening should be done.
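To show the structure of that calculation, here is a back-of-the-envelope sketch in which every number except the $59 test price is an invented assumption; the point is the shape of the ROI question, not the answer.

    # Back-of-the-envelope ROI sketch for HbA1C screening. All inputs other than
    # the $59 test price are invented assumptions for illustration only.
    test_cost = 59.0                   # per-test price quoted above
    undiagnosed_prevalence = 0.03      # assumed share of screened adults with undiagnosed diabetes
    detection_rate = 0.90              # assumed share of those cases the test actually identifies
    annual_savings_per_case = 1500.0   # assumed avoided downstream costs per newly treated patient

    def screening_roi(people_screened):
        cost = people_screened * test_cost
        cases_found = people_screened * undiagnosed_prevalence * detection_rate
        savings = cases_found * annual_savings_per_case
        return (savings - cost) / cost  # positive means screening "pays for itself" on these assumptions

    print(round(screening_roi(10_000), 2))  # -0.31 with these made-up numbers

With these particular made-up numbers the first-year ROI comes out negative, which is exactly why the CER questions about whom to screen and how often are worth asking.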

Savings from Comparative Effectiveness Research

The May 23rd issue of National Journal has two very interesting pieces about Comparative Effectiveness Research.

Scoring Savings from CER:
The first is an interview with CBO Director Doug Elmendorf, which includes this Q&A about scoring savings from CER:
“NJ: In the first five years after studying comparative effectiveness, are the savings that CBO can find relatively small?
Elmendorf: The estimates that we’ve done in the past suggest that by the 10th year, you are saving about as much as the cost of the research itself.  By the fifth year, you are not.  We would expect there to be savings in the private sector.  The federal government captures only a piece of that through the tax effect.  What I haven’t told you about is the net effect of comparative effectiveness research on national health expenditures.  That will tend to be a net saver for the country sooner.”

CER in Health Reform:
The next article in the NJ issue, (“The Risk of Comparing Treatments”), is about the possible inclusion of a new agency or independent institute to conduct or oversee CER. The legislative fate of such an organization may hinge upon how CBO scores increased or continued funding for CER, and as seen above, it seems unlikely that CBO will attribute large savings to CER.

While scored savings from CER may be small, the fight about how CER should be used is getting hot.  The NJ article also discusses two new organizations that sound somewhat similar, but are actually on opposite sides of this issue: The Partnership to Improve Patient Care, and the Alliance for Better Health Care.  The former includes innovative companies and groups from industries such as biotech, pharmaceuticals and medical devices, while the latter includes health insurance plans, physicians and others.

Interestingly, patient organizations are divided between the two, with disease-specific groups that place a high value on the discovery of new treatments aligning with PIPC, while broader “consumer” organizations that prioritize better information about existing therapies have signed on with ABHC.  Similarly, biomedical researchers could be viewed as split about CER, with academic researchers viewing the $1.1 Billion in new CER money in the stimulus bill as a great opportunity for more funding, while industry researchers understand that the use of CER to make reimbursement and coverage decisions could reduce the incentives for investors to fund innovative private sector R&D.

So stay tuned.  The next event in the CER skirmishes will likely be around what the Finance Committee includes in their legislation about a new agency or institute for CER in the bill they are expected to unveil in a week or two.  Look for this issue, and other aspects of CER, to fuel one of the more interesting controversies within the health reform debate this summer.

Improving Cancer Care in Medicare

This week’s AMA News includes an article about how cancer care for Medicare beneficiaries has improved because of a provision in last year’s Medicare Improvements for Patients and Providers Act (MIPPA).  The provision of interest clarified that Medicare Part D plans need to pay for off-label uses of medicines to treat cancer when there is supportive evidence in the peer-reviewed literature.  This change became effective January 1st, and for at least one patient, it has improved their care. (See the Medicare Rights Center’s press release about the coverage appeal they won for a client because of the new law.)

However, as I noted in an interview with the American Medical News ReachMD Radio-XM 160, (See MP3 audio file below), because the change only applies to cancer treatments, patients with other serious and life threatening illnesses may still find their treatment options limited.  That is, under current law, for non-cancer illnesses, Medicare Part D plans can still limit coverage to only the off-label uses listed in the standard compendia.

American Medical News ReachMD Interview, May 5, 2009: Off-Label Coverage by Medicare Part D Plans

I had recommended that the MIPPA change go beyond cancer to include serious or life-threatening conditions – terminology that is somewhat imprecise, but widely recognized, including by the FDA. However, I suspect that because of cost concerns, this broader expansion of off-label coverage was not included in MIPPA.  I find this interesting for two reasons.  First, in these times of record government spending, even MIPPA’s limited coverage expansion for off-label cancer treatments raised some concerns about cost increases – which I wrote about in January.  And second, restricting coverage of treatments in this way seems philosophically opposite to the intended benefits of Comparative Effectiveness Research – which is all about using the best research findings to improve the quality of care.  Of course, with the size of our health care system, I’m sure this won’t be the last time the left and right hands are not perfectly in sync.


Business Perspectives on Comparative Effectiveness Research

Comparative effectiveness research continues to be a hot health policy issue for many companies and stakeholders, in part, because they’re concerned that CER information will be used to deny access to innovations because of cost.

I recently talked with Jeff Sandman, CEO of Hyde Park Communications, about how healthcare companies should productively approach CER issues, and how quickly CER would lead to dramatic changes in the healthcare system.  (See part of our conversation below.)

There will certainly be more reports, seminars, meetings and Congressional hearings about CER as the $1.1 Billion in ARRA funding for CER is distributed, and the results of that research begin to roll in. I’ve written about CER in the past, (see here and here), and expect to continue writing and talking about it in the future – and I would be very interested to hear anyone else’s perspectives on this issue and how they think it will impact the transformation of healthcare.

Investment for Health Reform – Escaping the Valley of Death

The debate about health reform has mostly focused on expanding insurance coverage and controlling costs.  However, successfully improving the US healthcare system will require some long-term quality improving investments.

The stimulus bill (ARRA) included two such investments.  The $1.1 Billion for Comparative Effectiveness Research has been widely discussed because it is important, and a very large percentage increase in the Federal Government’s spending in this area.  But the ARRA bill also included $10 Billion to increase NIH’s funding.

The significance of the increased NIH funding is twofold:  First, it will expand biomedical research-related jobs.  And second, it will help the NIH increase the work it does in translational research, which should help biomedical research build a better bridge over what the Parkinson’s Action Network and others have labeled the “Valley of Death.”

Valley of Death
The Valley of Death, as described by PAN at a briefing last week hosted by FasterCures, is the work required to turn basic lab research discoveries into treatments that help people.  Some people call this “translational research,” and the NIH has been moving in this direction by funding institutional centers with Clinical and Translational Science Awards (CTSAs).  The two major activities that occur (or don’t occur readily enough) in the Valley of Death are prototype discovery and design, and preclinical development.  (See PAN’s graphic below.)

PAN Valley of Death -1

The Valley of Death is a real challenge because while these activities are vitally important for improving the quality of healthcare over the long run, the incentives for doing this work are smaller than for basic or fully applied research:  In the private sector, there may be small incentives for translational work, but in academia the incentives may actually be negative, because successes in this area garner little or no professional prestige or recognition and, more importantly, generally don’t attract research grants, i.e. money to support the researcher’s lab and the university.

A Bridge to Better Care
Filling in – or bridging – the Valley of Death, (feel free to pick your metaphor), is important for improving healthcare because there are so many serious conditions where current treatments are very inadequate.  For example, Parkinson’s is one of many neurodegenerative diseases where existing treatments address some of the symptoms – and often only partially or temporarily – without affecting the course of the illness.  Similarly, while significant advances have been made in treating cancer, those successes have been in select types of cancer, (with leukemia being one good example), or have made the treatments much easier for the patient.  Both of these are valuable, but it is also worth noting the recent article discussing how survival rates for cancer haven’t really increased over the last several decades.

The cure for this problem is clearly more (and better) research and development…. and the translational work that bridges the two. (Perhaps we should talk more about increasing R&T&D, rather than just R&D?) PAN’s description of how to do this is sophisticated and multifaceted.  As their illustration below shows, not only are the NIH’s CTSAs and SBIR programs important for helping institutions and individual researchers push forward with more translational work, but other parts of the solution include DoD’s Telemedicine & Advanced Technology Research Center, and private foundations and venture philanthropy.

PAN - Valley of Death -2

Increasing funding for translational work through all these sources may help build a bridge, (or fill in the Valley), but translational work will also benefit from greater coordination of efforts.  At the institutional level that is part of the role of the CTSAs, but there could also be more coordination and emphasis on translational work at the NIH itself – which is why PAN is recommending that the NIH conduct an analysis and present its own recommendations for improving the translational activities of NIH-funded research programs.

Institutional researchers could also benefit by having more resources about the nuts and bolts of translational work – like how to structure research and information so that it will be readily usable for filing an IND with the FDA.  Universities already have Technology Transfer/Licensing offices that serve the dual function of licensing university generated research to private companies for commercialization and ensuring that the university receives fair compensation for the company’s use of these discoveries – which is required by the Federal Bayh-Dole Act.  Perhaps universities should also have offices that work to educate their researchers about translational activities to help them plan their research so that the information they generate will be more readily usable by those farther down the development pipeline?

This is clearly neither an easy nor a readily obvious task.  At the FasterCures/PAN briefing last week, someone told a story about an academic researcher who experienced a two-year delay in moving their discovery along the development pathway because they didn’t understand the information that would be needed.

Rather than make specific recommendations about how to improve the situation for basic researchers, I’ll just note that the Bayh-Dole Act managed to get every institution receiving Federal grants to create a Technology Transfer/Licensing office.  It seems that every institution could also have an Office of Translational Research Assistance.  How to structure the funding or financial incentives for this could be complicated, but certainly not impossible.  And given that we are spending tens of Billions of tax dollars on biomedical research, we should also be doing everything we can to make sure that the discoveries coming out of that research get translated into better treatments for patients ASAP.  Delays because academic researchers don’t want to pitch their careers into the Valley of Death shouldn’t be a tolerated part of the structure of our biomedical research system – which, as I’ve previously discussed, is one of the four spheres comprising our entire healthcare system.

Comparative Effectiveness, Efficacy, Evidence Based Medicine, P4P, etc…

Comparative Effectiveness Research (CER) is being talked about more and more as a fulcrum for controlling healthcare costs.  For example:

  • The Congressional Budget Office issued a report on CER in December 2007 and has highlighted it in more recent analyses and reports about health reform options
  • The ARRA legislation included $1.1 Billion for CER
  • ARRA included language for the IOM Committee on Comparative Effectiveness Research Priorities to provide a report by June 30, 2009 about how to spend the $400 million allocated to HHS for CER.

All this discussion has kept me thinking about how CER will be done, how the results from this research will actually be used to improve quality and reduce costs, and what scope of healthcare issues CER is, will be, or should be applied to help improve.

While understanding what works best in healthcare is certainly a worthy goal, this is far from a simple task.  Some of the factors that complicate research to compare the effectiveness of various treatment options are:

  • Gold-standard double-blinded trials for clinical research provide information about efficacy (whether a treatment works under ideal, controlled study conditions) – which is different from effectiveness, i.e. how well the treatment works in real-world practice.
  • Observational research can provide information about real world effectiveness, but the information from this type of research can be flawed by problems in the data – including selection biases and other conclusion skewing factors.
  • Both these types of research methodologies inherently have a lag between the time the research project starts and the time the data is analyzed and conclusions developed.   This time lag is often several years, during which new treatment options will likely have been developed.  Thus, CER really is only answering questions about the most effective treatment options when the study began, not when its conclusions are presented.
  • There is also considerable controversy about what factors to compare in CER projects.  That is, should only clinical outcomes be compared, or should costs be a factor?  And if cost is a factor, how are indirect costs, such as diagnostic testing, office visits, patient’s time, etc., included?  And how is quality of life valued?  (Some CER analyses report results according to Quality Adjusted Life Years – see the sketch after this list.)
  • And of course people interested in biopharma and medical device innovations are concerned that CER will be used not just to inform clinicians and patients, but to justify coverage and payment decisions which will impact R&D in therapeutic areas where reimbursement for innovative products is denied or limited.
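On the quality-of-life point above, the standard way some CER analyses combine cost and QALYs is an incremental cost-effectiveness ratio; the sketch below uses invented numbers purely to show the arithmetic.

    # Incremental cost-effectiveness ratio (ICER): incremental cost per
    # additional quality-adjusted life year (QALY). Numbers are invented;
    # real analyses also handle discounting, uncertainty, perspective, etc.
    def icer(cost_new, qaly_new, cost_old, qaly_old):
        return (cost_new - cost_old) / (qaly_new - qaly_old)

    # Hypothetical: a new treatment costs $45,000 and yields 6.2 QALYs; the
    # comparator costs $30,000 and yields 5.9 QALYs.
    print(round(icer(45_000, 6.2, 30_000, 5.9)))  # ~$50,000 per QALY gained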

All of these factors point towards larger issues of how to ensure that medical practice is maximizing knowledge to optimize clinical care for the good of the patient and society.  In some cases, this is termed Evidence Based Medicine (EBM).  In theory CER should support good EBM ASAP.  And from what President Obama has said, he wants this done PDQ.

Health System CER & Evidence-Based Interventions
While all these challenges for CER are ongoing, there also seem to be opportunities for applying the principles of CER and EBM to more system-wide properties of the US healthcare system to increase value and efficiency.  For example – and I hope I’m not beating a too-tired horse here – the surgical checklist (and similar quality-improving activities) have been shown to increase quality and reduce costly events, but not all hospitals and clinicians are using them.  Therefore, how about research to compare the effectiveness of hospitals (or surgeons, etc.) that use and don’t use such practices?  Some people might say that we don’t need this research since the value of these practices is already known, but perhaps focused research highlighting this information will serve as a big push to get the laggards on-board.

Similarly, CER-type analyses could be applied to Medical Homes to determine what characteristics and capabilities of Medical Home medical practices make them better at improving the quality of patient care and controlling overall spending.  In particular, there might be specific features of Medical Homes that would be most important for diabetics, and others for patients with CHF, etc.  And currently NCQA’s 3 tiers of Medical Homes build upon each other, but don’t permit greater granularity, nor do they distinguish between potential patient populations. This research might be complicated, but with initiatives such as Medical Homes being proposed as a way to redesign and reconfigure outpatient care in the United States, more focused research beyond the existing and planned demonstrations and pilot projects might be a very worthwhile expenditure.

P4P for Cost Containment
Another big push for cost containment in health reform is pay-for-performance (a.k.a. P4P).  While the knowledge gained from CER could certainly be fed into P4P practices, there is some controversy about how well P4P actually works to change behaviors to improve quality and reduce costs. At a breakout session about P4P at a conference on Friday led by Bob Galvin, MD (GE’s Director of Global Healthcare), I stated that two basic criteria for P4P interventions to be successful are:

  1. The group affected needs to be small enough that each individual feels that changing their actions will affect their compensation, i.e., a group of 500 clinicians is too large, but it most likely can be larger than 5.
  2. The information about how the group or the individual is doing towards any P4P goals is delivered often enough to provide timely feedback, i.e. once a year is not frequent enough, perhaps quarterly is OK, and monthly would be great.

Dr. Galvin pointed out that the size of the P4P incentives also needs to be significant, i.e. it can’t max out at $100 per clinician.  And another participant noted that the P4P measures need to be controllable in some way by the clinician.  For example, while patients’ seat belt use might be somewhat influenced by clinicians’ reminders and admonishments, clinicians are much more able to see that their diabetic patients are getting regular HbA1C testing, eye exams, and appropriate immunizations.
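Putting those criteria together, here is a purely hypothetical sketch: a small clinician group, quarterly feedback, a meaningful bonus pool, and a measure the clinicians can actually influence (the share of their diabetic patients with a recent HbA1C test). Every name and number is invented.

    # Hypothetical quarterly P4P bonus for a small clinician group, based on the
    # share of each clinician's diabetic patients with a recent HbA1C test.
    QUARTERLY_BONUS_POOL = 30_000.0   # assumed pool for the group
    TARGET_TESTING_RATE = 0.90        # assumed goal for the quarter

    def quarterly_bonus(testing_rates):
        """testing_rates: {clinician_name: share of diabetic patients with a recent HbA1C}."""
        share = QUARTERLY_BONUS_POOL / len(testing_rates)
        return {name: round(share * min(rate / TARGET_TESTING_RATE, 1.0), 2)
                for name, rate in testing_rates.items()}

    print(quarterly_bonus({"Dr. A": 0.95, "Dr. B": 0.72, "Dr. C": 0.90}))
    # {'Dr. A': 10000.0, 'Dr. B': 8000.0, 'Dr. C': 10000.0}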

Coming Full Circle From CER to P4P
Eighteen years ago I coauthored a book chapter about the structure of bonus pools and other P4P-type incentives for physicians in nascent managed care organizations. Unfortunately, in the early 1990s, there weren’t robust information systems to provide the data about “performance” needed for these P4P systems to be effectively implemented.  Perhaps now – and in the future – as health IT matures and becomes well integrated into healthcare delivery, better data will be available and P4P can be productive for clinicians, patients and society.

To help make that potential a more likely reality, perhaps some of the CER efforts could also be directed toward determining how to best structure and implement P4P programs to maximally change clinician (and possibly patient) behaviors to better utilize information about what is already known to work best in medical care.  And then these same P4P interventions would be in place and prepared to use the new knowledge that will come out of the expanded CER programs starting this year – and which will hopefully enable us to dramatically improve medical care and the medical system in the future.