Funded by National Institutes of Health; National Institute of Mental Health

Funding Years: 2012-2017

This project will test a practical intervention that uses low cost technologies to activate depressed patients' existing social networks for self-management support. The intervention links patients with a "CarePartner" (CP), i.e., a non-household family member or close friend who is willing to support the patient in coordination with the clinician and any existing in-home caregiver (ICG). Through weekly automated telemonitoring, patients report their mood and self-management status, and receive tailored guidance on self-management. The CP receives a corresponding update along with guidance on how to best support the patient's self-management efforts, and the primary care team is notified about clinically urgent situations. The intervention will be tested among depressed primary care patients from clinics serving low-income and underinsured patients, whom the intervention was especially designed to benefit. Specific Aim 1 is to conduct a randomized controlled trial to compare the effectiveness of one year of telemonitoring-supported CP for depression versus usual care (control) on depression severity. Specific Aim 2 is to examine key secondary outcomes (response and remission, impairment, well-being, caregiving burden, healthcare costs) and potential moderators. Specific Aim 3 is to use a mixed-methods approach to enrich our interpretation of the statistical associations, and to discover strategies to enhance the intervention's acceptability, effectiveness, and sustainability. If the intervention proves effective without increasing clinician burden or marginal costs, then its subsequent implementation could yield major public health benefits, especially in medically underserved populations.

PI(s): James Aikens

Co-I(s): Michael Fetters, John Piette, Ananda Sen, Marcia Valenstein, Daniel Eisenberg, Daphne Watkins

The Importance of First Impressions (Jun-05)

How do your risk estimate and your actual level of risk impact your anxiety? Please answer the following question to the best of your ability:

What is the chance that the average woman will develop breast cancer in her lifetime?

The average lifetime chance of developing breast cancer is actually 13%.

How does this risk of breast cancer (13% or 13 out of 100 women) strike you?
As an extremely low risk   1   2   3   4   5   6   7   8   9   10   As an extremely high risk

How do your answers compare?

Making a risk estimate can change the feel of the actual risk

CBDSM investigators Angela Fagerlin, Brian Zikmund-Fisher, and Peter Ubel designed a study to test whether people react differently to risk information after they have been asked to estimate the risks. In this study, half the sample first estimated the average woman's risk of breast cancer (just as you did previously), while the other half made no such estimate. All subjects were then shown the actual risk information and indicated how the risk made them feel and gave their impression of the size of the risk. The graph below shows what they found:


As shown in the graph above, subjects who first made a risk estimate reported significantly more relief than those in the no-estimate group, who instead showed significantly greater anxiety. Also, women in the estimate group tended to view the risk as low, whereas those in the no-estimate group tended to view it as high.

So what's responsible for these findings? On average, those in the estimate group guessed that 46% of women will develop breast cancer at some point in their lives, which is a fairly large overestimate of the actual risk. It appears, then, that this overestimate makes the 13% figure feel relatively low, leading to a sense of relief when subjects find the risk isn't as bad as they had previously thought.

Why this finding is important

Clinical practice implications - This research suggests that clinicians need to be deliberate and cautious in how they communicate risk information to their patients. The results argue that before asking a patient to estimate her risk, a physician should consider whether she is likely to overestimate it and whether she has an unreasonably high fear of cancer. For the average patient, who would overestimate her risk, making a risk estimate may be harmful, leaving her too relieved by the actual risk figure to take appropriate action. On the other hand, if a patient has an unreasonably high fear of cancer, having her make such an estimate may actually help decrease her anxiety. To identify the latter type of patient, physicians may want to inquire subtly whether she is worried about her cancer risk or has a family history of cancer.

Research implications - Many studies in the cancer risk communication literature have asked participants at baseline about their perceived risk of developing specific cancers, and researchers then implement an intervention to "correct" those baseline estimates. The current results suggest that measuring risk perceptions pre-intervention will itself influence people's subsequent reactions, making it difficult to discern whether it was the intervention or the pre-intervention risk estimate that changed their attitudes. Researchers testing such interventions need to proceed with caution, and may need to add study arms in which participants receive no such pre-test.

For more details: Fagerlin A, Zikmund-Fisher BJ, Ubel PA. How making a risk estimate can change the feel of that risk: shifting attitudes toward breast cancer risk in a general public survey. Patient Educ Couns. 2005 Jun;57(3):294-9.



Does order matter when distributing resources? (Jun-03)

Should people with more severe health problems receive state funding for treatment before people with less severe health problems? See how your opinion compares with the opinions of others.

Imagine that you are a government official responsible for deciding how state money is spent on different medical treatments. Your budget is limited so you cannot afford to offer treatment to everyone who might benefit. Right now, you must choose to spend money on one of two treatments.

  • Treatment A treats a life-threatening illness. It saves patients' lives and returns them to perfect health after treatment.
  • Treatment B treats a different life-threatening illness. It saves patients' lives but is not entirely effective: patients are entirely healthy before their illness but are left with paraplegia after treatment.

Suppose the state has enough money to offer Treatment A to 100 patients. How many patients would have to be offered Treatment B so that you would have difficulty choosing which treatment to offer?

How do your answers compare?

The average person said that it would become difficult to decide which treatment to offer when 1000 people were offered Treatment B.

What if you had made another comparison before the one you just made?

In the study, some people were asked to make a comparison between saving the lives of otherwise-healthy people and saving the lives of people who already had paraplegia. After they made that comparison, they made the comparison you just completed. The average person in that group said it would take 126 people offered Treatment B to make the decision difficult. The differences are shown in the graph below.

Why is this important?

The comparison you made is an example of a person tradeoff (PTO). The PTO is one method used to elicit the utilities of different health conditions. These utilities are essentially measures of the severity of the conditions: on a scale of 0 to 1, more severe conditions have a lower utility and less severe conditions have a higher utility. Insurance companies, the government, and other organizations use these utilities to decide which groups' treatments to fund.

On the surface, it seems like basing the money division on the severity of a condition is a good and fair method, since theoretically the people who are in the greatest need will be treated first. However, the PTO raises issues of fairness and equity that aren't accounted for in other utility elicitation methods like the time tradeoff (TTO) and rating scale (RS).

For example, when asked to decide how many people with paraplegia would have to be saved to equal saving 100 healthy people, many people say 100; that is, they think it is equally important to save the life of someone with paraplegia and the life of a healthy person. Going by values obtained using the TTO or RS, however, an insurance company might conclude that 160 people with paraplegia (using a utility of .6) would have to be saved to equal saving 100 healthy people. On that reasoning, saving someone with paraplegia yields less benefit, so the company might be less willing to cover lifesaving treatments for people with paraplegia than for otherwise healthy people. The PTO shows that many people would not agree with doing this, even though their own responses to other utility questions generated the policy in the first place.
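The equivalence arithmetic behind this kind of utility-based allocation can be sketched in a few lines. This is a back-of-the-envelope illustration, not the study's method; the function name is my own, and a healthy life is assumed to have utility 1.0.

```python
def pto_equivalent(n_reference, utility):
    """Number of lives in a health state with the given utility whose
    rescue yields the same total benefit as saving n_reference healthy
    lives (a healthy life is assumed to have utility 1.0)."""
    if not 0 < utility <= 1:
        raise ValueError("utility must be in (0, 1]")
    return n_reference / utility

# With the 0.6 utility for paraplegia discussed above:
print(round(pto_equivalent(100, 0.6)))  # about 167 people
```

Note how a lower utility raises the number of lives deemed equivalent; that benefit-per-life logic is exactly what the PTO results call into question.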

For more information see:

Ubel PA, Richardson J, Baron J. Exploring the role of order effects in person trade-off elicitations. Health Policy, 61(2):189-199, 2002.

CBSSM investigators Holly Witteman, Andrea Fuhrel-Forbis, Angela Fagerlin, and Brian Zikmund-Fisher, along with CBSSM alumni Peter Ubel and Andrea Angott will give a plenary talk at the Society for Medical Decision Making's 32nd Annual Meeting in Toronto on Monday, October 25.  The talk is titled, "Colostomy is Better than Death, but a 4% Chance of Death Might Be Better Than a 4% Chance of Colostomy: Why People Make Choices Seemingly At Odds With Their Stated Preferences." 


Purpose: When asked for their preference between death and colostomy, most people say that they prefer colostomy. However, when given the choice of two hypothetical treatments that differ only in that one has four percent chance of colostomy while the other has four percent additional chance of death, approximately 25% of people who say that they prefer colostomy actually opt for the additional chance of death. This study examined whether probability-sensitive preference weighting may help to explain why people make these types of treatment choices that are inconsistent with their stated preferences.

Method: 1656 participants in a demographically diverse online survey were randomly assigned to indicate their preference by answering either, "If you had to choose, would you rather die, or would you rather have a colostomy?" or, "If you had to choose, would you rather have a 4% chance of dying, or would you rather have a 4% chance of having a colostomy?" They were then asked to imagine that they had been diagnosed with colon cancer and were faced with a choice between two treatments, one with an uncomplicated cure rate of 80% and a 20% death rate, and another with an uncomplicated cure rate of 80%, a 16% death rate, and a 4% rate of colostomy.

Result: Consistent with our prior research, most people whose preferences were elicited with the first question stated that they preferred colostomy (80% of participants) to death (20%), but many then made a choice inconsistent with that preference (59% chose the treatment with higher chance of colostomy; 41% chose the treatment with higher chance of death). Compared to the first group, participants whose preferences were elicited with the 4% question preferred death (31%) over colostomy (69%) more often (Chi-squared = 24.31, p<.001) and their treatment choices were more concordant with their stated preferences (64% chose the treatment with higher chance of colostomy; 36% chose the treatment with higher chance of death, Chi-squared for concordance = 36.92, p<.001).
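As a rough sanity check, the between-arm preference difference above can be tested with a Pearson chi-squared statistic. The raw counts are not reported, so the sketch below assumes two equal arms of 828 (half of 1656) and rounds the stated percentages; the resulting statistic (about 26) therefore only approximates the published 24.31.

```python
def chi_squared_2x2(table):
    """Pearson chi-squared statistic of independence for a 2x2 table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: elicitation arm; columns: (prefers colostomy, prefers death).
# Counts are reconstructed from the reported percentages, assuming equal arms.
table = [
    [round(828 * 0.80), round(828 * 0.20)],  # plain "die vs. colostomy" arm
    [round(828 * 0.69), round(828 * 0.31)],  # "4% chance" arm
]
print(f"chi-squared = {chi_squared_2x2(table):.1f}")
```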

Conclusion: Our experiment suggests that probability-sensitive preference weighting may help explain why people's medical treatment choices are sometimes at odds with their stated preferences. These findings also suggest that preference elicitation methods cannot necessarily assume independence of probability levels and preference weights.

Funded by National Institutes of Health; National Institute on Drug Abuse

Funding Years: 2012-2017

This application seeks a five-year continuation of the panel data collections of the Monitoring the Future (MTF) study, an ongoing epidemiological and etiological research and reporting project begun in 1975. In addition to being a basic research study, MTF has become one of the nation's most relied upon sources of information on trends in illicit drug, alcohol, and tobacco use among American adolescents, college students, and young and middle-aged adults. This application seeks continuation of the mail follow-up surveys of high school graduates (augmented with internet options) at modal ages 19-30, 35, 40, 45, 50, and now 55. The companion main application seeks to continue the in-school data collections and to support the analysis of all of the data in the study, including past and future panel data. (NIDA requests that the study seek continuation funding through two separate applications, as it has done in the last two rounds.)
The study's cohort-sequential longitudinal design permits the measurement and differentiation of three types of change: age (developmental), period (historical), and cohort. Each has different determinants, and all three types of change have been shown by MTF to occur for most drugs. Factors that may explain historical trends and cohort differences also are monitored. MTF is designed to document the developmental history and consequences of drug use and related attitudes from adolescence through middle adulthood, and to determine the individual and contextual characteristics and social role transitions that affect use and related attitudes. Research on risk and protective behaviors for the transmission of HIV/AIDS among adults ages 21-40 also will be continued. All of this work will be extended to new years, cohorts, and ages under this application and the companion main application. The study will examine the importance of many hypothesized determinants of drug use (including attitudes and beliefs and access), as well as a range of potential consequences (including physical and psychological health, status attainment, role performance, and drug abuse and dependence). Impacts of some policy changes on adolescents and young adults will be evaluated, including those of the new FDA cigarette labeling requirements. MTF will experiment with the use of internet response methods and pursue several new approaches to making its panel data more accessible to other investigators.
The study's very broad measurement covers (a) initiation, use, and cessation for over 50 categories and sub-categories of licit and illicit drugs, including alcohol and tobacco; (b) attitudes and beliefs about many of them, perceived availability, and peer norms; (c) other behaviors and individual characteristics; (d) aspects of key social environments (home, work, school) and social role statuses and transitions; and (e) risk and protective behaviors related to the spread of HIV/AIDS. Results will continue to elucidate drug use from adolescence through middle adulthood (including the introduction of new drugs) with major implications for the policy, research, prevention, and treatment agendas.

PI(s): Mick Couper

Co-I(s): Lloyd Johnston, Patrick O'Malley, John Schulenberg, Megan Patrick, Richard Miech

Is your well-being influenced by the guy sitting next to you? (Nov-03)

Rating your satisfaction with your life may not be a completely personal decision. See how your satisfaction rating may be influenced by others.

When answering this question, imagine that there is someone in a wheelchair sitting next to you. They will also be answering this question, but you will not have to share your answers with each other.

How satisfied are you with your life in general?

Extremely satisfied   1   2   3   4   5   6   7   8   9   10   Not at all satisfied

How do you compare to the people surveyed?

In the study, people who answered this question on a written questionnaire while a disabled person sat next to them gave an average life satisfaction rating of 2.4, which means they were very satisfied with their lives.

What if you'd had to report your well-being to another person instead of writing it down?

In the study, half the people reported their well-being in an interview with a confederate (a member of the research team posing as another participant). When the confederate was not disabled, participants who reported in an interview rated their well-being significantly better than those who wrote it on a questionnaire in the confederate's presence (2.0 vs. 3.4; a lower score means higher well-being). When the confederate was disabled, interview and questionnaire ratings did not differ (2.3 vs. 2.4).

Mean life satisfaction ratings (lower score means higher satisfaction)

Mode of rating well-being   Disabled confederate   Non-disabled confederate
Interview (public)                   2.3                    2.0
Questionnaire (private)              2.4                    3.4
What caused the difference in well-being scores?

When making judgments of well-being, people (at least in this study) tend to compare themselves to those around them. The effect was stronger when well-being was reported in an interview than when it was privately written down, due to self-presentation concerns: a more favorable rating was given in public so as to appear better off than one may truly feel. Note that this public-private gap appeared only when the confederate was not disabled. While well-being ratings were better overall with a disabled confederate, there was no difference between the private and public ratings. Social comparison led to a better well-being judgment, but participants appear to have been hesitant to rate themselves too highly in front of the disabled person for fear of making that person feel worse.

Why is this important?

Subjective well-being (SWB) is a commonly used measure in many areas of research. For example, it is used as one way to assess the effectiveness of new surgeries or medications. The study above shows that SWB scores can vary with the conditions under which they are given. Someone interviewed before leaving the hospital, surrounded by people sicker than they are, may report fairly high SWB; from this, it would appear the treatment worked well. But suppose they are asked to complete a follow-up internet survey a week later. Without having to respond to an actual person face-to-face, and no longer surrounded by sick people, they may give a lower rating than before. Is this because the treatment actually made their SWB worse over the longer term, or simply because a different method was used to elicit their response? The only way to know for certain would be to use the same methodology for all responses, which may not always be feasible. These are important considerations for researchers to keep in mind when analyzing their results: are the scores the participants' true SWB, or an artifact of how the study was done? And is there a way to know which measure is right, or are both right, which would suggest that SWB is purely a momentary judgment based on social context?

For more information see:

Strack F, Schwarz N, Chassein B, Kern D, Wagner D. Salience of comparison standards and the activation of social norms: Consequences for judgements of happiness and their communication. British Journal of Social Psychology. 29:303-314, 1990.

Funded by National Institutes of Health.

Funding Years: 2014-2019.

Randomized controlled trial (RCT) results diffuse into clinical practice slowly - the average time from trial completion to widespread adoption of a new treatment is nearly 20 years. These delays result in suboptimal treatment for patients with neurological diseases. In light of these delays and the enormous societal value of NINDS clinical trial findings, NINDS has recognized the need to accelerate implementation by promoting research to translate trial findings into routine care (T2 translational research). This application seeks to optimize translation of NINDS trials by personalizing clinical trial results and addressing barriers to translation for clinicians and policy-makers. Using translational research methods, we can move from one-size-fits-all evidence-based medicine toward personalized medicine by estimating treatment benefit for individual patients. Other translational methods can evaluate and address stakeholder concerns that hinder translation. Because clinicians are often skeptical of trial results, changing practice requires convincing them not only that a treatment works in an RCT or in academic medical centers, but that it will work for their patients. Similarly, if policy-makers and payers can be convinced that a new treatment is a good value (e.g., has a favorable cost-benefit ratio), they can use their considerable influence on the healthcare delivery system to facilitate translation. Specifically, we will use translational research methods to address three issues essential to improving trial translation: (1) estimating individual-level outcomes using multivariable outcome prediction; (2) estimating the impact of real-world circumstances on outcomes using simulation analyses; and (3) cost-effectiveness analysis. Results from these analyses can influence clinicians and policy-makers directly or through tools such as websites and mobile applications. This proposal has two key objectives.
First, we will adapt translational research methods to clinical trials by addressing essential translation-relevant questions for the Carotid Revascularization Endarterectomy versus Stenting (CREST) trial. Second, we will develop a model to concurrently perform similar translational analyses in the Neurology Emergency Treatment Trials (NETT) network. These objectives will be addressed through three specific aims: (1) to estimate the expected net benefit of carotid endarterectomy (CEA) vs. carotid artery stenting (CAS) for individual patients in the CREST trial using refined multivariable outcome prediction methods; (2) to estimate the impact of personalized decision-making and real-world circumstances (e.g., differing complication rates) on the net benefit of CAS vs. CEA for real-world patients using simulation analyses; and (3) to assess the feasibility of performing concurrent translational and cost analyses in NETT trials by evaluating a process implementation model in newly initiated and recently completed NETT trials. Dr. Burke has a unique background as a vascular neurologist with training in translational research methodology through the highly regarded Robert Wood Johnson Clinical Scholars Program. In this proposal, Dr. Burke will develop additional expertise in clinical trials, multivariable outcome prediction, simulation analyses, and cost analyses to become a leader and independently funded investigator in neurological translational research, working to develop a new generation of NETT trials better designed to inform real-world clinical practice and improve patient outcomes. This proposal capitalizes on unique environmental strengths at the University of Michigan. Most importantly, Dr. Burke will be supported by an outstanding multi-disciplinary mentorship team including Dr. William Barsan, the NETT Clinical Coordinating Center (CCC) principal investigator and a research leader in the emergency treatment of neurological diseases; Dr. Rodney Hayward, a Professor of Internal Medicine and a pioneer in translational research; and Dr. Lewis Morgenstern, a leader in neurological translational research. All three mentors have excellent track records in mentoring junior faculty and transitioning them to independence. In addition, Dr. Burke will have the opportunity to participate in a unique hands-on clinical trials immersion through the NETT to gain experience in clinical trial design, management, and implementation. Finally, the University of Michigan has recently built the largest academic translational research center in the United States (the Institute for Healthcare Policy and Innovation), which will support the advanced statistical methods required for this proposal.
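The individual-level outcome prediction described in Aim 1 can be sketched in miniature: score a patient under a per-treatment-arm logistic risk model and compare the predicted event risks. Everything below (covariates, coefficient values, function names) is hypothetical for illustration and is not taken from CREST.

```python
import math

def predicted_risk(coefs, intercept, covariates):
    """Predicted event probability from a logistic model: 1/(1+exp(-lp))."""
    lp = intercept + sum(b * x for b, x in zip(coefs, covariates))
    return 1.0 / (1.0 + math.exp(-lp))

# Hypothetical per-arm models over (decades of age over 60, symptomatic flag).
cea = {"intercept": -3.0, "coefs": [0.40, 0.50]}   # carotid endarterectomy
cas = {"intercept": -3.2, "coefs": [0.70, 0.60]}   # carotid artery stenting

def individual_net_benefit(covariates):
    """Absolute risk difference (CAS minus CEA); positive favors CEA."""
    r_cea = predicted_risk(cea["coefs"], cea["intercept"], covariates)
    r_cas = predicted_risk(cas["coefs"], cas["intercept"], covariates)
    return r_cas - r_cea

# An older symptomatic patient vs. a younger asymptomatic one:
print(individual_net_benefit([1.5, 1]))
print(individual_net_benefit([0.0, 0]))
```

With these made-up coefficients the predicted benefit of CEA grows with age and symptomatic status (and reverses for the younger patient), mimicking the kind of treatment-covariate interaction such models are meant to surface for individual patients.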

PI(s): James Burke