Monday, December 30, 2013

Uncertainty (TFS, Part 11)

I am reading Thinking, Fast and Slow, by Daniel Kahneman. In this series I will summarize key parts of the book and supply some comments and reflections on the material.

Part IV: Choices
Chapters 25-34

Summary:

Expected utility theory, prospect theory, and the endowment effect. The key pieces of prospect theory:
  • Gains and losses matter more than levels of total wealth.
  • There is diminishing sensitivity to changes from the reference point.
  • Losses loom larger than equivalent gains (loss aversion).
  • Relative to expected utility theory, extremely low-probability events are overweighted and extremely high-probability events are underweighted.

If people have NO experience with something, then low-probability events are UNDER-weighted, not overweighted (e.g., Californians who have never experienced an earthquake perceive the probability of one as too low).

Valuations of gambles are less sensitive to probability changes when the outcomes are vividly described. Framing matters. People can exhibit preference reversals, which violates standard economic assumptions.

Rules (Kahneman calls them "risk policies" in this part of the book), even ones that you impose on yourself, can mitigate some of these biases.
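To make the prospect theory pieces summarized above concrete, here is a minimal sketch of the value and probability-weighting functions, using the functional forms and parameter estimates from Tversky and Kahneman (1992); the specific numbers are illustrative and are not from this book:

```python
def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to the reference point."""
    if x >= 0:
        return x ** alpha                # concave over gains: diminishing sensitivity
    return -lam * (-x) ** alpha          # steeper over losses: loss aversion

def weight(p, gamma=0.61):
    """Decision weight attached to a probability p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(value(100), value(-100))     # ~57.5 vs ~-129.4: losses loom ~2.25x larger
print(weight(0.01), weight(0.99))  # ~0.055 and ~0.91: rare events overweighted,
                                   # near-certain events underweighted
```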

My Thoughts:

We have now reached the point in the book where things become very dense, much denser than in a normal "popular economics" book. If you are interested in reading more about the results and material in the book, I encourage you to pick it up.

Rather than spell out some of my "deeper" thoughts on this section, I want to present some of the paradoxes that he and others use in this line of research. If you've never thought about the questions at the heart of these research paradoxes before (especially #3-5), they are worth thinking about.

Food for Thought:

1) "As a rule, I never buy extended warranties."

2) "Always take the highest possible deductible when purchasing insurance."

-----

For the following questions, read them carefully, but decide on a preliminary answer in 10 seconds or less. Then, spend as much time as you need on them to reason out what you would do if it really mattered. (Hint: What would an expected value maximizer do? What would a utility maximizer who is risk averse do? Then ask again, what would you do?)

3) Decision i) Choose between:
    A) sure gain of $240
    B) 25% chance to gain $1,000 and 75% chance to gain nothing.

    Decision ii) Choose between:
    C) sure loss of $750
    D) 75% chance to lose $1,000 and 25% chance to lose nothing.

    Decision iii) Choose between:
    AD) 25% chance to win $240 and 75% chance to lose $760.
    BC) 25% chance to win $250 and 75% chance to lose $750.

If you chose A, D, then BC, think VERY HARD about what you just did.
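If you want the expected-value hint worked out, here is a quick sketch (nothing here is from the book; it just spells out the arithmetic):

```python
def expected_value(gamble):
    """gamble: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in gamble)

A  = [(1.00, 240)]                    # sure gain
B  = [(0.25, 1000), (0.75, 0)]
C  = [(1.00, -750)]                   # sure loss
D  = [(0.75, -1000), (0.25, 0)]
AD = [(0.25, 240), (0.75, -760)]      # A and D taken together
BC = [(0.25, 250), (0.75, -750)]      # B and C taken together

for name, g in [("A", A), ("B", B), ("C", C), ("D", D), ("AD", AD), ("BC", BC)]:
    print(f"{name}: {expected_value(g):+.0f}")
# BC wins $10 more in the 25% branch and loses $10 less in the 75% branch,
# so it dominates AD outright -- yet A-then-D is the popular pair of choices.
```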

4) Imagine the US is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:
    If Program A is adopted, 200 people will be saved.
    If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.

Do you choose A or B? Now suppose instead your options were as follows:

    If Program C is adopted, 400 people will die.
    If Program D is adopted, there is a one-third probability that no one will die and a two-thirds probability that 600 people will die.

Do you choose C or D?

Think VERY HARD if you choose A and D. Think VERY HARD if you choose B and C. 
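A tiny sketch makes the framing in question 4 explicit; restated in the same units (expected number of people saved out of 600), the two pairs of programs are identical:

```python
programs = {
    "A": [(1.0, 200)],                        # 200 saved for sure
    "B": [(1/3, 600), (2/3, 0)],              # gamble over lives saved
    "C": [(1.0, 600 - 400)],                  # "400 die" restated as saved
    "D": [(1/3, 600 - 0), (2/3, 600 - 600)],  # "die" frame restated as saved
}

for name, gamble in programs.items():
    expected_saved = sum(p * saved for p, saved in gamble)
    print(f"{name}: {expected_saved:.0f} expected saved")
# A and C describe the same program; so do B and D. Preferring A over B
# but D over C can only come from the wording.
```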

5) Suppose for a family, a standard tax exemption is allowed for each child the family has. The amount of the exemption does not depend on the income of the family. Should the per child tax exemption be larger for the rich than for the poor?

Suppose instead for a family, a tax surcharge is levied for each child fewer than three the family has. The amount of the surcharge does not depend on the income of the family. Should the childless poor pay as large a surcharge as the childless rich?

If you answered NO to both of these questions, THINK VERY HARD about your answers. You cannot logically reject both proposals. Was your reaction based on the moral framing of the questions or the substance of the policy?

6) Read about the Allais Paradox. Now that you know about it, would you make the same selection if offered the same gamble again? What if you could take the gamble 100 times and real money was on the line? What would you choose? Now that you've learned something, would any of your answers to 3), 4), or 5) change?





Thursday, December 26, 2013

Bring Back the BCS Computers! (TFS, part 10)

I am reading Thinking, Fast and Slow, by Daniel Kahneman. In this series I will summarize key parts of the book and supply some comments and reflections on the material.

Part III: Overconfidence
Chapters 21-24

Summary:

Intuition often leads us astray. Simple formulas and simple statistics often out-predict experts in noisy environments. People are often hostile to algorithms.

Example. Trained counselors were asked to predict the grades of college freshmen at the end of the school year. The counselors got to interview the students for 45 minutes and had access to high school grades, several aptitude test scores, and a four-page personal statement from each student. A simple regression based on only high school grades and one aptitude test did a better job of predicting the actual outcomes.

Planning Fallacy: Plans and forecasts are unrealistically close to the best-case scenario and could be improved by consulting the statistics of similar cases. (There is overoptimism on the part of planners and decision makers.)

How to mitigate the planning fallacy: Identify an appropriate reference class and obtain the statistics of the reference class. Use the statistics (not your intuition) as the baseline prediction.
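As a sketch of what "use the statistics of the reference class as the baseline" might look like in practice (the project numbers below are invented for illustration):

```python
import statistics

# Actual-to-estimated cost ratios for past projects similar to yours
# (hypothetical reference-class data):
past_overrun_ratios = [1.2, 1.4, 1.5, 1.8, 1.9, 2.1, 2.6]

inside_view_estimate = 100_000   # your own bottom-up "best case" estimate

# The outside view: anchor the forecast on the reference class, not intuition.
baseline = inside_view_estimate * statistics.median(past_overrun_ratios)
print(f"outside-view baseline: ${baseline:,.0f}")   # $180,000, not $100,000
```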

End of Part III.

My Thoughts:

Chapter 21 is, hands-down, the very best chapter in the book so far. In it, Kahneman explains the importance of using checklists and rubrics. He even has an interesting explanation of why checklists are so successful: they are like simple formulas, regressions without the weighting. So I need to take back all my griping in my previous posts about Kahneman not emphasizing checklists and rubrics. He does all the talking here!

On the BCS:

Every year college football decides a national champion. Next year, instead of the champion being determined by the outcome of the game between the #1 and #2 ranked schools (as determined by a combination of computer algorithms and coaches' polls -- the BCS), the champion will be determined by a four-team, single-elimination tournament. The four teams that play will be ENTIRELY determined by a selection committee of important people and coaches.

The goal of the change is to make the selection of the national champion less controversial. Instead, I think this move will make it more likely that the national champion will be controversial. Instead of choosing two teams, four have to be chosen, and now there are no defined criteria other than what the committee feels like. I think the move to relying on the committee makes the decision process LESS structured and consistent from year to year.

One of the biggest perceived problems people had with the BCS was that a "computer" was choosing which one-loss team plays in the national championship. (In many years there is only one undefeated team, so the BCS had to select which other team would play against the undefeated team.) The move to a four-team playoff chosen by a committee doesn't really solve the problem of which one-loss team(s) to include, because there are almost always more than four undefeated or one-loss teams. Instead, the decision will be less systematic and more controversial.

To me, the biggest problem with the BCS isn't that "computers" determine who plays but that the computer ranking algorithms were somewhat secret. Computers are great calculators. Things like strength of schedule and how to weight wins early in the season versus late in the season are debatable. Much better than letting coaches decide based on "feel" would be to publicly release the calculations done in the computer section of the BCS and be transparent about what they are trying to achieve. How is strength of schedule calculated? How much does an early loss matter? How much does a late loss matter? If there is a problem with the BCS computer formula, you should be able to identify WHY that problem exists and adjust the formula from year to year.

At the very least, the algorithms should still be calculated and released, even if only to provide information to the coaches. But based on the results Kahneman presents, there is no guarantee coaches will take these "computers" seriously; they will go out of their way to find "broken legs" the computer missed in order to overrule the ranking -- leading to poorer outcomes.

If I had to pick the ideal system for determining the champion (and who plays in the big bowl games), I would go with a Swiss-style tournament. The schedule throughout the season would be dynamic, depending on the wins and losses of the previous week; the last round of the Swiss tournament would be the championship game between the top two teams, along with the bowl games. The season as a whole matters. A pure end-of-year tournament with lots of entries (like the NCAA basketball tournament) would make the end of the year matter almost exclusively. Teams would only have to play well enough to make it into the tournament. There would be more games where the outcome doesn't matter. Even the worst team in the tournament over the whole season could still become the national champion by playing the best in the tournament. To me, "national champion" should take into account the accomplishments of the whole year. Many people like the decisiveness of a tournament and the excitement of upsets, but just because you have a winner from a single-elimination tournament doesn't mean you've picked the best team of the year -- the national champion.

Friday, December 20, 2013

Luck or Skill? (TFS, part 9)

I am reading Thinking, Fast and Slow, by Daniel Kahneman. In this series I will summarize key parts of the book and supply some comments and reflections on the material.

Part III: Overconfidence
Chapters 19-20

Summary:

Halo effect, hindsight bias, outcome bias, illusion of validity, illusion of skill (thinking an outcome is a result of mostly skill when the outcome is a result of mostly luck).

Errors of prediction are inevitable because the world is unpredictable.  High subjective confidence is not to be trusted as an indicator of accuracy (someone admitting low confidence could be much more informative).

My Thoughts:


One thing Kahneman complains quite a bit about is how much some people get paid when it seems like their job is mostly -- he stops just short of explicitly saying all -- luck. He calls out stock traders and CEOs specifically, and questions why incentive pay exists at all if it's almost all luck.

I think Kahneman goes too far. For example, I don't think Microsoft's stock price would have jumped so much at the announcement of the retirement of a lackluster CEO if CEOs didn't matter.

In one calculation, Kahneman looks at investment results for 25 investment advisers in one firm over 8 years and finds basically 0 average correlation in the relative rank of the advisers across every pair of years (I am not sure looking at every pair of years and averaging to get an overall correlation is a good idea, but whatever). He concludes from the zero correlation that their job is "mostly chance" and the firm was rewarding employees based on "luck rather than skill," so why pay bonuses? It's very easy to imagine that without the bonuses advisers would have worked less hard for the firm and all achieved worse results on their investments or brought in fewer clients, or whatever, to the detriment of the firm. The bonuses serve a very useful purpose of encouraging harder work. The zero correlation in rank of the employees over time does not necessarily mean the bonuses were serving no purpose as Kahneman seems to think.
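For the curious, here is roughly what that calculation looks like as a sketch, with simulated, skill-free data (so the average pairwise rank correlation should come out near zero, matching Kahneman's finding); all of the numbers are invented:

```python
from itertools import combinations
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
returns = rng.normal(size=(8, 25))    # 8 years x 25 advisers, pure luck

pair_corrs = [spearmanr(returns[i], returns[j])[0]
              for i, j in combinations(range(8), 2)]   # all 28 pairs of years
print(f"average rank correlation over {len(pair_corrs)} pairs: "
      f"{np.mean(pair_corrs):+.3f}")
```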

Food for Thought:

1) "You should have known the market was going to crash!"

2)  "We had access to all the information we needed. Why didn't the government connect the dots to prevent 9/11?"

3) Why do people trade stocks at all? (No Trade Theorem)

4) An engineering firm hires a new class of employees with exactly the same credentials from a good college. The firm cares about producing patents. The ability of employees to come up with patentable ideas is a function of their credentials, hard work/effort, and a lot of luck. The firm cannot observe effort or luck. Employees decide how much effort to put into their job.
    a) Will the firm produce more patents if it pays bonuses based on the number of patents rather than a flat wage?
    b) Will the distribution of income among employees with identical credentials be more unequal if bonuses are paid out?
    c) Alice received a $10,000 bonus this year. Bob received no bonus. True or False: We know for sure that Alice worked harder than Bob this year.
    d) Suppose all employees choose to exert the same amount of effort under the incentive pay system. The difference in patents (and pay) among employees at the end of the year is thus entirely due to luck. Does that mean the incentive pay system is serving no useful purpose? What if it were known that income would be redistributed by the government at the end of the year to make everyone's pay equal? Pay differences are due to luck, after all. Would employees produce as many patents?
    e) Suppose more firms move to incentive pay rather than flat wages, causing inequality in the whole country to increase. Is this a good thing? Is it fair?

Saturday, December 7, 2013

Tom and Linda (TFS, part 8)

I am reading Thinking, Fast and Slow, by Daniel Kahneman. In this series I will summarize key parts of the book and supply some comments and reflections on the material.

Part II: Heuristics and Biases
Chapters 14-16

Summary:

People tend to rely too heavily on representativeness and stereotypes when judging likelihoods. People are not good at intuiting Bayes's Rule or other probabilistic laws (especially when there is a good story to tell otherwise).

My Thoughts:

This section of the book is the weakest so far in terms of the quality of the research, robustness of the results, and the importance of the results in a broader context. I was going to rip it apart paragraph by paragraph, but Kahneman does a very surprising thing in chapter 15: he admits that it is the weakest body of research his ideas have sparked. I find this incredibly honorable (even though I still think the research is even weaker than he admits). One doesn't often see this in popular writings.

Case in point: I read Why Nations Fail last year, and while the book is good and from two great economists, the whole thing can be summarized in one sentence: nations fail because of bad, extractive institutions, while culture and geography don't matter. But the authors go way overboard in stating their case, relying on questionable historical examples, and bashing opposing theories. The world is more complicated than they admit, and solutions are not as simple as clamoring for "inclusive institutions" (what does that even mean, anyway?).

But, I digress. I want to talk about two main studies in this section of the book.

1) Profile of Tom W: "Tom W is of high intelligence, although lacking in true creativity. He has a need for order and clarity, and for neat and tidy systems in which every detail finds its appropriate place. His writing is rather dull and mechanical, occasionally enlivened by somewhat corny puns and flashes of imagination of the sci-fi type. He has a strong drive for competence. He seems to have little feel and little sympathy for other people, and does not enjoy interacting with others. Self-centered, he nonetheless has a deep moral sense."

Given the psychological profile of Tom W, people tend to be overly confident that he is a computer scientist. The error is that while Tom W may be close to the stereotypical/representative computer scientist, there aren't that many computer scientists relative to the population as a whole. There are many more people in humanities and education than in computer science, so there should still be a significant chance (it may even be more likely) that Tom is in one of those fields. People seem to disregard the population base rate when given specific information that tells a good story.
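Here is a minimal Bayes-rule sketch of the Tom W logic; the base rates and likelihoods are invented for illustration, but the point survives any reasonable numbers:

```python
# Hypothetical numbers: CS is a small field, and Tom W's description fits
# a CS student six times better than a humanities/education student.
base_rate = {"computer science": 0.03, "humanities/education": 0.20}
fit       = {"computer science": 0.30, "humanities/education": 0.05}

evidence = sum(base_rate[f] * fit[f] for f in base_rate)
for field in base_rate:
    posterior = base_rate[field] * fit[field] / evidence
    print(f"P({field} | description) = {posterior:.2f}")
# ~0.47 vs ~0.53: even a description that fits CS six times better leaves
# the odds close to even, because the base rate is nearly seven times larger.
```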

However, as the literature review/introduction of this study (which I've linked to before) summarizes, when researchers ask for frequencies ("how many out of 100?") rather than probabilities, base rate neglect disappears and subjects act like good Bayesians, even when they are not trained in statistics.

Here's one more problem: Kahneman pulls a bit of a fast one in his description of the Tom W results. Rather than providing actual frequencies of nerdiness in different fields (which is important because stereotypes are real reflections of the environment and real selection effects and can be self-reinforcing), he claims people should have stayed close to the base rates because "the source of Tom W's description was not highly trustworthy." Say what? I mean, we know that and Kahneman knows that (he made up the description specifically to fool a colleague -- it's not something that comes from an actual psychological profile), but do the subjects know that? Probably not.

2) Linda. "Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Please rank order the following statements with respect to their probability:
    -- Linda is a bank teller.
    -- Linda is a bank teller and active in the feminist movement."

Of course, "Linda is a bank teller" is more likely. The probability rule is

                         Pr(A) = Pr(A and not B) + Pr(A and B).

In other words, the probability Linda is a bank teller is equal to the probability Linda is a bank teller and NOT active in the feminist movement plus the probability Linda is a bank teller and active in the feminist movement. Therefore, Linda is a bank teller must be more likely. The problem is that "Linda is a bank teller and active in the feminist movement" makes a good story. (The conjunction fallacy.)
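Plugging any numbers into the rule shows why the popular ranking is a fallacy; a two-line check (probabilities invented):

```python
p_teller_and_feminist     = 0.05   # hypothetical
p_teller_and_not_feminist = 0.02   # hypothetical

p_teller = p_teller_and_feminist + p_teller_and_not_feminist
assert p_teller >= p_teller_and_feminist       # the conjunction can never win
print(f"{p_teller:.2f} vs {p_teller_and_feminist:.2f}")   # 0.07 vs 0.05
```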

However, if instead of "Please rank order the following statements with respect to their probability" we ask "To how many out of 100 people who are like Linda do the following statements apply?" subjects do not violate the probability rule. The results are very sensitive to how the question is asked.

General criticisms:

Probability is hard! Do people even know what it really means? Kahneman on the one hand says people substitute representativeness for probability because probability is hard, but on the other hand discounts the research that asks for frequencies on the grounds that people intuitively understand what probability means. I don't think he can have it both ways.

When there are no stakes, right answers and wrong answers don't matter. Thinking (especially about probabilities) is difficult and costly. One of Kahneman's students responded "so what?" when Kahneman pointed out he violated an elementary logical rule in the Linda study. Kahneman was discouraged by this. But he shouldn't be. Why should people think hard when they don't want to and there is no cost to getting it wrong?

One more thing. Kahneman mentions John List's studies on the market for baseball cards as confirming evidence for the existence of psychological biases in individuals. But even if individuals have strong biases and fall into all the traps psychologists lay for them, it may not matter at all at the market level (see, for example, this paper). Markets punish mistakes and people learn. If you keep making mistakes, you will go bankrupt and be forced out of the market. The market participants may not have any idea what statistical rules they are learning; they just know they are doing better. They may even exhibit really bad biases in all other areas of life. But that market experience matters for that market.

This is a critically important insight. We can make fun of economists all we want for modeling simple, perfectly rational individuals, when we know people are more complicated than that, but if at the market level these biases are competed out and the market looks like it's made up of simple, rational people, then that's all that matters.

Part II: Heuristics and Biases
Chapters 17-18

Summary:

Regression to the mean.

My Thoughts:

These chapters are not that interesting. There is one really good quotation, though:

"Whether undetected or wrongly explained, the phenomenon of regression is strange to the human mind. So strange, indeed, that it was first identified and understood two hundred years after the theory of gravitation and differential calculus."

Amen to that.

Monday, November 25, 2013

Availability Heuristic (TFS part 7)

I am reading Thinking, Fast and Slow, by Daniel Kahneman. In this series I will summarize key parts of the book and supply some comments and reflections on the material.

Part II: Heuristics and Biases
Chapters 12-13

Summary:

These chapters are all about the availability heuristic: the process of judging the frequency of something by the ease with which instances of the thing come to mind. This is an example of substitution: people substitute the easy question "how do I feel about it?" for the hard question "what do I think about it?"

Examples: the perceived number of divorces among celebrities versus the population at large, frequency of infidelity among politicians versus the population at large, the perceived safety of flying after the news reports a plane crash, people purchasing more insurance AFTER an accident or disaster, and the estimates of the main causes and probability of death being warped by media coverage.

Media coverage could lead to an availability cascade, a "self-sustaining chain of events, which may start from media reports of a relatively minor event and lead up to public panic and large-scale government action" (e.g. Love Canal, Alar scare of 1989, acts of terror).

How do you correct for this bias? Be aware of your own biases; focus on content, not feelings; be on the lookout for bias ("maintain high vigilance").

Some more interesting studies highlighted in these two chapters:

One study showed that the "awareness of your own biases can contribute to peace in marriages, and probably other joint projects." The study asked each member of a couple how much they contributed to keeping the place tidy in percentage terms. The total was larger than 100%. But the observation that the total was greater than 100% was often enough to make people aware of their own bias and defuse arguments.

Another study showed that people in power are much more likely to fall victim to this bias. In addition, "merely remembering a time when they had power increases trust in their own intuition."

Kahneman issues one word of caution: the error can go the other direction, too. A study by neuroscientist Antonio Damasio showed that "people who do not display the appropriate emotions before they decide, sometimes because of brain damage, also have an impaired ability to make good decisions. An inability to be guided by a 'healthy fear' of bad consequences is a disastrous flaw."

My Thoughts:

Kahneman's choice of examples of availability cascades is very interesting. Acts of terror can definitely lead to a cascade, and I believe the government has gone too far because of one. That is why I Opt Out. For more modern examples of availability cascades, I would have included the perceived importance of the federal response to Hurricane Katrina and the perceived safety of schools in light of recent shootings (at least some media outlets are aware of the latter; USA Today reports that schools are actually safer now by almost every measure than they were 20 years ago). The inclusion of Love Canal as a cascade is very jarring for me. I was taught it was one of the most disastrous environmental events in US history.

There is a big exposition in these chapters about experts and citizens which I think is clunky and not really very good. But there is one good quotation: "Every policy question involves assumptions about human nature, in particular the choices that people may make and the consequences of their choices for themselves and for society." If there were an award for the social science that does the best job of explaining its assumptions about human nature when recommending policy, economics would win the gold medal hands down. While being transparent allows all sorts of justified and unjustified mocking of the field, stating assumptions is much better than letting them go unsaid, which is often the case elsewhere. I get the feeling Kahneman is making hidden assumptions about human nature that he is not being explicit about when he makes policy recommendations in these chapters, but even so, psychology would probably take the silver medal -- but only when it acts like economics.

One last thing: Even though it's nice to think that our country is run by experts (it's their job to spend all day researching and debating whether a law is good or not), our representatives are people, too. They can exhibit these biases and fail in representing us. One of my biggest pet peeves is when a politician either makes a policy or changes a policy stance FOR THE WHOLE COUNTRY based on ONE PERSONAL EXAMPLE that comes easily to mind. One of the more egregious recent examples of this was when Rob Portman changed his stance on gay marriage because his son came out as gay. All of a sudden, it was okay because of one personal experience. No cost and benefit analysis; no talk of freedoms or rights or the role of the government in legislating morality. He even acknowledged he never really thought about the issue before. And based on the interview, he clearly STILL hasn't thought about the issue since. Hopefully, Portman isn't put in charge of making any critical decisions for the country. This is not the way to make policy on ANYTHING (to be fair, Portman only talks about economic issues -- social issues are completely off his radar -- so he may make actual decisions on economic issues; but I wouldn't bet the house on it).

Food for Thought:

1) Kahneman closes these chapters with this observation: "Rational or not, fear is painful and debilitating, and policy makers must endeavor to protect the public from fear, not only from real dangers." Do you agree? Why or why not?

2) Given that our perceptions of the frequency of events come mainly from memory and the media, and that biased presentations in the media distort our perceptions of important frequencies in reality, is there a role for a "Fairness Act" to control the content of the media? What are the pluses and minuses of unfettered freedom of the press?

Friday, November 22, 2013

Anchors Away! (TFS part 6)

I am reading Thinking, Fast and Slow, by Daniel Kahneman. In this series I will summarize key parts of the book and supply some comments and reflections on the material.

Part II: Heuristics and Biases
Chapter 11: Anchoring

Summary:

Anchoring occurs when people consider a particular value for an unknown quantity before estimating that quantity: the estimate ends up biased toward the value considered, even when that value is obviously uninformative.

In a study, judges were asked to sentence a shoplifter after rolling (weighted) dice. "On average, those who had rolled a 9 said they would sentence [the shoplifter] to 8 months; those who rolled a 3 said they would sentence her to 5 months; the anchoring effect was 50%."

Why do we anchor? It is partly a System 1 bias of priming. It is partly a System 2 adjustment process: when answering, we start at the anchor and then adjust in the correct direction until we are "unsure." This leads to reporting the end of a confidence interval and a systematic bias in the direction of the anchor.

How do you correct for anchoring in negotiations? Focus your attention and search memory for arguments against the anchor; focus on the minimal offer you think the opponent would accept. This more actively engages System 2 and removes the anchoring bias.

"The main moral of priming research is that our thoughts and our behavior are influenced, much more than we know or want, by the environment of the moment."

My Thoughts:

This is the first chapter where I got the feeling that every solution being presented to correct for System 1 biases can be summarized as "Pay more attention! Engage System 2!" Which is fine; I agree; but there is only so much attention that can be paid (budget constraint!), and he doesn't talk much about ways to engage that System 2 thinking process. There are lots of ways people engage System 2: for example, having rubrics, checklists, or standards in place that help you focus on the decision process, being consistent, and conforming to an established precedent.

If a really important decision is coming up, acknowledge you are attention constrained and focus on that problem and your psychological biases related to that problem rather than on whether you are being tricked by the way the supermarket stocks their shelves that day or something.

Rules can also help combat anchoring (and of course other biases). One of the reasons we have mandatory sentences is so that there is less discretion and more consistency in judges' sentences.

But rules don't always help; they can introduce another anchor, so there is a tradeoff here. Kahneman asks us to consider the effect of capping damages in personal injury cases to $1 million. This could anchor small cases, pulling up awards that otherwise should be much smaller. It could also prevent a correct award in the rare case more than a million dollars is deserved.

Food for Thought:

1) How free are we to make our own decisions? How much does the environment affect our decision process?

2) What is an unbiased opinion?

Wednesday, November 20, 2013

The Law of Small Numbers? (TFS part 5)

I am reading Thinking, Fast and Slow, by Daniel Kahneman. In this series I will summarize key parts of the book and supply some comments and reflections on the material.

Part II: Heuristics and Biases
Chapter 10: The Law of Small Numbers


Summary: People are bad intuitive statisticians.

Detailed Summary:
  • We incorrectly want to apply the Law of Large Numbers to small numbers as well ("the Law of Small Numbers")
  • We have exaggerated faith in small samples. 
  • We pay more attention to message content than information about its reliability (e.g. sample size, variance).
  • We want to see patterns where there is only randomness.
  • There is no "hot hand" in basketball. 
  • Researchers err when choosing sample size for experiments. "[P]sychologists commonly chose samples so small that they exposed themselves to a 50% risk of failing to confirm their true hypothesis!"

My Thoughts on Chapter 10

First off, a quick quibble: "[P]sychologists commonly chose samples so small that they exposed themselves to a 50% risk of failing to confirm their true hypothesis!" should read "...a 50% risk of failing to reject a false null hypothesis!" 
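To see where numbers like that 50% come from, here is a simulation sketch of statistical power with small samples and a moderate true effect (the sample size and effect size are chosen for illustration):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n, effect, trials = 30, 0.5, 10_000   # 30 subjects per group, true effect d = 0.5

rejections = 0
for _ in range(trials):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(effect, 1.0, n)
    if ttest_ind(control, treated).pvalue < 0.05:
        rejections += 1

print(f"power = {rejections / trials:.2f}")   # ~0.48: roughly a coin-flip
                                              # chance of rejecting a false null
```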

I also have a quibble with how Kahneman explains our tendency to see patterns where there is only randomness. He claims "causal explanations of chance events are inevitably wrong." This is true, but misleading given the point he is trying to make. If you know the process behind the event is random, then yes, a causal explanation is wrong. But what if you don't know the mechanism? What if you are unsure what is going on? I think what he wants to say is that people are good at finding patterns and bad at accepting the absence of patterns ("randomness"), but I'm not sure. Also, when Kahneman mentions "randomness" and "chance" he almost always is assuming independence of events. Sometimes this assumption matters a lot and he seems to assume it in some places that are questionable. 

Sometimes the assumption of independence of events when events are in fact NOT independent can be disastrous. Case in point: the sub-prime mortgage crisis. All those fancy financial instruments were priced correctly only under the assumption that mortgage defaults were relatively unrelated. In fact, they were highly correlated. Tim Harford has a fantastic basic description of the crisis and of how bad assuming statistical independence can be when it's not true. Read it here. You will never think about eggs or mortgages in quite the same way again.

Given that statistical mismodeling can be so disastrous, is it such a bad thing that people search for patterns and take extra precautions after something unexpected or extremely unlikely happens? Even if it turns out that, often, it's just a fluke?

One last thing -- (and I will come back to this experiment several times in discussing these chapters) people may actually be pretty good intuitive statisticians! Results of experiments that ask difficult questions seem very sensitive to how questions are asked in a laboratory setting -- a lesson I thought we learned when we talked about happiness.

Food for Thought:

1) At the casino, you are assured the roulette wheel is fair (there is an equal chance of red and black and a small chance of green). You observe the first spin lands on red. Then you see the second spin land on red. Then the third. You keep standing there, and you keep observing spins land on red. How many spins do you need to observe before you become more sure than not the wheel is NOT fair? How would your bets change over time? How do your answers depend on the reputation of the casino? on who told you the roulette wheel was fair?

If you were the owner of the casino, at what point would you close the roulette table and have the mechanism of the wheel inspected?

2) You are assured by a friend a coin is fair. Your friend tosses heads. Then heads again. And again, and again. How many heads in a row do you need to see before you are more sure than not that the coin is not fair or your friend is not tossing the coin "randomly"? How does this depend on how much you trust your friend?
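Question 2 has a clean Bayesian reading: trust in your friend sets the prior that the coin is fair. A minimal sketch, with invented numbers for both the prior and what a "not fair" coin might look like:

```python
prior_fair = 0.95          # high trust in the friend
p_heads_if_unfair = 0.9    # one hypothesis for what "not fair" means

fair, unfair = prior_fair, 1 - prior_fair   # unnormalized posteriors
streak = 0
while fair >= unfair:      # stop once "not fair" is more likely than not
    fair *= 0.5            # each head is 50% likely under a fair coin
    unfair *= p_heads_if_unfair
    streak += 1

print(f"heads in a row before doubting the coin: {streak}")   # 6 here
# More trust (a higher prior) or a less extreme alternative hypothesis
# both push the required streak up.
```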

(A quick anecdote: In high school AP Statistics, our teacher asked everyone in the class to flip a coin 10 times and record the flips. More than half the class "surprisingly" recorded all tails or all heads on their flips. Several students claimed we had defeated statistics. We redid the exercise with the requirement that we had to flip the coin high in the air and let it fall on the ground several feet away from us. The teacher monitored the class more closely. Everything came out normal that time.)

3) A hurricane of record strength devastates the coast. People who didn't have insurance now buy insurance. Should they? Why didn't they buy insurance before? Are they irrational? How does your answer depend on your beliefs about the effects of global warming (in particular, your beliefs that over time the average strength of hurricanes will increase)?

Wednesday, November 13, 2013

How Happy Are You? Five. (TFS part 4)

I am reading Thinking, Fast and Slow, by Daniel Kahneman. In this series I will summarize key parts of the book and supply some comments and reflections on the material.

Part I: Two Systems
Chapters 8-9

Summary:

System 1 is continuously monitoring the world, making assessments, and answering easy questions. System 2 can answer hard questions, but if a satisfactory answer is not found quickly enough, System 1 will try to substitute a "related" easy question to which it already knows the answer.

Detailed Summary:

These two chapters are dense! They are a catalog of studies about what System 1 is good at and what it is not. A lot of these studies are new to me, so my reading slowed down quite a bit here. Let's go through some of those results describing what some of the easy and hard questions are:

System 1 is good at determining whether someone is a friend or a foe. It is also good at determining two traits about people: their dominance and their trustworthiness. How dominant and trustworthy a political candidate looks influences how people vote (even more than "likability").

System 1 is good with averages, but poor at sums. This leads to a "prototype" bias where people become blind to quantities. For example:
In one of many experiments that were prompted by the litigation about the notorious Exxon Valdez oil spill, participants were asked about their willingness to pay for nets to cover oil ponds in which migratory birds often drown. Different groups of participants stated their willingness to pay to save 2000, 20,000, or 200,000 birds.... The average contributions of the three groups were $80, $78, and $88, respectively.... What the participants reacted to, in all three groups, was a prototype -- the awful image of a helpless bird drowning, its feathers soaked in thick oil. The almost complete neglect of quantity in such emotional contexts has been confirmed many times. 

System 1 is good at matching intensities across diverse dimensions. For example, it is good at answering questions like: "If Sam were as tall as he is intelligent, how tall would he be?"

People suffer from the affect heuristic, in which they let their likes, dislikes, and emotional feelings determine their beliefs about the world and which arguments they find persuasive.

And one more:

 "How happy are you?" is a difficult question to answer. Kahneman presents the following as "one of the best examples of substitution." The study asked two questions: "How happy are you?" and "How many dates have you gone on in the last month?" When the questions were asked in that order there was zero correlation between the answers. When the order was reversed ("dates?" then "happy?") the correlation was astronomically high. In the second group, people substituted the first question for the second because "dates?" is an easy question to answer that contributes to happiness that is fresh in their mind while "happy?" is a difficult question. Similar results hold when asking about finances or relationships with parents instead of dates.

My Thoughts:

I have been skeptical of the happiness literature that takes surveys about happiness too seriously ever since I saw Justin Wolfers present this paper. "On a scale of 1-7, how happy are you?" just seems like an impossible question to answer, let alone make systematic sense of many people's answers to it over time. Happiness depends on a lot of things, and it's unclear whether the feeling is absolute or relative, so what do those numbers mean anyway? My happiness is five. Is that on an absolute scale? Or relative to my opportunities right now? Or relative to my neighbor? Or is it an answer to some other question? I only needed introspection to figure out that "happy?" is a difficult question to answer, but I am glad there are actual studies that establish this fact.

This seems like a good time to emphasize the following: utility is not happiness (not even happiness perfectly measured). It is a mistake to think about utility as only squeals of glee. Happiness is one of many emotions. Utility is a more general measure of how well off you are, and a utility function is simply a representation of your preferences over bundles of goods or choices to be made. When you make a decision or consume a bundle of goods you can be better off but less happy. (Higher utility, but lower happiness.) Decisions that lead to lower happiness are not a mistake if they make you better off.

And that's the end! ... of Part I of V. Oh, my. There is a lot to this book! I am really getting my money's worth!

Food for Thought:

1) On a scale of 1-7, how happy are you?

2) Have you ever gone to a movie knowing it didn't have a happy ending? Were you better off because of it? Were you happier because of it?

3) What books, novels, or stories have you read that made you better off but less happy?

4) Have you ever sacrificed your own happiness for others? Were you better off because of that sacrifice? Was society better off?

Monday, November 11, 2013

Don't Grade the Smart Person's Test First! (TFS part 3)

I am reading Thinking, Fast and Slow, by Daniel Kahneman. In this series I will summarize key parts of the book and supply some comments and reflections on the material.

Part I: Two Systems
Chapters 4-7

Summary: 
Priming is important.

Q: How do you make people believe in falsehoods?
A: Frequent repetition, because familiarity is not easily distinguished from truth.

Q: How do you write persuasive messages?
A: Make them legible, use simple language (this increases the perceptions of credibility and intelligence!), make them memorable (e.g. verse, mnemonics), and use easy-to-pronounce sources.

The more often something happens, the more normal it seems.

Confirmation bias, the halo effect, overconfidence, discounting missing evidence (the "what you see is all there is" bias), framing effects, and base rate neglect are all important psychological tendencies people should be aware exist.

My Thoughts:

These systematic biases are very important and prominent in many peoples' behavior. They are the bread and butter of intro psych and becoming aware of them (and learning how to correct for them) was the most significant thing I got out of taking psychology as an undergrad. 

The best anecdote in this section is Kahneman's discovery that he wasn't grading his students' exams fairly. He originally graded his exams in the conventional way -- one student's exam at a time, in order. Here's the excerpt:
I began to suspect that my grading exhibited a halo effect, and that the first question I scored had a disproportionate effect on the overall grade. The mechanism was simple: if I had given a high score to the first essay, I gave the student the benefit of the doubt whenever I encountered a vague or ambiguous statement later on. This seemed reasonable. Surely a student who had done so well on the first essay would not make a foolish mistake on the second one! But there was a serious problem with my way of doing things. If a student had written two essays, one strong and one weak, I would end up with different final grades depending on which essay I read first. I had told the students that the two essays had equal weight, but that was not true: the first one had a much greater impact on the final grade than the second....
So, he tried grading the exams one question at a time rather than one student at a time.
Soon after switching to the new method, I made a disconcerting observation: my confidence in my grading was now much lower than it had been. The reason was that I frequently experienced a discomfort that was new to me. When I was disappointed with a student's second essay and went to the back page of the booklet to enter a poor grade, I occasionally discovered that I had given a top grade to the same student's first essay. I also noticed that I was tempted to reduce the discrepancy by changing the grade that I had not yet written down, and found it hard to follow the simple rule of never yielding to temptation. My grades for the essays of a single student often varied over a considerable range. The lack of coherence left me uncertain and frustrated....
The procedure I adopted to tame the halo effect conforms to a general principle: decorrelate error! ... To derive the most useful information from multiple sources of evidence, you should always try to make these sources independent of each other. 
I am a huge proponent of grading by a rubric, in addition to grading one question at a time as much as possible, in order to avoid exactly this issue. Grading with a rubric increases consistency and fairness and decorrelates error. Sometimes teachers grade the smart person's test first and use it as a key rather than making their own key with a rubric. I disapprove of this (as tempting as it is) -- it saves time, but as Kahneman points out, there is probably a big halo effect here.
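A minimal sketch of what this looks like as a procedure; the rubric function is a stand-in, and all names and scores here are hypothetical:

```python
import random

def grade_against_rubric(student, question):
    # Stand-in for scoring one answer against the written rubric.
    return random.randint(0, 10)

students = ["ana", "ben", "cara", "dev"]
questions = ["q1", "q2", "q3"]
scores = {s: {} for s in students}

for q in questions:                                   # one question at a time
    for s in random.sample(students, len(students)):  # fresh order per question,
        scores[s][q] = grade_against_rubric(s, q)     # so no exam is always first

totals = {s: sum(qs.values()) for s, qs in scores.items()}
print(totals)
```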

Why don't more lecturers grade with a rubric? It takes much longer to grade this way and it is much more tiring because you are actually evaluating all the questions and everyone's responses equally. It also makes assigning the final grades more difficult because the scores are not as obviously "separated" into nice groups. Many lecturers just assign grades based on where the obvious breaks in the scores are without realizing they have created those breaks themselves from biased grading of perceived smart students' tests and perceived dumb students' tests. The halo effect.

Creating the rubric also shows you when discrepancies arise in real time. As I grade a question, sometimes I adjust how "wrong" I think a particular answer is. Then I have to go back and adjust all the previous exams with that answer to bring the scores in line with my new judgement. If I wasn't recording this, I might not catch these changes, unfairly punishing some students simply because of the order in which the exams were graded.

Grading from a rubric also almost completely eliminated from my classes a time-honored tradition at the UofC: point mongering. Being able to state exactly why an answer is right or to what degree it is wrong is huge. Students know you are doing your job, respect you, and accept the grade they earn. It also makes it much easier to identify (for both the students and the teacher) actual grading errors.

Food for Thought:

1) Should the government take advantage of psychological biases and systematic errors people make when creating policy? To what extent should people be educated about them and then left to their own devices, and to what extent should the government create rules (reducing costly decision making) and "nudge" them?  The government is leaning toward more nudges.

2) The Obama campaign utilized academic social scientists more than ever. How are the biases highlighted in these chapters exploited in political campaigns?

3) How much information does it take for you to reverse your first impression of someone? When was the last time  you realized your first impression of someone was not correct? 

Friday, November 8, 2013

Restrictive Church Rules (TFS part 2)

I am reading Thinking, Fast and Slow, by Daniel Kahneman. In this series I will summarize key parts of the book and supply some comments and reflections on the material.

Part I: Two Systems
Chapter 3: The Lazy Controller

Summary: Paying attention is costly. Experiments supporting this show that exerting willpower is costly, self-control is tiring, there is ego depletion, and increased cognitive control is correlated with increased intelligence.

My Thoughts: This line of research is interesting and compelling. It attempts to explain why we sometimes make bad decisions in a very economic way: focus and attention are scarce resources that have to be allocated! 

What about those church rules? Maybe you remember your Sunday School teacher or your parents giving the following advice: don't put yourself in a position to be tempted (the "don't do anything fun" rule). Some denominations go as far as banning lots of activities that are not sinful in and of themselves but can easily lead to sinful behavior if engaged in "too much" (e.g. drinking, or even dating). People (esp. kids/teens) often laugh -- what, you think we have no self-control and don't know what's right and wrong? Well, not necessarily, but your willpower gets depleted the more often you are in novel, challenging situations. And then you will do things you regret. It may not even be regret later -- it could be regret as you make the bad choices: you know on net you are worse off, but you keep going anyway because you have no willpower left to stop.

The church advice is interesting because it acknowledges physical limitations and the psychology of the situation (breakdowns in rationality, some would say; simply acknowledging an often overlooked budget constraint, says me). But it also takes advantage of rational thinking: You (or at least your elders) KNOW if you put yourself in a bad situation your will power could get depleted leading you to make decisions that make you worse off. So don't put yourself in those positions in the first place! It's costly (you don't go to the fun party), but on net you are better off because of it (you didn't crash your parents car after succumbing to peer pressure and binge drinking -- a big plus). The rule is taught, thought about, and learned (maybe even legislated!) in a setting emotionally far removed from the situations in which it needs to be implemented. That's a rule that comes from straight-up, cold-blooded, forward-looking, rational thinking.

And here's a bonus -- once you learn the rule, it's much less costly to implement since you don't have to reason it out every time. But it becomes a heuristic that might lead you astray in other situations -- a System 1 bias.... Around and around we go.

Some smart people cite these ridiculous, "arbitrary" rules as reasons why they leave a particular church (sample: people I have known). But the last result Kahneman presents in this chapter (control is correlated with intelligence) sheds a little light on why that's a trend (besides the obvious answer that it's not a trend and I am biased in my perception of it -- see the next post). Rules are taught and tuned for "most people" or people "on average." In this case, it's for people with average willpower and control. If smarter people have more willpower to allocate, then these rules will be much more restrictive, and less good for them, than for average people. It will be optimal for them to break more of these rules in practice and exert more self-control. It's also optimal for them NOT to encourage others who have less self-control to break the rules.

Food for Thought:
1. "I don't buy bags of potato chips even though I know everything is fine in moderation. I can't eat just one. In fact, I know I will eat the whole bag in one sitting."

2. Alice: "I'm happy the FDA is about to ban trans-fats. Now I won't be tempted to eat so many doughnuts."
Bob: "I love trans-fatty donuts! They aren't as oily and taste better than other doughnuts. Sometimes I want to eat unhealthily (even though I know I may not live quite as long); let me eat the enjoyable, unhealthy doughnut I want!"

3. Who would be better off in a society where church rules are legally enforced? What if you could costlessly move to a society or city where your preferred church rules are enforced; would you?

4. How many difficult decisions can you make in a day before becoming too tired to make any more?

5. "If it's immoral then it should be illegal."

6. "If it's illegal then it's immoral."

Thinking, Fast and Slow (TFS part 1)

I am reading Thinking, Fast and Slow, by Daniel Kahneman. In this series I will summarize key parts of the book and supply some comments and reflections on the material.

Part I: Two Systems
Introduction, Chapters 1-2

Summary: Kahneman is a psychologist who dabbles in areas of interest to economists (he won the Nobel Prize in Economics a few years back). This book is mostly a summary of the current (as of 2011) state of research in the topics that interest him. Most of it isn't his own research, but is related to it.

Kahneman likes maintaining the following useful fiction: You are two agents, System 1 and System 2. System 1 is the automatic system which relies on intuition and heuristics to make quick, automatic, low cost decisions. System 2 is the effortful system, which is slow and requires conscious, mental effort to reason to a decision. Paying attention really is costly, so shortcuts (using System 1 first) are often necessary to navigate the world. Much of what interests Kahneman are systematic biases in decisions System 1 makes to our detriment (bonus if it's not what an economic actor would do in basic econ models) and what we do when these two systems are in conflict. Kahneman makes it absolutely clear that System 1 and System 2 are useful fictions -- not little people battling in our head -- but makes the statements of what research shows a lot easier to digest by allowing the use of simple, active language.

My Thoughts: Economists could do better at explaining why "utility maximizing" is a useful fiction in the same vein. That's not really how individuals make decisions, but it is a useful fiction that makes explaining things a lot simpler. It avoids having "as if", aggregation, or "on average" statements everywhere and lets us focus on what's really important.

To me the biggest difference between economics and psychology is the level of focus. Psychology is really interested in how YOU, PERSONALLY make decisions. How the actual individual "works." And at its most basic level, Thinking, Fast and Slow is a self-help book for the academically minded. Economists care much more about market-level, group behavior and allocations of scarce stuff (who gets what and why). In order to talk about these things coherently, economists need to over-simplify the individual's decision process and motives because it's one level down from what we actually care about. Of course, there is no hard line drawn, and there is much overlap between the fields, usually as a result of either economists or psychologists taking themselves too seriously. Hence the existence of fields like behavioral economics.

Saturday, November 2, 2013

How Science Goes Wrong

The Economist did a nice cover piece last week on How Science Goes Wrong. It is worth reading in its entirety. Here's the leader and here's the cover story.

A major problem in science is the lack of incentives to replicate others' research. This is the thesis of the piece.

A problem that they don't mention directly, but allude to, is that bad studies keep getting cited as truth even though they've been retracted, debunked, or otherwise discredited. EVEN IN THEIR OWN FIELD. Angrist and Krueger (1991), I AM LOOKING AT YOU. You would think that once someone showed their results could be qualitatively replicated using random data, the paper (and the technique of using quarter of birth as a valid instrument in that setting) would be relegated to the dustbin. You would be wrong. David, I feel your pain.

The Economist cites numerous examples supporting this thesis, which should be concerning to anyone who values scientific research and its (subsidized) place in the modern economy.
A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic. Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three-quarters of papers in his subfield are bunk. In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties.
... nine separate experiments had not managed to reproduce the results of a famous study from 1998 purporting to show that thinking about a professor before taking an intelligence test leads to a higher score than imagining a football hooligan. 
Why the downturn in verification and replication?
The obligation to "publish or perish" has come to rule over academic life.... Careerism also encourages exaggeration and the cherry-picking of results.
The cover story does a good job of going through the statistics of how most published research (even when done right and in good faith) could be wrong. And it's not all in good faith, because replications and reviews are not valued highly, and moderate or negative results are not usually interesting. It's important not only to do good work, but to capture the interest of your fellow scientists when publishing. This leads to incentives to distort or exaggerate your work. If it's controversial, even better -- you'll get cited. Truth matters less than sparking debate. Journals publish successes, not failures.

And those successes are rarely checked as carefully as they need to be by peer review once papers make it to the journal stage. Editors and reviewers need to spend time writing their own papers, not reading someone else's! Usually, the paper reviews get dumped on grad students as busy work, further reducing the quality of the review.
When a prominent medical journal ran research past other experts in the field, it found that most of the reviewers failed to spot mistakes it had deliberately inserted into papers, even after being told they were being tested.
... John Bohannon, a biologist at Harvard, recently submitted a pseudonymous paper on the effects of a chemical derived from lichen on cancer cells to 304 journals describing themselves as using peer review. An unusual move; but it was an unusual paper, concocted wholesale and stuffed with clangers in study design, analysis and interpretation of results. Receiving this dog’s dinner from a fictitious researcher at a made up university, 157 of the journals accepted it for publication.
How to Fix the Problem?

One thing the Economist notes, which I fully agree with, is that applied scientists need to understand statistics better. Also, scientists should actually follow the scientific method. Unfortunately, something like this usually happens instead -- if you're lucky! Here's more of their recommendation:
Journals should allocate space for “uninteresting” work, and grant-givers should set aside money to pay for it. Peer review should be tightened—or perhaps dispensed with altogether, in favour of post-publication evaluation in the form of appended comments. That system has worked well in recent years in physics and mathematics. Lastly, policymakers should ensure that institutions using public money also respect the rules.

Why Does this all Matter, Anyway?
The governments of the OECD, a club of mostly rich countries, spent $59 billion on biomedical research in 2012, nearly double the figure in 2000. One of the justifications for this is that basic-science results provided by governments form the basis for private drug-development work. If companies cannot rely on academic research, that reasoning breaks down. When an official at America’s National Institutes of Health (NIH) reckons, despairingly, that researchers would find it hard to reproduce at least three-quarters of all published biomedical findings, the public part of the process seems to have failed. [emphasis added.]
And that's the problem in a nutshell. Basic research is funded publicly because it is a public good -- so that every company doesn't have to independently reinvent the wheel, which would be wasteful. But if that basic research is mostly wrong, and every company that wants to use a result has to independently verify it, why are we funding basic research publicly, anyway?

Science has to get better at establishing actual Truth. The threat of withdrawing funding should be the stick that whips science into shape.

Tuesday, October 29, 2013

The Menu of Pain

The first step in solving a problem is acknowledging it exists. The growing fiscal gap is a problem that needs to be acknowledged.

The U.S. fiscal gap is $222 trillion and growing.

How can we close it? For me, Laurence Kotlikoff is the go-to economist on fiscal issues in the United States. Kotlikoff's menu of pain contains a lot of sobering options for either renegotiating our promises or paying for them.

  • Closing the fiscal gap through tax increases alone would require an immediate and permanent 64 percent increase in ALL federal taxes.

  • Closing the fiscal gap through spending cuts alone would require an immediate and permanent 35 percent cut in ALL federal outlays (including welfare and payments on interest and principal of existing debt). 

These numbers get worse every year the problem is not addressed. 

Solving the problem requires first that people acknowledge the severity of the problem -- that our promises far exceed our ability to deliver on them. A little extra growth won't close the gap. Modest immigration reform and a small increase in the birthrate won't do it. Cutting all federal discretionary spending including defense to $0 won't do it. Even a 100% tax and enslavement of the richest 1% wouldn't be enough to close the fiscal gap. 

Why is it so hard to acknowledge the severity of the problem? I think a lot of it comes from a misunderstanding of how most welfare programs work (they're complicated, and they change over time). Many people, especially current seniors, have the impression that they have paid into a system all their lives and are getting a fair return on that investment, with benefits based on what they paid in. 

The reality is that most large welfare programs are pay-as-you-go, relying on an ever-increasing pool of workers paying into them -- and that pool of workers is not growing fast enough. There is no personal bank account with your name on it accumulating market interest and protecting your principal. Current and past seniors have gotten a fantastic deal, getting much more out in benefits than they paid in:
According to the [Urban Institute]'s data, a two-earner couple receiving an average wage — $44,600 per spouse in 2012 dollars — and turning 65 in 2010 would have paid $722,000 into Social Security and Medicare and can be expected to take out $966,000 in benefits. So, this couple will be paid about one-third more in benefits than they paid in taxes.
If a similar couple had retired in 1980, they would have gotten back almost three times what they put in. And if they had retired in 1960, they would have gotten back more than eight times what they paid in. The bigger discrepancies common decades ago can be traced in part to the fact that some of these individuals’ working lives came before Social Security taxes were collected beginning in 1937.
Some types of families did much better than average. A couple with only one spouse working (and receiving the same average wage) would have paid in $361,000 if they turned 65 in 2010, but can expect to get back $854,000 — more than double what they paid in. In 1980, this same 65-year-old couple would have received five times more than what they paid in, while in 1960, such a couple would have ended up with 14 times what they put in.
Such findings suggest that, even allowing for inflation and investment gains, many seniors will receive much more in benefits than what they paid in.

This is why public benefits programs (which are, at their heart, just transfers) are so popular. It's also why Detroit's pensions have collapsed so spectacularly and why Social Security is going to "run out of money." 

I think that as people realize how the programs actually work, and as the strain on workers of having to pay for unreasonable promises grows, people will become willing to talk about ways of making transfer programs work by funding a responsibly sized pot (or a personal account) before taking from it. 
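To make the pay-as-you-go arithmetic concrete, here's a toy sketch (all numbers invented): each period, workers' payroll taxes are paid straight out to current retirees, so the worker-to-retiree ratio drives everything.

```python
# Toy pay-as-you-go program: this period's taxes fund this period's benefits.
# There is no account and no interest; only the worker/retiree ratio matters.
tax_per_worker = 1000.0  # invented flat payroll tax

def benefit_per_retiree(workers: int, retirees: int) -> float:
    return tax_per_worker * workers / retirees

# A growing worker pool looks like a fantastic "return" for early retirees...
print(benefit_per_retiree(workers=5_000, retirees=1_000))  # 5000.0
# ...but when the ratio falls, benefits must fall or taxes must rise.
print(benefit_per_retiree(workers=2_000, retirees=1_000))  # 2000.0
```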

Thursday, September 26, 2013

I Can't Keep My Health Plan!

“If you like your doctor, you can keep your doctor. If you like your current health insurance plan you can keep it.”
    -- President Obama, New Hampshire Town Hall 2009

"Because no matter who you are, what stage of life you're in, this law is a good thing.... if you already have insurance you like, you can keep it."
    -- HHS Secretary Kathleen Sebelius, DNC 2012

"Keep your doctor, and your plan, if you like them."
    -- Minority Leader Nancy Pelosi's website, currently

Well, I wasn't going to write anything about the implementation of the ACA because my views haven't changed much since the last time I wrote about it (see here and here). New information released in the lead-up to full implementation of the exchanges hasn't been too far from what was expected a year ago. And I still think the key to good health insurance market reform remains breaking the reliance on employer-provided insurance and passing reforms that promote price transparency and treat insurance as insurance, not as a health care payment plan.

However, my company is changing my health insurance options to become Affordable Care Act (ACA) compliant and I need to evaluate what choice I am going to make, so I may as well share some of that decision making process with you.

How the Affordable Care Act Affects My Insurance Plan

I have good insurance, but even my current plan is not ACA compliant! Here’s how the ACA affects my current plan:

1. “Under the Affordable Care Act, doctor, ER, and urgent care copays will now be included in the out-of-pocket maximum, and accordingly, the in-network out-of-pocket maximum will be increasing from $1,500 for individuals… to $2,650.”
2. “Similarly, to comply with the Affordable Care Act, there will now be a prescription plan in-network out-of-pocket maximum of $3,700 for individuals.” (There was previously no max.)

Here’s the cost comparison:


                            2013 Plan         2014 ACA-Compliant Plan
                            with Vision       with Vision
Biweekly Cost to Me         $87.69            $96.05
Calendar Year Deductible    $400              $425

There are a couple of things to notice here. In order to pay for these two benefits (and pay for the increasing cost of health care), the insurance company raised my costs in several ways: it directly raised the price I face by 9.5%, increased the deductible by 6.25%, and increased the out-of-pocket maximum by 77%. The employer contribution also probably went up, but I don't see that in the information I was given. They correctly assume that when making my decision I don't care about prices I don't face. 
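For the curious, those percentages come straight from the numbers above (the out-of-pocket figures are from item 1 in the previous section):

```python
# Where the quoted increases come from: the table above plus the
# out-of-pocket maximums quoted earlier ($1,500 -> $2,650).
changes = {
    "biweekly premium":  (87.69, 96.05),
    "deductible":        (400, 425),
    "out-of-pocket max": (1500, 2650),
}
for name, (old, new) in changes.items():
    print(f"{name}: +{(new - old) / old:.2%}")
# biweekly premium: +9.53%, deductible: +6.25%, out-of-pocket max: +76.67%
```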

I suspect many others who already have insurance through their employers will experience the same thing. Luckily, the ACA-compliant version of my plan isn't drastically different from what was offered last year. Others may not be so lucky. Because of changes like these, the cost increases individuals face will feel larger than they actually are: people are forced to consume additional "benefits" that aren't very valuable to them, even if the cost curve is "bending down." 

My Choice

Given that I want to remain insured through my company, I will have two options: buy the ACA-compliant version of the current plan (open access plan or "OAP") or (new this year) buy an alternative plan that also utilizes a health savings account (HSA). The HSA plan with Vision will have a lower premium ($88.66 biweekly) but a higher deductible ($1,750/year), a higher out-of-pocket maximum ($4,000/year), and a different benefits structure. For each claim, the OAP tends to charge a small copay and then cover the entire cost above it. The HSA plan usually covers a fixed percentage of the cost. For example, under the OAP I would pay $25 for an X-ray, while under the HSA plan I would pay 10% of its cost (as long as total expenses remain under the out-of-pocket max). 
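One way to compare the two is to sketch the total annual cost (premium plus out-of-pocket) under each plan as a function of how much care I end up using. This is deliberately crude: the OAP's copays are approximated as 5% of billed cost (my assumption, not something from the plan documents), the employer HSA contribution is ignored, and the claim amounts are hypothetical:

```python
# Rough annual cost comparison between the two plans described above.
# OAP: premium $96.05 biweekly; copays approximated as 5% of billed care
# (an assumption), capped at the $2,650 out-of-pocket max.
# HSA: premium $88.66 biweekly; $1,750 deductible, then 10% coinsurance,
# capped at the $4,000 out-of-pocket max.

def oap_cost(billed: float) -> float:
    premium = 96.05 * 26
    return premium + min(0.05 * billed, 2650)

def hsa_cost(billed: float) -> float:
    premium = 88.66 * 26
    out_of_pocket = min(billed, 1750) + 0.10 * max(billed - 1750, 0)
    return premium + min(out_of_pocket, 4000)

for billed in [0, 1_000, 5_000, 20_000, 100_000]:
    print(f"billed ${billed:>7,}: OAP ${oap_cost(billed):>8,.2f}  "
          f"HSA ${hsa_cost(billed):>8,.2f}")
```

The premium gap works out to about $192 a year (point 1 below), while for the moderate-sized shocks in the middle of that range the HSA plan costs noticeably more (point 3 below).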

HSAs combined with catastrophic (i.e. high-deductible) insurance have some good properties, stemming mainly from the fact that people have more of an incentive to pay attention to prices and are affected by more of them. When enough people shop around, this puts downward pressure on the cost of care overall. So they are great in theory and would be the backbone of a market-based alternative to Obamacare. Here's one view.

I am young and healthy and like the idea of HSAs, so why am I hesitant to sign up for one? A few reasons:

1. The particular HSA plan I am being offered gives only a 9% break on the premium. This is only about $192 a year.

This is partly due to the minimums imposed by the ACA and partly because most of the benefits come from tax deductions from contributing to an HSA. However, I will not be contributing much, if anything, beyond the minimal employer contribution in the next 1-2 years. So the savings from switching are minimal for me right now.

2. Additional hassle of managing yet another account.

3. The account starts at $0. There is a minimal yearly employer contribution, but moderate-sized health shocks will be more costly to me for the first couple of years under the HSA plan than under the OAP.

4. There is significant turmoil and uncertainty in the health insurance market. Originally, HSA plans were targeted by Democrats during the drafting of the ACA. I am not convinced enough politicians have been converted on them, yet. They may be targeted again in the short term or made otherwise more disadvantageous. Will the money I or my employer contribute really remain tax free and roll-over from year to year indefinitely?

5. While HSAs could be the backbone of health insurance market reform, they are not currently the backbone of the ACA. Not very many people have them, so will the macro effects of price sensitivity by those individuals be felt? I don't think we are there yet. In order to hit a tipping point, HSA plans need to be actively promoted, not tacitly discouraged. 

6. When shopping for the best price, which price will people be looking at? The higher, fake insurance price that gets negotiated down later, or the lower cash price for those without insurance? My suspicion is that when few people have HSAs, they will be forced to shop on and pay 10% of the fake, high price because of the lack of bargaining power and relatively few people shopping around.

It's also easier to switch to an HSA plan in a year or two than to switch away from one, so I think I am going to swallow the ACA cost increases for now and stick with the traditional insurance.

I have a couple of weeks until decision time, so: what do you think?

Saturday, September 21, 2013

Impose a Living Wage and Reduce Unemployment?

What happens to unemployment when the minimum wage increases? What if a living wage is imposed, raising the minimum wage a lot?

The Standard Static Story

Here's the standard story from intro econ in one easy picture:



The difference between the amount of work people are willing to supply at the minimum wage and the labor demanded by firms at the minimum wage is unemployment. (Note that unemployment doesn't exist at the competitive equilibrium in this model. Unemployment exists at the minimum wage because, at the higher wage, people want to work or work more (i.e. are "underemployed"), but firms are unwilling to provide that work opportunity.)

So, an increase in the minimum wage would increase unemployment and decrease employment. If the minimum wage is increased a lot to impose a living wage, then unemployment and underemployment increase a lot.
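In code, with made-up linear curves (the numbers are purely illustrative):

```python
# The standard static story with illustrative linear curves.
def labor_demand(w):  # firms hire less as the wage rises
    return 100 - 2 * w

def labor_supply(w):  # people offer more work as the wage rises
    return 10 + 4 * w

w_star = 15.0  # competitive wage: 100 - 2w = 10 + 4w  =>  w = 15
assert labor_demand(w_star) == labor_supply(w_star) == 70

w_min = 20.0  # a binding minimum wage above the competitive wage
employment = labor_demand(w_min)                          # 60: firms' side binds
unemployment = labor_supply(w_min) - labor_demand(w_min)  # 90 - 60 = 30
print(employment, unemployment)
```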

A Crazy Alternative Story

The standard story is nice, but it actually makes MORE assumptions about the supply of labor than the standard labor supply curve in many neoclassical models does. In the standard labor-leisure model (now we're at intermediate econ!), an individual's supply of labor can be backward bending.

Imagine you are given a small raise and can choose the amount you work in a year. Would you work more -- or less? What if you were given a pay cut? Would you work less -- or more? What if you were given a large raise? What if you earned $1,000,000 an hour; how much would you work a year then?

If given a raise, it turns out that most people would work more if they have a relatively low wage to begin with, but would work less if they have a relatively high wage. This is the backward bending labor supply curve. If people who work "minimum wage" jobs exhibit this phenomenon and have relatively similar preferences, the aggregate labor supply curve will also be backward bending at some wage level.
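Here's a minimal numeric sketch of that individual-level story, assuming CRRA utility over consumption, log utility of leisure, and a little non-labor income; all parameter values are invented for illustration:

```python
# A backward-bending individual labor supply, computed numerically.
# Utility: u = c^(1-rho)/(1-rho) + psi*log(T - h), with c = w*h + y.
# All parameter values are illustrative, not taken from the book.
import math
from scipy.optimize import minimize_scalar

T   = 100.0   # time endowment (hours)
y   = 5.0     # non-labor income
rho = 2.0     # consumption curvature (> 1 => strong income effect)
psi = 0.1     # weight on leisure

def hours_supplied(w: float) -> float:
    """Hours h that maximize utility at wage w."""
    def neg_u(h):
        c = w * h + y
        return -(c ** (1 - rho) / (1 - rho) + psi * math.log(T - h))
    return minimize_scalar(neg_u, bounds=(0.0, T - 1e-9), method="bounded").x

for w in [0.02, 0.05, 0.15, 0.5, 1.0, 5.0, 20.0]:
    print(f"wage {w:>5}: hours {hours_supplied(w):5.1f}")
# hours ~ 0.0, 23.6, 33.3, 28.0, 22.8, 12.3, 6.6: rising, then falling
```

With curvature above 1, the income effect eventually dominates the substitution effect, which is exactly what bends the supply curve back.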



Now if there is an increase in the minimum wage, unemployment may actually go down! This would happen if the minimum wage pushed the market well into the backward bending portion of the labor supply curve and labor demand was inelastic enough. Here, we get the interesting case where imposing a living wage doesn't increase unemployment, it decreases it! However, notice employment still declines, as before. So we have a regime of decreasing unemployment and decreasing employment.

Notice that in both stories an increase in the minimum wage reduces the total surplus in the economy. In addition, workers as a whole are better off only if the gains from the higher wage to those who remain employed outweigh the losses to those who lose their jobs or otherwise work less than they would like at that wage. Firms are definitely worse off.

This is very close to an analysis of how unions affect the labor market in the simple classical model. It's usually a good deal for those who remain employed, but hurts workers who would be willing to work for a little less, hurts firms, and there is a net loss compared to the competitive outcome.

An Even Crazier Story

Suppose -- this is very much a hypothetical -- the aggregate labor supply is backward bending and it intersects the demand curve in multiple places like this:



I've also added the standard "dynamics" to the static model. The competitive equilibrium is "stable" in the following sense: if wages were slightly below the competitive wage, demand would exceed supply. In order to attract more workers, firms are willing to raise the wage offered to workers. At a higher wage, laborers are willing to work more, and more may enter the market. We move "up" the curves until we hit an intersection -- the competitive equilibrium. At wages just above the competitive equilibrium, the opposite happens; supply exceeds demand and workers are willing to accept a lower wage in order to get a job or work more. The wage declines until it hits the competitive equilibrium.

Notice that the second equilibrium up the demand curve does not have this stability property, but the highest-wage one does. Now if a living wage is imposed, wages for the lowest-skill workers could settle ABOVE the living wage, at a high wage, low work, no unemployment equilibrium. Again, this could be good for workers if the benefits to those who are working outweigh the costs to those who are not, but it's also likely that this could lead to a WORSE outcome for the workers, too! That is, if the surplus is small enough (as it is in the figure), both workers and firms would prefer the economy to be at the low wage, high work equilibrium, but instead the economy is stuck in stagnation!

Is This Why Both Unemployment and Employment are Declining and the Economy is in the Doldrums; It's All About the Minimum Wage and Backward Bending Labor Supply?

Almost certainly NOT, so why did I bother with these crazy stories?

0) They're interesting! So what if it's not reality? It might describe some reality sometime.... Ok, you want some better reasons? Here you go:

1) It speaks to what MAY happen if there are LARGE changes in the minimum wage. The empirical literature on the minimum wage only talks about the effects on employment for small changes in the minimum wage around the levels we currently have (or have had in the past). In order to understand the effects of large changes, we have to have a model of the whole market. I just presented a couple. Crazy things can happen in less dubious models, too.

2) Unemployment is just an indicator. People aren't necessarily better off because it goes down. It depends on what else is going on in the economy. The crazy story above is one example of people being worse off when unemployment declines.

3) Multiple equilibria happen all the time. Often, some of the equilibria are bad. If stuck in a bad equilibrium, even though everyone knows they are stuck in a bad spot, no individual actor (even a perfectly rational one!) can do anything to get out of it.

Friday, September 6, 2013

Why I Opt Out

To fly nowadays you need to go through some pretty serious security. You've probably noticed. 

But did you know that you can opt out? Well, you can opt out of the full body scanner, anyway, if you are selected to go through it. All you do is say "I would like to opt out" to the Transportation Security Administration (TSA) agent who directs you to go through the full body scanner. Then instead of going through the scanner you get an enhanced pat down.

I opt out every time I fly. It's a protest.

While 9/11 prompted an understandable increase in airline security, additional security comes at a cost. If you want more security, you are going to have to give up some freedom and pay for that security, too, in money and time. After 9/11 there was (and continues to be) a big increase in security measures after every (perceived) threat, at the expense of some freedom.

I believe we have gone too far, giving up too much freedom for too little security at too high a cost.

Bureaucratic Behavior

Benevolent public officials and bureaucrats should provide the optimal amount of security that we as a society demand, right? But if something goes wrong, the bureaucrat or public official in charge will be blamed for not doing enough. He would probably lose his job. To protect his job, he increases security. He can do this without personally bearing practically any of the cost; taxpayers pay for it (as long as Congress authorizes his funding).

Because of misaligned incentives, unchecked public officials will always ask for too much funding, implement regulatory policies at too high of a cost, and restrict freedoms too much.

I don't think the TSA is checked enough.

What About the NSA?

The bigger invader of privacy is the TSA, not the NSA. It's no secret the NSA has been in the news lately because of the data it collects. There has been a lot of talk about invasion of privacy, search and seizure, and what kind of information government computers, agents, or analysts should have access to. Well, the TSA is worse than the NSA on almost every dimension. The NSA needs warrants issued and renewed regularly by FISA courts just to get metadata, because it might collect information on Americans; Congress also checks it. The TSA does not need a warrant to get actual data on actual Americans standing right in front of its agents. NSA computers look at US metadata (except when procedure is violated); actual TSA agents look at everyone's data every time you fly. The NSA targets foreign traffic, collecting most of its US data as a byproduct; the TSA targets American traffic intentionally. The NSA does not noticeably interfere with your use of the internet; the TSA does noticeably interfere with your use of transportation networks.

Is it Worth It to Protest?

I think it is worth it to protest.

The TSA does note the protest numbers. When I flew back to Chicago on Monday, I noticed the TSA agents were marking the number of opt outs; I was the 11th that day at that security station. Also, other people see you, which may encourage them to do the same, or at least reassure others who are concerned about privacy that they are not alone. There has already been some success: opt outs and privacy concerns were cited in the decision to move to displaying cartoon representations of people that highlight problem areas rather than the actual x-ray photos.

Yes, protesting is costly (the pat down can be very uncomfortable for both you and the agent and it may delay you), but it wouldn't be a credible protest if there wasn't some personal cost involved. It shows you care about the issue.

If you feel opting out isn't enough, you can also file formal complaints (I've done that before, too), mention security procedure concerns on airlines' post-flight surveys (I've done that, as well), or write to your Congressman or the ACLU (I haven't done that yet. Well, about this issue, anyway).

But at the very least, if you feel the government has traded off too much freedom for too little security, I hope that the next time you fly you will opt out, too.

Friday, August 23, 2013

I Scream for (Quality) Ice Cream!

I stopped by the spot where Istria Cafe was on 51st in Hyde Park after work today hoping to get some gelato. Istria closed down earlier this year and was recently replaced by Bridgeport Coffee. Istria used to have good gelato -- one of the very few places around Hyde Park that serves reasonably priced, reasonably good frozen desserts. Unfortunately, the new owners are not continuing the tradition. Instead they are sticking to traditional coffee house drinks, some pastries, and limited sandwiches.

The owners of Istria had been threatening closure for a while. They closed their 57th Street location in 2010 and gave this interview in 2012 about the strangling effect that Chicago's over-regulation and bureaucratic discretion were having on their business:

So now I have one fewer frozen dessert place to choose from. In fact, there is not one decent ice cream place in Hyde Park! There are a couple of gelato places left, a frozen yogurt place, and a new ice cream shop, but the quality is only okay, and the prices are through the roof. I might even make the stronger claim that there is not a good ice cream place south of the north side of Chicago! And I love good ice cream, so this makes me sad. I would love to be proved wrong, so if you know of a good place, please let me know in the comments.

Maybe I'm just spoiled. Really spoiled. Really, really spoiled. I grew up down the street from a Graeter's and a Jeni's. That's not even mentioning Knights or the other quality homemade ice cream shops around central Ohio. After growing up with that, Cold Stone and Baskin-Robbins just don't cut it for me.

Monday, August 12, 2013

Much To-Do About "Nothing" Done


The downside of a to-do list, for me, is that I am always depressed by the number of things that don't get done. To-do lists don't really fit how I work. Often I start on a small task (planning to do several small tasks on my list) but end up working on something else entirely that is more productive or more important and more time-consuming, but not necessarily on my to-do list for that day, or at all. So even though I do more by deviating from my list, because the list exists I feel like I haven't done much when I look at it and see all those other things I have yet to do, was going to do, but didn't (if you get my drift).

The solution: a Done list. Yes, I still have a to-do list (some things need to get done eventually, and writing them down rather than trying to remember them is a good idea), but for the last few months I have been using Notepad to log what I have actually done as well.

Yes, Notepad. The most basic text editor that ships with Windows. Almost all you can do is type. Almost. Let me explain:

I create a text file on the Desktop called Done.txt (right click on the Desktop -> New -> Text Document). Then I add ".LOG" (without quotation marks) as the first line of the file, save, and close. Now every time the file is opened, Notepad adds a date and time stamp on the next line! I then briefly type what I did, save, and close the file. If you want to quickly save and exit, you can hit Ctrl+S, Enter, then Alt, F, x. *

Every couple of weeks I review what I have done (that makes me feel good!), rename the file Done_yyyymmdd.txt with the current date, move it to Documents/Archive/Logs, and create a fresh Done.txt file on the Desktop.
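If you'd rather script that routine than do it by hand, here's a rough Python stand-in; the file name and archive path mirror the ones above, so adjust them to your own setup:

```python
# A scriptable stand-in for the Notepad .LOG trick described above.
# Appends a timestamped entry to Done.txt, or archives the file on demand.
import sys
from datetime import date, datetime
from pathlib import Path

DONE = Path.home() / "Desktop" / "Done.txt"
ARCHIVE = Path.home() / "Documents" / "Archive" / "Logs"

def log(entry: str) -> None:
    """Append a timestamp and the entry, like opening a .LOG file in Notepad."""
    with DONE.open("a", encoding="utf-8") as f:
        f.write(f"{datetime.now():%I:%M %p %m/%d/%Y}\n{entry}\n")

def archive() -> None:
    """Rename to Done_yyyymmdd.txt, move it to the archive, and start fresh."""
    ARCHIVE.mkdir(parents=True, exist_ok=True)
    DONE.rename(ARCHIVE / f"Done_{date.today():%Y%m%d}.txt")
    DONE.touch()

if __name__ == "__main__":
    if sys.argv[1:] == ["archive"]:
        archive()
    else:
        log(" ".join(sys.argv[1:]))
```

Saved as, say, done.py (a hypothetical name), you'd run "python done.py finished the report" to log an entry, or "python done.py archive" to rotate the file during the review.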

A Done.txt file fits in nicely with the Getting Things Done methodology as part of the weekly review step, if you are into that sort of thing. It also serves as a simple personal log or diary by cataloging what you did that day. No Dear Diary required.

If "Where, how, or with whom my time is spent" was good enough for George Washington, it's good enough for me.