Ant-Man Cosplay by William Tung (CC BY-SA 2.0)

If you have not yet seen Marvel’s latest production, Ant-Man, take this as the obligatory spoiler alert. Those who have seen this perplexing film about an ant-sized superhero who saves the world probably have several questions running through their minds: How can such a small superhero be so powerful? Will Ant-Man join other Marvel heroes in future films? But the most important question, one that has yet to be asked by the masses, is what the very idea of Ant-Man and the plot of Marvel’s film say about our morals, and whether the ideas in this film allude to a bigger problem in terms of warfare.

Assault on Fort Sanders by Kurz & Allison, restored by Adam Cuerden (Public Domain)

Political pressure is mounting for the removal of the Confederate battle flag that flies on the state grounds of South Carolina in the wake of the Charleston church shooting. In recent years, public debate over the flag, which symbolizes racism to some and heritage to others, has intensified. These divergent views have inflamed intense public dialogue about racism, culture, and history since the mid-1950s. Against this backdrop, in 1993, United States Senator Carol Moseley-Braun of Illinois claimed that “it is a fundamental mistake to believe that one’s own perception of a flag’s meaning is the only legitimate meaning.”

As Moseley-Braun suggested, people should not impose their own interpretation of the flag on others; seeking to understand both why some people are offended by it and why others preserve it is what educated citizens must do. To fully comprehend the implications the flag carries for a diverse public, it is necessary to understand the entire history of this symbol, since people from different backgrounds and a wide range of generations view the flag from a variety of perspectives.

Contrary to common belief, during the Civil War the Confederate battle flag did not explicitly symbolize racism or slavery. Less than 5% of the southern white population owned slaves, and thus the majority of Confederate soldiers held no slave property. Racism prevailed in both the Union and the Confederacy, as both parties denied African Americans the right to vote and fundamental human rights. In fact, historian James McPherson argues that the main Confederate “cause” of the Civil War was to preserve their country and the legacy of the Founding Fathers, a cause derived from southern nationalism. Although Lincoln recalled in his Second Inaugural Address in 1865 that slavery was “somehow the cause of the war,” the modern argument that the Civil War was about slavery, and that the battle flag therefore represents slavery, is a mere simplification of history. The war centered on the issue of slavery, but one cannot naively generalize that all Confederate soldiers were committed to slavery and supported going to war for that cause.

The history of the flag does not end with the Civil War, but extends past World War II with the rise of the Civil Rights Movement. To many people’s surprise, the proliferation of the Confederate battle flag came after 1954, when the U.S. Supreme Court declared in Brown v. Board of Education that racial segregation in public schools was unconstitutional. Although segregation was now illegal, many southern states were reluctant to integrate their schools, and as a show of resistance to integration, the Confederate battle flag returned to the public sphere.

In the years following the Brown decision, George Wallace, governor of Alabama, raised the flag as part of his “Segregation Forever” campaign and endorsed it as a symbol of resistance. Not only political figures but also ordinary segregationists brandished the flag to contest integration. It was at this time that the flag entered American popular culture as a symbol of opposition to integration, as Jonathan Daniels, editor of the Raleigh News & Observer, lamented in 1965 that the flag had become “just confetti in careless hands.”

So, how should governments, corporations, and individual citizens cope with a cultural icon that ignites such intense debate? One must acknowledge the difference between public and private display of this flag. Public display of the flag (e.g., on the South Carolina state grounds) should be prohibited with the understanding that this flag symbolizes racism for a wide majority of the public. A governmental institution should not naively display a symbol of racism and feign innocence before the people offended by it. One must ask: what would it be like for a Black citizen living in a state where a symbol of racism waves on the state grounds? Confederate flag images can harass or intimidate citizens, and the government must not endorse such a symbol on its public grounds. Moreover, the presence of the flag on the state grounds excludes and ignores the population who do not honor it.

On the other hand, it is essential to distinguish between the flag as a memorial and the flag as a symbol of exclusion. The Civil War is undeniably a fundamental part of American history and culture, and its events still fascinate many Americans today. The history of the Confederacy is as valuable as the history of the Union, and no one can erase the four years the two sides fought. It is necessary to acknowledge that for many southerners, the Confederacy is part of their family history and the flag is a means of honoring their ancestors. Confederate heritage organizations have the right to use this symbol privately, with the understanding that explicit use of the Confederate battle flag may offend others who are uncomfortable with it.

We live in a country where people come from different cultures, nationalities, and family backgrounds, and it is therefore natural that there is a variety of perspectives on the Confederate battle flag. Many Confederate descendants look at the flag as a symbol of heritage, but this does not make the flag an honorable icon for everyone. A large number of people view the flag as a symbol of racism, but no one can assume that everyone who raises the flag is a racist. A governmental institution, however, must consider the negative implications this icon carries for the public. Even in private settings, no one can naively use this powerful symbol without considering the message it might send to a wide public. In sum, seeking to understand the diverse meanings of the flag and engaging in honest dialogue would lead us to a better understanding of the proper place of the Confederate battle flag in modern-day society.



Edward Pessen, “How Different from Each Other Were the Antebellum North and South?” American Historical Review 85 (1980): 1119-49.

James M. McPherson, For Cause and Comrades: Why Men Fought in the Civil War (New York: Oxford University Press, 1997).

John M. Coski, The Confederate Battle Flag: America’s Most Embattled Emblem (Cambridge, MA: Belknap Press of Harvard University Press, 2005).


Prindle Institute for Ethics Staff Image

This post by Dr. Jeff McCall was originally published in The Indy Star on April 3, 2015.

The good news is that nearly 90 percent of recently surveyed millennials say they get news off Facebook. The bad news is that most of those social media users stumble into the “news” only when they go to the site for other purposes. Worse yet, what these millennials are getting as news from social media sites wouldn’t constitute news in the traditional sense.

A new report by the Media Insight Project indicates that while less than half of millennials go to social media specifically to find news, most say they do absorb a bit of news along the way while browsing party photos. Facebook is the primary site for such accidental news absorption, with Instagram, Twitter and YouTube also figuring in. The most likely “news” topics these people find on social media are pop culture, music/movies, social issues, fashion and sports. Only 40 percent of millennials report paying for any news site or app, but most are happy to pay for access to movies, video games and music.

To many of these young people, news is not a commodity worth paying for. As one young survey respondent said, “I really wouldn’t pay for any type of news because as a citizen it’s my right to know the news.”

It’s a disturbing and ongoing trend that young adults aren’t interested in real news and don’t engage with it. Millennials perform poorly on surveys of current events and public affairs. With little insight into or awareness of important events, these young people are bystanders as the public policies that will affect their lives for decades get made. In the 1970s, half of college-age people read a daily newspaper. Now it is less than one in five, online or otherwise.

It figures then that voter turnout among young people is consistently lower than for other age groups. Young voter turnout has declined steadily during the last half century, except for slight upticks in 1992 to elect Bill Clinton and 2008 to elect Barack Obama. President Obama recently mentioned the prospect of mandatory voting. More voter participation is generally a good thing, but too many young voters would only be prepared to vote on their favorite new movie or what to put on the pizza.

Finding causes for the massive millennial news tune-out is a complex task. The news industry itself, particularly television, must shoulder some blame. Broadcast news agendas have softened over the years, with weather, pop news and cute animals in every newscast. No wonder millennials look for this sort of “news” online and identify it as such.

Media literacy education is deficient at all levels of the education system, leaving young people with little insight about how to stay informed and why. A national survey by the First Amendment Center shows that only 14 percent of Americans can name freedom of the press as a freedom articulated in the First Amendment.

Then there is the near fixation of young adults on their digital devices. Instagram photos of pets, tweets about the lunch menu and posts about tonight’s bash just have to be shared. Lives are not so much lived as they are recorded and processed through the digital universe. Such digital compulsion leaves too little time or brain space to become civically aware.

More evidence of the younger generation’s lack of interest in news can be found in enrollments at college journalism programs. Studies show journalism enrollments are on the decline. Of those students enrolled in journalism programs, 70 percent are studying public relations or advertising, not traditional journalism. It’s hard to blame these young people for studying public relations. They will earn 50 percent more than their counterparts who prepare for careers in journalism. Such is the state of the news industry, both economically and in terms of prestige.

None of this discussion is to suggest that all millennials are as uninformed as the people interviewed during Fox News’ “Watters’ World” segment. Of course, certain young adults take seriously their duty to be informed and civically engaged. The question is whether their numbers are sufficient to someday lead a democracy on the important issues of the day. The answer is less likely as long as millennials think “news” is best found while stumbling through social media sites.

sötétben by Zoltán Horlik (CC BY-NC-SA 2.0)

A while ago I was about to embark on a long flight and wanted something to read. I was in a town with one of those increasingly rare, old-fashioned independent bookstores, so I felt a certain thrill when I decided to go in and buy a physical book, like in the good old days. I saw Kazuo Ishiguro’s Never Let Me Go, briefly skimmed the plot summary, and bought it on an impulse.

I felt a little nervous, because it seemed to be a melancholic book, possibly an uneasy read. For the past decade, I have been avoiding watching, reading, or hearing about unsettling stories. I used to love books and movies that made me burst into tears, but as I have grown older I have become incapable of enduring those emotional storms. I feel that life is hard enough, and the real world is horrific enough, that I don’t need extra doses of suffering in my spare time.

But recently I have come to reconsider the wisdom of this self-protection policy. First, because there is only so much I can do to protect myself from pain of various kinds: as philosopher Martha Nussbaum has argued, human goods are fragile and human happiness is inherently delicate. Developing a thick skin seems a wiser long-term strategy than the one I have been adopting. Second, because there might actually be value in suffering. Emotions such as grief and jealousy, and sensations like pain, are instrumentally valuable, since they play essential roles in our physical and psychological well-being: preventing injury, making us averse to losing, and protective of, those with whom we share genes, and so forth.

Psychologist Randolph Nesse has coined the term “diagonal psychology” to designate the field that studies the benefits of negative emotions and the downsides of positive ones. An example of such an approach can be found in the work of June Gruber, a psychology professor at the University of Colorado Boulder. In her TEDx talk The Dark Side of Happiness, she argues that too much positive affect can lead to decreased creativity and to risky, harmful behaviors; that those who do not feel emotions such as grief and anger when it is appropriate are less emotionally adjusted; and that the pursuit of happiness itself can become a self-defeating objective. In the words of J. S. Mill: “Those only are happy… who have their minds fixed on some object other than their own happiness; on the happiness of others, on the improvement of mankind, even on some art or pursuit, followed not as a means, but as itself an ideal end. Aiming thus at something else, they find happiness by the way.”

In philosophy, a lot of attention has been traditionally and historically paid to the importance and nature of happiness. Recently, however, some philosophers have started thinking more about the value and role of pain. Some of these philosophers can be found in the interdisciplinary team that is behind the Value of Suffering Project. The aim of the project is to investigate the nature and role of suffering and affective experience in general. They even have a blog you might want to check out!

Earlier attempts at re-evaluating the importance of suffering can be found in the work some philosophers (such as Martha Nussbaum) have done on the ethical importance of imaginatively engaging with fiction: when we see the world through the lens of a member of an oppressed minority, for instance, and empathize with their pain, we may be able to see moral truths that were unavailable to us before.

Thinking about the experience of suffering in fictional engagement also suggests that suffering might have not only an instrumental value (as it plays an adaptive function at both the physical and psychological level), but also a non-instrumental one, in particular an epistemic one. Suffering is an unavoidable component of the human experience. If there is intrinsic value in knowing reality, independently of the use we can make of that knowledge, then eschewing knowledge of a central aspect of reality is not a habit that a person should cultivate.

I might be ready for The Kite Runner.

Social Media Cloud by Techndu by Mark Kens (CC BY 2.0)

The social media world is a crazy world, indeed, sparking firestorms over petty things, such as the color of some dress in Scotland. Most social media postings connect people to ideas, news, fun and each other. There is, however, a dark and demented corner of social media where posters threaten and scare individuals. This leaves law enforcement with the challenge of sorting out which online threats to take seriously and which goofy rants to ignore. In a society in which free speech is valued under the First Amendment, this is quite a quandary.

The Supreme Court is deliberating a case that could help law enforcement assess threats delivered through social media. Oral arguments were heard this winter in a case in which a Pennsylvania man, Anthony Elonis, was convicted for posting threatening messages on Facebook. His posts appeared to target his estranged wife, his former place of employment and even elementary schools in the area. One such post, believed to be directed at his wife, read, “I’m not going to rest until your body is a mess, soaked in blood and dying from all the little cuts.” Other posts were equally frightening.

For his conviction to stand, the Supreme Court must find these social media rants were “true threats” in the eyes of a “reasonable person.” Elonis’ attorney tried to convince the court that his client had no intent to cause fear, and that Elonis’ wild-eyed, online outbursts were somehow the stuff of art and self-therapy.

Justice Ruth Bader Ginsburg seemed to sympathize with that argument, asking “How does one prove what’s in somebody else’s mind?” Well, for one thing, you look at the words that somebody uses. Words have meaning. A person’s words give a clear window into what that individual is thinking. A reasonable person could easily see a real threat in online posts that talk about the ex-wife’s head on a stick, her shallow grave and “a thousand ways to kill you.”

Justice Samuel Alito dismissed the artistic/therapeutic rationale, saying, “This sounds like a road map for threatening a spouse and getting away with it.” Justice Antonin Scalia incredulously asked Elonis’ attorney, “This is valuable First Amendment language that you think has to be protected?” That is the key question for the Court to answer. There are very few categories of speech not allowed under the First Amendment. The Court must decide if threats delivered through social media should be included as protected.

The Supreme Court’s guidance is much needed to determine how threatening a digital message must be to actually break the law. Online threats pop up routinely these days. A 17-year-old in Brooklyn was arrested recently for a Facebook post that included an expletive-filled threat against police and a cartoon image of a policeman with three guns pointed at him. The charges were later dropped, with the teen’s attorney saying his client never intended to act on the threatening post. The decision didn’t sit well with Patrolmen’s Benevolent Association President Pat Lynch, who said in a published report that the message was “easily interpreted by any reasonable person as a call for violence against police officers.” The NYPD has a recent, sad history of violence against its officers.

Bomb threats posted on Twitter in recent weeks have been taken quite seriously by authorities, leading to the grounding or emergency landings of several airplanes. Those threats might not have been serious either, but the FAA and the airlines could hardly shrug them off.

In the Elonis arguments, Justice Elena Kagan reminded fellow justices that “the First Amendment requires a buffer zone to ensure that even stuff that is wrongful maybe is permitted because we don’t want to chill innocent behavior.” True enough, but it is hard to see how stopping social media death threats chills worthwhile speech. Staunch First Amendment defender Justice William Brennan wrote in 1957, “The unconditional phrasing of the First Amendment was not intended to protect every utterance.”

The upcoming Supreme Court decision must provide guidance to social media users and law enforcement about the boundaries of online threats. The key should not be the poster’s intent, but rather the effect of threatening posts on the receivers. Online threats are still threats.

Urban Chaos Theory by Jose Maria Cuellar CC BY-NC 2.0

Then comes the motherfuckin’ Christopher Columbus Syndrome. You can’t discover this! We been here. You just can’t come and bogart. There were brothers playing motherfuckin’ African drums in Mount Morris Park for 40 years and now they can’t do it anymore because the new inhabitants said the drums are loud.  – Spike Lee, Rant on Gentrification

Spike Lee’s rant against gentrification in Brooklyn, delivered during a Q&A at the Pratt Institute, highlights the negative sentiment many communities feel amid the rapid change of the urban world. Gentrification can be understood as the process of renovating low-income communities through housing and business development, often driven by white, middle-income newcomers. However, within the good intentions of “developing” or “cleaning up” a neighborhood come many social issues that may have a lasting harmful impact.

To start, why is the millennial generation so interested in living in urban areas? First, there has been a major shift in values between millennials (those born after 1982) and their parents’ generation. Many gentrifiers grew up amid the privilege of gated communities and post-white-flight suburbia. With a 20-minute commute from home to school, school to the grocery store, and grocery store to restaurants, the mostly white millennial generation has largely lived life inside the radius of the soccer mom’s minivan. Many of us are starting to reject that narrative: we want to live, work, eat, and socialize all within immediate proximity. As gentrification has accelerated over the last ten years and millennials have shown their ability to start businesses, the cheap rent found in low-income communities has become highly sought after for low-cost social entrepreneurship.

But good intentions are not always good for everyone living in the community. In fact, communities are not really being developed alongside and with the emerging populations; instead, residents often face forced evictions, rising property taxes, and drastic rent increases, and now find themselves relocating to the former home of these gentrifiers: the suburbs. With gentrification also comes the rebranding (code for whitewashing) of communities, as neighborhoods lose their cultural history to market-friendly names coined by real estate developers.

Gentrification is a complex issue. On one hand, local business development and the renovation of buildings are positive assets for a city. Fishtown, Philadelphia was once an area where one could easily score heroin, but now it is covered with art, bars, and local restaurants. However, Spike Lee rightfully directs a criticism at city politicians and developers: “So, why did it take this great influx of white people to get the schools better? Why’s there more police protection in Bed Stuy and Harlem now? Why’s the garbage getting picked up more regularly? We been here!”

As the world grows more urban and millennials continue to migrate to cities, the question of where one should live is loaded with consequences that may do more harm than good in the long run of city life. While access to gluten-free markets, dog parks, locally sourced sriracha-flavored ice cream, and organic kale juice bars might be the positive community one is looking for, chances are there were evictions, relocations, and pain as some members lost the community they once held dear.

 This Guest Author post was written by Matt Cummings, Coordinator of Community Service at DePauw University.

Remote Control, Television - TV Controller by espensorvik (CC BY 2.0)

This post by Dr. Jeff McCall was originally published in The Indy Star.

Media executives nationwide are watching to see how the Indianapolis television market will respond to the CBS affiliate change from WISH-8 to WTTV-4. These execs want to know if a highly established television station can survive when its longstanding network affiliation ends. That’s because the challenges facing WISH-8 right now could be the challenges facing all local broadcasters in the future.

CBS unceremoniously dumped WISH-8 when contract talks stalled. Clearly, CBS puts no value in loyalty. WISH-8 had been with CBS for 58 years. CBS’ hardball strategy was designed to frighten affiliates nationwide, and the approach has worked. Other CBS affiliates, including 10 CBS affiliates owned by WISH-8’s corporate parent company, got in line, caved to the network and signed the checks.

CBS, like all the television networks, is demanding that affiliates pay higher rates for the opportunity to carry network programming. This practice is known as “reverse compensation” in the media industry, because until about 20 years ago, networks paid affiliates. Networks have the upper hand in negotiations with affiliates, particularly when other stations exist in the market that might want the programming. In Indianapolis, that willing new partner is WTTV-4.

Here’s hoping the new landscape can work for the stations and for the Indianapolis viewing audience. WTTV-4 has quickly built a fully operating news department, so the Indianapolis market now has five TV news operations. That’s a lot for a market of Indianapolis’ size. Maybe too many for the nation’s 26th largest market. But, more news and more journalists should be good for the region, as long as the news agenda expands and there is more original reporting. If the stations just duplicate the same content, the potential benefit is lost.

To fill some of the lost CBS hours, WISH-TV has expanded its news programming. Again, that’s a good thing, unless they simply rehash the same old stories over more air time. WISH-TV has picked up CW network programming for evenings, but that network’s ratings are dismal and declining, with some of its shows drawing fewer than a million viewers nationwide. Further, CW’s viewers are, putting it mildly, not news hounds. Thus, WISH-TV will get no audience flow from its CW shows into its late night news.

WISH-TV is facing today what many local stations could face down the road. Major networks now have the capacity to reach most of their viewers nationally without relying on local affiliates, or even cable and satellite distributors. New distribution models allow networks to go with over-the-top (OTT) content to reach viewers through streaming. The model works for Netflix, Hulu Plus and other programmers. HBO is getting into the OTT world, too.

CBS has already launched its stand-alone streaming service. CBS Chairman Les Moonves made a comment at the recent Consumer Electronics Show that should put a chill in the spine of every CBS affiliate. “We don’t care how or when you watch our content; we just want you to watch it,” Moonves said. Once that CBS streaming service gets traction, the network will have little need for local affiliates, and CBS can keep all of the commercial time for itself. Thus, the moment networks think affiliates can’t deliver enough cash or that the affiliate relationship is just not worth the hassle, the nets can go OTT, leaving the local broadcasters to fend for themselves. And fending for themselves will have to mean more than stale sitcom reruns, bizarre talk shows and goofy game shows.

The video-viewing world has changed, and traditional, local broadcasters must fight to maintain their relevance. Only 55 percent of millennials now consider television their primary viewing platform, with the others watching streamed content on laptops and smartphones. Digital viewing grew 62 percent last year among the 25-54 demo that advertisers love. More than seven in 10 households with broadband now stream full-length programs from the Internet. None of this is good news for traditional, local broadcasters.

Netflix CEO Reed Hastings recently threatened that broadcast television will be dead by 2030. That might seem far-fetched at the moment, but no doubt local television will look much different by then.

New Year Resolutions List via Wikimedia Commons

We’re now over a month into the new year – how are those resolutions coming? Even if you didn’t happen to make any for yourself, the cultural phenomenon of the new year’s resolution sheds interesting light on the persistent gap between what kinds of lives individuals think would be best for themselves and what kind of lives they’re currently leading. A survey of top resolutions, conducted by psychologists at the University of Scranton, reveals that they’re pretty much exactly what you’d predict, including losing weight, spending less, staying fit, and quitting smoking.

That study also revealed that, in general, people’s new year’s resolutions fail. Attempts to change oneself at other times of the year are probably likely to fail as well. While the psychology of change is really a topic unto itself, today the question at hand is whether the government ever has a proper role to play in promoting the kinds of better behavior people obviously hope for. To some extent, this is already happening – think government-funded smoking cessation programs, tax breaks for certain kinds of savings, and soda taxes.

“Libertarian paternalism,” a term coined by academics Cass Sunstein and Richard Thaler, is a research program founded on the idea that government can and should help to align individuals’ behavior with those individuals’ own self-interest by encouraging – not forcing – those choices at a policy level. “Soft paternalism” involves choice-manipulating components, but not the difficult-to-bypass tools of “hard paternalism,” like steep fines or criminal punishments. (For the record, despite its name, I actually don’t know many other libertarians who endorse libertarian paternalism, for reasons discussed towards the end of this piece).

Now-quintessential examples of libertarian paternalist policies include requiring store owners to keep the junk food away from the register where people are prone to impulse-purchase it, or having employers set up benefits enrollment forms so that workers must opt out of, rather than opt into, a hefty recurring retirement contribution.

The philosophical component of the libertarian paternalists’ claim is that a government legislating in this way is just, because helping someone to do what he actually wants to do does not thereby violate his rights. The empirical component of the claim is that “libertarian paternalist”-type policies, policies that “nudge” behavior, actually work.

Naturally, libertarian paternalism faces any number of philosophical and practical challenges. There are those who consider any government interference, however minimal, into people’s lives a miscarriage of justice – but hey, we were promised libertarian, not anarchic, paternalism, so that’s a debate for another level of abstraction.

Libertarian paternalist policies may crowd out some opportunities for mistake, self-reflection, and character development, but apparently not enough for this to be a clear reason to reject them. People are still free to consume the wrong foods and face the health consequences, albeit at slightly higher prices. And would it really be regrettable if “nudge”-type opt-out savings policies kept some individuals from suffering a lack of sufficient retirement funds? The “character development” opportunity provided by eating cat food in an unheated apartment in one’s old age is of questionable importance, considered in the totality of the circumstances.

The most serious philosophical challenge to libertarian paternalism asks us to notice that it’s prohibitively difficult (if not literally impossible) to understand anyone’s motives for action other than our own. Because legislators cannot account for this in their development of libertarian paternalist policies, they will inadvertently harm people for whom the nudge makes little sense – a person with low blood sugar who needs a very large soda to boost her blood sugar immediately, or a person who opts out of a savings plan and incurs additional taxes at the margin because he plans to expatriate to a lower-cost country in a few decades.

We must admit that a libertarian paternalist policy would in some sense make these particular individuals worse off than they might otherwise have been – but the harms of obesity and poverty flow from its absence. Still, we flatter ourselves as ultra-special snowflakes when we argue that no one, least of all a legislator, could ever begin to grasp our best interests, even in principle. The tenets of positive psychology suggest otherwise: people are much more alike in their desires and conditions of well-being than they are different. If a particular proposed soft paternalist policy fails, then, it fails because of the specifics of how it would help some people at the expense of others, not because there is any unique epistemological burden here as compared to ordinary policy considerations.

To be fair, even if libertarian paternalism is justified in theory, its policies might also work less well in the field than we’d hoped, for instance based on empirical results following compulsory menu board nutrition facts labeling in some places. And certainly the pop science media has done its typically poor job in explaining the nuances of unconscious behavior, making it sound like the automatic choice is always the right one. So we should temper our expectations for the power of the nudge – which actually is cause for philosophical critics to rejoice!

It will not be possible for the blunt tools of government to turn every citizen into the best possible version of herself, but it’s well within the scope of a moderately liberal (or moderately conservative) government to take steps in this direction, especially when setting some default option is unavoidable anyway. Libertarian paternalism does not provide a complete vision of the relationship between the citizen and the state – it doesn’t say much about the times when private and public interests genuinely conflict. And libertarian paternalism falls victim to all the difficulties of implementation that any government program does (for instance, the pitfalls explained by public choice economics).

But libertarian paternalism, as such, raises no new issues of legislation – as always, we face hard questions about how we may or may not trade off the welfare of one citizen against another. The assumption that the state exists to promote individual welfare is a double-edged sword, providing on the one hand a strong presumption of welfare-conducive individual liberty and, on the other, a reason to nudge people towards better lives without violating their rights, when possible. To assume that these rights preclude nudging just begs the question.

Rose Bowl Field 008 by Penn State (CC BY-NC 2.0)

This post by Dr. Jeff McCall was originally published by The Indy Star.

Let’s remember a key fact as we prepare snacks and grab the remote to watch the Super Bowl. The people playing in the game are important only because they happen to be good at football. They aren’t superheroes. They aren’t necessarily good at anything beyond football. Some are philanthropists who give back to their communities, but even those efforts are the result of being good at football.

The media world has concocted a culture in which athletes are idolized and their importance exaggerated. Rhetorically, television constantly tells us that a guy who can tackle or throw a ball contributes more to society than an elementary teacher or car mechanic. Americans need to carefully consider whether their favorite NFL quarterback is more important in their lives than the workman who hauls away the trash each week.

Television and the sports leagues have snookered society into an oversized obsession with athletes. And it is all about dollars. The media have to boost sports personalities because huge money is invested in contracts to broadcast sports. The NBA recently signed a $24 billion contract with ABC/ESPN and TNT to carry pro basketball for the next nine years. The NFL collects more than $5 billion a year for TV rights. It is no wonder the broadcasters and leagues supercharge the players to make sure people watch their favorite athletes.

Note how the networks promote upcoming telecasts. An Indianapolis Colts-Denver Broncos football game is billed as Andrew Luck vs. Peyton Manning. An NBA game is promoted as Kobe vs. LeBron, as if their teammates are unimportant.

The leagues allow, and television captures, every player’s display of grandstanding. A player who makes a routine play and then dances is sure to get a television replay and approving comment from the play-by-play announcer. Interestingly, a high school player who made such a display would likely be penalized by the officials and sanctioned by the coach.

The television spectacle includes sideline reporters who act star struck as they adoringly ask players how they feel and then nod approvingly as the athletes utter meaningless phrases about intensity or how much they wanted the win.

Sports sensationalism has even seeped into network news. The major networks all led their evening newscasts one day last week with Deflategate, the story about underinflated footballs at the AFC title game. The morning network newscast hosts have all squeezed underinflated balls on set, telling us more about football inflation in the last week than about President Obama’s tax proposals. And all Deflategate coverage includes mandatory video of Patriots star Tom Brady’s supermodel girlfriend, Gisele Bundchen.

Most professional athletes are decent, sensible people. But the marketing geniuses at the networks and sports leagues don’t help sort the deserving from the undeserving. Former Ravens linebacker Ray Lewis’ checkered past was washed away by CBS and the NFL with a gaudy celebration of Lewis’ retirement two years ago. Lewis is now an analyst on ESPN. Steelers’ star Ben Roethlisberger remains an NFL poster boy, in spite of past sexual assault allegations. Former NBA bad boy Ron Artest parlayed his multiple altercations into a spot on ABC’s “Dancing with the Stars.” NASCAR’s Tony Stewart’s anger management issues haven’t diminished his profile on race telecasts.

Rasmussen Reports research shows 24 percent of Americans believe pro athletes are good role models for children. That’s up 9 percentage points from a year earlier, and comes after the Ray Rice domestic violence incident last summer.

Responsible adults now wear jerseys of prominent athletes. It hasn’t always been like that. Look at videos of crowds at sporting events in the 1940s and 1950s. Nobody wore player jerseys. Maybe that’s because the crowd included real heroes from the Greatest Generation that survived the Depression and won a world war.

Americans love sports, and for good reason. Americans enjoy competition and admire athleticism. Nothing wrong with that. But obsessing over and worshipping the people in the games distorts cultural values and leads to misguided priorities. It is no wonder athletes with inflated egos think they are larger than life and engage in ridiculous and destructive behaviors. And it is all because the media have created these big heads. Long gone are the days when a level-headed star athlete like Cardinals baseball legend Stan Musial offered to take a salary cut after a season in which he didn’t meet his own lofty standards.