
Television commercials for prescription drugs have become more ubiquitous than ever in recent years. Spending on these ads hit $6 billion in 2016, up 62 percent from 2012. As public health scholar Julie Donohue explains, direct-to-consumer drug advertising is a longstanding issue in the U.S., tangled up with patients’ rights to make their own decisions, doctors’ professional status, and the ethics of profiting from powerful drugs.


When federal regulation of drugs began in the early twentieth century, Donohue writes, people could buy almost any drug without a doctor’s permission. Prescriptions existed, but mostly as a convenience. Drug advertising was common—mainly for patent medicines, which made all sorts of unproven claims in direct-to-consumer advertisements. At the time, these advertisements accounted for about half of newspapers’ advertising income.

In 1905, the American Medical Association began establishing standards for drugs, creating the category of “ethical” medicines—as distinct from patent medicines—including morphine, aspirin, and ether. In an effort to reduce self-treatment and boost the status of physicians, the AMA urged doctors not to prescribe, and medical journals not to accept advertisements for, any drug that was advertised directly to the public.

The first federal drug regulation was the 1906 Pure Food and Drugs Act, which banned misleading claims on drug labels and required drug makers to list certain dangerous ingredients. The act was grounded in the faith that informed consumers would use good judgment.

By the 1930s, however, New Deal-era consumer activists were demanding stronger regulation. The 1938 Food, Drug, and Cosmetic Act required for the first time that drugs be proven safe before being marketed. (Not everyone approved; opponents saw the move as an attack on the right to self-treatment.)

In the decade that followed, the government made many drugs available by prescription only, shifting power from consumers to doctors and pharmacists. Prescription drugs weren’t subject to the same labeling requirements as over-the-counter ones, so people who wanted those medications had to rely on professionals to explain them. The problem was that doctors routinely withheld information about diagnoses and treatment from their patients. Meanwhile, drug advertisements directed at doctors suffered from some of the same problems as consumer-targeted versions, including misinformation about drugs’ efficacy and side effects.

After the disaster of birth defects caused by thalidomide in the 1960s, the FDA stepped up regulation of prescription drugs. Then, in the 1970s, new consumer rights groups like Ralph Nader’s Public Citizen began agitating for more patient-directed information, resulting in the requirement of patient package inserts.

Drug makers responded to the pendulum swing toward patient empowerment with a new wave of direct-to-consumer advertising starting in the 1980s. Doctors and consumer groups opposed the rise of these ads. But public trust in professionals—including doctors—was on the wane, and, with the rising power of insurance companies and large health care providers, physicians were also losing influence over the health care industry. Before long, direct-to-consumer drug advertising had climbed to today’s levels, a prominence not seen since the days of patent medicines.

Resources

JSTOR is a digital library for scholars, researchers, and students. JSTOR Daily readers can access the original research behind our articles for free on JSTOR.

Julie Donohue, The Milbank Quarterly, Vol. 84, No. 4 (2006), pp. 659–699. Wiley on behalf of the Milbank Memorial Fund.