Many studies have shown that cutting back on meat and dairy is the single most significant action individuals can take to combat climate change. Now think of it on a city scale: reducing the meat and dairy consumed by a major metropolitan area could drastically cut emissions, save energy and improve the health of its residents. So how do we encourage consumers to eat this way? At GOOD Ideas for Cities Portland, Ziba presented its idea for encouraging the citizens of Portland to buy and consume less meat. Ziba realized that the intervention needed to come at the decision-making phase, and created a series of apps that provide at-the-ready options for choosing foods and making grocery lists based on your economic, environmental and ethical values. The team also proposed a social club in which people try new veggie-focused meals and restaurants together, and a campaign that asks kids to choose and cook fruits and vegetables in exchange for incentives, putting pressure on their parents to eat better and instilling healthier habits for life.
Challenge: Reducing the amount of meat and dairy in local diets could have a profound impact on the city’s environmental footprint. How do we get the residents of Portland to consume less meat?
Sustainable Food Policy and Programs, City of Portland: Steve Cohen, Manager
Ideas for Cities from Ziba: Steve Lee, Molly Ackerman-Brimberg, Carl Alviani, Joo-Young Oh, Eric Park, In Baek, Maria Lalli, Chris Butler, Paul O’Connor
To learn more about this idea contact steve_lee[at]ziba[dot]com or carl_alviani[at]ziba[dot]com
More information on GOOD Ideas for Cities can be found at good.is/ideasforcities or on Twitter at @IdeasforCities
While these common gripes point to eccentric speech patterns, they don’t point to grammatical annihilation. English has weathered far worse.
Let’s start with something we can all agree on: Old English, spoken from approximately A.D. 450 to 1100, is pretty unintelligible to us today. Anyone who’s had the pleasure of reading “Beowulf” in high school knows how different English sounded back then. Word endings did a lot more grammatical work, and verbs followed more complicated patterns. Remnants of those rules fuel lingering debates today, such as when to use “whom” over “who,” and whether the past tense of “sneak” is “snuck” or “sneaked.”
The language went on to experience centuries of tumult: Viking invasions, which introduced Old Norse influence; Anglo-Norman French rule, which shifted the language of the elite to French; and 18th-century grammarians, who dictated norms with their elocution and grammar guides.
Over that time, English lost almost all of the more complex linguistic trappings it was born with to become the language we know and – at least, sometimes – love today. And as I explain in my new book, “Why We Talk Funny: The Real Story Behind Our Accents,” it was all thanks to the way that language naturally evolves to meet the social needs of its speakers.
From dropping the ‘l’ to dropping the ‘g’
The things we tend to label as “bad” or sloppy English – for instance, the “g” that gets lost from our -ing endings or the deletion of a “t” when we say a word like “innernet” – actually reflect speech habits that are centuries old.
Take, for example, “often.” Originally spoken with the “t,” that pronunciation gradually became less favored around the 15th century, alongside the “l” in “talk” and the “k” in “know.” Meanwhile, the “s” now stuck on the back of verbs like “does” and “makes” began as a dialectal variant that only became popular in 16th-century London. It gradually replaced “th” whenever third persons were involved, as in “The lady doth protest too much.”
While dropping the “l” in “talk” may have initially been frowned upon, today it would be strange if you pronounced the letter. And the shift makes sense: It smoothed out some linguistic awkwardness for the sake of efficiency.
If people learned to look at language more like linguists, they might come around to seeing that there is more than one perspective on what good speech consists of.
And yes, that absolutely is a sentence ending with a preposition – something many modern grammar guides discourage, even though the idea only took hold after 18th-century grammarian Robert Lowth intimated it was a less elegant choice based on the model of Latin.
Though Lowth voiced no hard and fast rule against it, many a grammar maven later misconstrued his advice as an admonition. Just like that, a mere suggestion became grammatical law.
The rise of the grammar sticklers
Many of today’s ideas about what constitutes correct English are based on a singular – often mistaken – 19th-century view of the forces that govern our language.
Emulation of upper-crust speech norms became popular among the nouveau riche. With literacy also on the rise, grammarians and elocutionists raced to dictate the terms of “proper” English on and off the page, which led to the rise of usage guides and dictionaries that were eager to sell a certain brand of speech.
Another example of grammarian angst reconfiguring the view of an otherwise perfectly fine form is the droppin’ of the “g.” It became so tied to slovenly speech that it was branded with an apostrophe in the 19th century to make sure no one missed its lackadaisical and nonstandard nature.
Up until the 19th century, however, no one seemed to care whether one pronounced it as “-in” or “-ing.”
Evidence suggests that “-ing” wasn’t even heard as the correct form. Many elocution guides from the 18th century provide rhyming word pairs like “herring/heron,” “coughing/coffin” and “jerking/jerkin,” which suggest that “-in” may have been the preferred pronunciation of words ending in “-ing.” Even writer and satirist Jonathan Swift – a frequent lobbyist for “proper” English – rhymes “brewing” with “ruin” in his 1731 poem “Verses on the Death of Dr. Swift, D.S.P.D.”
Embrace the change
Language has always shifted and evolved. People often bristle at changes from what they’ve known to what is new. And maybe that’s because this process often begins with speakers that society usually looks less favorably on: the young, the female, the poor, the nonwhite.
But it’s important to remember that being disliked and being bad are not the same thing – that today’s speech pariahs are driven by the same linguistic and social needs as the Londoners who started going with “does” instead of “doth” or dropped the “t” in “often.”
So if you think the speech that comes from your lips is the “correct” version, think again. Thou, like every other English speaker, art literally the product of centuries of linguistic reinvention.
The first time the placebo effect really got under my skin was when I read that roughly one-third of people with irritable bowel syndrome improve on placebo treatments alone. Usually this statistic is presented as a fascinating quirk of medicine. My reaction was anger.
Humanity possesses an extremely effective treatment, with essentially zero side effects – and patients need someone else’s permission to use it.
The placebo effect refers to the improvements in symptoms that patients experience after they’re given an inert treatment like a sugar pill. Driven by expectation, context and social cues rather than pharmacology, the placebo effect is often dismissed as all in the mind. But decades of research have shown it is anything but imaginary.
Placebo treatments can trigger measurable changes in the brain, immune system and hormone function. In studies on pain, placebos cause the brain to release endorphins, the body’s natural opioids. In Parkinson’s disease, placebo injections increase dopamine activity in the brain. The placebo effect isn’t magic. It’s biology.
Having spent nearly a quarter-century teaching evolutionary medicine, I’ve come to see placebos not as curiosities of clinical trials but as windows into how human biology responds to social signals. And that relationship is exactly what makes the placebo effect unsettling.
When testing a new drug, scientists compare its effects to what patients experience on a placebo treatment like sugar pills, saline injections or sham surgery. If the drug doesn’t outperform the placebo, it rarely reaches the public. Placebo responses are common and powerful enough to rival active treatments.
Even surgery isn’t immune to the placebo effect. In several well-documented studies of knee procedures, patients who received sham operations – incisions without the full surgical repair – improved almost as much as those who received the real procedure.
Clearly something real is happening inside the body. But the strangest part of the placebo effect is not that it works. It’s what makes it work.
The prescription of belief
Placebo treatments tend to be more effective when delivered by credible authorities. Pills work better when prescribed by doctors wearing white coats. Expensive pills outperform cheap ones. Injections produce stronger responses than tablets.
Some researchers have even removed the deception from placebo experiments entirely. In open-label placebo studies, patients are directly told they are receiving a placebo, and yet many still report significant improvement.
But look more closely at how these studies are run. Patients are not simply handed a sugar pill and sent home. They receive an explanation from a clinician, in a medical setting, within a structured ritual of care: a context that may be doing much of the biological work.
Even when the deception disappears, the social scaffolding remains. The permission to heal is still being granted by someone else.
The placebo effect extends beyond the patient
The placebo effect is often framed as something happening inside an individual. But it does not operate in isolation.
Consider what happens in veterinary medicine. Dogs and cats cannot believe a treatment they’re given will work; they have no concept of receiving medication. Yet when owners and vets believe an animal is being treated, they consistently report improvements in pain and mobility that medical tests do not confirm.
In one study of dogs with osteoarthritis, owners reported improvement roughly 57% of the time for animals receiving only a placebo.
The animals themselves may not have improved. But the humans caring for them perceived they had. The healing signal, it turns out, travels through the humans in the room.
When healing makes things worse
There have been times when going to the doctor made you less likely to survive. In the 19th century, mainstream medicine was built on bloodletting, purging and doses of mercury and arsenic – treatments that killed as often as they cured.
Homeopathy emerged in the late 18th century precisely in this context. Its founder, Samuel Hahnemann, was a physician horrified by the harm the conventional medicine of his time was causing. His highly diluted versions of contemporary remedies did nothing pharmacologically. But they also did not kill people, which put them decisively ahead of the competition.
Homeopathic patients not only survived but also reported dramatic recoveries from chronic ailments and acute infections alike. During the cholera epidemics of the mid-1800s, patients at homeopathic hospitals had lower death rates than those receiving standard care. Why was that?
The standard cholera treatment of the era was aggressive and exhausting; for a disease that already caused massive fluid loss, doctors often prescribed further bloodletting, along with toxic purgatives such as calomel – a form of mercury – to “flush” the system. In contrast, homeopathic care involved extreme dilutions of substances in water or alcohol, effectively providing hydration and a calm, structured environment without the physiological assault.
Death rates were lower not because homeopathy worked but because the placebo effect – combined with not poisoning patients – was more effective than the medicine of the day.
Healing is not free
The body needs resources to heal from injury and disease. Activating systems such as immune responses, tissue repair and inflammation at the wrong time can be dangerous.
Some researchers have proposed that placebo responses reflect a kind of biological health governor: a system that regulates when the body invests heavily in recovery. Cues from trusted individuals may be exactly the signal the body waits for before committing resources to recovery. A caregiver’s reassurance, a physician’s authority and the rituals of medicine may tell the body that conditions are finally stable enough to devote energy to healing.
If that interpretation is correct, the placebo effect is not a trick of the mind. It is an ancient biological system responding to social information.
Body under stress
The placebo effect resembles another system people struggle with today: the stress response.
Stress evolved to keep you alive in the face of acute danger – predators, famine, immediate physical threat. These days, this useful piece of biological engineering might fire when someone hasn’t replied to your email. The system that once saved people’s lives now makes many miserable over things that would have been unimaginable to their ancestors.
You can talk back to the stress response, consciously reappraising the threat – in other words, reframing a looming deadline not as a catastrophe but as a manageable challenge – to help quiet it. But notice what you cannot do: You cannot simply decide to activate your placebo response. You cannot will yourself to release pain-relieving endorphins by believing hard enough in a sugar pill. For that, you still need the ritual, the white coat, the authority figure. You need someone else.
The stress response, misfiring as it is, remains yours. The placebo response has been outsourced: not because it wasn’t always social, but because even now, people still can’t seem to access it on their own.
The uncomfortable implication
The placebo effect is not a trick of the mind. It is a feature of human biology that people have largely surrendered to whoever performs authority most convincingly.
If belief can activate biological healing pathways, belief can also be manipulated. Charismatic figures, elaborate medical rituals and expensive treatments may produce real improvement in symptoms even when the underlying treatment is physiologically inert. That is how wellness culture works. It leverages the same social scaffolding of care to trigger the body’s internal pharmacy, regardless of whether the treatment itself does anything.
The placebo effect is often celebrated as proof that the mind can heal the body. But I believe that may not be its most interesting lesson. It also reveals that human physiology evolved to take its cues from other people. Your brain, immune system and pain response are not isolated machines. They are deeply intertwined with social signals, expectations and trust.
In a world filled with doctors, advertisements, wellness influencers and elaborate medical rituals, that insight is both fascinating and profoundly maddening. People are walking around with one of the most powerful healing systems ever documented locked inside them, and they can reliably access it only when someone in a position of authority gives them permission.
Someone on social media posed a simple question to Gen Z: Do you consider $75,000 a year to be poor? The answers that came back weren’t simple at all, and, taken together, they’re a pretty honest portrait of what it costs to exist in America right now.
The question came from u/NoHousing11 on r/GenZ, along with a screenshot of an X post that had already made the rounds. In it, an MSNBC commentator suggested that young people fresh out of college, earning $75k or $80k, would naturally be drawn to policies like student loan forgiveness and free healthcare. Another commenter fired back at the framing: “Imagine being so rich that $75k is what you think poor people earn.” Then the kicker: “$75k is a fantasy amount of money to me. I can’t even imagine what it’s like to make that much money a year.”
Some Gen Zers pushed back, pointing out that $75k in NYC with a roommate and no car is actually workable if you’re disciplined about it. “There are people making $20 or even less living in Manhattan,” one commenter noted. The city, for all its expense, at least gives you options for getting by without a car, which in a lot of American suburbs isn’t remotely possible. Another commenter made the sharper point: people who claim to live paycheck to paycheck on $100k in NYC are usually doing it because they’re trying to keep up with wealthier friends, enjoying dinners out, Broadway shows and cabs everywhere.
Groceries are too much. Concert tickets unreasonable. Streaming services keep going up. Flights are overpriced, taking away perks and crashing. Rent already high and getting higher. Gas been stupid for years. Jobs are barely hiring and hardly paying a living wage. America.
But others had a different read entirely. In some high cost-of-living areas, a single person earning less than $80k is already classified as low income by local standards. “If you live in an HCOL area and have to pay every single one of your bills,” one commenter wrote, “then yes, you might be considered struggling.”
The geographic reality of American wages is the whole story here. “Depends on the area. NYC? Yes. Nebraska? No.” That two-sentence comment got a lot of upvotes because it’s basically correct, and also kind of depressing that it needs to be said at all. The number that represents a comfortable life in one zip code represents genuine hardship in another, and the policies, conversations, and assumptions that get built around a single national salary figure tend to miss that entirely.
What the thread really exposed wasn’t a generation with distorted expectations. It was a country where “how much is enough” doesn’t have a single answer anymore.