Justice Ruth Bader Ginsburg died on Friday, the Supreme Court announced.

Chief Justice John Roberts said in a statement: “Our nation has lost a jurist of historic stature.”

Even before her appointment, she had reshaped American law. When he nominated Ginsburg to the Supreme Court, President Bill Clinton compared her legal work on behalf of women to the epochal work of Thurgood Marshall on behalf of African-Americans.

The comparison was entirely appropriate: As Marshall oversaw the legal strategy that culminated in Brown v. Board of Education, the 1954 case that outlawed segregated schools, Ginsburg coordinated a similar effort against sex discrimination.

Ginsburg’s work as an attorney in the 1970s, two decades before she joined the court, fundamentally changed the Supreme Court’s approach to women’s rights, and the modern skepticism about sex-based policies stems in no small way from her lawyering. Ginsburg’s work helped to change the way we all think about women – and men, for that matter.

I’m a legal scholar who studies social reform movements and I served as a law clerk to Ginsburg when she was an appeals court judge. In my opinion – as remarkable as Marshall’s work on behalf of African-Americans was – in some ways Ginsburg faced more daunting prospects when she started.

Starting at zero

When Marshall began challenging segregation in the 1930s, the Supreme Court had rejected some forms of racial discrimination even though it had upheld segregation.

When Ginsburg started her work in the 1960s, the Supreme Court had never invalidated any type of sex-based rule. Indeed, it had rejected every challenge to laws that treated women worse than men.

For instance, in 1873, the court allowed Illinois authorities to ban Myra Bradwell from becoming a lawyer because she was a woman. Justice Joseph P. Bradley, widely viewed as a progressive, wrote that women were too fragile to be lawyers: “The paramount destiny and mission of woman are to fulfil the noble and benign offices of wife and mother. This is the law of the Creator.”

And in 1908, the court upheld an Oregon law that limited the number of hours that women – but not men – could work. The opinion relied heavily on a famous brief submitted by Louis Brandeis to support the notion that women needed protection to avoid harming their reproductive function.

As late as 1961, the court upheld a Florida law that for all practical purposes kept women from serving on juries because they were “the center of the home and family life” and therefore need not incur the burden of jury service.

Challenging paternalistic notions

Ginsburg followed Marshall’s approach to promote women’s rights – despite some important differences between segregation and gender discrimination.

Segregation rested on the racist notion that blacks were less than fully human and deserved to be treated like animals. Gender discrimination reflected paternalistic notions of female frailty. Those notions placed women on a pedestal – but also denied them opportunities.

Either way, though, blacks and women got the short end of the stick.

Ginsburg started with a seemingly inconsequential case. Reed v. Reed challenged an Idaho law requiring probate courts to appoint men to administer estates, even if there were a qualified woman who could perform that task.

Sally and Cecil Reed, the long-divorced parents of a teenage son who committed suicide while in his father’s custody, both applied to administer the boy’s tiny estate.

The probate judge appointed the father as required by state law. Sally Reed appealed the case all the way to the Supreme Court.

Ginsburg did not argue the case, but wrote the brief that persuaded a unanimous court in 1971 to invalidate the state’s preference for males. As the court’s decision stated, that preference was “the very kind of arbitrary legislative choice forbidden by the Equal Protection Clause of the 14th Amendment.”

Two years later, Ginsburg won in her first appearance before the Supreme Court. She appeared on behalf of Air Force Lt. Sharron Frontiero. Frontiero was required by federal law to prove that her husband, Joseph, was dependent on her for at least half his economic support in order to qualify for housing, medical and dental benefits.

If Joseph Frontiero had been the soldier, the couple would have automatically qualified for those benefits. Ginsburg argued that sex-based classifications such as the one Sharron Frontiero challenged should be treated the same as the now-discredited race-based policies.

By an 8–1 vote, the court in Frontiero v. Richardson agreed that this sex-based rule was unconstitutional. But the justices could not agree on the legal test to use for evaluating the constitutionality of sex-based policies.

New York Times article about the Wiesenfeld case, which refers to Ginsburg as ‘a woman lawyer.’ Photo credit: New York Times

Strategy: Represent men

In 1974, Ginsburg suffered her only loss in the Supreme Court, in a case that she entered at the last minute.

Mel Kahn, a Florida widower, asked for the property tax exemption that state law allowed only to widows. The Florida courts ruled against him.

Ginsburg, working with the national ACLU, stepped in after the local affiliate brought the case to the Supreme Court. But a closely divided court upheld the exemption as compensation for women who had suffered economic discrimination over the years.

Despite the unfavorable result, the Kahn case showed an important aspect of Ginsburg’s approach: her willingness to work on behalf of men challenging gender discrimination. She reasoned that rigid attitudes about sex roles could harm everyone and that the all-male Supreme Court might more easily get the point in cases involving male plaintiffs.

She turned out to be correct, just not in the Kahn case.

Ginsburg represented widower Stephen Wiesenfeld in challenging a Social Security Act provision that provided parental benefits only to widows with minor children.

Wiesenfeld’s wife had died in childbirth, so he was denied benefits even though he faced all of the challenges of single parenthood that a mother would have faced. The Supreme Court gave Wiesenfeld and Ginsburg a win in 1975, unanimously ruling that sex-based distinction unconstitutional.

And two years later, Ginsburg successfully represented Leon Goldfarb in his challenge to another sex-based provision of the Social Security Act: Widows automatically received survivor’s benefits on the death of their husbands. But widowers could receive such benefits only if the men could prove that they were financially dependent on their wives’ earnings.

Ginsburg also wrote an influential brief in Craig v. Boren, the 1976 case that established the current standard for evaluating the constitutionality of sex-based laws.

Ginsburg at the 2015 State of the Union address. Photo credit: Reuters/Joshua Roberts

Like Wiesenfeld and Goldfarb, the challengers in the Craig case were men. Their claim seemed trivial: They objected to an Oklahoma law that allowed women to buy low-alcohol beer at age 18 but required men to be 21 to buy the same product.

But this deceptively simple case illustrated the vices of sex stereotypes: Aggressive men (and boys) drink and drive; women (and girls) are demure passengers. And those stereotypes affected everyone’s behavior, including the enforcement decisions of police officers.

Under the standard delineated by the justices in the Boren case, such a law can be justified only if it is substantially related to an important governmental interest.

Among the few laws that satisfied this test was a California law that punished sex with an underage female but not with an underage male as a way to reduce the risk of teen pregnancy.

These are only some of the Supreme Court cases in which Ginsburg played a prominent part as a lawyer. She handled many lower-court cases as well. She had plenty of help along the way, but everyone recognized her as the key strategist.

In the century before Ginsburg won the Reed case, the Supreme Court never met a gender classification that it didn’t like. Since then, sex-based policies usually have been struck down.

I believe President Clinton was absolutely right in comparing Ruth Bader Ginsburg’s efforts to those of Thurgood Marshall, and in appointing her to the Supreme Court.

This article originally appeared on The Conversation.

  • Italian man claims to be ‘human cheetah’ with lightning-fast reflexes
Photo credit: Canva. A man with fast reflexes.

    At first glance, this probably looks like a camera trick. Ken Lee, an Italian content creator, has built a massive online following by doing something that doesn’t quite feel real. Viewers refer to him as the “human cheetah” because it appears he has near-instant reflexes.

    Grabbing objects out of the air with uncanny precision, flicking clothespins and lighters, and throwing a blur of punches and kicks at impossible speeds, it is easy to call him unbelievable. Half the audience thinks his viral speed videos are fake. The other half is just as convinced they are watching something incredibly rare.

    Hands so fast they blur time

In the video, an on-screen timer runs to confirm the clip has not been sped up. In what looks like half a second, he reaches out and snags the lighter from the table. To prove it is real, he does it twice.

Lee has amassed millions of followers on his TikTok page, yet the identity behind the mysterious influencer remains largely unknown. Active since around 2022, with almost 100 million accumulated likes, he has cultivated a fandom around his self-proclaimed title of “Superhero per Hobby!” (“superhero as a hobby”).

    Do you believe it is real? Is this person the fastest human alive? Many followers cannot wait for the next video to be posted. Plenty of his fervent fans are Italian, so sifting through the remarks takes a bit of hunting. Here are some comments that sum up how much people enjoy the fun and the spectacle:

    “Ken lee the fastest and the best”

    “Most dangerous human”

    “Is this what the lighter sees before my homie steals it”

    “It was sped up during he grabbed the lighter, if u count up with the timer u would be off by like 0,5 seconds whenever he grabs the lighter.”

    “If the flash were human”

    “How is it possible to get such powers ?”

    “I blinked and I missed it”

    People love good entertainment

    The awe of peak performance attracts people to watch elite athletes, musicians, or even dancers. There is something that deeply satisfies all of us when a human appears to push a skill to its limit. Whether it is real or fake seems to matter less than the opportunity to chime in on some good entertainment.

    How far could any of us go by practicing and repeating a particular motion over and over until it is mastered? Beneath the flashy nickname and his viral speed videos, Lee’s content has a way of drawing people in. This is not a superpower. Just repetition. Focus. Obsession. And maybe some digital wizardry.

    Testing the science of speed

    If you wish to question the validity of Lee’s performances, maybe some basic science can help. Human reaction time is not just a reflex. A 2024 study found that the nervous system can fine-tune responses in real time. Practice can make movements appear almost automatic.

It has been well established in research that the gap between seeing something and responding to it has a limit. A 2025 study concluded that even at the most elite extremes, reaction times bottom out at around 100 milliseconds. At that speed, the human brain can barely process that something has happened.

Science suggests that Lee is not necessarily moving as fast as we perceive him to be. And therein lies all the fun of it. We cannot prove it is real, nor can we prove that it is fake.

    Maybe Lee is the “fastest man alive” or the so-called “human cheetah.” Or maybe he is just a remarkable entertainer. Either way, he has clearly tapped into something strange and fascinating: a blend of human ability and fantasy that people do not want to miss.

For more context on Lee’s videos, he has also performed on the Italian talent show Tú Sí Que Vales.

  • Despite all the likes, literallys and dropped g’s, English isn’t decaying before our eyes
Photo credit: LisaStrachan/iStock via Getty Images. Fear not: There isn’t anything that needs saving.

As a linguistics professor, I’m often asked why English is decaying before our eyes – whether it’s “like” being used promiscuously, “t”s being dropped deleteriously or “literally” being deployed nonliterally.

    While these common gripes point to eccentric speech patterns, they don’t point to grammatical annihilation. English has weathered far worse.

    Let’s start with something we can all agree on: Old English, spoken from approximately A.D. 450 to 1100, is pretty unintelligible to us today. Anyone who’s had the pleasure of reading “Beowulf” in high school knows how different English back then used to sound. Word endings did a lot more grammatical work, and verbs followed more complicated patterns. Remnants of those rules fuel lingering debates today, such as when to use “whom” over “who,” and whether the past tense of “sneak” is “snuck” or “sneaked.”

The language went on to experience centuries of tumult: Viking invasions, which introduced Old Norse influence; Anglo-Norman French rule, which shifted the language of the elite to French; and 18th-century grammarians, who dictated norms with their elocution and grammar guides.

    In that time, English has lost almost all of the more complex linguistic trappings it was born with to become the language we know and – at least, sometimes – love today. And as I explain in my new book, “Why We Talk Funny: The Real Story Behind Our Accents,” it was all thanks to the way that language naturally evolves to meet the social needs of its speakers.

    From dropping the ‘l’ to dropping the ‘g’

    The things we tend to label as “bad” or sloppy English – for instance, the “g” that gets lost from our -ing endings or the deletion of a “t” when we say a word like “innernet” – actually reflect speech habits that are centuries old.

Take, for example, “often.” Originally spoken with the “t,” that pronunciation gradually became less favored around the 15th century, alongside the “l” in “talk” and the “k” in “know.” Meanwhile, the “s” now stuck on the back of verbs like “does” and “makes” began as a dialectal variant that only became popular in 16th-century London. It gradually replaced “th” whenever third persons were involved, as in “The lady doth protest too much.”

While dropping the “l” in “talk” may have been initially frowned upon, today it would be strange if you pronounced the letter. And the shift makes sense: It smoothed out some linguistic awkwardness for the sake of efficiency.

    If people learned to look at language more like linguists, they might come around to seeing that there is more than one perspective on what good speech consists of.

    And yes, that absolutely is a sentence ending with a preposition – something many modern grammar guides discourage, even though the idea only took hold after 18th-century grammarian Robert Lowth intimated it was a less elegant choice based on the model of Latin.

    Though Lowth voiced no hard and fast rule against it, many a grammar maven later misconstrued his advice as an admonition. Just like that, a mere suggestion became grammatical law.

    The rise of the grammar sticklers

    Many of today’s ideas about what constitutes correct English are based on a singular – often mistaken – 19th-century view of the forces that govern our language.

    In the late 18th century, the English-speaking world began experiencing class restructuring and higher literacy rates. As greater class mobility became possible, accent differences became class markers that separated new money from old money.

    Emulation of upper-crust speech norms became popular among the nouveau riche. With literacy also on the rise, grammarians and elocutionists raced to dictate the terms of “proper” English on and off the page, which led to the rise of usage guides and dictionaries that were eager to sell a certain brand of speech.

    Another example of grammarian angst reconfiguring the view of an otherwise perfectly fine form is the droppin’ of the “g.” It became so tied to slovenly speech that it was branded with an apostrophe in the 19th century to make sure no one missed its lackadaisical and nonstandard nature.

    Up until the 19th century, however, no one seemed to care whether one pronounced it as “-in” or “-ing.”

Evidence suggests that “-ing” wasn’t even heard as the correct form. Many elocution guides from the 18th century provide rhyming word pairs like “herring/heron,” “coughing/coffin” and “jerking/jerkin,” which suggest that “-in” may have been the preferred pronunciation of words ending with “-ing.” Even writer and satirist Jonathan Swift – a frequent lobbyist for “proper” English – rhymes “brewing” with “ruin” in his 1731 poem “Verses on the Death of Dr. Swift, D.S.P.D.”

    Embrace the change

    Language has always shifted and evolved. People often bristle at changes from what they’ve known to what is new. And maybe that’s because this process often begins with speakers that society usually looks less favorably on: the young, the female, the poor, the nonwhite.

But it’s important to remember that being disliked and being bad are not the same thing – that today’s speech pariahs are driven by the same linguistic and social needs as the Londoners who started going with “does” instead of “doth” or dropped the “t” in “often.”

    So if you think the speech that comes from your lips is the “correct” version, think again. Thou, like every other English speaker, art literally the product of centuries of linguistic reinvention.

This article originally appeared on The Conversation.

  • 10 boys and 10 girls were left alone in separate houses and the different results are just wild
Photo credit: Ian Taylor Photographer. Two young children play in the grass.

    It sounds like the plot of William Golding’s Lord of the Flies. However, in the mid-2000s, it was a very real and very controversial reality television experiment.

    Footage from the UK Channel 4 documentary Boys and Girls Alone is captivating audiences all over again. It offers a fascinating and chaotic look at what happens when you remove parents from the equation.

    The premise was simple but high stakes. Twenty children, aged 11 and 12, were split into two groups by gender. Ten boys and ten girls were placed in separate houses and told to live without adult supervision for five days.

    The Setup

    While there were safety nets in place, the day-to-day living was entirely up to the kids. A camera crew was present but instructed not to intervene unless safety was at risk. The children could also ring a bell to speak to a nurse or psychiatrist.

    The houses were fully stocked with food, cleaning supplies, toys, and paints. Everything they needed to survive was there. They just had to figure out how to use it.

    The Boys: Instant Chaos

    In the boys’ house, the unraveling was almost immediate. The newfound freedom triggered a rapid descent into high-energy anarchy.

    They engaged in water pistol fights and threw cushions. In one memorable instance, a boy named Michael covered the carpet in sticky popcorn kernels just because he could.

    The destruction eventually escalated to the walls. The boys covered the house in writing, drawing, and paint. But the euphoria of freedom eventually crashed into the reality of consequences.

    “We never expected to be like this, but I’m really upset that we trashed it so badly,” one boy admitted in the footage. “We were trying to explore everything at once and got too carried away in ourselves.”

    Their attempts to clean up were frantic and largely ineffective. Nutrition also took a hit. Despite having completed a cooking course, the boys survived mostly on cereal, sugar, and the occasional frozen pizza. By the end of the week, the house was trashed, and the group had fractured into opposing factions.

    The Girls: Organized Society

    The girls’ house looked like a different planet.

In stark contrast to the mayhem next door, the girls immediately established a functioning society. They organized a cooking roster, with a girl named Sherry preparing their first meal. They baked cakes. They put on a fashion show. They even drew up a meticulous chores list to ensure the house stayed livable.

    While their stay wasn’t devoid of interpersonal drama, the experiment highlighted a fascinating divergence in socialization. Left to their own devices, the girls prioritized community and maintenance. The boys tested the absolute limits of their environment until it broke.

    The documentary was controversial when it aired, with critics questioning the ethics of placing children in unsupervised situations for entertainment. But what made it so enduring, and why footage keeps resurfacing years later, is what it reveals about how kids are socialized long before anyone puts them in a house together. The boys weren’t born anarchists and the girls weren’t born organizers. They arrived at those houses already shaped by years of being told, implicitly and explicitly, what boys do and what girls do. Whether that’s a nature story or a nurture story is the question the documentary keeps asking without quite answering, which is probably why people are still watching and arguing about it nearly two decades later.

    This article originally appeared two years ago. It has been updated.
