Since humanity first took form, people have been making art, from cave wall paintings to today’s computer-generated animation. That art has served the public both commercially and as a source of enlightenment. With that in mind, the question is whether the government should pay people to make it.

For the past three years, Ireland has invested in 2,000 artists, paying them around $1,500 per month in basic income. Some may argue that such a scheme is a waste of taxpayer money. However, by the time the three-year trial ended, the endeavor had proven a net positive for the Irish economy, so the government made the program permanent.

What is the Basic Income for the Arts program?

Launched in 2022, the Basic Income for the Arts (BIA) program gave participating artists €325 (about $380) per week to offset the financial hardships most artists face, especially early in their careers. After all, there is a reason that “starving artist” is a common term. The program was also meant to help Ireland’s artistic community recover from the COVID-19 pandemic, which forced many artists to cancel exhibitions and live shows and left them under financial stress even after restrictions were lifted.

Providing money to artists with no strings attached allowed program participants to paint, sculpt, rehearse, and write with less worry about making ends meet. When the BIA pilot program ended in 2025, artists found their work had improved, and Ireland benefited, too.

According to an external report from Alma Economics, for every €1 of public money invested in the pilot, society received €1.39 in return. The report found that the pilot program cost €72 million ($84 million) but generated nearly €80 million ($91 million) in total benefits to the Irish economy.

Other data showed that the program motivated artists to spend an average of four more hours per week on their various projects. Not only was Ireland receiving more artistic work for the public to admire, enjoy, and possibly purchase, but the BIA program also reduced spending on other social supports, such as jobseeker and unemployment benefits.

“The positive economic impact this report has revealed is a very encouraging outcome for the sector and the general public. The economic return on this investment in Ireland’s artists and creative arts workers is immediately having a positive impact for the sector and the economy overall,” Minister Patrick O’Donovan said in a statement.

A survey of the pilot found it was popular with Irish citizens, even those who weren’t part of the arts sector or didn’t have a particular interest in art. The pilot was successful enough that Ireland is making it permanent, with hopes of expanding the program to include 200 more artists, should the budget allow. It should be noted that the money didn’t just help these artists make ends meet; it also allowed them to purchase quality supplies and make better art.

Several similar pilot programs have been implemented in the U.S. at the state and local levels. Springboard for the Arts and Creatives for New York provided similar no-strings income to artists in Minnesota and New York, respectively; however, those programs were funded by the private sector rather than public money. The Yerba Buena Center for the Arts runs a publicly funded guaranteed income program for artists, but it is limited to resident artists in San Francisco.

Stanford University’s Basic Income Lab is monitoring UBI pilot programs in 18 states with varying payment rates. Supporters of UBI argue that current social safety net programs, with their means testing and bureaucracy, aren’t as effective at helping lower-income families as straightforward, no-strings-attached payments.

UBI supporters tout the 2024 results of the largest such pilot program, which showed that recipients used the payments for needs such as housing, food, and transportation, relieving financial stress and opening up better career opportunities. However, a 2020 Pew Research Center poll found that 54% of Americans oppose the federal government providing a universal basic income to all adults, with many believing that such income would make people lazier and more dependent on the government.

In any case, it will be interesting to see how Ireland’s BIA program develops over the next few years, along with the results of other, similar basic income pilot programs in the U.S. It’s wonderful to see that such a positive program, paying people to make art, actually enriches both the public and the bottom line.

  • Italian man claims to be ‘human cheetah’ with lightning-fast reflexes
    Photo credit: Canva. A man with fast reflexes.

    At first glance, this probably looks like a camera trick. Ken Lee, an Italian content creator, has built a massive online following by doing something that doesn’t quite feel real. Viewers refer to him as the “human cheetah” because it appears he has near-instant reflexes.

    He grabs objects out of the air with uncanny precision, flicks clothespins and lighters, and throws a blur of punches and kicks at seemingly impossible speeds; it is easy to call him unbelievable. Half the audience thinks his viral speed videos are fake. The other half is just as convinced they are watching something incredibly rare.

    Hands so fast they blur time

    In the video above, a timer runs to confirm its authenticity. In what looks like half a second, he reaches out and snags the lighter from the table. To prove it is real, he does it twice.

    Though he has amassed millions of followers on his TikTok page, the identity behind the mysterious influencer remains largely unknown. Active since around 2022, with almost 100 million accumulated likes, Lee has cultivated a fandom around his self-proclaimed “Superhero per Hobby!”

    Do you believe it is real? Is this person the fastest human alive? Many followers cannot wait for the next video to be posted. Plenty of his fervent fans are Italian, so sifting through the remarks takes a bit of hunting. Here are some comments that sum up how much people enjoy the fun and the spectacle:

    “Ken lee the fastest and the best”

    “Most dangerous human”

    “Is this what the lighter sees before my homie steals it”

    “It was sped up during he grabbed the lighter, if u count up with the timer u would be off by like 0,5 seconds whenever he grabs the lighter.”

    “If the flash were human”

    “How is it possible to get such powers ?”

    “I blinked and I missed it”

    People love good entertainment

    The awe of peak performance attracts people to watch elite athletes, musicians, or even dancers. There is something that deeply satisfies all of us when a human appears to push a skill to its limit. Whether it is real or fake seems to matter less than the opportunity to chime in on some good entertainment.

    How far could any of us go by practicing and repeating a particular motion over and over until it is mastered? Beneath the flashy nickname and his viral speed videos, Lee’s content has a way of drawing people in. This is not a superpower. Just repetition. Focus. Obsession. And maybe some digital wizardry.

    Testing the science of speed

    If you wish to question the validity of Lee’s performances, maybe some basic science can help. Human reaction time is not just a reflex. A 2024 study found that the nervous system can fine-tune responses in real time. Practice can make movements appear almost automatic.

    It has been well established in research that the gap between seeing something and responding to it has a limit. A 2025 study concluded that, even at the most elite extremes, reaction times bottom out at around 100 milliseconds. At that speed, the human brain can barely process that something has happened.

    Science suggests that Lee is not necessarily moving as fast as we might perceive him to be. And therein lies all the fun of it: we cannot prove it is real, nor can we prove that it is fake.

    Maybe Lee is the “fastest man alive” or the so-called “human cheetah.” Or maybe he is just a remarkable entertainer. Either way, he has clearly tapped into something strange and fascinating: a blend of human ability and fantasy that people do not want to miss.

    To give context to Lee’s videos, watch this performance on Tú Sí Que Vales.

  • Despite all the likes, literallys and dropped g’s, English isn’t decaying before our eyes
    Photo credit: LisaStrachan/iStock via Getty Images. Fear not: There isn’t anything that needs saving.

    As a linguistics professor, I’m often asked why English is decaying before our eyes, whether it’s “like” being used promiscuously, “t”s being dropped deleteriously or “literally” being deployed nonliterally.

    While these common gripes point to eccentric speech patterns, they don’t point to grammatical annihilation. English has weathered far worse.

    Let’s start with something we can all agree on: Old English, spoken from approximately A.D. 450 to 1100, is pretty unintelligible to us today. Anyone who’s had the pleasure of reading “Beowulf” in high school knows how different English back then used to sound. Word endings did a lot more grammatical work, and verbs followed more complicated patterns. Remnants of those rules fuel lingering debates today, such as when to use “whom” over “who,” and whether the past tense of “sneak” is “snuck” or “sneaked.”

    The language went on to experience centuries of tumult: Viking invasions, which introduced Old Norse influence; Anglo-Norman French rule, which shifted the language of the elite to French; and 18th-century grammarians, who dictated norms with their elocution and grammar guides.

    In that time, English has lost almost all of the more complex linguistic trappings it was born with to become the language we know and – at least, sometimes – love today. And as I explain in my new book, “Why We Talk Funny: The Real Story Behind Our Accents,” it was all thanks to the way that language naturally evolves to meet the social needs of its speakers.

    From dropping the ‘l’ to dropping the ‘g’

    The things we tend to label as “bad” or sloppy English – for instance, the “g” that gets lost from our -ing endings or the deletion of a “t” when we say a word like “innernet” – actually reflect speech habits that are centuries old.

    Take, for example, “often.” Originally spoken with the “t,” that pronunciation gradually became less favored around the 15th century, alongside that “l” in “talk” and the “k” in “know.” Meanwhile, the “s” now stuck on the back of verbs like “does” and “makes” began as a dialectal variant that only became popular in 16th-century London. It gradually replaced “th” whenever third persons were involved, as in “The lady doth protest too much.”

    While dropping the “l” in talk may have been initially frowned upon, today it would be strange if you pronounced the letter. And the shift makes sense: It smoothed out some linguistic awkwardness for the sake of efficiency.

    If people learned to look at language more like linguists, they might come around to seeing that there is more than one perspective on what good speech consists of.

    And yes, that absolutely is a sentence ending with a preposition – something many modern grammar guides discourage, even though the idea only took hold after 18th-century grammarian Robert Lowth intimated it was a less elegant choice based on the model of Latin.

    Though Lowth voiced no hard and fast rule against it, many a grammar maven later misconstrued his advice as an admonition. Just like that, a mere suggestion became grammatical law.

    The rise of the grammar sticklers

    Many of today’s ideas about what constitutes correct English are based on a singular – often mistaken – 19th-century view of the forces that govern our language.

    In the late 18th century, the English-speaking world began experiencing class restructuring and higher literacy rates. As greater class mobility became possible, accent differences became class markers that separated new money from old money.

    Emulation of upper-crust speech norms became popular among the nouveau riche. With literacy also on the rise, grammarians and elocutionists raced to dictate the terms of “proper” English on and off the page, which led to the rise of usage guides and dictionaries that were eager to sell a certain brand of speech.

    Another example of grammarian angst reconfiguring the view of an otherwise perfectly fine form is the droppin’ of the “g.” It became so tied to slovenly speech that it was branded with an apostrophe in the 19th century to make sure no one missed its lackadaisical and nonstandard nature.

    Up until the 19th century, however, no one seemed to care whether one pronounced it as “-in” or “-ing.”

    Evidence suggests that -ing wasn’t even heard as the correct form. Many elocution guides from the 18th century provide rhyming word pairs like “herring/heron,” “coughing/coffin” and “jerking/jerkin,” which suggest that “-in” may have been the preferred pronunciation of words ending with “-ing.” Even writer and satirist Jonathan Swift – a frequent lobbyist for “proper” English – rhymes “brewing” with “ruin” in his 1731 poem “Verses on the Death of Dr. Swift, D.S.P.D.”

    Embrace the change

    Language has always shifted and evolved. People often bristle at changes from what they’ve known to what is new. And maybe that’s because this process often begins with speakers that society usually looks less favorably on: the young, the female, the poor, the nonwhite.

    But it’s important to remember that being disliked and being bad are not the same thing – that today’s speech pariahs are driven by the same linguistic and social needs as the Londoners who started going with “does” instead of “doth” or dropped the “t” in “often.”

    So if you think the speech that comes from your lips is the “correct” version, think again. Thou, like every other English speaker, art literally the product of centuries of linguistic reinvention.

    This article originally appeared on The Conversation. You can read it here.

  • 10 boys and 10 girls were left alone in separate houses and the different results are just wild
    Photo credit: Ian Taylor Photographer. Two young children play in the grass.

    It sounds like the plot of William Golding’s Lord of the Flies. However, in the mid-2000s, it was a very real and very controversial reality television experiment.

    Footage from the UK Channel 4 documentary Boys and Girls Alone is captivating audiences all over again. It offers a fascinating and chaotic look at what happens when you remove parents from the equation.

    The premise was simple but high stakes. Twenty children, aged 11 and 12, were split into two groups by gender. Ten boys and ten girls were placed in separate houses and told to live without adult supervision for five days.

    The Setup

    While there were safety nets in place, the day-to-day living was entirely up to the kids. A camera crew was present but instructed not to intervene unless safety was at risk. The children could also ring a bell to speak to a nurse or psychiatrist.

    The houses were fully stocked with food, cleaning supplies, toys, and paints. Everything they needed to survive was there. They just had to figure out how to use it.

    The Boys: Instant Chaos

    In the boys’ house, the unraveling was almost immediate. The newfound freedom triggered a rapid descent into high-energy anarchy.

    They engaged in water pistol fights and threw cushions. In one memorable instance, a boy named Michael covered the carpet in sticky popcorn kernels just because he could.

    The destruction eventually escalated to the walls. The boys covered the house in writing, drawing, and paint. But the euphoria of freedom eventually crashed into the reality of consequences.

    “We never expected to be like this, but I’m really upset that we trashed it so badly,” one boy admitted in the footage. “We were trying to explore everything at once and got too carried away in ourselves.”

    Their attempts to clean up were frantic and largely ineffective. Nutrition also took a hit. Despite having completed a cooking course, the boys survived mostly on cereal, sugar, and the occasional frozen pizza. By the end of the week, the house was trashed, and the group had fractured into opposing factions.

    The Girls: Organized Society

    The girls’ house looked like a different planet.

    In stark contrast to the mayhem next door, the girls immediately established a functioning society. They organized a cooking roster, with a girl named Sherry preparing their first meal. They baked cakes. They put on a fashion show. They even drew up a scrupulous chores list to ensure the house stayed livable.

    While their stay wasn’t devoid of interpersonal drama, the experiment highlighted a fascinating divergence in socialization. Left to their own devices, the girls prioritized community and maintenance. The boys tested the absolute limits of their environment until it broke.

    The documentary was controversial when it aired, with critics questioning the ethics of placing children in unsupervised situations for entertainment. But what made it so enduring, and why footage keeps resurfacing years later, is what it reveals about how kids are socialized long before anyone puts them in a house together. The boys weren’t born anarchists and the girls weren’t born organizers. They arrived at those houses already shaped by years of being told, implicitly and explicitly, what boys do and what girls do. Whether that’s a nature story or a nurture story is the question the documentary keeps asking without quite answering, which is probably why people are still watching and arguing about it nearly two decades later.

    This article originally appeared two years ago. It has been updated.
