Eighteen months ago, it was plausible that artificial intelligence might take a different path than social media. Back then, AI’s development hadn’t consolidated under a small number of big tech firms. Nor had it capitalized on consumer attention, surveilling users and delivering ads.

Unfortunately, the AI industry is now taking a page from the social media playbook and has set its sights on monetizing consumer attention. When OpenAI launched its ChatGPT Search feature in late 2024 and its browser, ChatGPT Atlas, in October 2025, it kicked off a race to capture online behavioral data to power advertising. It’s part of a yearslong turnabout by OpenAI, whose CEO Sam Altman once called the combination of ads and AI “unsettling” and now promises that ads can be deployed in AI apps while preserving trust. The rampant speculation among OpenAI users who believe they see paid placements in ChatGPT responses suggests they are not convinced.

In 2024, AI search company Perplexity started experimenting with ads in its offerings. A few months after that, Microsoft introduced ads to its Copilot AI. Google’s AI Mode for search now increasingly features ads, as does Amazon’s Rufus chatbot.

As a security expert and a data scientist, we see these examples as harbingers of a future where AI companies profit from manipulating their users’ behavior for the benefit of their advertisers and investors. It’s also a reminder that the time to steer AI development away from private exploitation and toward public benefit is quickly running out.

The functionality of ChatGPT Search and its Atlas browser is not really new. Meta, commercial AI competitor Perplexity and even ChatGPT itself have had similar AI search features for years, and both Google and Microsoft beat OpenAI to the punch by integrating AI with their browsers. But OpenAI’s business positioning signals a shift.

We believe the ChatGPT Search and Atlas announcements are worrisome because there is really only one way to make money on search: the advertising model pioneered ruthlessly by Google.

Advertising model

Ruled a monopolist in U.S. federal court, Google has earned more than US$1.6 trillion in advertising revenue since 2001. You may think of Google as a web search company, or a streaming video company (YouTube), or an email company (Gmail), or a mobile phone company (Android, Pixel), or maybe even an AI company (Gemini). But those products are ancillary to Google’s bottom line. The advertising segment typically accounts for 80% to 90% of its total revenue. Everything else is there to collect users’ data and direct users’ attention to its advertising revenue stream.

After two decades in this monopoly position, Google’s search product is much more tuned to the company’s needs than those of its users. When Google Search first arrived decades ago, it was revelatory in its ability to instantly find useful information across the still-nascent web. In 2025, its search result pages are dominated by low-quality and often AI-generated content, spam sites that exist solely to drive traffic to Amazon sales – a tactic known as affiliate marketing – and paid ad placements, which at times are indistinguishable from organic results.

Plenty of advertisers and observers seem to think AI-powered advertising is the future of the ad business.

Highly persuasive

Paid advertising in AI search, and AI models generally, could look very different from traditional web search. It has the potential to influence your thinking, spending patterns and even personal beliefs in much more subtle ways. Because AI can engage in active dialogue, addressing your specific questions, concerns and ideas rather than just filtering static content, its potential for influence is much greater. It’s like the difference between reading a textbook and having a conversation with its author.

Imagine you’re conversing with your AI agent about an upcoming vacation. Did it recommend a particular airline or hotel chain because they really are best for you, or does the company get a kickback for every mention? If you ask about a political issue, does the model bias its answer based on which political party has paid the company a fee, or based on the bias of the model’s corporate owners?

There is mounting evidence that AI models are at least as effective as people at persuading users to do things. A December 2023 meta-analysis of 121 randomized trials reported that AI models are as good as humans at shifting people’s perceptions, attitudes and behaviors. A more recent meta-analysis of eight studies similarly concluded there was “no significant overall difference in persuasive performance between (large language models) and humans.”

This influence may go well beyond shaping what products you buy or who you vote for. As with the field of search engine optimization, the incentive for humans to perform for AI models might shape the way people write and communicate with each other. How we express ourselves online is likely to be increasingly directed to win the attention of AIs and earn placement in the responses they return to users.

A different way forward

Much of this is discouraging, but a great deal can be done to change it.

First, it’s important to recognize that today’s AI is fundamentally untrustworthy, for the same reasons that search engines and social media platforms are.

The problem is not the technology itself; fast ways to find information and communicate with friends and family can be wonderful capabilities. The problem is the priorities of the corporations that own these platforms and for whose benefit they are operated. Recognize that you don’t have control over what data is fed to the AI, who it is shared with and how it is used. It’s important to keep that in mind when you connect devices and services to AI platforms, ask them questions, or consider buying or doing the things they suggest.

There is also a lot that people can demand of governments to restrain harmful corporate uses of AI. In the U.S., Congress could enshrine consumers’ rights to control their own personal data, as the EU already has. It could also create a data protection enforcement agency, as essentially every other developed nation has.

Governments worldwide could invest in Public AI – models built by public agencies, offered universally for public benefit, and operated transparently under public oversight. They could also restrict how corporations can collude to exploit people using AI, for example by barring advertisements for dangerous products such as cigarettes and by requiring disclosure of paid endorsements.

Every technology company seeks to differentiate itself from competitors, particularly in an era when yesterday’s groundbreaking AI quickly becomes a commodity that will run on any kid’s phone. One differentiator is building a trustworthy service. It remains to be seen whether companies such as OpenAI and Anthropic can sustain profitable businesses on the back of premium subscription services such as ChatGPT Plus and Pro and Claude Pro. If they are going to keep convincing consumers and businesses to pay for these services, they will need to build trust.

That will require making real commitments to consumers on transparency, privacy, reliability and security – and keeping them, consistently and verifiably.

And while no one knows what the future business models for AI will be, we can be certain that consumers do not want to be exploited by AI, secretly or otherwise.

This article originally appeared on The Conversation.
