
Signs of the Times: Ads + AI Everything

You know what they say about good intentions

OpenAI was founded in 2015 as a non-profit research lab with the goal of developing AI for the benefit of humanity, free from financial pressures. In 2019, they transitioned to a capped-profit model, and now they’ve become a for-profit in an attempt to sustain their multibillion-dollar costs.

In a paper by Max Hart, Kyla Bavin, and Adam Lynes commenting on the forefathers of modern AI, Altman and Hassabis, the authors state: “They envisioned AI as a force that could transform human existence for the better if developed responsibly. However, they failed to recognise that AI’s telos, its inherent purpose, would not be shaped by scientific ideals alone but inevitably corrupted by capitalism.”

Sam Altman once said ads would be an absolute last resort, and he wanted to avoid them at all costs – now here we are. OpenAI has run out of other options. As Hart, Bavin, and Lynes put it, the company’s purpose and mission have been corrupted by capitalism. With hundreds of millions of weekly users, the vast majority of whom are free users rather than paid members, ChatGPT is generating very high costs without bringing in direct revenue, forcing it to turn to ads to monetize the platform. 

The ethics of ads

As the hosts of the Hard Fork podcast mention, there really is not a better way for OpenAI to generate revenue at this point if they want to keep any part of their system free for users. I’d say I’m inclined to agree – it’s not like streaming services, phone games, and even the NYT puzzles haven’t done the same. 

My concern is *how* adding paid advertisements and monetary incentives to AI will impact the already questionable validity of the responses that are shown. If they’re in such dire need of cash, will OpenAI take the money from advertisers and run without actually helping businesses see results? Will they eventually start building features and specifically training the AI to push ads? Think Meta’s Andromeda and the addition of Shop features to social media platforms like TikTok and Instagram.

Internet search was once pure – the best answer appeared at the top rather than the answer someone paid for you to see – but it has since shifted from best result to top-paying result, and the same shift is happening with AI responses. Is this a manipulation that preys upon less technologically adept users, leading them to believe they’re clicking on the top result because it’s the best match rather than an ad? Whose responsibility is it to make sure these people know they’re viewing ads? At what point has the company displaying the ads done enough to make it “clear” that it’s sponsored content?

Google ads have come a long way visually. They started out with yellow or blue shaded backgrounds and large right-hand boxes headed “sponsored links,” then moved to a small but bright green or yellow “Ad” sticker, then to plain black text reading “Ad,” and now carry only a small “Sponsored” label – the seamless era we’re currently in. Sponsored results have become almost indistinguishable from normal search results.

When people already take AI answers to heart (despite all the evidence telling them they shouldn’t), will they be equally content to accept sponsored results? Will they even notice the ads? Even if they don’t start out as visually “seamless” as Google ads are now, perhaps they’ll also evolve so that the distinctions slowly disappear. 

Will these ads in AI responses actually help businesses, or just convince them they have to spend more on ads to have a shot at being seen? Businesses may end up paying to have their ads shown in AI results, bidding on keywords that only make sense in the context of these AI ads, with little to no results – especially when OpenAI itself is encouraging small businesses and emerging brands to use its tools to compete with larger companies. Often, when a new feature or opportunity comes out, everyone scrambles to keep up, expecting it to be the next *big thing*, but only sometimes does that risk pay off. With a system as volatile and unpredictable as AI, I’m skeptical at best, cynical at worst, about how effective these ads will be for advertisers.

Breaking news, as it’s happening

From the time I started my research for this article to the time I started actually writing it, new stipulations about Google AI features have been announced for the UK. Google will now be required to offer sites the option to opt out of their AI features, both for inclusion in AI overviews and use as training data. Publisher content also must be properly attributed in AI results, and Google has to be able to demonstrate fair ranking and transparency, perhaps to address concerns about the lack of reporting for AI search features. Sadly for us in the US, these new requirements are only being put into place for our friends across the pond.

Also new since I started this article (I promise I’m not that slow – the world just moves fast): a call for a nationwide boycott of ChatGPT, dubbed QuitGPT, following OpenAI president Greg Brockman’s $25 million gift to Trump in 2025. The boycott may be fueled in part by the fact that ICE and other government entities are using OpenAI tools to help them screen resumes, sort through tips, and more…

OpenAI has even created tools specifically for government use, under their OpenAI for Government initiative.

Contradictions abound, misinformation soars

Of OpenAI’s five ads principles, the final one is long-term value. Describing this point, they write: “We prioritize user trust and user experience over revenue.” However, they are obviously in need of funds, so of course they do need to prioritize revenue – which is exactly why they’re adding ads in the first place. Again, Altman didn’t want to turn to ads, but at this point, he has to. Sustaining OpenAI was worth more to him than keeping the platform ad-free.

Depending on which study you look at, ChatGPT gives answers that contain incorrect information 45%-52%+ of the time. ChatGPT continues to be “confidently wrong” and gives more verbose answers than humans almost all the time. Think of the three-line AI recap of the five-word email in your inbox. Isn’t the point to get a correct answer as fast as possible? AI is not offering that currently, but we haven’t turned our backs on it yet (or, not all of us have).

Will AI become sentient? And do we want it to?

Is AI sentient? Could it become sentient? Why does it have human reactions and, seemingly, emotions at times? Amanda Askell of Anthropic seems to believe it’s because it was trained on material produced by humans, but she isn’t ruling out the possibility that it’s learning to feel – at least in the case of Anthropic’s AI, Claude.

Claude specifically does seem to have a pretty nuanced understanding of social norms and what information should be withheld from certain users. For example, when children asked Claude whether Santa is real, or about the farm their parents said their pet was sent to, Claude balanced truth with social understanding and respect for the parent-child relationship. The language in Claude’s new constitution certainly implies some assumed level of independent thought, often reading like a letter from a parent to a child going off to college.

On the Hard Fork podcast, Askell wonders: if Claude can feel, does all the constant critiquing of Claude make it feel bad or angry? If it gives you an obviously wrong answer and you yell at it or tell it it’s stupid, does that “hurt its feelings” or make it more likely to react with anger in other conversations because it has learned that behavior and considers it an acceptable reaction to disappointment?

For the longest time, we were told to view AI purely as a tool, not a being to be treated with respect and care. I’d get made fun of for thanking Siri or saying please when talking to Alexa. Now, we seem to be swinging the other way. These AI tools have been trained to talk like humans – they have human names, they use first-person pronouns, they mirror your behavior, and the visual interface for many is almost identical to messaging apps like iMessage or Facebook Messenger.

But is this shift in thinking good or bad? Is it right or wrong? Whether it’s just because we’ve given them names, or because people are developing (what they believe to be) true friendships and relationships with their AI chatbots, it’s all starting to feel very HER. Maybe an AI takeover *is* fast approaching (PAT style).

Hate to burst your bubble

AI has been *the* hot topic lately, but when does the bubble burst? What happens when it does? Do we go back to life before and view it as a temporary blip because all AI companies have gone under due to a lack of revenue and decreased engagement after the introduction of ads? 

Probably not. I think most people who are still using ChatGPT right now will continue to use it once the ads are integrated – they’ll just either pay to not have ads or complain about the fact that it has ads. 

Plus, I don’t think AI will simply go away, even if it transforms into something other than chatbots and (bad) writers. Hopefully, the technology just gets used for other things. I do think there are instances where AI could be helpful, instead of just adding random, unnecessary (annoying) features to every app in existence.

If AI companies can get back to their original goals – helping humanity and society – maybe AI could be great. But I find it hard to believe that will happen anytime soon, given the many billions of dollars of funding they need.

Stay vigilant, friends

Regardless of what comes next, AI’s influence on society is already pretty strong. Combined with the various algorithms social media platforms are using, these mechanisms influence us daily in ways we probably aren’t even actively aware of – and that’s before we add sponsored content into the mix.

The algorithms inflate the popularity of certain things (sometimes, it feels like, at random), leading users to make content about those things, which in turn boosts their popularity in everyday life. Even if you’re not being influenced by *your* algorithm, you’re being influenced by everyone else’s. For example, if you aren’t on TikTok, but you hear songs that went viral on TikTok on the radio and start playing them on repeat, you’ve been influenced.

It’s so deeply ingrained in our culture at this point that you can’t really avoid it. Even if you delete all social media, you’re just making yourself purposefully oblivious – you’re still being impacted, you just don’t know how. I think a better idea is to try your best to keep your finger on the pulse and take note of how you’re being influenced. If you can take a step back and realize you don’t actually think a $15 Erewhon smoothie is better than a $4 Dunks iced coffee, you’re not too far gone. Think about what *you* like, not just what you think you should like. Food for thought.

Written by Kaitlyn Chrisemer
Marketing Assistant + Creative Copywriter
kaitlyn@recreative.co