A few weeks ago, I ventured down Quinpool Road and visited a company office that doesn’t exist. The building is real. And the company—at least in some notional sense—is real. But the other parts—who works at the company, how they’re connected to Halifax, what it bodes for our new digital era—I’m far less certain of. I went there in search of a group of people who, despite my best efforts to prove otherwise, don’t appear to exist either. Or at the very least, a mounting pile of evidence suggests they are not who they claim to be. Which is troubling, because they claim to be journalists and writers working in Halifax. And that is only the beginning of the story.

It is a story whose tendrils stretch from a business park in Finland all the way to Canada’s West Coast. (And yes, to our small Atlantic peninsula, too.) It is a story, researchers and security experts tell me, with numerous—and grave—implications, not only for trust in a digital age but also the future of democracies everywhere. It is a story about the cutting edge of artificial intelligence and what lies in store for us. It is even, in small part, a story about pet-sitting services. (Just wait.)

Most of all, it is a warning: There are imposters among us. More than we probably know.

It started with an Instagram message. A Halifax bakery reached out to The Coast with something we hadn’t seen before: A sales rep had emailed them about “a brand new platform” and “trusted resource” dedicated to showing “the absolute best businesses Halifax has to offer.” The platform’s name? The Best Halifax. It claimed to be part of “a leading media group with a global presence” and to boast a writers’ roster of “Halifax die-hards” ranging from policy analysts to former investigative journalists. The bakery wanted to know: Was it affiliated with The Coast and our long-running Best of Halifax Awards? Our logo was splashed twice on its website, along with those of CBC, CTV News, MSN and The Globe and Mail—all outlets that had purportedly “featured” The Best Halifax before.

Which prompted two words in our mind:

The fuck?

The Best Halifax’s “About Us” page claims that the review site has been featured on The Coast, among other media outlets. Well, now it has. Credit: Illustration: The Coast (screenshots from The Best Halifax website)

It was, suffice to say, news to us. It wouldn’t be the first time The Coast found itself dealing with a purveyor of dubious claims. When you’re in the business of parsing fact from fiction, it comes with the territory. (Half the time, it comes from City Hall and Province House.)

But using our name? Our logo? On a so-called “trusted resource” that arrived in Halifax out of thin air, like a ghost ship emerging through the fog? A bridge too far, we say.

It was also too tantalizing a thread not to follow to its source. And what a rabbit hole it turned into.

Emails from The Best Halifax claim that for an annual fee, you can receive “highly targeted exposure” to your business on its review pages. Credit: Illustration: The Coast (screenshots from The Best Halifax website and emails shared with The Coast)

The Best Halifax debuted quietly in the spring of 2023. Cached metadata from the website’s homepage lists its publication date as Apr 27—though the first published stories didn’t follow until more than a year later. The earliest archived version of the site, from Apr 26, 2024, still showed no published articles, but that would change within weeks. Soon, a flurry of search engine-optimized listicles emerged, with headlines like “Top 12 Free Things to Do in Halifax for a No Cash, All Fun Day,” and “What Locals Consider the 20 Best Things to Do in Halifax.” It was, for all intents and purposes, standard web clickbait fare—sad as that state of affairs may be—and successful, at that: The Best Halifax’s pages soon ranked among the top Google results for searches like “best Halifax breakfast” and “best Halifax activities.” It was also almost assuredly bullshit.

Seven separate AI detectors, available online and trained to flag text generated by ChatGPT, Claude and other artificial intelligence tools, concluded it’s highly likely that The Best Halifax used AI to write its articles. Three of those seven detectors determined that 100% of the writing samples we pulled from TBH were probably AI-generated. The remaining four estimated that half to three-quarters of The Best Halifax’s writing samples were done by a computer. (The Best Halifax has not responded to The Coast’s emails asking if it uses AI to write its stories.)
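Commercial detectors keep their models proprietary, but the statistical intuition behind many of them is public: machine-generated prose tends to be more uniform than human writing. One widely reported signal is “burstiness,” the variation in sentence length. As a rough illustration only (the function and the two sample snippets below are invented for this sketch; they are not drawn from any real detector or from TBH’s pages):

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words.

    Human writing tends to vary sentence length more than
    machine-generated text does -- one of several statistical
    signals AI detectors are reported to use. This toy metric
    is an illustration, not a real detector.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Two invented snippets for illustration -- not real TBH copy.
flat = ("The cafe is great. The food is good. The staff is nice. "
        "The view is fine. The price is fair.")
varied = ("I went in for a coffee. Stayed three hours. The owner, "
          "who moved here from Lunenburg in 2011 and still talks "
          "about it, kept refilling my cup. Worth it.")

print(burstiness(flat) < burstiness(varied))  # → True
```

A metric this crude is easy to fool, which is part of why the detectors themselves only report likelihoods rather than proof.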

Conclusive or not—each AI detector cautions that it can only assess the likelihood that AI was used, and not prove it was used—those figures would be damning for any working journalist or writer. Earlier this year, a bestselling author sparked a social media firestorm after she forgot to delete an AI prompt in one of her self-published books. A Wyoming reporter resigned last August after he was caught using AI to fabricate quotes in his stories. Two decades earlier, the New York Times referred to one of its reporters who had made up scenes and plagiarized material as a “betrayal of trust and a low point in the 152-year history of the newspaper.”

The Best Halifax wouldn’t be the first media outlet to dabble in AI-generated news stories—and far from the most prominent. Sports Illustrated came under fire in 2023 after it was found to be publishing AI-generated stories written by fake authors. Last year, a Polish radio station raised eyebrows after replacing its hosts with AI presenters. At the same time, a US company that owns a network of local news sites replaced its on-the-ground reporters with AI authors, giving them fake human names.

All of which underlines a pressing question: Do The Best Halifax’s writers exist at all?

Seven separate AI detectors trained to flag text generated by ChatGPT, Claude and other artificial intelligence tools determined it’s highly likely that The Best Halifax uses AI to write its articles. Credit: Illustration: The Coast (screenshots from The Best Halifax website, Phrasly, GPTZero and Sapling)

The first smoking gun that The Best Halifax’s team members are not who they claim to be came from their profile photos. On the website’s “About Us” page, TBH claims that its staff are “the ones who brave the sidewalk ice in January for the perfect local brew, cheer during the downpours in summer, and genuinely think the fog adds a certain charm.” They are a group of five writers who, TBH would have you believe, “love this city more than donairs, the Mooseheads, and snowfall warnings combined.” (Believe it or not, the AI detectors gave both of those sentences a pass.) Look closely at the writers’ profile photos, however, and you’ll notice each has a wordmark in the bottom right corner. It’s subtle enough to almost blend right into the picture. But spend a minute in the world of AI portraits, and soon enough, it leaps off the screen.

The wordmark credits “Karras et al.,” shorthand for Tero Karras, Samuli Laine and Timo Aila. In the world of artificially generated human portraits, the three Finnish computer scientists are akin to gods. That isn’t hyperbole: The technology they created has birthed millions of faces out of binary code. Working out of a nondescript research park in Helsinki, the three Nvidia researchers shared a landmark paper in 2019, introducing a “new, highly varied and high-quality dataset of human faces” they’d created using “generative adversarial networks”: a deep-learning tool that trains computers by pitting them against each other. In short, they’d cracked the code to creating an infinite number of AI-generated portraits that were convincing enough to pass as real. They named it StyleGAN—shorthand for the Style Generative Adversarial Network—and shared the software with others to use.
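The adversarial idea is simpler than the results suggest: one network (the generator) fabricates samples, a second (the discriminator) tries to tell fabrications from real data, and each improves by exploiting the other’s failures. Here is a toy sketch of that loop, shrunk from face images down to single numbers so it runs in plain Python. Every parameter, the target distribution and the learning rate are invented for illustration; real StyleGAN training involves millions of weights and a very different architecture.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: a 1-D stand-in for photos of real faces.
REAL_MEAN, REAL_STD = 4.0, 0.5

# Generator g(z) = a*z + b and discriminator D(x) = sigmoid(w*x + c),
# each just two numbers here instead of millions of network weights.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr = 0.05

for step in range(5000):
    # --- Discriminator turn: score a real sample high, a fake low ---
    x_real = random.gauss(REAL_MEAN, REAL_STD)
    z = random.gauss(0, 1)
    x_fake = a * z + b

    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)

    # Gradient descent on -log D(real) - log(1 - D(fake))
    w -= lr * (-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * (-(1 - d_real) + d_fake)

    # --- Generator turn: fool the (updated) discriminator ---
    z = random.gauss(0, 1)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)

    # Gradient descent on -log D(fake), chained through g(z)
    grad_x = -(1 - d_fake) * w
    a -= lr * grad_x * z
    b -= lr * grad_x

# The generator's mean output should have drifted from 0 toward REAL_MEAN.
print(round(b, 1))
```

Scale the same tug-of-war up to convolutional networks trained on tens of thousands of photographs, and the generator’s “fakes” become faces convincing enough to pass as headshots.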

“How do we prepare for a future where video and voice cloning mean that not only can you find yourself in a porn video without your consent, but also have to contend with identity fraudsters who’ve copied your likeness?”

The Finns didn’t create the portraits seen on The Best Halifax’s website—rather, they designed the software that, in all likelihood, was used to generate them. (The Coast asked The Best Halifax by email to share where its author portraits came from, but did not receive a reply.) An Nvidia spokesperson told The Coast that the watermark “suggests they have been generated by the site http://thispersondoesnotexist.com.”

That would refer to the work of software developer Phil Wang. The San Francisco-based AI skeptic and former Uber engineer created the website This Person Does Not Exist as a way to highlight how blurred the lines between what’s real and fake have become. The premise is simple: Refresh the page, and a new AI-generated face appears. The site went viral almost immediately upon its 2019 release—a reaction, Wang told Inverse at the time, that “speaks to how much people are in the dark about AI” and its potential. “I’m basically at the point in my life where I’m going to concede that superintelligence will be real and I need to devote my remaining life to [it],” he added. (The Coast was not able to connect with Wang before this story’s publication.)

Set aside The Best Halifax’s sloppiness in neglecting to crop out the most obvious proof that its writers are fake, and deeper questions linger about where we’ve come with AI, as well as where we’re headed. If it’s that easy to create a whole new person, how soon until the digital spaces we occupy are filled with nothing but AI lookalikes? How do we prepare for a future—hell, a present—where video and voice cloning mean that not only can you find yourself in a porn video without your consent, but also have to contend with identity fraudsters who’ve copied your likeness down to the smallest freckle? What does all of this mean for democracy and discourse when more and more of the public squares we occupy today are online?

Last August, the federal Conservatives found themselves mired in a scandal after a mass of Twitter bots—generated in Russia, among other places—claimed to be Canadian voters “buzzing with energy” after attending a Pierre Poilievre rally in Northern Ontario. (The Conservatives have denied paying for the bots or having prior knowledge of them.) Even after a follow-up study found no evidence of the Tories’ involvement in the astroturfing affair, the implication remains: What’s to stop another, more sophisticated attempt at influencing our political climate?

In August 2024, Twitter users began to notice oddly similar social media posts after a Pierre Poilievre rally in Kirkland Lake, Ont. It was later determined that they were posted by bot accounts. Credit: Illustration: The Coast (screenshots from X)

Seventy-five years ago, English mathematician and World War II codebreaker Alan Turing posed a prescient question: “Can machines think?” Computers were a dazzling novelty at the time. The machines had helped the Allies defeat Nazi Germany, and the nascent technology loomed large—both in the public imagination and in the literal sense. When the world’s first general-purpose digital computer, ENIAC, came out in 1945, its myriad panels, vacuum tubes, relays and resistors took up enough space to fill a large bungalow—about 1,800 square feet—and cost about the same in today’s dollars—$8.7 million—as a splashy mansion on the Northwest Arm. How far, Turing wondered, could the technology take us? He proposed a blind test that has shaped how we think about artificial intelligence ever since: If you asked the same questions to both a human and a computer, could you tell the two apart by their answers? How reliably could you pick out the human?

It took until 2014 for a computer to beat the Turing test, as it came to be known. Russian-born Vladimir Veselov and Ukrainian-born Eugene Demchenko developed a computer program that simulated a 13-year-old boy. It fooled one-third of the judges in a contest at the Royal Society of London. Eleven years later, the result already feels antiquated. Last February, researchers at Stanford and the University of Michigan ran the latest versions of ChatGPT—OpenAI’s artificial intelligence chatbot—through an updated Turing test of “interactive sessions” consisting of “behavioural economics games” and survey questions. The scientists pitted the AI chatbots’ answers against those from more than 100,000 human respondents, and found the computer intelligence showed behavioural and personality traits—trust, fairness, risk-aversion and cooperation—that are “statistically indistinguishable from a random human.” (If there was any noticeable difference, the researchers noted, the AI chatbots were actually “more cooperative and altruistic” than their human counterparts.)

It was a remarkable breakthrough—and one that venture capitalists, politicians and tech opportunists alike are betting will be outdone before long. When prime minister Justin Trudeau visited Halifax in December, he and premier Tim Houston spoke about the “importance of artificial intelligence” as a “shared economic priority,” according to a note from the PMO. Optimists see AI as holding the potential to revolutionize how we fight—and maybe one day cure—cancer. It could help farmers adapt to climate change. Boost the resilience of our power grids. And there are fortunes to be made: Bloomberg estimates the market for generative AI—one piece of the puzzle, worth about $40 billion in 2022—will exceed $1.3 trillion by 2032.

Prime minister Justin Trudeau and Nova Scotia premier Tim Houston spoke about the “importance of artificial intelligence” as a “shared economic priority” when the two met in December 2024. Credit: Prime minister Justin Trudeau / X

Wrapped up in all the hype surrounding AI’s potential, there’s also a healthy amount of concern. And not just of the robots-are-coming-to-kill-us variety. (But definitely that, too.) AI models are unscrupulous learners, for one thing—and already, there’s a fight over the intellectual-property rights of the thousands of authors and artists whose work developers have used, without permission, to train these computer programs. One Goldman Sachs report estimates that AI could replace 300 million full-time jobs worldwide. The Bank of Canada’s principal researcher in financial markets has cautioned that AI’s expanded role in trading could “cause more frequent and more intense financial crises” than what we’ve seen so far. But nothing feels quite as messy—and pervasive—as how AI is changing the way we consume information and discern real from fake online.

The concern isn’t all that different from the question at the heart of Turing’s test: How do we know when we’re being duped?

It’s getting more difficult to tell, according to those in the field.

“What these tools have done,” says Dimitris Dimitriadis, a London-based researcher and journalist who has covered government corruption and state-sponsored disinformation, “is they’ve basically driven the cost of misinformation and disinformation to zero—or to virtually zero.”

“The concern about AI and its use in news websites isn’t all that different from the question at the heart of Turing’s test: How do we know when we’re being duped?”

Born in Athens, Greece, Dimitriadis cut his teeth as an investigative journalist reporting on Russian propaganda in the UK and oil companies’ use of influencers to boost public opinion of their brands. Now the director of research and development at NewsGuard, a “nonpartisan organization” staffed by journalists and data analysts, he’s part of a global team that seeks out misinformation and disinformation online by combing news websites and, among other things, looking for evidence of AI.

“What these [AI] tools have done is they’ve basically driven the cost of misinformation and disinformation to zero—or to virtually zero,” NewsGuard’s Dimitris Dimitriadis tells The Coast. Credit: Dimitris Dimitriadis / X

Whereas in the past, Dimitriadis tells The Coast by video chat, a would-be saboteur or grifter “would need a small army of writers” to churn out propaganda or misinformation on a large scale, they now have “very cheap, if not free” access to AI tools “that can do all that work for them.” That can take all kinds of forms. Earlier this year, Halifax musician Ian Janes logged onto Spotify and discovered someone had posted a fraudulent, AI-generated album using his name. In the lead-up to the 2016 US election, Russian troll accounts flooded Facebook and Twitter with links to fake American news websites in an attempt to sway voters’ minds. (The effect of those bots is disputed.)

The rise of AI has made for busy work for Dimitriadis and his colleagues. In just one month—April 2023—NewsGuard found 49 news websites that it claims, based on its findings, “appear to be entirely or mostly generated by artificial intelligence language models designed to mimic human communication.” By February 2025, that number had ballooned to 1,254 websites “operating with little to no human oversight.”

The oversight part is a big deal to Dimitriadis. Even if an AI-generated news website isn’t spreading outright propaganda today, the door is open to future abuses. “It may not necessarily be spreading it right now,” he says, “but if the operation is entirely unsupervised, and if the revenue model just prioritizes volume over quality,” then it’s “very likely” that it might end up there.

“And I think that’s something that should worry us.”

None of the five authors listed on The Best Halifax’s “About Us” page turn up in any search results beyond the website itself. Which would be odd enough in an era when most Nova Scotians have a Facebook, Instagram, Twitter, LinkedIn or TikTok page. (Sometimes all five.) But for an entire team of five writers—including one alleged former investigative journalist—who work online to have no other digital trace strains credulity past its breaking point. It is the “my girlfriend who lives in Canada” trope come to life: Sure, they exist; you’ve just never met them before … No, you can’t visit. (We asked The Best Halifax by email if its writers are real; we have not received a reply.)

There are other oddities—some of which make for good comedy. One author bio for The Best Halifax’s James Martin describes him as a “former investigative journalist” who “never quite lost his hunger for truth.” According to that same bio, his writing combines “meticulous research, unflinching questions, and a knack for unravelling complex narratives to reveal the heart of the issue at hand.” Scan through his articles, and you’ll find such hard-hitting pieces as “10 Shops with the Best Cakes in Halifax to Satisfy Your Sweet Tooth,” “5 Best Self-Storage Units in Halifax for Stress-Free Decluttering” and “5 Best Pet Sitters in Halifax.”

You will not, however long you scour The Best Halifax, find complex narratives. What you’ll find instead is what Carl Bergstrom would refer to as “slop”—a catch-all term that has come to define the lowest-common-denominator-seeking AI-generated text, photos and videos that proliferate online. (“Once upon a time, I threw a Canada Day flop that would make a moose weep maple tears,” one archived TBH post reads. “It was so dull, even the fireworks yawned!”)

University of Washington biology professor Carl Bergstrom co-teaches the course “Calling Bullshit: Data Reasoning in a Digital World.” Credit: Carl Bergstrom

Bergstrom has become a somewhat unlikely authority on AI in recent years. The Harvard- and Stanford-educated biologist began his academic career studying the spread of disease—only to notice, as social media blossomed, there were parallels happening with how lies and falsehoods spread online. Now a professor at the University of Washington, he co-teaches the course “Calling Bullshit: Data Reasoning in a Digital World.” He sees the state of AI “slop” today as the unintended consequence of decisions “we made a long time ago” about how the internet would work.

“Collectively, we headed down a road where content online is largely monetized by ad revenue, as opposed to being largely subscription-based or whatever,” Bergstrom says, speaking by video with The Coast. “And so that sets into motion the competition for attention we’re all dealing with.” When it becomes “very, very cheap” to create large amounts of content, he adds, you can flood the internet with AI-generated blogs, images and videos, “and just see what sticks.” Hence the rapid rise of Shrimp Jesus. Or the trend of YouTubers looking to profit from “Faceless Channels” where they’ve outsourced all the work to AI. (If it generates clicks, who cares if it’s any good?)

The problem is worsened, Bergstrom says, because search engines like Google and Bing—our main methods for wading through the endless supply of webpages for what we’re trying to find—are “doing a bad job” sorting for quality. A webpage stuffed with keywords and filled with AI text can outperform a human-produced one. In turn, “the stuff that’s worth looking at is getting crowded out by the stuff that isn’t.”

Which, you could argue, might explain why a former investigative journalist like James Martin would end up writing listicles—that is, if he ever existed. We’d ask him if we could. None of The Best Halifax’s authors have an email address in their byline or bio. “Unfortunately, there’s no way to contact our writers directly,” a TBH representative told The Coast by email. “All correspondence for our review pages… goes through us and is handled by our team.” Nor is there anyone working at The Best Halifax who will speak by phone. “All communications are handled by email,” the same representative wrote. (The representative has a name in their email signature—Andrew Morrison—but we haven’t found any evidence that they exist, either. And they’ve stopped answering our emails.)

There is a “Contact Us” page where anyone can submit their queries to The Best Halifax. But first, you have to check a box and confirm one thing: “I’m not a robot.” The rest of the contact page offers something even more useful than an email form with a cheeky sense of humour: It includes a business address.

“We’re ready, let’s talk,” says a caption on The Best Halifax’s “Contact Us” page. But only if you’re not a robot. Credit: Illustration: The Coast (screenshots from The Best Halifax website)

Rule number one for launching a bogus review site: If you’re going to claim to have an office, be ready for visitors. The Best Halifax’s “Contact Us” page claims the “trusted” review outlet can be found at 6156 Quinpool Road. (“We’re ready, let’s talk,” says a heading on the page.) The building is home to CoWork Halifax, which offers private offices and shared work spaces for more than 20 start-ups. The Best Halifax is not one of them. We visited on a February morning.

CoWork Halifax, for all that it offers, is still a fairly small and intimate space. It is the kind of place where, even if you weren’t the type to speak to other people, you would recognize them. And no one we spoke to at CoWork Halifax had heard of The Best Halifax. Nor had they seen any of the faces pictured in The Best Halifax’s author photos.

“People see our virtual address and use it as their own, and then I get their mail. And I’m like, ‘Why would you do that?’”

CoWork Halifax community manager Brittany Jackson—who makes a point of greeting every member at the front door—says she’s never spoken with anyone from The Best Halifax before. The outlet doesn’t have any office space within the coworking facility, she adds, nor are they a virtual member. Not that it’s anything new to Jackson to discover a business has been claiming it’s based out of the Quinpool address when it’s not.

“People see our virtual address and use it as their own,” Jackson tells The Coast, “and then I get their mail. And I’m like, ‘Why would you do that?’”

The Coast asked The Best Halifax by email to explain the discrepancy between its claims of having an office at 6156 Quinpool Road and Jackson’s assertions that it doesn’t. We have not received a reply.

The Best Halifax claims to have an office at 6156 Quinpool Road. The space is home to CoWork Halifax. Community manager Brittany Jackson says The Best Halifax does not lease any office space in the building. Credit: Martin Bauman / The Coast

One rule of thumb for finding who’s who behind a local business is to check Nova Scotia’s Registry of Joint Stock Companies. With few exceptions, most businesses and non-profits in Nova Scotia are required to register with the province. (The Coast has a registration number, as do the Halifax Examiner, CTV News, Global News, AllNovaScotia and Postmedia, the latter of which now owns the Chronicle-Herald.) Sometimes, if a business owner—or owners—want to keep their identity private, they’ll register a second business as the owner of their primary business. But The Best Halifax has no registration number at all, as far as we can determine. (The Best Halifax’s “Terms & Conditions” page states that its terms “shall be construed and governed in accordance with the laws of Ireland,” but a search of Ireland’s business registry turns up no results for The Best Halifax, either.)

Cached metadata on The Best Halifax’s site links it with two other review sites: The Best Toronto and Waterview Vancouver. All three websites share identical layouts and near-identical content and branding. (Waterview Vancouver’s author photos, while different from the ones on The Best Halifax’s website, also share the same “Karras et al.” wordmark. The Best Toronto, on the other hand, appears to employ at least some real people.) But neither site gets us closer to finding out who’s behind them all. The Best Toronto is not registered as an Ontario business. Waterview Vancouver does not appear in BC’s business registry.

The Best Halifax, The Best Toronto and Waterview Vancouver share cached metadata, as well as near-identical branding. Credit: Illustration: The Coast (screenshots from The Best Halifax, The Best Toronto and Waterview Vancouver)

In many circumstances, a WHOIS search—an online directory of web domain names and their history—will reveal who owns a web domain. But The Best Halifax, The Best Toronto and Waterview Vancouver use a proxy service to keep their owner’s (or owners’) identity private.
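In practice, “kept private” is visible in the WHOIS record itself: the registrant fields are replaced with a proxy service’s boilerplate. A minimal sketch of what checking for that looks like (the marker list and the sample record below are invented for illustration; they are not the actual WHOIS output for any of these three sites):

```python
# Phrases commonly seen in WHOIS records that use a privacy proxy.
# This list is illustrative, not exhaustive.
PROXY_MARKERS = (
    "redacted for privacy",
    "privacy service",
    "domains by proxy",
    "whoisguard",
    "contact privacy",
)

def is_privacy_proxied(whois_text: str) -> bool:
    """Return True if the record appears to hide its registrant."""
    text = whois_text.lower()
    return any(marker in text for marker in PROXY_MARKERS)

# A made-up record for illustration -- not a real WHOIS response.
sample = """\
Domain Name: example-review-site.com
Registrant Name: REDACTED FOR PRIVACY
Registrant Organization: Privacy service provided by a proxy registrar
"""

print(is_privacy_proxied(sample))  # → True
```

Privacy proxies have legitimate uses, like shielding individuals from spam and harassment; they just also happen to be convenient for anyone who would rather not be found.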

Which gets at a thorny matter: Who is held responsible for the content AI generates? Where does the onus for truth lie?

A few weeks ago, Donald Trump shared an AI rendering of Gaza. In the video, children walk out of rubble to see a glittering cityscape that looks like a cross between Las Vegas and Dubai. One child holds a giant golden Trump balloon. Elon Musk dips pita into a bowl of hummus. Children catch US dollar bills falling from the sky. Amid all of the bizarre, hallucinatory scenes, an AI-generated voice sings: “No more tunnels, no more fear. Trump Gaza is finally here.” Weeks earlier, standing beside Israeli prime minister Benjamin Netanyahu, Trump told a pool of reporters that the US “will take over the Gaza Strip” and that he foresees “long-term ownership” of the region. Never mind that some lawyers say a US occupation of Gaza would put American officials at risk of charges by the International Criminal Court.

The AI video isn’t Trump’s first foray into using fake photos or videos to support his positions. Last August, he admitted to sharing fake photos of Taylor Swift and her fans endorsing him for president. But he rebuffed suggestions that he could be held liable for the photos. Days earlier, Trump had posted on X and his Truth Social website, sharing AI-generated photos of Swift dressed as Uncle Sam and a crowd of airbrushed women wearing “Swifties for Trump” t-shirts. In a segment with Fox Business Network, he told host Grady Trimble that he didn’t “know anything about” the photos, “other than somebody else generated them.”

AI is “always very dangerous in that way,” Trump added.

On the left, a child holds a golden Trump balloon. On the right, a woman wears a “Swifties for Trump” t-shirt. Neither of these images is real. Credit: Photos from Truth Social and X

The University of Washington’s “Calling Bullshit” professor Carl Bergstrom would agree. He sees generative AI as a “huge risk to democracy,” for the way it blurs the line between what’s real and fake. In 2023, an AI-generated photo of a bombing at the Pentagon went viral and briefly sent the stock market tumbling. In the wake of Meta blocking Canadian news outlets from sharing links on Facebook, a cottage industry of scammers has bloomed with fake posts pretending to be from the CBC and other outlets with headlines like “The tragic end of Chrystia Freeland!” or “Jagmeet Singh suffers fatal accident on live television.” And at the same time, as conflicts swirl around the globe—including in Gaza, where the Israel-Hamas war has killed tens of thousands since October 2023—the spectre of AI-generated content has made it harder to believe the horrifying images we see.

That’s the dual threat of AI, Bergstrom tells The Coast: “It creates things that aren’t real, but it also creates doubt around things that are real.”

“[AI] creates things that aren’t real, but it also creates doubt around things that are real.”

In this hyper-skeptical, hyper-online world, anything could be faked—or be claimed as fake, even if it’s real. And that’s an issue courts are just beginning to grapple with. In September 2022, videos surfaced of Alberta’s former justice minister, Jonathan Denis, purportedly performing a racist caricature of an Indigenous person. More videos surfaced, including one of Denis allegedly mocking Indigenous people with Calgary councillor Dan MacLean. CBC News reported on the videos. MacLean apologized for “mistakes in the past”—and initially, Denis apologized too. But his lawyers later claimed the videos were deepfaked. The Alberta Court of King’s Bench agreed.

South of the Canada-US border, Tesla’s lawyers have tried the same “It Wasn’t Me” defence—albeit with less success. Back in 2016, CEO Elon Musk told a tech conference that Tesla’s Model S and Model X “can drive autonomously with greater safety than a person.” That statement came back to haunt Tesla in a lawsuit after a Silicon Valley engineer, Walter Huang, died in a car crash in 2018. Huang’s family alleged that Tesla’s Autopilot software was at fault when his SUV crashed into a highway median. Tesla’s lawyers lobbed the Hail Mary of all Hail Marys and countered that Musk, “like many public figures, is the subject of many ‘deepfake’ videos and audio recordings that purport to show him saying and doing things he never actually said or did.” The judge dismissed that defence and ordered Musk to testify. Tesla settled the lawsuit with Huang’s family last April.

Elon Musk at the launch event for the Tesla Model X in 2015. The Tesla CEO told a tech conference that the car “can drive autonomously with greater safety than a person.” Credit: Steve Jurvetson (CC BY 2.0)

Which brings us back to The Best Halifax: Is a local review website’s use of fake authors on the same level as a fake bombing or a political astroturfing campaign? Hardly. There are no traditional “victims” in a keyword-stuffed site with articles about where to groom your pets, apart from anyone who reads The Best Halifax’s writing. (And literary offences, however flagrant they may be, are not covered under the Criminal Code of Canada.) No one is being slandered. No one is being tricked into sharing their personal information. Even the businesses who pay TBH to get featured on the site are only getting what they already wanted: Traffic.

Bergstrom would argue that, in fact, all of us are victims of AI’s outsized influence on our online worlds. Especially when it’s used to make something appear to be what it’s not. He likens it to The Truman Show, where each of us lives in an online bubble “crafted for us alone”—and often with us none the wiser. What does truth become when we can no longer agree on a shared set of facts? How do we tell one person’s “trusted resource” apart from another’s scam?

“I feel like it’s tearing apart a sort of shared epistemic grounding in our society,” Bergstrom says.

One day, he worries, “every one of us will live in a reality that’s being created by AI—basically, to keep us engaged and buying things, or at least looking at ads.

“And that, I think, can be catastrophic to our sense of community and our shared understanding of the world.”

Martin Bauman is an award-winning journalist and interviewer, whose work has appeared in the Globe and Mail, Calgary Herald, Capital Daily, and Waterloo Region Record, among other places. In 2020, he was...
