Public Markets

Disinformation And Your Startup’s IPO: Why You Should Care


By Dan Brahmy

Between COVID-19 vaccines, the election, and the drastic uptick in screen time that came with the pandemic, it would be easy to classify 2020 as the year of disinformation. But with much of the discussion centered on public health and political turmoil, the role disinformation can play in damaging a brand’s reputation is easy to overlook.


If the online chatter around market manipulation over the past few months is any indicator of what is to come, brands planning to go public will need to be hypervigilant about internet conversations and their influence on the company. While we’ve already seen the effect disinformation can have on established companies, we’ve yet to turn our eyes to the damage it can cause around the time of an IPO.

How disinformation is destroying your brand

With viral fails and calls to cancel, companies are often caught off guard when it comes to addressing online discourse targeting their brand. These very real conversations can snowball so quickly, it’s no wonder brand leaders might overlook bots’ role in these discussions.

For example, earlier this year, when Sephora found itself in hot water over controversial statements made by one of its influencer partners, the brand faced calls to #BoycottSephora. And regardless of whether the criticism was warranted, analysis found that new social-media profiles were created right around the time the hashtag started gaining traction and immediately began engaging with it. In more direct attacks, we’ve seen bots and trolls target Starbucks and Wayfair, promoting fake products and offers to their consumers online.


Brands are already in the fight against disinformation, but as online attacks become more coordinated, it won’t be long before we start seeing more direct effects on companies’ stock prices.

In January, the country watched as GameStop’s stock price soared in response to the machinations of a subreddit. With the phrase “reddit stocks” working its way into Wall Street vernacular, retail investors and hedge funds alike are resetting their assumptions as people invest in a post-GameStop world. And though we’ve yet to definitively see a disinformation campaign target an IPO specifically, in what has already been a record-setting year for startup funding, we can only expect online communities to turn their eyes toward new stocks as these companies begin to go public.

What’s next

As we prepare for upcoming IPOs in a post-GameStop world, companies need to integrate disinformation detection and social listening into their strategies.

At the end of the day, it isn’t too difficult to identify a fake account online. A missing profile picture, a lack of followers, and content that engages with only one or two controversial topics are all strong indicators.
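As a minimal sketch, those three indicators could be combined into a simple red-flag count. The field names and thresholds below are illustrative assumptions, not any particular platform’s API or Cyabra’s actual method:

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    """Illustrative profile fields only -- real platform APIs differ."""
    has_avatar: bool
    follower_count: int
    topics_engaged: set = field(default_factory=set)

def fake_account_score(p: Profile) -> int:
    """Count how many of the red flags described above an account exhibits."""
    flags = 0
    if not p.has_avatar:             # missing profile picture
        flags += 1
    if p.follower_count < 10:        # few or no followers (threshold is a guess)
        flags += 1
    if len(p.topics_engaged) <= 2:   # engages with only one or two topics
        flags += 1
    return flags                     # 0 = looks organic, 3 = strong fake-account candidate
```

A score like this is only a first-pass filter; an account with all three flags warrants closer inspection rather than automatic removal.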

Brands can go through comments one by one to identify bots, but to establish links between these accounts and discern information about their creation and intent, companies looking to get ahead of these attacks can use AI and machine-learning software to understand the online narratives at play:

  • Is it a coordinated effort of online users looking to shock the market?
  • Are supporters of a competitor attempting to sow discord?
  • Or perhaps fake profiles are building upon existing discontent in your user base?

Once these disinformation actors are found, clues as to why the information was spread can lie in the language they use, the adjoining topics they insert into the conversation, and even the accounts they follow.

Understanding how and why the negative sentiment or disinformation is spreading gives companies the opportunity to get ahead of the discussion before it has a chance to make a significant impact on their IPO or the company overall.

From there, companies have a range of options for leveraging these insights. If conversations have yet to go viral, reporting fake accounts for removal can be an option for brands. Companies can even compile the fake posts and address them directly in their messaging response. Similarly, brands might want to release preemptive campaigns focused on promoting accurate information or debunking false claims to get ahead of potentially damaging narratives. But regardless of how companies choose to respond, they need to be prepared to detect, track and trace disinformation in real time as the market adjusts to the impact of online conversations.

Arming your teams with data on where these conversations are happening online, and how they’re spreading, is the critical first step toward taking action. With that information in hand, leadership teams can make better decisions about how to respond publicly, whether to engage in the conversation online, and how to brief public-facing executives on risks to their reputation.

Dan Brahmy is the co-founder and CEO of Cyabra, a SaaS platform that uses AI to measure impact and authenticity within online conversations. 

Illustration: Dom Guzman

