It’s Not Just Businesses Benefitting from AI—Threat Actors Are Gaining an Advantage Too

By Mark Anderson, Founding Principal

Artificial Intelligence (AI) has blown all other recent tech trends out of the water over the past few years. In less than half a decade, it’s gone from sci-fi speculation to an accessible, everyday tool that’s revolutionized many industries, enabling businesses to streamline, simplify, and advance their operations.

While the business benefits of AI are undeniable, it’s important for small and medium-sized businesses (SMBs) in St. Louis and everywhere to recognize that this same technology is increasingly being used by threat actors to carry out more sophisticated and damaging cyberattacks. In this blog, we’ll explore how AI is helping cybercriminals work faster and more effectively, and what steps your business can take to protect itself.

Barriers to Entry, Begone!

AI has significantly lowered the barrier to entry in many fields. Using AI in a business context, for example, makes it easier and more affordable for SMBs to adopt advanced technologies and processes that drive them forward. However, this ease of access is a double-edged sword.

For threat actors, AI is opening doors that were previously closed, allowing even those with minimal technical expertise to create and deploy malware. Traditionally, developing harmful software required a certain level of proficiency, but now, with the assistance of AI, even beginners can generate highly effective cyberattacks.

In an effort to avoid detection, wannabe hackers are turning to jailbreaking services provided by more seasoned cybercriminals. These services help them gain anonymized access to large language models (LLMs) (your ChatGPTs, Geminis, etc.) and bypass the AI tools’ built-in guardrails. By doing so, they can generate various types of attacks without drawing the attention of law enforcement.

An ISC2 survey conducted earlier this year found that 75% of respondents were moderately to extremely concerned about AI being used for cyberattacks. Clearly, there’s a growing fear among businesses and security experts that, in the wrong hands, this technology could significantly expand the threat landscape.

The Reconnaissance Resurgence

One of AI’s most lauded capabilities is its ability to analyze vast datasets quickly and accurately. For businesses, this means more effective data-driven decision-making. But for threat actors, it means they can conduct reconnaissance on potential targets more efficiently than ever before.

The process looks something like this:

  1. AI tools sift through social media profiles, public records, and other online resources to gather information about individuals and organizations. For example, cybercriminals can find details about a target’s industry, job title, colleagues, and even personal relationships.
  2. This data is then used to craft highly personalized spear-phishing attacks, which are almost indistinguishable from legitimate communications.
  3. Attackers target senior-level executives and other high-value individuals within organizations.

Their reward? A far higher success rate than generic phishing attempts.

Fast-Tracking Phishing Attempts

Phishing has long been a favored tactic among cybercriminals, but AI has taken this method to a new level. Not only are phishing attacks becoming more personalized, but they’re also being deployed faster, on a wider scale, and in a more realistic manner than ever before.

AI-driven tools allow cybercriminals to automate the creation and distribution of phishing emails, making it possible to target a vast number of potential victims simultaneously. Northdoor, a cybersecurity firm, even predicts that almost every phishing attack will incorporate AI by the end of the year.

Gone are the days when even basic phishing emails were easy to spot due to poor spelling and grammar or suspicious domain names. Today’s phishing attempts can be polished, professional, and convincing, thanks to AI.

A 2024 LastPass survey revealed that 95% of respondents believe LLM-generated content makes phishing attempts harder to detect. This is precisely why threat actors are turning to AI-powered tools: they enhance the effectiveness of social engineering tactics, making them harder to spot and more likely to succeed.

Deepfake Devastation

Deepfakes—AI-generated video or audio that mimics a real person—are another area where threat actors could be on track to gain the upper hand. The availability of information online, combined with rapidly advancing AI technology, allows cybercriminals to create voice or video clones with minimal input.

Admittedly, this technology is still less mature than most language models. As such, there are some telltale giveaways, including:

  • An urgent message delivered in a flat tone of voice.

  • A lack of blinking and unnatural body movements.

  • Mismatched skin tone between the ears and face.

That said, deepfakes can still be convincing enough to cause serious damage to businesses, especially if you’re not all that familiar with the ‘person’ you’re speaking to. For example, a deepfake video of a CEO could be used to instruct newer employees to transfer funds to a fraudulent account, or a fake audio clip could be used to manipulate a business partner.

The potential devastation of deepfakes can’t be overstated. For SMBs in St. Louis and across the world, the risk isn’t just financial but also reputational. A single successful deepfake attack—whether it leads to a data breach or comes in the form of a misleading video—could undermine customer trust and damage your brand’s image irreparably.

Exploiting Internal AI Use

As more businesses adopt AI tools, the risk of inadvertently exposing sensitive information increases. A significant percentage of businesses now use at least one generative AI tool (most commonly ChatGPT). While these tools offer numerous benefits, they also present new security challenges.

Threat actors can exploit vulnerabilities in these AI models, injecting malicious code or manipulating the data they produce. And the credentials you use to log in to these platforms are just as easy for hackers to compromise as any others.

For instance, if an employee uses an AI tool to generate reports, there’s a risk that their weak password could allow an unauthorized party access to the tool itself, leading to the exposure of confidential information.

If your team isn’t properly trained on how to use these tools securely, they may inadvertently share sensitive data with third parties or fail to recognize potential threats. So how can businesses mitigate these dangers?

Tackling Today’s Threat Actors

Given the increasing sophistication of AI-powered cyberattacks, it’s essential for SMBs in St. Louis and elsewhere to stay ahead of the curve. To remain safe (and avoid complications with your cyber insurance), here are five key steps your business can take to protect itself:
  1. Stay Informed: Ensure that your team’s cyber-awareness training covers the latest forms of phishing and social engineering. IT support for St. Louis businesses can help tailor these training programs to address the specific threats your organization faces.
  2. Invest in AI-Powered Security: Utilize AI-powered threat detection tools to keep your business protected around the clock. These tools can help identify potential attacks early, enabling your IT team to address them before they cause damage.
  3. Be Mindful of What You Post Online: Encourage everyone in your company to be cautious about sharing personal and professional information on social media. The less information available, the harder it is for threat actors to conduct reconnaissance.
  4. Verify Identities: Implement protocols for verifying that the people you interact with are who they say they are. This is particularly important when dealing with high-value transactions or sensitive information.
  5. Establish Clear AI Usage Policies: If you’re using AI in a business context, ensure that you have clear policies in place for doing so securely. Regularly review and update these as new threats emerge.

The Ongoing Battle Against AI-Powered Threats

The battle against AI-driven threats looks likely to be a marathon rather than a sprint. If you want to emerge victorious, vigilance and proactive measures will be key. By continuing to educate yourself, prioritizing cybersecurity, and embracing the right tools and strategies, your business can shield itself from the growing dangers of AI-enhanced cybercrime.

Don’t let threat actors gain the upper hand—take steps today to ensure you can fully reap the benefits of this transformative technology without falling victim to its darker side.

Anderson Technologies: Real People Creating Business-Changing IT Solutions

For over 25 years, Anderson Technologies has leveraged our expertise for the benefit of our clients, supplying them with suitable, secure IT and strategic guidance for their technological future.

We’re a dynamic team of IT professionals with over 200 years of combined experience and specialist certifications to back up our knowledge. As a trusted advisor, we don’t just focus on today. We strive to take your technology light-years ahead of your competition and scale with your business’s success. 

Ready to secure your business? Contact us today to get started.