As we usher in 2024, the digital horizon is brimming with new cybersecurity challenges. With that in mind, our threat researchers have been hard at work examining the threat landscape of the past year to predict what the biggest threats of the next year might be.
From the proliferation of AI scams to deepfakes becoming the norm to security problems arising from BYOAI (Bring Your Own AI), here are our top seven Norton cybersecurity predictions for 2024. (And since Norton is part of Gen™—a global company with a family of trusted consumer brands—don't miss the full list of cybersecurity predictions on their website.)
1. AI will shift to on-device processing, enhancing privacy but raising new security questions
We’re going to see a major shift in AI in 2024, particularly in the realm of Large Language Models (LLMs). We expect these advanced AI systems to move away from traditional cloud-based models and directly onto our devices.
This transition is driven by increasing demands for privacy and data security: Processing data locally minimizes the risk of data breaches associated with cloud storage and enhances consumer privacy. However, the integration of AI onto personal devices is likely to introduce entirely new security risks as malicious actors look to exploit these on-device AI models.
2. Big changes in generative AI will have both positive and negative consequences
2024 is set to be a game-changer with generative AI, especially when it comes to creating media like we've never seen before. One of the coolest things? The text-to-video feature. Imagine typing out a script and getting a video back–that's going to be a reality. This is big news for everyone from YouTubers to teachers, making it easier than ever to create awesome videos.
But here's the catch: with AI making videos that look real, the line between truth and fiction gets blurry. We could see a rise in fake videos that look legit, and that's a playground for scammers. Misinformation could spread faster, making it tough to know what's real out there.
And it's not just videos. The way AI is learning to mimic human voices is pretty mind-blowing too. This could change the game for customer service – imagine talking to an AI that sounds just like a real person. Plus, it's a big help for folks who have trouble seeing or reading. But again, there's a flip side. These AI voices could be used to trick people with fake messages or even phishing scams. As we dive into this new world of AI, we've got to balance innovation with staying sharp on security.
3. Personal AI tools in the workplace will lead to increased data breach risks
In 2024, "Bring Your Own AI" (BYOAI) is going to be the new buzzword in the office. Imagine everyone using their own AI tools for work–from scheduling meetings to drafting emails. Handy, right? But here's where it gets tricky: when we use our personal AI for work, it's hard to keep our private life and work life separate. This mix-up could accidentally let out company secrets – not something we want floating around.
4. Social media is going to be flooded with AI scams
Get ready, because in 2024, social media's about to get even wilder with AI joining the party. Cybercriminals are eyeing this as a golden opportunity to pull off some pretty slick scams and spread false information. They'll be using AI to whip up everything from fake news to eerily realistic deepfakes to ads that tell you exactly what you want to hear.
Here's the deal: this AI-crafted content is so good at blending in, it's like a chameleon on your social media feed. It knows what you like, who you're friends with, and how to push just the right buttons. This means scams and misinformation can spread like wildfire, reaching more people faster.
5. Look out for emails from your “boss” (Spoiler alert: It’s a scam.)
In 2024, we're bracing for a major shift in the world of email scams. Business Email Compromise (BEC) attacks are leveling up to something even more cunning–Business Communication Compromise (BCC). It's like the classic email scam, but with a high-tech mask. Cyber crooks are getting their hands on AI and deepfake tech to mimic your boss or even your colleagues. It's going to be tougher to tell the fakes from the real deals.
6. Popular AI tools will become a hotspot for malware and hacking attempts
Imagine downloading what you think is the latest AI app, only to unknowingly invite a digital thief into your device. From pesky malware to serious threats like stealing your personal info or even hijacking your entire system – the risks are real. The charm of AI can sometimes make us a bit too trusting, letting our guard down when we see a fancy new tool.
The brains behind these AI tools–those complex Large Language Models–are also on the radar of cybercriminals. They're trying to sneak into the backend systems to grab sensitive stuff, like how the AI is built or the data it's been fed. Finally, it’s possible that cybercriminals will create their own software that looks and acts like an LLM, but which has a backdoor that allows them to get their hands on everything from your personal chats to your top-secret ideas.
7. Digital blackmail will become more sophisticated and targeted
Digital blackmail is moving past the days of basic ransomware. Now, it's all about getting up close and personal. Think of ransomware attacks that don't just lock up your data but dive into your identity, playing dirty with information that can harm both employees and customers. It's like a cybercriminal's version of a custom-tailored suit – it fits all your worst fears.
As digital citizens, we all need to be on high alert. These modern ransomware tactics won't just be about asking for cash to unlock files. Instead, criminals will be looking to steal your personal information, and potentially your workplace information, to ruin your reputation. It's a whole new level of harm, affecting everything from your finances to how the world sees you and everyone affiliated with you.
Email scams are getting a twist too. From blackmail emails that threaten to spill your secrets to new forms of digital threats, these are not your average spam. Cybercriminals are getting more imaginative, using things like fake images and videos to make their threats feel more real than ever.
And, if you want a little extra help, check out Norton 360 Premium, our latest solution for helping protect you from today's cyber threats.
The year 2024 is shaping up to be a landmark year for cybersecurity. With AI technologies becoming an ever more integral part of our lives, they're bringing a host of new challenges and opportunities our way. This year, we're seeing major shifts like AI moving onto our very own devices, along with a rise in more cunning forms of digital blackmail. It's clear – the cybersecurity landscape is transforming right before our eyes.
Navigating these changes will be crucial. Staying informed, keeping a vigilant eye, and being prepared are more important than ever in this era of rapid technological advancement. At Norton, we're here to guide you through this journey. We're dedicated to providing the latest insights and robust solutions to ensure you are well-equipped to face the dynamic challenges of 2024. Trust us to be your partner in securing your digital world in these transformative times.
Michal joined the company as a malware analyst and is now our threat intelligence director.
Editorial note: Our articles provide educational information for you. Our offerings may not cover or protect against every type of crime, fraud, or threat we write about. Our goal is to increase awareness about Cyber Safety. Please review complete Terms during enrollment or setup. Remember that no one can prevent all identity theft or cybercrime, and that LifeLock does not monitor all transactions at all businesses. The Norton and LifeLock brands are part of Gen Digital Inc.