
Top AI risks and fixes: Going beyond the Hype Cycle FUD

Three imminent risks of AI – and what the industry should do about them

The last half century has seen its share of disruptive and world-changing technology innovations — starting with the arrival of the PC, followed by the proliferation of the internet, then the cloud migration era and the rise of SaaS applications. There’s no denying that the next major hype cycle is happening now with AI — and especially with generative AI.

Gartner reports that generative AI now stands at the “peak of inflated expectations” for emerging technologies, projected to reach transformational benefit within two to five years. Most can agree that its impact is already being felt profoundly today. From customer service to copywriting, training, and even medical diagnostics — there’s vast potential for disruption.

So, how prepared are we to manage AI's risks? When we compare the growth rate of AI to previous hype cycles, its trajectory is unprecedented. The cloud and PC eras saw slow, gradual adoption. Internet and SaaS technology grew much faster and reached consumers rapidly, but still nothing compared to AI, which saw widespread adoption seemingly overnight.

When technology accelerates this fast, it creates the potential for unmanaged risk. It’s therefore important that the entire industry, and especially those who develop, use, and regulate AI, understand and prioritize AI’s risk implications before they spiral.

Here are the three most imminent AI risks the industry needs to think about:

Malicious use of AI

One danger we already see today is AI falling into the wrong hands. The broad accessibility of generative AI tools like ChatGPT is a double-edged sword: while they promise immense productivity benefits for employees, they also make bad actors more productive in their attacks.

For example, threat actors can now write socially engineered email attacks more quickly and convincingly than ever before. We’ve identified thousands of AI-generated email attacks over the last year, and unfortunately, legacy threat detection tools that rely on identifying known indicators of compromise are struggling to keep pace.
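To make that detection gap concrete, here is a minimal sketch in Python (with hypothetical senders, URLs, and function names, not any real product’s logic) of indicator-based filtering: it can only flag what has already been observed, so a freshly generated lure from a never-before-seen domain passes straight through.

KNOWN_BAD_SENDERS = {"payroll-update@known-bad.example"}
KNOWN_BAD_URLS = {"http://known-bad.example/reset"}

def flag_by_known_iocs(sender: str, urls: list[str]) -> bool:
    # Flags a message only if it matches a previously observed indicator of compromise.
    return sender in KNOWN_BAD_SENDERS or any(u in KNOWN_BAD_URLS for u in urls)

# A novel, AI-generated lure carries no known indicators, so this check lets it through.
print(flag_by_known_iocs("ceo@newly-registered.example", ["http://fresh-lure.example/pay"]))  # False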

Deepfakes are the next evolution of social engineering threats. While they’re not yet a common attack tactic, deepfake incidents are starting to tick up. We’re right around the corner from seeing them become more widely used in schemes to steal money or sensitive information from employees and consumers alike.

There’s also growing concern around AI model poisoning, where malicious actors intentionally tamper with AI training data to manipulate a model’s outputs. By hijacking training datasets and injecting them with misleading data, they can deceive systems into making potentially harmful decisions, without the legitimate end user being any the wiser.
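As a rough illustration, here is a toy sketch (synthetic one-dimensional data and hypothetical labels, not a production model) of how a handful of mislabeled points injected into a training set can shift a simple nearest-centroid classifier enough to flip its decision on a suspicious input.

def nearest_centroid_predict(training, x):
    # Predict the label whose training points have the mean closest to x.
    centroids = {}
    for label in {lbl for _, lbl in training}:
        points = [v for v, lbl in training if lbl == label]
        centroids[label] = sum(points) / len(points)
    return min(centroids, key=lambda lbl: abs(centroids[lbl] - x))

clean = [(1.0, "benign"), (2.0, "benign"), (3.0, "benign"),
         (8.0, "malicious"), (9.0, "malicious"), (10.0, "malicious")]

# An attacker with access to the training pipeline appends malicious-looking values
# mislabeled as "benign", dragging the benign centroid toward them.
poisoned = clean + [(13.0, "benign"), (14.0, "benign"), (15.0, "benign")]

suspicious_input = 7.0
print(nearest_centroid_predict(clean, suspicious_input))     # "malicious"
print(nearest_centroid_predict(poisoned, suspicious_input))  # "benign" -- the poisoned model waves it through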

Lack of transparency and data privacy

Most AI models today operate as a “black box,” offering very little visibility into how decisions are made. This can lead to biased or unsafe decisions. In industries such as banking, insurance, and medicine, where decisions can have significant, life-changing impacts on users, these risks are exacerbated and could cause direct or indirect harm to individuals if AI is misused.

Additionally, AI systems often collect personal data to customize the user experience or help train their models. Today, users have little control over their privacy when using these tools, and there are no specific federal laws protecting citizens from AI-related data privacy violations.

Job losses

Recent surveys have shown that AI could reduce the number of workers at thousands of companies over the next several years, with some predicting as many as 300 million jobs lost or diminished globally by the rise of generative AI. This is especially true for jobs built on easily repeatable tasks, where there’s less need for humans.

On the flip side, there are also opportunities for AI to create more jobs. That said, these will largely be technical jobs, and companies need to ensure their workers are equipped with the skills to work effectively alongside AI.

The tech industry must start preparing for these AI challenges today. We must also acknowledge that everyone, from consumers to businesses and even the government, will have a part to play.

Consumers, for example, will need to become more vigilant. As AI-generated social engineering attacks and deepfakes become more pervasive, they must become increasingly discerning of digital content and more proactive about verifying its authenticity.

Businesses have an even more important role in managing AI risk. Especially at large companies in vulnerable industries, this includes protecting employees through security awareness training and implementing advanced tools that can detect modern AI threats. Additionally, any company that leverages AI in its products should prioritize transparency as much as possible, with assurances around how the AI operates and how it manages user data. Any good product will also let a human make the final decision when it comes to executing – and potentially undoing – any actions taken by AI.
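One way to picture that safeguard is a small sketch (hypothetical action names and approval flow, not a specific vendor’s design) of the human-in-the-loop pattern: AI-proposed actions run only after an explicit human decision, and every executed action records how to reverse it.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ProposedAction:
    description: str             # what the AI wants to do, in plain language
    execute: Callable[[], None]  # the action itself
    undo: Callable[[], None]     # how to reverse it if the human changes their mind

@dataclass
class ApprovalQueue:
    executed: list[ProposedAction] = field(default_factory=list)

    def review(self, action: ProposedAction, approved_by_human: bool) -> None:
        if approved_by_human:
            action.execute()
            self.executed.append(action)  # keep a trail so the action can be undone
        # If the reviewer declines, nothing runs: the AI never acts unilaterally.

    def undo_last(self) -> None:
        if self.executed:
            self.executed.pop().undo()

# Example: an AI suggests quarantining a mailbox; a human approves, then reverses it.
queue = ApprovalQueue()
queue.review(
    ProposedAction(
        description="Quarantine the mailbox for user finance-01",
        execute=lambda: print("mailbox quarantined"),
        undo=lambda: print("mailbox restored"),
    ),
    approved_by_human=True,
)
queue.undo_last()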

Finally, governments have a duty to address the public’s rising concern about AI risk. We’re already seeing movement in this area with the AI Executive Order in the United States and the AI Act in the European Union. However, governments must strike a balance between regulation and innovation. By focusing specifically on ethics, transparency, and privacy, regulators can help improve consumer trust in AI without over-regulating or banning its use outright.

There’s no denying that we’re in an AI hype cycle, but at the same time, AI promises many new benefits. Any new technology innovation comes with risk, especially when it accelerates as quickly as AI has. Everyone has a part to play in helping to manage AI’s ripple effects effectively, and we have to stay proactive right now if we want to reap its benefits safely.

Mike Britton, chief information security officer, Abnormal Security

Mike Britton

Mike Britton, chief information security officer at Abnormal Security, leads the company’s information security and privacy programs. Mike builds and maintains Abnormal Security’s customer trust program, performs vendor risk analysis, and protects the workforce with proactive monitoring of the multi-cloud infrastructure. Mike brings 25 years of information security, privacy, compliance, and IT experience from multiple Fortune 500 global companies.

LinkedIn: https://www.linkedin.com/in/mrbritton/

X: https://twitter.com/AbnormalSec
