Why Do We Need Ethical Frameworks and Regulation for AI?
In this post, Forcepoint’s Nick Savvides examines some systemic problems that afflict AI/ML systems, with the goal of explaining why ethical frameworks and regulatory governance are important to help AI function efficiently and equitably. If organizations look primarily at the impact on systems, as is usually the case with cybersecurity teams, it can be tricky to immediately distinguish deliberate attacks from the unintentional effects of how AI systems are designed or where their training data is sourced.
“Ethical frameworks and regulation are necessary for AI and not just a distraction for organizations as they pursue their bottom line. We cannot avoid AI, as it’s the only way we can scale our operations in the asymmetrical cyber battlefield. Ethical frameworks and regulatory governance will become critically important to help AI function efficiently and equitably. Every new piece of software or service will have an AI or ML element to it. Establishing best practices for ethics in AI is a challenge because of how quickly the technology is developing, but several public- and private-sector organizations have taken it upon themselves to deploy frameworks and information hubs for ethical questions. All of this activity is likely to spark increasing amounts of regulation in the major economies and trading blocs, which for a while could lead to an increasingly piecemeal regulatory landscape. It’s safe to predict that the current “Wild West” era of AI and ML will fade quickly, leaving organizations with a sizable compliance burden when they want to take advantage of the technology.” – Nick Savvides, Director of Strategic Accounts, Asia Pacific, Forcepoint
The AI Cyber Threat: Beyond the Hype – Fine-Tuning LLMs – Impact for Good & Bad
In this article, Forcepoint AI guru Aaron Mulgrew demonstrates the process of fine-tuning LLMs, including the methods, tools, and techniques he applied in his research. Once an LLM is fine-tuned, a criminal gang can offer usage of it to other, less-equipped gangs for a small subscription fee—a model that’s already been documented. It can even be used to generate convincing phishing emails and to exfiltrate information while evading detection, including by modern DLP toolsets.
“It’s easy to look at the cybersecurity implications of bad actors fine-tuning LLMs for nefarious purposes through an extremely negative lens. And while it is true that AI will enable hackers to scale the work that they do, the same holds true for security professionals. The good news is that national governments aren’t sitting still. Building custom LLMs represents a viable path forward for other security-focused government agencies and business organizations. While only the largest, best-funded big tech companies have the resources to build an LLM from scratch, many organizations have the expertise and the resources to fine-tune open-source LLMs in the fight to mitigate the threats that bad actors—from tech-savvy teenagers to sophisticated nation-state operations—are in the process of building. It’s incumbent upon us to ensure that whatever is created for malicious purposes, an equal and opposite force is applied to create the equivalent toolsets for good.” – Aaron Mulgrew, Solutions Architect, Forcepoint