Practical AI Security: Attacks, Defenses, and Applications

Live On-Site / Live Virtual / On-demand

Learn Offensive and Defensive AI Security Strategies

This intensive course guides you from the foundations of artificial intelligence, machine learning, and neural networks into the world of large language models and transformers. You will explore how AI and LLMs can be weaponized and defended. Through immersive labs, you will train models, build LLM applications, and simulate real red team attacks.  Along the way, you will develop a deep understanding of sampling, prompting, embeddings, and attention. 

 

By the end of the course you will have practical code, projects, and security tools that are directly applicable to your professional work.

What You Will Learn

This course gives you a practical, hands-on path into AI security with a strong focus on LLMpowered applications. You’ll learn how modern AI systems are built, how they fail, and how to secure them against real threats. Everything is structured around doing rather than theory, so you immediately apply what you learn.

You begin with a solid technical foundation. Through hands-on labs, you’ll build working LLM
applications using the Hugging Face Transformers ecosystem, implement RAG pipelines
with LangChain, LlamaIndex, and FAISS, and explore how tokenization, embeddings, and
context windows shape model behavior. You’ll also learn advanced prompt engineering
patterns and build your own MCP servers to automate security tasks and integrate AI into
real workflows.

The security phase takes you deep into offensive and defensive techniques. You’ll practice
prompt injection, multimodal exploitation, and workflow manipulation against agents and
AI-generated (“vibe-coded”) applications. You’ll map these risks to Google’s Secure AI
Framework and learn how to threat model, and harden RAG systems, agent logic, and
custom MCP servers with proper authentication and validation.

You finish by learning how to use AI to accelerate your own work. You’ll use tools like Fabric AI, OpenRouter, and Perplexity to automate threat intelligence, research, and analysis, giving you a repeatable process to move faster with better accuracy.

By the end of the course, you’ll be able to design, assess, and secure AI-powered systems with confidence.

You’ll also be prepared for the Certified AI Security Researcher (CAISR) exam, with one exam attempt included.

 

By attending this course, you will get:

  • Certificate of completion
  • Complete course materials (slides, lab guides)
  • Source code for all vulnerable AI applications used in class
  • Source code for exploit PoCs used in assessments
  • All Python scripts and tools developed during the course
  • Cloud instances for the duration of the course
  • Access to pre-configured lab environment with required tools
  • Slack access for collaboration and AI security discussions
  • MCP server templates and custom tools repository
  • An attempt at the Certified AI Security Researcher (CAISR) exam

Key Objectives

  • Understand the core concepts distinguishing AI, Machine Learning, and LLMs, including supervised vs unsupervised learning, neural networks, Generative AI, diffusion models, and the complete ML model training lifecycle from data preprocessing to deployment.
  • Master the fundamentals of Large Language Models, including Transformer architecture, tokenization mechanisms (BPE), context windows, embeddings, and the differences between foundational and fine-tuned models like GPT vs BERT architectures.
  • Become proficient in Prompt Engineering techniques including system vs user prompts, prompt templates, leaked system prompts analysis, and controlling model output via sampling parameters (Temperature, Top-k, Top-p) for security-focused workflows like threat modeling assistants.
  • Learn to use essential AI development tools including Hugging Face Transformers, LangChain (with memory and tool integration), LlamaIndex (multi-file processing), OpenWebUI for local LLM deployment, vector databases like FAISS for RAG implementations, and fine-tuning workflows.
  • Build and deploy production-ready AI applications, including custom RAG (Retrieval-Augmented Generation) systems with vector storage, conversational agents with short and long-term memory, AI-powered security tools with proper rule-based and advanced guardrails, and FastAPI-based scanners.
  • Master Model Context Protocol (MCP) servers for integrating AI with security tools to understand MCP vs traditional connectors, build custom MCP servers, and leverage them for reverse engineering, Mobile malware analysis, and automated penetration testing workflows. Configure MCP with Cursor and Claude for enhanced AI-assisted security research.
  • Develop Offensive AI capabilities, including building autonomous AI agents and workflows for vulnerability scanning, CVE finding, reconnaissance, IAM policy analysis, threat intelligence gathering, and exploit development assistance using frameworks like LangChain.
  • Execute advanced attacks against AI systems, including Prompt Injection variants (direct, indirect, multimodal attacks on CV screeners, meeting summarizers, image analyzers), jailbreaking techniques, data exfiltration through prompt manipulation, and exploiting MCP server vulnerabilities (Confused Deputy attacks, information disclosure, bruteforcing, arbitrary file read/write).
  • Implement Defensive AI strategies, including securing AI-powered applications against prompt injection, analyzing vulnerabilities in “vibe-coded” AI-generated applications, securing MCP servers with proper authentication and authorization, and applying pre-launch security checklists for AI-assisted apps.
  • Deploy and configure AI Gateways to secure production LLM applications and learn to migrate existing apps behind AI Gateways, implement multi-layered guardrails for input/output validation, configure rate limiting policies, and leverage analytics and comprehensive logging for monitoring, compliance, and cost optimization.
  • Master AI-powered Threat Modeling using STRIDE methodology and understand the engineering logic of systematic threat modeling, leverage LLMs to identify threats across Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege categories, and develop practical mitigations with AI assistance.
  • Apply AI to enhance Security Operations and Reverse Engineering workflows using Fabric AI for knowledge mining, log parsing, email header analysis, threat intelligence processing, video knowledge extraction, breaking language barriers in security research, and integrating AI into tools like Ghidra and JADX for automated malware analysis.
  • Understand and implement enterprise AI security frameworks, including comprehensive coverage of Google’s Secure AI Framework (SAIF) with all 14 security risks (data poisoning, unauthorized training data, model tampering, prompt injection, model evasion, sensitive data disclosure, etc.).
  • Debug, intercept, and secure MCP implementations using MCP Inspector for debugging, Burp Suite for traffic interception and modification, and apply the comprehensive MCP Server Security Cheatsheet for identifying and remediating common vulnerabilities in both custom and third-party MCP servers.
  • Secure AI supply chains by pinning dependencies, verifying model signatures, understanding format risks, and detecting tampering or backdoors.
  • Earn the Certified AI Security Researcher (CAISR) certification by demonstrating mastery across all course modules from foundational AI/ML concepts through advanced offensive and defensive AI security techniques in real-world scenarios.

Duration

2 Days

Ways to Learn

Who Should Attend?

This course is ideal for anyone interested in learning about the application of AI in cybersecurity.

laptop Requirements

  • Laptop with 16+ GB RAM (32 GB recommended) and 100 GB free space
  • Access to Linux cloud instances (provided)
  • API keys for LLM providers (provided for exercises)

Administrative access on your local system Setup instructions and Slack details sent prior to course start

Need To Justify To Your Manager?

Need a Template to Justify the Training Request to your Manager? Download the Template below

Syllabus

  • Understanding Transformer Architecture
  • “Attention Is All You Need” paper breakdown
  • How attention mechanisms work
  • Evolution from RNNs to Transformers
  • Installing dependencies and setting up environment
  • Exploring transformer architectures through hands-on exercises

 

  • Introduction to Artificial Intelligence concepts
  • AI vs ML vs LLM distinctions
  • Supervised vs Unsupervised Learning paradigms
  • Neural Network architecture and principles
  • Generative AI fundamentals
  • Diffusion Models explained
  • Building Spam Detector (supervised)
  • Building Customer Clustering model (unsupervised)
  • Implementing Generative AI examples
  • StableDiffusion for image generation

 

  • What is a Large Language Model?
  • Understanding GPT, BERT, and related architectures
  • Interactive LLM experiments
  • ChatGPT vs Grok comparison
  • From Words to Tokens: text splitting
  • Building a simple tokenizer
  • Embeddings: giving meaning to numbers
  • Visualizing vectors and embeddings
  • The Context Window: understanding memory limitations
  • Temperature and parameter tuning exercises

 

  • Introduction to the Transformers library
  • Building NLP pipelines for tasks
  • Creating a News Summarizer
  • Building a Q&A tool from scratch
  • Building an Unsafe Text Detector
  • Running LLMs locally and via OpenWebUI
  • Comparing local vs cloud deployment

 

  • Fundamentals of Retrieval Augmented Generation
  • Vector storage and semantic search
  • RAG demo with FAISS and SentenceTransformers
  • Secure API key management
  • LangChain introduction and conversational memory
  • Custom prompts and external tool connections
  • LlamaIndex for document processing
  • Fine-tuning LLMs for security use cases

 

  • Fundamentals and best practices
  • System vs User Prompts
  • Analyzing real-world leaked system prompts
  • Prompt security and attack surfaces
  • Prompt template design for consistency
  • Creating a Threat Model Assistant
  • Adding UI for prompt-based tools
  • Advanced prompt engineering techniques

 

  • Model Context Protocol (MCP) fundamentals
  • Architecture, setup, and configuration
  • Connectors vs MCP Servers
  • Reverse Engineering with MCP Servers
  • Android Malware Analysis examples
  • Automated Pentesting using MCP Servers
  • Creating your first MCP Server Security implications and threat modeling

 

  • Fundamentals of AI Agents
  • Agent architecture design
  • Building CVE Finder and IAM Policy Analyzer
  • Creating Recon and CVE Analyzer Agents
  • Building a FastAPI-based vulnerability scanner
  • Workflow automation strategies

 

  • SAIF fundamentals and pillars 1–6
  • Implementation checklist and risks overview
  • 14 major AI risks including:
    • Data Poisoning
    • Model Tampering
    • Prompt Injection
    • Model Exfiltration
    • Evasion and Output Disclosure
  • Applying SAIF to real-world AI systems

 

  • Introduction to Fabric AI
  • Installation and configuration
  • Knowledge Mining & Distillation
  • AI-powered translation and OSINT
  • Threat Intelligence automation
  • Video and log analysis using Fabric

 

  • Prompt Injection taxonomy and classification
  • Exploiting AI-powered CV screeners, meeting summarizers, and image analyzers
  • Securing vulnerable systems
  • Multimodal attack strategies and defenses

 

  • “Vibe-coded” app vulnerabilities
  • Common misconfigurations and exploits
  • Security checklists and code review strategies
  • Identifying AI-specific flaw patterns

 

  • Debugging MCP Servers using MCP Inspector
  • Configuring Cursor & Claude for Interacting with MCP Servers
  • Intercepting and Modifying MCP Server Traffic using Burp Suite
  • Arbitrary file r/w in MCP Servers
  • Exploiting Confused Deputy Attack via Delegation in MCP Servers
  • Exploiting Information Disclosure in MCP Servers
  • Bruteforcing Accounts in MCP Servers
  • Securing MCP Servers against Attacks
  • MCP Server Cheatsheet

 

  • STRIDE methodology for Threat Modeling
  • Practical Threat Modeling using LLMs ‘

 

Prerequisites

To successfully participate in this course, attendees should possess the following:

  • Working knowledge of cybersecurity and testing fundamentals
  • Basic Python programming skills
  • Understanding of APIs and web services
  • Familiarity with command-line interfaces Basic knowledge of authentication, authorization, and encryption Basic web application security knowledge (recommended, not required)

TRUSTED TRAINING PROVIDERS

Our trainers boast more than ten years of experience delivering diverse training sessions at conferences such as Blackhat, HITB, Power of Community, Zer0con, OWASP Appsec, and more.

Take Your Skills To The Next Level

OUR MODES OF TRAINING

LIVE VIRTUAL

GET IN TOUCH FOR PRICING

Perfect for Teams in Multiple Location
 
  • Real-time interaction with our expert trainers over Zoom
  • Customizable content tailored to your team’s needs
  • Continued support after the training

LIVE ON-SITE

GET IN TOUCH FOR PRICING

Perfect for Teams in One Location
 
  • Real-time interaction with our expert trainers at an onsite location
  • Customizable content tailored to your team’s needs
  • Continued support after the training

ON DEMAND

Learn at your own pace

Ideal for Individuals
 
  • Immediate access to materials
  • Lecture recordings and self-assessments
  • 365 days of access
  • Certification of course completion
  • Dedicated email support
  • Certification exam

FAQ

Our Live Virtual and On-Site sessions replicate the interactive classroom experience, fostering real-time collaboration and engagement among participants.

No, the training that you purchase from 8kSec, including the course materials is exclusively for your individual use. You may not reproduce, distribute or display (post/upload) lecture notes, or recordings, or course materials in any other way — whether or not a fee is charged – without the express written consent of 8kSec.

For On-Site/Virtual Courses during private trainings/conferences, we provide a customized certificate after the completion of the course. Please note that the Certificate of Course Completion is different from the one obtained after clearning the Certification exam.

For Virtual/Live Trainings, we will provide you access to our Lab environment and an instruction guide during the training.

You can find our Training Schedule at https://8ksec.io/public-training/. To schedule a Live Virtual or Live On-site private training for a group of 5+ attendees, email trainings@8ksec.io and our logistics team will get in touch with you to organize one.

The information on this page is subject to change without notice.

CONTACT US

Please share with us the project requirements and the goals you want to achieve,  and one of our sales representatives will contact you within one business day.

Our Location

51 Pleasant St # 843, Malden, MA, Middlesex, US, 02148

General and Business inquiries

contact@8ksec.io

Trainings

trainings@8ksec.io

Press

press@8ksec.io

Phone

+1(347)-4772-006

SEND ENQUIRY