CodeCraftinghub logoCodeCraftingHub
HomeWorkArticlesCoursesAppsAbout
Get in Touch
HomeWorkArticlesCoursesAppsAbout
Digital Architect

Building the next generation of resilient digital infrastructure with technical integrity.

Connect
GitHubLinkedInYouTube
Resources
NewsletterCase StudiesManifesto

Status

AVAILABLE FOR PARTNERSHIPS
© 2024 Digital Architect. All rights reserved.
Coding
Writing Secure Code in an AI-Assisted World: Pitfalls and Good Practices

By Usman Ali · April 13, 2026 · 8 min read

AI writes fast. It doesn’t write secure code. Hardcoded secrets, injection flaws, and over-privileged logic slip in constantly because models solve tasks, not threat models. Here’s the practical checklist to own your code, not just mop up after the robot.

AI coding assistants have moved from novelty to necessity. By early 2026, over 30% of senior developers report shipping mostly AI-generated code. Tools like Cursor, Copilot, and Claude Code have transformed how we write software, letting us generate features at speeds that manual typing could never match. But here’s the uncomfortable truth: AI doesn’t write secure code unless you explicitly tell it to. In recent testing across popular LLMs, every model generated insecure code vulnerable to at least four common weaknesses when given naive prompts, and some, like GPT-4o, produced output vulnerable to eight out of ten tested issues even when asked for secure code.

Productivity is up, but so is the attack surface. This article walks through what’s breaking, why it’s happening, and, most importantly, what you can do about it.

Common Security Pitfalls in AI-Generated Code

AI models excel at solving the task you give them. They don’t excel at understanding the security context around that task. Here are the patterns I see repeatedly in AI-assisted codebases.

Hardcoded Secrets (CWE-798)

The most common AI security sin: the model happily embeds credentials directly in source code. It sees a pattern in training data and reproduces it, no questions asked.

```javascript
// AI-generated insecure example
const API_KEY = "sk-abc123xyz789";
fetch(`https://api.service.com/v1/data?key=${API_KEY}`);

// What you should be using
const API_KEY = process.env.SERVICE_API_KEY;
fetch(`https://api.service.com/v1/data`, {
  headers: { Authorization: `Bearer ${API_KEY}` }
});
```

This isn’t theoretical. GitGuardian’s 2026 report found 28.65 million new hardcoded secrets in public GitHub commits during 2025 alone, a 34% year-over-year jump. AI service credentials specifically surged 81%. Even more telling: Claude Code-assisted commits showed a 3.2% secret-leak rate, more than double the 1.5% baseline across all public commits.

Injection Vulnerabilities (SQLi, XSS, Log Injection)

AI loves string concatenation. It’s simple, it works in the demo, and it’s everywhere in the training corpus. The model doesn’t know your input is untrusted; it just sees a pattern that compiles.

```javascript
// AI-generated SQL query—dangerous
const query = `SELECT * FROM users WHERE email = '${userInput}'`;
db.query(query);

// AI might generate this XSS sink without sanitization
element.innerHTML = userComment;

// Or log injection that could poison your monitoring
console.log(`User ${username} performed action: ${action}`);
```

The secure version requires teaching the AI context it doesn’t naturally have: parameterized queries, DOM sanitization, and structured logging.
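A minimal sketch of those secure counterparts. The `db.query(sql, params)` driver API and the DOM `element` are illustrative names, not from the original; only the structured-logging helper is self-contained here.

```javascript
// 1. Parameterized query: the driver binds userInput, so quotes can't
//    break out of the value position (placeholder syntax varies by driver).
// db.query("SELECT * FROM users WHERE email = ?", [userInput]);

// 2. Safe DOM insertion: textContent treats input as text, never as markup.
// element.textContent = userComment;

// 3. Structured logging: one JSON object per event, so newlines in user
//    input are escaped and cannot forge extra log entries.
function logEvent(username, action) {
  return JSON.stringify({ event: "user_action", user: username, action });
}
```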

Insecure Cryptography and Over-Privileged Logic

AI models frequently recommend outdated algorithms (MD5, SHA-1 for passwords) or misuse modern ones. They’ll generate authentication checks that can be bypassed, or grant excessive permissions because the training data overweights “just make it work” patterns over least-privilege design.

How Developers Misunderstand AI’s Security Knowledge

Here’s the mental model shift that changes everything: AI solves the task, not the security requirement. The model’s objective function is to produce code that matches the prompt’s surface request. It has no intrinsic understanding of threat models, compliance boundaries, or your organization’s specific risk tolerance.

Recent research underscores this gap. When Backslash Security tested LLMs with progressively more sophisticated prompts, they found a clear hierarchy: naive prompts produced vulnerable code; general security requests helped; OWASP-compliant prompts helped more; but only rule-bound prompts that explicitly addressed specific CWEs produced consistently secure output.

The implication? AI won’t secure your code unless you teach it your security standards. This isn’t a flaw in the model; it’s a fundamental property of how LLMs work. They generate text probabilistically, not through reasoned security analysis. Expecting otherwise is like expecting spell-check to catch logical fallacies.

Good Practices for Writing Secure Code with AI

1. Treat AI as a Junior Dev, Not a Senior Architect

The single most effective mental framework: review AI-generated code with the same scrutiny you’d apply to a talented but inexperienced junior developer’s PR. This means:

  • Never merge without reading. AI can write 500 lines in seconds. You can still only review about 200–400 lines per hour effectively. If you’re merging faster than you can read, you’re accumulating technical debt.
  • Run the “why” test. Can you explain why this code is structured the way it is? If not, the AI owns you, not the other way around.
  • Watch for outdated patterns. If you see var instead of let/const, class components in a hooks-based React codebase, or componentDidMount vibes in functional components, pause.

2. Define and Enforce Project-Specific Security Rules

This is the most powerful technique I’ve found. Most AI coding tools (Cursor, Windsurf, Claude Code) support rule files that inject security requirements directly into the model’s context before it generates code.

A minimal rules file might include:

```markdown
# Security Rules for TypeScript Projects
- Never hardcode secrets, API keys, or credentials in source code.
  Use environment variables or a secrets manager.
- Always use parameterized queries for SQL. Never concatenate user input.
- For HTML insertion, use textContent unless HTML is explicitly required,
  then use a sanitizer like DOMPurify.
- Cryptography: Use bcrypt or argon2 for passwords; AES-GCM for encryption.
- Validate and sanitize all user inputs. Define explicit allowlists.
```

When you bind generation to these rules, the AI shifts from “produce working code” to “produce code that complies with these constraints.” The Backslash research showed this approach eliminated all tested CWEs.

3. Use AI to Generate Security-Relevant Comments and Tests

Flip the dynamic. Instead of only asking AI to write features, ask it to:

  • Write security-focused code comments: “Add comments explaining the security assumptions in this authentication flow.”
  • Generate security test cases: “Generate unit tests that verify this endpoint rejects malformed JWT tokens.”
  • Draft a threat model: “Given this API endpoint that accepts user file uploads, list the top five security risks I should consider.”
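
For instance, here is a sketch of the kind of negative tests that second prompt might produce, against a hypothetical `looksLikeJwt` shape check (illustrative only; a real endpoint must also verify the signature and claims with a JWT library):

```javascript
// Hypothetical structural check: a JWT is three dot-separated
// base64url segments. Shape validation only, no signature verification.
function looksLikeJwt(token) {
  if (typeof token !== "string") return false;
  const parts = token.split(".");
  if (parts.length !== 3) return false;
  return parts.every((part) => /^[A-Za-z0-9_-]+$/.test(part));
}

// AI-drafted negative cases: each malformed input must be rejected.
const malformed = ["only.two", "a.b!.c", "", null, "a.b.c.d"];
const allRejected = malformed.every((t) => !looksLikeJwt(t));
```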

AI is genuinely good at these tasks, and they create artifacts that make your codebase more maintainable and auditable.

4. Integrate SAST/DAST into Your Workflow Early

Static analysis isn’t new, but the volume of AI-generated code makes it essential rather than optional. The shift-left approach is now shifting earlier: tools like Harness SAST integrate directly with AI coding environments, scanning code as it’s generated rather than waiting for CI/CD. Amazon Q Developer offers real-time SAST scanning that flags SQL injection and hardcoded credentials before they hit version control.

The key insight: SAST catches the patterns AI tends to repeat. If you’re not running some form of automated security scanning, you’re relying entirely on human review, and humans miss things, especially when AI produces plausible-looking but subtly flawed code.

Organizational and Supply-Chain Risks

Data Leakage and Shadow AI

The Samsung incident remains the canonical warning: within twenty days of allowing employees to use ChatGPT, the company experienced three separate data leaks involving source code, meeting recordings, and hardware specifications. Bans don’t work; people route around them. What works is visibility and guardrails.

Security teams often lack visibility into what AI agents are touching or exposing. In an analysis of over 18,000 AI agent configuration files, UpGuard found that one in five developers had enabled high-risk actions without human oversight, including unrestricted file deletion and automatic commits bypassing code review.

Misconfigured AI Agents and Privilege Escalation

AI coding agents operate with permissions similar to other automation tools: they can read and write files, execute commands, and download content. When these agents are granted broad access without approval prompts, they become powerful attack vectors. The IDEsaster research identified over 30 vulnerabilities across popular AI IDEs that chain prompt injection with legitimate IDE features to achieve data exfiltration and remote code execution.

AI-Generated Dependencies and Package Hallucinations

AI models can suggest packages that don’t exist (hallucinations) or recommend vulnerable versions. Worse, they may pull in packages with known compromises. Tenable’s 2026 Cloud and AI Security Risk Report found that 86% of organizations had installed third-party packages containing critical-severity vulnerabilities, and 13% had used packages associated with known supply-chain incidents.

Prompt Injection and AI-Agent Security

Prompt injection is the new SQL injection, and it’s harder to defend against. An attacker crafts input that overrides the AI’s intended behavior, often in ways invisible to the developer.

The Clinejection incident illustrates the severity. By opening a GitHub issue with a carefully crafted title, an attacker could inject instructions into Cline’s AI-powered triage bot. The bot, running with arbitrary command execution permissions, could be manipulated into publishing unauthorized packages to npm, affecting over 5 million users.

Similarly, the RoguePilot flaw allowed attackers to embed hidden instructions in GitHub issues that, when processed by Copilot in a Codespace, could exfiltrate privileged data.

Mitigations that actually help:

  • Never grant AI agents write access without human approval. Auto-approve settings are convenient but dangerous.
  • Treat any AI-processed content from untrusted sources as potentially hostile. This includes issue comments, code comments in third-party libraries, and README files from external repos.
  • Run AI agents with least-privilege credentials. If an agent doesn’t need to push to production, don’t give it those permissions.
  • Log and audit AI agent actions. You can’t secure what you can’t see.

How AI Can Actually Help Security

It’s not all doom. AI can be a powerful security ally when used intentionally.

Refactoring legacy code. AI excels at pattern-based transformations: migrating from deprecated crypto APIs to modern ones, converting string concatenation to parameterized queries, or replacing hardcoded secrets with environment variable references.

Generating security tests. Ask AI to generate fuzzing inputs, edge-case test scenarios, or unit tests that verify security properties. Models are surprisingly good at designing test cases that catch business logic flaws humans might overlook.

Documenting threat models. Use AI to bootstrap threat modeling sessions. Prompt: “Given this API endpoint description, generate a list of potential threat scenarios and recommended mitigations.” The output won‘t be perfect, but it creates a structured starting point for human analysis.

Code review augmentation. Running AI-generated code through a different model for security review can catch issues. Some teams use one model for generation and a security-focused model (or a different provider) for audit.

The pattern is consistent: AI helps most when you provide the security context and judgment. It amplifies your expertise; it doesn’t replace it.

Short Checklist: Secure Coding with AI

Bookmark this. Run through it before merging any AI-assisted code.

  • Review everything. Did you read every line, or did you skim?
  • No hardcoded secrets. Scan for API keys, passwords, tokens in source.
  • Parameterized queries. SQL concatenation? Reject.
  • Input validation. Is user input sanitized before use in HTML, SQL, or shell commands?
  • Least privilege. Does this code request more permissions than it needs?
  • Dependency check. Are suggested packages real, maintained, and vulnerability-free?
  • AI agent config review. Are auto-approve settings appropriate for the risk level?
  • Security test coverage. Did you generate tests for the failure cases?
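
As an illustration of the “no hardcoded secrets” item, a deliberately naive scan sketch. The patterns below are simplified examples; real scanners such as gitleaks or GitGuardian use far richer rulesets and entropy analysis.

```javascript
// Illustrative secret patterns (not exhaustive): provider-style key
// prefixes plus a generic credential-assignment shape.
const SECRET_PATTERNS = [
  /sk-[A-Za-z0-9]{16,}/,                               // "sk-"-prefixed API keys
  /AKIA[0-9A-Z]{16}/,                                  // AWS access key IDs
  /(password|secret|token)\s*[:=]\s*["'][^"']+["']/i,  // inline credentials
];

// Returns the patterns that matched, so callers can report which rule fired.
function findSecrets(source) {
  return SECRET_PATTERNS.filter((pattern) => pattern.test(source));
}
```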

AI coding assistants are here to stay. They make us faster, but they don’t make us smarter about security unless we deliberately build security into how we use them. The developers who thrive in this new world won’t be the ones who type the fastest prompts. They’ll be the ones who treat AI as a powerful but fallible collaborator, applying the same security rigor to generated code that they’d apply to their own.
