Skip to content

Secure IT

Stay Secure. Stay Informed.

Primary Menu
  • Home
  • Sources
    • Krebs On Security
    • Security Week
    • The Hacker News
    • Schneier On Security
  • Home
  • The Hacker News
  • Product Walkthrough: A Look Inside Pillar’s AI Security Platform
  • The Hacker News

Product Walkthrough: A Look Inside Pillar’s AI Security Platform

[email protected] The Hacker News Published: July 30, 2025 | Updated: July 30, 2025 7 min read
0 views
Pillar Security AI Security Platform

In this article, we will provide a brief overview of Pillar Security’s platform to better understand how they are tackling AI security challenges.

Pillar Security is building a platform to cover the entire software development and deployment lifecycle with the goal of providing trust in AI systems. Using its holistic approach, the platform introduces new ways of detecting AI threats, beginning at pre-planning stages and going all the way through runtime. Along the way, users gain visibility into the security posture of their applications while enabling safe AI execution.

Pillar is uniquely suited to the challenges inherent in AI security. Co-founder and CEO Dor Sarig comes from a cyber-offensive background, having spent a decade leading security operations for governmental and enterprise organizations. In contrast, co-founder and CTO Ziv Karlinger spent over ten years developing defensive techniques, securing against financial cybercrime and securing supply chains. Together, their red team-blue team approach forms the foundation of Pillar Security and is instrumental in mitigating threats.

The Philosophy Behind the Approach

Before diving into the platform, it’s important to understand the underlying approach taken by Pillar. Rather than developing a siloed system where each piece of the platform focuses on a single area, Pillar offers a holistic approach. Each component within the platform enriches the next, creating a closed feedback loop that enables security to adapt to each unique use case.

The detections found in the posture management section of the platform are enriched by data detected in the discovery section. Likewise, adaptive guardrails that are utilized during runtime are built on insights from threat modeling and red teaming. This dynamic feedback loop ensures that live defenses are optimized as new vulnerabilities are discovered. This approach creates a powerful, holistic and contextual-based defense against threats to AI systems – from build to runtime.

AI Workbench: Threat Modeling Where AI Begins

The Pillar Security platform begins at what they call the AI workbench. Before any code is written, this secure playground for threat modeling allows security teams to experiment with AI use cases and proactively map potential threats. This stage is crucial to ensure that organizations align their AI systems with corporate policies and regulatory demands.

Developers and security teams are guided through a structured threat modeling process, generating potential attack scenarios specific to the application use case. Risks are aligned with the application’s business context, and the process is aligned with established frameworks such as STRIDE, ISO, MITRE ATLAS, OWASP Top Ten for LLMs, and Pillar’s own SAIL framework. The goal is to build security and trust into the design from day one.

AI Discovery: Real-Time Visibility into AI Assets

AI sprawl is a complex challenge for security and governance teams. They lack visibility into how and where AI is being used within their development and production environments.

Pillar takes a unique approach to AI security that goes beyond the CI/CD pipeline and the traditional SDLC. By integrating directly with code repositories, data platforms, AI/ML frameworks, IdPs and local environments, it can automatically find and catalog every AI asset within the organization. The platform displays a full inventory of AI apps, including models, tools, datasets, MCP servers, coding agents, meta prompts, and more. This visibility guides teams, helping form the foundation of the organizational security policy and enabling a clear understanding of the business use case, including what the application does and how the organization uses it.

Pillar Security AI Security Platform
Figure 1: Pillar Security automatically discovers all AI assets across the organization and flags unmonitored components to prevent security blind spots.

AI-SPM: Mapping and Managing AI Risk

After identifying all AI assets, Pillar is able to understand the security posture by analyzing each of the assets. During this stage, the platform’s AI Security Posture Management (AI-SPM) conducts a robust static and dynamic analysis of all AI assets and their interconnections.

By analyzing the AI assets, Pillar creates visual representations of the identified Agentic systems, their components and their associated attack surfaces. Furthermore, it identifies supply chain, data poisoning and model/prompt/tool level risks. These insights, which appear within the platform, enable teams to prioritize threats, as it show exactly how a threat actor may move through the system.

Pillar Security AI Security Platform
Figure 2: Pillar’s Policy Center provides a centralized dashboard for monitoring enterprise-wide AI compliance posture

AI Red Teaming: Simulating Attacks Before They Happen

Rather than waiting until the application is fully built, Pillar promotes a trust-by-design approach, enabling AI teams to test as they build.

The platform runs simulated attacks that are tailored to the AI system use case, by leveraging common techniques like prompt injections and jailbreaking to sophisticated attacks targeting business logic vulnerabilities. These Red Team activities help identify whether an AI agent can be manipulated into giving unauthorized refunds, leaking sensitive data, or executing unintended tool actions. This process not only evaluates the model, but also the broader agentic application and its integration with external tools and APIs.

Pillar also offers a unique capability through red teaming for tool use. The platform integrates threat modeling with dynamic tool activation, rigorously testing how chained tool and API calls might be weaponized in realistic attack scenarios. This advanced approach reveals vulnerabilities that traditional prompt-based testing methods are unable to detect.

For enterprises using third-party and embedded AI apps, such as copilots, or custom chatbots where they don’t have access to the underlying code, Pillar offers black-box, target-based red teaming. With just a URL and credentials, Pillar’s adversarial agents can stress-test any accessible AI application whether internal or external. These agents simulate real-world attacks to probe data boundaries and uncover exposure risks, enabling organizations to confidently assess and secure third-party AI systems without needing to integrate or customize them.

Pillar Security AI Security Platform
Figure 3: Pillar’s tailored red teaming tests real-world attack scenarios against an AI application’s specific use case and business logic

Guardrails: Runtime Policy Enforcement That Learns

As AI applications move into production, real-time security controls become essential. Pillar addresses this need with a system of adaptive guardrails that monitor inputs and outputs during runtime, designed to enforce security policies without interrupting application performance.

Unlike static rule sets or traditional firewalls, these guardrails are model agnostic, application-centric and continuously evolve. According to Pillar, they draw on telemetry data, insights gathered during red teaming, and threat intelligence feeds to adapt in real time to emerging attack techniques. This allows the platform to adjust its enforcement based on each application’s business logic and behavior, and be highly precise with alerts.

During the walkthrough, we saw how guardrails can be finely tuned to prevent misuse, such as data exfiltration or unintended actions, while preserving the AI’s intended behavior. Organizations can enforce their AI policy and custom code-of-conduct rules across applications with confidence that security and functionality will coexist.

Pillar Security AI Security Platform
Figure 4: Pillar’s adaptive guardrails monitor runtime activity to detect and flag malicious use and policy violations

Sandbox: Containing Agentic Risk

One of the most critical concerns is excessive agency. When agents can perform actions beyond their intended scopes, it can lead to unintended consequences.

Pillar addresses this during the Operate phase through secure sandboxing. AI agents, including advanced systems like coding agents and MCP servers, run inside tightly controlled environments. These isolated runtimes apply zero-trust principles to separate agents from critical infrastructure and sensitive data, while still enabling them to operate productively. Any unexpected or malicious behavior is contained without impacting the larger system. Every action is captured and logged in detail, giving teams a granular forensic trail that can be analyzed after the fact. With this containment strategy, organizations can safely give AI agents the room they need to operate.

AI Telemetry: Observability from Prompt to Action

Security doesn’t stop once the application is live. Throughout the lifecycle, Pillar continuously collects telemetry data across the entire AI stack. Prompts, agent actions, tool calls, and contextual metadata are all logged in real time.

This telemetry powers deep investigations and compliance tracking. Security teams can trace incidents from symptom to root cause, understand anomalous behavior, and ensure AI systems are operating within policy boundaries. It’s not enough to know what happened. It’s about understanding why something took place and how to prevent it from happening again.

Due to the sensitivity of the telemetry data, Pillar can be deployed on the customer cloud for full data control.

Final Thoughts

Pillar stands apart through a combination of technical depth, real-world insight, and enterprise-grade flexibility.

Founded by leaders in both offensive and defensive cybersecurity, the team has a proven track record of pioneering research that has uncovered critical vulnerabilities and produced detailed real-world attack reports. This expertise is embedded into the platform at every level.

Pillar also takes a holistic approach to AI security that extends beyond the CI/CD pipeline. By integrating security into the planning and coding phases and connecting directly to code repositories, data platforms and local environments, Pillar gains early and deep visibility into the systems being built. This context enables more precise risk analysis and highly targeted red team testing as development progresses.

The platform is powered by the industry’s largest AI threat intelligence feed, enriched by over 10 million real-world interactions. This threat data fuels automated testing, risk modeling, and adaptive defenses that evolve with the threat landscape.

Finally, Pillar is built for flexible deployment. It can run on premises, in hybrid environments, or fully in the cloud, giving customers full control over sensitive data, prompts, and proprietary models. This is a critical advantage for regulated industries where data residency and security are paramount.

Together, these capabilities make Pillar a powerful and practical foundation for secure AI adoption at scale, helping innovative organizations manage AI-specific risks and gain trust in their AI systems.

Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.

About The Author

[email protected] The Hacker News

See author's posts

Original post here

What do you feel about this?

  • The Hacker News

Post navigation

Previous: Apple Patches Safari Vulnerability Also Exploited as Zero-Day in Google Chrome
Next: Chinese Firms Linked to Silk Typhoon Filed 15+ Patents for Cyber Espionage Tools

Author's Other Posts

India Orders Messaging Apps to Work Only With Active SIM Cards to Prevent Fraud and Misuse whatsapp-sim.jpg

India Orders Messaging Apps to Work Only With Active SIM Cards to Prevent Fraud and Misuse

December 2, 2025 0 0
Researchers Capture Lazarus APT’s Remote-Worker Scheme Live on Camera korean.jpg

Researchers Capture Lazarus APT’s Remote-Worker Scheme Live on Camera

December 2, 2025 0 1
GlassWorm Returns with 24 Malicious Extensions Impersonating Popular Developer Tools hacked.jpg

GlassWorm Returns with 24 Malicious Extensions Impersonating Popular Developer Tools

December 2, 2025 0 0
Malicious npm Package Uses Hidden Prompt and Script to Evade AI Security Tools npm-mal.jpg

Malicious npm Package Uses Hidden Prompt and Script to Evade AI Security Tools

December 2, 2025 0 1

Related Stories

whatsapp-sim.jpg
  • The Hacker News

India Orders Messaging Apps to Work Only With Active SIM Cards to Prevent Fraud and Misuse

[email protected] The Hacker News December 2, 2025 0 0
korean.jpg
  • The Hacker News

Researchers Capture Lazarus APT’s Remote-Worker Scheme Live on Camera

[email protected] The Hacker News December 2, 2025 0 1
hacked.jpg
  • The Hacker News

GlassWorm Returns with 24 Malicious Extensions Impersonating Popular Developer Tools

[email protected] The Hacker News December 2, 2025 0 0
npm-mal.jpg
  • The Hacker News

Malicious npm Package Uses Hidden Prompt and Script to Evade AI Security Tools

[email protected] The Hacker News December 2, 2025 0 1
iran-hacking.jpg
  • The Hacker News

Iran-Linked Hackers Hits Israeli Sectors with New MuddyViper Backdoor in Targeted Attacks

[email protected] The Hacker News December 2, 2025 0 0
SecAlerts.jpg
  • The Hacker News

SecAlerts Cuts Through the Noise with a Smarter, Faster Way to Track Vulnerabilities

[email protected] The Hacker News December 2, 2025 0 0

Trending Now

Drones to Diplomas: How Russia’s Largest Private University is Linked to a $25M Essay Mill Drones to Diplomas: How Russia’s Largest Private University is Linked to a $25M Essay Mill 1

Drones to Diplomas: How Russia’s Largest Private University is Linked to a $25M Essay Mill

December 6, 2025 0 0
SMS Phishers Pivot to Points, Taxes, Fake Retailers SMS Phishers Pivot to Points, Taxes, Fake Retailers 2

SMS Phishers Pivot to Points, Taxes, Fake Retailers

December 4, 2025 0 0
India Orders Messaging Apps to Work Only With Active SIM Cards to Prevent Fraud and Misuse whatsapp-sim.jpg 3

India Orders Messaging Apps to Work Only With Active SIM Cards to Prevent Fraud and Misuse

December 2, 2025 0 0
Researchers Capture Lazarus APT’s Remote-Worker Scheme Live on Camera korean.jpg 4

Researchers Capture Lazarus APT’s Remote-Worker Scheme Live on Camera

December 2, 2025 0 1

Connect with Us

Social menu is not set. You need to create menu and assign it to Social Menu on Menu Settings.

Trending News

Drones to Diplomas: How Russia’s Largest Private University is Linked to a $25M Essay Mill Drones to Diplomas: How Russia’s Largest Private University is Linked to a $25M Essay Mill 1
  • Uncategorized

Drones to Diplomas: How Russia’s Largest Private University is Linked to a $25M Essay Mill

December 6, 2025 0 0
SMS Phishers Pivot to Points, Taxes, Fake Retailers SMS Phishers Pivot to Points, Taxes, Fake Retailers 2
  • Uncategorized

SMS Phishers Pivot to Points, Taxes, Fake Retailers

December 4, 2025 0 0
India Orders Messaging Apps to Work Only With Active SIM Cards to Prevent Fraud and Misuse whatsapp-sim.jpg 3
  • The Hacker News

India Orders Messaging Apps to Work Only With Active SIM Cards to Prevent Fraud and Misuse

December 2, 2025 0 0
Researchers Capture Lazarus APT’s Remote-Worker Scheme Live on Camera korean.jpg 4
  • The Hacker News

Researchers Capture Lazarus APT’s Remote-Worker Scheme Live on Camera

December 2, 2025 0 1
GlassWorm Returns with 24 Malicious Extensions Impersonating Popular Developer Tools hacked.jpg 5
  • The Hacker News

GlassWorm Returns with 24 Malicious Extensions Impersonating Popular Developer Tools

December 2, 2025 0 0
Malicious npm Package Uses Hidden Prompt and Script to Evade AI Security Tools npm-mal.jpg 6
  • The Hacker News

Malicious npm Package Uses Hidden Prompt and Script to Evade AI Security Tools

December 2, 2025 0 1
Iran-Linked Hackers Hits Israeli Sectors with New MuddyViper Backdoor in Targeted Attacks iran-hacking.jpg 7
  • The Hacker News

Iran-Linked Hackers Hits Israeli Sectors with New MuddyViper Backdoor in Targeted Attacks

December 2, 2025 0 0

You may have missed

Drones to Diplomas: How Russia’s Largest Private University is Linked to a $25M Essay Mill
  • Uncategorized

Drones to Diplomas: How Russia’s Largest Private University is Linked to a $25M Essay Mill

Sean December 6, 2025 0 0
SMS Phishers Pivot to Points, Taxes, Fake Retailers
  • Uncategorized

SMS Phishers Pivot to Points, Taxes, Fake Retailers

Sean December 4, 2025 0 0
whatsapp-sim.jpg
  • The Hacker News

India Orders Messaging Apps to Work Only With Active SIM Cards to Prevent Fraud and Misuse

[email protected] The Hacker News December 2, 2025 0 0
korean.jpg
  • The Hacker News

Researchers Capture Lazarus APT’s Remote-Worker Scheme Live on Camera

[email protected] The Hacker News December 2, 2025 0 1
Copyright © 2026 All rights reserved. | MoreNews by AF themes.