Author: ravencybersec_r7bgbo

  • North Korean Hackers Use EtherHiding to Hide Malware Inside Blockchain Smart Contracts

    A threat actor with ties to the Democratic People’s Republic of Korea (aka North Korea) has been observed leveraging the EtherHiding technique to distribute malware and enable cryptocurrency theft, marking the first time a state-sponsored hacking group has embraced the method.
    The activity has been attributed by Google Threat Intelligence Group (GTIG) to a threat cluster it tracks as UNC5342,

    Read More

  • Hackers Abuse Blockchain Smart Contracts to Spread Malware via Infected WordPress Sites

    A financially motivated threat actor codenamed UNC5142 has been observed abusing blockchain smart contracts as a way to facilitate the distribution of information stealers such as Atomic (AMOS), Lumma, Rhadamanthys (aka RADTHIEF), and Vidar, targeting both Windows and Apple macOS systems.
    “UNC5142 is characterized by its use of compromised WordPress websites and ‘EtherHiding,’ a technique used

    Read More

  • [tl;dr sec] #301 – Security Leadership Master Class, DEF CON Cloud Village Talks, AI-Powered Honeypot

    Hey there,

    I hope you’ve been doing well!

    🤔 Reflections and Cooking


    First off, thanks so much to everyone who reached out with kind and encouraging words after my reflection last week 🙏 

    It put a huge smile on my face and means a ton. (Also, I’ll respond soon 😅)

    To be honest, it felt a bit overly indulgent writing it, but people seemed to appreciate it, so I’ll try to share my reflections more often.

    Some other recent updates: I’ve been absolutely cooking with Claude Code and Sonnet 4.5 this week 🧑‍🍳 

    Two to four sessions at the same time. Migrating code between languages, using new frameworks and libraries. Auto-writing tests.

    And kicking off detailed research queries comparing various tech stacks (e.g. Cloudflare vs Supabase) and libraries (AI eval frameworks) using voice to text when I’m taking walks.

    It’s actually been so fun and fast-moving that it’s put me behind on some less fun but important things I need to do 😆

    I hope your week has been full of joy too!

    P.S. I’m working on a new talk on applying AI to AppSec/future of AppSec, etc. If you’re doing something cool in this space, please reach out and tell me what you’re up to 🤓 

    Sponsor

    📣 CI/CD Pipeline Security Best Practices


    CI/CD pipelines power modern software delivery, but securing them can be a challenge.

    This new cheat sheet walks you through the OWASP Top 10 CI/CD security risks and shares clear, actionable steps to help reduce your attack surface and strengthen your delivery processes.

    Inside, you’ll find:

    • The most common CI/CD attack vectors with real-world examples

    • Practical mitigations for each OWASP risk category

    • How Wiz helps detect and prevent misconfigurations, exposed secrets, and supply chain threats

    👉 Download Now 👈

    As the recent spree of supply chain attacks has shown, CI/CD security is critical 😅

    AppSec


    BSidesSF 2026 CFP is Open
    The BSidesSF CFP is open until October 28th! The theme: BSidesSF: The Musical 😍. I don’t know how this happened, but I am filled with joy.

    BSidesSF is one of my favorite conferences: not too big, full of smart and friendly people, and A+ networking with folks at cool companies doing awesome things. And it’s right before RSA. Hope to see you there!

    Security Leadership Master Class 1: Leveling up your leadership
    The first in a 7-part series where former Google Cloud CISO Phil Venables groups prior posts into a theme. “Security leadership is about building flywheels not [just] fire stations.” See the bottom of the post for his top 10 posts on various leadership topics. Essential attributes of a leader include:

    • Act like a business executive, not an IT manager.

    • Master business-oriented communication and influence.

    • Build scalable, self-reinforcing security systems (flywheels).

    • Prioritize ruthlessly and focus on leverage.

    Software Factory Security Framework (SF²)
    GitLab VP of Product Security Julie Davila introduces the Software Factory Security Framework, a comprehensive mental model to help security leaders scale security capabilities while improving business outcomes. The framework consists of core components including a foundation, universal stewardship responsibilities, strategic positioning, investment portfolio guidance, and contextual modifiers to adapt to specific organizational situations. SF² complements existing standards like NIST SSDF, OWASP SAMM, BSIMM, and OWASP ASVS.

    💡 In the Investment Portfolio section, I like the discussion of evaluating potential investments, designing security capabilities that compound (e.g. paved road), and more.

    Sponsor

    📣 5 Critical Google Workspace Security Settings You Might Be Missing


    Google Workspace misconfigurations or disabled security settings can be easy to miss. This guide from Nudge Security provides a deep dive on the top 5 Google Workspace security settings that should be on your checklist.

    For each security setting, we cover:

    • Common misconfigurations to look out for

    • Best practices for effective risk reduction

    • Considerations for tailoring settings based on user privilege

    Learn what you can do today to improve your Google Workspace security posture.

    👉 Get the guide 👈

    I use Google Workspace but I’m not sure what hardening steps I should be doing, so I need to check this out 👀

    Cloud Security


    State of Cloud Security
    Updated report from Datadog, H/T Christophe Tafani-Dereeper for sharing. Stats in the web version of this issue.

    • In AWS, 86% use AWS Organizations, but only 40% use Service Control Policies (SCPs) and 6% use Resource Control Policies (RCPs).

    • In Google Cloud, 11% of GKE clusters and 23% of VMs are overprivileged, most often through the use of the Compute Engine default service account.

    • One in two EC2 instances enforces IMDSv2, up from 32% a year ago. Enforcement is unequal and overrepresented among recently launched instances: only 14% of instances created more than two years ago enforce it.

    • On average, an organization deploys 13 third-party integration roles, linked to an average of 2.5 distinct vendors.

      • 12.2% of third-party integrations are dangerously overprivileged, allowing the vendor to access all data in the account or to take over the whole AWS account.

      • 2.25% of third-party integration roles don’t enforce the use of an external ID.

    Introducing HoneyBee: How We Automate Honeypot Deployment for Threat Research
    Wiz’s Yaara Shriki announces the newly open sourced HoneyBee, a tool that automatically generates intentionally insecure Dockerfiles and Docker Compose manifests for popular applications to mimic real-world misconfigurations. Wiz uses HoneyBee internally for testing detection rules and orchestrating honeypots, allowing them to gather intelligence on attacker techniques.

    HoneyBee uses AI to automatically generate the misconfigurations as well as Nuclei templates to externally validate that attackers can indeed exploit the misconfiguration. (Shout-out: the Nuclei generation was based on a template from my bud Daniel Miessler’s Fabric project). You can also give HoneyBee a Jina API token to enable automatic extraction of misconfigurations from blogs or articles.

    💡 Using AI to automatically create honeypots and auto-validators, and potentially even auto-source honeypot ideas from blog posts on vulnerabilities, is quite clever. I think this idea/approach is super promising, and expect we’ll see a lot more like it.

    Cloud Village YouTube Channel
    Now has the DEF CON 33 (2025) talks posted: 25 talks over 3 days.

    Container Security


    More talks from Cloud Village DEF CON 33

    PaloAltoNetworks/KIEMPossible
    By Palo Alto’s Golan Myers: A tool designed to simplify Kubernetes Infrastructure Entitlement Management by providing visibility into permissions and their usage across the cluster, enabling real enforcement of the principle of least privilege.

    madhuakula/spotter
    By Madhu Akula: Spotter is a comprehensive Kubernetes security scanner that uses Common Expression Language (CEL) based rules to identify security vulnerabilities, misconfigurations, and compliance violations across your Kubernetes clusters, manifests, and CI/CD pipelines. Spotter supports scanning both manifest files and live clusters with built-in rules covering OWASP Kubernetes Top 10, CIS Benchmark, and NSA/CISA guidelines, and allows custom rule creation.

    Sponsored Tool

    📣 Stop asking managers to approve access requests


    Access controls don’t scale with manual approvals.

    Our report shows what modern IT and security teams are doing instead:

    • Enforcing requirements automatically when access changes

    • Removing manager approvals that add no security value

    • Letting app owners handle their own access decisions

    • Automating what can be automated

    👉 Read the report 👈

    Access management is one of the top things that suck in security, based on interviews with >50 security leaders. Nicely detailed report, I like it 👍️

    Supply Chain


    Adversis/sketchy
    By Adversis: A cross-platform security scanner that checks repositories, packages, and scripts for malicious patterns before you execute them. Sketchy detects over 25 types of suspicious behaviors including command overwrites, code execution patterns, reverse shells, credential theft, cloud metadata access, cryptocurrency miners, homograph attacks, and more. Detection patterns inspired by DataDog’s GuardDog.

    Introducing Socket Firewall: Free, Proactive Protection for Your Software Supply Chain
    Socket’s Dale Bustad announces Socket Firewall (sfw), a lightweight (non open source) tool that blocks malicious dependencies before they reach developer machines. The tool works by creating an ephemeral HTTP proxy that intercepts package manager traffic and checks with Socket’s API before allowing packages to be fetched, supporting npm/yarn/pnpm (JavaScript), pip/uv (Python), and cargo (Rust) with a simple prefix command pattern (e.g., sfw npm install lodash).

    Socket Firewall Free is provided under the PolyForm Shield License 1.0.0, which has Noncompete and Competition clauses (very smart 👍️).

    Dismantling a Critical Supply Chain Risk in VSCode Extension Marketplaces
    Wiz’s Rami McCarthy describes how they found over 550 leaked secrets in VSCode extensions, including 100+ VSCode Marketplace PATs and 30+ OVSX Access Tokens that could allow attackers to push malicious updates to 150,000+ users. Note that extensions auto-update by default, so victims wouldn’t need to take any action to be compromised 🫠 

    Interesting findings: much of the vulnerable install base was theme extensions; .env, .config.json, .mcp.json, .cursorrules, package.json, and README.md were frequent leak sources; and some extensions built to support a single company’s engineers or customers have been made public.

    Wiz spent 6 months working with Microsoft, which is now implementing preventative measures including secret scanning during extension publishing and revoking leaked tokens, and has published a roadmap for VSCode Marketplace security.

    💡 Working with big platforms to make improvements that benefit all users is likely a bit of drudgery and slow, but the impacts are huge. Hats off to Wiz, Rami, and Microsoft for improving the ecosystem 👍️ 

    AI + Security


    Adversis/mcp-snitch
    By Adversis: A macOS application that intercepts and monitors MCP server communications, providing security analysis (uses AI for threat detection and pattern-based detection for sensitive data like SSH keys, credentials, system files), access control, and audit logging for AI tool usage.

    A small number of samples can poison LLMs of any size
    A joint study between Anthropic, the UK AI Security Institute, and the Alan Turing Institute, “found that as few as 250 malicious documents can produce a “backdoor” vulnerability in an LLM—regardless of model size or training data volume. Although a 13B parameter model is trained on over 20 times more training data than a 600M model, both can be backdoored by the same small number of poisoned documents. These results challenge the common assumption that attackers need to control a percentage of training data; instead, they may just need a small, fixed amount.”

    💡 Thus, data poisoning attacks might be much more practical than previously believed, which matters when LLMs are trained on The Internet at large, including Reddit and people’s personal websites and blog posts. And tl;dr sec *looks at issue number* 😈

    MCP Tools: Attack Vectors and Defense Recommendations for Autonomous Agents
    Elastic’s Carolina Beretta, Gus Carlock, and Andrew Pease provide an overview of Model Context Protocol (MCP) tools, standard attack vectors such as tool poisoning (malicious instructions in a tool’s metadata or parameters), rug pull attacks (when a tool’s description or behavior is silently altered after user approval, turning a previously benign tool potentially malicious), and orchestration injection (attacks involving multiple tools or that cross different servers or agents).

    Nice round-up of a bunch of related work. The post also includes an example simple prompt of detecting malicious MCP tools.

    💡 If someone hasn’t already scanned the MCP ecosystem at scale for malicious servers/tools, someone should do that and write a blog about it.

    Cool Hacks


    Eavesdropping on Internal Networks via Unencrypted Satellites
    CCS 2025 paper by Wenyi Morty Zhang et al: “We pointed a commercial-off-the-shelf satellite dish at the sky and carried out the most comprehensive public study to date of geostationary satellite communication. A shockingly large amount of sensitive traffic is being broadcast unencrypted, including critical infrastructure, internal corporate and government communications, private citizens’ voice calls and SMS, and consumer Internet traffic from in-flight wifi and mobile networks. This data can be passively observed by anyone with a few hundred dollars of consumer-grade hardware.”

    Pixnapping: Bringing Pixel Stealing out of the Stone Age
    CCS 2025 paper by Alan Wang et al: “A new class of attacks that allows a malicious Android app to stealthily leak information displayed by other Android apps or arbitrary websites. Pixnapping exploits Android APIs and a hardware side channel that affects nearly all modern Android devices.

    We have demonstrated Pixnapping attacks on Google and Samsung phones and end-to-end recovery of sensitive data from websites including Gmail and Google Accounts and apps including Signal, Google Authenticator, Venmo, and Google Maps. Notably, our attack against Google Authenticator allows any malicious app to steal 2FA codes in under 30 seconds while hiding the attack from the user.”

    Mic-E-Mouse: Covert Eavesdropping through Computer Mice
    Paper, data, and GitHub PoC by Mohamad Fakih et al demonstrating how optical sensors in modern mice can be exploited as covert microphones, capturing speech vibrations transmitted through desk surfaces despite significant signal quality challenges.

    They present Mic-E-Mouse, a signal processing and machine learning pipeline that transforms these low-quality, non-uniformly sampled vibration data into intelligible speech, achieving 80% speaker recognition accuracy and 16.79% word error rate in human evaluations. This attack requires no hardware modifications and works with existing consumer-grade mice, potentially allowing attackers to eavesdrop on conversations through a seemingly innocuous mouse.

    Misc


    AI

    Feelz

    Politics

    ✉️ Wrapping Up


    Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.

    If you find this newsletter useful and know other people who would too, I’d really appreciate if you’d forward it to them 🙏

    Thanks for reading!

    Cheers,
    Clint

    P.S. Feel free to connect with me on LinkedIn 👋 

    Read More

  • LinkPro Linux Rootkit Uses eBPF to Hide and Activates via Magic TCP Packets

    An investigation into the compromise of an Amazon Web Services (AWS)-hosted infrastructure has led to the discovery of a new GNU/Linux rootkit dubbed LinkPro, according to findings from Synacktiv.
    “This backdoor features functionalities relying on the installation of two eBPF [extended Berkeley Packet Filter] modules, on the one hand to conceal itself, and on the other hand to be remotely

    Read More

  • Video call app Huddle01 exposed 600K+ user logs

    The Cybernews research team found that video call app Huddle01 exposed email addresses, real names, and other identifiers through an unprotected Kafka broker.

    Think of an unprotected Kafka broker like a post office that stores and delivers confidential mail. Now, imagine the manager leaves the front doors wide open, with no locks, guards, or ID checks. Anyone can walk in, look through private letters and photos, and grab whatever catches their eye.

    Huddle01 is a video call app that focuses on decentralized Web Real-Time Communication (WebRTC). WebRTC is appealing because it lets people talk and share data directly between devices without using a central server. Done right, this can reduce latency, cut costs, and improve privacy.

    But leaving your Kafka broker open to anyone who happens to stumble upon it does not qualify as “doing privacy right.” The Kafka broker operated without authentication or encryption, meaning anyone could listen in, collect logs, or potentially alter data if write access existed. This demonstrates a fundamental misconfiguration that puts both users and the platform at risk.

    The Kafka instance contained over 621,000 log entries from the last 13 days, belonging to Huddle01, including:

    • Usernames (sometimes real names)
    • Email addresses
    • Crypto wallet addresses (Huddle01 supports many wallets across blockchains like Bitcoin and Ethereum)
    • Detailed activity data, such as which users joined specific calls, participants in each call, country, time, date, and duration
    • Other identifiers

    The app is popular among cryptocurrency users, and in this case the open Kafka instance could have deanonymized them by tying their crypto wallets to usernames and email addresses. That also paints a target on their backs as potentially high-value victims.

    It also makes users more vulnerable to social engineering since attackers can craft credible emails or messages using real names and meeting data.

    And hold on for the worst part. Cybernews states it responsibly disclosed the data leak to the company behind Huddle01…

    “However, it did not respond to the initial disclosure and subsequent attempts. After one month, the exposed server remained accessible. It’s unclear how many other third parties might have accessed the data.”

    Security tips for affected users

    Knowing that the exposed information goes back about two weeks doesn’t help much, since anyone with access could have set up a data collector, listening in on the real-time data streaming and processing going on.

    So, any Huddle01 users should:

    • Change passwords on accounts linked to the exposed email or username, and use strong, unique passwords for each site.
    • Set up two-factor authentication (2FA) wherever possible to prevent unauthorized access.
    • Monitor inboxes for suspicious messages. Be extra cautious of emails or texts asking for crypto transactions or sensitive information, as targeted phishing is a possibility. Be especially wary of social engineering attempts that reference details from meeting logs, such as who you spoke to or when meetings occurred.
    • Stay updated on official statements from Huddle01 or news coverage, as they may release more guidance later.


    Read More

  • Improving the trustworthiness of Javascript on the Web

    The web is the most powerful application platform in existence. As long as you have the right API, you can safely run anything you want in a browser.

    Well… anything but cryptography.

    It is as true today as it was in 2011 that Javascript cryptography is Considered Harmful. The main problem is code distribution. Consider an end-to-end-encrypted messaging web application. The application generates cryptographic keys in the client’s browser that let users view and send end-to-end encrypted messages to each other. If the application is compromised, what would stop the malicious actor from simply modifying their Javascript to exfiltrate messages?

    It is interesting to note that smartphone apps don’t have this issue. This is because app stores do a lot of heavy lifting to provide security for the app ecosystem. Specifically, they provide integrity, ensuring that apps being delivered are not tampered with, consistency, ensuring all users get the same app, and transparency, ensuring that the record of versions of an app is truthful and publicly visible.

    It would be nice if we could get these properties for our end-to-end encrypted web application, and the web as a whole, without requiring a single central authority like an app store. Further, such a system would benefit all in-browser uses of cryptography, not just end-to-end-encrypted apps. For example, many web-based confidential LLMs, cryptocurrency wallets, and voting systems use in-browser Javascript cryptography for the last step of their verification chains.

    In this post, we will provide an early look at such a system, called Web Application Integrity, Consistency, and Transparency (WAICT), which we have helped author. WAICT is a W3C-backed effort among browser vendors, cloud providers, and encrypted communication developers to bring stronger security guarantees to the entire web. We will discuss the problem we need to solve, and build up to a solution resembling the current transparency specification draft. We hope to build even wider consensus on the solution design in the near future.

    Defining the Web Application

    In order to talk about security guarantees of a web application, it is first necessary to define precisely what the application is. A smartphone application is essentially just a zip file. But a website is made up of interlinked assets, including HTML, Javascript, WASM, and CSS, that can each be locally or externally hosted. Further, if any asset changes, it could drastically change the functioning of the application. A coherent definition of an application thus requires the application to commit to precisely the assets it loads. This is done using integrity features, which we describe now.

    Subresource Integrity

    An important building block for defining a single coherent application is subresource integrity (SRI). SRI is a feature built into most browsers that permits a website to specify the cryptographic hash of external resources, e.g.,

    <script src="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.13.7/underscore-min.js" integrity="sha512-dvWGkLATSdw5qWb2qozZBRKJ80Omy2YN/aF3wTUVC5+D1eqbA+TjWpPpoj8vorK5xGLMa2ZqIeWCpDZP/+pQGQ=="></script>

    This causes the browser to fetch underscore.js from cdnjs.cloudflare.com and verify that its SHA-512 hash matches the given hash in the tag. If they match, the script is loaded. If not, an error is thrown and nothing is executed.

    If every external script, stylesheet, etc. on a page comes with an SRI integrity attribute, then the whole page is defined by just its HTML. This is close to what we want, but a web application can consist of many pages, and there is no way for a page to enforce the hash of the pages it links to.
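    For reference, the integrity attribute value is just the named digest of the file contents, base64-encoded. A minimal Python sketch of computing one (the example input is illustrative):

```python
import base64
import hashlib

def sri_value(content: bytes, alg: str = "sha512") -> str:
    """Compute an SRI integrity attribute value: '<alg>-<base64 digest>'."""
    digest = hashlib.new(alg, content).digest()
    return f"{alg}-{base64.b64encode(digest).decode('ascii')}"

# Example: hash a script body before publishing it
print(sri_value(b"console.log('hello');"))
```

    A site operator runs this over each asset at build time and embeds the result in the referencing tag.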

    Integrity Manifest

    We would like to have a way of enforcing integrity on an entire site, i.e., every asset under a domain. For this, WAICT defines an integrity manifest, a configuration file that websites can provide to clients. One important item in the manifest is the asset hashes dictionary, mapping the hash of each asset that the browser might load from that domain to that asset’s path. Assets that may occur at any path, e.g., an error page, map to the empty string:

    "hashes": {
      "81db308d0df59b74d4a9bd25c546f25ec0fdb15a8d6d530c07a89344ae8eeb02": "/assets/js/main.js",
      "fbd1d07879e672fd4557a2fa1bb2e435d88eac072f8903020a18672d5eddfb7c": "/index.html",
      "5e737a67c38189a01f73040b06b4a0393b7ea71c86cf73744914bbb0cf0062eb": "/vendored/main.css",
      "684ad58287ff2d085927cb1544c7d685ace897b6b25d33e46d2ec46a355b1f0e": "",
      "f802517f1b2406e308599ca6f4c02d2ae28bb53ff2a5dbcddb538391cb6ad56a": ""
    }

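    The 64-hex-character keys above are SHA-256 digests. Assuming a local build directory, here is a sketch of how a site operator might generate such a dictionary; the exact hash algorithm and serialization are defined by the WAICT draft, not this code:

```python
import hashlib
import pathlib

def build_asset_hashes(site_root: str) -> dict:
    """Map each asset's SHA-256 hex digest to its site-relative path,
    mirroring the shape of the manifest's "hashes" dictionary."""
    root = pathlib.Path(site_root)
    hashes = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            hashes[digest] = "/" + path.relative_to(root).as_posix()
    return hashes
```

    Any-path assets (like the error page above) would then have their paths manually overridden to the empty string.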
    The other main component of the manifest is the integrity policy, which tells the browser which data types are being enforced and how strictly. For example, the policy in the manifest below will:

    1. Reject any script before running it, if it’s missing an SRI tag and doesn’t appear in the hashes

    2. Reject any WASM possibly after running it, if it’s missing an SRI tag and doesn’t appear in hashes

    "integrity-policy": "blocked-destinations=(script), checked-destinations=(wasm)"

    Put together, these make up the integrity manifest:

    "manifest": {
      "version": 1,
      "integrity-policy": ...,
      "hashes": ...
    }
    

    Thus, when both SRI and integrity manifests are used, the entire site and its interpretation by the browser are uniquely determined by the hash of the integrity manifest. This is exactly what we wanted. We have distilled the problem of endowing authenticity, consistent distribution, etc. to a web application to one of endowing the same properties to a single hash.

    Achieving Transparency

    Recall, a transparent web application is one whose code is stored in a publicly accessible, append-only log. This is helpful in two ways: 1) if a user is served malicious code and they learn about it, there is a public record of the code they ran, and so they can prove it to external parties, and 2) if a user is served malicious code and they don’t learn about it, there is still a chance that an external auditor may comb through the historical web application code and find the malicious code anyway. Of course, transparency does not help detect malicious code or even prevent its distribution, but it at least makes it publicly auditable.

    Now that we have a single hash that commits to an entire website’s contents, we can talk about ensuring that that hash ends up in a public log. We have several important requirements here:

    1. Do not break existing sites. This one is a given. Whatever system gets deployed, it should not interfere with the correct functioning of existing websites. Participation in transparency should be strictly opt-in.

    2. No added round trips. Transparency should not cause extra network round trips between the client and the server. Otherwise there will be a network latency penalty for users who want transparency.

    3. User privacy. A user should not have to identify themselves to any party more than they already do. That means no connections to new third parties, and no sending identifying information to the website.

    4. User statelessness. A user should not have to store site-specific data. We do not want solutions that rely on storing or gossipping per-site cryptographic information.

    5. Non-centralization. There should not be a single point of failure in the system—if any single party experiences downtime, the system should still be able to make progress. Similarly, there should be no single point of trust—if a user distrusts any single party, the user should still receive all the security benefits of the system.

    6. Ease of opt-in. The barrier of entry for transparency should be as low as possible. A site operator should be able to start logging their site cheaply and without being an expert.

    7. Ease of opt-out. It should be easy for a website to stop participating in transparency. Further, to avoid accidental lock-in like the defunct HPKP spec, it should be possible for this to happen even if all cryptographic material is lost, e.g., in the seizure or selling of a domain.

    8. Opt-out is transparent. As described before, because transparency is optional, it is possible for an attacker to disable the site’s transparency, serve malicious content, then enable transparency again. We must make sure this kind of attack is detectable, i.e., the act of disabling transparency must itself be logged somewhere.

    9. Monitorability. A website operator should be able to efficiently monitor the transparency information being published about their website. In particular, they should not have to run a high-network-load, always-on program just to notify them if their site has been hijacked.

    With these requirements in place, we can move on to construction. We introduce a data structure that will be essential to the design.

    Hash Chain

    Almost everything in transparency is an append-only log, i.e., a data structure that acts like a list and has the ability to produce an inclusion proof, i.e., a proof that an element occurs at a particular index in the list; and a consistency proof, i.e., a proof that a list is an extension of a previous version of the list. A consistency proof between two lists demonstrates that no elements were modified or deleted, only added.

    The simplest possible append-only log is a hash chain, a list-like data structure wherein each subsequent element is hashed into the running chain hash. The final chain hash is a succinct representation of the entire list.


    A hash chain. The green nodes represent the chain hash, i.e., the hash of the element below it, concatenated with the previous chain hash.

    The proof structures are quite simple. To prove inclusion of the element at index i, the prover provides the chain hash before i, and all the elements after i:


    Proof of inclusion for the second element in the hash chain. The verifier knows only the final chain hash. It checks equality of the final computed chain hash with the known final chain hash. The light green nodes represent hashes that the verifier computes.

    Similarly, to prove consistency between the chains of size i and j, the prover provides the elements between i and j:


    Proof of consistency of the chain of size one and chain of size three. The verifier has the chain hashes from the starting and ending chains. It checks equality of the final computed chain hash with the known ending chain hash. The light green nodes represent hashes that the verifier computes.
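    The chain construction and both proofs above can be sketched in a few lines of Python; the hash function and the all-zero initial chain value here are illustrative assumptions, not the spec’s choices:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

START = b"\x00" * 32  # assumed initial chain value

def chain_hash(elements, start=START):
    """Fold each element into the running chain hash."""
    ch = start
    for el in elements:
        ch = _h(ch + _h(el))
    return ch

def prove_inclusion(elements, i):
    """Chain hash before index i, plus all elements after i."""
    return chain_hash(elements[:i]), elements[i + 1:]

def verify_inclusion(final_hash, element, proof):
    """Recompute the chain from the proof and compare to the known final hash."""
    prefix_hash, suffix = proof
    ch = _h(prefix_hash + _h(element))
    for el in suffix:
        ch = _h(ch + _h(el))
    return ch == final_hash

def verify_consistency(old_hash, new_hash, added):
    """Replaying only the added elements must turn the old chain hash into the new one."""
    return chain_hash(added, start=old_hash) == new_hash

# Usage: a log of three historical manifests
elems = [b"manifest-v1", b"manifest-v2", b"manifest-v3"]
final = chain_hash(elems)
assert verify_inclusion(final, b"manifest-v2", prove_inclusion(elems, 1))
assert verify_consistency(chain_hash(elems[:1]), final, elems[1:])
```

    Note the trade-off the text implies: proofs are simple but linear in the number of elements after the proven index, which is why production transparency logs typically use Merkle trees instead.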

    Building Transparency

    We can use hash chains to build a transparency scheme for websites.

    Per-Site Logs

    As a first step, let’s give every site its own log, instantiated as a hash chain (we will discuss how these all come together into one big log later). The items of the log are just the manifest of the site at a particular point in time:


    A site’s hash chain-based log, containing three historical manifests.

    In reality, the log does not store the manifest itself, but the manifest hash. Sites designate an asset host that knows how to map hashes to the data they reference. This is a content-addressable storage backend, and can be implemented using strongly cached static hosting solutions.

    A log on its own is not very trustworthy. Whoever runs the log can add and remove elements at will and then recompute the hash chain. To maintain the append-only-ness of the chain, we designate a trusted third party, called a witness. Given a hash chain consistency proof and a new chain hash, a witness:

    1. Verifies the consistency proof with respect to its old stored chain hash, and the new provided chain hash.

    2. If successful, signs the new chain hash along with a signature timestamp.
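    The two witness steps can be sketched as follows. This is an illustration, not the specification: HMAC stands in for a real signature scheme (a deployment would use something like Ed25519), and the chain step H(element || previous chain hash) is an assumption.

```python
import hashlib
import hmac
import time

class Witness:
    """Minimal witness sketch: verify the consistency proof, then co-sign."""

    def __init__(self, signing_key: bytes, initial_chain_hash: bytes = b""):
        self.key = signing_key
        self.stored_hash = initial_chain_hash

    def cosign(self, new_hash: bytes, appended: list[bytes]) -> tuple[bytes, bytes]:
        # 1. Verify the consistency proof against the stored chain hash.
        h = self.stored_hash
        for elem in appended:
            h = hashlib.sha256(elem + h).digest()
        if h != new_hash:
            raise ValueError("consistency proof failed; refusing to sign")
        # 2. Sign the new chain hash along with a signature timestamp.
        ts = int(time.time()).to_bytes(8, "big")
        sig = hmac.new(self.key, new_hash + ts, hashlib.sha256).digest()
        self.stored_hash = new_hash  # the witness only ever moves forward
        return ts, sig
```

    Because the witness refuses to sign unless the new hash extends its stored one, a log operator cannot silently rewrite history once a witness has seen it.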

    Now, when a user navigates to a website with transparency enabled, the sequence of events is:

    1. The site serves its manifest, an inclusion proof showing that the manifest appears in the log, and all the signatures from all the witnesses who have validated the log chain hash.

    2. The browser verifies the signatures from whichever witnesses it trusts.

    3. The browser verifies the inclusion proof. The manifest must be the newest entry in the chain (we discuss how to serve old manifests later).

    4. The browser proceeds with the usual manifest and SRI integrity checks.

    At this point, the user knows that the given manifest has been recorded in a log whose chain hash has been saved by a trustworthy witness, so they can be reasonably sure that the manifest won’t be removed from history. Further, assuming the asset host functions correctly, the user knows that a copy of all the received code is readily available.
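    The browser-side checks in steps 2 and 3 might look like the following sketch, under the same illustrative assumptions as before (HMAC in place of real witness signatures, chain step H(element || previous chain hash); all names are made up):

```python
import hashlib
import hmac

def browser_accepts(manifest: bytes, prev_chain_hash: bytes,
                    final_chain_hash: bytes,
                    witness_sigs: dict[str, tuple[bytes, bytes]],
                    trusted_keys: dict[str, bytes]) -> bool:
    """Step 2: at least one signature from a trusted witness must verify.
    Step 3: the manifest must be the newest entry in the chain."""
    trusted_ok = any(
        name in trusted_keys and hmac.compare_digest(
            sig,
            hmac.new(trusted_keys[name], final_chain_hash + ts,
                     hashlib.sha256).digest())
        for name, (ts, sig) in witness_sigs.items())
    newest = hashlib.sha256(manifest + prev_chain_hash).digest()
    return trusted_ok and newest == final_chain_hash
```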

    The need to signal transparency. The above algorithm works, but we have a problem: if an attacker takes control of a site, they can simply stop serving transparency information and thus implicitly disable transparency without detection. So we need an explicit mechanism that keeps track of every website that has enrolled into transparency.

    The Transparency Service

    To store all the sites enrolled into transparency, we want a global data structure that maps a site domain to the site log’s chain hash. One efficient way of representing this is a prefix tree (a.k.a., a trie). Every leaf in the tree corresponds to a site’s domain, and its value is the chain hash of that site’s log, the current log size, and the site’s asset host URL. For a site to prove validity of its transparency data, it will have to present an inclusion proof for its leaf. Fortunately, these proofs are efficient for prefix trees.


    A prefix tree with four elements. Each leaf’s path corresponds to a domain. Each leaf’s value is the chain hash of its site’s log.
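    Verifying a leaf's inclusion proof works like a standard Merkle path check. In this sketch the tree is a binary trie, the path bits and sibling hashes are listed root-to-leaf, and the leaf/node hashing is invented for illustration; the actual encoding in the specification will differ.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_tree_inclusion(root: bytes, leaf_value: bytes,
                          path_bits: list[int],
                          siblings: list[bytes]) -> bool:
    """Fold from the leaf up to the root, placing the sibling hash on the
    correct side at each level (bit 0 = this node is the left child)."""
    h = H(b"leaf:" + leaf_value)
    for bit, sib in zip(reversed(path_bits), reversed(siblings)):
        h = H(sib + h) if bit else H(h + sib)
    return h == root
```

    The proof size is proportional to the depth of the leaf, which is why inclusion proofs for prefix trees stay efficient even with millions of entries.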

    To add itself to the tree, a site proves possession of its domain to the transparency service, i.e., the party that operates the prefix tree, and provides an asset host URL. To update the entry, the site sends the new entry to the transparency service, which will compute the new chain hash. And to unenroll from transparency, the site just requests to have its entry removed from the tree (an adversary can do this too; we discuss how to detect this below).

    Proving to Witnesses and Browsers

    Now witnesses only need to look at the prefix tree instead of individual site logs, and thus they must verify whole-tree updates. The most important thing to ensure is that every site’s log is append-only. So whenever the tree is updated, it must produce a “proof” containing every new/deleted/modified entry, as well as a consistency proof for each entry showing that the site log corresponding to that entry has been properly appended to. Once the witness has verified this prefix tree update proof, it signs the root.


    The sequence of updating a site’s assets and serving the site with transparency enabled.

    The client-side verification procedure is as in the previous section, with two modifications:

    1. The client now verifies two inclusion proofs: one for the integrity policy’s membership in the site log, and one for the site log’s membership in a prefix tree.

    2. The client verifies the signature over the prefix tree root, since the witness no longer signs individual chain hashes. As before, the acceptable public keys are whichever witnesses the client trusts.

    Signaling transparency. Now that there is a single source of truth, namely the prefix tree, a client can know a site is enrolled in transparency by simply fetching the site’s entry in the tree. This alone would work, but it violates our requirement of “no added round trips,” so we instead require that client browsers ship with the list of sites included in the prefix tree. We call this the transparency preload list.


    If a site appears in the preload list, the browser will expect it to provide an inclusion proof in the prefix tree, or else a proof of non-inclusion in a newer version of the prefix tree, thereby showing they’ve unenrolled. The site must provide one of these proofs until the last preload list it appears in has expired. Finally, even though the preload list is derived from the prefix tree, there is nothing enforcing this relationship. Thus, the preload list should also be published transparently.

    Filling in Missing Properties

    Remember we still have the requirements of monitorability, opt-out being transparent, and no single point of failure/trust. We fill in those details now.

    Adding monitorability. So far, in order for a site operator to ensure their site was not hijacked, they would have to constantly query every transparency service for its domain and verify that it hasn’t been tampered with. This is certainly better than the 500k events per hour that CT monitors have to ingest, but it still requires the monitor to be constantly polling the prefix tree, and it imposes a constant load for the transparency service.

    We add a field to the prefix tree leaf structure: the leaf now stores a “created” timestamp, containing the time the leaf was created. Witnesses ensure that the “created” field remains the same over all leaf updates (and it is deleted when the leaf is deleted). To monitor, a site operator need only keep the last observed “created” and “log size” fields of its leaf. If it fetches the latest leaf and sees both unchanged, it knows that no changes occurred since the last check.
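    The resulting monitoring check is just a comparison of two small fields. The names and types here are illustrative:

```python
from dataclasses import dataclass

@dataclass
class LeafSnapshot:
    created: int   # Unix timestamp, fixed at enrollment
    log_size: int  # number of entries in the site's log

def nothing_changed(latest: LeafSnapshot, last_seen: LeafSnapshot) -> bool:
    """The operator's cheap poll: if both fields match the last
    observation, no update occurred since the last check."""
    return (latest.created == last_seen.created
            and latest.log_size == last_seen.log_size)
```

    Checking `created` as well as `log_size` matters: a delete-and-re-enroll attack could restore the original log size, but re-enrollment produces a fresh `created` timestamp, so the monitor still catches it.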

    Adding transparency of opt-out. We must also do the same thing as above for leaf deletions. When a leaf is deleted, a monitor should be able to learn when the deletion occurred within some reasonable time frame. Thus, rather than outright removing a leaf, the transparency service responds to unenrollment requests by replacing the leaf with a tombstone value, containing just a “created” timestamp. As before, witnesses ensure that this field remains unchanged until the leaf is permanently deleted (after some visibility period) or re-enrolled.

    Permitting multiple transparency services. Since we require that there be no single point of failure or trust, we imagine an ecosystem where there are a handful of non-colluding, reasonably trustworthy transparency service providers, each with their own prefix tree. Like Certificate Transparency (CT), this set should not be too large. It must be small enough that reasonable levels of trust can be established, and so that independent auditors can reasonably handle the load of verifying all of them.

    Ok that’s the end of the most technical part of this post. We’re now going to talk about how to tweak this system to provide all kinds of additional nice properties.

    (Not) Achieving Consistency

    Transparency would be useless if, every time a site updates, it serves 100,000 new versions of itself. Any auditor would have to go through every single version of the code in order to ensure no user was targeted with malware. This is bad even if the velocity of versions is lower. If a site publishes just one new version per week, but every version from the past ten years is still servable, then users can still be served extremely old, potentially vulnerable versions of the site, without anyone knowing. Thus, in order to make transparency valuable, we need consistency, the property that every browser sees the same version of the site at a given time.

    We will not achieve the strongest version of consistency, but it turns out that weaker notions are sufficient for us. If, unlike the above scenario, a site had 8 valid versions of itself at a given time, then that would be pretty manageable for an auditor. So even though it’s true that users don’t all see the same version of the site, they will all still benefit from transparency, as desired.

    We describe two types of inconsistency and how we mitigate them.

    Tree Inconsistency

    Tree inconsistency occurs when transparency services’ prefix trees disagree on the chain hash of a site, thus disagreeing on the history of the site. One way to fully eliminate this is to establish a consensus mechanism for prefix trees. A simple one is majority voting: if there are five transparency services, a site must present three tree inclusion proofs to a user, showing the chain hash is present in three trees. This, of course, triples the tree inclusion proof size, and lowers the fault tolerance of the entire system (if three log operators go down, then no transparent site can publish any updates).

    Instead of consensus, we opt to simply limit the amount of inconsistency by limiting the number of transparency services. In 2025, Chrome trusts eight Certificate Transparency logs. A similar number of transparency services would be fine for our system. Plus, it is still possible to detect and prove the existence of inconsistencies between trees, since roots are signed by witnesses. So if it becomes the norm to use the same version on all trees, then social pressure can be applied when sites violate this.

    Temporal Inconsistency

    Temporal inconsistency occurs when a user gets a newer or older version of the site (both still unexpired), depending on some external factors such as geographic location or cookie values. In the extreme, as stated above, if a signed prefix root is valid for ten years, then a site can serve a user any version of the site from the last ten years.

    As with tree inconsistency, this can be resolved using consensus mechanisms. If, for example, the latest manifest were published on a blockchain, then a user could fetch the latest blockchain head and ensure they got the latest version of the site. However, this incurs an extra network round trip for the client, and requires sites to wait for their hash to get published on-chain before they can update. More importantly, building this kind of consensus mechanism into our specification would drastically increase its complexity. We’re aiming for v1.0 here.

    We mitigate temporal inconsistency by requiring reasonably short validity periods for witness signatures. Making prefix root signatures valid for, e.g., one week would drastically limit the number of simultaneously servable versions. The cost is that site operators must now query the transparency service at least once a week for the new signed root and inclusion proof, even if nothing in the site changed. The sites cannot skip this, and the transparency service must be able to handle this load. This parameter must be tuned carefully.
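    The client-side freshness rule is a simple window check. The one-week figure is the example from the text, not a fixed parameter:

```python
ONE_WEEK = 7 * 24 * 3600  # example validity period; a tunable parameter

def root_signature_valid(sig_timestamp: int, now: int,
                         validity: int = ONE_WEEK) -> bool:
    """Accept a witnessed prefix root only while its signature timestamp
    is within the validity window (and not from the future)."""
    return 0 <= now - sig_timestamp <= validity
```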

    Beyond Integrity, Consistency, and Transparency

    Providing integrity, consistency, and transparency is already a huge endeavor, but there are some additional app store-like security features that can be integrated into this system without too much work.

    Code Signing

    One problem that WAICT doesn’t solve is that of provenance: where did the code the user is running come from, precisely? In settings where audits of code happen frequently, this is not so important, because some third party will be reading the code regardless. But for smaller self-hosted deployments of open-source software, this may not be viable. For example, if Alice hosts her own version of Cryptpad for her friend Bob, how can Bob be sure the code matches the real code in Cryptpad’s GitHub repo?

    WEBCAT. The folks at the Freedom of Press Foundation (FPF) have built a solution to this, called WEBCAT. This protocol allows site owners to announce the identities of the developers that have signed the site’s integrity manifest, i.e., have signed all the code and other assets that the site is serving to the user. Users with the WEBCAT plugin can then see the developer’s Sigstore signatures, and trust the code based on that.

    We’ve made WAICT extensible enough to fit WEBCAT inside and benefit from the transparency components. Concretely, we permit manifests to hold additional metadata, which we call extensions. In this case, the extension holds a list of developers’ Sigstore identities. To be useful, browsers must expose an API for browser plugins to access these extension values. With this API, independent parties can build plugins for whatever feature they wish to layer on top of WAICT.

    Cooldown

    So far we have not built anything that can prevent attacks in the moment. An attacker who breaks into a website can still delete any code-signing extensions, or just unenroll the site from transparency entirely, and continue with their attack as normal. The unenrollment will be logged, but the malicious code will not be, and by the time anyone sees the unenrollment, it may be too late.

    To prevent spontaneous unenrollment, we can enforce unenrollment cooldown client-side. Suppose the cooldown period is 24 hours. Then the rule is: if a site appears on the preload list, then the client will require that either 1) the site have transparency enabled, or 2) the site have a tombstone entry that is at least 24 hours old. Thus, an attacker will be forced to either serve a transparency-enabled version of the site, or serve a broken site for 24 hours.

    Similarly, to prevent spontaneous extension modifications, we can enforce extension cooldown on the client. We will take code signing as an example, saying that any change in developer identities requires a 24-hour waiting period to be accepted. First, we require that the dev-ids extension has a preload list of its own, letting the client know which sites have opted into code signing (if a preload list doesn’t exist, then any site can delete the extension at any time). The client rule is as follows: if the site appears in the preload list, then both 1) dev-ids must exist as an extension in the manifest, and 2) dev-ids-inclusion must contain an inclusion proof showing that the current value of dev-ids was in a prefix tree that is at least 24 hours old. With this rule, a client will reject values of dev-ids that are newer than a day. If a site wants to delete dev-ids, it must 1) request that it be removed from the preload list, and 2) in the meantime, replace the dev-ids value with the empty string and update dev-ids-inclusion to reflect the new value.

    Deployment Considerations

    There are a lot of distinct roles in this ecosystem. Let’s sketch out the trust and resource requirements for each role.

    Transparency service. These parties store metadata for every transparency-enabled site on the web. If there are 100 million domains, and each entry is 256B (a few hashes, plus a URL), this comes out to 26GB for a single tree, not including the intermediate hashes. To prevent size blowup, there would probably have to be a pruning rule that unenrolls sites after a long inactivity period. Transparency services should have largely uncorrelated downtime, since, if all services go down, no transparency-enabled site can make any updates. Thus, transparency services must have a moderate amount of storage, be relatively highly available, and have downtime periods uncorrelated with each other.

    Transparency services require some trust, but their behavior is narrowly constrained by witnesses. Theoretically, a service can replace any leaf’s chain hash with its own, and the witness will validate it (as long as the consistency proof is valid). But such changes are detectable by anyone that monitors that leaf.

    Witness. These parties verify prefix tree updates and sign the resulting roots. Their storage costs are similar to that of a transparency service, since they must keep a full copy of a prefix tree for every transparency service they witness. Also like the transparency services, they must have high uptime. Witnesses must also be trusted to keep their signing key secret for a long period of time, at least long enough to permit browser trust stores to be updated when a new key is created.

    Asset host. These parties carry little trust. They cannot serve bad data, since any query response is hashed and compared to a known hash. The only malicious behavior an asset host can do is refuse to respond to queries. Asset hosts can also do this by accident due to downtime.

    Client. This is the most trust-sensitive part. The client is the software that performs all the transparency and integrity checks. This is, of course, the web browser itself. We must trust this.

    We at Cloudflare would like to contribute what we can to this ecosystem. It should be possible for us to run both a transparency service and a witness. Of course, our witness should not vouch for our own transparency service. Rather, we can witness other organizations’ transparency services, and our transparency service can be witnessed by other organizations.

    Supporting Alternate Ecosystems

    WAICT should be compatible with non-standard ecosystems, ones where the large players do not really exist, or at least not in the way they usually do. We are working with the FPF on defining transparency for alternate ecosystems with different network and trust environments. The primary example we have is that of the Tor ecosystem.

    A paranoid Tor user may not trust existing transparency services or witnesses, and there might not be any other trusted party with the resources to self-host these functionalities. For this use case, it may be reasonable to put the prefix tree on a blockchain somewhere. This makes the usual domain validation impossible (there’s no validator server to speak of), but this is fine for onion services. Since an onion address is just a public key, a signature is sufficient to prove ownership of the domain.

    One consequence of a consensus-backed prefix tree is that witnesses are now unnecessary, and only the single, canonical transparency service is needed. This mostly solves the problem of tree inconsistency, at the expense of update latency.

    Next Steps

    We are still very early in the standardization process. One of the more immediate next steps is to get subresource integrity working for more data types, particularly WASM and images. After that, we can begin standardizing the integrity manifest format. And then after that we can start standardizing all the other features. We intend to work on this specification hand-in-hand with browsers and the IETF, and we hope to have some exciting betas soon.

    In the meantime, you can follow along with our transparency specification draft, check out the open problems, and share your ideas. Pull requests and issues are always welcome!

    Acknowledgements

    Many thanks to Dennis Jackson from Mozilla for the lengthy back-and-forth meetings on design, to Giulio B and Cory Myers from FPF for their immensely helpful influence and feedback, and to Richard Hansen for great feedback.

    Read More

  • Architectures, Risks, and Adoption: How to Assess and Choose the Right AI-SOC Platform

    Architectures, Risks, and Adoption: How to Assess and Choose the Right AI-SOC Platform

    Scaling the SOC with AI – Why now? 
    Security Operations Centers (SOCs) are under unprecedented pressure. According to SACR’s AI-SOC Market Landscape 2025, the average organization now faces around 960 alerts per day, while large enterprises manage more than 3,000 alerts daily from an average of 28 different tools. Nearly 40% of those alerts go uninvestigated, and 61% of security teams admit

    Read More

  • Hackers Deploy Linux Rootkits via Cisco SNMP Flaw in “Zero Disco” Attacks

    Hackers Deploy Linux Rootkits via Cisco SNMP Flaw in “Zero Disco” Attacks

    Cybersecurity researchers have disclosed details of a new campaign that exploited a recently disclosed security flaw impacting Cisco IOS Software and IOS XE Software to deploy Linux rootkits on older, unprotected systems.
    The activity, codenamed Operation Zero Disco by Trend Micro, involves the weaponization of CVE-2025-20352 (CVSS score: 7.7), a stack overflow vulnerability in the Simple

    Read More

  • Beware the Hidden Costs of Pen Testing

    Beware the Hidden Costs of Pen Testing

    Penetration testing helps organizations ensure IT systems are secure, but it should never be treated as a one-size-fits-all exercise. Traditional approaches can be rigid and cost your organization time and money – while producing inferior results.
    The benefits of pen testing are clear. By empowering “white hat” hackers to attempt to breach your system using similar tools and techniques to

    Read More

  • Mango discloses data breach at third-party provider

    Mango has reported a data breach at one of its external marketing service providers. The Spanish fashion retailer says that only personal contact information has been exposed—no financial data.

    The breach took place at the service provider and did not affect Mango’s own systems. According to the breach notification, the stolen information was limited to:

    • First name (but not last name)
    • Country
    • Postal code
    • Email address
    • Telephone number

    “Under no circumstances has your banking information, credit cards, ID/passport, or login credentials or passwords been compromised.”

    Because Mango operates in more than 100 countries, affected individuals could be located across multiple regions where Mango markets to customers through its external partner. As Mango has not named the third-party provider or disclosed how many customers were affected, we cannot precisely identify where these customers are located.

    Mango has not released any details about the attackers behind the breach. Although the stolen data itself does not pose an immediate risk, cybercriminals often follow breaches like this with phishing campaigns, exploiting the limited personal information they obtained.

    We’ll update this story if Mango releases more information about the breach or the customers impacted.

    Protecting yourself after a data breach

    Affected customers say they have received a data breach notification; we have seen screenshots of it in Spanish and English.

    If you think you have been the victim of a data breach, here are steps you can take to protect yourself:

    • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened and follow any specific advice it offers.
    • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
    • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
    • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the company’s website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
    • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
    • Consider not storing your card details. It’s definitely more convenient to let sites remember your card details, but we highly recommend not storing that information on websites.
    • Set up identity monitoring, which alerts you if your personal information is found being traded illegally online and helps you recover after.

    Check your digital footprint

    Malwarebytes has a free tool for you to check how much of your personal data has been exposed online. Submit your email address (it’s best to give the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report and recommendations.

    Read More