EDR Telemetry Project: From Misleading to Actively Deceptive?

The EDR Telemetry Project's website tells visitors to "validate detection logic" and endorses its use for guiding procurement decisions. The disclaimers saying it shouldn't be used that way exist only on GitHub. Public feedback suggesting the site clarify that it can't be used to assess detection was ignored.

Last September, I published "Why the EDR Telemetry Project is Misleading." The goal was to point out that the project scores EDR products on whether they collect a given telemetry event, but not on whether that telemetry can actually be used for detection or response. A "fully implemented" score means the data exists somewhere in the vendor's pipeline. Whether an analyst can write a rule against it or take action based on it is irrelevant to the score.

The project's creator responded by accusing me of spreading misinformation. His position was that the project had never been about detection and had only ever been about telemetry collection. I wrote a follow-up pointing out that his own announcement blog post listed helping users "better understand the data collected and use it to build custom detection rules" as a primary use case. When I asked him for a link to the specific criteria an event must meet to qualify as "fully implemented," he blocked me.

That was six months ago.

How the Project Gets Used

The EDR Telemetry Project gets referenced in security communities on a near-weekly basis, and almost always the same way: as evidence that one EDR product is better than another.

A few examples from Reddit:

  • In r/crowdstrike, a user wrote: "CrowdStrike is number 1. I saw someone post the edr-telemetry and thats the best resource you can use to compare products."
  • In r/sysadmin, someone asked about Arctic Wolf Aurora. The response was the scores page URL and three words: "Show them this."
  • When someone asked for "trustworthy tests for EDRs" in r/cybersecurity, the reply was the project URL with zero additional context.
  • A user looking for endpoint security recommendations received the link with the note: "MDE is way up there."
  • In r/DefenderATP, someone asking for opinions between Defender and SentinelOne was pointed to the telemetry comparison table as the basis for their decision.

None of these comments mention the project's disclaimers. None note that the scores only measure whether telemetry is collected. Every one of them treats the project as a definitive source for comparing EDR products.

There are many more where these came from, across social media sites other than Reddit.

What the Website Says Now

Having seen the project cited for detection capability yet again just today, I went back to edr-telemetry.com to check whether the project had been updated with clearer guidance about its actual scope.

Instead of disclaimers, I found this on the homepage:

[Image: Screenshot 2026-03-05 at 11.32.39 PM.png]

The headline reads "Make Evidence-Based EDR Decisions." One callout box tells visitors to "Deep dive into specific signals in Telemetry Categories to validate detection logic."

[Image: Screenshot 2026-03-05 at 11.48.34 PM.png]

It turns out the scoring hasn't been updated either. A major EDR product that limits analysts to five usable events still sits among the top-ranked solutions.

Every category on the Telemetry Categories page includes a "Detection Examples" section listing specific threats (process injection, lateral movement, ransomware encryption, malware beaconing, credential theft). Each one also has a "Security Benefits" section describing what the telemetry helps detect. The word "detection" runs through the entire page as a core part of the project's value proposition.

[Image: Screenshot 2026-03-05 at 11.50.32 PM.png]

A first-time visitor would reasonably conclude the project exists to help them compare EDR products, guide procurement, and validate detection capabilities. That's what the website tells them to do in plain language.

Where the Disclaimers Live

He does have disclaimers on GitHub now; none of them made it to the new website.

[Image: Screenshot 2026-03-06 at 12.20.36 AM.png]

  1. The GitHub README: "The data presented reflects only the telemetry capabilities of each product, not their detection or prevention capabilities."
  2. The GitHub Wiki FAQ: "the data in the table do not represent the capability for each of the EDR products to detect or prevent a threat. This is ONLY a comparison in regards to the available telemetry."
  3. A separate Wiki FAQ entry noting the table "focuses on out-of-the-box events and not on signals (detections/correlating events)."
  4. Another Wiki FAQ entry stating the table "does not represent their ability to detect or prevent threats" and "is not an assessment of their overall performance or effectiveness."

Clear statements. All four of them live on GitHub.

The GitHub README's disclaimer includes the line: "For more details, please visit our FAQ page," and links directly to edr-telemetry.com/faq. That FAQ page contains no disclaimer. The README sends readers to the website for more information about the project's limitations. The website provides none.

The Commercial Layer

The project now offers Premium Services.

This is relevant because it changes the relationship between the project and the confusion around its scope. A community research project with vague messaging is a communication problem. A commercial product that sells advisory services while its marketing copy tells visitors to "Make Evidence-Based EDR Decisions" and "validate detection logic," with disclaimers that contradict this language located exclusively on a different platform, is a structural problem.

When someone posts the scores page in a "which EDR should I buy" thread, that generates traffic and visibility for the project and its commercial arm. When a security leader uses the scores to justify a procurement decision, that's a potential consulting lead. I'm not suggesting this is the creator's intent. I am observing that the incentive to keep the scope ambiguous runs in the same direction as the incentive to grow the business.

The Question

My first article called the project misleading. At that time, the website was simpler, and the disclaimers on GitHub were easy to miss, but the website at least didn't actively contradict them.

That is no longer the case. The website now contains explicit calls to use the project for EDR decisions, vendor performance evaluation, procurement, and detection validation, alongside an offering of commercial services. The disclaimers that say the project cannot be used for those purposes are absent from the website and present only on GitHub.

The website's marketing language and the GitHub disclaimers directly contradict each other, and they exist on separate platforms. Whether that gap reflects poor communication or something more intentional is a question I'll leave to the reader.

What I will note is this: six months have passed since the original criticism. The community confusion documented above continues. Not only have disclaimers still not been added to the website, the messaging has gotten actively worse: it now suggests the project be used primarily for detection, which, as the original article pointed out, is the one thing it specifically cannot be used for.

His contributions to TheDFIRReport (https://thedfirreport.com) are well-regarded. But nothing in his background suggests hands-on experience operating an EDR at scale, whether in a large enterprise or across an MSSP's client base. Anyone who has would know that detection teams, and especially midsize MDR/MSSPs supporting multiple EDR products, routinely offset gaps in EDR telemetry collection with Sysmon, particularly when working with EDR tools that have limited detection capability.
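For readers unfamiliar with that practice, a minimal sketch of what the Sysmon offset looks like: Sysmon's XML configuration lets a team collect, say, process-creation and network-connection events on the endpoint independently of whatever the EDR chooses to expose. The filters below are purely illustrative examples, not a recommended or hardened production config.

```xml
<!-- Illustrative Sysmon config sketch: collect process creation (Event ID 1)
     and network connections (Event ID 3) regardless of EDR coverage.
     Filter values are examples only, not production guidance. -->
<Sysmon schemaversion="4.50">
  <EventFiltering>
    <RuleGroup name="" groupRelation="or">
      <!-- Log all process creation except a known-noisy example exclusion -->
      <ProcessCreate onmatch="exclude">
        <Image condition="is">C:\Windows\System32\svchost.exe</Image>
      </ProcessCreate>
    </RuleGroup>
    <RuleGroup name="" groupRelation="or">
      <!-- Log outbound connections on a port of interest (example: 443) -->
      <NetworkConnect onmatch="include">
        <DestinationPort condition="is">443</DestinationPort>
      </NetworkConnect>
    </RuleGroup>
  </EventFiltering>
</Sysmon>
```

The point is that these events land in the Windows event log under the analyst's control, so rules can be written against them directly, which is exactly the property a "fully implemented" telemetry score does not guarantee.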
