Methodology | Gridinsoft LLC

Methodology

Last updated 2025-10-31
Overview

Our approach is built on evidence, transparency, and continuous refinement. We analyze malware, assess website safety, and publish reports through clear, repeatable processes.


Every conclusion is backed by data and peer review. As tactics evolve, we update engines, sources, and validation techniques to keep findings current and reliable.

1) Data Collection

We ingest files, URLs, and domains from trusted channels: user submissions, curated feeds, and partner reports. Suspicious items are executed in isolated sandboxes to observe behavior.


Context data—passive DNS, WHOIS/registrar records, hosting/ASN details—and verified user reports round out the picture before any judgment is made.

2) Data Correlation & Validation

Indicators of compromise (hashes, URLs, IPs, domains) are cross-checked against independent sources such as blacklists, threat intelligence feeds, and registries. We prioritize primary evidence and note conflicts.
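The cross-checking step above can be sketched as a simple corroboration tally. The source names and verdict labels here are illustrative assumptions, not our actual feed integrations:

```python
# Hypothetical sketch: cross-check one indicator of compromise (IOC)
# against several independent sources, counting corroborations and
# flagging conflicts for analyst review. Names are illustrative.

def correlate_ioc(ioc: str, sources: dict) -> dict:
    """sources maps a source name to its verdict:
    'malicious', 'clean', or 'unknown'."""
    malicious = [s for s, v in sources.items() if v == "malicious"]
    clean = [s for s, v in sources.items() if v == "clean"]
    return {
        "ioc": ioc,
        "corroborations": len(malicious),
        # disagreement between sources is surfaced, not silently resolved
        "conflict": bool(malicious) and bool(clean),
        "sources_flagging": sorted(malicious),
    }

result = correlate_ioc(
    "evil.example",  # placeholder indicator
    {"blacklist_a": "malicious", "feed_b": "malicious", "registry_c": "clean"},
)
```

Conflicting verdicts are kept visible so an analyst can weigh primary evidence rather than letting a majority vote decide.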


Behavioral analysis complements signatures: file activity, persistence, and network communications. Automated checks are followed by analyst review to reduce false positives.

3) Scoring & Evaluation

Findings are assigned confidence based on corroboration, recency, and consistency of signals. For websites, we classify risk on a 1–100 scale using factors like hosting quality, WHOIS integrity, blacklist status, and user feedback.
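A weighted combination of the website factors above could look like the following. The factor names, weights, and inputs are assumptions for illustration only; they are not our production scoring model:

```python
# Illustrative sketch of a weighted 1-100 website risk score.
# Weights and factor values are made-up placeholders.

WEIGHTS = {
    "hosting_quality": 0.25,   # poor or abuse-tolerant hosting raises risk
    "whois_integrity": 0.20,   # hidden or inconsistent registration data
    "blacklist_status": 0.35,  # presence on independent blacklists
    "user_feedback": 0.20,     # verified negative user reports
}

def risk_score(factors: dict) -> int:
    """Combine per-factor risk values (0.0 safe .. 1.0 risky)
    into a single score on a 1-100 scale."""
    raw = sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)
    return max(1, min(100, round(raw * 100)))

score = risk_score({
    "hosting_quality": 0.8,
    "whois_integrity": 0.6,
    "blacklist_status": 1.0,
    "user_feedback": 0.5,
})
# 0.25*0.8 + 0.20*0.6 + 0.35*1.0 + 0.20*0.5 = 0.77 -> 77
```

Clamping to the 1-100 range keeps the output comparable across sites even when some factors are missing.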


Key risk indicators—phishing patterns, malware delivery, anomalous behavior—are highlighted so readers and defenders can act quickly.

4) Report Generation

Reports start with a plain-language summary and recommended actions, followed by methods, indicators, and limitations. Where safe, we include hashes, timestamps, and annotated screenshots.


For malware, we document remediation (quarantine, rollback, removal). For websites, we outline block/report options and safety guidance.

5) Re-checking & Continuous Monitoring

High-risk items are reviewed on accelerated schedules; stable entries follow a defined cadence. Each page shows first-seen and last-reviewed times.
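Risk-tiered review scheduling of this kind can be sketched as follows. The specific intervals are placeholder assumptions, not our actual cadence:

```python
# Hedged sketch: next-review scheduling driven by risk level.
# The intervals below are illustrative, not real policy values.

from datetime import datetime, timedelta

REVIEW_INTERVALS = {
    "high": timedelta(days=1),    # accelerated schedule for high-risk items
    "medium": timedelta(days=7),
    "low": timedelta(days=30),    # stable entries follow a slower cadence
}

def next_review(last_reviewed: datetime, risk: str) -> datetime:
    """Compute when an entry is due for re-checking."""
    return last_reviewed + REVIEW_INTERVALS[risk]

last = datetime(2025, 10, 31)
due_high = next_review(last, "high")
due_low = next_review(last, "low")
```

Storing both first-seen and last-reviewed timestamps, as each page does, makes this schedule auditable by readers.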


Signals are re-weighted as threats shift. User feedback is triaged, verified, and—when warranted—used to update conclusions.

Tools & Technology

Our stack combines proprietary detection (malware scanner, heuristics/ML, sandboxing) with respected external intelligence.


Our Reputation Intelligence API unifies WHOIS, DNS/passive DNS, blacklist data, and vetted reports to inform domain assessments in real time.
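As a rough illustration of unifying these signals, one possible aggregation shape is sketched below. This is not the actual Reputation Intelligence API; every field name, threshold, and verdict label is a hypothetical assumption:

```python
# Illustrative only: one possible way to fold WHOIS, passive DNS,
# blacklist, and report signals into a single domain verdict.
# NOT the real Reputation Intelligence API.

from dataclasses import dataclass

@dataclass
class DomainSignals:
    whois_age_days: int      # very young domains warrant more scrutiny
    passive_dns_hosts: int   # churn across many hosts is suspicious
    blacklisted: bool        # present on any independent blacklist
    verified_reports: int    # vetted negative user reports

def assess(signals: DomainSignals) -> str:
    if signals.blacklisted or signals.verified_reports >= 3:
        return "malicious"
    if signals.whois_age_days < 30 and signals.passive_dns_hosts > 5:
        return "suspicious"
    return "no_evidence"

verdict = assess(DomainSignals(whois_age_days=10, passive_dns_hosts=8,
                               blacklisted=False, verified_reports=0))
```

The point of the sketch is the layering: hard evidence (blacklists, verified reports) dominates, while weaker contextual signals only ever raise a "suspicious" flag.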

Independence & Integrity

Research outcomes are not for sale. We do not accept payment to alter or remove ratings or reports.


All publications undergo internal peer review and follow privacy-by-design: minimal collection, redaction of PII, and user controls over telemetry.

Updates & Changes

Methodology reviews occur regularly to reflect new attacker tactics, datasets, and product capabilities.


Significant changes that affect reports or scoring are documented and communicated for transparency.

Contact

Research questions: [email protected]


Feedback or concerns: [email protected]