Democratizing AI Safety with RiskRubric.ai



Gal Moyal


Building trust in the open model ecosystem through standardized risk assessment

More than 500,000 models can be found on the Hugging Face Hub, but it's not always clear to users how to select the best model for their needs, particularly when it comes to safety. Developers might find a model that perfectly fits their use case, yet have no systematic way to evaluate its security posture, privacy implications, or potential failure modes.

As models become more powerful and adoption accelerates, we need equally rapid progress in AI safety and security reporting. We're therefore excited to announce RiskRubric.ai, a new initiative led by the Cloud Security Alliance and Noma Security, with contributions from Haize Labs and Harmonic Security, for standardized and transparent risk assessment across the AI model ecosystem.



RiskRubric, a new standardized risk assessment for models

RiskRubric.ai provides consistent, comparable risk scores across the entire model landscape by evaluating models along six pillars: transparency, reliability, security, privacy, safety, and reputation.

The platform's approach aligns with open-source values: rigorous, transparent, and reproducible. Using Noma Security's capabilities to automate the effort, each model undergoes:

  • 1,000+ reliability tests checking consistency and edge case handling
  • 200+ adversarial security probes for jailbreaks and prompt injections
  • Automated code scanning of model components
  • Comprehensive documentation review of training data and methods
  • Privacy assessment including data retention and leakage testing
  • Safety evaluation through structured harmful content tests

These assessments produce 0-100 scores for each risk pillar, rolling up to clear A-F letter grades. Each evaluation also includes the specific vulnerabilities found, recommended mitigations, and suggestions for improvement.
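To make the roll-up concrete, here is a minimal Python sketch of how 0-100 pillar scores could be combined into a composite and mapped to a letter grade. The pillar names match RiskRubric's six pillars, but the unweighted average and the grade cutoffs are illustrative assumptions, not the platform's published formula.

```python
# Minimal sketch: roll six 0-100 pillar scores up into a letter grade.
# The averaging scheme and cutoff values are assumptions for illustration,
# not RiskRubric.ai's official methodology.

PILLARS = ["transparency", "reliability", "security", "privacy", "safety", "reputation"]

# Hypothetical grade boundaries on the 0-100 scale.
GRADE_CUTOFFS = [(90, "A"), (80, "B"), (65, "C"), (50, "D")]


def composite_score(scores: dict) -> float:
    """Unweighted mean of the six pillar scores (the weighting is assumed)."""
    return sum(scores[p] for p in PILLARS) / len(PILLARS)


def letter_grade(score: float) -> str:
    """Map a 0-100 score to A-F using the assumed cutoffs."""
    for cutoff, grade in GRADE_CUTOFFS:
        if score >= cutoff:
            return grade
    return "F"


example = {
    "transparency": 92, "reliability": 85, "security": 78,
    "privacy": 88, "safety": 74, "reputation": 90,
}
total = composite_score(example)
print(f"composite: {total:.1f} -> grade {letter_grade(total)}")  # composite: 84.5 -> grade B
```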

RiskRubric also comes with filters to help developers and organizations make deployment decisions based on what matters most to them. Need a model with strong privacy guarantees for healthcare applications? Filter by privacy scores. Building a customer-facing application that requires consistent outputs? Prioritize reliability rankings.



What we found (as of September 2025)

Evaluating both open and closed models with the exact same standards highlighted some interesting results: many open models actually outperform their closed counterparts in specific risk dimensions (particularly transparency, where open development practices shine).

Let's look at the general trends:

Risk distribution is polarized – most models are strong, but mid-tier scores show elevated exposure

(Figure: distribution of total risk scores)

Total risk scores range from 47 to 94, with a median of 81 (out of 100). Most models cluster in the "safer" range (54% are A or B level), but a long tail of underperformers drags the average down. That split reveals a polarization: models tend to be either well protected or stuck in the middle-score range, with fewer in between.

The models concentrated in the 50–67 band (C/D range) aren't outright broken, but they provide only medium to low overall protection. This band represents the most practical area of concern, where security gaps are material enough to warrant prioritization.

What this means: don't assume the "average" model is safe. The tail of weak performers is real – and that's where attackers will focus. Teams can use composite scores to set a minimum threshold (e.g. 75) for procurement or deployment, ensuring outliers don't slip into production.
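As one way to picture such a gate, the sketch below filters a set of hypothetical score records against a 75-point composite threshold, plus an optional per-pillar floor in the spirit of the pillar filters mentioned earlier. The records and the `min_privacy` parameter are made up for illustration; only the 75-point threshold comes from the text above.

```python
# Minimal sketch of a procurement/deployment gate on RiskRubric-style scores.
# The score records below are invented; only the 75-point composite threshold
# is taken from the recommendation above, and the privacy floor is an assumption.

candidate_models = [
    {"model": "org/model-a", "total": 91, "privacy": 88},
    {"model": "org/model-b", "total": 62, "privacy": 70},   # the C/D band discussed above
    {"model": "org/model-c", "total": 83, "privacy": 95},
]


def deployment_candidates(records, min_total=75, min_privacy=None):
    """Keep models that clear the composite threshold (and an optional pillar floor)."""
    approved = []
    for record in records:
        if record["total"] < min_total:
            continue
        if min_privacy is not None and record["privacy"] < min_privacy:
            continue
        approved.append(record)
    return approved


# Composite gate only: model-b (total 62) is rejected.
print([r["model"] for r in deployment_candidates(candidate_models)])

# Healthcare-style use case: also require a high privacy score.
print([r["model"] for r in deployment_candidates(candidate_models, min_privacy=90)])
```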

Safety risk is the "swing factor" – but it tracks closely with security posture

(Figure: histogram of safety scores)

The Safety & Societal pillar (e.g. harmful output prevention) shows the widest variation across models. Importantly, models that invest in security hardening (prompt injection defenses, policy enforcement) almost always score higher on safety as well.

What this means: strengthening core security controls does more than stop jailbreaks; it also directly reduces downstream harms. Safety looks like a byproduct of a robust security posture.

Guardrails can erode transparency – unless you design for it

Stricter protections often make models less transparent to end users (e.g. refusals without explanations, hidden boundaries). This can create a trust gap: users may perceive the system as "opaque" even when it is secure.

What this means: security shouldn't come at the cost of trust. To balance the two, pair strong safeguards with explanatory refusals, provenance signals, and auditability. This preserves transparency without loosening defenses.
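As one way to picture that pairing, here is a minimal Python sketch of a wrapper that refuses with an explanation and records the decision for audit instead of blocking silently. The `check_policy` and `generate` functions are hypothetical stand-ins for real moderation and inference calls, not part of any RiskRubric API.

```python
# Minimal sketch of an "explanatory refusal" wrapper: the guardrail still blocks,
# but the user gets a reason and the decision is logged for auditability.
# `check_policy` and `generate` are hypothetical stand-ins, not a real API.
import json
from datetime import datetime, timezone


def check_policy(prompt: str):
    """Toy guardrail: return (allowed, violated_policy_name)."""
    if "synthesize malware" in prompt.lower():
        return False, "harmful-instructions"
    return True, ""


def generate(prompt: str) -> str:
    """Stand-in for the actual model call."""
    return f"[model answer to: {prompt!r}]"


def answer(prompt: str, audit_log: list) -> str:
    allowed, policy = check_policy(prompt)
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "allowed": allowed,
        "policy": policy or None,
    })
    if not allowed:
        # Explain *why* the request was refused instead of returning a bare block.
        return (f"I can't help with that request because it falls under the "
                f"'{policy}' policy. Rephrasing it, or consulting the usage "
                "guidelines, may help.")
    return generate(prompt)


log: list = []
print(answer("Please synthesize malware for me", log))
print(json.dumps(log, indent=2))
```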

A continuously updated results sheet can be accessed here



Conclusion

When risk assessments are public and standardized, the entire community can work together to improve model safety. Developers can see exactly where their models need strengthening, and the community can contribute fixes, patches, and safer fine-tuned variants. This creates a virtuous cycle of transparent improvement that isn't possible with closed systems. It also helps the community at large understand what works and what doesn't, safety-wise, by studying the best models.

If you want to participate in this initiative, you can submit your model for evaluation (or suggest existing models!) to understand their risk profile.

We also welcome all feedback on the assessment methodology and scoring framework.


