Scoring Model
The score is a weighted sum of well-defined metrics. Contrast density and contrast intensity combine into a single contrast term because strong contrast requires both; the remaining terms are linear for clarity. The Window metric is inverted before scoring (lower window = better score). ASET color targets use a deviation model — the optimizer minimizes the difference between actual and desired color percentages.
What goes in
- Metrics — Average Brightness, Contrast Density, Contrast Intensity, Scintillation Score, Compactness, Shannon Entropy, Window, and three ASET color percentages (Blue, Red, Green). The first seven are compared to the Hearts & Arrows reference; ASET colors are compared to user-defined target percentages.
- Weights — The ten sliders (averageBrightness, contrastDensity, contrastIntensity, scintillationScore, compactness, shannonEntropy, window, asetBluePct, asetRedPct, asetGreenPct). Weights are always treated as zero or higher. ASET weights default to 0 (disabled).
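As a rough sketch, the weight set can be modeled as a simple table plus a clamp. The slider names match the panel; the non-ASET default of 0.5 is an assumption borrowed from the balanced preset mentioned later, not a confirmed shipped value.

```python
# Hypothetical weight table. Slider names match the optimizer panel;
# the non-ASET defaults of 0.5 are an assumption, not confirmed values.
DEFAULT_WEIGHTS = {
    "averageBrightness": 0.5,
    "contrastDensity": 0.5,
    "contrastIntensity": 0.5,
    "scintillationScore": 0.5,
    "compactness": 0.5,
    "shannonEntropy": 0.5,
    "window": 0.5,
    "asetBluePct": 0.0,   # ASET weights default to 0 (disabled)
    "asetRedPct": 0.0,
    "asetGreenPct": 0.0,
}

def clamp_weights(weights):
    """Weights are always treated as zero or higher."""
    return {name: max(0.0, w) for name, w in weights.items()}
```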
How the score is built
- Normalize the metrics — Everything is turned into a ratio vs the reference. If a metric is better than the reference, we keep it above 1.0 but gently squash it so runaway values don't dominate.
- Make contrast a single value — Contrast density and intensity are blended with a geometric mix (think "both must be good, one can't carry the other"). If both contrast sliders are equal, this behaves like the square root of (density × intensity). If one slider is higher, that side is favored — but a weak partner still drags the contrast term down.
- Set the overall weights — The two contrast sliders also decide how important contrast is overall: their average becomes the contrast weight. The brightness, scintillation, compactness, and contrast weights are then scaled so they always sum to 1; the entropy, window, and ASET weights are included in the same normalization.
- Take the weighted sum — Final score = brightness part + contrast part + scintillation part + compactness part + entropy part + window part (plus any enabled ASET parts). Higher numbers mean closer to, or better than, the reference cut.
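The steps above can be sketched as follows. The squashing curve and the helper names are assumptions for illustration; only the geometric contrast mix and the weight normalization are described by the text.

```python
import math

def normalize(value, reference):
    # Step 1: ratio vs the reference. Values above 1.0 are kept but
    # gently squashed (log compression here is an assumed curve).
    ratio = value / reference
    return ratio if ratio <= 1.0 else 1.0 + math.log(ratio)

def contrast_term(density, intensity, w_density, w_intensity):
    # Step 2: weighted geometric mix -- both components must be good,
    # so a weak partner drags the blended value down.
    total = w_density + w_intensity
    if total == 0.0:
        return 0.0
    return density ** (w_density / total) * intensity ** (w_intensity / total)

def weighted_sum(parts, weights):
    # Steps 3-4: rescale weights to sum to 1, then take the weighted sum.
    wsum = sum(weights.values())
    return sum((weights[name] / wsum) * parts[name] for name in parts)
```

With equal contrast sliders, `contrast_term(0.81, 0.64, 1.0, 1.0)` equals sqrt(0.81 × 0.64) = 0.72, matching the "square root of (density × intensity)" behavior described above.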
The Window metric
Window measures how much light passes straight through the stone without internal reflection — a "hole" in the optical performance. It deserves special attention because it behaves differently from every other metric.
Why Window is inverted
For all other metrics, higher is better: you want more brightness, more contrast, more scintillation. Window is the opposite — you want less of it. A perfect stone has zero window.
To make Window compatible with the scoring system ("higher score = better stone"), the engine internally flips the value before scoring:
scored value = 1.0 − raw window fraction
A stone with 0% window (perfect) becomes 1.0 — exactly like a metric that matches the reference. A stone with 20% window becomes 0.8, which is scored as "only 80% of perfect." This lets the optimizer treat all seven metrics uniformly: push everything toward 1.0 or higher.
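A minimal sketch of the inversion and the normalization against the Hearts & Arrows reference window of about 0.9% (function names are illustrative, not engine API):

```python
def window_scored(raw_window_fraction):
    # Invert so that a lower window means a higher score.
    return 1.0 - raw_window_fraction

def window_normalized(raw_window_fraction, reference_window=0.009):
    # Normalize against the Hearts & Arrows reference window (~0.9%).
    return window_scored(raw_window_fraction) / window_scored(reference_window)
```

For example, a 20% window scores 0.8 before normalization, and a 15% window normalizes to about 0.858×.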
How to read the numbers
The optimizer normalizes every metric against the Hearts & Arrows reference before displaying it. For Window, this means:
| Design | Raw window | Scored as | Normalized |
|---|---|---|---|
| Hearts & Arrows (reference) | 0.9% | 1.0 − 0.009 = 0.991 | 1.000× |
| Good brilliant | 3% | 1.0 − 0.03 = 0.97 | 0.979× |
| Moderate window | 15% | 1.0 − 0.15 = 0.85 | 0.858× |
| Severe window | 40% | 1.0 − 0.40 = 0.60 | 0.605× |
A Window value of 1.000× in the optimizer table means "as good as H&A." Values below 1.0 mean the design leaks more light. A value of 0.605× means the design retains only about 60% of the optical integrity that the reference achieves.
Using Window with other metrics
Because the reference stone already has a near-perfect window (~1%), the normalized Window score saturates quickly — improvements above ~0.98× are very small in absolute terms. In practice this means:
- If your design already has a small window, the Window term already scores close to 1.0 and there is little room left to improve. Set the weight lower and focus on the metrics that still have headroom.
- If your design has a significant window (say 10–30%), increasing the Window weight will strongly guide the optimizer to close the leak, and you'll see large delta percentages.
- Balanced presets typically set Window to 0.5 alongside the other metrics. This is a good default unless you know your cut style inherently produces a window (e.g., some step cuts).
ASET Color Targets
The Angular Spectrum Evaluation Tool (ASET) analyzes how light enters a gemstone from different angles. ProFacet renders a top-down ASET view and classifies every pixel into one of four colors, each representing a different range of light angles measured from the vertical (the observer's line of sight):
| Color | Angle from vertical | Meaning |
|---|---|---|
| Blue | 0°–15° | Light returns from near the observer's head — the obscuration zone. Indicates contrast and sparkle potential. |
| Red | 15°–45° | Direct light from above — the brilliance zone. The primary source of fire and brightness. |
| Green | 45°–90° | Light from the sides — the environmental zone. Too much green suggests the stone relies on ambient light rather than its own optical performance. |
| Black | — | No light returns. Indicates total internal reflection failure — light leaks out the bottom. |
How ASET targets work
Unlike the other seven metrics (where higher is always better, relative to a reference), ASET scoring uses a target percentage model. You specify how much of the stone's face-up area should show each color, and the optimizer tries to match those targets.
The optimizer panel shows three ASET weight sliders with target inputs:
- ASET Blue % — editable target (default: 20%)
- ASET Red % — editable target (default: 60%)
- ASET Green % — auto-computed as 100 − Blue − Red (shown read-only)
The ASET Black percentage is not a target — it is simply reported. A well-cut stone should have minimal black.
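The read-only Green target follows directly from the two editable ones (function name is illustrative):

```python
def green_target(blue_target_pct, red_target_pct):
    # Green is not set directly; it is the remainder of the face-up
    # area and is shown read-only in the panel.
    return 100.0 - blue_target_pct - red_target_pct
```

With the defaults (Blue 20%, Red 60%) the Green target comes out to 20%.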
ASET scoring formula
For each ASET color, the optimizer computes:
aset_score = 1.0 − |actual% − target%| ÷ 100
This means:
- A perfect match (actual = target) scores 1.0
- Every percentage point of deviation subtracts 0.01 from the score
- The score bottoms out at 0.0 (100% deviation)
The ASET score contribution is then weighted and added to the total score like any other metric:
total += (aset_weight ÷ weight_sum) × aset_score
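The two formulas above can be sketched directly (helper names are assumed for illustration):

```python
def aset_score(actual_pct, target_pct):
    # Deviation model: each percentage point of deviation subtracts
    # 0.01 from the score, with a floor at 0.0.
    return max(0.0, 1.0 - abs(actual_pct - target_pct) / 100.0)

def aset_contribution(actual_pct, target_pct, aset_weight, weight_sum):
    # Weighted and added to the total like any other metric term.
    return (aset_weight / weight_sum) * aset_score(actual_pct, target_pct)
```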
Worked example
Suppose you set Blue target = 20%, Red target = 60% (so Green target = 20%), and all three ASET weights to 0.3. If the stone measures Blue = 18%, Red = 55%, Green = 22%:
| Color | Target | Actual | Deviation | Score |
|---|---|---|---|---|
| Blue | 20% | 18% | 2% | 1.0 − 0.02 = 0.98 |
| Red | 60% | 55% | 5% | 1.0 − 0.05 = 0.95 |
| Green | 20% | 22% | 2% | 1.0 − 0.02 = 0.98 |
Each color contributes (0.3 ÷ weight_sum) × its score to the final total. The optimizer will nudge facet angles to bring the actual percentages closer to the targets.
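The table above can be checked with a short self-contained sketch of the deviation formula:

```python
def aset_score(actual_pct, target_pct):
    # Each percentage point of deviation subtracts 0.01, floored at 0.0.
    return max(0.0, 1.0 - abs(actual_pct - target_pct) / 100.0)

# Worked example: targets Blue 20 / Red 60 / Green 20, measured 18 / 55 / 22.
targets = {"blue": 20, "red": 60, "green": 20}
actuals = {"blue": 18, "red": 55, "green": 22}
scores = {color: aset_score(actuals[color], targets[color]) for color in targets}
# scores["blue"] and scores["green"] come out to 0.98; scores["red"] to 0.95
```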
Typical ASET targets
Good brilliant-cut stones typically show roughly:
- Blue: 15–25% — enough obscuration for contrast
- Red: 50–65% — dominant brilliance
- Green: 15–25% — moderate environmental contribution
- Black: < 5% — minimal light leakage
These defaults match the industry standard for a well-performing round brilliant. Fancy cuts (emerald, cushion, etc.) may have different characteristic distributions.
When to use ASET weights
- Set ASET weights to 0 (the default) if you want the optimizer to focus purely on the optical metrics (brightness, contrast, scintillation).
- Set ASET weights > 0 when you want to steer the stone toward a specific light-performance profile — for example, maximizing red (brilliance) while keeping blue (contrast) in a specific range.
- ASET weights combine naturally with the other metric weights. The optimizer normalizes all weights together, so an ASET weight of 0.3 alongside a brightness weight of 0.4 gives the ASET term three-quarters as much influence as brightness.
Notes
- The contest verifier, Optimizer, and UI all share this exact formula, so what you see is what gets verified.