Overview
Everythingist Research Dashboard — May 2026.
Salience in this framework is the constellation of behavioral and self-report signals that indicate which identity aspects are most prominent in a participant's self-system. It is operationalized through four independent data streams captured by the participant app (Everythingist):
- Creation order — the sequence in which the participant adds aspects (Linville-style primacy)
- Inter-aspect retrieval latency — time gaps between consecutive aspect creations (cognitive fluency)
- Focal dwell time — active-attention time spent on each aspect (effortful processing)
- Revision behavior — number of edit cycles per aspect (uncertainty / reconsideration)
This specification defines journey-level aggregates of these per-aspect signals, along with cross-stream alignment metrics that test whether implicit (behavioral) and explicit (rating-based) signals converge.
Salience metrics are reported as a profile, not a scalar. Each metric is computed over the per-aspect data within a single journey at the present period (past/future periods are typically lower-fidelity and currently exclude salience-tier aggregates by design).
Novel construct family developed by Sean P. Mullen, PhD, University of Illinois Urbana-Champaign. Sub-components draw on the dwell-time and retrieval-latency literatures (citations per metric).
Data Structure and Conventions
Per-aspect fields (captured by the participant app)
Each aspect carries a dwell block in the v1.7.0+ schema:
```
aspect.dwell: {
  createdAt: ISO timestamp,    // first creation
  committedAt: ISO timestamp,  // commit / finalization
  activeDwellMs: number,       // focal active time (focused window, recent input)
  idleDurationMs: number,      // lifetime idle (credited to all aspects during idle)
  revisionCount: number        // number of edit cycles
}
```
The dashboard's `MetricsEngine.computePeriod()` produces `salienceData[]` and `salienceByIndex{}` — per-aspect arrays/maps containing:
```
{
  index: aspect index in the period array,
  creationTimestamp: parsed from aspect.id (pattern: aspect_TIMESTAMP_COUNT) or null,
  salienceRank: 1 = earliest creation, K = latest,
  interAspectLatency: ms from previous aspect's creation (null for first aspect)
}
```
Journey-level fields (top-level on `computedMetrics[period]`)
```
journey.sessionIntegrity: {
  sessionStartedAt: ISO timestamp,
  totalElapsedMs: number,
  activeDwellMs: number,   // session-total active time
  idleDurationMs: number,  // session-total idle time
  sessionCleanliness: enum,
  ...
}
```
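For implementers, the three structures above map naturally onto TypeScript types. The sketch below uses the field names from the schema; the concrete types chosen for the "ISO timestamp" and "enum" fields (plain strings here) are assumptions for illustration, not part of the spec.

```typescript
// Sketch of the v1.7.0+ structures as TypeScript types. Field names follow
// the schema above; string types for timestamps and the cleanliness enum
// are assumptions.
interface AspectDwell {
  createdAt: string;        // ISO timestamp, first creation
  committedAt: string;      // ISO timestamp, commit / finalization
  activeDwellMs: number;    // focal active time
  idleDurationMs: number;   // lifetime idle
  revisionCount: number;    // number of edit cycles
}

interface SalienceDatum {
  index: number;                     // aspect index in the period array
  creationTimestamp: number | null;  // parsed from aspect.id, or null
  salienceRank: number;              // 1 = earliest creation, K = latest
  interAspectLatency: number | null; // ms from previous creation; null for first
}

interface SessionIntegrity {
  sessionStartedAt: string;   // ISO timestamp
  totalElapsedMs: number;
  activeDwellMs: number;      // session-total active time
  idleDurationMs: number;     // session-total idle time
  sessionCleanliness: string; // enum in the app; string in this sketch
}
```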
Notation
- $K$ = aspect count (`aspectCount`)
- $d_k$ = focal active dwell on aspect $k$ (`aspect.dwell.activeDwellMs`)
- $D = \sum_k d_k$ = total focal active dwell across aspects
- $\ell_k$ = inter-aspect latency for aspect $k$ (`salienceData[k].interAspectLatency`)
- $r_k$ = revision count for aspect $k$ (`aspect.dwell.revisionCount`)
- $imp_k$, $cert_k$ = importance and certainty ratings on aspect $k$ (1–11 scale)
Null Propagation & Edge Cases
Null propagation
- All Tier 1 metrics return `null` when their inputs are unavailable (e.g., legacy journeys lacking the `dwell` block).
- All Tier 2 metrics require both streams populated; otherwise they return `null`.
Edge cases
- $K = 0$ → all salience metrics return null.
- $K = 1$ → most metrics return null (no pair-wise quantities, no rank order beyond trivial).
- All dwell zero → concentration metrics return null.
- Pre-v1.7.0 journeys (no `dwell` block) → Tier 1 returns null; flag with `_legacyJourney: true` on the salience output if implementing.
Tier 1 — Per-Stream Aggregates
These aggregate the four raw data streams into single journey-level numbers. No cross-stream correlation. No new statistical machinery — means, medians, and a Herfindahl-Hirschman concentration index.
1. Median Inter-Aspect Retrieval Latency
Median of the time gaps between consecutive aspect creations: $\mathrm{median}\{\ell_2, \ldots, \ell_K\}$, computed over the $K - 1$ valid latencies (the first aspect has null latency by definition).
Inputs: `salienceData[].interAspectLatency`
- Lower → fluent retrieval — aspects come to mind quickly in sequence.
- Higher → effortful retrieval — gaps between aspects, suggesting deliberation or memory search.
- Median preferred over mean because latencies are heavy-tailed (one long interruption can dominate the mean).
- Reported in milliseconds. For analysis, log-transform recommended.
- Confounds genuine retrieval speed with interface friction (typing latency, UI clicks).
- Cannot distinguish "had to think" from "got distracted" — the ActivityTracker idle filter prevents wholesale idle contamination but micro-distractions still register.
How quickly do roles come to mind, one after another? Short gaps mean your identity system is fluent and easy to recall; long gaps mean each new role takes effort to surface.
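A minimal sketch of the median computation, assuming the `SalienceDatum` shape sketched in the data-structure section; it drops the null first-aspect latency and returns null when no valid latencies remain:

```typescript
// Median of the K-1 valid inter-aspect latencies. Returns null when fewer
// than two aspects exist (no valid latencies), per the edge-case rules.
function medianInterAspectLatencyMs(data: SalienceDatum[]): number | null {
  const latencies = data
    .map((d) => d.interAspectLatency)
    .filter((l): l is number => l !== null)
    .sort((a, b) => a - b);
  if (latencies.length === 0) return null;
  const mid = Math.floor(latencies.length / 2);
  return latencies.length % 2 === 1
    ? latencies[mid]
    : (latencies[mid - 1] + latencies[mid]) / 2;
}
```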
2. First-Aspect Latency
Time from session start to the first aspect creation: $createdAt_1 - sessionStartedAt$, where $createdAt_1$ is the timestamp of the first aspect added and $sessionStartedAt$ comes from `journey.sessionIntegrity`.
Inputs: `aspect.dwell.createdAt` (first aspect) · `journey.sessionIntegrity.sessionStartedAt`
- Low → first identity aspect surfaces immediately at task onset.
- High → participant requires reflection before any identity surfaces.
- Conflates onboarding friction (reading instructions, clicking through scaffolds) with cognitive retrieval delay. Calibrate against demographic/onboarding controls.
- `sessionStartedAt` is set lazily on first aspect creation in pre-v0.23.0 fixtures; for those journeys, this metric is undefined.
How long before any role comes to mind at all? A long delay before the first role may indicate the task feels challenging or unfamiliar — or that the participant is being careful.
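The computation itself is one subtraction; a sketch that returns null for the pre-v0.23.0 lazy-initialization case noted above:

```typescript
// First-aspect latency: ms from session start to the first aspect creation.
// Null when either timestamp is unavailable (e.g., pre-v0.23.0 fixtures).
function firstAspectLatencyMs(
  firstCreatedAt: string | undefined,
  sessionStartedAt: string | undefined
): number | null {
  if (!firstCreatedAt || !sessionStartedAt) return null;
  return Date.parse(firstCreatedAt) - Date.parse(sessionStartedAt);
}
```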
3. Mean Aspect Active Dwell
Average focal active dwell per aspect.
Inputs: `aspect.dwell.activeDwellMs` (per aspect)
- High → on average, the participant spends substantial focal time on each aspect.
- Low → fast, surface-level engagement with each role.
- Sensitive to the ActivityTracker's idle definition (currently `ACTIVE_THINKING_CAP_MS`).
- Aggregated at aspect level; attribute-level dwell is vestigial per the participant app's Build 6.5 design (so no sub-aspect resolution).
On average, how long do you actually sit with each role? Big number means each role got real thought; small number means you moved through them quickly.
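A sketch of the mean, assuming the `AspectDwell` type from the data-structure section:

```typescript
// Mean focal active dwell per aspect: D / K. Null when K = 0.
function meanAspectActiveDwellMs(dwells: AspectDwell[]): number | null {
  if (dwells.length === 0) return null;
  return dwells.reduce((sum, a) => sum + a.activeDwellMs, 0) / dwells.length;
}
```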
4. Dwell Concentration (Herfindahl-Hirschman)
Herfindahl-Hirschman concentration index applied to the per-aspect share of total focal dwell: $HHI = \sum_{k=1}^{K} (d_k / D)^2$, where $D = \sum_k d_k$. Returns null when $D = 0$.
- $K = 1$: HHI = 1 (trivially concentrated).
- Uniform dwell across $K$ aspects: HHI = $1/K$ (e.g., $K = 5$ → HHI = 0.2).
- All dwell on one aspect: HHI = 1.
Inputs: `aspect.dwell.activeDwellMs` (per aspect)
- High → attention concentrated on a few aspects (potential anchor identity).
- Low → attention spread across many aspects (balanced engagement).
- Direct adaptation of the Herfindahl-Hirschman concentration index from market-share economics. Bounded in $[1/K, 1]$.
- For cross-participant comparability, report alongside $K$, or report the normalized form $(HHI - 1/K) / (1 - 1/K)$ rescaled to $[0, 1]$.
Did one role dominate your attention, or did you spread it across all of them? Like an economist's market-concentration measure, applied to where your mental attention went.
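A sketch of the HHI and its normalized form, following the null rules above ($K = 0$ or $D = 0$ → null; the normalized form is undefined at $K = 1$):

```typescript
// HHI over per-aspect shares of total focal dwell: sum of squared shares.
function dwellConcentrationHHI(dwells: AspectDwell[]): number | null {
  const K = dwells.length;
  if (K === 0) return null;
  const D = dwells.reduce((sum, a) => sum + a.activeDwellMs, 0);
  if (D === 0) return null; // all-dwell-zero edge case
  return dwells.reduce((hhi, a) => hhi + (a.activeDwellMs / D) ** 2, 0);
}

// Normalized form (HHI - 1/K) / (1 - 1/K), rescaled to [0, 1] for
// cross-participant comparability; undefined at K = 1.
function normalizedHHI(hhi: number, K: number): number | null {
  return K < 2 ? null : (hhi - 1 / K) / (1 - 1 / K);
}
```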
5. Mean Revision Count
Average number of edit cycles per aspect.
Inputs: `aspect.dwell.revisionCount` (per aspect)
- High → participant revised aspects frequently (uncertainty, perfectionism, exploratory).
- Low → first-pass commitment (confident, decisive, or rushed).
- Trivial typo corrections register as revisions. Cannot distinguish "I changed my mind" from "I fixed a misspelling." A future revision-typology pipeline could decompose this.
- Sensitive to the participant app's revision-detection logic. Document any threshold changes.
On average, how many times did you edit each role? Lots of revisions suggest uncertainty or active exploration; very few suggests confidence — or that the participant moved fast and didn't go back.
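Same shape as the dwell mean; a sketch:

```typescript
// Mean revision count per aspect. Null when K = 0.
function meanRevisionCount(dwells: AspectDwell[]): number | null {
  if (dwells.length === 0) return null;
  return dwells.reduce((sum, a) => sum + a.revisionCount, 0) / dwells.length;
}
```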
Tier 2 — Cross-Stream Alignment
These metrics test whether implicit (behavioral) and explicit (rating-based) signals converge. They are the research-grade contribution of the salience family — descriptive Tier 1 metrics are useful, but a participant whose behavioral and self-report signals diverge is doing something theoretically interesting.
6. Rank–Importance Concordance (Kendall's τ)
Kendall's τ between creation-order rank and self-reported importance.
Computed as Kendall's $\tau_b$ (handles ties), with importance signs inverted so that high importance pairs with low rank (= early creation).
- $\tau \in [-1, 1]$.
- $\tau \approx +1$: participants listed important roles first (concordant).
- $\tau \approx 0$: no relationship between order and importance.
- $\tau \approx -1$: participants listed least-important roles first.
Inputs: `salienceData[].salienceRank` · `aspect.importance` ($imp_k$, 1–11)
- Kendall's $\tau_b$ chosen over Spearman $\rho$ because of high tie frequency in the importance ratings (1–11 scale, many ties expected at $K = 5$–8).
- Requires $K \geq 3$ for any meaningful estimate; return null when $K < 3$.
- The "negate rank or negate importance" choice is convention; document the direction in every report.
Did you list your most important roles first? If yes, your retrieval order tracks how much each role matters to you. If not, something else is driving the order (recency, salience of context, distraction, defensive ordering).
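A sketch of the $\tau_b$ computation; the O($K^2$) pairwise form is adequate at typical $K \leq 10$. Importance is negated so that "important listed early" counts as concordant, matching the sign convention above:

```typescript
// Kendall's tau-b between creation rank and negated importance, with tie
// handling in both variables. Null when K < 3 per the rules above.
function rankImportanceTau(ranks: number[], importance: number[]): number | null {
  const n = ranks.length;
  if (n < 3 || importance.length !== n) return null;
  const y = importance.map((v) => -v); // sign convention: high imp ~ low rank
  let c = 0, d = 0, tiesXOnly = 0, tiesYOnly = 0;
  for (let i = 0; i < n; i++) {
    for (let j = i + 1; j < n; j++) {
      const dx = ranks[i] - ranks[j];
      const dy = y[i] - y[j];
      if (dx === 0 && dy === 0) continue; // tied on both: excluded
      else if (dx === 0) tiesXOnly++;
      else if (dy === 0) tiesYOnly++;
      else if (dx * dy > 0) c++;
      else d++;
    }
  }
  // tau-b = (C - D) / sqrt((C + D + Tx)(C + D + Ty))
  const denom = Math.sqrt((c + d + tiesXOnly) * (c + d + tiesYOnly));
  return denom === 0 ? null : (c - d) / denom;
}
```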
7. Dwell–Importance Correlation
Pearson correlation between per-aspect focal dwell and self-reported importance.
Use Spearman for samples where dwell is heavily right-skewed. Returns null when $K < 3$ or when variance in either stream is zero.
Inputs: `aspect.dwell.activeDwellMs` · `aspect.importance`
- Positive → focal time tracks self-reported importance (behavioral and explicit signals agree).
- Near-zero → time spent does not correspond to importance — possibly indicating reflective processing on low-importance aspects, or vice versa.
- Negative → time spent is anti-correlated with importance — flag for follow-up.
- High dwell on a low-importance aspect can mean either deep reflection or confusion. The metric alone can't disambiguate.
- Heavy-tailed dwell distributions inflate Pearson correlations; report Spearman in supplements.
Did you actually spend more time thinking about the roles you rate as more important? When behavior and ratings agree, you can trust both. When they disagree, one of them is telling a different story than the other.
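A generic Pearson helper covers both this metric and the next; a sketch implementing the null rules stated above:

```typescript
// Pearson correlation. Null when n < 3 or either stream has zero variance.
function pearson(x: number[], y: number[]): number | null {
  const n = x.length;
  if (n < 3 || y.length !== n) return null;
  const mx = x.reduce((a, b) => a + b, 0) / n;
  const my = y.reduce((a, b) => a + b, 0) / n;
  let sxy = 0, sxx = 0, syy = 0;
  for (let i = 0; i < n; i++) {
    sxy += (x[i] - mx) * (y[i] - my);
    sxx += (x[i] - mx) ** 2;
    syy += (y[i] - my) ** 2;
  }
  if (sxx === 0 || syy === 0) return null; // zero-variance stream
  return sxy / Math.sqrt(sxx * syy);
}

// Usage for this metric:
// dwellImportanceCorr = pearson(dwells.map(a => a.activeDwellMs), importance)
```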
8. Revision–Certainty Correlation
Pearson correlation between per-aspect revision count and self-reported certainty. Predicted negative: low-certainty aspects should be revised more often.
Inputs: `aspect.dwell.revisionCount` · `aspect.certainty`
- Strong negative → behavior matches self-report (low-certainty aspects get revised more).
- Near zero → certainty rating is decoupled from revision behavior (potential measurement-validity concern).
- Positive → puzzling pattern; flag for follow-up (high-certainty aspects revised more often — perhaps perfectionism on the items the participant cares about).
- Revision count has the same typo confound as the Tier 1 mean.
- Small samples ($K \approx 5$) produce noisy correlations; report with confidence intervals.
Do you revise the roles you're least certain about? Negative correlation = yes, behavior matches the rating. Other patterns are diagnostic of misalignment between what participants say and what they do.
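This is the same machinery as metric 7; a sketch reusing the `pearson` helper from that section:

```typescript
// Revision-certainty correlation; predicted negative. Reuses pearson() above.
function revisionCertaintyCorr(
  dwells: AspectDwell[],
  certainty: number[]
): number | null {
  return pearson(dwells.map((a) => a.revisionCount), certainty);
}
```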
9. Salience Coherence (Cronbach's α-like)
Cronbach's α over four z-scored salience streams, treated as parallel items measuring the same underlying construct.
Rank's sign is flipped so all four point the same way ("more salient = higher value"). Returns null when $K < 4$ or any stream has zero variance.
- Range: $(-\infty, 1]$, as for Cronbach's α; values $> 0.7$ indicate good convergence and values $< 0$ indicate the streams point in different directions.
Inputs: `salienceData[].salienceRank` · `aspect.dwell.activeDwellMs` · `aspect.dwell.revisionCount` · `aspect.importance`
- High ($\alpha > 0.7$) → all four salience streams agree: the participant's behavioral and explicit signals point at the same most-salient aspects.
- Moderate ($0.3 < \alpha < 0.7$) → partial agreement.
- Low or negative → behavioral and explicit signals diverge — flag for case-level interpretation.
- Cronbach's α treats the four streams as parallel items measuring the same underlying construct ("salience"). The standard psychometric interpretation applies: high α = items measure the same thing.
- The sign convention is critical. Document: rank is reversed so "1st-listed" = high salience.
- α requires $K \geq 4$. For $K < 4$ the metric is undefined.
- Cronbach's α assumes tau-equivalence among the streams (each measures the same construct with equal weight). This assumption is strong here — rank, dwell, revisions, and ratings probably measure overlapping but not identical aspects of salience. Treat α as a coarse summary, not a definitive measurement-validity verdict.
- α can be inflated by stream correlations driven by individual differences in response style rather than identity content.
Across all four signals — what you listed first, what you spent time on, what you revised, and what you rated as important — do they all point at the same roles as most salient? When they agree, you have a coherent salience signal. When they don't, the participant's behavior and ratings are telling different stories.
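A sketch of the α computation, treating each aspect as an observation and each stream as an item. Because the streams are z-scored first (sample SD, rank negated per the sign convention), each item variance is exactly 1, which simplifies the usual formula $\alpha = \frac{m}{m-1}\left(1 - \frac{\sum_i \sigma^2_i}{\sigma^2_{\text{total}}}\right)$:

```typescript
// Cronbach's alpha over the four z-scored salience streams. Null when K < 4,
// when any stream has zero variance, or when the total score has zero
// variance, per the rules above.
function salienceCoherence(
  ranks: number[],
  dwellMs: number[],
  revisions: number[],
  importance: number[]
): number | null {
  const K = ranks.length;
  if (K < 4) return null;
  // Negate rank so all four streams read "more salient = higher value".
  const streams = [ranks.map((r) => -r), dwellMs, revisions, importance];
  const z: number[][] = [];
  for (const s of streams) {
    const mean = s.reduce((a, b) => a + b, 0) / K;
    const sd = Math.sqrt(s.reduce((a, b) => a + (b - mean) ** 2, 0) / (K - 1));
    if (sd === 0) return null; // zero-variance stream
    z.push(s.map((v) => (v - mean) / sd));
  }
  const m = z.length; // 4 items
  // Per-aspect total score across the four items, and its sample variance.
  const totals = Array.from({ length: K }, (_, k) =>
    z.reduce((sum, item) => sum + item[k], 0)
  );
  const tMean = totals.reduce((a, b) => a + b, 0) / K;
  const tVar = totals.reduce((a, b) => a + (b - tMean) ** 2, 0) / (K - 1);
  if (tVar === 0) return null;
  // Each z-scored item has unit variance, so the item-variance sum is m.
  return (m / (m - 1)) * (1 - m / tVar);
}
```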
Interpretation Summary
A compact lookup view across the nine salience metrics, with dashboard keys, tier, and plain-language gloss.
| Metric | Key | Tier | Plain language |
|---|---|---|---|
| Median inter-aspect latency | medianInterAspectLatencyMs | 1 | How fast roles come to mind in sequence |
| First-aspect latency | firstAspectLatencyMs | 1 | How long before any role surfaces |
| Mean aspect active dwell | meanAspectActiveDwellMs | 1 | Average focal time per role |
| Dwell concentration HHI | dwellConcentrationHHI | 1 | Was attention focused on a few, or spread evenly |
| Mean revision count | meanRevisionCount | 1 | Average edits per role |
| Rank–importance τ | salienceRankImportanceTau | 2 | Did important roles come first |
| Dwell–importance corr | dwellImportanceCorr | 2 | Did time spent track importance |
| Revision–certainty corr | revisionCertaintyCorr | 2 | Did uncertain roles get revised more |
| Salience coherence | salienceCoherence | 2 | Do all four salience signals agree |
Methodological Notes
- Salience is a profile across two tiers. Report both tiers together; do not collapse to a scalar.
- Tier 1 metrics describe the journey as-is. Tier 2 tests whether behavior matches self-report.
- Pre-v1.7.0 journeys lack the `dwell` block. Salience metrics return null for those journeys, and any aggregate-level salience reporting should mark legacy journeys with `_legacyJourney: true`.
- All correlations and τ statistics should be reported with sample sizes ($K$) and confidence intervals. Most identity journeys live at $K = 5$–10, putting these correlations in low-power territory.
- For longitudinal salience (across multiple journeys per participant), report period-by-period and snapshot-by-snapshot — do not pool. Salience profiles are expected to change as identity systems evolve.
Reporting Guidelines
- Report Tier 1 as a five-element summary and Tier 2 as a separate paragraph.
- Tier 2 metrics (alignment) should be interpreted in the population, not just per-individual: low average alignment in a cohort is theoretically interesting; low alignment in one participant is a case for follow-up.
- Always report $K$ and the count of valid latencies / dwell streams / etc. when reporting salience metrics. Heavy reliance on $K = 3$ or $K = 4$ estimates needs to be flagged.
- Tier 1 — Retrieval: Median latency · First-aspect latency
- Tier 1 — Attention: Mean dwell · Dwell concentration HHI
- Tier 1 — Effort: Mean revision count
- Tier 2 — Alignment: Rank–importance τ · Dwell–importance · Revision–certainty · Salience coherence
Companion Specifications
Related framework documents that extend or interface with this specification.
How to Cite
This is a living specification. Always include the version number when citing.
Salience specification
Mullen, S. P. (2026). Salience measurement specification (Version 1.0). Self-Complexity Research Network. https://selfcomplexityresearch.org/docs/salience.html
```bibtex
@misc{mullen2026saliencespec,
  author    = {Mullen, Sean P.},
  title     = {Salience Measurement Specification},
  year      = {2026},
  version   = {1.0},
  publisher = {Self-Complexity Research Network},
  url       = {https://selfcomplexityresearch.org/docs/salience.html}
}
```
In-preparation manuscript
Mullen, S. P. (in preparation). Behavioral salience in self-aspect retrieval: A multi-stream measurement framework. University of Illinois Urbana-Champaign.
Citation note
The Salience Measurement family is a novel construct family. It integrates concepts from:
- Cognitive psychology — retrieval fluency, dwell-time methodology
- Psychometrics — Cronbach's α as alignment summary, Kendall τ for ordinal concordance
- Implicit–explicit attitude research — alignment as a substantive variable
- Industrial organization — Herfindahl-Hirschman concentration index, adapted to attention distribution
Novel construct family developed by Sean P. Mullen, PhD, University of Illinois Urbana-Champaign.
References
- Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334. https://doi.org/10.1007/BF02310555
- Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. K. (1998). Measuring individual differences in implicit cognition: The Implicit Association Test. Journal of Personality and Social Psychology, 74(6), 1464–1480. https://doi.org/10.1037/0022-3514.74.6.1464
- Hayes, J. R., & Flower, L. S. (1980). Identifying the organization of writing processes. In L. W. Gregg & E. R. Steinberg (Eds.), Cognitive processes in writing (pp. 3–30). Erlbaum.
- Herfindahl, O. C. (1950). Concentration in the U.S. steel industry [Unpublished doctoral dissertation]. Columbia University.
- Hirschman, A. O. (1964). The paternity of an index. American Economic Review, 54(5), 761–762.
- Just, M. A., & Carpenter, P. A. (1980). A theory of reading: From eye fixations to comprehension. Psychological Review, 87(4), 329–354. https://doi.org/10.1037/0033-295X.87.4.329
- Kendall, M. G. (1938). A new measure of rank correlation. Biometrika, 30(1–2), 81–93. https://doi.org/10.1093/biomet/30.1-2.81
- Loftus, G. R., & Mackworth, N. H. (1978). Cognitive determinants of fixation location during picture viewing. Journal of Experimental Psychology: Human Perception and Performance, 4(4), 565–572. https://doi.org/10.1037/0096-1523.4.4.565
- Marcel, A. J. (1983). Conscious and unconscious perception: Experiments on visual masking and word recognition. Cognitive Psychology, 15(2), 197–237. https://doi.org/10.1016/0010-0285(83)90009-9
- Nelson, T. O., & Narens, L. (1990). Metamemory: A theoretical framework and new findings. In G. H. Bower (Ed.), The psychology of learning and motivation (Vol. 26, pp. 125–173). Academic Press.
- Schacter, D. L. (1987). Implicit memory: History and current status. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13(3), 501–518. https://doi.org/10.1037/0278-7393.13.3.501