A researcher's walkthrough of the participant-facing self-complexity instrument — every screen, every choice, every affordance that quietly shapes the data.
A Researcher's Walkthrough of the Self-Complexity Instrument
EXERCISE, TECHNOLOGY & COGNITION LAB
Department of Health and Kinesiology
University of Illinois Urbana-Champaign
Manual Revision 1.0 · 2026
Urbana · Champaign · Illinois
Cite as
Mullen, S. P., & Exercise, Technology, and Cognition Lab. (2026). The Everythingist Self-Space: A researcher’s manual (Manual rev. 1.0; App build 6.5; Schema v1.7.0). University of Illinois Urbana-Champaign. https://selfcomplexityresearch.org/docs/self-space-manual.html
License
This manual is released under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). You are free to share and adapt the work for non-commercial purposes provided appropriate credit is given. The Everythingist Self-Space application is distributed separately under the terms specified in its repository.
Audience
This manual is written for researchers who will deploy the Self-Space in study designs and analyze the resulting JSON exports. A separate participant-facing companion is planned for a later release; that version will reframe the same workflow in plain language for people building their own self-maps.
This manual is one of three. The Everythingist Self-Space Specification v1.9 is the implementation reference. The Self-Complexity Measurement Specification v2.2 documents the formal metric definitions. The Dashboard Specification v1.21 describes the analytics layer that consumes the JSON exports.
Contact
Exercise, Technology & Cognition Lab · Department of Health and Kinesiology
University of Illinois Urbana-Champaign
Correspondence: etclab@illinois.edu
Typography
Set in Newsreader for body text, Inter for display and margin notes, and JetBrains Mono for schema field references. Designed and produced in the ETC Lab.
Contents
What's inside
Preface
List of Figures
1 · What the Self-Space measures, and why it matters
2 · The participant's first impression
3 · The core task: building the past, present, and future maps
4 · The rating system
5 · Reviewing the map: data quality at export and during sessions
6 · Possible selves and future-orientation features
7 · Behind the scenes: how the session is instrumented
8 · Submitting and exporting
9 · What's coming, and known limitations
Glossary
References
Quick Reference Card
Figures
List of figures
Fig 1.1 · App landing screen with the Everythingist Self-Space title and tagline
Fig 2.1 · Landing path-gate modal showing the two path cards
Fig 2.2 · Onboarding modal with temporal-depth scaffolding and the single-sitting callout
Fig 2.3 · The four audience modes crossed with the two export tracks (diagram)
Fig 3.1 · Workspace with the three temporal tabs and an empty Present Self panel
Fig 3.2 · Past, present, and future as parallel maps (diagram)
Fig 3.3 · Reflection scaffold expanded above the input area on the Present tab
Fig 3.4 · Self-aspects, attributes, and shared-attribute overlap (diagram)
Fig 3.5 · An uncommitted aspect row next to a committed row showing full controls
Fig 3.6 · Aspect card with the Carry-to menu open showing target periods
Fig 4.1 · Expanded aspect card showing all sliders, valence dropdown, and attributes
Fig 4.2 · The 11-point rating scale and its unset state (diagram)
Fig 4.3 · An aspect with one slider engaged (in color) and one slider unset (dimmed em-dash)
Fig 5.1 · The four-stage validation gate (diagram)
Fig 5.2 · Pattern review wizard step showing the “extreme anchoring” flag
Fig 5.3 · The three idle thresholds (diagram)
Fig 5.4 · The two-hour mandatory review checklist with several aspects unchecked
Fig 6.1 · Intervention Mode modal with parent/child toggle and four temporal horizons
Fig 7.1 · Visible session timer showing active and idle minute totals
Fig 7.2 · The session cleanliness gradient (diagram)
Fig 8.1 · Export Identity Modal showing the personal/educational and research tracks
Fig 8.2 · PDF preview showing the three side-by-side network visualizations and metrics
Preface
A note from the lab
[Lab director's preface — replace this paragraph with a one-page note placing the Self-Space in the lab's broader research program, acknowledging contributors, and orienting the researcher to the spirit of the tool. Suggested length: 350–450 words. Topics that belong here: the lineage from Linville's H statistic through Sakaki's composite to the present multi-dimensional formulation; the design commitment to participant-facing transparency and process-trace metadata; collaborators across the CORTEX-II, HEATWAVES, and SMASH project teams; and the lab's intentional choice to ship the instrument as a single static HTML file that any institution can host.]
The Self-Space is a research instrument first and an interface second, but the interface is where the data is born. Every screen the participant encounters — the temporal tabs, the dimmed em-dash on an unset slider, the two-hour idle review — was designed against a specific measurement concern. Some of those concerns are decades old (priming, response sets, structural overlap). Others are new to this generation of instruments (idle dwell, contamination logging, the four-stage validation gate). This manual exists to make those design choices legible, so that when a researcher reads an exported JSON file they understand not just what the participant said but the conditions under which they said it.
It is also a working manual. The application is under active development. Schema versions move; features marked as stubs in §9 will eventually ship; some of the affordances described here will be refined or replaced. We have tried to flag what is current, what is provisional, and what is on the roadmap, so that the manual ages cleanly with the tool.
Sean P. Mullen
Director, Exercise, Technology, and Cognition Lab
Associate Professor, Department of Health and Kinesiology
University of Illinois Urbana-Champaign
01 · Chapter 1
What the Self-Space measures, and why it matters
The Self-Space is a structured tool for asking a person how they are organized inside. Most identity instruments treat the self as a list of traits or a single global rating — how good do you feel about yourself today? Self-complexity research treats the self instead as a structure: a set of distinct self-aspects (roles, contexts, relationships, life-domains — "researcher," "parent," "musician," "patient," "friend") that each carry their own bundle of attributes (traits, feelings, behaviors).
A person with many self-aspects whose attributes do not overlap is described as high in self-complexity. A person whose self-aspects all share the same attributes is described as low.
The framework traces back to Patricia Linville's foundational papers in the mid-1980s.
Linville's buffering hypothesis proposes that high self-complexity protects affective wellbeing under stress: when one domain takes a hit — a bad performance review, an injury — the spillover into the rest of the self is dampened because there is more elsewhere for the person to be.
The field has refined that picture considerably over the last four decades. Rafaeli-Mor and colleagues decomposed Linville's H statistic into components — number of aspects and degree of overlap — and showed that simply counting aspects, or measuring overlap directly, often predicts wellbeing more cleanly than H itself.
Sakaki introduced a composite measure (number of aspects divided by overlap) that has become a workhorse in the contemporary literature.
Schleicher and McConnell, and later McConnell, extended self-complexity into spatial and networked formulations, treating the self as a graph in which aspects are nodes and shared attributes are edges.
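The authoritative definitions of the instrument's metrics live in the Self-Complexity Measurement Specification v2.2, but a toy computation helps make the constructs concrete. The Python sketch below uses an invented three-aspect map and one common pairwise operationalization of overlap (shared attributes divided by the attributes of the reference aspect, averaged over ordered pairs); it illustrates the aspects-over-overlap style of composite rather than reproducing the instrument's own formulas.

```python
# Toy illustration of a "number of aspects / overlap" composite.
# Not the instrument's own metric code; see the Measurement Specification
# v2.2 for the authoritative definitions. The map below is invented.
from itertools import permutations

aspects = {
    "researcher": {"curious", "disciplined", "anxious"},
    "parent":     {"caring", "patient", "anxious"},
    "musician":   {"creative", "expressive"},
}

def pairwise_overlap(aspect_attrs):
    """Mean share of the reference aspect's attributes that recur in another aspect."""
    pairs = list(permutations(aspect_attrs.values(), 2))
    if not pairs:
        return 0.0
    return sum(len(a & b) / len(a) for a, b in pairs if a) / len(pairs)

n_aspects = len(aspects)
overlap = pairwise_overlap(aspects)
composite = n_aspects / overlap if overlap else float("inf")
print(f"aspects={n_aspects}  overlap={overlap:.3f}  composite={composite:.1f}")
```

Here the shared attribute "anxious" is the only source of overlap, so the composite is high; as more attributes recur across aspects, overlap rises and the composite falls.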
The Self-Space is the data-collection arm of a measurement system grounded in this lineage. The participant draws the map; the instrument records what they drew, how confident they were, how they felt about each piece, and — critically — the conditions under which they did the drawing.
Everything downstream — the 22 self-complexity metrics, the Identity Strength Index, the longitudinal change detection — depends on the quality of that mapping session. The manual that follows is therefore mostly a manual about that session.
Screenshot pending
FIG 1.1 · App landing screen with the Everythingist Self-Space title and tagline visible.
02 · Chapter 2
The participant's first impression
When a participant opens the Self-Space, they land on a page-load sequence designed to settle two questions before any mapping begins: what kind of session is this, and how should the participant pace themselves. The sequence runs in this order: URL parameters are parsed silently in the background, then the landing path-gate appears, then the onboarding modal, then the workspace itself.
The landing path-gate is a single early modal with two large cards: Personal / Educational use and Research participation. A "Skip for now" link defers the choice to export time.
This early disambiguation matters more than it looks. It lets the app subtly adjust its later behavior — the export identity modal, the contact-email visibility for help requests, the framing of certain banners — to match the kind of session the participant is in. If the researcher has dispatched the link with a URL parameter, the gate is bypassed and the path is pre-resolved.
Screenshot pending
FIG 2.1 · Landing path-gate modal showing the two path cards.
After the path-gate (or instead of it, when bypassed), the onboarding modal appears. It welcomes the participant, gives a short how-it-works overview, and offers temporal-depth guidance — language about how identities can persist, emerge, fade, or be avoided across time.
The onboarding deliberately avoids naming any specific identity examples ("Student," "Athlete," "Parent") because those examples would prime the participant's identity salience and compromise the validity of the measurement. Near the bottom of the onboarding, a "Single sitting recommended" callout describes three idle-time thresholds the app uses to bound the session and points to the small clock-icon timer affordance the participant can turn on if they want to see how long they have been working. The participant can start fresh ("Start Mapping") or load demo data ("Load Demo") to explore the tool without committing to their own map. From here, the participant can also open a privacy-policy modal that emphasizes the app stores everything locally in the browser and requires no account.
Screenshot pending
FIG 2.2 · Onboarding modal with the temporal-depth scaffolding and the single-sitting callout visible.
Validity
Anywhere a participant could be primed by a specific identity example, the app refuses to provide one. This applies to onboarding text, placeholder text in input fields, prompt scaffolding, and any in-app illustrative content. Researchers configuring deployments should not modify these surfaces to add examples.
A small but important detail: the page header carries a Mode: … button that controls audience mode. There are four modes — general, student, older-adult, and researcher — and the mode subtly shifts the wording of reflection prompts, idle-warning copy, and the printed PDF. Researcher mode, in particular, suppresses several user-facing PDF sections (network-density plain-language interpretation, reflection-prompt sidebars, growth trajectories) and replaces the glossary with a citation block, on the theory that researchers reviewing data do not need participant-facing interpretive scaffolding.
Audience mode can be set by URL parameter (?audience=student), pre-resolved from a study or course parameter, or chosen explicitly from the modal.
URL-derived modes apply to that session only and do not persist. The "research mode" experience the participant sees is therefore a layered thing: what export track they are on (personal vs. research) is one axis, set by the path-gate; what audience they are framed as (general, student, older-adult, researcher) is another axis, set by audience mode. Most studies will set both via URL parameters.
FIG 2.3 · The four audience modes interact with the two export tracks to produce eight session profiles. Researcher mode suppresses idle warnings entirely; older-adult mode softens review copy; student mode adds course-aware framing.
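For link-dispatched deployments, building the URL is the whole configuration step. A minimal sketch follows; only the ?audience= parameter name is documented in this manual, so the study key below is a placeholder — confirm the exact parameter names your build accepts against the Everythingist Self-Space Specification.

```python
# Sketch of composing a study link. "audience" is documented above;
# "study" is a placeholder key, not a confirmed parameter name.
from urllib.parse import urlencode

BASE_URL = "https://example.org/self-space/index.html"  # wherever your institution hosts the file

params = {
    "audience": "student",     # general | student | older-adult | researcher
    "study": "EXAMPLE-042",    # placeholder: pre-resolves the research path in this sketch
}

print(f"{BASE_URL}?{urlencode(params)}")
```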
03 · Chapter 3
The core task: building the past, present, and future identity maps
After onboarding, the participant lands on the workspace. Three tabs sit across the top: Past Self, Present Self, Future Self. Each tab holds an independent map. The participant fills in all three, in any order, and the three maps will eventually be analyzed both individually and in comparison.
Screenshot pending
FIG 3.1 · Workspace with the three temporal tabs and an empty Present Self panel.
Why three time horizons? The decision is not neutral. Asking about the past surfaces aspects that shaped who the person is now; asking about the future surfaces hoped-for, expected, and feared possibilities;
asking about the present anchors the comparison. Together the three periods let downstream analysis distinguish stable identities from emerging or fading ones, and let researchers detect identity continuity, expansion, contraction, and disappearance across time.
The app's later "Change Across Time" section in the PDF, and the dashboard's longitudinal metrics, all depend on these three maps existing in parallel.
FIG 3.2 · The three time-period maps are filled in parallel. Each is its own data object; their comparison powers the dashboard's longitudinal analytics.
Each period begins empty. There is no pre-populated list of aspects waiting to be edited — by design, since seeding the map with examples would contaminate the measurement. At the top of the workspace, a per-period reflection scaffold offers non-suggestive, process-oriented prompts.
The past scaffold focuses on what was self-defining and what carried forward versus faded; the present scaffold on central current identities, taken-for-granted parts, and conflicted parts; the future scaffold on expected continuity, hoped-for development, and feared possibilities. The scaffold also carries a short summary note. That note's wording shifts with audience mode. Scaffolds can be dismissed per period and recovered via a help button.
Screenshot pending
FIG 3.3 · Reflection scaffold expanded above the input area on the Present tab.
§ 3.1 Self-aspects and attributes
A self-aspect is a coherent slice of the person's identity — a role, a context, a relationship, a domain. "Researcher," "runner," "older sister," "person who manages a chronic illness," "musician at the open mic on Thursdays." There is no canonical list. The participant decides what counts.
The app caps each period at 15 self-aspects, with a soft warning at 10, on the theory that maps with more than roughly 15 aspects begin to lose meaningful structure and become punishingly long to rate.
An attribute is something the person uses to describe a particular self-aspect.
Attributes are the building blocks of self-complexity in Linville's original formulation: a participant high in self-complexity has many self-aspects with non-overlapping attributes; a participant low in self-complexity has fewer aspects whose attributes all overlap. In the Self-Space, the participant adds attributes to each aspect via an inline input field. Attributes can repeat across aspects (and the app uses that to compute overlap), or they can be unique to one aspect. The cap is 10 attributes per aspect and 100 in total.
FIG 3.4 · A four-aspect map with both unique and shared attributes. The two orange nodes ("creative," "caring") link multiple aspects — the field's classical operationalization of low-complexity overlap.
To create an aspect, the participant clicks "Add Self-Aspect." A new row appears with a single input field — placeholder text reads "Label this self-aspect." This is an uncommitted row: it has a dashed left border and a muted background, and it does not yet show sliders, valence dropdowns, or attributes. The first character the participant types into the name field commits the aspect: the row redraws with full controls — sliders, valence dropdown, attribute area, and a "Carry to…" menu — and focus is restored to the name input.
This progressive-disclosure pattern keeps the visual load low until the participant has decided that the aspect is worth filling in.
Screenshot pending
FIG 3.5 · An uncommitted aspect row next to a committed row showing full controls.
The placeholder text is content-neutral on purpose. It says "Label this self-aspect," not "e.g., Student, Athlete, Parent." Anywhere a participant could be primed by a specific identity example, the app refuses to provide one. This is a measurement-validity choice: identity priming would inflate the apparent salience of whatever was named.
§ 3.2 Carry-to: tracking continuity across time
The participant frequently realizes mid-mapping that an aspect they entered for the present period also belongs in past or future. Each aspect card carries a "Carry to…" menu. Selecting a target period deep-copies the aspect — name, ratings, valence, all attributes — into that period with a fresh internal ID.
A toast notification reminds the participant to switch tabs and review the carried aspect; the carry is not automation. Continuity exists across time, but identities also change across time, and the participant is expected to revisit each carried aspect and adjust ratings to reflect how that identity differs in the new period.
Screenshot pending
FIG 3.6 · An aspect card with the Carry-to menu open showing target periods.
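The application itself is single-file vanilla JavaScript, so the Python below is not its code; it is only an illustration of what "deep copy with a fresh internal ID" implies for the exported data: the carried aspect is an independent record whose ratings start identical to the source and are expected to diverge as the participant revisits it. Field names are illustrative, not the export schema.

```python
# Illustration of carry-to semantics (not the app's JavaScript):
# a full independent copy with a new internal ID, so later edits in the
# target period never touch the original record. Field names are invented.
import copy
import uuid

present_aspect = {
    "id": str(uuid.uuid4()),
    "name": "runner",
    "importance": 9, "certainty": 8, "descriptiveness": 7, "visibility": 6,
    "valence": "positive",
    "attributes": [{"name": "disciplined", "importance": 8}],
}

carried = copy.deepcopy(present_aspect)
carried["id"] = str(uuid.uuid4())  # fresh internal ID in the target period

# Independence check: mutating the copy leaves the source untouched.
carried["certainty"] = 4
assert present_aspect["certainty"] == 8
```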
04 · Chapter 4
The rating system
For each self-aspect, the participant sets four ratings (importance, certainty, descriptiveness, visibility) and a valence. For each attribute, they set three ratings (importance, certainty, descriptiveness) and a valence. All numeric ratings use an 11-point slider; valence is a three-option dropdown.
Each rating contributes to the downstream measurement model in a specific way, and it is worth understanding why each one is collected.
Screenshot pending
FIG 4.1 · An expanded aspect card showing all sliders, valence dropdown, and attribute list.
Importance (1–11). How central is this self-aspect (or attribute) to who the participant is? The operational definition follows Markus's self-schema framework:
schematic content — the parts of the self that are most important — is processed faster, recalled more readily, and resists disconfirming information. Importance is the rating that locates an aspect's place in the self-schema. In the visualization, more important aspects render closer to the central "Self" sun node. In the dashboard's Identity Strength Index and centrality computations, importance ratings weight several derived metrics, including the Identity Rigidity Index.
Importance is the rating researchers most often interpret first in interview-style debriefs, because it differentiates the identities the person acts on from the ones that are merely true of them.
Certainty (1–11). How sure is the participant that this aspect (or attribute) is theirs? The operational definition follows Pelham's work on the certainty of self-knowledge,
which established certainty as a dimension of self-belief distinct from importance — two aspects can be equally important but differ in how confidently the person holds them, with measurably different consequences for behavior, persistence, and emotional response under feedback.
Higher certainty implies the aspect is settled, well-rehearsed, internalized; lower certainty flags emerging, contested, or aspirational identities. In the visualization, certainty thickens the node stroke; in the dashboard, average certainty per period contributes to the Identity Strength Index and is also a longitudinal stability indicator. A participant whose future-self aspects are all rated 11 on certainty is making a different psychological claim than one whose future-self aspects are rated 4–5.
Descriptiveness (1–11). For an attribute attached to an aspect, descriptiveness asks: how well does this attribute describe me when I am in this aspect? For an aspect itself, descriptiveness asks: how well does this aspect describe me overall? Descriptiveness is the second pillar of Markus's self-schema construct — content qualifies as schematic when it is both important and self-descriptive, and the two ratings are deliberately collected separately so that researchers can distinguish identities the person centers (high importance) from identities they actually instantiate (high descriptiveness), and from identities that satisfy both (the strongest schematic candidates).
Descriptiveness also connects to the Sakaki self-complexity composite and to Schleicher and McConnell's spatial reformulation: when an attribute is highly descriptive in two different aspects, the two aspects are pulled closer together in the implied identity space, raising overlap and reducing overall complexity.
In the visualization, descriptiveness drives the radius of each node — bigger circles for more strongly descriptive elements.
Valence (positive / neutral / negative). The participant selects one of three options from a dropdown. Valence is what makes self-complexity matter for affect. Linville's buffering result is, mechanically, about the spread of negative-valence aspects: when bad news lands on one aspect, the dampening effect depends on how thematically isolated that aspect is from the rest. The dashboard's positivity ratio, valence distribution, and several spillover metrics all run on the valence field. Valence is also the most frequently flagged dimension during pattern review — for example, the wizard flags maps in which all aspects share the same valence, on the theory that uniform valence is rarely psychologically true.
Visibility (1–11, aspect-level only). The participant rates how public or private each self-aspect is, anchored at "Private" (1) and "Public" (11). Visibility is grounded in Petronio's Communication Privacy Management theory:
identities differ in how openly the person discloses them, and that disclosure boundary is itself an identity property. Attributes do not carry a visibility rating, since visibility is meaningful at the aspect level (the role, the domain) rather than at the trait level. The dashboard's Identity Strength Index treats average visibility as one of its six dimensions; researchers studying disclosure, stigma, or concealable identities will find this rating particularly useful.
Validity
New aspects are created with all ratings as null — not 6 (the midpoint), and not any other default value. The slider renders in a deliberately unset state: dimmed track, gray thumb, italic em-dash where the value would be. Screen readers receive a "(not yet rated)" suffix on unset slider labels. The thumb sits at the visual midpoint, but no value is recorded until the participant interacts with the slider.
This guarantees that every exported number reflects a deliberate choice. There is no silent default rating that survives to analysis. The pre-export gate (see Chapter 5) blocks export until all ratings are set.
FIG 4.2 · The 11-point slider in two states: engaged at position 8 (top), and unset (bottom). The midpoint thumb position on an unset slider is purely visual; no numeric value is stored until the participant interacts with the control.
Screenshot pending
FIG 4.3 · An aspect with one slider engaged (in color) and one slider unset (dimmed with em-dash).
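Because the pre-export gate blocks export until every rating is set, no null should survive to analysis — but a defensive re-check in the analysis pipeline costs one loop. The sketch below assumes a plausible layout for the exported JSON (period keys and an aspects array); confirm the real key names against the Self-Space Specification or a sample export before relying on it.

```python
# Defensive re-check that no unset (null) aspect ratings survived to export.
# Period keys, the "aspects" array, and rating key names are assumptions
# about the JSON layout -- verify against a real export.
import json

RATING_KEYS = ("importance", "certainty", "descriptiveness", "visibility")

def find_unset_ratings(export_path):
    with open(export_path, encoding="utf-8") as f:
        journey = json.load(f)
    problems = []
    for period in ("past", "present", "future"):      # assumed period keys
        for aspect in journey.get(period, {}).get("aspects", []):
            for key in RATING_KEYS:
                if aspect.get(key) is None:
                    problems.append((period, aspect.get("name"), key))
    return problems

# Example: print(find_unset_ratings("SelfSpace_RESEARCH_ANON-1a2b3c4d_2026-03-01_0930.json"))
```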
05 · Chapter 5
Reviewing the map: data-quality checks at export and during the session
The Self-Space treats data quality as a pipeline rather than a gate. The participant encounters review prompts at two points: at export time, when they try to submit their map; and during the session itself, when they have been idle long enough that the app suspects the second half of the map may have been entered in a meaningfully different mental state from the first half. Both serve the same goal — making sure the exported map is a coherent, considered artifact rather than an unconsidered or stale one — and it is cleaner to treat them together than to split them across sections. This chapter walks through both.
§ 5.1 Review at export time: the four-stage validation gate
When the participant clicks "Export to PDF" or "Save Journey (JSON)," the export request flows through a four-stage pre-export validation gate before any file is written.
Stage 1 — Structural scan. The gate scans all three periods for uncommitted aspects (rows where the participant clicked "Add" but never typed a name), aspects with empty or whitespace-only names, and attributes with empty names. If any are found, a blocking modal offers two choices: "Go Back and Edit" returns the participant to the workspace; "Remove Unlabeled Items and Continue" cleans the structure and proceeds.
Stage 2 — Completion wizard. The gate then scans for any rating that was never set — null importance, null certainty, null descriptiveness, null visibility, or unselected valence. If any are found, the gate opens a step-by-step wizard.
Each step shows one aspect or attribute with its unset ratings and embedded interactive controls. The participant must finish each step before "Next" enables.
Stage 3 — Pattern review wizard. This is the most analytically interesting stage. A pattern detector runs across the participant's ratings looking for nine signatures of low-effort or low-validity responding. A few examples will give the flavor: uniform ratings across aspects (all aspects in a period share the identical full rating vector), extreme anchoring (≥80% of aspects rated within 1 of the maximum or minimum), straight-lining (four or more consecutive aspects with near-identical rating vectors), uniform valence (all aspects share the same valence across at least four aspects and eight items), flat profile (importance equals certainty equals descriptiveness on three or more aspects), contradictory extremes (an aspect with importance and descriptiveness more than 9 points apart), and at the gentlest end, zero attributes and single attribute on a committed aspect.
Patterns are ordered by severity. For each flagged pattern, the wizard shows the affected items with interactive controls and offers two paths: the participant can adjust the ratings directly, or click "I've reviewed this — these ratings are intentional" to confirm. The flag is then stored as reviewed, and the wizard re-scans on each step to drop patterns that are auto-resolved as a side effect.
The pattern review wizard is the heart of the data-quality story. It is not a screening tool that excludes participants — confirmed patterns produce a flag in the export, not a refusal.
It is a structured opportunity for the participant to slow down, see what their ratings are saying in aggregate, and decide whether the pattern is real ("yes, all my future-self aspects really are extremely positive — that is meaningful to me") or an artifact ("oh, I was clicking through quickly").
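Researchers who want to cross-check the exported flags can re-implement individual signatures in a few lines. The sketch below covers two of the nine checks using the thresholds quoted above; the reading of "rated within 1 of the maximum or minimum" as applying to every rating on an aspect, and the tolerance for "near-identical" vectors, are assumptions, as is the data shape (a list of per-aspect rating dictionaries in entry order).

```python
# Re-implementation sketch of two pattern signatures for cross-checking
# exported flags. Thresholds follow the manual text; tolerance choices and
# the data shape (list of per-aspect rating dicts) are assumptions.
RATING_KEYS = ("importance", "certainty", "descriptiveness", "visibility")

def extreme_anchoring(aspects, lo=1, hi=11, share=0.80):
    """True if >=80% of aspects have every rating within 1 of a scale endpoint."""
    def anchored(a):
        return all(a[k] <= lo + 1 or a[k] >= hi - 1 for k in RATING_KEYS)
    return bool(aspects) and sum(anchored(a) for a in aspects) / len(aspects) >= share

def straight_lining(aspects, run=4, tolerance=1):
    """True if `run` or more consecutive aspects have near-identical rating vectors."""
    streak = 1
    for prev, cur in zip(aspects, aspects[1:]):
        close = all(abs(prev[k] - cur[k]) <= tolerance for k in RATING_KEYS)
        streak = streak + 1 if close else 1
        if streak >= run:
            return True
    return False
```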
Stage 3.5 — Soft low-depth warning. A whole-map check fires if the total number of committed aspects across all periods is fewer than six, if any period with at least one aspect has fewer than two, or if the mean number of attributes per aspect is below 1.5.
This warning is non-blocking — distinct from the structural and pattern wizards, which gate the export. The participant sees a plain-language summary of the depth signal and three options: add more detail (returns to mapping), continue anyway (sets a low-depth-override flag visible in the export metadata), or cancel the export. The override is informational, not gate-blocking, because thin maps are sometimes legitimate (an older adult early in an intervention; a student in their first session). The flag preserves analytic transparency.
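The Stage 3.5 criteria are simple enough to restate as a standalone check, which is useful when auditing which sessions carried the low-depth-override flag. Same data-shape assumptions as the previous sketch.

```python
# The Stage 3.5 depth criteria as a standalone check. `periods` maps each
# period name to its list of committed aspect dicts; the data shape is assumed.
def low_depth(periods):
    total_aspects = sum(len(aspects) for aspects in periods.values())
    sparse_period = any(len(aspects) == 1 for aspects in periods.values())
    attr_counts = [len(a.get("attributes", []))
                   for aspects in periods.values() for a in aspects]
    mean_attrs = sum(attr_counts) / len(attr_counts) if attr_counts else 0.0
    return total_aspects < 6 or sparse_period or mean_attrs < 1.5
```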
Return-to-Mapping escape. Every stage of the gate exposes a "Return to Mapping" button alongside its primary action. Clicking it closes the active wizard, locates the first incomplete or flagged item, switches to that item's period, and focuses its card. The participant is not trapped in the wizard if they realize mid-way that they need to do real revision work. Each new export attempt re-runs the gate cleanly — there is no half-acknowledged state to worry about.
FIG 5.1 · The four-stage gate runs on every export attempt. Stages 1–3 are blocking; Stage 3.5 is informational and sets a metadata flag rather than refusing the export. The Return-to-Mapping escape is available from any stage.
Screenshot pending
FIG 5.2 · Pattern review wizard step showing the “extreme anchoring” flag with embedded sliders and the “I've reviewed this” confirmation button.
§ 5.2 Review during the session: idle thresholds and the review checklist
Self-complexity ratings have a hidden time signature. A participant who labels six aspects in one focused half-hour is producing a different kind of data than a participant who labels three aspects on Tuesday afternoon, walks away, and finishes the other three on Thursday morning. The latencies differ; the working frame differs; the priming differs. Linville-style measurement traditionally ignores this, and most identity instruments do too. The Self-Space, as of Build 6.5, instruments and acts on it directly through three escalating idle-time thresholds.
30 minutes idle — soft warning. A dismissible modal asks whether the participant wants to continue mapping or review their work first. Continuing is fine; this is the gentlest end of the contamination signal, and most participants will encounter it without consequence.
2 hours idle — mandatory review. A modal opens a single-screen review checklist listing every committed aspect across all periods. Each row is checked by default ("keep"); the participant unchecks any aspect they no longer recognize as theirs — because the session has stretched too far, because their mood or context has changed, because they have reconsidered the framing. Unchecking triggers a confirmation dialog naming the count to be removed. On confirm, the unchecked aspects are deleted and the review is logged as part of the export's session-integrity metadata (covered in Chapter 7).
12 hours idle — forced restart. The workspace is wiped. A contamination event is logged, recording when the session started, when the restart fired, the active and idle dwell at restart, and the aspect counts in each period at restart. The contamination log persists across workspace wipes — a participant cannot lose this record.
FIG 5.3 · Three escalating thresholds. Visual weight increases with severity: a faint blue dot at 30 minutes, a saturated orange dot at 2 hours, a heavy orange dot at 12 hours. The contamination log persists across workspace wipes.
Screenshot pending
FIG 5.4 · The two-hour mandatory review checklist with several aspects unchecked.
All three idle modals are audience-aware. Older-adult mode receives gentler phrasing ("Take a fresh look together…") on each warning; the other modes receive the spec'd phrasing. In researcher mode, idle warnings are suppressed entirely on the assumption that researchers reviewing their own data should not be interrupted by their own instrument.
§ 5.3 Why pruning matters for data quality
Pruning shows up across both kinds of review, and it is worth a sentence on what it is doing in each. At export time, pruning means the participant correcting unlabeled items, missing ratings, and improbable patterns before the map leaves the browser — protecting internal validity of the rating instrument. During the session, pruning means the participant removing aspects they no longer recognize after a long pause — protecting temporal coherence of the mapping session. Both produce a cleaner artifact, but they address different validity threats, and the export carries metadata distinguishing which kind of cleaning occurred (see Chapter 7's discussion of the cleanliness gradient).
Tip
Neither form of pruning is an exclusion mechanism. The export proceeds with flags rather than refusals, leaving the analytic decision to the researcher rather than the instrument. Treat the cleanliness gradient as a covariate or a stratifying variable in your analysis pipeline, not as an automatic exclusion criterion.
06 · Chapter 6
Possible selves and future-orientation features
The Self-Space includes a separate workflow for Markus and Nurius's possible selves,
accessible via an Intervention Mode toggle. This workflow is partially independent of the core self-mapping task: a participant can complete one without touching the other, though many studies will use both. The intervention modal is full-screen and scrollable; it can be closed and reopened.
Screenshot pending
FIG 6.1 · Intervention Mode modal with the parent/child toggle and the four temporal horizon cards visible.
Three structural elements organize the modal. First, a parent/child toggle lets the participant fill out their own possible selves or — if the study calls for it — their child's possible selves (used in older-adult and family-focused interventions). Second, the horizon timeline runs across 1, 5, 10, and 20 years: the participant lists hoped-for and feared possible selves at each horizon. Third, a panel of preset quick-start templates — hoped-for identities (Healthy Person, Physically Active, Successful Athlete, Academic Achiever, Creative Individual, Social Leader, Financially Secure, Mindful Person, Autonomous Individual) and feared identities (Lazy Person, Codependent Person, Chronically Ill, and similar) — is available for participants who need a starting point.
Researchers using this for measurement should be aware that the preset templates do prime selection, and many study designs will instruct participants to skip them.
The Barrier Buster Challenge is a three-step intervention exercise woven through the modal. Step 1 asks the participant to identify up to ten barriers to their hoped-for selves. Step 2 elicits potential solutions. Step 3 is a drag-and-drop matching game pairing barriers with solutions, accompanied by a percentage-based feedback system. The exercise produces a plain-text intervention report that the participant can export separately.
For research purposes, the key point is that all of this round-trips through the JSON export. As of schema 1.7.0, the main export carries five top-level fields covering this workflow: possibleSelves, childSelves, interventionBarriers, interventionSolutions, and barrierMatches.
From the dashboard's perspective, these are pass-through fields — the dashboard's metric pipeline does not currently compute on them — but they are available for downstream analysis, for qualitative coding, or for use by intervention researchers who want both the structured map and the possible-selves data in a single artifact.
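Pulling those five fields out of an export for qualitative coding is straightforward. The sketch below assumes only that they sit at the top level of the JSON, as stated above; their internal structure should be checked against your own exports.

```python
# Extract the five pass-through possible-selves fields for qualitative coding.
# Field names are from the schema 1.7.0 description above; their internal
# structure is not documented here, so inspect a sample export first.
import json

PASS_THROUGH = ("possibleSelves", "childSelves", "interventionBarriers",
                "interventionSolutions", "barrierMatches")

def extract_possible_selves(export_path):
    with open(export_path, encoding="utf-8") as f:
        journey = json.load(f)
    return {field: journey.get(field) for field in PASS_THROUGH}
```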
07 · Chapter 7
Behind the scenes: how the session is instrumented
Chapter 5 covered what the participant experiences when data-quality checks fire. This chapter covers what runs underneath those experiences and what the resulting metadata looks like in the export. Researchers using the tool will analyze this metadata; participants will mostly never see it.
A module called ActivityTracker runs in the background of every non-demo session.
It tracks two parallel quantities: active dwell (time the participant is genuinely engaged — typing, dragging sliders, clicking) and idle dwell (time the page is open but the participant is not interacting). The tracker counts pauses up to 60 seconds as continued active engagement, on the theory that thinking is a form of work; beyond 60 seconds, the time accrues to idle.
It also notes a focal aspect ID — which aspect card the participant is currently engaged with — and credits per-aspect active dwell to that aspect. All five timing fields are saved in a per-aspect dwell block (createdAt, committedAt, activeDwellMs, idleDurationMs, revisionCount) and exported in the JSON.
The same fields exist on attributes for schema symmetry, though attribute-level dwell credit is currently aspect-only — a known limitation flagged in Chapter 9.
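A minimal sketch of flattening the per-aspect dwell block into a table follows. The five field names are the ones listed above; the surrounding structure (period keys, an aspects array, a dwell sub-object) is an assumption to be adjusted against your own exports.

```python
# Flatten per-aspect dwell into rows. The five dwell field names come from
# the manual text; the surrounding JSON structure is assumed.
import json

def dwell_table(export_path):
    with open(export_path, encoding="utf-8") as f:
        journey = json.load(f)
    rows = []
    for period in ("past", "present", "future"):              # assumed period keys
        for aspect in journey.get(period, {}).get("aspects", []):
            d = aspect.get("dwell", {})                        # assumed block name
            rows.append({
                "period": period,
                "aspect": aspect.get("name"),
                "active_min": d.get("activeDwellMs", 0) / 60000,
                "idle_min": d.get("idleDurationMs", 0) / 60000,
                "revisions": d.get("revisionCount", 0),
            })
    return rows
```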
A small clock-icon button in the action row toggles the visible session timer, which displays running totals of active and idle minutes. The timer is hidden by default; the participant opts in. It is hidden entirely during demo sessions, where the ActivityTracker itself is suspended (no idle warnings fire, no dwell is credited, no contamination is logged) on the principle that demo data should not produce timing artifacts.
Screenshot pending
FIG 7.1 · Visible session timer showing active and idle minute totals in the action button row.
The export carries a top-level sessionIntegrity block that summarizes everything the tracker observed: when the session started and ended, total elapsed time, active dwell, idle dwell, the count of idle warnings triggered at each threshold, the timestamps of any review confirmations, the running count of aspects removed during reviews, the full contamination-event log, whether the timer was visible, and a single rolled-up session cleanliness gradient:
clean — no idle warnings triggered any review.
review-confirmed — at least one review fired, but the participant kept all aspects.
review-pruned — at least one review fired and at least one aspect was removed.
previously-contaminated — the session contains at least one logged contamination event (typically a 12-hour forced restart in this session's history).
FIG 7.2 · The four-stop cleanliness gradient. Color saturation tracks the level of caution warranted in downstream analysis. review-pruned is not worse than clean; it carries different information.
For analysis, the cleanliness gradient is meant to be used as a covariate, an exclusion criterion, or a stratifying variable — not as a binary "good vs. bad" filter. A review-pruned session may be the most carefully considered map in the dataset; a clean session may be a participant who clicked through quickly without idle pauses. The point is to make the temporal conditions of the mapping visible to downstream analysis, not to grade them. Researchers studying response latency, identity stability under cognitive load, or differences in how reflectively different populations engage with self-mapping will find the dwell and integrity blocks particularly useful — they are the closest thing the field currently has to a process-trace record of self-complexity measurement.
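One way to honor that guidance is to load the gradient as an ordered categorical and stratify on it. In the sketch below, sessionIntegrity is the block named above, but the key holding the rolled-up gradient and the dwell totals inside it are assumptions — inspect one export before trusting the paths.

```python
# Treat the cleanliness gradient as a stratifying variable, not a filter.
# "sessionIntegrity" is named in the manual; the nested key names below
# (cleanlinessGradient, activeDwellMs) are assumptions.
import json
from pathlib import Path
import pandas as pd

ORDER = ["clean", "review-confirmed", "review-pruned", "previously-contaminated"]

records = []
for path in Path("exports").glob("SelfSpace_RESEARCH_*.json"):
    journey = json.loads(path.read_text(encoding="utf-8"))
    integrity = journey.get("sessionIntegrity", {})
    records.append({
        "file": path.name,
        "cleanliness": integrity.get("cleanlinessGradient"),      # assumed key
        "active_min": integrity.get("activeDwellMs", 0) / 60000,  # assumed key
    })

df = pd.DataFrame(records, columns=["file", "cleanliness", "active_min"])
df["cleanliness"] = pd.Categorical(df["cleanliness"], categories=ORDER, ordered=True)
print(df.groupby("cleanliness", observed=False)["active_min"].describe())
```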
08 · Chapter 8
Submitting and exporting
Once the participant is ready, they click Save Journey (JSON) for the structured research export, or Export to PDF for a printable summary, or both. Either path runs the pre-export validation gate described in Chapter 5. Assuming the gate clears, the participant lands on the Export Identity Modal.
Screenshot pending
FIG 8.1 · Export Identity Modal showing the personal/educational and research tracks.
The identity modal is two-track. The Personal/Educational track asks for first and last name and an optional context field (often pre-filled with course information when the URL parameters carry a course value). The Research Study track offers two sub-options: an assigned Study ID with optional regex validation, or anonymous participation, in which case the app generates a session-stable ANON-xxxxxxxx ID via the browser's cryptographic UUID generator and persists it in session storage.
An expandable "I don't have my Study ID" section displays a configurable contact email. Once an identity is set in a session, subsequent exports show a quick-confirm modal instead of the full form, with a "Change" option for the unusual case where the participant needs to switch tracks.
The JSON export is what you, the researcher, will most often analyze. It contains: formal metadata (journey ID, created/updated timestamps, schema version, app version, source flag), the participant identity block, an export-context block recording when the path was chosen and when the export fired, a data-quality summary (including the low-depth-override flag if relevant), the cleaned self-aspect data across all three periods, calculated metrics for each period, cross-temporal change analysis, the physical-identity-detection results, the five possible-selves family fields, the full session-integrity block, and the per-aspect and per-attribute dwell blocks.
The filename embeds the export path: SelfSpace_PERSONAL-EDUCATIONAL_Lastname-Firstname_YYYY-MM-DD_HHMM.json, SelfSpace_RESEARCH_StudyID_YYYY-MM-DD_HHMM.json, or SelfSpace_RESEARCH_ANON-xxxxxxxx_YYYY-MM-DD_HHMM.json. Demo data is blocked from JSON export entirely; PDF exports of demo data are allowed but carry a DEMO_ filename prefix and a "DEMO" annotation in the title.
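When a study returns a directory of files, that naming convention can be parsed mechanically during triage. A sketch, assuming the patterns above are exact:

```python
# Parse Self-Space export filenames into track, identity, and timestamp.
# Assumes the filename patterns quoted above are exact.
import re

FILENAME_RE = re.compile(
    r"^(DEMO_)?SelfSpace_"
    r"(?P<track>PERSONAL-EDUCATIONAL|RESEARCH)_"
    r"(?P<identity>.+)_"
    r"(?P<date>\d{4}-\d{2}-\d{2})_(?P<time>\d{4})"
    r"\.(json|pdf)$"
)

def parse_export_name(filename):
    m = FILENAME_RE.match(filename)
    return m.groupdict() if m else None

print(parse_export_name("SelfSpace_RESEARCH_ANON-1a2b3c4d_2026-03-01_0930.json"))
# {'track': 'RESEARCH', 'identity': 'ANON-1a2b3c4d', 'date': '2026-03-01', 'time': '0930'}
```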
The PDF export is the participant's deliverable. It is a single layered document: title, identity header, demo banner (when relevant), a visualization row showing the past/present/future force-directed graphs, a metrics table with across-time delta arrows, a change-across-time section, a reflection-prompts sidebar, a growth-trajectory grid for repeat users with multiple snapshots, and a footer.
Audience mode governs section visibility: researcher mode suppresses the network-density interpretation, the reflection sidebar, and the growth trajectory, replacing the glossary with a citation block. The PDF is the artifact the participant sees and keeps; the JSON is the artifact you analyze.
Screenshot pending
FIG 8.2 · PDF preview showing the three side-by-side network visualizations and the metrics table.
A small autosave note. The browser autosaves working state to localStorage 1.5 seconds after each mutation, so a closed tab does not destroy in-progress work. A "last-saved" indicator shows relative time ("Saved 3 minutes ago") near the action buttons. If a participant returns to the app more than 30 days after their last session, a Welcome Back modal offers four paths: continue the prior map intact, take a fresh look (auto-archives the prior map and clears the workspace), compare past and present (same as fresh look, but pre-routes the comparison flow once the new map is built), or skip (same effect as continue).
Auto-archived snapshots receive a source: 'auto-archive' provenance marker that distinguishes them downstream from user-initiated saves. Researcher mode suppresses Welcome Back entirely.
09 · Chapter 9
Brief notes on what's coming and known limitations
A few notes for researchers planning to adopt the tool in its current state.
The schema is at version 1.7.0. The recent jumps added the possible-selves family, the session-integrity block, and per-aspect dwell timing; these did not exist in earlier exports, and the dashboard handles 1.6.0 imports by null-defaulting the missing fields.
Researchers running longitudinal designs across the schema boundary should plan to interpret missing dwell and session-integrity fields as "not collected for this import" rather than as zero.
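A schema-aware loader that treats those fields that way might look like the sketch below; the schemaVersion key name and its location are assumptions, while sessionIntegrity is the block named in Chapter 7.

```python
# Load an export across the 1.6.0 -> 1.7.0 schema boundary. Missing dwell and
# session-integrity blocks are surfaced as None ("not collected"), never zero.
# The schemaVersion key name/location is an assumption.
import json

def load_journey(path):
    with open(path, encoding="utf-8") as f:
        journey = json.load(f)
    return {
        "journey": journey,
        "schema_version": journey.get("schemaVersion", "unknown"),  # assumed key
        # None (not 0) marks "not collected" for pre-1.7.0 exports
        "session_integrity": journey.get("sessionIntegrity"),
        "integrity_collected": "sessionIntegrity" in journey,
    }
```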
Two participant-facing features are stubs. The Year in Review animation produces a five-month constellation visualization but does not currently feed the dashboard; the relational-self mode and its share panel generate a demo email link rather than actually sending mail. Both are documented as such in the spec; researchers should not promise participants that these features do anything beyond display.
Validity
Several extensions are on the build plan but not yet shipped. Build 8 will add a possible-selves realization tracker surfaced in the participant PDF, letting participants mark progress against their hoped-for selves over time. Build 9 will surface the dashboard's Identity Strength Index in the participant PDF as a single composite figure, suitable for clinical or coaching contexts. Neither is in the current release. Manuals and consent documents written for participants should not promise either feature.
Finally, the app is single-file vanilla JavaScript, runs entirely client-side, requires no account, and stores nothing outside the participant's browser unless the participant explicitly downloads or shares an export. This is a privacy posture, not an implementation accident: it lets researchers deploy the tool from any static URL, including an institutional server, without standing up infrastructure, and lets participants close the tab and walk away without leaving anything behind. The tradeoff is that the app cannot recover lost browser data, cannot enforce study-specific schedules, and cannot detect duplicate participants across devices.
Researchers running tightly controlled studies will want to pair the app with their own enrollment and identity-management layer; for most exploratory and intervention work, the local-first design is the right shape.
The companion documents for this manual are the Everythingist Self-Space Specification v1.9 (implementation reference), the Self-Complexity Measurement Specification v2.2 (formal metric definitions), and the Dashboard Specification v1.21 (analytics layer that consumes the JSON exports).
Back matter
Glossary
Terms defined in the body of the manual, alphabetical.
Active dwell
Time the participant is genuinely engaged with the app — typing, dragging sliders, clicking. Pauses up to 60 seconds count as continued engagement; beyond 60 seconds, time accrues to idle dwell.
ActivityTracker
The background module that tracks active and idle dwell, focal aspect ID, and contamination events for every non-demo session.
Anonymous participation
Research-track export option in which the app generates a session-stable ANON-xxxxxxxx identifier in place of a researcher-assigned Study ID.
Aspect
See self-aspect.
Attribute
A trait, feeling, or behavior the participant uses to describe a particular self-aspect. The building block of self-complexity in Linville's original formulation.
Audience mode
One of four framings — general, student, older-adult, researcher — that subtly shifts copy in reflection prompts, idle warnings, and the printed PDF. Set by URL parameter or chosen explicitly.
Buffering hypothesis
Linville's (1987) proposal that high self-complexity dampens the spillover of stress from one domain into the rest of the self.
Carry-to
Per-aspect menu that deep-copies an aspect — name, ratings, valence, attributes — into another time period with a fresh internal ID.
Certainty
1–11 rating of how sure the participant is that an aspect or attribute is theirs. Operationalized after Pelham (1991).
Cleanliness gradient
Four-stop summary of session integrity: clean, review-confirmed, review-pruned, or previously-contaminated. Intended for use as a covariate, not as a binary filter.
Committed aspect
An aspect row that has received a name (one or more characters typed). Triggers progressive disclosure of sliders, valence dropdown, and attribute area.
Contamination event
A 12-hour idle threshold breach that wipes the workspace and writes a persistent log entry recording session start, restart time, dwell totals, and aspect counts at restart.
Descriptiveness
1–11 rating of how well an attribute or aspect describes the participant in context. The second pillar of Markus's (1977) self-schema construct, alongside importance.
Export track
One of two paths set at the landing path-gate: Personal/Educational or Research. Determines the export identity modal, filename convention, and several pieces of in-app copy.
Focal aspect ID
The aspect card the participant is currently engaged with. Per-aspect active dwell is credited to this ID by the ActivityTracker.
Idle dwell
Time the page is open but the participant is not interacting (beyond a 60-second grace period that counts as active).
Identity Strength Index
A composite dashboard metric incorporating importance, certainty, descriptiveness, valence, visibility, and structural features. Build 9 will surface it in the participant PDF.
Importance
1–11 rating of how central a self-aspect or attribute is to the participant. Operationalized after Markus's (1977) self-schema framework.
Intervention Mode
Full-screen modal hosting the possible-selves workflow and the Barrier Buster Challenge.
Landing path-gate
The first modal a participant sees, presenting two cards (Personal/Educational, Research) and a defer-to-export-time skip link.
Low-depth override
A flag set in the export metadata when a participant continues past the Stage 3.5 soft warning. Informational, not gate-blocking.
Overlap
The proportion of attributes that recur across two or more aspects in a single period. Higher overlap implies thematically interconnected identities and lowers complexity.
Possible selves
Hoped-for, expected, and feared future identities at horizons of 1, 5, 10, and 20 years. After Markus & Nurius (1986). Captured in a separate workflow inside Intervention Mode.
Pre-export validation gate
The four-stage pipeline (structural scan, completion wizard, pattern review wizard, low-depth warning) that runs before any export is written.
Progressive disclosure
Design pattern in which a control surface is revealed only after the user commits to using it. Used in the uncommitted-to-committed aspect transition.
Reflection scaffold
Per-period prompt panel offering non-suggestive, process-oriented questions. Audience mode shifts only the closing summary, not the prompts themselves.
Research mode
Layered concept: a participant in research mode is on the Research export track and may be in any of four audience modes. Most studies set both via URL parameters.
Return-to-Mapping
Escape control available from any stage of the validation gate. Closes the active wizard and focuses the first incomplete or flagged item.
Self-aspect
A coherent slice of the participant's identity — a role, context, relationship, or domain. Decided by the participant; the app provides no canonical list.
Self-complexity
Structural property of the self in which many distinct aspects carry non-overlapping attributes. After Linville (1985, 1987).
Self-schema
Markus's (1977) construct: the cognitive structure organizing self-relevant information. Schematic content is high in both importance and descriptiveness.
Session integrity block
Top-level export field summarizing dwell, idle warnings, review confirmations, contamination events, timer visibility, and the rolled-up cleanliness gradient.
Study ID
Researcher-assigned identifier captured on the Research export track, with optional regex validation.
Uncommitted aspect
A fresh aspect row before any name has been typed. Visually distinguished by a dashed border and muted background; full controls are not yet shown.
Valence
Positive / neutral / negative selection per aspect or attribute. Drives the dashboard's positivity ratio and several spillover metrics.
Visibility
1–11 rating of how publicly the participant discloses an aspect, anchored "Private" (1) to "Public" (11). After Petronio's (2002) Communication Privacy Management theory. Aspect-level only.
Welcome Back modal
Modal shown when a participant returns more than 30 days after a prior session, offering four routing options. Auto-archived snapshots receive a source: 'auto-archive' provenance marker.
No subject index is provided. Use ⌘ F (macOS) or Ctrl F (Windows / Linux) to search this manual on screen, or rely on the table of contents and list of figures for navigation.
Back matter
References
Citations from the manual, formatted in APA 7th.
Linville, P. W. (1985). Self-complexity and affective extremity: Don't put all of your eggs in one cognitive basket. Social Cognition, 3(1), 94–120.
Linville, P. W. (1987). Self-complexity as a cognitive buffer against stress-related illness and depression. Journal of Personality and Social Psychology, 52(4), 663–676.
Markus, H. (1977). Self-schemata and processing information about the self. Journal of Personality and Social Psychology, 35(2), 63–78. https://doi.org/10.1037/0022-3514.35.2.63
Markus, H., & Nurius, P. (1986). Possible selves. American Psychologist, 41(9), 954–969.
McConnell, A. R. (2011). The multiple self-aspects framework: Self-concept representation and its implications. Personality and Social Psychology Review, 15(1), 3–27.
Pelham, B. W. (1991). On confidence and consequence: The certainty and importance of self-knowledge. Journal of Personality and Social Psychology, 60(4), 518–530.
Petronio, S. (2002). Boundaries of privacy: Dialectics of disclosure. SUNY Press.
Rafaeli-Mor, E., Gotlib, I. H., & Revelle, W. (1999). The meaning and measurement of self-complexity. Personality and Individual Differences, 27(2), 341–356.
Sakaki, M. (2004). Self-complexity and depression: Two paths from a self-aspect to depressive symptoms. Japanese Psychological Research, 46(3), 197–210.
Schleicher, D. J., & McConnell, A. R. (2005). The complexity of self-complexity: An associated systems theory approach. Social Cognition, 23(5), 387–416.
Scott, W. A. (1969). Structure of natural cognitions. Journal of Personality and Social Psychology, 12(4), 261–278.
Back matter
Quick Reference Card
One page, designed to be torn out, screenshotted, or pinned beside your analysis pipeline.