Scores

leadr.scores

Modules:

leadr.scores.adapters

Modules:

  • orm – Score ORM models.
leadr.scores.adapters.orm

Score ORM models.

Classes:

  • ScoreEventORM – Score event ORM model for append-only event sourcing.
  • ScoreFlagORM – Score flag ORM model for anti-cheat detections.
  • ScoreSubmissionMetaORM – Score submission metadata ORM model for anti-cheat tracking.
leadr.scores.adapters.orm.ScoreEventORM

Bases: ImmutableBase

Score event ORM model for append-only event sourcing.

Represents an immutable fact about a score submission in the database. ScoreEvents are never updated or deleted - they are append-only. Maps to the score_events table with foreign keys to accounts, games, boards, and identities.
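
A minimal sketch of appending an event row, assuming an async SQLAlchemy session and pre-existing account/game/board/identity IDs; the surrounding unit-of-work code is not part of this module.

event = ScoreEventORM(
    account_id=account_id,      # UUIDs of existing rows (placeholders)
    game_id=game_id,
    board_id=board_id,
    identity_id=identity_id,
    event_payload={"value": 1500.0},
    is_test=False,
)
session.add(event)              # append-only: rows are never updated or deleted
await session.commit()          # assumes an AsyncSession unit of work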

Attributes:

# leadr.scores.adapters.orm.ScoreEventORM.account
account: Mapped[AccountORM] = relationship('AccountORM')
# leadr.scores.adapters.orm.ScoreEventORM.account_id
account_id: Mapped[UUID] = mapped_column(ForeignKey('accounts.id', ondelete='CASCADE'), nullable=False, index=True)
# leadr.scores.adapters.orm.ScoreEventORM.board
board: Mapped[BoardORM] = relationship('BoardORM')
# leadr.scores.adapters.orm.ScoreEventORM.board_id
board_id: Mapped[UUID] = mapped_column(ForeignKey('boards.id', ondelete='CASCADE'), nullable=False, index=True)
# leadr.scores.adapters.orm.ScoreEventORM.city
city: Mapped[str | None] = mapped_column(String, nullable=True, default=None)
# leadr.scores.adapters.orm.ScoreEventORM.country
country: Mapped[str | None] = mapped_column(String, nullable=True, default=None)
# leadr.scores.adapters.orm.ScoreEventORM.created_at
created_at: Mapped[timestamp]
# leadr.scores.adapters.orm.ScoreEventORM.event_payload
event_payload: Mapped[dict[str, Any]] = mapped_column(JSONB, nullable=False, default=dict, server_default='{}')
# leadr.scores.adapters.orm.ScoreEventORM.game
game: Mapped[GameORM] = relationship('GameORM')
# leadr.scores.adapters.orm.ScoreEventORM.game_id
game_id: Mapped[UUID] = mapped_column(ForeignKey('games.id', ondelete='CASCADE'), nullable=False, index=True)
# leadr.scores.adapters.orm.ScoreEventORM.id
id: Mapped[uuid_pk]
# leadr.scores.adapters.orm.ScoreEventORM.identity
identity: Mapped[IdentityORM] = relationship('IdentityORM')
# leadr.scores.adapters.orm.ScoreEventORM.identity_id
identity_id: Mapped[UUID] = mapped_column(ForeignKey('identities.id', ondelete='CASCADE'), nullable=False, index=True)
# leadr.scores.adapters.orm.ScoreEventORM.is_test
is_test: Mapped[bool] = mapped_column(Boolean, nullable=False, default=False)
# leadr.scores.adapters.orm.ScoreEventORM.metadata
metadata = Base.metadata
# leadr.scores.adapters.orm.ScoreEventORM.registry
registry = Base.registry
# leadr.scores.adapters.orm.ScoreEventORM.timezone
timezone: Mapped[str | None] = mapped_column(String, nullable=True, default=None)
leadr.scores.adapters.orm.ScoreFlagORM

Bases: Base

Score flag ORM model for anti-cheat detections.

Records suspicious patterns detected by the anti-cheat system. Flags can be reviewed by admins to confirm or dismiss detections. Uses score_event_id instead of score_id, linking to the immutable ScoreEvent in the event-sourcing architecture.
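
A hedged round-trip sketch between the domain entity and the ORM row; the ScoreFlag constructor arguments are assumptions based on the fields documented below.

flag = ScoreFlag(
    score_event_id=score_event_id,          # ID of the flagged ScoreEvent (placeholder)
    flag_type=FlagType.VELOCITY,
    confidence=FlagConfidence.MEDIUM,
    metadata={"seconds_since_last": 0.8},
)
row = ScoreFlagORM.from_domain(flag)        # persistable ORM instance
session.add(row)
restored = row.to_domain()                  # back to the domain entity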

Functions:

  • from_domain – Convert domain entity to ORM model.
  • to_domain – Convert ORM model to domain entity.

Attributes:

# leadr.scores.adapters.orm.ScoreFlagORM.confidence
confidence: Mapped[str] = mapped_column(String, nullable=False, index=True)
# leadr.scores.adapters.orm.ScoreFlagORM.created_at
created_at: Mapped[timestamp]
# leadr.scores.adapters.orm.ScoreFlagORM.deleted_at
deleted_at: Mapped[nullable_timestamp]
# leadr.scores.adapters.orm.ScoreFlagORM.flag_metadata
flag_metadata: Mapped[dict[str, Any]] = mapped_column('metadata', JSONB, nullable=False, default=dict, server_default='{}')
# leadr.scores.adapters.orm.ScoreFlagORM.flag_type
flag_type: Mapped[str] = mapped_column(String, nullable=False, index=True)
# leadr.scores.adapters.orm.ScoreFlagORM.from_domain
from_domain(entity)

Convert domain entity to ORM model.

# leadr.scores.adapters.orm.ScoreFlagORM.id
id: Mapped[uuid_pk]
# leadr.scores.adapters.orm.ScoreFlagORM.reviewed_at
reviewed_at: Mapped[datetime | None] = mapped_column(DateTime(timezone=True), nullable=True, default=None)
# leadr.scores.adapters.orm.ScoreFlagORM.reviewer_decision
reviewer_decision: Mapped[str | None] = mapped_column(String, nullable=True, default=None)
# leadr.scores.adapters.orm.ScoreFlagORM.reviewer_id
reviewer_id: Mapped[UUID | None] = mapped_column(nullable=True, default=None)
# leadr.scores.adapters.orm.ScoreFlagORM.score_event
score_event: Mapped[ScoreEventORM] = relationship('ScoreEventORM')
# leadr.scores.adapters.orm.ScoreFlagORM.score_event_id
score_event_id: Mapped[UUID] = mapped_column(ForeignKey('score_events.id', ondelete='CASCADE'), nullable=False, index=True)
# leadr.scores.adapters.orm.ScoreFlagORM.status
status: Mapped[str] = mapped_column(String, nullable=False, default='pending', index=True)
# leadr.scores.adapters.orm.ScoreFlagORM.to_domain
to_domain()

Convert ORM model to domain entity.

# leadr.scores.adapters.orm.ScoreFlagORM.updated_at
updated_at: Mapped[timestamp] = mapped_column(onupdate=(func.now()))
leadr.scores.adapters.orm.ScoreSubmissionMetaORM

Bases: Base

Score submission metadata ORM model for anti-cheat tracking.

Tracks submission history per identity/board combination to enable detection of suspicious patterns like rapid-fire submissions. Uses identity_id as the tracking key instead of device_id, aligning with the event-sourcing architecture where identity is the ranking key.

Functions:

  • from_domain – Convert domain entity to ORM model.
  • to_domain – Convert ORM model to domain entity.

Attributes:

# leadr.scores.adapters.orm.ScoreSubmissionMetaORM.board
board: Mapped[BoardORM] = relationship('BoardORM')
# leadr.scores.adapters.orm.ScoreSubmissionMetaORM.board_id
board_id: Mapped[UUID] = mapped_column(ForeignKey('boards.id', ondelete='CASCADE'), nullable=False, index=True)
# leadr.scores.adapters.orm.ScoreSubmissionMetaORM.created_at
created_at: Mapped[timestamp]
# leadr.scores.adapters.orm.ScoreSubmissionMetaORM.deleted_at
deleted_at: Mapped[nullable_timestamp]
# leadr.scores.adapters.orm.ScoreSubmissionMetaORM.from_domain
from_domain(entity)

Convert domain entity to ORM model.

# leadr.scores.adapters.orm.ScoreSubmissionMetaORM.id
id: Mapped[uuid_pk]
# leadr.scores.adapters.orm.ScoreSubmissionMetaORM.identity
identity: Mapped[IdentityORM] = relationship('IdentityORM')
# leadr.scores.adapters.orm.ScoreSubmissionMetaORM.identity_id
identity_id: Mapped[UUID] = mapped_column(ForeignKey('identities.id', ondelete='CASCADE'), nullable=False, index=True)
# leadr.scores.adapters.orm.ScoreSubmissionMetaORM.last_score_value
last_score_value: Mapped[float | None] = mapped_column(Float, nullable=True, default=None)
# leadr.scores.adapters.orm.ScoreSubmissionMetaORM.last_submission_at
last_submission_at: Mapped[datetime] = mapped_column(DateTime(timezone=True), nullable=False)
# leadr.scores.adapters.orm.ScoreSubmissionMetaORM.score_event
score_event: Mapped[ScoreEventORM] = relationship('ScoreEventORM')
# leadr.scores.adapters.orm.ScoreSubmissionMetaORM.score_event_id
score_event_id: Mapped[UUID] = mapped_column(ForeignKey('score_events.id', ondelete='CASCADE'), nullable=False, index=True)
# leadr.scores.adapters.orm.ScoreSubmissionMetaORM.submission_count
submission_count: Mapped[int] = mapped_column(Integer, nullable=False, default=1)
# leadr.scores.adapters.orm.ScoreSubmissionMetaORM.to_domain
to_domain()

Convert ORM model to domain entity.

# leadr.scores.adapters.orm.ScoreSubmissionMetaORM.updated_at
updated_at: Mapped[timestamp] = mapped_column(onupdate=(func.now()))

leadr.scores.api

Modules:

leadr.scores.api.score_event_routes

API routes for score event management (admin only).

Functions:

Attributes:

leadr.scores.api.score_event_routes.create_score_event
create_score_event(request, auth, score_service, board_service, background_tasks)

Create a score event (Admin API).

Creates a score event using the same processing as client submissions:

  • Runs anti-cheat checks
  • Updates rankings (BoardState/RunEntry)
  • Validates board type and payload

This endpoint is for admin testing, data seeding, and demo purposes.

Parameters:

Returns:

Raises:

  • 404 – Board not found
  • 403 – Non-superadmin accessing another account's board
  • 400 – Validation error (wrong board type, etc.)
leadr.scores.api.score_event_routes.get_score_event
get_score_event(event_id, auth, service)

Get a single score event by ID (Admin API).

Parameters:

Returns:

Raises:

  • 404 – Score event not found.
  • 403 – Non-superadmin trying to access another account's event.
leadr.scores.api.score_event_routes.list_score_events
list_score_events(auth, service, pagination, account_id=None, board_id=None, identity_id=None, is_test=None)

List score events (Admin API).

Returns a paginated list of score events. Score events are immutable facts about score submissions and cannot be updated or deleted.

For regular admins, account_id defaults to their account. Superadmins can view events across all accounts.

Parameters:

Returns:

Raises:

  • 400 – Invalid pagination cursor.
leadr.scores.api.score_event_routes.router
router = APIRouter()
leadr.scores.api.score_event_schemas

API request and response models for score events.

Classes:

leadr.scores.api.score_event_schemas.ScoreEventCreateRequest

Bases: BaseModel

Request model for creating a score event (admin only).

Creates a score event using the same processing as client submissions:

  • Runs anti-cheat checks
  • Updates rankings (BoardState/RunEntry)
  • Validates board type and payload
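
A hedged sketch of building the request body; the ID strings are placeholders, and the exact formats accepted by BoardID and IdentityID are assumptions.

request = ScoreEventCreateRequest(
    board_id="brd_123",          # placeholder board ID
    identity_id="idn_456",       # placeholder identity ID (format assumed)
    value=1500.0,                # score value, or delta for COUNTER boards
    player_name="Seed Player",
    is_test=True,                # mark seeded/demo data as test events
)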

Attributes:

# leadr.scores.api.score_event_schemas.ScoreEventCreateRequest.board_id
board_id: BoardID = Field(description='Board to submit to')
# leadr.scores.api.score_event_schemas.ScoreEventCreateRequest.city
city: str | None = Field(default=None, description='Optional city name')
# leadr.scores.api.score_event_schemas.ScoreEventCreateRequest.country
country: str | None = Field(default=None, description='Optional country code')
# leadr.scores.api.score_event_schemas.ScoreEventCreateRequest.identity_id
identity_id: IdentityID = Field(description='Identity submitting the score')
# leadr.scores.api.score_event_schemas.ScoreEventCreateRequest.is_test
is_test: bool = Field(default=False, description='Whether this is a test event')
# leadr.scores.api.score_event_schemas.ScoreEventCreateRequest.player_name
player_name: str | None = Field(default=None, description='Optional display name')
# leadr.scores.api.score_event_schemas.ScoreEventCreateRequest.timezone
timezone: str | None = Field(default=None, description='Optional timezone')
# leadr.scores.api.score_event_schemas.ScoreEventCreateRequest.value
value: float = Field(description='Score value (or delta for COUNTER boards)')
leadr.scores.api.score_event_schemas.ScoreEventResponse

Bases: BaseModel

Response model for a score event (admin only).

Score events are immutable facts about score submissions. They are append-only and cannot be updated or deleted.

Functions:

  • from_domain – Convert domain entity to response model.

Attributes:

# leadr.scores.api.score_event_schemas.ScoreEventResponse.account_id
account_id: AccountID = Field(description='ID of the account this event belongs to')
# leadr.scores.api.score_event_schemas.ScoreEventResponse.board_id
board_id: BoardID = Field(description='ID of the board this event was submitted to')
# leadr.scores.api.score_event_schemas.ScoreEventResponse.city
city: str | None = Field(default=None, description='City name from GeoIP lookup')
# leadr.scores.api.score_event_schemas.ScoreEventResponse.country
country: str | None = Field(default=None, description='Country code from GeoIP lookup')
# leadr.scores.api.score_event_schemas.ScoreEventResponse.created_at
created_at: datetime = Field(description='Timestamp when the event was created (UTC)')
# leadr.scores.api.score_event_schemas.ScoreEventResponse.event_payload
event_payload: dict[str, Any] = Field(description='Board-type-specific payload (value for RUN boards, delta for COUNTER)')
# leadr.scores.api.score_event_schemas.ScoreEventResponse.from_domain
from_domain(event)

Convert domain entity to response model.

Parameters:

  • event (ScoreEvent) – The domain ScoreEvent entity to convert.

Returns:

  • ScoreEventResponse – ScoreEventResponse with all fields populated from the domain entity.
# leadr.scores.api.score_event_schemas.ScoreEventResponse.game_id
game_id: GameID = Field(description='ID of the game this event belongs to')
# leadr.scores.api.score_event_schemas.ScoreEventResponse.id
id: ScoreEventID = Field(description='Unique identifier for the score event')
# leadr.scores.api.score_event_schemas.ScoreEventResponse.identity_id
identity_id: IdentityID = Field(description='ID of the identity that submitted this score')
# leadr.scores.api.score_event_schemas.ScoreEventResponse.is_test
is_test: bool = Field(description='True if this was a test submission')
# leadr.scores.api.score_event_schemas.ScoreEventResponse.timezone
timezone: str | None = Field(default=None, description='Timezone from GeoIP lookup')
leadr.scores.api.score_flag_routes

API routes for score flag management.

Functions:

Attributes:

leadr.scores.api.score_flag_routes.get_score_flag
get_score_flag(flag_id, service, auth)

Get a score flag by ID.

Parameters:

Returns:

Raises:

  • 403 – User does not have access to this flag's account.
  • 404 – Flag not found or soft-deleted.
leadr.scores.api.score_flag_routes.list_score_flags
list_score_flags(auth, service, pagination, account_id=None, board_id=None, game_id=None, status=None, flag_type=None)

List score flags for an account with optional filters and pagination.

Returns paginated flags for the specified account, with optional filtering by board, game, status, or flag type. Supports cursor-based pagination with bidirectional navigation and custom sorting.

For regular users, account_id is automatically derived from their API key. For superadmins, account_id is optional - if omitted, returns flags from all accounts.

Parameters:

  • auth (AdminAuthContextDep) – Authentication context with user info.
  • service (ScoreFlagServiceDep) – Injected score flag service dependency.
  • pagination (Annotated[PaginationParams, Depends()]) – Pagination parameters (cursor, limit, sort).
  • account_id (Annotated[AccountID | None, Query(description='Account ID filter')]) – Optional account_id query parameter (superadmins can omit to see all).
  • board_id (BoardID | None) – Optional board ID to filter by.
  • game_id (GameID | None) – Optional game ID to filter by.
  • status (str | None) – Optional status to filter by (pending, confirmed_cheat, etc.).
  • flag_type (str | None) – Optional flag type to filter by (velocity, duplicate, etc.).

Returns:

Raises:

  • 400 – Invalid cursor or sort field.
  • 403 – User does not have access to the specified account.
leadr.scores.api.score_flag_routes.router
router = APIRouter()
leadr.scores.api.score_flag_routes.update_score_flag
update_score_flag(flag_id, request, service, auth)

Update a score flag (review or soft-delete).

Allows reviewing a flag (updating status and reviewer decision) or soft-deleting the flag.

Parameters:

Returns:

Raises:

  • 403 – User does not have access to this flag's account.
  • 404 – Flag not found.
  • 400 – Invalid update request.
leadr.scores.api.score_flag_schemas

API request and response models for score flags.

Classes:

leadr.scores.api.score_flag_schemas.ScoreFlagResponse

Bases: BaseModel

Response model for a score flag.

Functions:

  • from_domain – Convert domain entity to response model.

Attributes:

# leadr.scores.api.score_flag_schemas.ScoreFlagResponse.confidence
confidence: str = Field(description='Confidence level of the flag (low, medium, high)')
# leadr.scores.api.score_flag_schemas.ScoreFlagResponse.created_at
created_at: datetime = Field(description='Timestamp when the flag was created (UTC)')
# leadr.scores.api.score_flag_schemas.ScoreFlagResponse.flag_type
flag_type: str = Field(description='Type of flag (e.g., velocity, duplicate, rate_limit)')
# leadr.scores.api.score_flag_schemas.ScoreFlagResponse.from_domain
from_domain(flag)

Convert domain entity to response model.

Parameters:

  • flag (ScoreFlag) – The domain ScoreFlag entity to convert.

Returns:

  • ScoreFlagResponse – ScoreFlagResponse with all fields populated from the domain entity.
# leadr.scores.api.score_flag_schemas.ScoreFlagResponse.id
id: ScoreFlagID = Field(description='Unique identifier for the score flag')
# leadr.scores.api.score_flag_schemas.ScoreFlagResponse.metadata
metadata: dict[str, Any] = Field(description='Additional metadata about the flag')
# leadr.scores.api.score_flag_schemas.ScoreFlagResponse.reviewed_at
reviewed_at: datetime | None = Field(default=None, description='Timestamp when flag was reviewed, or null')
# leadr.scores.api.score_flag_schemas.ScoreFlagResponse.reviewer_decision
reviewer_decision: str | None = Field(default=None, description="Admin's decision/notes, or null")
# leadr.scores.api.score_flag_schemas.ScoreFlagResponse.reviewer_id
reviewer_id: UserID | None = Field(default=None, description='ID of the user who reviewed this flag, or null')
# leadr.scores.api.score_flag_schemas.ScoreFlagResponse.score_event_id
score_event_id: ScoreEventID = Field(description='ID of the score event that was flagged')
# leadr.scores.api.score_flag_schemas.ScoreFlagResponse.status
status: str = Field(description='Status: pending, confirmed_cheat, false_positive, or dismissed')
# leadr.scores.api.score_flag_schemas.ScoreFlagResponse.updated_at
updated_at: datetime = Field(description='Timestamp of last update (UTC)')
leadr.scores.api.score_flag_schemas.ScoreFlagUpdateRequest

Bases: BaseModel

Request model for updating a score flag (reviewing).

Attributes:

# leadr.scores.api.score_flag_schemas.ScoreFlagUpdateRequest.deleted
deleted: bool | None = Field(default=None, description='Set to true to soft delete the flag')
# leadr.scores.api.score_flag_schemas.ScoreFlagUpdateRequest.reviewer_decision
reviewer_decision: str | None = Field(default=None, description="Admin's decision/notes about the flag")
# leadr.scores.api.score_flag_schemas.ScoreFlagUpdateRequest.status
status: str | None = Field(default=None, description='Updated status: pending, confirmed_cheat, false_positive, or dismissed')
leadr.scores.api.score_routes

API routes for score management.

Functions:

Attributes:

leadr.scores.api.score_routes.client_router
client_router = APIRouter()
leadr.scores.api.score_routes.create_score_client
create_score_client(score_request, request, service, board_service, background_tasks, auth, identity_service, pre_create_hook, post_create_hook)

Create a new score (Client API).

Creates a new score submission for a board. All IDs (account_id, game_id, identity_id) are automatically derived from the authenticated session.
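
A hedged client-side sketch using httpx; the /client/scores path is taken from the listing examples later in this page, while the base URL and the Authorization header format are assumptions.

import httpx

payload = {
    "board_id": "brd_123",               # placeholder board ID
    "player_name": "Alex",
    "value": 1234.0,
    "value_display": "1,234 points",
}
response = httpx.post(
    "https://api.example.com/client/scores",   # assumed base URL and path
    json=payload,
    headers={"Authorization": "Bearer <client-session-token>"},  # assumed auth scheme
)
response.raise_for_status()
score = response.json()                  # ScoreClientResponse fields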

Parameters:

  • score_request (ScoreClientCreateRequest) – Score creation details including board_id, player_name, and value.
  • request (Request) – FastAPI request object for accessing geo data.
  • service (ScoreServiceDep) – Injected score service dependency.
  • board_service (BoardServiceDep) – Injected board service for board lookup.
  • background_tasks (BackgroundTasks) – FastAPI background tasks for async metadata updates.
  • auth (ClientAuthContextWithNonceDep) – Client authentication context with device and identity info.
  • pre_create_hook (PreCreateScoreHookDep) – Hook called before score creation (for quota checks).
  • post_create_hook (PostCreateScoreHookDep) – Hook called after successful score creation.

Returns:

  • ScoreClientResponse – ScoreClientResponse with the created score (excludes device_id).

Raises:

  • 404 – Board not found.
  • 400 – Validation failed (board doesn't belong to account, or game doesn't match board's game).
  • 403 – Score rejected by anti-cheat (rate limit exceeded).
leadr.scores.api.score_routes.get_score
get_score(score_id, service, auth)

Get a score by ID.

Returns the score with its computed rank based on the board's sort direction. The rank represents the score's position in the leaderboard (1 = first place).

Parameters:

Returns:

  • ScoreResponse – ScoreResponse with the score details including rank.

Raises:

  • 403 – User does not have access to this score's account.
  • 404 – Score not found or soft-deleted.
leadr.scores.api.score_routes.get_score_client
get_score_client(score_id, service, auth)

Get a score by ID (Client API).

Returns the score with its computed rank based on the board's sort direction. The rank represents the score's position in the leaderboard (1 = first place).

Clients can only access scores from boards belonging to the same game as their authenticated device.

Parameters:

  • score_id (ScoreID) – Score identifier to retrieve.
  • service (ScoreServiceDep) – Injected score service dependency.
  • auth (ClientAuthContextDep) – Client authentication context with device info.

Returns:

Raises:

  • 403 – Client does not have access to this score's game.
  • 404 – Score not found or soft-deleted.
leadr.scores.api.score_routes.handle_list_scores
handle_list_scores(auth, service, board_service, pagination, account_id, board_id, game_id, identity_id, is_test=None, around_score_id=None, around_score_value=None)

Handle list scores logic for both admin and client endpoints.

This shared handler implements the core list scores functionality and returns different response models based on the authentication type:

  • Admin auth: Returns ScoreResponse with geo fields
  • Client auth: Returns ScoreClientResponse without geo fields

Parameters:

  • auth (AuthContext) – Authentication context (admin or client).
  • service (ScoreService) – Score service for data access.
  • board_service – Board service for fetching board details.
  • pagination (PaginationParams) – Pagination parameters (cursor, limit, sort).
  • account_id (AccountID | None) – Optional account ID filter.
  • board_id (BoardID) – Board ID to list scores for.
  • game_id (GameID | None) – Optional game ID filter.
  • identity_id (IdentityID | None) – Optional identity ID filter.
  • is_test (bool | None) – Optional filter for test scores. True returns only test scores, False returns only production scores, None returns all scores.
  • around_score_id (ScoreID | None) – Optional score ID to center results around.
  • around_score_value (float | None) – Optional score value to center results around (with placeholder).

Returns:

Raises:

  • HTTPException – 400 if cursor is invalid, sort field is invalid, or validation fails for around_score_id/around_score_value.
  • HTTPException – 404 if around_score_id score not found.
leadr.scores.api.score_routes.list_scores_admin
list_scores_admin(auth, service, board_service, pagination, board_id, account_id=None, game_id=None, identity_id=None, is_test=IsTestFilter.FALSE, around_score_id=None, around_score_value=None)

List scores for a board with optional filters and pagination.

Returns paginated scores for the specified board, with optional filtering by game or identity. Supports cursor-based pagination with bidirectional navigation and custom sorting.

For regular admin users, account_id is automatically derived from their API key. For superadmins, account_id must be explicitly provided as a query parameter.

Pagination:

  • Default: 20 items per page, sorted by created_at:desc,id:asc
  • Custom sort: Use ?sort=value:desc,created_at:asc
  • Valid sort fields: id, value, player_name, created_at, updated_at
  • Navigation: Use next_cursor/prev_cursor from response

Around Score:

  • Use around_score_id to get scores centered around a specific score
  • Use around_score_value to get scores centered around a hypothetical value (returns a placeholder score with is_placeholder=True)
  • Mutually exclusive with each other and with cursor pagination
  • Returns a window of scores with the target in the middle
  • Respects limit (e.g., limit=5 returns 2 above + target + 2 below)
Examples:
GET /v1/scores?board_id=brd_123&limit=50&sort=value:desc,created_at:asc
GET /v1/scores?board_id=brd_123&around_score_id=scr_456&limit=11
GET /v1/scores?board_id=brd_123&around_score_value=1500&limit=11

Parameters:

  • auth (AdminAuthContextDep) – Authentication context with user info.
  • service (ScoreServiceDep) – Injected score service dependency.
  • pagination (Annotated[PaginationParams, Depends()]) – Pagination parameters (cursor, limit, sort).
  • board_id (BoardID) – Board ID to list scores for.
  • account_id (Annotated[AccountID | None, Query(description='Account ID filter')]) – Optional account_id query parameter (required for superadmins).
  • game_id (GameID | None) – Optional game ID to filter by.
  • identity_id (IdentityID | None) – Optional identity ID to filter by.
  • around_score_id (Annotated[ScoreID | None, Query(description='Center results around this score ID')]) – Optional score ID to center results around.
  • around_score_value (Annotated[float | None, Query(description='Center results around this score value (returns placeholder)')]) – Optional value to center results around (with placeholder).

Returns:

Raises:

  • 400 – Invalid cursor, sort field, cursor state mismatch, or around validation.
  • 400 – Superadmin did not provide account_id.
  • 403 – User does not have access to the specified account.
  • 404 – around_score_id score not found.
leadr.scores.api.score_routes.list_scores_client
list_scores_client(auth, service, board_service, pagination, board_id, identity_id=None, around_score_id=None, around_score_value=None)

List scores for a board with optional filters and pagination.

Returns paginated scores for the specified board, with optional filtering by identity. Supports cursor-based pagination with bidirectional navigation and custom sorting.

Pagination:

  • Default: 20 items per page, sorted by created_at:desc,id:asc
  • Custom sort: Use ?sort=value:desc,created_at:asc
  • Valid sort fields: id, value, player_name, created_at, updated_at
  • Navigation: Use next_cursor/prev_cursor from response

Around Score:

  • Use around_score_id to get scores centered around a specific score
  • Use around_score_value to get scores centered around a hypothetical value (returns a placeholder score with is_placeholder=True)
  • Mutually exclusive with each other and with cursor pagination
  • Returns a window of scores with the target in the middle
  • Respects limit (e.g., limit=5 returns 2 above + target + 2 below)
Examples:
GET /client/scores?board_id=brd_123&limit=50&sort=value:desc,created_at:asc
GET /client/scores?board_id=brd_123&around_score_id=scr_456&limit=11
GET /client/scores?board_id=brd_123&around_score_value=1500&limit=11
GET /client/scores?board_id=brd_123&identity_id=me (filter to current identity)

Parameters:

  • auth (ClientAuthContextDep) – Authentication context with user info.
  • service (ScoreServiceDep) – Injected score service dependency.
  • pagination (Annotated[PaginationParams, Depends()]) – Pagination parameters (cursor, limit, sort).
  • board_id (BoardID) – Board ID to list scores for.
  • identity_id (Annotated[IdentityID | Literal['me'] | None, Query(description="Identity ID to filter by, or 'me' for current identity")]) – Optional identity ID to filter by, or "me" for current identity.
  • around_score_id (Annotated[ScoreID | None, Query(description='Center results around this score ID')]) – Optional score ID to center results around.
  • around_score_value (Annotated[float | None, Query(description='Center results around this score value (returns placeholder)')]) – Optional value to center results around (with placeholder).

Returns:

Raises:

  • 400 – Invalid cursor, sort field, cursor state mismatch, or around validation.
  • 403 – User does not have access to the specified account.
  • 404 – around_score_id score not found.
leadr.scores.api.score_routes.router
router = APIRouter()
leadr.scores.api.score_schemas

API request and response models for scores.

Classes:

leadr.scores.api.score_schemas.IsTestFilter

Bases: str, Enum

Filter options for is_test query parameter in admin score listing.
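
A small sketch, assuming the handlers translate this string filter into the bool | None value accepted by the list handlers; the helper name is hypothetical.

def is_test_to_bool(filter_: IsTestFilter) -> bool | None:
    if filter_ is IsTestFilter.ALL:
        return None                      # no is_test filtering
    return filter_ is IsTestFilter.TRUE  # True: only test scores, False: only production scores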

Attributes:

# leadr.scores.api.score_schemas.IsTestFilter.ALL
ALL = 'all'
# leadr.scores.api.score_schemas.IsTestFilter.FALSE
FALSE = 'false'
# leadr.scores.api.score_schemas.IsTestFilter.TRUE
TRUE = 'true'
leadr.scores.api.score_schemas.ScoreClientCreateRequest

Bases: ScoreCreateRequestBase

Request model for creating a score (Client API).

For client authentication, account_id, game_id, and device_id are automatically derived from the authenticated device session. Only game-specific fields are required.

Note: Timezone, country, and city are automatically populated from the client's IP address via GeoIP middleware.

Functions:

Attributes:

leadr.scores.api.score_schemas.ScoreClientResponse

Bases: BaseModel

Response model for a score returned to clients.

Similar to ScoreResponse but excludes sensitive geo data (timezone, country, city) that clients should not see for other players' scores.

Functions:

  • from_board_state – Convert BoardState to ScoreClientResponse with masked ID.
  • from_run_entry – Convert RunEntry to ScoreClientResponse with masked ID.

Attributes:

# leadr.scores.api.score_schemas.ScoreClientResponse.account_id
account_id: AccountID = Field(description='ID of the account this score belongs to')
# leadr.scores.api.score_schemas.ScoreClientResponse.board_id
board_id: BoardID = Field(description='ID of the board this score belongs to')
# leadr.scores.api.score_schemas.ScoreClientResponse.created_at
created_at: datetime = Field(description='Timestamp when the score was created (UTC)')
# leadr.scores.api.score_schemas.ScoreClientResponse.from_board_state
from_board_state(state, account_id, game_id, rank)

Convert BoardState to ScoreClientResponse with masked ID.

Uses denormalized fields from BoardState directly; no joins required.

Parameters:

  • state (BoardState) – The BoardState entity representing materialized ranking.
  • account_id (AccountID) – The account ID (from board lookup).
  • game_id (GameID) – The game ID (from board lookup).
  • rank (int) – The computed rank position (1-indexed).

Returns:

# leadr.scores.api.score_schemas.ScoreClientResponse.from_run_entry
from_run_entry(entry, account_id, game_id, rank)

Convert RunEntry to ScoreClientResponse with masked ID.

Uses denormalized fields from RunEntry directly; no joins required.

Parameters:

  • entry (RunEntry) – The RunEntry entity representing a single run.
  • account_id (AccountID) – The account ID (from board lookup).
  • game_id (GameID) – The game ID (from board lookup).
  • rank (int) – The computed rank position (1-indexed).

Returns:

# leadr.scores.api.score_schemas.ScoreClientResponse.game_id
game_id: GameID = Field(description='ID of the game this score belongs to')
# leadr.scores.api.score_schemas.ScoreClientResponse.id
id: ScoreID = Field(description='Unique identifier for the score')
# leadr.scores.api.score_schemas.ScoreClientResponse.identity_id
identity_id: IdentityID = Field(description='ID of the identity that submitted this score')
# leadr.scores.api.score_schemas.ScoreClientResponse.is_placeholder
is_placeholder: bool = Field(default=False, description='True if this is a synthetic placeholder score (from around_score_value query)')
# leadr.scores.api.score_schemas.ScoreClientResponse.is_test
is_test: bool = Field(default=False, description='True if this score was submitted in test mode')
# leadr.scores.api.score_schemas.ScoreClientResponse.metadata
metadata: Any | None = Field(default=None, description='Game-specific metadata, or null')
# leadr.scores.api.score_schemas.ScoreClientResponse.player_name
player_name: str = Field(description='Display name of the player')
# leadr.scores.api.score_schemas.ScoreClientResponse.rank
rank: int | None = Field(default=None, description='Leaderboard position (1 = first). Null if not querying by board_id.')
# leadr.scores.api.score_schemas.ScoreClientResponse.status
status: ScoreStatus = Field(description='Score lifecycle status (active, under_review, rejected)')
# leadr.scores.api.score_schemas.ScoreClientResponse.updated_at
updated_at: datetime = Field(description='Timestamp of last update (UTC)')
# leadr.scores.api.score_schemas.ScoreClientResponse.value
value: float = Field(description='Numeric value of the score')
# leadr.scores.api.score_schemas.ScoreClientResponse.value_display
value_display: str | None = Field(default=None, description='Formatted display string, or null')
leadr.scores.api.score_schemas.ScoreCreateRequestBase

Bases: BaseModel

Base request model for score creation with common fields.

Functions:

Attributes:

# leadr.scores.api.score_schemas.ScoreCreateRequestBase.board_id
board_id: BoardID = Field(description='ID of the board this score belongs to')
# leadr.scores.api.score_schemas.ScoreCreateRequestBase.metadata
metadata: Any | None = Field(default=None, description='Optional JSON metadata for game-specific data (max 1KB)')
# leadr.scores.api.score_schemas.ScoreCreateRequestBase.player_name
player_name: str = Field(description='Display name of the player')
# leadr.scores.api.score_schemas.ScoreCreateRequestBase.validate_metadata_size
validate_metadata_size(v)

Validate that metadata does not exceed the size limit.
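
A minimal sketch of the size check, assuming JSON serialization and the 1KB limit stated in the metadata field description; the real validator's implementation and error type may differ.

import json

MAX_METADATA_BYTES = 1024                # "max 1KB" per the metadata field description (assumed exact limit)

def validate_metadata_size(v):
    if v is not None and len(json.dumps(v).encode("utf-8")) > MAX_METADATA_BYTES:
        raise ValueError("metadata exceeds the 1KB size limit")
    return v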

# leadr.scores.api.score_schemas.ScoreCreateRequestBase.value
value: float = Field(description='Numeric value of the score for sorting/comparison')
# leadr.scores.api.score_schemas.ScoreCreateRequestBase.value_display
value_display: str | None = Field(default=None, description="Optional formatted display string (e.g., '1:23.45', '1,234 points')")
leadr.scores.api.score_schemas.ScoreResponse

Bases: BaseModel

Response model for a score.

This response model is built from BoardState or RunEntry data with denormalized fields for query efficiency.

Functions:

  • from_board_state – Convert BoardState to ScoreResponse with masked ID.
  • from_run_entry – Convert RunEntry to ScoreResponse with masked ID.

Attributes:

# leadr.scores.api.score_schemas.ScoreResponse.account_id
account_id: AccountID = Field(description='ID of the account this score belongs to')
# leadr.scores.api.score_schemas.ScoreResponse.board_id
board_id: BoardID = Field(description='ID of the board this score belongs to')
# leadr.scores.api.score_schemas.ScoreResponse.city
city: str | None = Field(default=None, description='City for categorization, or null')
# leadr.scores.api.score_schemas.ScoreResponse.country
country: str | None = Field(default=None, description='Country for categorization, or null')
# leadr.scores.api.score_schemas.ScoreResponse.created_at
created_at: datetime = Field(description='Timestamp when the score was created (UTC)')
# leadr.scores.api.score_schemas.ScoreResponse.from_board_state
from_board_state(state, account_id, game_id, rank)

Convert BoardState to ScoreResponse with masked ID.

Uses denormalized fields from BoardState directly; no joins required.

Parameters:

  • state (BoardState) – The BoardState entity representing materialized ranking.
  • account_id (AccountID) – The account ID (from board lookup).
  • game_id (GameID) – The game ID (from board lookup).
  • rank (int) – The computed rank position (1-indexed).

Returns:

  • ScoreResponse – ScoreResponse with ID masked from bst_ to scr_ prefix.
# leadr.scores.api.score_schemas.ScoreResponse.from_run_entry
from_run_entry(entry, account_id, game_id, rank)

Convert RunEntry to ScoreResponse with masked ID.

Uses denormalized fields from RunEntry directly; no joins required.

Parameters:

  • entry (RunEntry) – The RunEntry entity representing a single run.
  • account_id (AccountID) – The account ID (from board lookup).
  • game_id (GameID) – The game ID (from board lookup).
  • rank (int) – The computed rank position (1-indexed).

Returns:

  • ScoreResponse – ScoreResponse with ID masked from run_ to scr_ prefix.
# leadr.scores.api.score_schemas.ScoreResponse.game_id
game_id: GameID = Field(description='ID of the game this score belongs to')
# leadr.scores.api.score_schemas.ScoreResponse.id
id: ScoreID = Field(description='Unique identifier for the score')
# leadr.scores.api.score_schemas.ScoreResponse.identity_id
identity_id: IdentityID = Field(description='ID of the identity that submitted this score')
# leadr.scores.api.score_schemas.ScoreResponse.is_placeholder
is_placeholder: bool = Field(default=False, description='True if this is a synthetic placeholder score (from around_score_value query)')
# leadr.scores.api.score_schemas.ScoreResponse.is_test
is_test: bool = Field(default=False, description='True if this score was submitted in test mode')
# leadr.scores.api.score_schemas.ScoreResponse.metadata
metadata: Any | None = Field(default=None, description='Game-specific metadata, or null')
# leadr.scores.api.score_schemas.ScoreResponse.player_name
player_name: str = Field(description='Display name of the player')
# leadr.scores.api.score_schemas.ScoreResponse.rank
rank: int | None = Field(default=None, description='Leaderboard position (1 = first). Null if not querying by board_id.')
# leadr.scores.api.score_schemas.ScoreResponse.status
status: ScoreStatus = Field(description='Score lifecycle status (active, under_review, rejected)')
# leadr.scores.api.score_schemas.ScoreResponse.timezone
timezone: str | None = Field(default=None, description='Timezone for categorization, or null')
# leadr.scores.api.score_schemas.ScoreResponse.updated_at
updated_at: datetime = Field(description='Timestamp of last update (UTC)')
# leadr.scores.api.score_schemas.ScoreResponse.value
value: float = Field(description='Numeric value of the score')
# leadr.scores.api.score_schemas.ScoreResponse.value_display
value_display: str | None = Field(default=None, description='Formatted display string, or null')
leadr.scores.api.score_submission_meta_routes

API routes for score submission metadata management.

Functions:

Attributes:

leadr.scores.api.score_submission_meta_routes.get_submission_meta
get_submission_meta(meta_id, service, auth)

Get score submission metadata by ID.

Parameters:

Returns:

Raises:

  • 403 – User does not have access to this metadata's account.
  • 404 – Submission metadata not found or soft-deleted.
leadr.scores.api.score_submission_meta_routes.list_submission_meta
list_submission_meta(auth, service, pagination, account_id=None, board_id=None)

List score submission metadata for an account with optional filters and pagination.

Returns paginated submission metadata for the specified account, with optional filtering by board. Supports cursor-based pagination with bidirectional navigation and custom sorting.

For regular users, account_id is automatically derived from their API key. For superadmins, account_id is optional - if omitted, returns metadata from all accounts.

Parameters:

Returns:

Raises:

  • 400 – Invalid cursor or sort field.
  • 403 – User does not have access to the specified account.
leadr.scores.api.score_submission_meta_routes.router
router = APIRouter()
leadr.scores.api.score_submission_meta_schemas

API schemas for score submission metadata.

Classes:

leadr.scores.api.score_submission_meta_schemas.ScoreSubmissionMetaResponse

Bases: BaseModel

Response model for score submission metadata.

Functions:

  • from_domain – Convert domain entity to API response.

Attributes:

# leadr.scores.api.score_submission_meta_schemas.ScoreSubmissionMetaResponse.board_id
board_id: BoardID
# leadr.scores.api.score_submission_meta_schemas.ScoreSubmissionMetaResponse.created_at
created_at: datetime
# leadr.scores.api.score_submission_meta_schemas.ScoreSubmissionMetaResponse.from_domain
from_domain(meta)

Convert domain entity to API response.

# leadr.scores.api.score_submission_meta_schemas.ScoreSubmissionMetaResponse.id
id: ScoreSubmissionMetaID
# leadr.scores.api.score_submission_meta_schemas.ScoreSubmissionMetaResponse.identity_id
identity_id: IdentityID
# leadr.scores.api.score_submission_meta_schemas.ScoreSubmissionMetaResponse.last_score_value
last_score_value: float | None
# leadr.scores.api.score_submission_meta_schemas.ScoreSubmissionMetaResponse.last_submission_at
last_submission_at: datetime
# leadr.scores.api.score_submission_meta_schemas.ScoreSubmissionMetaResponse.score_event_id
score_event_id: ScoreEventID
# leadr.scores.api.score_submission_meta_schemas.ScoreSubmissionMetaResponse.submission_count
submission_count: int
# leadr.scores.api.score_submission_meta_schemas.ScoreSubmissionMetaResponse.updated_at
updated_at: datetime

leadr.scores.domain

Modules:

  • anti_cheat – Anti-cheat domain models and enums.
  • score_event – ScoreEvent domain model for append-only score event sourcing.
leadr.scores.domain.anti_cheat

Anti-cheat domain models and enums.

Modules:

  • enums – Anti-cheat enums for flag types, confidence levels, and actions.
  • models – Anti-cheat domain models.

Classes:

  • AntiCheatResult – Result of anti-cheat analysis on a score submission.
  • FlagAction – Action to take based on anti-cheat analysis.
  • FlagConfidence – Confidence level for anti-cheat detection.
  • FlagType – Type of anti-cheat flag detected.
  • ScoreFlag – Record of an anti-cheat flag raised for a score submission.
  • ScoreSubmissionMeta – Metadata tracking submission history for anti-cheat analysis.
  • TrustTier – Trust tier for devices/users, determining anti-cheat thresholds.
leadr.scores.domain.anti_cheat.AntiCheatResult

Bases: BaseModel

Result of anti-cheat analysis on a score submission.

This is a value object that encapsulates the decision made by the anti-cheat system. It indicates whether to accept, flag, or reject a score submission, along with the reasoning and supporting metadata.
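
An illustrative value constructed from the fields listed below; the reason text and metadata keys are invented for the example.

result = AntiCheatResult(
    action=FlagAction.FLAG,
    flag_type=FlagType.VELOCITY,
    confidence=FlagConfidence.MEDIUM,
    reason="Submissions less than 2 seconds apart",
    metadata={"seconds_since_last": 1.2},
)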

Attributes:

# leadr.scores.domain.anti_cheat.AntiCheatResult.action
action: FlagAction = Field(description='Action to take (ACCEPT/FLAG/REJECT)')
# leadr.scores.domain.anti_cheat.AntiCheatResult.confidence
confidence: FlagConfidence | None = Field(default=None, description='Confidence level of detection (if flagged/rejected)')
# leadr.scores.domain.anti_cheat.AntiCheatResult.flag_type
flag_type: FlagType | None = Field(default=None, description='Type of flag detected (if flagged/rejected)')
# leadr.scores.domain.anti_cheat.AntiCheatResult.metadata
metadata: dict[str, Any] | None = Field(default=None, description='Additional context and data supporting the decision')
# leadr.scores.domain.anti_cheat.AntiCheatResult.model_config
model_config = {'frozen': True}
# leadr.scores.domain.anti_cheat.AntiCheatResult.reason
reason: str | None = Field(default=None, description='Human-readable reason for the action')
leadr.scores.domain.anti_cheat.FlagAction

Bases: str, Enum

Action to take based on anti-cheat analysis.

Determines how the score submission should be handled.

Attributes:

  • ACCEPT – Accept the score submission without any flags.
  • FLAG – Accept the score but flag it for manual review.
  • REJECT – Reject the score submission (do not save to database).
# leadr.scores.domain.anti_cheat.FlagAction.ACCEPT
ACCEPT = 'accept'

Accept the score submission without any flags.

# leadr.scores.domain.anti_cheat.FlagAction.FLAG
FLAG = 'flag'

Accept the score but flag it for manual review.

# leadr.scores.domain.anti_cheat.FlagAction.REJECT
REJECT = 'reject'

Reject the score submission (do not save to database).

leadr.scores.domain.anti_cheat.FlagConfidence

Bases: str, Enum

Confidence level for anti-cheat detection.

Determines the action taken when a flag is raised:

  • HIGH: Auto-reject submission
  • MEDIUM: Flag for manual review, accept submission
  • LOW: Log for analysis, accept submission
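
A minimal sketch of the mapping described above; the real decision logic may weigh additional signals such as trust tier.

CONFIDENCE_TO_ACTION = {
    FlagConfidence.HIGH: FlagAction.REJECT,    # auto-reject
    FlagConfidence.MEDIUM: FlagAction.FLAG,    # accept, flag for manual review
    FlagConfidence.LOW: FlagAction.ACCEPT,     # accept, log for analysis
}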

Attributes:

  • HIGH – High confidence detection - reject submission.
  • LOW – Low confidence detection - log but accept.
  • MEDIUM – Medium confidence detection - flag for review but accept.
# leadr.scores.domain.anti_cheat.FlagConfidence.HIGH
HIGH = 'high'

High confidence detection - reject submission.

# leadr.scores.domain.anti_cheat.FlagConfidence.LOW
LOW = 'low'

Low confidence detection - log but accept.

# leadr.scores.domain.anti_cheat.FlagConfidence.MEDIUM
MEDIUM = 'medium'

Medium confidence detection - flag for review but accept.

leadr.scores.domain.anti_cheat.FlagType

Bases: str, Enum

Type of anti-cheat flag detected.

Each flag type represents a different detection tactic used to identify potentially suspicious score submissions.

Attributes:

  • CLUSTER – Multiple users submitting identical scores in short time window.
  • DUPLICATE – Identical score value submitted multiple times in short time window.
  • IMPOSSIBLE_VALUE – Score contains a mathematically impossible value (negative, NaN, etc.).
  • OUTLIER – Score is statistically anomalous compared to board distribution.
  • PATTERN – Suspicious pattern detected in submission history (all round numbers, etc.).
  • PROGRESSION – Unrealistic improvement percentage between submissions.
  • RATE_LIMIT – Score submission exceeds rate limits for the user/board.
  • VELOCITY – Submissions are happening too quickly (< 2 seconds apart).
# leadr.scores.domain.anti_cheat.FlagType.CLUSTER
CLUSTER = 'cluster'

Multiple users submitting identical scores in short time window.

# leadr.scores.domain.anti_cheat.FlagType.DUPLICATE
DUPLICATE = 'duplicate'

Identical score value submitted multiple times in short time window.

# leadr.scores.domain.anti_cheat.FlagType.IMPOSSIBLE_VALUE
IMPOSSIBLE_VALUE = 'impossible_value'

Score contains a mathematically impossible value (negative, NaN, etc.).

# leadr.scores.domain.anti_cheat.FlagType.OUTLIER
OUTLIER = 'outlier'

Score is statistically anomalous compared to board distribution.

# leadr.scores.domain.anti_cheat.FlagType.PATTERN
PATTERN = 'pattern'

Suspicious pattern detected in submission history (all round numbers, etc.).

# leadr.scores.domain.anti_cheat.FlagType.PROGRESSION
PROGRESSION = 'progression'

Unrealistic improvement percentage between submissions.

# leadr.scores.domain.anti_cheat.FlagType.RATE_LIMIT
RATE_LIMIT = 'rate_limit'

Score submission exceeds rate limits for the user/board.

# leadr.scores.domain.anti_cheat.FlagType.VELOCITY
VELOCITY = 'velocity'

Submissions are happening too quickly (< 2 seconds apart).

leadr.scores.domain.anti_cheat.ScoreFlag

Bases: Entity

Record of an anti-cheat flag raised for a score submission.

Represents a suspicious pattern detected by the anti-cheat system. Flags can be reviewed by admins to confirm or dismiss the detection.

Uses score_event_id instead of score_id, linking to the immutable ScoreEvent in the event-sourcing architecture.

Functions:

  • restore – Restore a soft-deleted entity.
  • soft_delete – Mark entity as soft-deleted.

Attributes:

# leadr.scores.domain.anti_cheat.ScoreFlag.confidence
confidence: FlagConfidence = Field(description='Confidence level of detection')
# leadr.scores.domain.anti_cheat.ScoreFlag.created_at
created_at: datetime = Field(default_factory=(lambda: datetime.now(UTC)), description='Timestamp when entity was created (UTC)')
# leadr.scores.domain.anti_cheat.ScoreFlag.deleted_at
deleted_at: datetime | None = Field(default=None, description='Timestamp when entity was soft-deleted (UTC), or null if active')
# leadr.scores.domain.anti_cheat.ScoreFlag.flag_type
flag_type: FlagType = Field(description='Type of suspicious behavior detected')
# leadr.scores.domain.anti_cheat.ScoreFlag.id
id: ScoreFlagID = Field(frozen=True, default_factory=ScoreFlagID, description='Unique score flag identifier')
# leadr.scores.domain.anti_cheat.ScoreFlag.is_deleted
is_deleted: bool

Check if entity is soft-deleted.

Returns:

  • bool – True if the entity has a deleted_at timestamp, False otherwise.
# leadr.scores.domain.anti_cheat.ScoreFlag.metadata
metadata: dict[str, Any] = Field(default_factory=dict, description='Supporting data for the detection')
# leadr.scores.domain.anti_cheat.ScoreFlag.model_config
model_config = ConfigDict(validate_assignment=True)
# leadr.scores.domain.anti_cheat.ScoreFlag.restore
restore()

Restore a soft-deleted entity.

Clears the deleted_at timestamp, making the entity active again.

Example:
>>> account.soft_delete()
>>> account.restore()
>>> assert account.is_deleted is False
# leadr.scores.domain.anti_cheat.ScoreFlag.reviewed_at
reviewed_at: datetime | None = Field(default=None, description='When the flag was reviewed by an admin')
# leadr.scores.domain.anti_cheat.ScoreFlag.reviewer_decision
reviewer_decision: str | None = Field(default=None, description="Admin's decision/notes on the flag")
# leadr.scores.domain.anti_cheat.ScoreFlag.reviewer_id
reviewer_id: UserID | None = Field(default=None, description='ID of the admin who reviewed the flag')
# leadr.scores.domain.anti_cheat.ScoreFlag.score_event_id
score_event_id: ScoreEventID = Field(description='ID of the flagged score event')
# leadr.scores.domain.anti_cheat.ScoreFlag.soft_delete
soft_delete()

Mark entity as soft-deleted.

Sets the deleted_at timestamp to the current UTC time. Entities that are already deleted are not affected (deleted_at remains at original deletion time).

Example:
>>> account = Account(name="Test", slug="test")
>>> account.soft_delete()
>>> assert account.is_deleted is True
# leadr.scores.domain.anti_cheat.ScoreFlag.status
status: ScoreFlagStatus = Field(default=(ScoreFlagStatus.PENDING), description='Review status (PENDING/CONFIRMED_CHEAT/FALSE_POSITIVE/DISMISSED)')
# leadr.scores.domain.anti_cheat.ScoreFlag.updated_at
updated_at: datetime = Field(default_factory=(lambda: datetime.now(UTC)), description='Timestamp of last update (UTC)')
leadr.scores.domain.anti_cheat.ScoreSubmissionMeta

Bases: Entity

Metadata tracking submission history for anti-cheat analysis.

Tracks the number and timing of score submissions per identity/board combination to enable detection of suspicious patterns like rapid-fire submissions or excessive submission rates.

Uses identity_id as the tracking key instead of device_id, aligning with the event-sourcing architecture where identity is the ranking key.
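
A hedged sketch of the rapid-fire (velocity) check this metadata enables; the 2-second threshold comes from the VELOCITY flag description, and the helper itself is an assumption, not part of this module.

from datetime import UTC, datetime

def is_too_fast(meta: ScoreSubmissionMeta, now: datetime | None = None) -> bool:
    now = now or datetime.now(UTC)
    return (now - meta.last_submission_at).total_seconds() < 2.0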

Functions:

  • restore – Restore a soft-deleted entity.
  • soft_delete – Mark entity as soft-deleted.

Attributes:

# leadr.scores.domain.anti_cheat.ScoreSubmissionMeta.board_id
board_id: BoardID = Field(description='ID of the board being submitted to')
# leadr.scores.domain.anti_cheat.ScoreSubmissionMeta.created_at
created_at: datetime = Field(default_factory=(lambda: datetime.now(UTC)), description='Timestamp when entity was created (UTC)')
# leadr.scores.domain.anti_cheat.ScoreSubmissionMeta.deleted_at
deleted_at: datetime | None = Field(default=None, description='Timestamp when entity was soft-deleted (UTC), or null if active')
# leadr.scores.domain.anti_cheat.ScoreSubmissionMeta.id
id: ScoreSubmissionMetaID = Field(frozen=True, default_factory=ScoreSubmissionMetaID, description='Unique submission metadata identifier')
# leadr.scores.domain.anti_cheat.ScoreSubmissionMeta.identity_id
identity_id: IdentityID = Field(description='ID of the identity submitting scores')
# leadr.scores.domain.anti_cheat.ScoreSubmissionMeta.is_deleted
is_deleted: bool

Check if entity is soft-deleted.

Returns:

  • bool – True if the entity has a deleted_at timestamp, False otherwise.
# leadr.scores.domain.anti_cheat.ScoreSubmissionMeta.last_score_value
last_score_value: float | None = Field(default=None, description='Value of the most recent score submission for duplicate detection')
# leadr.scores.domain.anti_cheat.ScoreSubmissionMeta.last_submission_at
last_submission_at: datetime = Field(description='Timestamp of the most recent submission')
# leadr.scores.domain.anti_cheat.ScoreSubmissionMeta.model_config
model_config = ConfigDict(validate_assignment=True)
# leadr.scores.domain.anti_cheat.ScoreSubmissionMeta.restore
restore()

Restore a soft-deleted entity.

Clears the deleted_at timestamp, making the entity active again.

Example:
>>> account.soft_delete()
>>> account.restore()
>>> assert account.is_deleted is False
# leadr.scores.domain.anti_cheat.ScoreSubmissionMeta.score_event_id
score_event_id: ScoreEventID = Field(description='ID of the most recent score event submission')
# leadr.scores.domain.anti_cheat.ScoreSubmissionMeta.soft_delete
soft_delete()

Mark entity as soft-deleted.

Sets the deleted_at timestamp to the current UTC time. Entities that are already deleted are not affected (deleted_at remains at original deletion time).

Example:
>>> account = Account(name="Test", slug="test")
>>> account.soft_delete()
>>> assert account.is_deleted is True
# leadr.scores.domain.anti_cheat.ScoreSubmissionMeta.submission_count
submission_count: int = Field(default=1, description='Total number of submissions by this identity to this board')
# leadr.scores.domain.anti_cheat.ScoreSubmissionMeta.updated_at
updated_at: datetime = Field(default_factory=(lambda: datetime.now(UTC)), description='Timestamp of last update (UTC)')
leadr.scores.domain.anti_cheat.TrustTier

Bases: str, Enum

Trust tier for devices/users, determining anti-cheat thresholds.

Different tiers have different rate limits and detection thresholds:

  • Tier A (Trusted): Most lenient thresholds, highest rate limits
  • Tier B (Verified): Moderate thresholds and rate limits
  • Tier C (Unverified): Strictest thresholds, lowest rate limits
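
An illustrative mapping only; the actual per-tier limits are configuration values and are not documented here.

SUBMISSIONS_PER_MINUTE = {
    TrustTier.A: 60,     # trusted: most lenient
    TrustTier.B: 30,     # verified: moderate
    TrustTier.C: 10,     # unverified: strictest
}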

Attributes:

  • A – Tier A - Trusted devices with verified attestation.
  • B – Tier B - Verified devices without full attestation.
  • C – Tier C - Unverified or new devices.
# leadr.scores.domain.anti_cheat.TrustTier.A
A = 'a'

Tier A - Trusted devices with verified attestation.

# leadr.scores.domain.anti_cheat.TrustTier.B
B = 'b'

Tier B - Verified devices without full attestation.

# leadr.scores.domain.anti_cheat.TrustTier.C
C = 'c'

Tier C - Unverified or new devices.

leadr.scores.domain.anti_cheat.enums

Anti-cheat enums for flag types, confidence levels, and actions.

Classes:

  • FlagAction – Action to take based on anti-cheat analysis.
  • FlagConfidence – Confidence level for anti-cheat detection.
  • FlagType – Type of anti-cheat flag detected.
  • ScoreFlagStatus – Status of a score flag review.
  • ScoreStatus – Lifecycle status of a score in the anti-cheat workflow.
  • TrustTier – Trust tier for devices/users, determining anti-cheat thresholds.
# leadr.scores.domain.anti_cheat.enums.FlagAction

Bases: str, Enum

Action to take based on anti-cheat analysis.

Determines how the score submission should be handled.

Attributes:

  • ACCEPT – Accept the score submission without any flags.
  • FLAG – Accept the score but flag it for manual review.
  • REJECT – Reject the score submission (do not save to database).
## leadr.scores.domain.anti_cheat.enums.FlagAction.ACCEPT
ACCEPT = 'accept'

Accept the score submission without any flags.

## leadr.scores.domain.anti_cheat.enums.FlagAction.FLAG
FLAG = 'flag'

Accept the score but flag it for manual review.

## leadr.scores.domain.anti_cheat.enums.FlagAction.REJECT
REJECT = 'reject'

Reject the score submission (do not save to database).

# leadr.scores.domain.anti_cheat.enums.FlagConfidence

Bases: str, Enum

Confidence level for anti-cheat detection.

Determines the action taken when a flag is raised:

  • HIGH: Auto-reject submission
  • MEDIUM: Flag for manual review, accept submission
  • LOW: Log for analysis, accept submission

Attributes:

  • HIGH – High confidence detection - reject submission.
  • LOW – Low confidence detection - log but accept.
  • MEDIUM – Medium confidence detection - flag for review but accept.
## leadr.scores.domain.anti_cheat.enums.FlagConfidence.HIGH
HIGH = 'high'

High confidence detection - reject submission.

## leadr.scores.domain.anti_cheat.enums.FlagConfidence.LOW
LOW = 'low'

Low confidence detection - log but accept.

## leadr.scores.domain.anti_cheat.enums.FlagConfidence.MEDIUM
MEDIUM = 'medium'

Medium confidence detection - flag for review but accept.

# leadr.scores.domain.anti_cheat.enums.FlagType

Bases: str, Enum

Type of anti-cheat flag detected.

Each flag type represents a different detection tactic used to identify potentially suspicious score submissions.

Attributes:

  • CLUSTER – Multiple users submitting identical scores in short time window.
  • DUPLICATE – Identical score value submitted multiple times in short time window.
  • IMPOSSIBLE_VALUE – Score contains a mathematically impossible value (negative, NaN, etc.).
  • OUTLIER – Score is statistically anomalous compared to board distribution.
  • PATTERN – Suspicious pattern detected in submission history (all round numbers, etc.).
  • PROGRESSION – Unrealistic improvement percentage between submissions.
  • RATE_LIMIT – Score submission exceeds rate limits for the user/board.
  • VELOCITY – Submissions are happening too quickly (< 2 seconds apart).
## leadr.scores.domain.anti_cheat.enums.FlagType.CLUSTER
CLUSTER = 'cluster'

Multiple users submitting identical scores in short time window.

## leadr.scores.domain.anti_cheat.enums.FlagType.DUPLICATE
DUPLICATE = 'duplicate'

Identical score value submitted multiple times in short time window.

## leadr.scores.domain.anti_cheat.enums.FlagType.IMPOSSIBLE_VALUE
IMPOSSIBLE_VALUE = 'impossible_value'

Score contains a mathematically impossible value (negative, NaN, etc.).

## leadr.scores.domain.anti_cheat.enums.FlagType.OUTLIER
OUTLIER = 'outlier'

Score is statistically anomalous compared to board distribution.

## leadr.scores.domain.anti_cheat.enums.FlagType.PATTERN
PATTERN = 'pattern'

Suspicious pattern detected in submission history (all round numbers, etc.).

## leadr.scores.domain.anti_cheat.enums.FlagType.PROGRESSION
PROGRESSION = 'progression'

Unrealistic improvement percentage between submissions.

## leadr.scores.domain.anti_cheat.enums.FlagType.RATE_LIMIT
RATE_LIMIT = 'rate_limit'

Score submission exceeds rate limits for the user/board.

## leadr.scores.domain.anti_cheat.enums.FlagType.VELOCITY
VELOCITY = 'velocity'

Submissions are happening too quickly (< 2 seconds apart).

# leadr.scores.domain.anti_cheat.enums.ScoreFlagStatus

Bases: str, Enum

Status of a score flag review.

Indicates whether a flag has been reviewed and what decision was made.

Attributes:

  • CONFIRMED_CHEAT – Admin confirmed this is cheating behavior.
  • DISMISSED – Admin dismissed the flag without a specific determination.
  • FALSE_POSITIVE – Admin determined this was legitimate gameplay.
  • PENDING – Flag has not been reviewed yet.
## leadr.scores.domain.anti_cheat.enums.ScoreFlagStatus.CONFIRMED_CHEAT
CONFIRMED_CHEAT = 'confirmed_cheat'

Admin confirmed this is cheating behavior.

## leadr.scores.domain.anti_cheat.enums.ScoreFlagStatus.DISMISSED
DISMISSED = 'dismissed'

Admin dismissed the flag without a specific determination.

## leadr.scores.domain.anti_cheat.enums.ScoreFlagStatus.FALSE_POSITIVE
FALSE_POSITIVE = 'false_positive'

Admin determined this was legitimate gameplay.

## leadr.scores.domain.anti_cheat.enums.ScoreFlagStatus.PENDING
PENDING = 'pending'

Flag has not been reviewed yet.

# leadr.scores.domain.anti_cheat.enums.ScoreStatus

Bases: str, Enum

Lifecycle status of a score in the anti-cheat workflow.

Tracks the score from submission through review, determining visibility on leaderboards.

Attributes:

  • ACTIVE – Score passed anti-cheat checks and is visible on leaderboards.
  • PROVISIONAL – Initial transient state before anti-cheat check completes.
  • REJECTED – Admin confirmed cheating - hidden from leaderboards.
  • UNDER_REVIEW – Score was flagged by anti-cheat, pending admin review. Still visible.
## leadr.scores.domain.anti_cheat.enums.ScoreStatus.ACTIVE
ACTIVE = 'active'

Score passed anti-cheat checks and is visible on leaderboards.

## leadr.scores.domain.anti_cheat.enums.ScoreStatus.PROVISIONAL
PROVISIONAL = 'provisional'

Initial transient state before anti-cheat check completes.

## leadr.scores.domain.anti_cheat.enums.ScoreStatus.REJECTED
REJECTED = 'rejected'

Admin confirmed cheating - hidden from leaderboards.

## leadr.scores.domain.anti_cheat.enums.ScoreStatus.UNDER_REVIEW
UNDER_REVIEW = 'under_review'

Score was flagged by anti-cheat, pending admin review. Still visible.

# leadr.scores.domain.anti_cheat.enums.TrustTier

Bases: str, Enum

Trust tier for devices/users, determining anti-cheat thresholds.

Different tiers have different rate limits and detection thresholds:

  • Tier A (Trusted): Most lenient thresholds, highest rate limits
  • Tier B (Verified): Moderate thresholds and rate limits
  • Tier C (Unverified): Strictest thresholds, lowest rate limits

Attributes:

  • A – Tier A - Trusted devices with verified attestation.
  • B – Tier B - Verified devices without full attestation.
  • C – Tier C - Unverified or new devices.
## leadr.scores.domain.anti_cheat.enums.TrustTier.A
A = 'a'

Tier A - Trusted devices with verified attestation.

## leadr.scores.domain.anti_cheat.enums.TrustTier.B
B = 'b'

Tier B - Verified devices without full attestation.

## leadr.scores.domain.anti_cheat.enums.TrustTier.C
C = 'c'

Tier C - Unverified or new devices.
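
As a hedged illustration of how tier-dependent limits might be selected, the sketch below uses made-up numbers; neither the values nor the MAX_SUBMISSIONS_PER_HOUR name come from the library:

>>> from leadr.scores.domain.anti_cheat.enums import TrustTier
>>> # Hypothetical per-tier rate limits: Tier A most lenient, Tier C strictest.
>>> MAX_SUBMISSIONS_PER_HOUR = {TrustTier.A: 120, TrustTier.B: 60, TrustTier.C: 20}
>>> MAX_SUBMISSIONS_PER_HOUR[TrustTier.C]
20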

leadr.scores.domain.anti_cheat.models

Anti-cheat domain models.

Classes:

  • AntiCheatResult – Result of anti-cheat analysis on a score submission.
  • ScoreFlag – Record of an anti-cheat flag raised for a score submission.
  • ScoreSubmissionMeta – Metadata tracking submission history for anti-cheat analysis.
# leadr.scores.domain.anti_cheat.models.AntiCheatResult

Bases: BaseModel

Result of anti-cheat analysis on a score submission.

This is a value object that encapsulates the decision made by the anti-cheat system. It indicates whether to accept, flag, or reject a score submission, along with the reasoning and supporting metadata.

Attributes:

## leadr.scores.domain.anti_cheat.models.AntiCheatResult.action
action: FlagAction = Field(description='Action to take (ACCEPT/FLAG/REJECT)')
## leadr.scores.domain.anti_cheat.models.AntiCheatResult.confidence
confidence: FlagConfidence | None = Field(default=None, description='Confidence level of detection (if flagged/rejected)')
## leadr.scores.domain.anti_cheat.models.AntiCheatResult.flag_type
flag_type: FlagType | None = Field(default=None, description='Type of flag detected (if flagged/rejected)')
## leadr.scores.domain.anti_cheat.models.AntiCheatResult.metadata
metadata: dict[str, Any] | None = Field(default=None, description='Additional context and data supporting the decision')
## leadr.scores.domain.anti_cheat.models.AntiCheatResult.model_config
model_config = {'frozen': True}
## leadr.scores.domain.anti_cheat.models.AntiCheatResult.reason
reason: str | None = Field(default=None, description='Human-readable reason for the action')
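
A sketch constructing the value object from its documented fields (the specific values are illustrative only):

>>> from leadr.scores.domain.anti_cheat.enums import FlagAction, FlagConfidence, FlagType
>>> from leadr.scores.domain.anti_cheat.models import AntiCheatResult
>>> result = AntiCheatResult(
...     action=FlagAction.FLAG,
...     flag_type=FlagType.VELOCITY,
...     confidence=FlagConfidence.MEDIUM,
...     reason="Submissions less than 2 seconds apart",
...     metadata={"seconds_since_last": 1.2},
... )
>>> result.action is FlagAction.FLAG
True

Because model_config is frozen, the result cannot be modified after construction.
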
# leadr.scores.domain.anti_cheat.models.ScoreFlag

Bases: Entity

Record of an anti-cheat flag raised for a score submission.

Represents a suspicious pattern detected by the anti-cheat system. Flags can be reviewed by admins to confirm or dismiss the detection.

Uses score_event_id instead of score_id, linking to the immutable ScoreEvent in the event-sourcing architecture.

Functions:

  • restore – Restore a soft-deleted entity.
  • soft_delete – Mark entity as soft-deleted.

Attributes:

## leadr.scores.domain.anti_cheat.models.ScoreFlag.confidence
confidence: FlagConfidence = Field(description='Confidence level of detection')
## leadr.scores.domain.anti_cheat.models.ScoreFlag.created_at
created_at: datetime = Field(default_factory=(lambda: datetime.now(UTC)), description='Timestamp when entity was created (UTC)')
## leadr.scores.domain.anti_cheat.models.ScoreFlag.deleted_at
deleted_at: datetime | None = Field(default=None, description='Timestamp when entity was soft-deleted (UTC), or null if active')
## leadr.scores.domain.anti_cheat.models.ScoreFlag.flag_type
flag_type: FlagType = Field(description='Type of suspicious behavior detected')
## leadr.scores.domain.anti_cheat.models.ScoreFlag.id
id: ScoreFlagID = Field(frozen=True, default_factory=ScoreFlagID, description='Unique score flag identifier')
## leadr.scores.domain.anti_cheat.models.ScoreFlag.is_deleted
is_deleted: bool

Check if entity is soft-deleted.

Returns:

  • bool – True if the entity has a deleted_at timestamp, False otherwise.
## leadr.scores.domain.anti_cheat.models.ScoreFlag.metadata
metadata: dict[str, Any] = Field(default_factory=dict, description='Supporting data for the detection')
## leadr.scores.domain.anti_cheat.models.ScoreFlag.model_config
model_config = ConfigDict(validate_assignment=True)
## leadr.scores.domain.anti_cheat.models.ScoreFlag.restore
restore()

Restore a soft-deleted entity.

Clears the deleted_at timestamp, making the entity active again.

Example:

>>> account.soft_delete()
>>> account.restore()
>>> assert account.is_deleted is False
## leadr.scores.domain.anti_cheat.models.ScoreFlag.reviewed_at
reviewed_at: datetime | None = Field(default=None, description='When the flag was reviewed by an admin')
## leadr.scores.domain.anti_cheat.models.ScoreFlag.reviewer_decision
reviewer_decision: str | None = Field(default=None, description="Admin's decision/notes on the flag")
## leadr.scores.domain.anti_cheat.models.ScoreFlag.reviewer_id
reviewer_id: UserID | None = Field(default=None, description='ID of the admin who reviewed the flag')
## leadr.scores.domain.anti_cheat.models.ScoreFlag.score_event_id
score_event_id: ScoreEventID = Field(description='ID of the flagged score event')
## leadr.scores.domain.anti_cheat.models.ScoreFlag.soft_delete
soft_delete()

Mark entity as soft-deleted.

Sets the deleted_at timestamp to the current UTC time. Entities that are already deleted are not affected (deleted_at remains at original deletion time).

Example:

>>> account = Account(name="Test", slug="test")
>>> account.soft_delete()
>>> assert account.is_deleted is True
## leadr.scores.domain.anti_cheat.models.ScoreFlag.status
status: ScoreFlagStatus = Field(default=(ScoreFlagStatus.PENDING), description='Review status (PENDING/CONFIRMED_CHEAT/FALSE_POSITIVE/DISMISSED)')
## leadr.scores.domain.anti_cheat.models.ScoreFlag.updated_at
updated_at: datetime = Field(default_factory=(lambda: datetime.now(UTC)), description='Timestamp of last update (UTC)')
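
A sketch creating a flag from the documented fields; score_event is assumed to be an existing ScoreEvent:

>>> from leadr.scores.domain.anti_cheat.enums import FlagConfidence, FlagType, ScoreFlagStatus
>>> from leadr.scores.domain.anti_cheat.models import ScoreFlag
>>> flag = ScoreFlag(
...     score_event_id=score_event.id,
...     flag_type=FlagType.DUPLICATE,
...     confidence=FlagConfidence.MEDIUM,
...     metadata={"previous_value": 1000.0},
... )
>>> flag.status is ScoreFlagStatus.PENDING  # review status defaults to PENDING
True
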
# leadr.scores.domain.anti_cheat.models.ScoreSubmissionMeta

Bases: Entity

Metadata tracking submission history for anti-cheat analysis.

Tracks the number and timing of score submissions per identity/board combination to enable detection of suspicious patterns like rapid-fire submissions or excessive submission rates.

Uses identity_id as the tracking key instead of device_id, aligning with the event-sourcing architecture where identity is the ranking key.

Functions:

  • restore – Restore a soft-deleted entity.
  • soft_delete – Mark entity as soft-deleted.

Attributes:

## leadr.scores.domain.anti_cheat.models.ScoreSubmissionMeta.board_id
board_id: BoardID = Field(description='ID of the board being submitted to')
## leadr.scores.domain.anti_cheat.models.ScoreSubmissionMeta.created_at
created_at: datetime = Field(default_factory=(lambda: datetime.now(UTC)), description='Timestamp when entity was created (UTC)')
## leadr.scores.domain.anti_cheat.models.ScoreSubmissionMeta.deleted_at
deleted_at: datetime | None = Field(default=None, description='Timestamp when entity was soft-deleted (UTC), or null if active')
## leadr.scores.domain.anti_cheat.models.ScoreSubmissionMeta.id
id: ScoreSubmissionMetaID = Field(frozen=True, default_factory=ScoreSubmissionMetaID, description='Unique submission metadata identifier')
## leadr.scores.domain.anti_cheat.models.ScoreSubmissionMeta.identity_id
identity_id: IdentityID = Field(description='ID of the identity submitting scores')
## leadr.scores.domain.anti_cheat.models.ScoreSubmissionMeta.is_deleted
is_deleted: bool

Check if entity is soft-deleted.

Returns:

  • bool – True if the entity has a deleted_at timestamp, False otherwise.
## leadr.scores.domain.anti_cheat.models.ScoreSubmissionMeta.last_score_value
last_score_value: float | None = Field(default=None, description='Value of the most recent score submission for duplicate detection')
## leadr.scores.domain.anti_cheat.models.ScoreSubmissionMeta.last_submission_at
last_submission_at: datetime = Field(description='Timestamp of the most recent submission')
## leadr.scores.domain.anti_cheat.models.ScoreSubmissionMeta.model_config
model_config = ConfigDict(validate_assignment=True)
## leadr.scores.domain.anti_cheat.models.ScoreSubmissionMeta.restore
restore()

Restore a soft-deleted entity.

Clears the deleted_at timestamp, making the entity active again.

Example:

>>> account.soft_delete()
>>> account.restore()
>>> assert account.is_deleted is False
## leadr.scores.domain.anti_cheat.models.ScoreSubmissionMeta.score_event_id
score_event_id: ScoreEventID = Field(description='ID of the most recent score event submission')
## leadr.scores.domain.anti_cheat.models.ScoreSubmissionMeta.soft_delete
soft_delete()

Mark entity as soft-deleted.

Sets the deleted_at timestamp to the current UTC time. Entities that are already deleted are not affected (deleted_at remains at original deletion time).

Example:

>>> account = Account(name="Test", slug="test")
>>> account.soft_delete()
>>> assert account.is_deleted is True
## leadr.scores.domain.anti_cheat.models.ScoreSubmissionMeta.submission_count
submission_count: int = Field(default=1, description='Total number of submissions by this identity to this board')
## leadr.scores.domain.anti_cheat.models.ScoreSubmissionMeta.updated_at
updated_at: datetime = Field(default_factory=(lambda: datetime.now(UTC)), description='Timestamp of last update (UTC)')
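
A sketch of how submission metadata could be maintained across submissions; the identity, board, and event objects are assumed to exist, and the update logic shown is illustrative rather than the service implementation:

>>> from datetime import UTC, datetime
>>> from leadr.scores.domain.anti_cheat.models import ScoreSubmissionMeta
>>> meta = ScoreSubmissionMeta(
...     identity_id=identity.id,
...     board_id=board.id,
...     score_event_id=event.id,
...     last_submission_at=datetime.now(UTC),
...     last_score_value=1000.0,
... )
>>> # On the next submission, bump the counters used for rate and velocity checks.
>>> meta.submission_count += 1
>>> meta.last_submission_at = datetime.now(UTC)
>>> meta.submission_count
2
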
leadr.scores.domain.score_event

ScoreEvent domain model for append-only score event sourcing.

Classes:

  • ScoreEvent – Append-only score event entity.
leadr.scores.domain.score_event.ScoreEvent

Bases: ImmutableEntity

Append-only score event entity.

ScoreEvent represents an immutable fact about a score submission. Unlike regular entities, ScoreEvents:

  • Have no updated_at (immutable after creation)
  • Have no deleted_at (append-only, never soft-deleted)
  • Are the source of truth for score history

The event_payload contains board-type-specific data:

  • RUN_IDENTITY/RUN_RUNS: {"value": <float>}
  • COUNTER: {"delta": <float>}
  • RATIO: No direct events (derived from other boards)

Attributes:

# leadr.scores.domain.score_event.ScoreEvent.account_id
account_id: AccountID = Field(description='Account that owns this event')
# leadr.scores.domain.score_event.ScoreEvent.board_id
board_id: BoardID = Field(description='Board this event was submitted to')
# leadr.scores.domain.score_event.ScoreEvent.city
city: str | None = Field(default=None, description='City name from GeoIP lookup')
# leadr.scores.domain.score_event.ScoreEvent.country
country: str | None = Field(default=None, description='Country code from GeoIP lookup')
# leadr.scores.domain.score_event.ScoreEvent.created_at
created_at: datetime = Field(default_factory=(lambda: datetime.now(UTC)), description='Timestamp when entity was created (UTC)')
# leadr.scores.domain.score_event.ScoreEvent.event_payload
event_payload: dict[str, Any] = Field(description='Board-type-specific payload (value for RUN boards, delta for COUNTER)')
# leadr.scores.domain.score_event.ScoreEvent.game_id
game_id: GameID = Field(description='Game this event belongs to')
# leadr.scores.domain.score_event.ScoreEvent.id
id: ScoreEventID = Field(frozen=True, default_factory=ScoreEventID, description='Unique identifier for this event')
# leadr.scores.domain.score_event.ScoreEvent.identity_id
identity_id: IdentityID = Field(description='Identity that submitted this score')
# leadr.scores.domain.score_event.ScoreEvent.is_test
is_test: bool = Field(default=False, description='Whether this is a test submission')
# leadr.scores.domain.score_event.ScoreEvent.model_config
model_config = ConfigDict(validate_assignment=True)
# leadr.scores.domain.score_event.ScoreEvent.timezone
timezone: str | None = Field(default=None, description='Timezone from GeoIP lookup')
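
A sketch creating an event for a RUN-type board; the account, game, board, and identity objects are assumed to already exist:

>>> from leadr.scores.domain.score_event import ScoreEvent
>>> event = ScoreEvent(
...     account_id=account.id,
...     game_id=game.id,
...     board_id=board.id,
...     identity_id=identity.id,
...     event_payload={"value": 1234.5},  # use {"delta": ...} for COUNTER boards
... )
>>> event.is_test
False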

leadr.scores.services

Modules:

leadr.scores.services.anti_cheat_repositories

Anti-cheat repository services.

Classes:

leadr.scores.services.anti_cheat_repositories.ScoreFlagRepository

Bases: BaseRepository[ScoreFlag, ScoreFlagORM]

Repository for managing score flag persistence.

Functions:

  • create – Create a new entity in the database.
  • delete – Soft delete an entity by setting its deleted_at timestamp.
  • filter – Filter flags by account and optional criteria with pagination.
  • get_by_id – Get an entity by its ID.
  • get_flags_by_score_event_id – Get all flags for a specific score event.
  • get_pending_flags – Get all pending (unreviewed) flags.
  • update – Update an existing entity in the database.

Attributes:

# leadr.scores.services.anti_cheat_repositories.ScoreFlagRepository.SORTABLE_FIELDS
SORTABLE_FIELDS = {'id', 'score_event_id', 'flag_type', 'confidence', 'status', 'created_at', 'updated_at'}
# leadr.scores.services.anti_cheat_repositories.ScoreFlagRepository.create
create(entity)

Create a new entity in the database.

Parameters:

Returns:

  • DomainEntityT – Created domain entity with refreshed data
# leadr.scores.services.anti_cheat_repositories.ScoreFlagRepository.delete
delete(entity_id)

Soft delete an entity by setting its deleted_at timestamp.

Parameters:

Raises:

# leadr.scores.services.anti_cheat_repositories.ScoreFlagRepository.filter
filter(account_id=None, board_id=None, game_id=None, status=None, flag_type=None, *, pagination, **kwargs)

Filter flags by account and optional criteria with pagination.

Joins with the score_events table to filter by account_id, since flags don't have a direct account relation.

Parameters:

  • account_id (AccountID | None) – Optional account ID to filter by. If None, returns all flags (superadmin use case). Regular users should always pass account_id.
  • board_id (BoardID | None) – Optional board ID to filter by
  • game_id (GameID | None) – Optional game ID to filter by
  • status (str | None) – Optional status to filter by (PENDING, CONFIRMED_CHEAT, etc.)
  • flag_type (str | None) – Optional flag type to filter by (VELOCITY, DUPLICATE, etc.)
  • pagination (PaginationParams) – Pagination parameters (required)
  • **kwargs (Any) – Additional filter parameters (reserved for future use)

Returns:

# leadr.scores.services.anti_cheat_repositories.ScoreFlagRepository.get_by_id
get_by_id(entity_id, include_deleted=False)

Get an entity by its ID.

Parameters:

  • entity_id (UUID4 | PrefixedID) – Entity ID to retrieve
  • include_deleted (bool) – If True, include soft-deleted entities. Defaults to False.

Returns:

  • DomainEntityT | None – Domain entity if found, None otherwise
# leadr.scores.services.anti_cheat_repositories.ScoreFlagRepository.get_flags_by_score_event_id
get_flags_by_score_event_id(score_event_id)

Get all flags for a specific score event.

Parameters:

  • score_event_id (ScoreEventID) – ID of the score event to get flags for

Returns:

  • list[ScoreFlag] – List of flags for the score event (excludes soft-deleted)
# leadr.scores.services.anti_cheat_repositories.ScoreFlagRepository.get_pending_flags
get_pending_flags()

Get all pending (unreviewed) flags.

Returns:

  • list[ScoreFlag] – List of flags with status PENDING (excludes soft-deleted)
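
Example (a sketch; repo is assumed to be a ScoreFlagRepository bound to an active async session):

>>> from leadr.scores.domain.anti_cheat.enums import ScoreFlagStatus
>>> pending = await repo.get_pending_flags()
>>> all(flag.status is ScoreFlagStatus.PENDING for flag in pending)
True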
# leadr.scores.services.anti_cheat_repositories.ScoreFlagRepository.session
session = session
# leadr.scores.services.anti_cheat_repositories.ScoreFlagRepository.update
update(entity)

Update an existing entity in the database.

Parameters:

  • entity (DomainEntityT) – Domain entity with updated data

Returns:

  • DomainEntityT – Updated domain entity with refreshed data

Raises:

leadr.scores.services.anti_cheat_repositories.ScoreSubmissionMetaRepository

Bases: BaseRepository[ScoreSubmissionMeta, ScoreSubmissionMetaORM]

Repository for managing score submission metadata persistence.

Functions:

  • create – Create a new entity in the database.
  • delete – Soft delete an entity by setting its deleted_at timestamp.
  • filter – Filter submission metadata by account and optional criteria with pagination.
  • get_by_id – Get an entity by its ID.
  • get_by_identity_and_board – Get submission metadata for an identity/board combination.
  • update – Update an existing entity in the database.

Attributes:

# leadr.scores.services.anti_cheat_repositories.ScoreSubmissionMetaRepository.SORTABLE_FIELDS
SORTABLE_FIELDS = {'id', 'identity_id', 'board_id', 'submission_count', 'last_submission_at', 'last_score_value', 'created_at', 'updated_at'}
# leadr.scores.services.anti_cheat_repositories.ScoreSubmissionMetaRepository.create
create(entity)

Create a new entity in the database.

Parameters:

Returns:

  • DomainEntityT – Created domain entity with refreshed data
# leadr.scores.services.anti_cheat_repositories.ScoreSubmissionMetaRepository.delete
delete(entity_id)

Soft delete an entity by setting its deleted_at timestamp.

Parameters:

Raises:

# leadr.scores.services.anti_cheat_repositories.ScoreSubmissionMetaRepository.filter
filter(account_id=None, board_id=None, identity_id=None, *, pagination, **kwargs)

Filter submission metadata by account and optional criteria with pagination.

Joins with the score_events table to filter by account_id, since submission metadata doesn't have a direct account relation.

Parameters:

  • account_id (AccountID | None) – Optional account ID to filter by. If None, returns all metadata (superadmin use case). Regular users should always pass account_id.
  • board_id (BoardID | None) – Optional board ID to filter by
  • identity_id (IdentityID | None) – Optional identity ID to filter by
  • pagination (PaginationParams) – Pagination parameters (required)
  • **kwargs (Any) – Additional filter parameters (reserved for future use)

Returns:

# leadr.scores.services.anti_cheat_repositories.ScoreSubmissionMetaRepository.get_by_id
get_by_id(entity_id, include_deleted=False)

Get an entity by its ID.

Parameters:

  • entity_id (UUID4 | PrefixedID) – Entity ID to retrieve
  • include_deleted (bool) – If True, include soft-deleted entities. Defaults to False.

Returns:

  • DomainEntityT | None – Domain entity if found, None otherwise
# leadr.scores.services.anti_cheat_repositories.ScoreSubmissionMetaRepository.get_by_identity_and_board
get_by_identity_and_board(identity_id, board_id)

Get submission metadata for an identity/board combination.

Parameters:

  • identity_id (IdentityID) – ID of the identity submitting scores
  • board_id (BoardID) – ID of the board being submitted to

Returns:

# leadr.scores.services.anti_cheat_repositories.ScoreSubmissionMetaRepository.session
session = session
# leadr.scores.services.anti_cheat_repositories.ScoreSubmissionMetaRepository.update
update(entity)

Update an existing entity in the database.

Parameters:

  • entity (DomainEntityT) – Domain entity with updated data

Returns:

  • DomainEntityT – Updated domain entity with refreshed data

Raises:

leadr.scores.services.anti_cheat_service

Anti-cheat service for detecting suspicious score submissions.

Classes:

leadr.scores.services.anti_cheat_service.AntiCheatService
AntiCheatService(session)

Service for anti-cheat detection and analysis.

Implements various detection tactics to identify suspicious score submissions:

  • Rate limiting: Prevents excessive submissions per identity/board
  • Duplicate detection: Identifies repeated identical scores
  • Velocity detection: Detects rapid-fire submissions
  • Statistical outliers: Identifies anomalous scores
  • Pattern detection: Finds suspicious submission patterns

Uses identity_id as the tracking key instead of device_id, aligning with the event-sourcing architecture where identity is the ranking key.

Functions:

Attributes:

Parameters:

  • session (AsyncSession) – Database session for querying metadata
# leadr.scores.services.anti_cheat_service.AntiCheatService.check_submission_for_event
check_submission_for_event(score_event, trust_tier, identity_id, board_id)

Check a score event submission for suspicious patterns.

Parameters:

  • score_event (ScoreEvent) – ScoreEvent being submitted
  • trust_tier (TrustTier) – Trust tier of the identity (A/B/C)
  • identity_id (IdentityID) – ID of the identity submitting the score
  • board_id (BoardID) – ID of the board being submitted to

Returns:

  • AntiCheatResult – AntiCheatResult indicating action to take (ACCEPT/FLAG/REJECT)
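
Example (a sketch; service is assumed to be an AntiCheatService bound to an active session, event an existing ScoreEvent, and the call is awaited like the other service methods shown in this document):

>>> result = await service.check_submission_for_event(
...     score_event=event,
...     trust_tier=TrustTier.B,
...     identity_id=event.identity_id,
...     board_id=event.board_id,
... )
>>> isinstance(result.action, FlagAction)
True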
# leadr.scores.services.anti_cheat_service.AntiCheatService.meta_repo
meta_repo = ScoreSubmissionMetaRepository(session)
# leadr.scores.services.anti_cheat_service.AntiCheatService.session
session = session
leadr.scores.services.dependencies

Score service dependencies for FastAPI dependency injection.

Functions:

Attributes:

leadr.scores.services.dependencies.ScoreEventServiceDep
ScoreEventServiceDep = Annotated[ScoreEventService, Depends(get_score_event_service)]
leadr.scores.services.dependencies.ScoreFlagServiceDep
ScoreFlagServiceDep = Annotated[ScoreFlagService, Depends(get_score_flag_service)]
leadr.scores.services.dependencies.ScoreServiceDep
ScoreServiceDep = Annotated[ScoreService, Depends(get_score_service)]
leadr.scores.services.dependencies.ScoreSubmissionMetaServiceDep
ScoreSubmissionMetaServiceDep = Annotated[ScoreSubmissionMetaService, Depends(get_score_submission_meta_service)]
leadr.scores.services.dependencies.get_score_event_service
get_score_event_service(db)

Get ScoreEventService dependency.

Parameters:

Returns:

leadr.scores.services.dependencies.get_score_flag_service
get_score_flag_service(db)

Get ScoreFlagService dependency.

Parameters:

Returns:

leadr.scores.services.dependencies.get_score_service
get_score_service(db)

Get ScoreService dependency.

Parameters:

Returns:

leadr.scores.services.dependencies.get_score_submission_meta_service
get_score_submission_meta_service(db)

Get ScoreSubmissionMetaService dependency.

Parameters:

Returns:

leadr.scores.services.repositories

Score repository services.

Classes:

leadr.scores.services.repositories.ScoreEventRepository

Bases: ImmutableBaseRepository[ScoreEvent, ScoreEventORM]

Repository for managing score event persistence.

Score events are immutable (append-only) so this repository does not support update or delete operations.

Functions:

  • create – Create a new immutable entity in the database.
  • filter – Filter score events based on criteria with pagination.
  • get_by_id – Get an immutable entity by its ID.

Attributes:

# leadr.scores.services.repositories.ScoreEventRepository.create
create(entity)

Create a new immutable entity in the database.

Parameters:

Returns:

# leadr.scores.services.repositories.ScoreEventRepository.filter
filter(account_id=None, board_id=None, identity_id=None, is_test=None, *, pagination, **kwargs)

Filter score events based on criteria with pagination.

Parameters:

  • account_id (AccountID | None) – Optional account ID filter
  • board_id (BoardID | None) – Optional board ID filter
  • identity_id (IdentityID | None) – Optional identity ID filter
  • is_test (bool | None) – Optional filter for test events
  • pagination (PaginationParams) – Required pagination parameters

Returns:

# leadr.scores.services.repositories.ScoreEventRepository.get_by_id
get_by_id(entity_id)

Get an immutable entity by its ID.

Parameters:

Returns:

# leadr.scores.services.repositories.ScoreEventRepository.session
session = session
leadr.scores.services.score_event_service

Score event service for managing immutable score events.

Classes:

leadr.scores.services.score_event_service.ScoreEventService
ScoreEventService(session)

Service for managing score events.

Score events are immutable (append-only) facts about score submissions. This service only provides create, get, and list operations. No update or delete operations are available.

Functions:

Attributes:

Parameters:

# leadr.scores.services.score_event_service.ScoreEventService.create_score_event
create_score_event(account_id, game_id, board_id, identity_id, event_payload, is_test=False, timezone=None, country=None, city=None)

Create a new score event.

Parameters:

  • account_id (AccountID) – Account that owns this event
  • game_id (GameID) – Game this event belongs to
  • board_id (BoardID) – Board this event was submitted to
  • identity_id (IdentityID) – Identity that submitted this score
  • event_payload (dict[str, Any]) – Board-type-specific payload (value or delta)
  • is_test (bool) – Whether this is a test submission
  • timezone (str | None) – Timezone from GeoIP lookup
  • country (str | None) – Country code from GeoIP lookup
  • city (str | None) – City name from GeoIP lookup

Returns:

# leadr.scores.services.score_event_service.ScoreEventService.get_by_id_or_raise
get_by_id_or_raise(event_id)

Get a score event by ID, raising if not found.

Parameters:

Returns:

Raises:

# leadr.scores.services.score_event_service.ScoreEventService.get_score_event
get_score_event(event_id)

Get a score event by ID.

Parameters:

Returns:

  • ScoreEvent | None – ScoreEvent if found, None otherwise
# leadr.scores.services.score_event_service.ScoreEventService.list_score_events
list_score_events(account_id=None, board_id=None, identity_id=None, is_test=None, limit=50)

List score events with optional filters.

Parameters:

  • account_id (AccountID | None) – Optional filter by account
  • board_id (BoardID | None) – Optional filter by board
  • identity_id (IdentityID | None) – Optional filter by identity
  • is_test (bool | None) – Optional filter by test flag
  • limit (int) – Maximum number of results

Returns:

# leadr.scores.services.score_event_service.ScoreEventService.repository
repository = ScoreEventRepository(session)
# leadr.scores.services.score_event_service.ScoreEventService.session
session = session
leadr.scores.services.score_flag_service

Score flag service for managing flag operations.

Classes:

  • ScoreFlagService – Service for managing score flag lifecycle and operations.
leadr.scores.services.score_flag_service.ScoreFlagService
ScoreFlagService(session)

Bases: BaseService[ScoreFlag, ScoreFlagRepository]

Service for managing score flag lifecycle and operations.

This service orchestrates flag listing, retrieval, and review operations by coordinating between the domain models and repository layer.

Functions:

  • delete – Soft-delete an entity.
  • get_by_id – Get an entity by its ID.
  • get_by_id_or_raise – Get an entity by its ID or raise EntityNotFoundError.
  • get_flag – Get a flag by its ID.
  • list_all – List all non-deleted entities.
  • list_flags – List score flags for an account with optional filters and pagination.
  • review_flag – Review a flag and update its status.
  • soft_delete – Soft-delete an entity and return it before deletion.
  • update_flag – Update a flag's status and/or reviewer decision.

Attributes:

Parameters:

  • session (AsyncSession) – SQLAlchemy async session for database operations
# leadr.scores.services.score_flag_service.ScoreFlagService.delete
delete(entity_id)

Soft-delete an entity.

Parameters:

  • entity_id (UUID | PrefixedID) – The ID of the entity to delete

Raises:

# leadr.scores.services.score_flag_service.ScoreFlagService.get_by_id
get_by_id(entity_id)

Get an entity by its ID.

Parameters:

  • entity_id (UUID | PrefixedID) – The ID of the entity to retrieve

Returns:

  • DomainEntityT | None – The domain entity if found, None otherwise
# leadr.scores.services.score_flag_service.ScoreFlagService.get_by_id_or_raise
get_by_id_or_raise(entity_id)

Get an entity by its ID or raise EntityNotFoundError.

Parameters:

  • entity_id (UUID | PrefixedID) – The ID of the entity to retrieve

Returns:

Raises:

  • EntityNotFoundError – If the entity is not found (converted to HTTP 404 by global handler)
# leadr.scores.services.score_flag_service.ScoreFlagService.get_flag
get_flag(flag_id)

Get a flag by its ID.

Parameters:

  • flag_id (ScoreFlagID) – The ID of the flag to retrieve

Returns:

  • ScoreFlag | None – The flag if found, None otherwise
Example:

>>> flag = await service.get_flag(flag_id)
# leadr.scores.services.score_flag_service.ScoreFlagService.list_all
list_all()

List all non-deleted entities.

Returns:

# leadr.scores.services.score_flag_service.ScoreFlagService.list_flags
list_flags(account_id, board_id=None, game_id=None, status=None, flag_type=None, *, pagination)

List score flags for an account with optional filters and pagination.

Parameters:

  • account_id (AccountID | None) – Account ID to filter by. If None, returns all flags (superadmin use case).
  • board_id (BoardID | None) – Optional board ID to filter by
  • game_id (GameID | None) – Optional game ID to filter by
  • status (str | None) – Optional status to filter by (PENDING, CONFIRMED_CHEAT, etc.)
  • flag_type (str | None) – Optional flag type to filter by (VELOCITY, DUPLICATE, etc.)
  • pagination (PaginationParams) – Pagination parameters (required)

Returns:

Example:

>>> flags = await service.list_flags(
...     account_id=account.id,
...     status="pending",
...     pagination=PaginationParams(cursor=None, limit=100, sort=None),
... )
# leadr.scores.services.score_flag_service.ScoreFlagService.repository
repository = repository if repository is not None else self._create_repository(session)
# leadr.scores.services.score_flag_service.ScoreFlagService.review_flag
review_flag(flag_id, status, reviewer_decision=None, reviewer_id=None)

Review a flag and update its status.

Note: Ranking updates for flag status changes are not yet implemented in the event-sourcing architecture.

Parameters:

  • flag_id (ScoreFlagID) – The ID of the flag to review
  • status (ScoreFlagStatus) – New status (CONFIRMED_CHEAT, FALSE_POSITIVE, DISMISSED)
  • reviewer_decision (str | None) – Optional admin notes/decision
  • reviewer_id (UserID | None) – Optional ID of the reviewing admin

Returns:

Raises:

Example:

>>> flag = await service.review_flag(
...     flag_id=flag.id,
...     status=ScoreFlagStatus.CONFIRMED_CHEAT,
...     reviewer_decision="Verified cheating behavior",
... )
# leadr.scores.services.score_flag_service.ScoreFlagService.session
session = session
# leadr.scores.services.score_flag_service.ScoreFlagService.soft_delete
soft_delete(entity_id)

Soft-delete an entity and return it before deletion.

Useful for endpoints that need to return the deleted entity in the response.

Parameters:

  • entity_id (UUID | PrefixedID) – The ID of the entity to delete

Returns:

Raises:

# leadr.scores.services.score_flag_service.ScoreFlagService.update_flag
update_flag(flag_id, **updates)

Update a flag's status and/or reviewer decision.

Accepts any fields to update as keyword arguments. Only fields explicitly provided will be updated, allowing null values to clear optional fields.

Note: When status is updated, reviewed_at is automatically set to the current time.

Parameters:

  • flag_id (ScoreFlagID) – The ID of the flag to update
  • **updates (Any) – Field names and values to update

Returns:

Raises:

Example:

>>> flag = await service.update_flag(
...     flag_id=flag.id,
...     status=ScoreFlagStatus.FALSE_POSITIVE,
... )
leadr.scores.services.score_service

Score service for managing score operations.

Classes:

  • ScoreService – Service for managing score lifecycle and operations.
leadr.scores.services.score_service.ScoreService
ScoreService(session)

Service for managing score lifecycle and operations.

This service orchestrates score submission via event-sourcing, and provides query methods that delegate to BoardStateService and RunEntryService for reading materialized ranking data.

The Score entity has been replaced by:

  • ScoreEvent: immutable event log
  • BoardState/RunEntry: materialized ranking views

All GET queries return BoardState or RunEntry data with IDs masked to the scr_ prefix.

Functions:

  • get_score_by_id – Get a score by its ID with computed rank.
  • list_scores – List scores for a board with optional filters and pagination.
  • submit_score – Submit a score using the event-sourcing architecture.

Attributes:

Parameters:

# leadr.scores.services.score_service.ScoreService.get_score_by_id
get_score_by_id(score_id, account_id=None, game_id=None)

Get a score by its ID with computed rank.

The score_id uses the scr_ prefix but internally maps to a BoardState (bst_) or a RunEntry (run_) depending on the board type. This method tries both services.

Parameters:

  • score_id (ScoreID) – The score ID (scr_ prefix).
  • account_id (AccountID | None) – Optional account ID for authorization check.
  • game_id (GameID | None) – Optional game ID for authorization check.

Returns:

  • tuple[BoardState | RunEntry, Board, int] – Tuple of (BoardState or RunEntry, Board, rank) containing the ranking data, the board, and the computed rank (1-indexed).

Raises:

# leadr.scores.services.score_service.ScoreService.list_scores
list_scores(board_id, account_id=None, game_id=None, identity_id=None, is_test=None, *, pagination, around_score_id=None, around_score_value=None)

List scores for a board with optional filters and pagination.

Delegates to BoardStateService or RunEntryService based on board type.

Parameters:

  • board_id (BoardID) – Board ID to list scores for.
  • account_id (AccountID | None) – Optional account ID to filter by (for authorization).
  • game_id (GameID | None) – Optional game ID filter.
  • identity_id (IdentityID | None) – Optional identity ID filter.
  • is_test (bool | None) – Optional filter for test scores.
  • pagination (PaginationParams) – Pagination parameters.
  • around_score_id (ScoreID | None) – Optional score ID to center results around.
  • around_score_value (float | None) – Optional value to center results around.

Returns:

# leadr.scores.services.score_service.ScoreService.session
session = session
# leadr.scores.services.score_service.ScoreService.submit_score
submit_score(board_id, identity_id, value=None, delta=None, player_name=None, timezone=None, country=None, city=None, is_test=False, trust_tier=TrustTier.B, background_tasks=None)

Submit a score using the event-sourcing architecture.

This method creates a ScoreEvent, runs anti-cheat checks, and then updates the appropriate materialized view (BoardState or RunEntry) based on the board type and anti-cheat result.

Parameters:

  • board_id (BoardID) – The board to submit to.
  • identity_id (IdentityID) – The identity submitting the score.
  • value (float | None) – Score value for RUN_IDENTITY and RUN_RUNS boards.
  • delta (float | None) – Delta value for COUNTER boards.
  • player_name (str | None) – Optional display name for the player.
  • timezone (str | None) – Optional timezone from GeoIP.
  • country (str | None) – Optional country code from GeoIP.
  • city (str | None) – Optional city name from GeoIP.
  • is_test (bool) – Whether this is a test submission.
  • trust_tier (TrustTier) – Trust tier for anti-cheat thresholds (defaults to B).
  • background_tasks (BackgroundTasks | None) – Optional BackgroundTasks for async ratio updates.

Returns:

Raises:

  • ValueError – If validation fails (missing required fields, invalid board type).
  • EntityNotFoundError – If board or identity doesn't exist.
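
Example (a sketch for a RUN-type board; board and identity are assumed to exist, and the return value is left uninspected since its shape is not documented above):

>>> submitted = await service.submit_score(
...     board_id=board.id,
...     identity_id=identity.id,
...     value=1234.5,
...     player_name="Player One",
... )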
leadr.scores.services.score_submission_meta_service

Service for score submission metadata management.

Classes:

leadr.scores.services.score_submission_meta_service.ScoreSubmissionMetaService

Bases: BaseService[ScoreSubmissionMeta, ScoreSubmissionMetaRepository]

Service for managing score submission metadata.

Provides read-only access to submission metadata for debugging and analysis.

Functions:

  • delete – Soft-delete an entity.
  • get_by_id – Get an entity by its ID.
  • get_by_id_or_raise – Get an entity by its ID or raise EntityNotFoundError.
  • get_submission_meta – Get submission metadata by its ID.
  • list_all – List all non-deleted entities.
  • list_submission_meta – List score submission metadata for an account with optional filters and pagination.
  • soft_delete – Soft-delete an entity and return it before deletion.

Attributes:

# leadr.scores.services.score_submission_meta_service.ScoreSubmissionMetaService.delete
delete(entity_id)

Soft-delete an entity.

Parameters:

  • entity_id (UUID | PrefixedID) – The ID of the entity to delete

Raises:

# leadr.scores.services.score_submission_meta_service.ScoreSubmissionMetaService.get_by_id
get_by_id(entity_id)

Get an entity by its ID.

Parameters:

  • entity_id (UUID | PrefixedID) – The ID of the entity to retrieve

Returns:

  • DomainEntityT | None – The domain entity if found, None otherwise
# leadr.scores.services.score_submission_meta_service.ScoreSubmissionMetaService.get_by_id_or_raise
get_by_id_or_raise(entity_id)

Get an entity by its ID or raise EntityNotFoundError.

Parameters:

  • entity_id (UUID | PrefixedID) – The ID of the entity to retrieve

Returns:

Raises:

  • EntityNotFoundError – If the entity is not found (converted to HTTP 404 by global handler)
# leadr.scores.services.score_submission_meta_service.ScoreSubmissionMetaService.get_submission_meta
get_submission_meta(meta_id)

Get submission metadata by its ID.

Parameters:

Returns:

Example:

>>> meta = await service.get_submission_meta(meta_id)
# leadr.scores.services.score_submission_meta_service.ScoreSubmissionMetaService.list_all
list_all()

List all non-deleted entities.

Returns:

# leadr.scores.services.score_submission_meta_service.ScoreSubmissionMetaService.list_submission_meta
list_submission_meta(account_id, board_id=None, *, pagination)

List score submission metadata for an account with optional filters and pagination.

Parameters:

  • account_id (AccountID | None) – Account ID to filter by. If None, returns all metadata (superadmin use case).
  • board_id (BoardID | None) – Optional board ID to filter by
  • pagination (PaginationParams) – Pagination parameters (required)

Returns:

Example:

>>> metas = await service.list_submission_meta(
...     account_id=account.id,
...     board_id=board.id,
...     pagination=PaginationParams(cursor=None, limit=100, sort=None),
... )
# leadr.scores.services.score_submission_meta_service.ScoreSubmissionMetaService.repository
repository = repository if repository is not None else self._create_repository(session)
# leadr.scores.services.score_submission_meta_service.ScoreSubmissionMetaService.soft_delete
soft_delete(entity_id)

Soft-delete an entity and return it before deletion.

Useful for endpoints that need to return the deleted entity in the response.

Parameters:

  • entity_id (UUID | PrefixedID) – The ID of the entity to delete

Returns:

Raises: