9 changes: 9 additions & 0 deletions Dockerfile
@@ -2,6 +2,15 @@ FROM python:3.10
RUN git config --global --add safe.directory /app
WORKDIR /app

# Future work:
# - This image currently uses an older Python base image version.
# - Update the Python version to align with current LO/WO support targets.
# - Some configurations may also require extra roster files mounted/provided
# at runtime (for example: admins.yaml or teachers.yaml).
# - Consider a layered setup:
# 1) maintain a shared "base LO" Docker image
# 2) extend it with a WO-specific image (or other module-set images)

# TODO start redis in here
# see about docker loopback
RUN apt-get update && \
1 change: 1 addition & 0 deletions README.md
@@ -65,6 +65,7 @@ that the core approach and APIs are correct.
We have a short guide to [installing the system](docs/tutorials/install.md).
Getting the base system working is pretty easy. To create a new module
for the system to use, check out our [cookiecutter module guide](docs/tutorials/cookiecutter-module.md).
For a current package/module overview, see the [module inventory](docs/reference/module_inventory.md).

### System requirements

2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
0.1.0+2026.04.16T12.07.43.534Z.b0eade5c.berickson.20260406.portfolio.fixes
0.1.0+2026.04.23T14.17.45.604Z.c8e75908.berickson.20260423.documentation
4 changes: 4 additions & 0 deletions autodocs/reference.rst
@@ -9,6 +9,8 @@ Detailed, structured information about APIs, configurations, and technical detai
- :doc:`Linting Rules <docs/reference/linting>` - Review the automated checks that keep the codebase healthy and how to run them locally.
- :doc:`Testing Strategy <docs/reference/testing>` - Explore the testing layers we rely on and guidelines for writing reliable tests.
- :doc:`Versioning and Releases <docs/reference/versioning>` - See how we tag releases, manage dependencies, and maintain backward compatibility.
- :doc:`lo_assets Built Packages <docs/reference/lo_assets>` - Understand how prebuilt package artifacts are published and consumed.
- :doc:`Module Inventory <docs/reference/module_inventory>` - Get a quick at-a-glance list of current monorepo modules and their roles.
- :doc:`Module Reference <modules>` - Dive into the autogenerated API reference for Python modules within Learning Observer.
- :doc:`API Reference <api>` - Inspect the internal functionality of the system.

@@ -23,5 +25,7 @@ Detailed, structured information about APIs, configurations, and technical detai
docs/reference/linting.md
docs/reference/testing.md
docs/reference/versioning.md
docs/reference/lo_assets.md
docs/reference/module_inventory.md
modules
api
63 changes: 62 additions & 1 deletion devops/README.md
@@ -10,4 +10,65 @@ We would like to be cross-platform, and eventually support both
Debian-based distros and RPM-based distros. We're not there yet
either. We'd also like to support multiple cloud providers. We're not
there yet either. However, we probably won't accept PRs which move us
away from this goal.
away from this goal.

## Create an AWS account and an EC2 instance.
* Select an Ubuntu AMI and a nano instance type. The cost should be about 0.5 cents per hour.
* In security groups, add HTTP and HTTPS rules. Open them up only to your computer's IP address (the client).
* Launch instance.
* Suppose your instance's public DNS name is {ec2_ip} (e.g., ec2-18-223-122-172.us-east-2.compute.amazonaws.com).
* Create an SSH key pair and save the PEM file, say under `~/.ssh`; restrict its permissions (e.g., `chmod u+r`).

## Set up the EC2 instance.
* SSH into the machine: `ssh -i {pem_file} ubuntu@{ec2_ip}`.
* (Optional) Create a user account: `sudo useradd {user}`
* Download the server code (same repository as the extension):
```bash
cd
git clone https://github.com/ETS-Next-Gen/writing_analysis.git writing_analysis
```
* Install Ansible.
```bash
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install ansible
```
* Configure Ansible: edit the inventory file (`sudo pico /etc/ansible/hosts`) and add:
```
[localhost]
127.0.0.1
```
* `cd ~/writing_analysis/configuration`.
* Run `sudo ansible-playbook local.yaml`. This may take a while on an EC2 nano machine.
If all goes well, you should see output with no errors, like this:
```
...
PLAY RECAP ******************************************************************************************
127.0.0.1 : ok=5 changed=4 unreachable=0 failed=0
```
* Navigate to http://{ec2_ip}; you should see the message "Welcome to nginx!" if it's working.

## Obtain a free domain name
* Go to [noip.com](https://www.noip.com/) and sign up for a free hostname.

## Obtain a free SSL Certificate Using Certbot
* Run the following commands:
```bash
sudo apt-get install software-properties-common
sudo add-apt-repository universe
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot python-certbot-nginx
```
```bash
sudo certbot --nginx
```
  - Enter your {mydomain}.hopto.org address.
  - Choose option 1 (no redirect).

## Stand up a backend server on the EC2 instance.
8 changes: 8 additions & 0 deletions docs/concepts/system_settings.md
@@ -34,6 +34,14 @@ logic expands directories into sorted file lists, then loads each file as a
ends in `.pmss`. Any other file suffix is skipped with a warning so you can
keep README files or notes alongside the rulesets without breaking startup.
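As a rough sketch (a hypothetical standalone helper, not the actual loader code), the expansion behavior described above might look like:

```python
import os

def expand_rulesets(paths):
    """Expand directories into sorted file lists, keeping recognized
    ruleset suffixes and skipping anything else with a warning."""
    resolved = []
    for path in paths:
        if os.path.isdir(path):
            candidates = sorted(os.path.join(path, name) for name in os.listdir(path))
        else:
            candidates = [path]
        for candidate in candidates:
            if candidate.endswith(('.pmss', '.yaml', '.yml')):
                resolved.append(candidate)
            else:
                # Non-ruleset files (README, notes, etc.) are skipped, not fatal.
                print(f'Warning: skipping non-ruleset file {candidate}')
    return resolved
```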

This argument is intended for normal startup entry points too (for example, a
`Makefile` target or deployment startup scripts), not just ad-hoc local runs.
For example:

```bash
python -m learning_observer.main --pmss-rulesets creds.yaml schools.pmss
```

## The role of `creds.yaml`

Most installations load configuration from `creds.yaml`. When the process
22 changes: 15 additions & 7 deletions docs/how-to/offline_replay.md
@@ -21,15 +21,14 @@ Use the helper functions in `learning_observer/offline.py` to initialize the env
```bash
python - <<'PY'
import asyncio
import learning_observer.settings
from learning_observer import offline

# Prepare the in-memory KVS and reducers for offline replay
offline.init('creds.yaml')

# TODO: Offline replay currently initializes PMSS with default rulesets only.
# If you need custom `.pmss` overlays or alternate YAML files, update the
# offline initializer to accept ruleset paths.

# Provide YAML settings and optional PMSS overlays directly.
offline.init(
'creds.yaml',
pmss_rulesets=['creds.yaml', 'schools.pmss']
)

# Replace this path with the study log you want to replay
log_path = "/path/to/your/session.study.log"
@@ -42,6 +41,15 @@ asyncio.run(main())
PY
```

`offline.init()` accepts `pmss_rulesets` and forwards them to
`learning_observer.settings.init_pmss_settings(...)` before loading settings.
This mirrors startup behavior where the web server accepts
`--pmss-rulesets creds.yaml schools.pmss`.

The offline module still does not expose a first-class CLI parser like
`learning_observer.main`; there is a TODO in `offline.py` to add an
`argparse`-based entrypoint for settings and ruleset arguments.
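A minimal sketch of what that TODO could look like (hypothetical flag and argument names, mirroring the web server's `--pmss-rulesets` option):

```python
import argparse

def parse_offline_args(argv=None):
    """Parse hypothetical CLI arguments for an offline-replay entrypoint."""
    parser = argparse.ArgumentParser(description='Replay study logs offline')
    parser.add_argument('logs', nargs='+',
                        help='Study log files (*.study.log) to replay')
    parser.add_argument('--settings', default='creds.yaml',
                        help='YAML settings file to load')
    parser.add_argument('--pmss-rulesets', nargs='+', default=None,
                        help='PMSS ruleset files or directories to initialize')
    return parser.parse_args(argv)
```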

- `process_file` only accepts study logs ending in `.study.log` or `.study.log.gz` and will reject other log types.
- If you do not provide a `userid`, a random safe username is generated to avoid inadvertently reusing PII from the log.
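The two rules above can be sketched as simple helpers (hypothetical names; the real checks live inside `offline.py`):

```python
import secrets

STUDY_SUFFIXES = ('.study.log', '.study.log.gz')

def validate_study_log(path):
    """Reject anything that is not a study log."""
    if not path.endswith(STUDY_SUFFIXES):
        raise ValueError(f'Not a study log: {path}')
    return path

def safe_userid(userid=None):
    """Fall back to a random, PII-free username when none is supplied."""
    return userid if userid is not None else 'offline-user-' + secrets.token_hex(8)
```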

63 changes: 63 additions & 0 deletions docs/reference/lo_assets.md
@@ -0,0 +1,63 @@
# lo_assets Built Package Reference

This document explains how Learning Observer consumes built packages from [`ArgLab/lo_assets`](https://github.com/ArgLab/lo_assets), why this exists, and current operational caveats.

## Why `lo_assets` exists

Some frontend-related packages require an `npm` build step before they are usable. That build tooling is not always available on deployment servers, so we publish prebuilt artifacts to `lo_assets` and fetch them during setup/startup.

In `lo_assets`, each package typically has its own directory containing:

- versioned archives (for example, `package-name-v1.2.3.tar.gz`)
- a `package-name-current.tar.gz` pointer file (a plain-text file containing the current archive filename)

## Where this repository is referenced in Learning Observer

### 1) Python package install path (`Makefile`)

The top-level `Makefile` installs `lo_dash_react_components` by:

1. downloading `lo_dash_react_components-current.tar.gz` from `lo_assets`
2. reading the returned filename
3. installing that resolved tarball with `pip`

This is implemented in the `install` target and is currently specialized to LODRC.

### 2) Runtime frontend asset fetch (`remote_assets.py`)

`learning_observer.remote_assets.fetch_module_assets()` supports a generic pointer-file workflow for prebuilt frontend bundles (for example exported Next.js apps):

1. read a `*-current.tar.gz` pointer file from `lo_assets`
2. resolve it to a concrete archive name
3. download and extract the archive into a target module directory

This logic is used for modules that ship prebuilt frontend assets outside of the main repo.
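The three-step pointer workflow can be sketched as follows. Here `fetch` is an injected downloader (hypothetical signature `fetch(name) -> bytes`), not the actual implementation in `remote_assets.py`:

```python
import io
import tarfile

def resolve_and_extract(package, fetch, dest):
    """Resolve a *-current.tar.gz pointer, then download and extract
    the concrete archive it names into the destination directory."""
    # 1) The pointer file is plain text naming the current archive.
    pointer = fetch(f'{package}-current.tar.gz').decode('utf-8').strip()
    # 2) + 3) Download the resolved archive and extract it into dest.
    archive = fetch(pointer)
    with tarfile.open(fileobj=io.BytesIO(archive), mode='r:gz') as tar:
        tar.extractall(dest)
    return pointer
```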

## Package-specific notes

### LODRC (`lo_dash_react_components`)

- This is a Python package, so downloading and reinstalling is an acceptable update path.
- Current behavior is handled by the top-level `Makefile` install process.

### Portfolio Diff (`wo_portfolio_diff`)

- This is a Next.js-derived frontend bundle copied into `modules/wo_portfolio_diff/wo_portfolio_diff`.
- Current fetch logic can pull the bundle when assets are missing.
- Current behavior does **not** automatically refresh when a newer remote bundle exists and local assets are already present.

## Operational caveats

- Whenever a new package version is published in `lo_assets`, the `*-current.tar.gz` pointer file should also be updated to point to that new archive.
- Most consuming code paths intentionally resolve the `current` pointer rather than pinning a hard-coded version.
- In some cases, users may need to remove local assets and re-run setup/startup steps to force retrieval of the latest bundle.

## Future improvement (TODO)

A generic Learning Observer startup check should:

1. track the resolved remote archive version in a local state file
2. compare local version vs. current `*-current.tar.gz` pointer target
3. auto-refresh assets when the remote current version changes

This would eliminate manual remove-and-refetch workflows and make behavior consistent across modules.
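Steps 1 and 2 of that check could be sketched like this (hypothetical state-file layout):

```python
import json
import os

def record_version(state_path, archive):
    """Track the resolved remote archive name in a local state file."""
    with open(state_path, 'w') as f:
        json.dump({'archive': archive}, f)

def needs_refresh(state_path, remote_current):
    """Compare the locally recorded archive against the remote pointer target."""
    if not os.path.exists(state_path):
        return True  # never fetched: always refresh
    with open(state_path) as f:
        local = json.load(f).get('archive')
    return local != remote_current
```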
58 changes: 58 additions & 0 deletions docs/reference/module_inventory.md
@@ -0,0 +1,58 @@
# Module Inventory

This page is a quick-reference inventory of modules/packages in this monorepo:
what they are and what they do.

## Current modules at a glance

### Common modules

| Module | What it does |
| --- | --- |
| `lo_dash_react_components` | Shared custom Dash UI components used across dashboards. |
| `lo_gpt` | Shared interfaces/utilities for accessing LLMs. |
| `lo_template_module` | Cookiecutter template used to scaffold new Learning Observer modules. |

### Additional module

| Module | What it does |
| --- | --- |
| `ccss` | Common Core State Standards-related package/resources. |

### Writing-focused modules

| Module | What it does |
| --- | --- |
| `portfolio_diff` | Next.js code for the portfolio dashboard UI. |
| `wo_classroom_text_highlighter` | Classroom text highlight dashboard module. |
| `wo_bulk_essay_analysis` | Classroom dashboard for bulk essay analysis (LLM-assisted workflows). |
| `wo_portfolio_diff` | Learning Observer entrypoint module for the built/deployed `portfolio_diff` dashboard. |
| `writing_observer` | Core writing-process analytics module and integrations. |

### Unused / superseded modules

| Module | Current state |
| --- | --- |
| `language_tool` | Standalone module is not actively used; relevant functionality now lives in `writing_observer`. |
| `websocket_debug` | Standalone module is not actively used; websocket debugging is mostly handled by scripts in `scripts/`. |

### Prototype modules

| Module | What it does | Current status |
| --- | --- | --- |
| `lo_action_summary` | Dashboard listing student events/actions. | Prototype. |
| `lo_lti_grade_demo` | Demo for LTI grade submission and reading flow. | Prototype/demo. |
| `lo_event` | Event-focused module (includes Next.js dashboard component work). | Moved to its own repository. |
| `toy-assess` | Initial LO Blocks implementation. | Being phased out (still used in some studies). |
| `lo_toy_sba` | Save/load state module used primarily with `toy-assess` + `lo_event`. | Prototype support module. |

### Legacy prototype dashboards (not fully migrated)

When the communication protocol changed to an async-generator pattern, some
prototype dashboards were not migrated.

| Module | What it does | Note |
| --- | --- | --- |
| `wo_common_student_errors` | Aggregate classroom LanguageTool dashboard. | Legacy prototype; may be folded into newer dashboard patterns. |
| `wo_document_list` | Toy dashboard showing documents across students. | Legacy prototype. |
| `wo_highlight_dashboard` | Initial highlight dashboard implementation. | Superseded by newer highlight work. |
2 changes: 1 addition & 1 deletion learning_observer/VERSION
@@ -1 +1 @@
0.1.0+2026.04.06T14.01.45.433Z.6e6d9aca.berickson.20260406.portfolio.fixes
0.1.0+2026.04.23T13.51.16.444Z.f82c7ba3.berickson.20260423.documentation
15 changes: 12 additions & 3 deletions learning_observer/learning_observer/offline.py
@@ -41,10 +41,17 @@
}


def init(settings=INTERACTIVE_SETTINGS):
def init(settings=INTERACTIVE_SETTINGS, pmss_rulesets=None):
'''
Initialize the Learning Observer library.

Args:
settings (str|dict): Path to a YAML settings file or an in-memory
settings dictionary.
pmss_rulesets (list[str]|None): Optional PMSS ruleset files/directories
to initialize before loading settings. If omitted, defaults to the
standard PMSS initialization behavior.

Returns:
None

@@ -55,8 +62,10 @@ def init(settings=INTERACTIVE_SETTINGS):
# anything before we've loaded the settings. This might not be necessary,
# depending on the (still-changing) startup order
learning_observer.log_event.DEBUG_LOG_LEVEL = learning_observer.log_event.LogLevel.NONE
# TODO: Allow offline callers to pass PMSS ruleset files or directories.
learning_observer.settings.init_pmss_settings()
learning_observer.settings.init_pmss_settings(pmss_rulesets)
# TODO: Add an argparse-powered CLI entrypoint (similar to main.py) so
# offline workflows can accept settings and PMSS rulesets directly from
# command-line arguments without wrapper scripts.
learning_observer.settings.load_settings(settings)
learning_observer.prestartup.startup_checks_and_init()
learning_observer.kvs.kvs_startup_check() # Set up the KVS
4 changes: 2 additions & 2 deletions modules/portfolio_diff/README.md
@@ -27,11 +27,11 @@ After build completion, the static export is written to `out/`.
If you need an artifact for an assets repository, archive the exported files from `out/`:

```bash
VERSION_TAG=$(date +%Y%m%d-%H%M%S)
VERSION_TAG=$(date +%Y%m%d)
tar -C out -czf "portfolio_diff_${VERSION_TAG}.tar.gz" .
```

That creates a tarball in `modules/portfolio_diff` (for example `portfolio_diff-20260710-153010.tar.gz`) containing the static site root.
That creates a tarball in `modules/portfolio_diff` (for example `portfolio_diff_20260710.tar.gz`, matching the underscore naming in the command above) containing the static site root.

You should copy this to the [`lo_assets` repository](https://github.com/ArgLab/lo_assets) and update the `portfolio_diff-current.tar.gz` pointer to reference the new version.
