4 changes: 2 additions & 2 deletions AGENTS.md
@@ -80,8 +80,8 @@ cargo fmt
3. **Rust Formatting**: Always run `cargo fmt` - CI will fail without it.
4. **Linear History**: Use squash merge for PRs.
5. **React Components**: Avoid large ternary operations, instead break out the two pieces into components and use a simple ternary operation e.g. `condition ? <ComponentA /> : <ComponentB />`
- 6. **React Utilities**: Separate styling from logic and abstract reuseable functions into utils files.
- 7. **React Styling**: Attempt to reference existing pages and components for style guidelines, be sure to re-use components and match styling to maintain consistency.
+ 6. **React Utilities**: Separate styling from logic and abstract reusable functions into utils files.
+ 7. **React Styling**: Attempt to reference existing pages and components for style guidelines, be sure to reuse components and match styling to maintain consistency.
8. **Prefer Line-of-Sight Coding**: Avoid large indentation by returning early and keeping an unnested control flow, for example:

```golang
// NOTE: the repo's own example is collapsed in this diff view; this is an
// illustrative early-return sketch with hypothetical names.
func handleRequest(req *Request) error {
	if req == nil {
		return fmt.Errorf("nil request")
	}
	if !req.Authorized() {
		return fmt.Errorf("unauthorized")
	}
	return process(req)
}
```
2 changes: 1 addition & 1 deletion bin/reflective_loader/Cargo.toml
@@ -17,7 +17,7 @@ edition = "2021"
crate-type = ["cdylib"]

[profile.dev]
- opt-level = "z" # This reduces the numebr of symbols not found.
+ opt-level = "z" # This reduces the number of symbols not found.
lto = true
codegen-units = 1
panic = "abort"
6 changes: 3 additions & 3 deletions bin/reflective_loader/src/loader.rs
@@ -160,7 +160,7 @@ struct PeFileHeaders64 {
section_headers: [IMAGE_SECTION_HEADER; MAX_PE_SECTIONS],
}

- // Pares the PE file from a series of bytes
+ // Parses the PE file from a series of bytes
#[cfg(target_arch = "x86_64")]
impl PeFileHeaders64 {
fn new(dll_bytes_ptr: *mut c_void) -> Self {
@@ -390,7 +390,7 @@ fn process_import_address_tables(
function_ordinal_ptr,
) as _;
} else {
- // Calculate a refernce to the function name by adding the dll_base and name's RVA.
+ // Calculate a reference to the function name by adding the dll_base and name's RVA.
let image_import_ptr: *mut IMAGE_IMPORT_BY_NAME = (new_dll_base as usize
+ unsafe { library_thunk.u1.AddressOfData } as usize)
as *mut IMAGE_IMPORT_BY_NAME;
@@ -819,7 +819,7 @@ mod tests {
assert_eq!(base_reloc_entry.reloc_type, 0xa);
}

- // PE Headers change everytime create file dll is built
+ // PE Headers change every time create file dll is built
// #[test]
// fn test_reflective_loader_parse_pe_headers() -> () {

2 changes: 1 addition & 1 deletion docs/_docs/admin-guide/tavern.md
@@ -498,7 +498,7 @@ mutation tempLink {
}
```

- This will create a link that allows the link to be active until Feburary 2nd 2026 at 21:33:18 UTC with 10 downloads. These two conditions are or'd so if either is allowed the download will work.
+ This will create a link that allows the link to be active until February 2nd 2026 at 21:33:18 UTC with 10 downloads. These two conditions are or'd so if either is allowed the download will work.

If no path is specified a random 6 character path will be generated. In the graphql query above we request the path back to ensure we know where to grab the file.

2 changes: 1 addition & 1 deletion docs/_docs/user-guide/eldritch.md
@@ -809,7 +809,7 @@ It can use specific timestamps (epoch seconds or string format) or copy timestam
`file.write(path: str, content: str) -> None`

The **file.write** method writes to a given file path with the given content.
- If a file already exists at this path, the method will overwite it. If a directory
+ If a file already exists at this path, the method will overwrite it. If a directory
already exists at the path the method will error.

### file.write_binary
4 changes: 2 additions & 2 deletions docs/_docs/user-guide/imix.md
@@ -257,5 +257,5 @@ To change the default uniqueness behavior you can set the `IMIX_UNIQUE` environm

By default IMIX_UNIQUE is about equal to: `export IMIX_UNIQUE='[{"type":"env"},{"type":"file"},{"type":"file","args":{"path_override":"/etc/system-id"}},{"type":"macaddr"}]'`

- To proiritize stealth we reccomend removing the file uniqueness selectors: `export IMIX_UNIQUE='[{"type":"env"},{"type":"macaddr"}]'`
- If you know the environment will have VMs cloned without sysprep we recommend proritizing the file selectors and removing macaddr: `export IMIX_UNIQUE='[{"type":"env"},{"type":"file"},{"type":"file","args":{"path_override":"/etc/system-id"}}]'`
+ To prioritize stealth we recommend removing the file uniqueness selectors: `export IMIX_UNIQUE='[{"type":"env"},{"type":"macaddr"}]'`
+ If you know the environment will have VMs cloned without sysprep we recommend prioritizing the file selectors and removing macaddr: `export IMIX_UNIQUE='[{"type":"env"},{"type":"file"},{"type":"file","args":{"path_override":"/etc/system-id"}}]'`
2 changes: 1 addition & 1 deletion implants/Cargo.toml
@@ -108,7 +108,7 @@ rand_chacha = { version = "0.3.1", default-features = false, features = ["std"]}
regex = { version = "1.5.5", default-features = false}
reqwest = { version = "0.12.15", default-features = false }
russh = "0.37.1"
- russh-sftp = "=2.0.8" # `thiserror` dependcy in older versions causes downstream issues in other libraries.
+ russh-sftp = "=2.0.8" # `thiserror` dependency in older versions causes downstream issues in other libraries.
russh-keys = "0.37.1"
rustls = "0.23"
rust-embed = "8.5.0"
2 changes: 1 addition & 1 deletion implants/golem/src/main.rs
@@ -137,7 +137,7 @@ fn main() -> anyhow::Result<()> {
});
}
}
- // If we havent specified tomes in INPUT, we need to look through the asset locker for tomes to run
+ // If we haven't specified tomes in INPUT, we need to look through the asset locker for tomes to run
if parsed_tomes.is_empty() {
match locker.list() {
Ok(assets) => {
2 changes: 1 addition & 1 deletion implants/golem/tests/cli.rs
@@ -70,7 +70,7 @@ fn test_golem_main_loaded_files() -> anyhow::Result<()> {

// Test running `./golem -a ../../bin/golem_cli_test/ -e`
// NOTE: Depending on how this test is run, the commands may not actually be run
- // therefor we only test the output of eldritch and not the stdlib
+ // therefore we only test the output of eldritch and not the stdlib
#[test]
fn test_golem_main_loaded_and_embdedded_files() -> anyhow::Result<()> {
let mut cmd = Command::new(cargo_bin!("golem"));
@@ -136,7 +136,7 @@ impl StdAssetsLibrary {
// Make a hashset of the new asset names
let new_assets: HashSet<String> =
backend.assets().into_iter().map(Cow::into_owned).collect();
- // See if any name overlap with existin assets
+ // See if any name overlap with existing assets
let colliding_names: Vec<&str> = self
.asset_names
.intersection(&new_assets)
@@ -19,7 +19,7 @@ async fn handle_ncat(address: String, port: i32, data: String, protocol: String)
// Connect to remote host
let mut connection = TcpStream::connect(&address_and_port).await?;

- // Write our meessage
+ // Write our message
connection.write_all(data.as_bytes()).await?;

// Read server response
@@ -402,7 +402,7 @@ async fn handle_port_scan(

let mut result: Vec<(String, i32, String, String)> = vec![];
// Await results of each job.
- // We are not acting on scan results indepently so it's okay to loop through each and only return when all have finished.
+ // We are not acting on scan results independently so it's okay to loop through each and only return when all have finished.
for task in all_scan_futures {
match task.await? {
Ok(res) => {
@@ -244,7 +244,7 @@ mod tests {
offset: u64,
data: Vec<u8>,
) -> Result<Status, Self::Error> {
- //Warning this will only write one chunk - subsequesnt chunks will overwirte the old ones.
+ //Warning this will only write one chunk - subsequent chunks will overwrite the old ones.
// Tests over the size of the chunk will fail
let tmp_data = String::from_utf8(data).unwrap();
fs::write(handle, tmp_data.trim_end_matches(char::from(0))).unwrap();
2 changes: 1 addition & 1 deletion implants/lib/transport/Cargo.toml
@@ -30,7 +30,7 @@ http = { workspace = true }
tower = { workspace = true }
# Use hyper 1.x for grpc
hyper = { workspace = true, features = ["client"] }
- # These crates are kinda funky gonna keep them in just transprot for now.
+ # These crates are kinda funky gonna keep them in just transport for now.
hyper_legacy = { package = "hyper", version = "0.14", features = ["client", "http1", "http2", "stream"] }
hyper-util = { version = "0.1", features = ["client", "client-legacy", "http1", "http2"] }
hyper-rustls = { version = "0.27", default-features = false, features = ["webpki-tokio", "http2"] }
4 changes: 2 additions & 2 deletions tavern/app.go
@@ -337,10 +337,10 @@ func NewServer(ctx context.Context, options ...func(*Config)) (*Server, error) {
AllowUnactivated: true,
},
"/auth/rda/approve": tavernhttp.Endpoint{
Handler: tavernhttp.NewRDAApproveHandler(client),
Handler: tavernhttp.NewRDAApproveHandler(client),
},
"/auth/rda/revoke": tavernhttp.Endpoint{
Handler: tavernhttp.NewRDARevokeHandler(client),
Handler: tavernhttp.NewRDARevokeHandler(client),
},
"/api/auth/signout": tavernhttp.Endpoint{
Handler: tavernhttp.NewSignoutHandler(),
2 changes: 0 additions & 2 deletions tavern/cli/auth/options.go
@@ -1,7 +1,5 @@
package auth

-
-
type AuthOptions struct {
EnvAPIKeyName string
CachePath string
2 changes: 1 addition & 1 deletion tavern/config.go
@@ -58,7 +58,7 @@ var (
EnvOAuthClientSecret = EnvString{"OAUTH_CLIENT_SECRET", ""}
EnvOAuthDomain = EnvString{"OAUTH_DOMAIN", ""}

- // EnvMySQLAddr defines the MySQL address to connect to, if unset SQLLite is used.
+ // EnvMySQLAddr defines the MySQL address to connect to, if unset sqlite is used.
// EnvMySQLNet defines the network used to connect to MySQL (e.g. unix).
// EnvMySQLUser defines the MySQL user to authenticate as.
// EnvMySQLPasswd defines the password for the MySQL user to authenticate with.
4 changes: 2 additions & 2 deletions tavern/config_test.go
@@ -27,7 +27,7 @@ func TestConfigureMySQLFromEnv(t *testing.T) {
require.NoError(t, os.Unsetenv(EnvDBMaxConnLifetime.Key))
}

- t.Run("SQLLite", func(t *testing.T) {
+ t.Run("sqlite", func(t *testing.T) {
defer cleanup()

cfg := &Config{}
@@ -80,7 +80,7 @@ func TestConfigureMySQLFromEnv(t *testing.T) {
assert.Equal(t, "root@tcp(127.0.0.1)/tavern?parseTime=true", cfg.mysqlDSN)
})

- t.Run("SQLLite", func(t *testing.T) {
+ t.Run("sqlite", func(t *testing.T) {
defer cleanup()
require.NoError(t, os.Setenv(EnvDBMaxIdleConns.Key, "1337"))
require.NoError(t, os.Setenv(EnvDBMaxOpenConns.Key, "420"))
6 changes: 3 additions & 3 deletions tavern/internal/builder/README.md
@@ -100,11 +100,11 @@ at `GET /assets/download/{name}`.
- Add a way for the server to interrupt and cancel a build.
- Add support for build caching between jobs (will speed up rust builds a lot)
- Instead of assuming `/home/vscode` create a correctly permissioned build dir
- - Add support for mulitple transports in the builder
+ - Add support for multiple transports in the builder

### future
- Add terraform for build server
- - Register redirectors so bulider callback uri can be a drop down.
+ - Register redirectors so builder callback uri can be a drop down.
- Modifying the agent IMIX_CONFIG currently requires changes to both imix and tavern code bases now. Is there a way to codegen a YAML spec from tavern to the agent?
- De-dupe agent builds should the API stop builds that have the same params and point to the existing build? Or is this a UI thing?

@@ -122,7 +122,7 @@ at `GET /assets/download/{name}`.
- Target OS + Target Format ---> rust target
- TargetOS's only support certain formats
- where to get the realm source code from - pull public repo?
- - Currentt pattern with arbitrary bulid script is RCE as a service. Scope and limit this to just build configuration options.
+ - Current pattern with arbitrary build script is RCE as a service. Scope and limit this to just build configuration options.
- upstream should be free form
- pubkey can be set by the server

2 changes: 1 addition & 1 deletion tavern/internal/builder/auth.go
@@ -22,7 +22,7 @@ const (
// Keys ending in "-bin" use gRPC binary metadata encoding.
mdKeyBuilderCert = "builder-cert-bin"
mdKeyBuilderSignature = "builder-signature-bin"
mdKeyBuilderTimestamp = "builder-timestamp"
mdKeyBuilderTimestamp = "builder-timestamp"

// Maximum age for a timestamp to be considered valid.
maxTimestampAge = 5 * time.Minute
22 changes: 11 additions & 11 deletions tavern/internal/builder/build_config.go
@@ -21,9 +21,9 @@ const (

// DefaultTransports is the default transport configuration for a build task.
var DefaultTransports = []builderpb.BuildTaskTransport{{
URI: "http://127.0.0.1:8000",
Interval: 5,
Type: c2pb.Transport_TRANSPORT_GRPC,
URI: "http://127.0.0.1:8000",
Interval: 5,
Type: c2pb.Transport_TRANSPORT_GRPC,
}}

// TargetFormat is an alias for builderpb.TargetFormat.
@@ -50,11 +50,11 @@ type buildKey struct {

// buildCommands maps (target_os, target_format) -> cargo build command.
var buildCommands = map[buildKey]string{
{c2pb.Host_PLATFORM_LINUX, builderpb.TargetFormat_TARGET_FORMAT_BIN}: "cargo build --release --bin imix --target=x86_64-unknown-linux-musl",
{c2pb.Host_PLATFORM_MACOS, builderpb.TargetFormat_TARGET_FORMAT_BIN}: "cargo zigbuild --release --target aarch64-apple-darwin",
{c2pb.Host_PLATFORM_WINDOWS, builderpb.TargetFormat_TARGET_FORMAT_BIN}: "cargo build --release --target=x86_64-pc-windows-gnu",
{c2pb.Host_PLATFORM_LINUX, builderpb.TargetFormat_TARGET_FORMAT_BIN}: "cargo build --release --bin imix --target=x86_64-unknown-linux-musl",
{c2pb.Host_PLATFORM_MACOS, builderpb.TargetFormat_TARGET_FORMAT_BIN}: "cargo zigbuild --release --target aarch64-apple-darwin",
{c2pb.Host_PLATFORM_WINDOWS, builderpb.TargetFormat_TARGET_FORMAT_BIN}: "cargo build --release --target=x86_64-pc-windows-gnu",
{c2pb.Host_PLATFORM_WINDOWS, builderpb.TargetFormat_TARGET_FORMAT_WINDOWS_SERVICE}: "cargo build --release --features win_service --target=x86_64-pc-windows-gnu",
{c2pb.Host_PLATFORM_WINDOWS, builderpb.TargetFormat_TARGET_FORMAT_CDYLIB}: "cargo build --release --lib --target=x86_64-pc-windows-gnu",
{c2pb.Host_PLATFORM_WINDOWS, builderpb.TargetFormat_TARGET_FORMAT_CDYLIB}: "cargo build --release --lib --target=x86_64-pc-windows-gnu",
}

// ValidateTargetFormat checks whether the given format is supported for the given OS.
@@ -101,10 +101,10 @@ func TransportTypeToString(t c2pb.Transport_Type) string {

// ImixTransportConfig represents the transport section of the IMIX configuration.
type ImixTransportConfig struct {
URI string `yaml:"URI"`
Interval int `yaml:"interval"`
Type string `yaml:"type"`
Extra string `yaml:"extra"`
URI string `yaml:"URI"`
Interval int `yaml:"interval"`
Type string `yaml:"type"`
Extra string `yaml:"extra"`
}

// ImixConfig represents the IMIX agent configuration YAML.
@@ -4,8 +4,8 @@ import "realm.pub/tavern/internal/c2/c2pb"

// BuildTaskTransport represents a single transport configuration stored in the BuildTask entity.
type BuildTaskTransport struct {
URI string `json:"uri"`
Interval int `json:"interval"`
Type c2pb.Transport_Type `json:"type"`
Extra string `json:"extra,omitempty"`
URI string `json:"uri"`
Interval int `json:"interval"`
Type c2pb.Transport_Type `json:"type"`
Extra string `json:"extra,omitempty"`
}
2 changes: 1 addition & 1 deletion tavern/internal/builder/client.go
@@ -28,7 +28,7 @@ const (
maxConcurrentBuilds = 4

maxOutputChSize = 64
maxErrorChSize = 64
maxErrorChSize = 64
)

// builderCredentials implements grpc.PerRPCCredentials for mTLS authentication.
30 changes: 15 additions & 15 deletions tavern/internal/builder/executor_integration_test.go
@@ -78,9 +78,9 @@ func TestExecutorIntegration_ClaimAndExecuteWithMock(t *testing.T) {
SetBuildImage("golang:1.21").
SetBuildScript("go build ./...").
SetTransports([]builderpb.BuildTaskTransport{{
URI: "https://callback.example.com",
Interval: 5,
Type: c2pb.Transport_TRANSPORT_GRPC,
URI: "https://callback.example.com",
Interval: 5,
Type: c2pb.Transport_TRANSPORT_GRPC,
}}).
SetBuilderID(builders[0].ID).
SaveX(ctx)
@@ -226,9 +226,9 @@ func TestExecutorIntegration_ClaimAndExecuteWithMockError(t *testing.T) {
SetBuildImage("golang:1.21").
SetBuildScript("go build ./...").
SetTransports([]builderpb.BuildTaskTransport{{
URI: "https://callback.example.com",
Interval: 5,
Type: c2pb.Transport_TRANSPORT_GRPC,
URI: "https://callback.example.com",
Interval: 5,
Type: c2pb.Transport_TRANSPORT_GRPC,
}}).
SetBuilderID(builders[0].ID).
SaveX(ctx)
@@ -379,9 +379,9 @@ func TestExecutorIntegration_StreamBuildOutput(t *testing.T) {
SetBuildImage("golang:1.21").
SetBuildScript("go build ./...").
SetTransports([]builderpb.BuildTaskTransport{{
URI: "https://callback.example.com",
Interval: 5,
Type: c2pb.Transport_TRANSPORT_GRPC,
URI: "https://callback.example.com",
Interval: 5,
Type: c2pb.Transport_TRANSPORT_GRPC,
}}).
SetBuilderID(builders[0].ID).
SaveX(ctx)
@@ -512,9 +512,9 @@ func TestExecutorIntegration_StreamBuildOutputWithError(t *testing.T) {
SetBuildImage("golang:1.21").
SetBuildScript("go build ./...").
SetTransports([]builderpb.BuildTaskTransport{{
URI: "https://callback.example.com",
Interval: 5,
Type: c2pb.Transport_TRANSPORT_GRPC,
URI: "https://callback.example.com",
Interval: 5,
Type: c2pb.Transport_TRANSPORT_GRPC,
}}).
SetBuilderID(builders[0].ID).
SaveX(ctx)
@@ -642,9 +642,9 @@ func TestExecutorIntegration_UploadBuildArtifact(t *testing.T) {
SetBuildImage("golang:1.21").
SetBuildScript("go build -o /app/output/binary ./...").
SetTransports([]builderpb.BuildTaskTransport{{
URI: "https://callback.example.com",
Interval: 5,
Type: c2pb.Transport_TRANSPORT_GRPC,
URI: "https://callback.example.com",
Interval: 5,
Type: c2pb.Transport_TRANSPORT_GRPC,
}}).
SetArtifactPath("/app/output/binary").
SetBuilderID(builders[0].ID).